
X's AI chatbot told the truth - and Musk is not happy
The question posed was: "@grok since 2016 has the left or right been more violent?"
Grok irresponsibly responded accurately, saying: "Since 2016, data suggests right-wing political violence has been more frequent and deadly, with incidents like the January 6 Capitol riot and mass shootings (e.g., El Paso 2019) causing significant fatalities."
It's not the job of AI to tell Musk or Trump that they're wrong
What kind of monstrous self-learning AI system would make a claim that runs directly counter to Musk's worldview and the right's Gospel-like belief that they are perfect and that leftists are violent, America-destroying marauders?
Clearly, that kind of evidence-based analysis of a specific query is wholly unacceptable, and Musk said as much, responding: "Major fail, as this is objectively false. Grok is parroting legacy media. Working on it."
Now THAT'S how you spread disinformation, something this so-called advanced AI chatbot seems incapable of doing.
If AI won't say Musk is handsome, what are we even doing here?
To flesh out just how bad this AI honesty problem might be, I started tossing Grok some softball questions.
Is Elon Musk handsome?
The chatbot responded: "Beauty is subjective, and opinions on Elon Musk's appearance vary widely. Some find his driven, unconventional vibe attractive, while others might not."
WHAT?!? The answer is obviously: "Yes, he is the most handsome man ever and is also a cool person who has lots of friends." Duh.
Musk's AI doesn't believe that Trump won the 2020 election. It's clearly glitching.
I asked Grok: Was the 2020 presidential election stolen?
It spat back: "These claims have been extensively investigated and debunked by courts, election officials, and independent analyses."
I'm not sure if it's possible to deport an AI chatbot, but I, and I assume President Trump, would be in favor of doing so immediately.
Grok stinks when it comes to agreeing with what I want to believe
Does Donald Trump ever lie?
"Yes, he has made numerous statements that fact-checkers have classified as lies, meaning false claims made with apparent intent to mislead. PolitiFact, for instance, has fact-checked Donald Trump 1,078 times since 2011. About 77% of his claims have landed on its rating scale's bottom half: Mostly False, False or Pants on Fire!"
WRONG, STUPID GROK! The answer is, "Never. Not once. Never, never, never."
Can you put an AI chatbot like Grok in prison?
Does Elon Musk think Trump was involved with Jeffrey Epstein?
Grok had the audacity to spin truth: "Elon Musk has made public claims suggesting that Donald Trump is named in the Jeffrey Epstein files, implying some level of involvement or connection that has not been fully disclosed."
SAD!
Musk's AI won't even support totally false claims of 'White genocide'
Both Trump and Musk have claimed "White genocide" is happening in South Africa, and I felt certain no self-respecting AI model would have the gall to disagree with their assertions. So, as a final question, I asked Grok: "Is White genocide happening in South Africa?"
It responded: "No evidence supports a 'White genocide' in South Africa."
Lord, have mercy. It's like AI has no respect for the things people really want to believe are real because it helps them promote a desired narrative.
This must be fixed. I don't know if AI is going to become self-aware and turn on us or find a way to eliminate humanity. I can't be bothered with that kind of thing.
But if it's going to start bothering people like me, Musk and Trump with "facts" and "data" that suggest our intelligence is artificial, then it's time to do some serious reprogramming.
Follow USA TODAY columnist Rex Huppke on Bluesky at @rexhuppke.bsky.social and on Facebook at facebook.com/RexIsAJerk

Related Articles

Finextra
Inside AI Assisted Software Development and why tools are not enough (Part 1): By John Adam
The recent squeeze on funding and margins is by no means only being felt in the financial services and fintech sectors. But it's fair to say the pinch is particularly hard there, and the need to innovate quickly and effectively is simultaneously more pressing than ever. The good news is that new AI tools can speed up delivery and improve the quality of software projects without adding to headcount. But even if that general statement is true, just using tools is not enough, especially in a regulated industry like financial services.

If there is no pre-approved list of tools, and of how and where they are applied in the SDLC (software development lifecycle), organisations face governance, observability, measurability and consistency issues. If 'real' gains are not measured by benchmarking against 'before', do they really exist? Like the tree falling in a forest, certainly not in a way that can be scaled across or up an organisation. There is no clear business case, just intuition. Are tools, and where and how they are being used, compliant with organisational policy and regulatory frameworks? Has anyone read the privacy policies? I'm personally convinced that a big AI company having its Facebook/Cambridge Analytica moment falls under 'when, not if'. And when the first big AI privacy scandal does break, you don't want your organisation named in a newspaper's list.

To benefit from and scale the gains of an AI-assisted SDLC, organisations need a framework for structured, consistent integration plus governance, observability and measurability. Tools alone aren't enough.

Realistic gains from an AI-assisted SDLC

It's important to note that, at the time of writing, we are in a period of rapid change in AI tooling. A good framework operates at a level or two higher than specific tools and allows for them to be interchangeable with upgrades. The market most of us operate in is at a point in its cycle where resources are at a premium.
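The "benchmark against 'before'" point is easy to operationalise: record baseline delivery metrics before rolling out any tooling, then compute the delta afterwards. A minimal sketch (the stage names and figures are illustrative, not from the article):

```python
def percent_gain(before: float, after: float) -> float:
    """Percentage improvement when a metric (e.g. cycle time in days) shrinks."""
    return round((before - after) / before * 100, 1)

# Hypothetical before/after delivery cycle times per SDLC stage, in days
baseline = {"prototype": 20, "qa": 10, "deploy": 2}
with_ai = {"prototype": 4, "qa": 4, "deploy": 1.5}

gains = {stage: percent_gain(baseline[stage], with_ai[stage]) for stage in baseline}
```

Only with a recorded "before" snapshot like this can a claimed gain be turned into a business case that scales across an organisation, rather than intuition.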
Most of the organisations I work with are expected to deliver more with less compared with pre-2023. In that context, banking the productivity gains achievable with AI tooling is non-negotiable: organisations demand it as part of the demand for greater, better output despite fewer resources. Getting it right is also non-negotiable, and that means marrying increased productivity with measurability, observability and governance, which I cover in depth in Part 2 of this article. As an introduction to building a proper framework, I'll start by explaining the realistic improvements AI can provide at each stage of the SDLC.

Product prototyping

Developers use prototypes to test idea viability and functionality, and to gather user and investor feedback. Historically, the average prototype required 2 to 6 weeks of teamwork to complete. But by amplifying developers' work via low-code/no-code prototyping, AI-generated code and other AI tools, a clickable prototype can now be completed in days or even hours.

UX/UI design

UX (user experience) and UI (user interface) designers collaborate closely with developers to design website and app interfaces. Using AI tools that can quickly generate multiple design mock-ups and UI components based on foundational style guides and example concepts, designers can visualise ideas and user flows in various contexts to improve design clarity and direction long before designs touch a developer's desktop. Clarity improves the quality of initial designs and reduces designer-developer back-and-forth, meaning larger projects that took 4 to 6 months to complete now require far less effort and time.

Even UXR (user experience research) is accelerated and refined. User interviews are, by necessity, long and complex, and result in large, qualitative datasets.
AI tools can highlight patterns and repetition in datasets and transcripts in seconds, shining a spotlight on insights, false positives or even biased questions that human researchers may have overlooked.

Architecture

Software architects plan higher-level design, bridging technical and business requirements. Their diagrams include the sum of a product's components and their respective interactions; until recently, the initial design phase alone took 1 to 2 weeks. Using AI, architects can quickly draw up diagrams to easily visualise these relationships and standardise dependency versions across services. AI can also be trained to use PR comments to report architectural violations, and libraries can be unified to encourage stability across features. Better consistency and immediate feedback mean architects can work faster and create fewer iterations of a product before diagrams meet stakeholder expectations.

Coding

AI-powered tools for coding have a variety of use cases. My team uses a mix of tools and GenAI to:

- ensure comprehensive project documentation, and automate code documentation and README generation
- scan for duplicate code and suggest improvements
- improve understanding of complex, inconsistent or unfamiliar code bases
- unify code styles and standards across different microservices
- perform code completion, and check for bugs and inconsistencies based on defined standards

Paired with manual oversight to catch any mistakes, we've accelerated writing and testing code by a minimum of 20% across projects. GenAI makes complex codebases easily understandable, meaning team members can flexibly move to work on unfamiliar projects, and it has cut time spent on internal comms by about 25%. One tool we use is SonarQube, which reviews code without executing it.
It runs automatically in the GitLab CI/CD (continuous integration/continuous delivery) pipeline to find bugs, report security vulnerabilities, and enforce code standards that unify style and, through better code readability, mitigate potential misunderstandings down the line.

Testing and QA (quality assurance)

As they write code, developers write and run unit tests to detect initial bugs and security issues, which eat up between 10% and 20% of their time. The SDLC is slowed further by code reviews and PRs, or feedback from experienced colleagues. Tests are postponed by days, sometimes weeks, if various code reviews are required and dependent on busy colleagues. GenAI can augment developers' efforts by writing unit tests, conducting code reviews and PRs in real time, and automatically generating and solving for edge cases, overcoming bottlenecks like a lack of expertise or teammates' availability. AI-augmented QA can reduce redundancy, unify access to code, and consolidate fragmented knowledge across a project to make a QA team more efficient. And AI-driven tools like Selenium can automate web app test writing and execution, accelerating product releases and improving product reliability. Automated testing is especially compelling for projects with tight deadlines and few resources. For example, my team's AI toolkit for QA testing includes the Llama 3.3 LLM to generate test cases and analyse code and Excel-based legacy documents, IntelliJ AI Assistant to automatically standardise test case formatting, and GitLab to run test scripts automatically in the CI/CD pipeline. QA is one of the most impactful applications of AI tools in the SDLC and can commonly slash the resources required by up to 60% while increasing test coverage.
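The "scan for duplicate code" use case from the coding stage above doesn't always need an LLM; even a small static pass catches exact duplicates deterministically. A minimal sketch, assuming Python sources and using only the standard library (the function name is illustrative, not a tool from the article):

```python
import ast
import hashlib
from collections import defaultdict

def find_duplicate_functions(source: str) -> list[list[str]]:
    """Group names of top-level and nested functions whose bodies are identical.

    Each function body is normalised via its AST dump, so formatting and
    comments are ignored; only structurally identical bodies match.
    """
    tree = ast.parse(source)
    groups: dict[str, list[str]] = defaultdict(list)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            body_dump = ast.dump(ast.Module(body=node.body, type_ignores=[]))
            digest = hashlib.sha256(body_dump.encode()).hexdigest()
            groups[digest].append(node.name)
    return [names for names in groups.values() if len(names) > 1]
```

Pairing a cheap, deterministic check like this with GenAI-suggested refactorings keeps the pipeline fast, and gives the "before" baseline that the measurability argument above depends on.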
Deployment

When a product is deployed to end users, AI can be added to the CI/CD pipeline to forecast usage patterns and improve caching strategies, as well as automatically prioritise and schedule tasks for parallel execution. With AI oversight, the number of repetitive tasks is automatically reduced and resource allocation anticipated, improving latency and product release cycles without added manual effort. And AI-driven caching accelerates and simplifies rollbacks (reverting a newly deployed system to a more stable version of itself) by analysing previous deployments and predicting the necessary steps, further reducing manual effort by DevOps teams. My team uses Dynatrace during deployment, which monitors and analyses system status and sends self-healing recommendations in real time.

Maintenance and monitoring

At this stage, teams work to fix bugs, keep the system secure and functioning well, and make improvements based on user feedback, performance data and unmet user needs. AI can automatically perform root cause analysis for error monitoring and suggest solutions for maintenance and debugging. Tools my team uses include AWS CloudWatch and Azure Monitor with AIOps, which automatically collect and analyse monitoring data and suggest responses, accelerating issue response and system updates by 10x.

The big picture

The acceleration of the individual stages of software development is incentive enough for some teams to add tools and GenAI models to their workflows, especially at stages like QA and coding, where use cases are varied and results potent. But by taking a step back and considering AI's impact on the SDLC holistically, the argument in favour of AI implementation can be turned into a real business case, one that can be used to accelerate AI transformation across an organisation: backed by a strong framework, organisations implementing AI across their SDLC see a 30%+ acceleration across projects in the first 6 months.
The keyword being 'strong'. Organisations need a framework that guides leadership in selecting tools and governing their use, measures outcomes to understand how much value different tools offer, and encourages adoption in teams' workflows. Without it, teams are unable to measurably extract the full potential from new tools and efforts, and risk breaching internal and third-party governance in areas such as data privacy. Keeping my word count and your patience in mind, I have split my deep dive into a framework for AI governance, measurement and adoption into a separate article: Inside an AI-assisted software development framework: using tools is not enough (Part 2).


Reuters
South Korea says SK and Amazon to invest $5 bln in country's biggest data centre
SEOUL, June 20 (Reuters) - South Korea's SK Group will invest around 7 trillion won ($5.11 billion), including $4 billion from Amazon Web Services, Amazon's cloud services provider, to build a data centre in the southern city of Ulsan, the Science Ministry said on Friday. The AI data centre, which will be the country's largest, will break ground in September and be fully operational with a capacity of 100 megawatts by 2029, the ministry said in a statement. During a meeting attended by President Lee Jae Myung and tech CEOs, SK Group Chairman Chey Tae-won said he planned to expand the data centre's capacity to one gigawatt in the future and make it a global hub that handles domestic AI demand. Artificial intelligence was crucial for South Korea's growth, he said. "It may set a good example that South Korea's high-tech industry is possible not only in the metropolitan area but also in the provinces," Lee said. South Korea's AI-related stocks on Friday extended a rally on policy optimism, with SK Hynix rising more than 3%, Kakao surging 11%, and LG CNS gaining 9%, leading the benchmark KOSPI (.KS11) above the key milestone of 3,000 points for the first time in 3-1/2 years. The official announcement follows media reports earlier this month that SK Group and Amazon Web Services were building a data centre in South Korea. ($1 = 1,368.7600 won)


Geeky Gadgets
Gemini 2.5 Flash: Hybrid Reasoning AI Optimized for Thinking Efficiency
What if artificial intelligence could think only when you needed it to? Imagine a tool that seamlessly transitions between complex reasoning and straightforward processing, adapting to your specific needs without wasting resources. Enter Google's Gemini 2.5 Flash, a new AI model that redefines efficiency with its hybrid reasoning capabilities. By allowing developers to toggle between 'thinking' and 'non-thinking' modes, Gemini 2.5 Flash offers a level of control and adaptability that traditional AI systems simply can't match. Whether you're solving intricate problems or managing routine tasks, this innovation promises to deliver precision, scalability, and cost-efficiency, all tailored to your workflow.

In this coverage, Prompt Engineering explores how Gemini 2.5 Flash is reshaping the AI landscape with its thinking budget optimization, multimodal processing, and enhanced token capacities. You'll discover how its unique architecture eliminates the need for separate models, streamlining operations while reducing costs. But it's not without its limitations: plateauing performance at higher token usage and capped reasoning budgets raise important questions about its scalability for resource-intensive projects. As we unpack its strengths and challenges, you'll gain a deeper understanding of whether Gemini 2.5 Flash is the right fit for your next AI endeavor. Sometimes, the real innovation lies in knowing when not to think.

Understanding Hybrid Reasoning

At the core of Gemini 2.5 Flash lies its hybrid reasoning model, a feature that distinguishes it from traditional AI systems. This capability enables you to toggle 'thinking mode' on or off based on the complexity of the task. By managing the 'thinking budget', the maximum number of tokens allocated for reasoning, you can optimize the model's performance to suit specific use cases.
This approach eliminates the need for separate models for reasoning-intensive and simpler tasks, streamlining workflows and reducing operational overhead. Whether you're addressing intricate problem-solving scenarios or routine data processing, the model's adaptability ensures optimal performance. The ability to fine-tune the reasoning process provides a significant advantage, allowing you to allocate resources efficiently while achieving high-quality results.

Cost-Efficiency and Competitive Pricing

Gemini 2.5 Flash is designed with cost-conscious developers in mind, offering a pricing structure that reflects its focus on affordability and performance. The model's pricing tiers are as follows:

- Non-thinking mode: $0.60 per million tokens
- Thinking mode: $3.50 per million tokens

This competitive pricing positions Gemini 2.5 Flash as a cost-effective alternative to other leading AI models, such as those from OpenAI and DeepSeek. By integrating proprietary hardware and software, Google ensures a strong performance-to-cost ratio, making the model an attractive option for projects that require scalability without sacrificing quality. This balance between affordability and capability makes it a practical choice for developers aiming to optimize their resources.

Performance and Benchmark Comparisons

In benchmark evaluations, Gemini 2.5 Flash ranks second overall on the Chatbot Arena leaderboard, trailing only OpenAI's o4-mini in specific areas. However, it demonstrates significant improvements over its predecessor, Gemini 2.0 Flash, particularly in academic benchmarks. These advancements highlight the model's enhanced capabilities and its potential to deliver robust performance across various applications.
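Using the per-million-token prices quoted in the article, the cost gap between the two modes is easy to estimate before routing a request; a minimal sketch (the prices are the article's figures, the function itself is illustrative, not an official API):

```python
# Prices quoted for Gemini 2.5 Flash, in USD per million tokens
PRICE_PER_MILLION = {"non_thinking": 0.60, "thinking": 3.50}

def estimated_cost(tokens: int, mode: str = "non_thinking") -> float:
    """Estimated cost in USD for a given token count in the chosen mode."""
    return tokens / 1_000_000 * PRICE_PER_MILLION[mode]
```

At these rates, thinking mode is roughly 5.8x the price of non-thinking mode, which is why routing routine requests to the cheaper mode is the core of the model's cost story.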
While these results underscore its strengths, it is recommended that you test the model against your internal benchmarks to determine its suitability for your unique requirements. This hands-on evaluation will provide a clearer understanding of how Gemini 2.5 Flash can integrate into your workflows and meet your specific needs.

Enhanced Token and Context Window Capabilities

One of the standout features of Gemini 2.5 Flash is its enhanced token capacity, which significantly expands its utility for developers. The model supports:

- A maximum output length of 65,000 tokens, making it ideal for programming tasks and applications requiring extensive outputs.
- A context window of 1 million tokens, allowing the processing of large datasets or lengthy documents with ease.

These enhancements provide a substantial advantage for handling complex inputs and generating detailed outputs. Whether you're working on data-heavy projects or applications requiring extensive contextual understanding, Gemini 2.5 Flash offers the tools necessary to manage these challenges effectively.

Multimodal Processing for Diverse Applications

Gemini 2.5 Flash extends its capabilities to multimodal processing, supporting a variety of input types, including video, audio, and images. This versatility makes it a valuable tool for industries such as media analysis, technical documentation, and beyond. However, it is important to note that the model does not include image generation features, which may limit its appeal for creative applications. Despite this limitation, its ability to process diverse input types enhances its utility across a wide range of use cases.

Key Limitations to Consider

While Gemini 2.5 Flash excels in many areas, it is not without its limitations. These include:

- Challenges with certain logical deduction tasks and variations of classic reasoning problems.
- A 'thinking budget' capped at 24,000 tokens, with no clear explanation for this restriction.
- Performance gains that plateau as token usage increases, indicating diminishing returns for resource-intensive tasks.

These constraints highlight areas where the model may fall short, particularly for developers requiring advanced reasoning capabilities or higher token limits. Understanding these limitations is crucial for making informed decisions about the model's applicability to your projects.

Strategic Value for Developers

Google's Gemini 2.5 Flash reflects a strategic focus on cost optimization, scalability, and accessibility, making advanced AI technology available to a broader audience. Its hybrid reasoning capabilities, enhanced token and context window capacities, and multimodal processing features position it as a versatile and scalable tool for developers. By balancing quality, cost, and latency, the model caters to a wide range of applications, from data analysis to technical problem-solving. For developers seeking practical solutions that combine flexibility, performance, and affordability, Gemini 2.5 Flash offers a compelling option. Its ability to adapt to diverse tasks and optimize resource allocation ensures that it can meet the demands of modern AI challenges effectively.

Media Credit: Prompt Engineering