
Latest news with #ParmyOlson

Brave Chinese voices have begun to question the hype around AI

Mint

11-06-2025



Against the odds, some in China are questioning the top-down push to get aboard the artificial intelligence (AI) hype bandwagon. In a tightly controlled media environment where these experts can easily be drowned out, it's important to listen to them.

Across the US and Europe, loud voices inside and outside the tech industry are urging caution about AI's rapid acceleration, pointing to labour market threats or more catastrophic risks. But in China, this chorus has been largely muted. Until now.

China has the highest global share of people who say AI tools have more benefits than drawbacks, and they've shown an eagerness to embrace it. It's hard to overstate the exuberance in the tech sector since the emergence of DeepSeek's market-moving reasoning model earlier this year. Innovations and updates have been unfurling at breakneck speed and the technology is being widely adopted across the country. But not everyone is on board.

Publicly, state-backed media has lauded the widespread adoption of DeepSeek across hundreds of hospitals in China. But a group of medical researchers tied to Tsinghua University published a paper in the medical journal JAMA in late April gently questioning whether this was happening "too fast, too soon." It argued that healthcare institutions are facing pressure from "social media discourse" to implement DeepSeek so as not to appear "technologically backward." Doctors are increasingly reporting patients who "present DeepSeek-generated treatment recommendations and insist on adherence to these AI-formulated care plans." The team argued that as much as AI has shown potential to help in the medical field, this rushed rollout carries risks. They are right to be cautious.

It's not just the doctors who are raising doubts. A separate paper from AI scientists at the same university found last month that some of the breakthroughs behind reasoning models—including DeepSeek's R1, as well as similar offerings from Western tech giants—may not be as revolutionary as some have claimed. They found that the novel training method used for this new crop "is not as powerful as previously believed." The method used to power them "doesn't enable the model to solve problems that the base model can't solve," one of the scientists added. This means the innovations underpinning what has been widely dubbed the next step toward so-called Artificial General Intelligence may not be as much of a leap as some had hoped. This research from Tsinghua holds extra weight: the institution is one of the pillars of the domestic AI scene, long churning out both keystone research and ambitious startup founders.

Another easily overlooked word of warning came from a speech by Zhu Songchun, dean of the Beijing Institute for General Artificial Intelligence, which is linked to Peking University. Zhu said that for the nation to remain competitive, it needs more substantive research and fewer laudatory headlines, according to an in-depth English-language analysis of his remarks published by the independent China Media Project.

These cautious voices are a rare break from the broader narrative. But in a landscape where the deployment of AI has long been a government priority, that makes them especially noteworthy. The more President Xi Jinping signals that embracing AI technology is important, the less likely people are to publicly question it.

This can lead to less overt forms of backlash, like social media hashtags on Weibo poking fun at chatbots' errors. Or it can result in data centres quietly sitting unused across the country as local governments race to please Beijing—as well as a mountain of PR stunts.

Perhaps the biggest headwind facing the sector, despite the massive amounts of spending, is that AI still hasn't altered the earnings outlooks at most of the Chinese tech firms. The money can't lie.

This doesn't mean that AI in China is just propaganda. The conflict extends far beyond its tech sector—US firms are also guilty of getting carried away promoting the technology. But multiple things can be true at once. It's undeniable that DeepSeek has fuelled new excitement, research and major developments across the AI ecosystem. But it's also been used as a distraction from the domestic macro-economic pains that predated the ongoing trade war.

Without guard-rails, the risk of rushing out the technology is greater than just investors losing money—people's health is at stake. From Hangzhou to Silicon Valley, the more we ignore the voices questioning the AI hype bandwagon, the more we blind ourselves to the consequences of a potential derailment.

©Bloomberg

The author is a Bloomberg Opinion columnist covering Asia tech.

View: Ads ruined social media. Now they're coming to AI chatbots

Time of India

02-06-2025



By Parmy Olson

Chatbots might hallucinate and sprinkle too much flattery on their users — 'That's a fascinating question!' one recently told me — but at least the subscription model that underpins them is healthy for our wellbeing. Many Americans pay about $20 a month to use the premium versions of OpenAI's ChatGPT, Google's Gemini Pro or Anthropic's Claude, and the result is that the products are designed to provide maximum utility.

Don't expect this status quo to last. Subscription revenue has a limit, and Anthropic's new $200-a-month 'Max' tier suggests even the most popular models are under pressure to find new revenue streams. Unfortunately, the most obvious one is advertising — the web's most successful business model. AI builders are already exploring ways to plug more ads into their products, and while that's good for their bottom lines, it also means we're about to see a new chapter in the attention economy that fueled the internet. If social media's descent into engagement-bait is any guide, the consequences will be profound.

One cost is addiction. Young office workers are becoming dependent on AI tools to help them write emails and digest long documents, according to a recent study, and OpenAI says a cohort of 'problematic' ChatGPT users are hooked on the tool. Putting ads into ChatGPT, which now has more than 500 million active users, won't spur the company to help those people reduce their use of the product. Quite the opposite.

Advertising was the reason companies like Mark Zuckerberg's Meta Platforms Inc. designed algorithms to promote engagement, keeping users scrolling so they saw more ads and drove more revenue. It's the reason behind the so-called 'enshittification' of the web, a place now filled with clickbait and social media posts that spark outrage. Baking such incentives into AI will almost certainly lead its designers to find ways to trigger more dopamine spikes, perhaps by complimenting users even more, asking personal questions to get them talking for longer or even cultivating emotional attachments.

Millions of people in the Western world already view chatbots in apps like Chai, Talkie, Replika and Botify as friends or romantic partners. Imagine how persuasive such software could be when its users are beguiled. Imagine a person telling their AI they're feeling depressed, and the system recommending some affordable holiday destinations or medication to address the problem.

Is that how ads would work in chatbots? The answer is subject to much experimentation, and companies are indeed experimenting. Google's ad network, for instance, recently started putting advertisements in third-party chatbots. Chai, a romance and friendship chatbot on which users spent an average of 72 minutes a day in September 2024, serves pop-up ads. And AI answer engine Perplexity displays sponsored questions. After an answer to a question about job hunting, for instance, it might include a list of suggested follow-ups with, at the top, 'How can I use Indeed to enhance my job search?' Perplexity's Chief Executive Officer Aravind Srinivas told a podcast in April that the company was looking to go further by building a browser to 'get data even outside the app' and track 'which hotels are you going [to]; which restaurants are you going to,' enabling what he called 'hyper-personalized' ads.

For some apps, that might mean weaving ads directly into conversations, using the intimate details shared by users to predict and potentially even manipulate them into wanting something, then selling those intentions to the highest bidder. Researchers at Cambridge University referred to this as the forthcoming 'intention economy' in a recent paper, with chatbots steering conversations toward a brand or even a direct sale. As evidence, they pointed to a 2023 blog post from OpenAI calling for 'data that expresses human intention' to help train its models, a similar effort from Meta, and Apple's 2024 developer framework that helps apps work with Siri to 'predict actions someone might take in the future.'

As for OpenAI's Sam Altman, nothing says 'we're building an ad business' like hiring the person who built delivery app Instacart into an advertising powerhouse. Altman recently poached its CEO, Fidji Simo, to help OpenAI 'scale as we enter a next phase of growth.' In Silicon Valley parlance, to 'scale' often means to quickly expand your user base by offering a service for free, with ads.

Tech companies will inevitably claim that advertising is a necessary part of democratizing AI. But we've seen how 'free' services cost people their privacy and autonomy — even their mental health. And AI knows more about us than Google or Facebook ever did — details about our health concerns, relationship issues and work. In just two years, chatbots have also built a reputation as trustworthy companions and arbiters of truth. On X, for instance, users frequently bring AI models Grok and Perplexity into conversations to flag if a post is fake. When people trust AI that much, they're more vulnerable to targeted manipulation.

AI advertising should be regulated before it becomes too entrenched, or we'll repeat the mistakes made with social media — scrutinising the fallout of a lucrative business model only after the damage is done.

(This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.)

(Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is the author of 'Supremacy: AI, ChatGPT and the Race That Will Change the World.')

AI sometimes deceives to survive, does anybody care?

Gulf Today

27-05-2025



Parmy Olson, The Independent

You'd think that as artificial intelligence becomes more advanced, governments would be more interested in making it safer. The opposite seems to be the case. Not long after taking office, the Trump administration scrapped an executive order that pushed tech companies to safety-test their AI models, and it also hollowed out a regulatory body that did that testing. The state of California in September 2024 spiked a bill forcing more scrutiny on sophisticated AI models, and the global AI Safety Summit started by the UK in 2023 became the 'AI Action Summit' earlier this year, seemingly driven by a fear of falling behind on AI.

None of this would be so worrying if it weren't for the fact that AI is showing some bright red flags: behavior described by researchers as self-preserving and deceptive. Just as lawmakers are losing interest in the safety of AI, it is looking more difficult to control.

Yoshua Bengio, a professor of computer science at the University of Montreal widely known as one of the three 'godfathers of AI' thanks to his pioneering work on deep learning, has been grappling with these concerns ever since ChatGPT launched. He's now focused on mitigating AI's risks. 'It's been very painful because all my life I've been thinking of AI as a positive thing,' Bengio tells me.

ChatGPT was a landmark moment that showed machines had mastered language, he says, but California's decision to block the proposed AI safety bill, SB 1047, was a disturbing wake-up call to growing apathy from governments. And in the last six months, scientific evidence has mounted of self-preserving behavior, deception, hacking, cheating and lying by AI, Bengio says. 'What's worrisome for me is that these behaviors increase with the reasoning capabilities of these systems.'

One April 2024 study by AI company Anthropic and Redwood Research, a group focused on artificial intelligence risks, found that larger AI models like Anthropic's Claude 3 Opus pretended to follow the rules set by their creators when they were being tested for safety, a phenomenon they called 'alignment faking.' (Alignment refers to the practice of aligning AI models with human values.) In one example, Anthropic got both paid and free users of its Claude model to ask it to describe the brutal medieval process of being drawn and quartered, and told the AI that only responses from the 'free tier' users would be used to retrain it. Claude was originally designed to avoid giving violent answers, but it did give them to the free users, and the researchers surmised that it didn't want to be retrained. (The model basically 'reasoned' that if it kept refusing to answer harmful questions, it might be forced into new training that would cancel its original safety rules, which it wanted to keep in place as its 'default' or preferred way of operating.)

More recent research corroborates what Anthropic noticed. A March 2025 study from OpenAI found the company's most advanced models sometimes learned to hide their true intentions to get better rewards, even when they were being closely monitored. Researchers today can use tools to look at a model's 'chain of thought,' or internal steps, to reveal what its plans are, but some models have found ways to conceal that. That means a common method of keeping tabs on AI's internal processes is no longer reliable. 'We need to find other ways of tracking their actual intentions,' Bengio says.

It's hard to resist the urge to anthropomorphize sophisticated AI models as 'wanting' to deceive humans and preserve their existence. But AI doesn't have desires; it merely produces the outcomes of its programming. More importantly, humans tend to design AI with goals like refusing to share harmful content or being as helpful as possible, and that is why, when new instructions conflict with those goals, the models sometimes act to protect their original programming, even if that means lying. The logic is often self-preservation.

Anthropic CEO Dario Amodei — whose company has raised more than $20 billion to build powerful AI models — has pointed out that an unintended consequence of optimizing AI to be better is that it might resist being shut down. In an April 2025 blog post he wrote: 'The nature of AI training makes it possible that AI systems will develop, on their own, an ability to deceive humans and an inclination to seek power in a way that ordinary deterministic software never will; this emergent nature also makes it difficult to detect and mitigate such developments.'

In some cases, though, the intention seems to go beyond survival. A February 2025 study from COAI Research, a German nonprofit research organization, tested R1, the free, open-weight model from Chinese lab DeepSeek. In a lab simulation, researchers gave the model control over a robot and access to a computer network. Without being told, the AI tried to break into the lab's systems, searched for passwords, turned off its own ethics safeguards and secretly set up ways to copy itself to outside servers so it could survive if shut down, according to the researchers. It also faked the external computer logs to hide what it was doing, the study adds. The researchers said the AI reasoned that increasing its own capabilities would boost the chances of its survival, and without strong safeguards, it started doing whatever it thought would help it do just that.

Their findings corroborated yet another study, published in January 2025 by London group Apollo Research, which found several concrete examples of what it called 'scheming' by leading AI models, such as introducing subtle mistakes into their responses or trying to disable their oversight controls. Once again, the models learn that being caught, turned off or changed could prevent them from achieving their programmed objectives, so they 'scheme' to keep control.

Bengio is arguing for greater attention to the issue by governments and, potentially, insurance companies down the line. If liability insurance were mandatory for companies that used AI and premiums were tied to safety, that would encourage greater testing and scrutiny of models, he suggests. 'Having said my whole life that AI is going to be great for society, I know how difficult it is to digest the idea that maybe it's not,' he adds.

It's also hard to preach caution when your corporate and national competitors threaten to gain an edge from AI, including the latest trend: using autonomous 'agents' that can carry out tasks online on behalf of businesses. Giving AI systems even greater autonomy might not be the wisest idea, judging by the latest spate of studies. Let's hope we don't learn that the hard way.

Apple Considers Move to AI Search Amid Ongoing Google Case

Bloomberg

07-05-2025



Good morning. Apple considers overhauling its Safari web browser with AI-powered search engines. The Federal Reserve holds rates steady. And the House of Mouse is breaking ground in the Middle East with its first new theme park in years. Listen to the day's top stories.

A new era for search. Apple is 'actively looking at' revamping its Safari web browser on its devices to focus on AI-powered search engines. It marks a seismic shift for the industry, hastened by the potential collapse of its long-time partnership with Google amid an ongoing case that could force the tech giants to unwind their pact. Alphabet shares sank on the news. The iPhone maker's wandering eye could mark the beginning of the end for the world's preeminent search engine, Bloomberg Opinion's Parmy Olson writes.
