Latest news with #DarioAmodei
Yahoo
9 hours ago
- Science
AI is more likely to create a generation of 'yes-men on servers' than any scientific breakthroughs, Hugging Face co-founder says
Hugging Face's co-founder and top scientist, Thomas Wolf, is pouring cold water on hopes that current AI systems could revolutionize scientific progress. Speaking to Fortune at the Viva Technology (VivaTech) conference in Paris, Wolf argued that while today's large language models (LLMs) have shown an impressive ability to find answers to questions, they fall short when it comes to asking the right ones, which he sees as the harder part of true scientific progress. Rather than building the next Einstein, he says, we may be creating a generation of digital 'yes-men.'

'In science, asking the question is the hard part, it's not finding the answer,' Wolf said. 'Once the question is asked, often the answer is quite obvious, but the tough part is really asking the question, and models are very bad at asking great questions.'

Wolf said he came to this conclusion after reading a widely circulated blog post by Anthropic CEO Dario Amodei called 'Machines of Loving Grace,' in which Amodei argues the world is about to see the 21st century 'compressed' into a few years as AI drastically accelerates science. Wolf said he initially found the piece inspiring but began to doubt Amodei's idealistic vision of the future on a second read. 'It was saying AI is going to solve cancer and it's going to solve mental health problems — it's going to even bring peace into the world, but then I read it again and realized there's something that sounds very wrong about it, and I don't believe that,' he said.

For Wolf, the problem isn't that AI lacks knowledge but that it lacks the ability to challenge our existing frame of knowledge.
AI models are trained to predict likely continuations, for example the next word in a sentence, and while today's models excel at mimicking human reasoning, they fall short of real original thinking. 'Models are just trying to predict the most likely thing,' Wolf explained. 'But in almost all big cases of discovery or art, it's not really the most likely art piece you want to see, but it's the most interesting one.'

Using the example of Go, the board game that became a milestone in AI history when DeepMind's AlphaGo defeated a world champion in 2016, Wolf argued that while mastering the rules of Go is impressive, the bigger challenge lies in inventing such a complex game in the first place. In science, he said, the equivalent of inventing the game is asking truly original questions.

Wolf first suggested this idea in a blog post titled 'The Einstein AI Model,' published earlier this year, in which he wrote: 'To create an Einstein in a data center, we don't just need a system that knows all the answers, but rather one that can ask questions nobody else has thought of or dared to ask.' What we have instead, he argues, are models that behave like 'yes-men on servers': endlessly agreeable, but unlikely to challenge assumptions or rethink foundational ideas.
Yahoo
14 hours ago
- Business
Commentary: Secure AI for America's future & humanity's too
A technological revolution is unfolding — one that will transform our world in ways we can barely comprehend. As artificial intelligence rapidly evolves and corporate America's investment in AI continues to explode, we stand at a crossroads that will determine not just America's future but humanity's as well. Many leading experts agree that artificial general intelligence (AGI) is within sight, and there is a growing consensus that it could arrive within the next two to five years. This is a fundamental shift that will lead to scientific and technological advances beyond our imagination. Some have referred to the development of advanced AI as the Second Industrial Revolution, but the truth is that it will be more significant than that — perhaps incomprehensibly so — and we are not prepared.

The potential benefits of AGI are extraordinary. It could discover cures for diseases we have battled for generations, find solutions to the most difficult mathematical and physics problems, and create trillions of dollars in new wealth. However, there is real cause for concern that we are racing toward an unprecedented technological breakthrough without considering the many dangers it poses, including dangers to our labor force, U.S. national security, and even humanity's very existence. As Anthropic CEO Dario Amodei recently suggested, AI could lead to a 'bloodbath' for job seekers trying to find meaningful work, and that is just one threat.

The same technology that could eradicate cancer may also create bioweapons of unprecedented lethality. Systems designed to optimize energy distribution could be weaponized to destroy critical infrastructure. As countries sprint to develop advanced AI, the one conversation we are not having is about the possibility that the same tools that might solve our greatest challenges could create catastrophic and even existential risks.
Back in 2014, Stephen Hawking warned, 'The development of full artificial intelligence could spell the end of the human race.' More recently, OpenAI CEO Sam Altman claimed, 'AI will probably most likely lead to the end of the world, but in the meantime, there will be great companies.' According to Bill Gates, not even doctors and lawyers are safe from AI replacement.

AI is advancing at warp speed without any brakes, and we are unprepared to deal with the risks. For this reason, we are launching The Alliance for Secure AI, with a mission to ensure advanced AI innovation continues with security and safety as top priorities. We have no interest in stifling critical technological advancement. America can continue to lead the world in AI development while also establishing the necessary safeguards to protect humanity from catastrophe.

Safeguards begin with effective communication across political lines. We will host strategy meetings with coalition partners across the technology, policy, and national security sectors, ensuring that conversations are informed about the dangers of AGI. Beyond the halls of Congress, this will require a public education push. Most Americans are unaware of the unprecedented threats that AI may pose. Our educational efforts will make complex AI concepts accessible to everyday Americans, who must understand that their livelihoods are at risk. By convening AI experts, policymakers, journalists, and other key stakeholders, we can connect leaders who must work together to get this right for America, and humanity. We have no choice but to build a community committed to responsible AI advancement.

I am profoundly optimistic about AI's potential to improve our lives. And yet, alongside its potential benefits, AGI will introduce serious and dangerous problems that we will all need to work together to solve. The advanced AI revolution will be far more consequential than anything in history.
Daily activities for everyday Americans will be forever changed. AGI will impact the economy, national security, and the understanding of consciousness itself. Google is already hiring for a 'post-AGI' world in which AI is smarter than the smartest human being at all cognitive tasks. It is critical that the U.S. maintain its technological leadership while ensuring AI systems align with human values and American principles. Without safeguards, we risk a future in which the most powerful technology ever created could threaten human liberty and prosperity.

This is about asking fundamental questions: What role should AI play in society? What trade-offs do we need to consider? What limits should we place on autonomous systems? Finding the answers to these questions requires broad public engagement — not just from Big Tech, but from every single American.
Yahoo
a day ago
- Business
LinkedIn cofounder Reid Hoffman says people are underestimating AI's impact on jobs, but it won't be a 'bloodbath'
- Reid Hoffman said AI will transform jobs, but he doesn't think it will cause a "bloodbath."
- The LinkedIn cofounder was referring to comments made by Anthropic CEO Dario Amodei last month.
- Nvidia CEO Jensen Huang has also disagreed with Anthropic's AI job loss predictions.

Reid Hoffman, the venture capitalist who cofounded LinkedIn, said AI will transform jobs, but he rejected the idea that it will result in a "bloodbath" for job seekers. "Yes, I think people are underestimating AI's impact on jobs," Hoffman said on an episode of the Rapid Response podcast, released Tuesday. "But I think inducing panic as a response is serving media announcement purposes," he said, "and not actually, in fact, intelligent industry and economic and career path planning."

The podcast's host, Bob Safian, asked Hoffman about comments made by Dario Amodei, CEO of AI firm Anthropic, in May. In an interview with Axios, Amodei warned that AI companies and governments needed to stop "sugarcoating" the potential for mass job losses in white-collar industries like finance, law, and consulting. "We, as the producers of this technology, have a duty and an obligation to be honest about what is coming," Amodei said. He estimated that AI could push unemployment up by as much as 20% in the next five years, and may eliminate half of entry-level white-collar jobs within that same period.

Hoffman said he had called the Anthropic CEO to discuss it. "'Bloodbath' is a very good way to grab internet headlines, media headlines," Hoffman said. (Axios, not Amodei, used the phrase "white-collar bloodbath.") But, Hoffman added, "bloodbath just implies everything going away." He said he disagreed with that assessment, believing that transformation, not mass elimination, of jobs is the more likely outcome. "Dario is right that over a decade or three, there will be a massive set of job transformation," Hoffman said.
But he compared it to the introduction of tools like Microsoft Excel, which some believed at the time would mark the end of accountancy roles. "In fact, the accountant job got broader, richer," Hoffman said. He added: "Just because a function's coming that has a replacement area on a certain set of tasks doesn't mean all of this job's going to get replaced." Instead of AI eliminating roles, Hoffman predicted: "We at least have many years, if not a long time, of person-plus-AI doing things."

Hoffman isn't the only business leader to question Amodei's AI doomsday prophecy. Speaking at VivaTech in Paris earlier this month, Nvidia CEO Jensen Huang said he and Amodei "pretty much disagree with almost everything" on AI. "One, he believes that AI is so scary that only they should do it," Huang said. "Two, that AI is so expensive, nobody else should do it." Huang added, "And three, AI is so incredibly powerful that everyone will lose their jobs, which explains why they should be the only company building it."

Anthropic did not immediately respond to a request for comment from Business Insider.


Forbes
2 days ago
- Business
The Hidden Leadership Opportunity In Amazon CEO Andy Jassy's AI Memo
Andy Jassy's message contains a useful reminder for leaders. Amazon's CEO recently shared a message with employees about the far-reaching impact of generative AI. While it's no surprise that AI is embedded across every business unit at Amazon, one small text block in particular stood out to many people. In the message, Jassy wrote: 'We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs. It's hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company.'

Coming from a company of Amazon's scale, this was yet another signal of a significant shift in corporate culture. Jassy's message echoes recent warnings about AI from Nvidia's Jensen Huang and Anthropic's Dario Amodei. For some leaders, this can be unnerving. For others, it can equally serve as an opportunity for growth and reinvention.

In business, buzzwords like "adaptability" are tossed around frequently. With the disruption of AI, however, adaptability is a necessity for survival and advancement. Psychologist Carol Dweck, author of Mindset: The New Psychology of Success, wrote: "Becoming is better than being." That's especially true now, because many leaders have built their identities around being experts, having all the answers, and optimizing what already works. In the era ahead, that posture and rigidity can become a liability.

One of the first steps toward adaptability is detaching from who you were and committing to who you must become. It means less ego, more experimentation, and more curiosity. Most importantly, it means modeling this mindset for your team, because the tone you set will shape how your organization responds to change. AI will test your business. But it will test your identity and ego even more.
You don't need to become a world-class machine learning expert, but it is essential to establish a solid foundation in AI fluency. This involves understanding what AI can (and cannot) do, how it applies to your specific domain, and how to integrate it effectively into your workflow, thinking, and leadership.

AI fluency starts with something deceptively simple: curiosity. Curiosity begins with asking more questions, such as: What can this tool do for how I lead? How might it improve my decision-making, communication, or strategy development? What's the next skill I need to learn before I need it? Curiosity opens the door to adaptability. Adaptability unlocks leverage. Leverage is how you create your unique competitive advantage.

Think of it like driving: you don't need to know how every part of the engine works, but you do need to know how to steer the car. The leaders who learn to drive AI, even at a basic level, will quickly outpace those standing still, waiting for instructions or, worse, waiting until they're forced to catch up.

In sports, great players anticipate, and the same is true in business. Leaders must build culture and capability before the need is urgent and before the market forces them to act. As Jassy noted, Amazon is operating like "the world's largest startup." That means experimenting early, failing fast, and scaling what works. That playbook isn't just for tech giants; it's a leadership principle for any individual or team, regardless of size. Whether you're running a billion-dollar division, leading a five-person strategy unit, or trying to build a healthier workforce, the message is the same: don't wait to be told or forced to evolve. The longer you wait, the fewer your options and the more you're playing catch-up. By the time you desperately need a new capability, it's already too late to start building it.
Andy Jassy's message, like those from other CEOs discussing AI, can feel unsettling, especially layered on top of today's broader uncertainties. But viewed through the right lens, this moment offers an asymmetric upside for leaders who lean in. That opportunity begins with your identity: how you speak to yourself and how you perceive challenges and change. As Carol Dweck wrote, "The view you adopt for yourself profoundly affects the way you lead your life." That includes how you lead your team, your business, and your approach to AI. The AI threat is real. But so is the AI opportunity. Choose the mindset that opens doors instead of slamming them shut.


CNN
2 days ago
- Business
AI warnings are the hip new way for CEOs to keep their workers afraid of losing their jobs
Every few weeks, the Earth cries out for an artificial intelligence scare on a frequency heard only by tech CEOs. And lo, like a rain cloud over a parched valley, here comes Amazon boss Andy Jassy to shower us with fresh fear and dread.

In a memo sent to employees titled 'Some thoughts on Generative AI,' Jassy spent 1,200 words largely rattling off examples of Amazon's AI progress. It is making Alexa, its personal assistant software, 'meaningfully smarter,' and turning its customer service chatbot into 'an even better experience.' (How, and by what measure? He didn't say. But 'you get the idea,' he wrote.)

Then, in a textbook example of burying the lede, he got to the point around paragraph 15: We're almost certainly going to replace some Amazon workers with AI 'agents.' When? 'In the next few years.' How many jobs are we talking about? 'It's hard to know… we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company.' Where are all these so-called agents? 'Many of these agents have yet to be built, but make no mistake, they're coming, and coming fast.' Fast! Soon! We expect! They're coming!

To be clear: I'm not saying Jassy is lying. But he is clearly invoking AI to put a modern spin on a strategy as old as time: keep workers working by making them afraid of losing their jobs. The sentiment echoes a similar but more dramatic statement by Anthropic CEO Dario Amodei, who told CNN and Axios that AI could wipe out half of all entry-level white-collar jobs sometime in the next five years. (Why half? And why five years? Eh, why not… Amodei's motive is to make his core technology appear both inevitable and scary-powerful.) Not all tech CEOs agree, of course. Nvidia's Jensen Huang and Google DeepMind's Demis Hassabis, both major players in the AI space, have pushed back on Amodei's apocalyptic take.
It's important to keep a few things in mind when we get these semiannual bursts of AI fearmongering from the very people who stand to profit from advancing the technology.

One: Automation and machine learning have been around for decades, and yes, that has had (and continues to have) an impact on the labor market. But the idea that generative AI, in particular, is going to usher in some kind of doom-slash-utopia belongs in the realm of science fiction. Large language models that power advanced AI chatbots can be impressive sidekicks and sounding boards, to be sure. They are also hallucinating more, not less, as they grow larger. And they have just about run out of the kind of human-grade data engineers need to train them.

Two: Notice that Jassy's note to staff didn't say AI was coming for his job, or his fellow executives' jobs. Kinda seems like he might want to review what current AI is good at (producing OK-sounding memos, synthesizing information and, maybe, solving strategic puzzles) and then consider what AI is still really bad at (physically lifting things and moving them around).

Three: It's curious to see Big Tech recycling the same language about 'flexibility' and 'efficiency' that came with literally every other workplace tech innovation of the last 30-plus years. Email, Slack, Teams, Zoom, Plorfen, Globz. (OK, I made the last two up.) To be clear, those things aren't inherently bad. They did give us flexibility that proved vital during the 2020 lockdowns. But they also gave us the flexibility to be online in perpetuity, all day and night, seven days a week. Incidentally, Microsoft, a company that has earmarked $80 billion in AI spending for the year, just released a report about how those innovations have, rather than liberating office workers from drudgery, actually trapped us in an 'infinite workday.'
The report found that the typical office worker using Microsoft's Outlook, Teams, PowerPoint and other products is interrupted every two minutes by a meeting, an email or a chat notification during a standard eight-hour shift. That's 275 pings a day, my colleague Anna Cooban notes. The average employee receives 117 emails a day and sends or receives 58 instant messages outside of their core working hours, a jump of 15% from last year. Part of Microsoft's solution for this 'broken system,' it should be noted, includes reorienting jobs around (wait for it) AI agents.