Latest news with #Anthropic

LinkedIn Cofounder Reid Hoffman says people are underestimating impact of AI on jobs, rejects bloodbath fears

India Today

2 hours ago

  • Business
  • India Today

Many professionals are worried about AI taking over jobs, especially in white-collar roles. However, LinkedIn co-founder Reid Hoffman believes that the fear of AI, particularly the panic over mass job losses, is exaggerated. While AI will certainly bring significant transformation to the job sector, he argues, there will be no bloodbath for white-collar workers.

Hoffman made these comments in response to a statement by Anthropic CEO Dario Amodei, during an interview with Fast Company. Amodei had told Axios that AI could lead to a dramatic overhaul of white-collar work. While Hoffman acknowledged that change is inevitable, he dismissed the idea that the rise of AI would spell catastrophe for workers.

'Dario is right that over, call it, a decade or three, it will bring a massive set of job transformations. And some of that transformation will involve replacement issues,' Hoffman said. He emphasised that the shift in jobs due to AI should not be confused with total job destruction. 'Just because a function's coming that has a replacement area on a certain set of tasks doesn't mean all of this job's going to get replaced.'

To support his views, Hoffman pointed to the launch of spreadsheet software like Excel. He highlighted that although Excel changed the nature of accounting work, it did not eliminate the need for accountants. Instead, the accounting profession evolved and even expanded in scope. 'Everyone was predicting that the accountant job would go away. And actually, in fact, the accountant job got broader, richer,' he said.

Hoffman maintains a clear view that, in future, AI will assist humans rather than replace them entirely. He envisions a world of 'person plus AI doing things' as the most likely scenario going forward. Therefore, AI-powered tools like GPT-4, Claude, and Microsoft Copilot should be used to enhance departments, not eliminate them. He warns that trying to completely substitute humans with AI would be a serious mistake. 'Could I just replace, for example, my accountants with GPT-4? The answer is absolutely not. That would be a disastrous mistake.'

The LinkedIn co-founder also pushed back against the notion that automation through AI would wipe out entire departments. 'Let's replace my marketing department or my sales department with GPT-4. Absolutely not,' he said, adding, 'that's nowhere close to a bloodbath.'

However, Hoffman is not denying the potential for job replacement altogether. He acknowledges that some roles are more vulnerable, especially those that have already been reduced to scripted, mechanical tasks. 'What jobs are most likely to be replaced? They're the ones where we're trying to program human beings to act like robots.' Yet, even in such cases, Hoffman believes AI will not take over everything. Much will depend on how companies choose to implement AI in their workflows.

Reid Hoffman Downplays AI Job Loss Fears, Urges Focus on Human-AI Collaboration

Hans India

2 hours ago

  • Business
  • Hans India

LinkedIn co-founder Reid Hoffman has pushed back against growing anxiety over artificial intelligence (AI) and its impact on employment, especially among white-collar workers. In a recent conversation sparked by comments from Anthropic CEO Dario Amodei, Hoffman argued that while AI will indeed change the landscape of work, fears of an all-out "job bloodbath" are exaggerated.

Amodei had earlier warned of AI driving a significant overhaul of white-collar jobs, raising concerns about the replacement of human roles. Hoffman, however, offered a more balanced perspective. 'Dario is right that over, call it, a decade or three, it will bring a massive set of job transformations. And some of that transformation will involve replacement issues,' he admitted. But he quickly clarified that this shift doesn't equate to widespread unemployment. 'Just because a function's coming that has a replacement area on a certain set of tasks doesn't mean all of this job's going to get replaced.'

Hoffman pointed to historical parallels to support his view, citing the example of Microsoft Excel. When spreadsheet software was introduced, many feared it would render accountants obsolete. Instead, the field evolved. 'Everyone was predicting that the accountant job would go away. And actually, in fact, the accountant job got broader, richer,' he said.

According to Hoffman, the future of work lies in symbiosis between humans and machines. He imagines a workplace where employees are empowered, not displaced, by AI tools such as GPT-4, Claude, and Microsoft Copilot. These technologies, he insists, should be used to enhance productivity, not eliminate human effort. 'Could I just replace, for example, my accountants with GPT-4? The answer is absolutely not. That would be a disastrous mistake,' Hoffman warned.

Hoffman strongly cautioned against wholesale automation, particularly the idea of removing entire departments. 'Let's replace my marketing department or my sales department with GPT-4. Absolutely not,' he said. 'That's nowhere close to a bloodbath.'

While Hoffman does acknowledge that some roles are at greater risk, especially those made up of repetitive or scripted tasks, he believes the potential for AI to replace such jobs has more to do with how businesses choose to deploy these technologies. 'What jobs are most likely to be replaced? They're the ones where we're trying to program human beings to act like robots,' he said.

In conclusion, Hoffman remains optimistic about AI's role in the job market. Instead of viewing AI as a threat, he believes it should be seen as a powerful partner. 'Person plus AI doing things' is the model he champions, one where human judgment, creativity, and adaptability remain essential. As the debate around AI and jobs continues, Hoffman's call for cautious optimism and thoughtful implementation serves as a timely reminder: transformation does not have to mean elimination.

Model Context Protocol provides the interconnection for AI work

Forbes

2 hours ago

  • Business
  • Forbes

AI needs contextual interconnection to work. Model Context Protocol (MCP) is an open standard developed by the maverick artificial intelligence startup Anthropic. It is designed to allow AI agents to access and interact with external data, application programming interfaces, software tools and services. Rather like a universal two-way USB-C port for AI (a nickname it has embraced), MCP provides a secured and standardized route for AI models to access information and take action. Given this technology's potential, what do software application developers (and the businesspeople using the AI services above them) need to know about MCP?

Anthropic open sourced MCP in November 2024 and the company says that the architecture itself is 'straightforward' to use: developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers. Just as a standard software connector allows different devices to communicate seamlessly, MCP enables AI systems to access and interpret the right context by linking them with a whole range of software services, tools and data sources. Billed as a game-changer for AI integration, MCP is gaining traction among vendors including Microsoft, OpenAI and Google.

'Instead of maintaining separate connectors for each data source, developers can now build against a standard protocol. As the ecosystem matures, AI systems will maintain context as they move between different tools and datasets, replacing today's fragmented integrations with a more sustainable architecture,' detailed Anthropic itself, on the company's technical blog.

But that's not all software developers need to know. Why? Because in most software engineering teams, the issue of integration within context remains the biggest barrier to useful AI. According to Facundo Giuliani, solution engineering team manager at enterprise CMS company Storyblok, this integration disconnect is fundamental because context is everything for AI interactions, in terms of the way we want to use AI-based smart automation services today.

'Whether a software team is building a new app, chatbot or ecommerce engine, the model's performance hinges on its ability to understand the user's intent, history, preferences and environment. Traditionally, AI integrations have relied on static prompts to deliver instructions and context. This can be time-consuming and cumbersome, while undermining the scope for accuracy and scalability. MCP changes this,' enthused Giuliani.

Instead of relying on scattered prompts, this new technology standard means that software engineers are now able to define and deliver context dynamically, making integrations faster, more accurate and easier to maintain. By decoupling context from prompts and managing it like any other component, developers can, in effect, build their own personal, multi-layered prompt interface. This is said to transform AI from a black box into an integrated part of an organization's working technology stack.

'One of MCP's big advantages is how well it fits into typical development workflows. Being API-first by design, MCP plugs into existing tools and frameworks with ease, allowing developers to define, update and reuse context programmatically,' explained Giuliani. 'Think of it in the same way as managing code or data. This new layer of control makes AI behavior more predictable and easier to test, debug and scale across environments.'
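To make the server/client split concrete, here is a minimal sketch of what exposing data through an MCP server can look like, using the FastMCP helper from Anthropic's open-source Python SDK (the mcp package). The inventory server, the check_stock tool and its stubbed data are hypothetical illustrations for this article, not part of any vendor's product.

```python
# Minimal MCP server sketch using the FastMCP helper from Anthropic's
# open-source Python SDK (pip install mcp). The "check_stock" tool and
# its data are hypothetical examples of exposing internal data.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-server")

@mcp.tool()
def check_stock(sku: str) -> int:
    """Return the units in stock for a product SKU (stubbed data)."""
    fake_inventory = {"SKU-001": 42, "SKU-002": 0}
    return fake_inventory.get(sku, 0)

if __name__ == "__main__":
    # Serve over stdio so any MCP client can launch and connect to it.
    mcp.run()
```

Any MCP-capable client, whether a desktop assistant, an AI-enabled IDE or custom code, can then discover and call that tool without a bespoke connector.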
Importantly, Giuliani says, MCP also fits naturally into composable and MACH (Microservices, API-first, Cloud-native, Headless) architectures by treating context as a modular, API-driven component that can be integrated wherever needed. Just as with microservices or headless frontends, this approach means you can compose and embed AI functionality across different layers of the stack without rigid dependencies. The result is greater flexibility and reusability, faster iteration across distributed systems and full scalability.

Having worked with this new AI software services layer internally at Storyblok, the team suggests there is good news: users of all technical abilities (okay, software engineers for now, but these technologies inevitably get more abstracted over time and move toward the hands of so-called citizen developers) don't need to be machine learning experts to get started with MCP. What's more important is having a solid understanding of APIs, data structures and typical application architecture.

'To begin, AI engineering teams need to map out the key context components their AI models need to deliver accurate, relevant responses. They then need to ensure these elements are well-structured, consistently maintained and easily accessible across the system. Since MCP is all about providing context effectively, understanding how AI fits into the end users' software product experiences is essential,' said Giuliani.

Because MCP is API-driven, teams can start experimenting with context-aware applications using the tools and languages they already know. In Storyblok's experience, most software developers can have a basic integration up and running in under an hour (see the client sketch below). Once they are up and running, they can aim to integrate MCP incrementally within existing workflows. They should then test thoroughly to observe how different context signals influence AI behavior. Most importantly, they should treat context as a living part of the AI software system being created, continuously updating and refining it based on real user interactions and feedback to maximize effectiveness over time.

'Like any powerful tool, MCP comes with its own set of pitfalls. One of the most common mistakes is a poorly defined context, i.e. either too little data or too much irrelevant data. This can result in inconsistent model behavior or bloated integrations. Another mistake is treating MCP as a plug-and-play solution without tailoring it to the specific needs of an application. Context is inherently tied to the business domain in which this technology is used, so it needs to be structured thoughtfully for specific use cases to get the most out of it,' advised Giuliani.

Support for MCP is seen far and wide. Image and video platform company Cloudinary announced its Cloudinary Model Context Protocol Server offering this month. The company says that this MCP service allows AI agents and large language models like Base44, Claude, Cursor and others to interact with Cloudinary's image and video APIs and documentation using natural language. The firm promises that this technology will be accessible through its platform for both 'traditional' software developers and AI builders alike. 'MCP Server is our latest commitment to ensuring software engineers of all kinds have the tools they need to build visual-first experiences and apps,' said Tal Lev-Ami, co-founder and CTO, Cloudinary.
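Returning to Giuliani's getting-started advice, the sketch below shows what a first MCP client experiment might look like with the same open-source Python SDK. The server script name and the check_stock tool refer back to the hypothetical server sketch above, not to any real product.

```python
# Minimal MCP client sketch using Anthropic's open-source Python SDK.
# It launches the (hypothetical) server script over stdio, performs the
# MCP handshake, lists the tools the server exposes, and calls one.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(command="python", args=["inventory_server.py"])

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()           # MCP handshake
            tools = await session.list_tools()   # discover exposed tools
            print([tool.name for tool in tools.tools])
            result = await session.call_tool("check_stock", {"sku": "SKU-001"})
            print(result.content)

asyncio.run(main())
```

The same session object can also read MCP resources and prompts, which is how context itself, not just tool calls, moves across the protocol.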
Lev-Ami suggests that the new era of LLM-powered code generation underscores the importance of open, API-first platforms and tools like MCP. For his money, this is the route to empowering software engineers to build within flexible and trusted frameworks and models.

Enterprise data services company Ctera now offers native support for MCP and claims to be the 'first hybrid cloud platform' to embed an MCP Server for secure AI integration into its stack. This allows enterprises to connect LLMs, including assistants like Claude, AI IDEs (e.g. Cursor) and internally developed agents, directly to private data, without compromising security or compliance. Ctera CTO Aron Brand says that this development is a step toward LLM-based assistants working seamlessly with an organization's internal data. 'We're giving their teams a secure and intelligent way to enable real-time decisions, faster workflows and new kinds of automation without introducing security and compliance challenges to the business,' said Brand.

The industry is in broad agreement on the proposition that MCP is more than just a new standard; it could change how we think about AI and where we are able to 'inject it' into working business applications. With a short-term roadmap focused on enhanced security, richer developer tooling and broader ecosystem support, the consensus is that MCP will continue to inch even closer to becoming a universal standard for AI integration over the next one to two years.

Commentary: Secure AI for America's future & humanity's too

Yahoo

3 hours ago

  • Business
  • Yahoo

A technological revolution is unfolding — one that will transform our world in ways we can barely comprehend. As artificial intelligence rapidly evolves and corporate America's investment in AI continues to explode, we stand at a crossroads that will determine not just America's future but humanity's as well.

Many leading experts agree that artificial general intelligence (AGI) is within sight. There is a growing consensus that it could be here within the next two to five years. This is a fundamental shift that will lead to scientific and technological advances beyond our imagination. Some have referred to the development of advanced AI as the Second Industrial Revolution, but the truth is that it will be more significant than that — perhaps incomprehensibly so — and we are not prepared.

The potential benefits of AGI are extraordinary. It could discover cures for diseases we have battled for generations, find solutions to the most difficult mathematical and physics problems, and create trillions of dollars in new wealth. However, there is real cause for concern that we are racing toward an unprecedented technological breakthrough without considering the many dangers it poses. This includes dangers to our labor force, U.S. national security, and even humanity's very existence.

As Anthropic CEO Dario Amodei recently suggested, AI could lead to a 'bloodbath' for job-seekers trying to find meaningful work, and that is just one threat. The same technology that could eradicate cancer may also create bioweapons of unprecedented lethality. Systems designed to optimize energy distribution could be weaponized to destroy critical infrastructure. As countries sprint to develop advanced AI, the one conversation we are not having is about the possibility that the same tools that might solve our greatest challenges could create catastrophic and even existential risks.

Back in 2014, Stephen Hawking warned, 'The development of full artificial intelligence could spell the end of the human race.' More recently, OpenAI CEO Sam Altman claimed, 'AI will probably most likely lead to the end of the world, but in the meantime, there will be great companies.' According to Bill Gates, not even doctors and lawyers are safe from AI replacement.

AI is advancing at warp speed without any brakes, and we are unprepared to deal with those risks. For this reason, we are launching The Alliance for Secure AI, with a mission to ensure advanced AI innovation continues with security and safety as top priorities. We have no interest in stifling critical technological advancement. America can continue to lead the world in AI development while also establishing the necessary safeguards to protect humanity from catastrophe.

Safeguards begin with effective communication across political lines. We will host strategy meetings with coalition partners across the technology, policy, and national security sectors, ensuring that conversations are informed about the dangers of AGI. Beyond the halls of Congress, this will require a public education push. Most Americans are unaware of the unprecedented threats that AI may pose. Our educational efforts will make complex AI concepts accessible to everyday Americans, who must understand that their livelihoods are at risk. By convening AI experts, policymakers, journalists, and other key stakeholders, we can connect leaders who must work together to get this right for America, and humanity. We have no choice but to build a community committed to responsible AI advancement.
I am profoundly optimistic about AI's potential to improve our lives. And yet, alongside its potential benefits, AGI will introduce serious and dangerous problems that we will all need to work together to solve. The advanced AI revolution will be far more consequential than anything in history. Daily activities for everyday Americans will be forever changed. AGI will impact the economy, national security, and the understanding of consciousness itself. Google is already hiring for a 'post-AGI' world where AI is smarter than the smartest human being in all cognitive tasks.

It is critical that the U.S. maintains its technological leadership while ensuring AI systems align with human values and American principles. Without safeguards, we risk a future in which the most powerful technology ever created could threaten human liberty and prosperity.

This is about asking fundamental questions: What role should AI play in society? What are the trade-offs we need to consider? What limits should we place on autonomous systems? Finding the answers to these questions requires broad public engagement — not just from Big Tech, but from every single American.

Advanced AI models generate up to 50 times more CO₂ emissions than more common LLMs when answering the same questions

Yahoo

13 hours ago

  • Science
  • Yahoo

The more accurate we try to make AI models, the bigger their carbon footprint — with some prompts producing up to 50 times more carbon dioxide emissions than others, a new study has revealed.

Reasoning models, such as Anthropic's Claude, OpenAI's o3 and DeepSeek's R1, are specialized large language models (LLMs) that dedicate more time and computing power to producing more accurate responses than their predecessors. Yet, aside from some impressive results, these models have been shown to face severe limitations in their ability to crack complex problems. Now, a team of researchers has highlighted another constraint on the models' performance: their exorbitant carbon footprint. They published their findings June 19 in the journal Frontiers in Communication.

"The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions," study first author Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences in Germany, said in a statement. "We found that reasoning-enabled models produced up to 50 times more CO₂ emissions than concise response models."

To answer the prompts given to them, LLMs break up language into tokens — word chunks that are converted into a string of numbers before being fed into neural networks. These neural networks are tuned using training data that calculates the probabilities of certain patterns appearing. They then use these probabilities to generate responses. Reasoning models further attempt to boost accuracy using a process known as "chain-of-thought," a technique that breaks one complex problem down into smaller, more digestible intermediary steps that follow a logical flow, mimicking how humans might arrive at a conclusion to the same problem. However, these models have significantly higher energy demands than conventional LLMs, posing a potential economic bottleneck for companies and users wishing to deploy them. Yet, despite some research into the environmental impacts of growing AI adoption more generally, comparisons between the carbon footprints of different models remain relatively rare.

To examine the CO₂ emissions produced by different models, the scientists behind the new study asked 14 LLMs 1,000 questions across different topics. The models had between 7 billion and 72 billion parameters. The computations were performed using the Perun framework (which analyzes LLM performance and the energy it requires) on an NVIDIA A100 GPU. The team then converted energy usage into CO₂ by assuming each kilowatt-hour of energy produces 480 grams of CO₂.

Their results show that, on average, reasoning models generated 543.5 tokens per question, compared to just 37.7 tokens for more concise models. These extra tokens — amounting to more computations — meant that the more accurate reasoning models produced more CO₂. The most accurate model was the 72-billion-parameter Cogito model, which answered 84.9% of the benchmark questions correctly. Cogito released three times the CO₂ emissions of similarly sized models made to generate answers more concisely.

"Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies," said Dauner.
"None of the models that kept emissions below 500 grams of CO₂ equivalent [total greenhouse gases released] achieved higher than 80% accuracy on answering the 1,000 questions correctly." RELATED STORIES —Replika AI chatbot is sexually harassing users, including minors, new study claims —OpenAI's 'smartest' AI model was explicitly told to shut down — and it refused —AI benchmarking platform is helping top companies rig their model performances, study claims But the issues go beyond accuracy. Questions that needed longer reasoning times, like in algebra or philosophy, caused emissions to spike six times higher than straightforward look-up queries. The researchers' calculations also show that the emissions depended on the models that were chosen. To answer 60,000 questions, DeepSeek's 70 billion parameter R1 model would produce the CO₂ emitted by a round-trip flight between New York and London. Alibaba Cloud's 72 billion parameter Qwen 2.5 model, however, would be able to answer these with similar accuracy rates for a third of the emissions. The study's findings aren't definitive; emissions may vary depending on the hardware used and the energy grids used to supply their power, the researchers emphasized. But they should prompt AI users to think before they deploy the technology, the researchers noted. "If users know the exact CO₂ cost of their AI-generated outputs, such as casually turning themselves into an action figure, they might be more selective and thoughtful about when and how they use these technologies," Dauner said.
