Latest news with #AlphaEvolve

Yahoo
11-06-2025
- Business
- Yahoo
Sam Altman thinks AI will have 'novel insights' next year
In a new essay published Tuesday called "The Gentle Singularity," OpenAI CEO Sam Altman shared his latest vision for how AI will change the human experience over the next 15 years. The essay is a classic example of Altman's futurism: hyping up the promise of AGI — and arguing that his company is quite close to the feat — while simultaneously downplaying its arrival. The OpenAI CEO frequently publishes essays of this nature, cleanly laying out a future in which AGI disrupts our modern conception of work, energy, and the social contract. But often, Altman's essays contain hints about what OpenAI is working on next.

At one point in the essay, Altman claimed that next year, in 2026, the world will "likely see the arrival of [AI] systems that can figure out novel insights." While this is somewhat vague, OpenAI executives have recently indicated that the company is focused on getting AI models to come up with new, interesting ideas about the world. When announcing OpenAI's o3 and o4-mini AI reasoning models in April, co-founder and President Greg Brockman said these were the first models that scientists had used to generate new, helpful ideas.

Altman's blog post suggests that in the coming year, OpenAI itself may ramp up its efforts to develop AI that can generate novel insights. OpenAI certainly wouldn't be the only company focused on this effort — several of OpenAI's competitors have shifted their focus to training AI models that can help scientists come up with new hypotheses, and thus, novel discoveries about the world. In May, Google released a paper on AlphaEvolve, an AI coding agent that the company claims has generated novel approaches to complex math problems. FutureHouse, a startup backed by former Google CEO Eric Schmidt, claims its AI agent tool has made a genuine scientific discovery. In May, Anthropic launched a program to support scientific research. If successful, these companies could automate a key part of the scientific process, and potentially break into massive industries such as drug discovery, materials science, and other fields with science at their core.

This wouldn't be the first time Altman has tipped his hand about OpenAI's plans in a blog. In January, Altman wrote another blog post suggesting that 2025 would be the year of agents. His company then proceeded to drop its first three AI agents: Operator, Deep Research, and Codex.

But getting AI systems to generate novel insights may be harder than making them agentic. The broader scientific community remains somewhat skeptical of AI's ability to generate genuinely original insights. Earlier this year, Hugging Face's Chief Science Officer Thomas Wolf wrote an essay arguing that modern AI systems cannot ask great questions, which is key to any great scientific breakthrough. Kenneth Stanley, a former OpenAI research lead, also previously told TechCrunch that today's AI models cannot generate novel hypotheses. Stanley is now building out a team at Lila Sciences, a startup that raised $200 million to create an AI-powered laboratory specifically focused on getting AI models to come up with better hypotheses. This is a difficult problem, according to Stanley, because it involves giving AI models a sense for what is creative and interesting.

Whether OpenAI truly creates an AI model that is capable of producing novel insights remains to be seen. Still, Altman's essay may feature something familiar: a preview of where OpenAI is likely headed next.
This article originally appeared on TechCrunch.


Time of India
25-05-2025
- Business
- Time of India
Big in big tech: AI agents now code alongside developers
Big Tech is doubling down on AI-powered coding agents—intelligent tools that go beyond assisting developers to actively collaborating with them. This week, Microsoft, Google, and OpenAI rolled out major upgrades that mark a shift in how software is built. These agents don't just generate code—they fix bugs, add features, and increasingly understand developer intent. The result? Compressed timelines, reduced manual grunt work, and the beginning of a fundamental shift in how programming teams function. Investors see software development as a high-fit application for agentic AI, or autonomous agents that can plan, execute, and self-correct across tasks. Coding, they believe, may be the killer use case.

The week's biggest announcements:

- Microsoft: At its Build developer conference, Microsoft unveiled a new GitHub Copilot agent—a more proactive version of the AI tool that can now autonomously fix bugs and implement features. Instead of simply suggesting code snippets, the agent understands goals and acts on them.
- OpenAI: A week earlier, OpenAI introduced an upgraded version of its coding model Codex. The new agent is designed to handle multiple programming tasks in parallel—bringing multitasking capabilities to code generation.
- Google DeepMind: Released AlphaEvolve, an advanced coding agent capable of tackling mathematical and computational problems. The system doesn't just generate code—it validates solutions using automated evaluators, reducing errors and hallucinations.

Why this matters

Coding appears to be the breakout application for agentic AI. Unlike creative writing or visual generation, software can be tested immediately—a program either runs or it doesn't. This gives developers a clear feedback loop, allowing faster refinement (a minimal version of this loop is sketched after this article). However, these tools still struggle with subtle logic errors and hallucinations. As they generate more code, the risk of flawed output also grows. Still, the productivity gains are substantial.

The shift is global

AI now writes a third of Microsoft and Google's code, according to the companies. Indian startups are following suit. As reported by ET in April, AI agents are generating between 40–80% of code at some early- and growth-stage companies, using tools like ChatGPT, Claude, and Gemini. From prototypes to production systems, AI-written code is speeding up delivery cycles and changing how software teams operate—possibly forever.
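The feedback loop described under "Why this matters" can be made concrete with a short sketch. The Python below is a hypothetical, minimal generate-test-refine loop, not the actual interface of Copilot, Codex, or AlphaEvolve: `generate_code` stands in for any call to a code-generating model, and `test_command` for a project's test runner.

```python
import os
import subprocess
import tempfile

def run_tests(source: str, test_command: list[str]) -> tuple[bool, str]:
    """Write the candidate code to a temp file and run the test suite on it."""
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "candidate.py")
        with open(path, "w") as f:
            f.write(source)
        result = subprocess.run(
            test_command + [path], capture_output=True, text=True, timeout=60
        )
        return result.returncode == 0, result.stdout + result.stderr

def refine_until_green(task: str, generate_code, test_command: list[str],
                       max_rounds: int = 5):
    """Generate-test-refine loop: feed test failures back to the model.

    `generate_code(prompt)` is a placeholder for a code-generating model call;
    it is an assumption for this sketch, not a real API.
    """
    prompt = task
    for _ in range(max_rounds):
        candidate = generate_code(prompt)
        passed, log = run_tests(candidate, test_command)
        if passed:
            return candidate  # the program runs and its tests pass
        # The clear feedback loop: the failure log becomes the next prompt.
        prompt = f"{task}\n\nPrevious attempt failed these tests:\n{log}\n\nFix it."
    return None  # no passing candidate within the budget
```

The design point is that the test log, not a human reviewer, supplies the correction signal, which is why coding is such a natural fit for agentic AI.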


Time of India
25-05-2025
- Business
- Time of India
Google DeepMind CEO Demis Hassabis disagrees with company's co-founder Sergey Brin on this one thing: 'We thought it was...'
[Image: Left, Google DeepMind CEO Demis Hassabis; right, Google co-founder Sergey Brin]

Google DeepMind CEO Demis Hassabis holds a more cautious outlook on the arrival of artificial general intelligence (AGI) than the Alphabet-owned company's co-founder Sergey Brin. Currently, AGI's definition is contested, with some focusing on human-level competence across all domains and others on an AI's capacity to learn, adapt, and produce autonomous outputs beyond its training data.

Despite both having access to similar data and insights into AI development, Hassabis' perspective differs from Brin's. In a recent conversation on the New York Times' Hard Fork podcast, it was noted that Brin expects AGI to arrive before 2030, while Hassabis has predicted that it will happen just after 2030. This difference in forecasts raises questions about how two executives at the same company can reach different conclusions from the same information. Hassabis also stated that he is sticking to a timeline he has maintained since DeepMind was founded in 2010.

What Demis Hassabis has predicted about the arrival of AGI

Speaking on the NYT podcast, Hassabis said: 'We thought it was roughly a 20-year mission, and amazingly, we're on track. It's somewhere around there, I would think.' The prediction came after Brin jokingly accused Hassabis of 'sandbagging', which is intentionally downplaying timelines to later overdeliver. However, during the interview, Hassabis stood by his reasoning, pointing to the complexity of defining AGI itself.

'I have quite a high bar. It should be able to do all of the things that the human brain can do, even theoretically. And so that's a higher bar than, say, what the typical individual human could do, which is obviously very economically important,' Hassabis noted.

When asked whether AGI would emerge through gradual improvements or sudden breakthroughs, Hassabis said both approaches are 'likely necessary.' 'We push unbelievably hard on the scaling,' he explained, while also funding 'blue sky' research such as AlphaEvolve.

Last year, Anthropic CEO Dario Amodei predicted that AGI could arrive by 2026 or 2027, though he warned that unforeseen factors might delay its development. Other industry leaders share similar optimism: OpenAI CEO Sam Altman has suggested AGI could materialise during Trump's presidency, and Ark Invest's Cathie Wood has argued it could become a major engine of economic growth.


Geeky Gadgets
21-05-2025
- Science
- Geeky Gadgets
Alpha Evolve: The Self-Improving AI That's Breaking Boundaries
What if machines could not only learn but also teach themselves to become better with each iteration? This isn't the plot of a sci-fi movie—it's the reality unfolding in artificial intelligence research. Systems like Google DeepMind's Alpha Evolve are pioneering a new frontier in AI: recursive self-improvement, where machines refine their own capabilities without constant human intervention. From breaking decades-old computational records to optimizing global data centers, Alpha Evolve is proving that AI can not only solve problems but also reinvent the way solutions are created. Yet, as promising as this sounds, it raises a critical question: how far can machines go in self-improvement before they outpace human oversight?

AI Explained explores how Alpha Evolve's iterative learning process is reshaping fields like computational mathematics, hardware design, and energy efficiency. You'll discover how this system blends human ingenuity with machine precision to tackle challenges once thought insurmountable. But it's not all smooth sailing—Alpha Evolve's reliance on human-defined goals and its inability to independently identify new problems highlight the limits of today's AI. As we unpack the breakthroughs, limitations, and ethical considerations surrounding recursive AI systems, one thing becomes clear: the journey toward self-improving machines is as complex as it is fascinating.

How Alpha Evolve Works

Alpha Evolve operates through a recursive process of code refinement, which begins with human-submitted problems and predefined evaluation metrics. The system employs a combination of smaller, faster models like Gemini Flash and more advanced systems such as Gemini Pro. These models collaborate to optimize performance while maintaining computational efficiency, ensuring that resources are used effectively. A defining feature of Alpha Evolve is its evolutionary database, which stores successful prompts and solutions. This database allows the system to learn from past iterations, adapt to new challenges, and continuously improve its capabilities. By combining human creativity with machine-driven precision, Alpha Evolve bridges the gap between human ingenuity and computational power, creating a synergy that enhances problem-solving potential. A minimal sketch of this loop appears below.
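To ground the description above, here is a minimal, hypothetical sketch of an AlphaEvolve-style evolutionary loop. It is not DeepMind's code: `llm_propose` stands in for a call to a code-generating model (the article names Gemini Flash and Gemini Pro), and the scoring, sampling, and database-pruning policies are simplified assumptions.

```python
import random

def evolve(seed_program: str, llm_propose, evaluate,
           generations: int = 100, population: int = 20):
    """AlphaEvolve-style loop (simplified sketch, not DeepMind's implementation).

    llm_propose(parent) -> candidate program   (placeholder for an LLM call)
    evaluate(program)   -> numeric score from an automated evaluator
    """
    # Evolutionary database: (score, program) pairs from past iterations.
    database = [(evaluate(seed_program), seed_program)]

    for _ in range(generations):
        # Sample a parent, biased toward high scorers while keeping diversity.
        database.sort(key=lambda entry: entry[0], reverse=True)
        parent = random.choice(database[: max(3, len(database) // 4)])[1]

        # Ask the model to propose a refined version of the parent program.
        child = llm_propose(parent)

        # Automated evaluation grounds the loop: only measured progress counts.
        database.append((evaluate(child), child))

        # Prune the database so the strongest candidates seed future prompts.
        database.sort(key=lambda entry: entry[0], reverse=True)
        database = database[:population]

    return database[0]  # best (score, program) found
```

The ingredients mirror the paragraph above: models propose refinements, an automated evaluator scores them against the predefined metrics, and the evolutionary database feeds the best past solutions back into the next round.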
Key Achievements

Alpha Evolve has already demonstrated its potential through several notable accomplishments:

- Advancing Computational Research: The system achieved a record-breaking tensor decomposition for matrix multiplication, surpassing a 50-year-old algorithm. This breakthrough highlights its ability to push the boundaries of mathematical and computational research.
- Optimizing Data Centers: By improving Google's data center operations, Alpha Evolve recovered 0.7% of global compute resources. This optimization translates into substantial energy savings and increased efficiency across Google's infrastructure.
- Accelerating AI Development: The system contributed to the design of next-generation tensor processing units (TPUs), reducing training times for AI models and allowing faster development cycles.

These achievements underscore Alpha Evolve's capacity to drive innovation across diverse fields, from computational mathematics to industrial optimization.

[Video: What is Recursive Self-Improvement in AI? Alpha Evolve Explained]

Limitations and Challenges

Despite its impressive accomplishments, Alpha Evolve is not without limitations. Its reliance on human-defined problems and evaluation metrics restricts its autonomy, as it cannot independently identify or define new challenges. In fields like the natural sciences, where physical experiments are often required, the system's applicability remains limited. Additionally, while Alpha Evolve excels at optimizing existing processes, it lacks the ability to create entirely new systems or operate without human oversight.

These constraints emphasize the ongoing necessity of human involvement in AI development. Clear problem definitions and robust evaluation metrics are essential to maximize the system's effectiveness and ensure its outputs align with intended goals. Addressing these challenges will be critical to unlocking Alpha Evolve's full potential.

Future Directions for Alpha Evolve

Several areas of improvement could significantly enhance Alpha Evolve's capabilities and broaden its impact:

- Expanding Context Windows: Increasing the size of the evolutionary database to accommodate larger context windows—potentially up to 10 million tokens—would enable the system to tackle more complex and nuanced problems.
- Integrating Advanced Models: Incorporating next-generation LLMs, such as Gemini 3, could improve performance, versatility, and adaptability across a wider range of applications.
- Optimizing Search Algorithms: Refining the program generation processes could lead to faster and more accurate results, enhancing the system's efficiency.
- Improving Evaluation Metrics: Developing more sophisticated and domain-specific metrics would allow Alpha Evolve to address a broader spectrum of applications, from scientific research to industrial optimization.

These advancements would not only enhance Alpha Evolve's functionality but also expand its potential to influence various industries and scientific disciplines.

Broader Implications

Alpha Evolve's recursive approach to self-improvement has far-reaching implications for science and technology. By automating the refinement of solutions, it demonstrates how AI can drive innovation in areas such as computational mathematics, hardware design, and energy efficiency. Its success also highlights the growing importance of interpretability, debuggability, and predictability in mission-critical AI systems, ensuring that outputs are reliable and aligned with human objectives. This development reflects a broader shift in AI research priorities.
Traditional reinforcement learning methods are increasingly being complemented by iterative improvement approaches that emphasize adaptability and precision. This trend suggests a new direction for AI development, one that prioritizes continuous refinement over static optimization, paving the way for more dynamic and responsive systems.

Ethical and Competitive Considerations

Google DeepMind's commitment to ethical AI development is evident in its focus on applications that benefit humanity. By explicitly opposing the use of AI in warfare, the organization sets a standard for responsible innovation. However, the rapid pace of AI advancements raises critical questions about oversight, accountability, and equitable access to these technologies.

As systems like Alpha Evolve become more capable, balancing innovation with ethical considerations will be essential. Collaboration between researchers, policymakers, and industry leaders will play a pivotal role in ensuring that AI development aligns with societal values and priorities. Establishing clear guidelines and frameworks for responsible AI use will be crucial to navigating the challenges posed by increasingly autonomous systems.

The Path Forward

Alpha Evolve exemplifies the potential of recursive AI systems. Through iterative self-improvement, it has achieved breakthroughs in computational efficiency, hardware design, and applied sciences. While challenges remain, its development represents a significant step toward the realization of artificial general intelligence (AGI). As AI continues to evolve, systems like Alpha Evolve will shape the future of technology and its impact on society, offering new possibilities for innovation, progress, and the betterment of humanity.

Media Credit: AI Explained