
Latest news with #Codex

How 'discounts' on ChatGPT software may be straining OpenAI's ties with Microsoft

Time of India

2 days ago



OpenAI is offering significant discounts on the enterprise version of ChatGPT, a move that has reportedly created tension with its major backer, Microsoft. These discounts, typically ranging from 10% to 20%, are offered to clients who sign multi-year contracts and bundle multiple OpenAI services, such as its API, the Deep Research agent, or the Codex coding assistant. The report says the discounts may put OpenAI in direct competition with Microsoft's own offerings, particularly its Azure OpenAI Service, which allows businesses to access OpenAI's models through Microsoft's cloud infrastructure. This potential overlap may be the source of the friction between the two AI powerhouses.

The strategy appears to be a direct bid to increase overall revenue and customer lock-in across OpenAI's diverse product portfolio. Notably, OpenAI is said to be eyeing an expanded customer base and a target of $15 billion in ChatGPT-related enterprise revenue by 2030. As of earlier this year, OpenAI reported $100 million in revenue from ChatGPT Enterprise and over three million paying business subscribers across its various ChatGPT plans. However, these discounted offerings compete directly with Microsoft's sales teams, particularly for its Copilot AI services, reportedly leading to increased tension in the partnership.

OpenAI's $3 billion acquisition has escalated tensions with Microsoft: Report

According to a report by The Wall Street Journal, the two companies are entangled in their most significant dispute to date, centred on OpenAI's recent $3 billion acquisition of coding firm Windsurf. The report also said that tensions have escalated to the point where OpenAI executives have discussed filing antitrust complaints against Microsoft. The core of the disagreement lies in Microsoft's existing access to all of OpenAI's intellectual property under their current partnership agreement.
OpenAI is now seeking to block Microsoft from accessing Windsurf's technology, particularly given that Microsoft offers its own competing AI coding product, GitHub Copilot. This acquisition has seemingly deepened existing cracks in the relationship, the report noted, adding that the growing conflict has prompted some OpenAI executives to internally deliberate accusations of anticompetitive practices against Microsoft.

Why companies implementing agentic AI before putting proper governance in place will end up behind, not ahead of, the curve

Fast Company

12-06-2025



Agentic AI is the buzzword of 2025. Although technically an 'emerging technology,' it feels like companies of all sizes are quickly developing and acquiring AI agents to stay ahead of the curve and the competition. Just last week, OpenAI launched a research preview of Codex, the company's cloud-based software engineering agent, which it calls its 'most capable AI coding agent yet.' And it's fair that people are interested and excited.

Transforming industries

From customer service to supply chain management and the legal profession, AI agents are set to transform industries across the board. They are already showing that they can be pervasive across both consumer and enterprise environments, bringing AI fully into the mainstream. Unlike chatbots and image generators, which provide answers but require prompts, AI agents execute multistep tasks on behalf of users. In 2025, these autonomous software programs will dramatically change how people interact with technology and how businesses operate. This aligns with Forrester's latest findings, which placed agentic AI at the top of its Top 10 Emerging Technologies for 2025, highlighting the power and potential of this emerging trend. However, as the report also points out, the rewards come with big risks and challenges. Let's dive into these, as well as why companies must prioritize governance before development and implementation in order to stay ahead of, not behind, the curve and their competition.

A Governance-First Approach

In just three years, at least 15% of day-to-day work decisions will be made autonomously by AI agents—up from virtually 0% in 2024. This prediction by Gartner, while promising, sits alongside another key stat: 25% of enterprise breaches will be linked to AI agent abuse. The rapid and widespread adoption of AI agents, while exciting, comes with complex challenges, such as shadow AI, which is why companies must prioritize a governance-first approach.
So what is it about AI agents that makes them particularly challenging to control? Short answer: their ability to operate autonomously. Long answer: this technology makes it difficult for organizations to have visibility over four things:

  • Who owns which agent
  • What department oversees them
  • What data they have access to
  • What actions the agent can take

How do you effectively govern them? A comprehensive approach

This is where unified governance can step in. With a comprehensive governance framework, companies can ensure that AI agents operate responsibly and are aligned with organizational standards and policies. The alternative: without a governance framework, AI agents can mishandle sensitive data, violate compliance regulations, and make decisions misaligned with business objectives.

Let's use a real-world example: you are the CEO of a major organization. Your company builds and introduces an AI-powered assistant to help automate workflows and save you time. Now imagine that the assistant gains access to your confidential files. Without guidance or governance, the assistant summarizes sensitive financial projections and closed-door board discussions and shares them with third-party vendors or unauthorized employees. This is a worst-case scenario, but it highlights the importance of a solid governance framework.

Here's a helpful governance checklist:

  • Establish guidelines that clearly define acceptable use and assign accountability.
  • Carry out regular reviews to help identify and mitigate potential risks and threats.
  • Appoint the right stakeholder to foster transparency and build trust in how AI agents are used internally and externally.

Blurred lines

According to Sunil Soares, Founder of YDC, "Agentic AI will drive the need for new governance approaches. As more applications include embedded AI and AI agents, the line between applications and AI use cases will become increasingly blurred." I couldn't agree more.
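The four visibility questions above can be captured in something as simple as an agent registry that every deployed agent must be entered into before it can act. A minimal sketch in Python; the record fields and agent names here are hypothetical illustrations, not taken from any particular governance product:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """One registry entry answering the four visibility questions."""
    agent_id: str
    owner: str                 # who owns which agent
    department: str            # what department oversees it
    data_scopes: set = field(default_factory=set)      # what data it may access
    allowed_actions: set = field(default_factory=set)  # what actions it may take

class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.agent_id] = record

    def can(self, agent_id: str, action: str, data_scope: str) -> bool:
        """An agent may act only within its registered actions and scopes."""
        rec = self._agents.get(agent_id)
        if rec is None:   # unregistered agents are shadow AI: deny by default
            return False
        return action in rec.allowed_actions and data_scope in rec.data_scopes

# Example: a finance assistant allowed only to summarize invoice data.
registry = AgentRegistry()
registry.register(AgentRecord(
    agent_id="workflow-assistant",
    owner="jane.doe",
    department="finance",
    data_scopes={"invoices"},
    allowed_actions={"summarize"},
))
```

The deny-by-default branch is the important design choice: an agent nobody registered (and therefore nobody owns or oversees) gets no access at all, which is exactly the shadow-AI failure mode the article warns about.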
Whether you develop AI agents internally or partner with a third-party vendor, this technology will unlock significant value. But the challenges are not one-size-fits-all, and they will not go away. And while the human element remains important, manual oversight on its own is neither sufficient nor realistic at scale. Therefore, when you build out your governance framework, ensure that you have automated monitoring tools in place that detect and correct policy violations, record decisions for greater transparency, and escalate the complex cases that require additional oversight, such as a human-in-the-loop. A centralized governance framework ensures accountability, risk assessment, and ethical compliance. Like everything else in life, you need to create and establish boundaries. And don't worry: implementing a governance framework first won't slow innovation down. When you find the right balance between innovation and risk management, you stay ahead of the curve and the competition, leaving room for more cutting-edge AI agents and fewer headaches. As a final prescription, deploy a unified governance platform for data and AI; it will be key to ensuring AI agents don't become the next shadow IT.

Google Jules vs OpenAI Codex : Which AI Tool Fits Your Development Needs?

Geeky Gadgets

12-06-2025



What if the future of coding wasn't about writing lines of code but about collaborating with an AI partner that understands your needs? Enter Google Jules and OpenAI Codex, two new AI-powered coding tools that are reshaping how developers approach software creation. While both promise to transform productivity and streamline workflows, they take distinctly different paths to get there. Google Jules thrives within the Google ecosystem, offering unparalleled integration with tools like Android Studio and Google Cloud. Meanwhile, OpenAI Codex leans on its versatility, excelling across a wide range of programming languages and development environments. The question is, which one aligns better with your goals, and what does this rivalry reveal about the future of programming?

In this exploration, GosuCoder dissects the unique strengths and limitations of these two AI titans. From natural language code generation to debugging prowess and ecosystem compatibility, we'll uncover how each tool caters to specific developer needs. Whether you're a Google-centric developer seeking seamless integration or a multi-platform coder craving flexibility, this comparative perspective will help you navigate the decision-making process. As you read, consider not just the tools themselves but the broader implications they hold for the evolution of AI in software development. After all, the choice between Google Jules and OpenAI Codex isn't just about tools; it's about how we envision the future of coding itself.

Programming Language Support and Versatility

Both Google Jules and OpenAI Codex are built to support a wide range of programming languages, making them versatile tools for developers. OpenAI Codex is particularly notable for its extensive language coverage, including popular options like Python, JavaScript, Ruby, and more. This broad compatibility makes it a strong choice for developers working across diverse platforms.
In contrast, Google Jules focuses on seamless integration with Google's ecosystem, excelling in areas such as Android development and Google Cloud services.

  • OpenAI Codex offers broad compatibility, making it ideal for projects involving multiple programming languages.
  • Google Jules provides a tailored experience for developers working within Google's ecosystem.

Your decision here depends on whether you prioritize general versatility or a platform-specific approach optimized for Google's tools.

Code Generation Capabilities

AI-powered code generation is a standout feature of both tools, but their approaches differ significantly. OpenAI Codex excels at generating functional code snippets from natural language prompts. By describing your requirements in plain English, you can receive accurate and functional code suggestions. Google Jules, on the other hand, emphasizes contextual code generation. Its deep integration with Google's services allows it to provide highly relevant and specific suggestions, particularly for projects involving Google APIs or cloud services.

  • OpenAI Codex is highly flexible, making it suitable for a wide range of coding tasks.
  • Google Jules delivers precise, context-specific assistance, especially for projects tied to Google's ecosystem.

Choosing between these tools depends on whether you value flexibility for general coding tasks or specificity for Google-related projects.

Code Debugging and Optimization

Debugging and optimization are critical aspects of software development, and both tools offer AI-driven solutions to address these challenges.
OpenAI Codex identifies errors in your code and suggests fixes, often providing explanations to help you understand the changes. Google Jules takes this a step further by integrating with Google's debugging tools, offering real-time insights and performance metrics that are particularly useful for cloud-based applications.

  • OpenAI Codex is a strong choice for general debugging tasks and error explanations.
  • Google Jules excels in optimizing cloud-based applications and providing detailed performance insights.

If your workflow involves extensive debugging or performance tuning, Google Jules may offer an advantage, especially for cloud-focused projects.

Development Environment Integration

Seamless integration with development environments is essential for maintaining productivity. OpenAI Codex integrates with widely used code editors like Visual Studio Code, allowing developers to access its features directly within their existing workflows. Google Jules, in contrast, is deeply embedded in Google's ecosystem, offering a cohesive experience with tools like Android Studio and Google Cloud Platform.

  • OpenAI Codex provides compatibility with a variety of popular development tools, making it versatile for diverse workflows.
  • Google Jules offers a unified experience for developers working within Google's ecosystem.

Your choice will depend on whether you prefer a tool that integrates with multiple environments or one that is tightly aligned with Google's platforms.

Productivity and the Software Development Process

Both tools aim to enhance productivity by automating repetitive tasks, reducing errors, and accelerating development cycles. OpenAI Codex allows developers to focus on higher-level problem-solving by handling routine coding tasks efficiently.
Google Jules, with its contextual assistance, is particularly effective for navigating complex projects that involve Google's technologies.

  • OpenAI Codex is ideal for developers seeking a general-purpose assistant to streamline coding tasks across various platforms.
  • Google Jules is better suited for projects deeply integrated with Google's tools and services.

The impact of these tools on your productivity will largely depend on your specific use case and development environment.

Limitations and Areas for Improvement

Despite their advanced capabilities, both tools have limitations that developers should consider. OpenAI Codex can occasionally generate incorrect or suboptimal code, requiring careful review and refinement. Google Jules, while highly effective within Google's ecosystem, may lack versatility for projects outside Google's platforms. Additionally, neither tool eliminates the need for human oversight, as AI-generated code can miss subtle nuances or introduce unexpected bugs.

  • OpenAI Codex users should be prepared to verify and refine its outputs to ensure accuracy.
  • Google Jules users may find its utility limited for non-Google platforms or technologies.

These limitations highlight the importance of using these tools as assistants rather than replacements for human expertise.

Choosing the Right Tool for Your Needs

Google Jules and OpenAI Codex represent significant advancements in AI-driven development tools, each offering unique strengths. OpenAI Codex stands out for its versatility and natural language processing capabilities, making it a strong choice for developers working across diverse platforms.
Google Jules, on the other hand, excels in contextual assistance and seamless integration with Google's ecosystem, making it ideal for projects centered on Google's technologies. Your choice between these tools should be guided by your specific requirements, such as the platforms you work with, the programming languages you use, and the level of integration you need. By understanding their features, strengths, and limitations, you can use these tools to enhance your productivity and streamline your software development process.

Media Credit: GosuCoder

Sam Altman thinks AI will have 'novel insights' next year

Yahoo

11-06-2025



In a new essay published Tuesday called "The Gentle Singularity," OpenAI CEO Sam Altman shared his latest vision for how AI will change the human experience over the next 15 years. The essay is a classic example of Altman's futurism: hyping up the promise of AGI — and arguing that his company is quite close to the feat — while simultaneously downplaying its arrival. The OpenAI CEO frequently publishes essays of this nature, cleanly laying out a future in which AGI disrupts our modern conception of work, energy, and the social contract. But often, Altman's essays contain hints about what OpenAI is working on next. At one point in the essay, Altman claimed that next year, in 2026, the world will "likely see the arrival of [AI] systems that can figure out novel insights." While this is somewhat vague, OpenAI executives have recently indicated that the company is focused on getting AI models to come up with new, interesting ideas about the world. When announcing OpenAI's o3 and o4-mini AI reasoning models in April, co-founder and President Greg Brockman said these were the first models that scientists had used to generate new, helpful ideas. Altman's blog post suggests that in the coming year, OpenAI itself may ramp up its efforts to develop AI that can generate novel insights. OpenAI certainly wouldn't be the only company focused on this effort — several of OpenAI's competitors have shifted their focus to training AI models that can help scientists come up with new hypotheses, and thus, novel discoveries about the world. In May, Google released a paper on AlphaEvolve, an AI coding agent that the company claims to have generated novel approaches to complex math problems. Another startup backed by former Google CEO Eric Schmidt, FutureHouse, claims its AI agent tool has been capable of making a genuine scientific discovery. In May, Anthropic launched a program to support scientific research. 
If successful, these companies could automate a key part of the scientific process and potentially break into massive industries such as drug discovery, materials science, and other fields with science at their core. This wouldn't be the first time Altman has tipped his hand about OpenAI's plans in a blog. In January, Altman wrote another blog post suggesting that 2025 would be the year of agents. His company then proceeded to release its first three AI agents: Operator, Deep Research, and Codex. But getting AI systems to generate novel insights may be harder than making them agentic. The broader scientific community remains somewhat skeptical of AI's ability to generate genuinely original insights. Earlier this year, Hugging Face's Chief Science Officer Thomas Wolf wrote an essay arguing that modern AI systems cannot ask great questions, which is key to any great scientific breakthrough. Kenneth Stanley, a former OpenAI research lead, also previously told TechCrunch that today's AI models cannot generate novel hypotheses. Stanley is now building out a team at Lila Sciences, a startup that raised $200 million to create an AI-powered laboratory specifically focused on getting AI models to come up with better hypotheses. This is a difficult problem, according to Stanley, because it involves giving AI models a sense of what is creative and interesting. Whether OpenAI truly creates an AI model capable of producing novel insights remains to be seen. Still, Altman's essay offers something familiar: a preview of where OpenAI is likely headed next. This article originally appeared on TechCrunch.

