OpenAI files reveal profit shift, leadership concerns, and safety failures in nonprofit AI organization

Express Tribune | 6 hours ago

A new investigative report titled The OpenAI Files, released by non-profit watchdogs The Midas Project and The Tech Oversight Project, reveals troubling insights into OpenAI's internal operations, leadership, and shifting priorities. The report, based on a year-long investigation, provides detailed documentation on how the organization's structure and goals have evolved since its founding in 2015.
Founded to democratize artificial intelligence research and prevent misuse, OpenAI began as a non-profit organization. However, despite this designation, it has developed a widely used paid product, ChatGPT, and has maintained a hybrid structure involving a for-profit subsidiary. In late 2024, OpenAI announced plans to shift toward full commercialization. The move faced significant backlash from co-founder Elon Musk, former employees, civil society groups, and competitors like Meta, leading to a reversal in May 2025 and a recommitment to non-profit governance.
The watchdog report outlines four core areas of concern: organizational restructuring, leadership, transparency and safety, and conflicts of interest. It criticizes OpenAI for quietly altering its original investor profit cap, initially set at a 100x return on investment. By 2023, OpenAI had changed the terms to let the cap grow by 20% annually, and by 2025 it was reportedly considering removing the cap entirely. The groups argue that these changes contradict OpenAI's founding mission to ensure that AGI (artificial general intelligence) benefits all of humanity.
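The report does not spell out the mechanics of the 20% increase, but a minimal sketch, assuming the increase compounds annually, shows how quickly such a cap escalates:

```python
# Minimal sketch (assumption: the 20% annual increase compounds) of how the
# investor profit cap described in the report would escalate over time.
def profit_cap(initial_cap: float, annual_growth: float, years: int) -> float:
    """Return the cap multiple after a given number of years of compound growth."""
    return initial_cap * (1 + annual_growth) ** years

for years in (1, 5, 10, 13):
    print(f"After {years:>2} year(s): {profit_cap(100, 0.20, years):,.0f}x")
# After  1 year(s): 120x
# After  5 year(s): 249x
# After 10 year(s): 619x
# After 13 year(s): 1,070x
```

On these assumptions, the original 100x cap would pass 1,000x in roughly 13 years.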
Concerns about CEO Sam Altman are also central to the report. Watchdog organizations cite past controversies involving Altman's alleged absenteeism, manipulative behavior, and staff resignations. Former senior OpenAI figures, including Dario Amodei and Ilya Sutskever, are said to have described his leadership style as abusive.
Further, the report alleges that OpenAI failed to allocate promised resources to a dedicated AI safety team and instead pressured employees to meet product deadlines while discouraging internal criticism and whistleblowing. It also highlights the company's use of strict NDAs that threatened employees with the loss of vested stock if they spoke out.
Additionally, several board members are reported to have financial interests in businesses that benefit from OpenAI's market position. CEO Altman has invested in multiple affiliated ventures, while Board Chair Bret Taylor and board member Adebayo Ogunlesi lead or fund companies that rely on OpenAI's technology. These ties, the watchdogs argue, may compromise the integrity of OpenAI's mission and decision-making.


Related Articles

Elon Musk reignites feud with Sam Altman after OpenAI controversy surfaces

Express Tribune | 4 hours ago

Elon Musk has once again directed public criticism toward OpenAI CEO Sam Altman, calling him 'Scam Altman' in a recent post on the social media platform X. The comment came shortly after the release of The OpenAI Files, a report raising concerns about OpenAI's governance, profit model, and safety practices. Musk framed his remark as a reaction to the revelations outlined in the report.

Musk and Altman, both prominent figures in the tech and artificial intelligence sectors, share a history as co-founders of OpenAI. Musk served on OpenAI's board from its founding in 2015 until stepping down in 2018. He has since criticized the company's evolution from a non-profit research lab to a 'capped-profit' model, arguing that the move contradicts OpenAI's original mission of promoting safe and open AI development.

In addition to their involvement in AI, both Musk and Altman have been vocal supporters of cryptocurrency, adding another dimension to their public personas and influence in the tech world. Musk, who leads Tesla, SpaceX, and X, has long promoted digital assets such as Bitcoin and Dogecoin. Tesla holds over $1 billion in Bitcoin, and Musk's public endorsements of Dogecoin have often moved its market price. Altman, similarly, has expressed support for Bitcoin, describing it as a critical technological step during a 2023 appearance on The Joe Rogan Experience. He also co-founded the Worldcoin project in 2019, with a focus on decentralized identity and finance.

Musk's recent criticism comes amid broader industry debates over the future of artificial intelligence. Centralized models, like those used by OpenAI, have been criticized for concentrating power and limiting transparency. Decentralized alternatives, often supported by crypto infrastructure, are being explored as a counterbalance.

AI and the environment

Express Tribune | 16 hours ago

The writer is an academic and researcher. He is also the author of Development, Poverty, and Power in Pakistan, available from Routledge.

For Gen X people like me, who are trying to get used to the new world of AI much as we learnt to use the computer, and then the Internet, many years ago, it is intriguing to see how AI is becoming integrated into our lives. For researchers like me, AI is making it easier to navigate Internet searches and to synthesise relevant literature. Beyond such basic applications, however, this evolving technology is going to play an increasingly prominent role in more salient aspects of our lives, ranging from healthcare and education to manufacturing, agriculture, and even warfare.

There are also legitimate reasons to be wary of AI's power. AI is making it much easier to spread disinformation, enable fraud, and make conflicts deadlier. Moreover, AI, like many other technologies we have become so dependent on in our consumerist world, from cars to cell phones, has significant environmental impacts. This heavy ecological footprint of AI is more concerning to me than speculation about AI dominating or replacing humans.

AI has a much larger environmental impact than many of the other innovations we now depend on, due to the exorbitant amount of energy needed to train and operate AI systems, and because of the e-waste produced by the hardware used to run AI. Training and operating large language models such as ChatGPT still depends on energy generated via fossil fuels, which is leading to more carbon emissions and increased global warming. Each ChatGPT question is estimated to use around 10 times more electricity than a traditional Google search. Producing and disposing of AI hardware also generates a lot of e-waste containing harmful chemicals. Running AI models needs a lot of water too, to cool the data centres which house massive servers, and to cool the thermoelectric or hydroelectric plants which supply electricity to those data centres.

The race to produce AI is also compelling major tech giants to walk back their earlier environmental pledges. Consider, for instance, the case of Google. A few years ago, Google set an ambitious target to address climate change by reaching 'net zero' emissions, but the company's emissions are now growing due to its bid to become a leader in AI.

As the AI industry continues to grow, its environmental impact will grow too. However, as with the ecological destruction caused by overconsumption of other products, the environmental impacts of AI will not be evenly distributed across regions or socio-economic classes. The benefits of AI will not be evenly spread either. Higher-income countries are better poised to capture economic value from AI because they already have better digital infrastructure, more AI development resources, and advanced data systems. Better-off households will be able to enjoy the benefits of AI while being more resilient in shielding themselves from its adverse impacts.

Conversely, the quest to produce more AI may cause exploitation in poorer countries that provide the critical resources it needs. This is not a speculative statement, but one based on ground realities. Consider, for instance, the dismal condition of miners, including children, in poor African countries like Congo, who are toiling away to produce the cobalt that powers the batteries in electric cars and our phones. AI will require many more of these critical resources, potentially leading to even more exploitation of people and natural environments in resource-rich but poor countries.

It is important to improve the energy efficiency of AI models and data centres, and to use renewable energy sources to power them. Moreover, it is also vital to promote more sustainable mining and manufacturing practices and to improve e-waste management so that fewer harmful chemicals enter the environment. However, whether these efforts will receive more attention than maximising profits in this highly unregulated new domain of human innovation remains to be seen.
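
As a back-of-envelope illustration of the electricity comparison cited above, here is a minimal sketch. The per-query figures are commonly cited outside estimates, not from this article (which only gives the roughly 10x ratio), and the query volume is hypothetical:

```python
# Back-of-envelope sketch of the "10x a Google search" electricity claim.
# Assumed figures (illustrative outside estimates, not from this article):
#   ~0.3 Wh per traditional Google search
#   ~3.0 Wh per ChatGPT query (roughly 10x)
GOOGLE_WH_PER_QUERY = 0.3
CHATGPT_WH_PER_QUERY = 3.0

def daily_energy_mwh(queries_per_day: float, wh_per_query: float) -> float:
    """Convert per-query watt-hours into total megawatt-hours per day."""
    return queries_per_day * wh_per_query / 1_000_000  # Wh -> MWh

# Hypothetical volume of 100 million queries a day:
queries = 100_000_000
print(f"Search:  {daily_energy_mwh(queries, GOOGLE_WH_PER_QUERY):,.0f} MWh/day")
print(f"Chatbot: {daily_energy_mwh(queries, CHATGPT_WH_PER_QUERY):,.0f} MWh/day")
# Search:  30 MWh/day
# Chatbot: 300 MWh/day
```

On these assumed figures, the same query volume served by a chatbot rather than a search engine would use an extra 270 MWh every day, before counting the one-off energy cost of training the model.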
