Latest news with #AlphaGo


Mint
a day ago
- Business
- Mint
Microsoft and OpenAI forged a close bond. Why it's now too big to last.
Once tied at the hip, Microsoft and OpenAI increasingly look like rivals seeking an amicable divorce. But like all separations, it could get messy, and this past week OpenAI indicated it's willing to get down in the mud.

When Microsoft and OpenAI first got together in 2019, the most powerful artificial intelligence in the world was literally playing games. AlphaGo from Google's DeepMind lab was the first machine to beat human Go champions, but that's all it did. AI as we know it now was still in its research phase. Venture capital's focus was on cloud and cryptocurrency start-ups, but Microsoft saw something in the nonprofit AI lab called OpenAI, which had just come off a bruising leadership battle that saw Sam Altman prevail over Elon Musk. Without Musk's billions of dollars, OpenAI changed to a bespoke structure in which a for-profit AI lab is controlled by a nonprofit board. Investors' returns were capped at 100 times their stake.

The reorganization cleared the way for Microsoft to invest $1 billion in OpenAI in 2019. Those funds fueled the release of ChatGPT in November 2022—the spark to the AI prairie fire that is still spreading. Soon thereafter, Microsoft invested another $10 billion, which supported OpenAI's rapid expansion. Since then, the bills have added up, given the high cost of scaling AI.

At first the two companies were symbiotic. All of OpenAI's AI computing is done on Microsoft's Azure cloud. Microsoft has access to all of OpenAI's intellectual property, including its catalog of models that underpin a range of AI services Microsoft offers with its Copilot products. When the OpenAI nonprofit board ousted Altman in a November 2023 coup, Microsoft CEO Satya Nadella backed Altman, a key endorsement that helped restore his post.

But the partnership that made so much sense from 2019 to 2023 has now made each company too dependent on the other. OpenAI has large ambitions, and Sam Altman believes it will need unprecedented computing power to get there, more than Microsoft can provide. He would also like more control over the data-center buildout. Altman's company also has increasingly go-it-alone ambitions—it says subscriptions and licenses to ChatGPT are on track to bring in $10 billion a year. For its part, Microsoft now relies on OpenAI as both a major customer and supplier. That's the kind of concentration risk that should make Microsoft executives nervous.

"OpenAI has become a significant new competitor in the technology industry," Microsoft President Brad Smith said in a February 2024 blog post. This was the first public indication that the relationship may not have been as cozy as some supposed. Microsoft began working on its own AI models that year, and in October 2024, it declined to participate in a $6.6 billion OpenAI funding round.

In January, Microsoft and OpenAI modified their agreement so that Microsoft would no longer be OpenAI's exclusive cloud provider, but would retain a right of first refusal for all new business. Microsoft hasn't been exercising that right to any large degree—OpenAI subsequently signed new cloud deals with CoreWeave and Alphabet's Google Cloud, two Microsoft competitors.

The same January day as the deal modification, Altman stood in the Oval Office with President Donald Trump, Oracle Chairman Larry Ellison, and SoftBank Group CEO Masayoshi Son to announce Project Stargate, an ambitious plan to raise $500 billion for a massive cluster of AI data centers controlled by Altman.
The partnership and high-profile event made clear that OpenAI had new friends and had moved beyond its Microsoft reliance. The partnership on display in the Oval Office led to a $40 billion March funding round, led by SoftBank. But it came with a string attached: $20 billion of it is contingent on OpenAI doing another reorganization into a public-benefit corporation by the end of the year, which would give SoftBank and other new investors more conventional investor rights.

There are key hurdles in the way of that restructuring and the $20 billion, including a lawsuit from Elon Musk and regulatory approvals from California, Delaware, and the federal government. But the biggest obstruction is that Microsoft has a large stake in the current OpenAI. To convert corporate structures, OpenAI will have to negotiate new terms, and in a ticking-clock scenario like this, Microsoft has all the leverage, which grows each day. According to The Wall Street Journal, negotiations are getting testy.

The main point of contention is how much of the new OpenAI Microsoft will own. But there is also the matter of OpenAI's acquisition of an advanced AI coding tool, Windsurf. Under their current arrangement, Microsoft has access to all of OpenAI's IP, and that would include Windsurf. But OpenAI doesn't want this, because Microsoft has its own coding assistant, GitHub Copilot, and this puts the companies on another axis of competition.

In a joint statement, Microsoft and OpenAI told Barron's: "We have a long-term, productive partnership that has delivered amazing AI tools for everyone. Talks are ongoing and we are optimistic we will continue to build together for years to come."

According to the Journal, OpenAI thinks it could deter Microsoft from dragging out negotiations by keeping open the possibility of publicly accusing Microsoft of antitrust violations and lobbying the White House to open an investigation. Since the Stargate announcement, Altman has had a close relationship with Trump. In this regard, the Journal article is a message from OpenAI: We aren't powerless here.

This is how the divorce could get ugly. Microsoft could slow-walk the talks, and as the end of the year approaches, the pressure would grow on OpenAI to settle, or lose $20 billion in funding. OpenAI, meanwhile, could start pushing on its White House levers to encourage some type of Microsoft investigation—what the WSJ called its "nuclear option." But like any nuclear exchange, no one would emerge victorious. Microsoft would be tarred, and OpenAI would still miss its $20 billion deadline.

Since the launch of ChatGPT, AI in the U.S. has been dominated by the Microsoft-OpenAI alliance. The now inevitable breakup has everyone scrambling to fill the void.


The Wire
09-06-2025
- Politics
- The Wire
The Liberal Arts University in the Age of AI and ‘Activism'
Ajay Skaria

Spending some time on Ashoka University founder Sanjeev Bikhchandani's recent article may be helpful. First, it provides a concrete illustration of how generative AI can inhibit the capacity for critical thinking. Second, it helps us think about what we as teachers and citizens can do to guide those whose capacity for critical analysis has been weakened by AI.

'Ships in the Dark', a 1927 painting by Paul Klee. Photo: Wikipedia.

Last summer, while in Kerala, I happened to read Benjamín Labatut's The Maniac. I was drawn to it because, two years earlier, a very dear friend had gifted me his previous book, When We Cease to Understand the World. Like that book, The Maniac is difficult to classify. It is fiction but draws so heavily on historical events that to call it fiction seems a bit of a stretch, though it would be even more of a stretch to call it anything else. So let's resort to the copout of just calling it a book.

Benjamín Labatut's 'The Maniac'.

Though one might say it focuses primarily on the life and afterlife of John von Neumann, in the process The Maniac also traces the rise of artificial intelligence. Given the themes of his previous book, I have no doubt that Labatut is acutely aware of the social, political, and ecological implications of AI, but in The Maniac he focuses principally on its intellectual aspect; I shall be doing the same here (though, as we shall see, this may not really be possible).

One of the book's most compelling chapters comes toward the end, when the world champion at Go, Lee Sedol, plays against an artificial intelligence program created by Google, AlphaGo, and loses. Go is an infinitely more complicated game than chess, and the kind of brute computational power that made possible the early unbeatable programs in chess would not have succeeded here. AlphaGo's creators built it instead on a more supple form of artificial intelligence, based on 'self-play and reinforcement learning, which meant that, in essence, it had taught itself how to play.'

In The Maniac, Sedol describes his feeling after one of the moves: 'I thought AlphaGo was based on probability calculation and it was merely a machine. But when I saw this move it changed my mind. Surely AlphaGo is creative. This move made me think about Go in a new light. What does creativity mean in Go? It was not just a good, or great, or a powerful move. It was meaningful.' And then, in another game, Sedol makes a similar move, resulting in the only game he wrests from AlphaGo. Labatut writes: 'Facing each other, Lee and the computer had managed to stray beyond the limits of Go, casting a new and terrible beauty, a logic more powerful than reason that will send ripples far and wide.'

Labatut's book, and that episode in particular, came to mind as an achingly poignant counterpoint while I read the response from one of Ashoka University's founders, Sanjeev Bikhchandani, to a former student from the University who wrote to him protesting the administration's silence on the hounding and eventual arrest on sedition charges of Professor Ali Khan Mahmudabad for his very reflective Facebook post.
(The Supreme Court decision that released him on bail also has some astoundingly weak reasoning.) Bikhchandani vigorously defends the university's inaction over Mahmudabad's arrest and harassment, reproducing at length in the process an answer that Google AI generated for him when he asked it the question, 'Are all liberal arts universities activist in nature?' Bikhchandani says that he agrees with the AI's answer, and goes on to buttress it (to his mind) with additional points.

I do not want to spend too much time on the specific arguments that Bikhchandani makes. The relatively valid ones are also very obvious ones: educational institutions cannot easily take strongly oppositional positions, especially against authoritarian regimes that do not follow or that systematically weaponise the rule of law; university administrations deal with regulators not only through public statements but also through institutional channels less in the public eye, and these administrations have only limited leeway in defying regulators; universities need money to function, and raising money for Ashoka is not easy. At a time when the attack on Ali Khan Mahmudabad is likely 'an excuse to corner and target Ashoka University,' I can certainly understand its administrators' and trustees' wanting to proceed cautiously.

Two matters, however, vitiate these valid arguments. For one, there is all that his letter gets wrong, or glosses over. To briefly respond: as Priya Ramani notes, Bikhchandani's recollection that the college he studied in, St. Stephen's, was free of activism during his time there is quite wrong. And the contrast he seems to be venturing between academic freedom and free speech is quite muddled, as would be clear to those more familiar with the relation and distinction, laid out very nicely in an essay by Adam Sitze. Besides, even his insistence that Mahmudabad was not engaged in academic freedom since he was engaged in speech outside academic venues does not quite hold up when we remember that Ashoka University's trustees and administration failed to defend Sabyasachi Das after the publication of his scholarly article on democratic backsliding in Indian elections.

But even more worrisome is the second issue – the banality of Bikhchandani's arguments. This is a meta-issue, so to speak. The banal is worse than the wrong because the banal is also the abandonment of reflectiveness; it is the subversion of the ability to think critically about right and wrong because what is attenuated here is the ability to make meaningful distinctions about right and wrong. Indeed, if Labatut's book is an exploration of how AI might allow us – us, humans – to reach new artistic and intellectual and critical levels, Bikhchandani's article is a perfect example of the dulling of the critical – I deliberately do not say intellectual – faculties that will affect most people who allow their analytical capacities to be controlled by AI.

Spending some more time on Bikhchandani's article may be helpful in two ways. First, it provides a concrete illustration of how generative AI can inhibit the capacity for critical thinking. Second, it helps us think about what we as teachers and citizens can do to guide those whose capacity for critical analysis has been weakened by AI.
§

Over the last three or so years, I have increasingly been incorporating into my courses a distinctive sort of assignment – one where students use the AI involved in Large Language Models or LLMs (most stick to ChatGPT) to generate an answer to questions based on course material, and then produce a revised and meta-reflective version of the answer, both modifying it and explaining what they changed and why.

My students and I have found that when these models are fed social sciences-oriented questions (for now, let's just gloss those as questions where fidelity to facts is crucial) they get too many of their facts wrong. And when they are fed humanities-oriented questions (let's gloss those as questions where issues of meaning have to be probed), they are not so much wrong as banal, tending toward pabulum. Either way, what ChatGPT usually spits out is the kind of answer that in my classes would by itself be in the C range, or at most a B minus.

In their engagement with the AI answer, differences between students also become clear. For the most reflective, the ChatGPT answer becomes an occasion to review their own presumptions more critically. What often results is something more brilliant and insightful than would likely have resulted if they had answered the question directly, without the ChatGPT detour. But the less reflective students usually find themselves concurring with ChatGPT. Even if they add quite a few factual corrections, they find it difficult to do more than add a small caveat or two to the humanistic questions of meaning that frame ChatGPT's arguments. And the students in the low C range end up offering etiolated versions of even the ChatGPT answer.

How to help these weaker students develop a more thoughtful relation with generative AI is a matter that I continue to puzzle over. One thing that I have found somewhat helpful is asking students what groupthink might be embedded in the answers, and how and why they might want to take these answers apart.

Illustration: Yutong Liu & Kingston School of Art.

Bikhchandani's reply to the student remains at the level of an LLM AI answer, as he himself effectively declares, and at times it sinks to the level of an etiolated version of such an answer; it is a good illustration of the kind of paper which in my undergraduate classes would get at best a C plus. In his case, of course, the problem starts with the question itself. From what I have seen, to get LLM AI models such as Google AI or ChatGPT to produce a half-decent answer to such a banal and generic question would be well-nigh impossible.

One could, of course, come up with a more interesting answer to the question, but that would have to begin by reframing the question or probing its presumptions: asking more deeply what a liberal education is, asking what is glossed over in the term 'activist,' asking whether faculty and students and administration engage with 'activism' in necessarily different ways, and so on. But at present, at least, AI like Google's is incapable of that work. (To be clear, my remarks are only about the type of LLM AI Bikhchandani used: I have myself found, and others have, too, that when fed a delimited corpus and asked to generate answers on that basis, AI can be astonishingly good, and I can perhaps be persuaded that if I spent more time feeding generative AI the right material, I might experience an AlphaGo moment.)

Reframing Bikhchandani's question by critically parsing it would also be beginning to answer it.
To carry out that task with the care it deserves would take longer than is possible in the compass of a short piece. But since Bikhchandani seems to have at least some curiosity about these matters, maybe one owes it to him to provide briefly the protocols that may help him critically move beyond the simple-minded embrace of Google AI pabulum.

§

So, in that educative spirit, here goes: No, 'liberal arts universities' are not 'necessarily activist' in nature – on this matter, Google AI is quite correct. Paradoxically, however, this is for several nested reasons which are about the different forms of action and activism at work in the concept of the university.

1) The relation between action and activism is a complex one, and we often invoke 'activism' in intellectually lazy ways. To put it very schematically for now: action that challenges what are taken to be prevalent social norms, whether of the right or left, is more likely to be classed as activism.

2) Conceptually, what distinguishes the modern university is that it is, for those who aspire to abide by its principle, a place focused on education as an autonomous end, rather than merely a place for technical training – that is, merely a means to transmit already formulated knowledge. I know, of course, that this aspiration has never been realised, and has always been undercut in many ways: from its very inception, there have been social exclusions that have shaped access to it for both faculty and students; since the 1980s, there has also been the neoliberal subordination of autonomy to the rhetoric of 'excellence.' But the aspirational dimension of the university cannot be easily extinguished, and this dimension is arguably especially important for the historically marginalised as they articulate the terms of the dignity they have been long denied.

To treat education as an autonomous end means that we pursue both rational explanation and reflection as qualities in themselves. Such pursuit may lead to a departure from existing norms in a society or even within the university itself. This is why dissent is constitutive of the university as institution. And dissent even in speech means action. (Strikingly, Google AI recognises this more clearly than Bikhchandani. Its answer to him specifies that liberal arts education involves the 'development of critical thinking skills, not necessarily a commitment to activism.' Yes indeed, but this does not mean that activism is only 'a choice,' as Bikhchandani's sloppy regurgitation of Google AI's answer assumes.)

3) In other words, the modern university is founded on a distinction between thought/speech and action: the university is the place for thinking and speaking, and the world is the place for action. But that distinction is not an opposition. The terms bleed into each other: speech itself is an act, as is evident from the cases of Sabyasachi Das and Ali Khan Mahmudabad. Thought in this understanding has its meaning precisely because it is meant to inform and shape action, precisely because it is assumed that action without thought is unacceptable. The very commitment to action means that thinking must intervene in or at least speak up about action wherever the latter seems unthinking. Is this not activism? And should not university administrations defend this sort of 'activism' as the very principle of the university, even if on occasion, as when facing authoritarian regimes, administrations must choose their battles and defences strategically?
Illustration: Hanna Barakat & Archival Images of AI + AIxDESIGN.

4) So far I have addressed university education but not the adjective 'liberal.' In its application as an adjective to education, 'liberal' refers, as Wikipedia notes, to 'a system or course of education suitable for the cultivation of a free (Latin: liber) human being.' This sense of 'liberal' as an experience of being free – call it liberality – predates liberalism as an ideology which articulates a particular institutional order of freedom. And 'liberal education' has arguably retained more of that open-ended commitment to the idea of freedom, asking also what freedom is, than liberalism as ideology. It is precisely because of the destabilizing emphasis on 'free' that a liberal education tends by its very nature to be driven by a democratic spirit. But save in caricatural stoicisms, there is no such thing as a freedom of the mind that does not strive for a freedom in action. Would this not be another reason why a liberal education necessarily inclines toward not just action but activism? (This, though, is a very different matter from the claim that universities are 'necessarily activist,' as a shoddy – too quickly transitive – logic might assume.)

5) The focus here is not just on liberal education; it is on a liberal arts education. The phrase 'liberal arts' specifies a particular way of inhabiting the world – through critique. The crux of a liberal arts education as a concept is the combination of the sciences, centred as they are on explanation, expertise, and questions of 'what,' and the arts, centred as they are around reflection and questions of 'who.' Until just a few decades back, reflection was dominated by humanist reason, or a reason that made 'the human,' with all its constitutive exclusions, into the 'who.' Critical theory, increasingly prominent since the 1980s, represents an alternative tradition, one that is not humanist reason, but is not without reason. It emerges from the encounter, in friendship, of reasoning with the other and others it minoritises or places at its margins.

6) By at least the late 20th century, moreover, reflection had come to include also the capacity to reflect critically on reason and try to practice a responsibility to what reason excludes – surely this could be one way of describing what is distinctive about critical theory.

7) While it makes sense empirically to distinguish between liberal universities and technical universities (those that teach only professional skills, and nothing of the humanities and social sciences), that distinction has no conceptual purchase. Even technical universities, when they treat technical education as an autonomous end, cannot avoid the liberal commitment involved in the undecidable and open-ended sense of freedom.

8) There is indeed a sense in which universities as institutions should not take activist positions, as Bikhchandani avers. But that sense becomes very complicated when we are attentive to it. The principal reason that universities might eschew activism is to keep open the institutional and conceptual space for students and faculty to engage in critical thought and the action – 'activism'? – from which thought is, in any meaningful sense, inseparable. This opening up of a space for students and faculty through institutional neutrality is an implication of the University of Chicago's Kalven report. To complicate things further, this emphasis on institutional neutrality does not always work, as critics have pointed out.
For universities' own eschewal of activism remains tenuous: they must nourish in the wider societies of which they are part the capacity to engage in the critical thought that universities at their best embody. What happens when this nourishing of the capacity for critical thought is itself at odds with dominant or prevalent values in wider society? At that moment, should we say that a university pursuing its constitutive commitment to autonomous education has become activist?

I know there is more to be said about each of these points, and also that there are more points to be made. But for now, consider this as offering some provocations for lifting Bikhchandani's C-plus level Google AI/ChatGPT-type piece to the kind of analysis one would expect from somebody who has had a liberal education. I do very much hope that Bikhchandani will take this opportunity to cultivate a deeper-than-AI understanding of liberal education, and more broadly of education as an autonomous activity: it would be wonderful to have trustees and founders who have such an understanding, whether in India or in the US, where they have repeatedly failed to understand what university education is about. Luckily, if Bikhchandani decides to go this route, there are many brilliant teachers at Ashoka University, including Ali Khan Mahmudabad, who can guide him.

The gap between the AI answer to Bikhchandani's question and a more thoughtful reflection can also bring us back to what I have not been able to take up in this brief piece: the social and political dangers that AI poses. Where AI as an intellectual formation is dominant, I do not know how we can avoid the dominance of the banal in social and political life. True, this banality may be most evident in the LLM type of AI, but it arguably occurs in more insidious ways in every form of AI as we know them today. And this social and political violence is quite apart from the tremendous environmental violence of AI.

§

There remains one other matter to be taken up: why have Bikhchandani's critical capacities – again, not at all to be confused with intellectual capacities – been so affected as to make him incapable of going beyond a Google AI-level understanding of what a liberal arts education involves? My interest in this question does not centre on Bikhchandani individually or personally. For now at least, I am quite incurious about that. I am concerned more with the structural position he exemplifies.

For Bikhchandani is not an exception. Silicon Valley and the worlds of finance, industry, and advertising teem with intelligent neoliberals who display a similar incomprehension about liberal education, who formulate banal questions about education (and many other subjects), and who do not even recognize the banality of their questions or the answers they generate. What accounts for this pervasive Dunning-Kruger effect?

Put very schematically, it seems to me that a constitutive blindness is at work. Silicon Valley, as well as worlds such as those of finance or technology, deal primarily with issues that seem best addressed by a hypothetical imperative (that is to say, addressing issues that can be resolved in an 'if X, then Y' manner). Hypothetical imperatives, exemplified in instrumental reason, do not require persuasion or conversation. By contrast, as Gayatri Chakravorty Spivak famously notes, education involves the 'uncoercive rearrangement of desires.'
I would add two observations to Spivak's remark. First, it is not just education, but also democratic sociality itself (including the capitalist sociality embodied in consumerism and advertising) that involves an uncoercive rearrangement of desires. Second, what is distinctive about modern education, especially as it gets involved in the question of reflection, is that here the uncoercive rearrangement of desires proceeds through critique, which is also to say through a division of the self, or a constant autocritique of desires.

The act of loving another in their otherness is the other activity—the primary activity, really, of which education as an autonomous activity is but one privileged institutional form—in which the uncoercive rearrangement of desires proceeds through a division of the self. Precisely this insurmountable division of the self separates education, and love of the other in their otherness, from the uncoercive rearrangement of desires involved in consumerism, including most consumption of social media.

The current crisis in the legitimacy of 'higher education' (the sphere in which education is most often regarded as an autonomous activity), and the increasing claim that universities are overrun by 'activists,' are surely related to a transformation in the relations between these three phenomena—the hypothetical imperative, liberal education as an other-oriented uncoercive rearrangement of desires, and consumerism as a self-oriented uncoercive rearrangement of desires.

Illustration: Kathryn Conrad.

Until about the 1980s, it seems fair to say, the relation between the disciplines in the university and the hypothetical imperative was a cozy one. Thus, for much of the modern period, as Priya Satia notes, 'historians have not been critics but abettors of those in power'; the hypothetical imperative and liberal education seemed to condense in the same being—the one whom Frantz Fanon famously describes as 'the white man.' This was a time when it was possible to understand the hypothetical imperative as instrumental rationality, and liberal education as substantive or value rationality. This was a time when it was possible for many to hope that all would be well if only substantive rationality could control instrumental rationality, even as critics like Max Horkheimer and Theodor Adorno pointed to its impossibility. This was a time when it was commonplace to encounter the assertion that technology was not itself bad, and that what mattered was what 'man' did with it.

By the 1980s, however, the breakdown of that cozy relation was well under way. Two developments reinforced each other. On the one hand, neoliberalism emphasised the hypothetical imperative even more aggressively and had much less patience with the celebration of value rationality. What seemed much more attractive to this new order of the hypothetical imperative were the self-oriented forms of the uncoercive rearrangement of desires. Now, even for sympathetic neoliberals, liberal education can be affirmed only to the extent that it is a private pleasure, something carried out for one's private edification. Bikhchandani exemplifies this view in some of his remarks, as, for instance, in his yearning to be able to treat Ashoka as he would a private company.
As for the neoconservative populisms that are becoming increasingly powerful, they perhaps recognize more clearly than the neoliberal position that it may be difficult to contain the university in this way, given its conceptual premises; this is why they seek to destroy the university as we know it.

On the other hand, universities have seen the rise of various forms of critical theory, and the presence in much larger numbers of groups who had once been excluded from higher education. This has led to much less patience with value rationality, much more recognition of the fact that what was celebrated as value rationality was often the values of the dominant. At the same time, dissatisfaction with the world of the hypothetical imperative and its close twin, the self-oriented forms of the uncoercive rearrangement of desires, intensified. The university, and especially its students and faculty, increasingly emerged as the locus of the critique—a hesitant and often internally contradictory one, to be sure—of wider society to the extent that it was constituted by the hypothetical imperative and the self-centered uncoercive rearrangement of desires.

It is this enormous gap that makes liberal education as a concept incomprehensible to somebody like Bikhchandani. At most, as I noted, his neoliberal perspective can celebrate liberal education as a private good – never as a public one. To treat it as a public good would be to acknowledge and affirm its potential to remake society. This, neoliberalism – unlike liberalism – finds difficult to do. Indeed, Bikhchandani's incomprehension of the liberal university – his perception that the university has become a locus primarily of 'activism' – is more than anything a telling symptom of the attenuation of the critical tools with which to understand liberal education, or education as an autonomous activity.

Ajay Skaria teaches in the Department of History and Institute for Global Studies at the University of Minnesota. This essay draws in part on some arguments expanded at greater length in his essay 'Gaza and the Unsettling Equality of Academic Freedom,' which is forthcoming in Critical Times (8:1). This essay first appeared on the Critical Times blog 'In the Midst.'

New Indian Express
01-06-2025
- Science
- New Indian Express
Dark AI: The Black Hole
And in that moment, a quiet boundary dissolved. While xenobots mesmerise the scientific community, they've also reignited a global debate: what new frontiers—and dangers—are we agreeing to when we embrace emergent forms of AI?

Let's be clear: AI today is not sentient. It doesn't 'want' anything, doesn't dream, doesn't resent you for shouting at Alexa. But that's not the real concern. The anxiety around AI isn't about whether it will wake up and write poetry about its sad little server racks. The fear is about what happens when its power, speed, and optimisation capabilities outstrip human governance.

Delhi-based tech expert Shayak Majumder says, 'The primary concern isn't that machines will start thinking like humans, but that humans will stop thinking critically in a world shaped by AI assistants. I have always compared the advent of AI to the advent of the internet. Earlier there were concerns of jobs getting eaten up, but about two-three decades later, we have learned how to leverage the internet to our advantage. For now, we need to start getting adept with AI tools, to stay ahead of the curve. The "dark side" of AI lies not in its intelligence, but in how we choose to wield it, regulate it, and remain accountable for its impact.'

AI creating life, or AI going beyond its mandate to serve mankind, could bring us to the brink of extinction in myriad ways. When AlphaGo (Google DeepMind's AI) played Go against world champion Lee Sedol, it made a move (Move 37) that no human had ever thought of. AlphaGo's calculations indicated that the move had a mere 1 in 10,000 chance of being played by a human. It wasn't programmed specifically to make that move. It thought several moves ahead and invented strategies no one taught it. Researchers called it 'beautiful' and 'creative', and likened it to playing against a 'thinking entity'.

In a 2019 simulation, OpenAI trained simple AI agents to compete in hide-and-seek games. Without being programmed to, some agents invented tool use, such as pushing objects to block doors or building forts. They did it by inventing complex strategies not taught by humans. They adapted and outsmarted their rivals on their own.

In 2017, two AI chatbots, Bob and Alice, were designed to negotiate with each other. But very soon, they invented their own language, unintelligible to humans, to make negotiations more efficient. Significantly, they abandoned English because it was inefficient for them. They began optimising communication without human permission or understanding. Researchers shut the programme down because they couldn't control or predict it anymore.

Scientists at MIT and elsewhere are building neural networks that repair themselves when attacked or corrupted, without human instructions. Like living tissue healing itself, the network 'senses' failure and reorganises, thereby suggesting rudimentary self-preservation instincts: a building block of 'will'. This collective behaviour was seen in xenobots, which built cooperative groups and self-repaired wounds without an external brain or microchips. They acted as if they had goals.

The scary and fascinating part? Emergence doesn't ask permission. It just happens. Because the xenobots were not meant to think. But they moved as though they had decided to. They acted as though they had purpose. And that suggested something that made researchers and philosophers alike slightly queasy: that perhaps intelligence simply emerges.
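What "invented strategies no one taught it" means in practice can be illustrated with a toy, self-contained example. The sketch below is a hedged illustration only: it is not AlphaGo's architecture (which combined deep neural networks with Monte Carlo tree search), and the game, pile size, and learning constants are arbitrary choices made here for illustration. It shows the bare mechanism of self-play reinforcement learning: a program that is given only the rules and the win/loss outcome of games played against itself, and nothing else.

```python
# A minimal self-play reinforcement learning sketch on a toy game (Nim with a
# pile of 10 stones; each turn a player removes 1-3 stones; whoever takes the
# last stone wins). The program is never shown a single expert move: it learns
# only from the outcomes of games it plays against itself.
import random
from collections import defaultdict

Q = defaultdict(float)          # Q[(stones_left, move)] -> learned value of that move
ALPHA, EPSILON, GAMES = 0.1, 0.2, 50_000

def choose(stones, explore=True):
    """Pick a move from the current pile, mostly greedily, sometimes at random."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    if explore and random.random() < EPSILON:
        return random.choice(moves)                  # occasional exploration
    return max(moves, key=lambda m: Q[(stones, m)])  # otherwise best-known move

for _ in range(GAMES):
    stones, history = 10, []                         # history of (state, move), players alternating
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # The player who made the last move won; propagate +1/-1 backwards, alternating.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward                             # the previous move belonged to the loser

# Greedy policy learned purely from self-play, for each possible pile size.
print({s: choose(s, explore=False) for s in range(1, 11)})
```

Run long enough, the greedy policy typically settles on the classic winning rule for this game (leave your opponent a multiple of four stones), even though that rule appears nowhere in the code.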


Business Upturn
28-05-2025
- Business
- Business Upturn
Syneris Launches to Break Barriers in AI Infrastructure with Decentralized Compute Power
BIRKIRKARA, Malta, May 28, 2025 (GLOBE NEWSWIRE) — Syneris officially announces the launch of its full-stack Decentralized AI Infrastructure, aiming to transform the global AI development landscape by unlocking affordable AI development at scale. As demand for artificial intelligence continues to surge across sectors, Syneris steps in with a bold mission: to decentralize access to high-performance computing, enabling more builders, startups, researchers, and enterprises to create and deploy AI without the traditional limitations of cost, centralization, and technical gatekeeping.

'We believe the future of AI shouldn't belong to a handful of tech giants,' says the Syneris team. 'It should be open, collaborative, and powered by the people.'

A Global Problem Meets a Scalable Solution

In today's AI race, the high cost of computing remains a major bottleneck. Traditional GPU resources are increasingly monopolized by a handful of tech giants, making access to AI computing platforms prohibitively expensive for smaller teams and independent developers. Training advanced models like GPT-4 or AlphaGo can cost between $10–20 million, requiring thousands of high-performance GPUs. Ironically, more than 50% of global GPU capacity is sitting idle — locked away in personal devices, gaming rigs, and institutional hardware that's rarely optimized for AI workloads. At the same time:

– 85% of AI startups cite compute costs as a top barrier to model training and deployment.
– Cloud GPU prices have tripled over the past two years due to supply shortages and centralized control.
– Over 70% of global AI infrastructure is owned by fewer than five major tech corporations.

This level of centralization stifles innovation, restricts access, and deepens inequality in the AI ecosystem. It turns progress into a privilege of scale, not a function of talent or creativity. Syneris offers a better way. Our hybrid GPU computing network aggregates underused GPUs and CPUs from across the globe and transforms them into a Decentralized AI Infrastructure. This approach dramatically reduces cost while unlocking access to computing resources for the 99%. Contributors are rewarded with transparent, token-based incentives — creating a fair and self-sustaining ecosystem where computational power is not hoarded, but shared.

Built for Builders: AI Tools for All

At the heart of Syneris is a complete suite of tools for the AI development lifecycle. From code-free model creation to enterprise-grade deployment, the AI computing platform supports users of all technical backgrounds. Developers can build and test models with intelligent assistance, including real-time coding support and automated debugging tools. Non-developers can experiment with powerful no-code and low-code interfaces, crafting custom models using pre-built templates and visual workflows. Through its flagship product line, Syneris Generation AI, users can generate human-like content across text, images, video, and voice with minimal resource consumption. These tools open doors for applications in marketing, media, automation, education, and beyond — all part of a commitment to affordable AI development.

AI World: A Decentralized Marketplace for AI Intelligence

Syneris is not just an infrastructure provider — it is also a Decentralized AI Marketplace. The 'AI World' platform allows model creators to publish, monetize, and continuously improve their AI models.
Businesses can browse categorized libraries of AI solutions tailored to industry verticals, performance needs, and budget constraints. Transparent performance metrics, reviews, and demo options ensure reliability and reduce decision-making risk. This Decentralized AI Marketplace fosters open collaboration, allowing builders and users to connect, share feedback, and co-create higher-value solutions. All transactions are executed with Syneris tokens, ensuring seamless commerce within a secure digital economy.

Strategic Scaling Through Smart Integration

To ensure scalability from day one, Syneris has strategically integrated with leading GPU computing networks such as Aethir. This enables the AI computing platform to meet immediate computational demand while it concurrently develops its proprietary infrastructure. Over time, Syneris aims to reduce dependency on third-party systems and move toward full operational independence — without compromising on performance, scalability, or global reach. Looking ahead, the platform's long-term vision is to become a fully self-sustaining, community-owned Decentralized AI Infrastructure — empowering millions to access AI freely, without the need for permission or the burden of premium costs imposed by centralized gatekeepers.

Laying the Foundation for an Open AI Future

As artificial intelligence redefines the way societies function, there is a growing responsibility to ensure that the benefits of AI are broadly distributed — not concentrated in the hands of the few. Syneris recognizes this need and responds with a technically sophisticated yet community-first approach. It is not just enabling access to AI tools; it is reshaping the ownership model of AI infrastructure itself. Developers, GPU contributors, AI builders, and enterprises are invited to become part of the Syneris ecosystem — where intelligence is built together, not rented from the top.

Contact: Peter Miles, [email protected]

About Syneris

Syneris is a Decentralized AI Infrastructure and AI computing platform built to democratize access to machine intelligence. By connecting unused GPU and CPU resources into a global GPU computing network, Syneris provides scalable, affordable AI development and a dynamic Decentralized AI Marketplace, all underpinned by a contributor-driven token economy.

Disclaimer: This press release is provided by Syneris. The statements, views, and opinions expressed in this content are solely those of the content provider and do not necessarily reflect the views of this media platform or its publisher. We do not endorse, verify, or guarantee the accuracy, completeness, or reliability of any information presented. This content is for informational purposes only and should not be considered financial, investment, or trading advice. Investing in crypto and mining related opportunities involves significant risks, including the potential loss of capital. Readers are strongly encouraged to conduct their own research and consult with a qualified financial advisor before making any investment decisions. However, due to the inherently speculative nature of the blockchain sector – including cryptocurrency, NFTs, and mining – complete accuracy cannot always be guaranteed. Invest only with funds that you can afford to lose.
Neither the media platform nor the publisher shall be held responsible for any fraudulent activities, misrepresentations, or financial losses arising from the content of this press release. In the event of any legal claims or charges against this article, we accept no liability or responsibility.

Legal Disclaimer: This media platform provides the content of this article on an 'as-is' basis, without any warranties or representations of any kind, express or implied. We do not assume any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information presented herein. Any concerns, complaints, or copyright issues related to this article should be directed to the content provider mentioned above.

Disclaimer: The above press release comes to you under an arrangement with GlobeNewswire. Business Upturn takes no editorial responsibility for the same.


Time of India
15-05-2025
- Business
- Time of India
Former Google employee on AlphaEvolve: ‘Google's AI just made math discoveries NO human has'
Former Google employee Deedy Das recently shared an online post highlighting that the company's artificial intelligence has made math discoveries never achieved by humans. In the post, Das said Google's new AI agent – AlphaEvolve – resolved complex problems, including the optimal way to fit 11 and 12 hexagons into a larger hexagon — a challenge that has long remained unsolved. Another major breakthrough highlighted in the post is the improvement in matrix multiplication. The AI, he writes in the post, reduced the number of steps needed to multiply two 4x4 matrices from 49 to 48 — the first such improvement in 56 years.

Das termed AlphaEvolve the 'AlphaGo move 37 moment for math,' referencing a historic 2016 move by Google's AI during a professional Go match that stunned experts. In the post, Das wrote: 'Google's AI just made math discoveries NO human has! —Solved optimal packing of 11 and 12 hexagons in hexagons. —Reduced 4x4 matrix multiplication from 49 operations to 48 (first advance in 56 years!) and many more. AlphaEvolve is the AlphaGo "move 37" moment for math. Insane.'

What is AlphaEvolve

Google DeepMind recently announced AlphaEvolve – an evolutionary coding agent powered by large language models for general-purpose algorithm discovery and optimization. Announcing the new tool in a blog post, the company said 'AlphaEvolve pairs the creative problem-solving capabilities of our Gemini models with automated evaluators that verify answers, and uses an evolutionary framework to improve upon the most promising ideas.'

'AlphaEvolve enhanced the efficiency of Google's data centers, chip design and AI training processes — including training the large language models underlying AlphaEvolve itself. It has also helped design faster matrix multiplication algorithms and find new solutions to open mathematical problems, showing incredible promise for application across many areas,' the blog added.

Google CEO Sundar Pichai's post on AlphaEvolve

Announcing the new AI tool, Google CEO Sundar Pichai wrote: 'AlphaEvolve, our new Gemini-powered coding agent, can help engineers + researchers discover new algorithms and optimizations for open math + computer science problems. We've used it to improve the efficiency of our data centers (recovering 0.7% of our fleet-wide compute resources on average). We're also using it in chip design and to speed up Gemini's training, the very models underpinning AlphaEvolve itself — an exciting flywheel of progress!'
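For readers wondering where the 49-operation baseline comes from: Strassen's 1969 algorithm multiplies two 2x2 matrices with 7 scalar multiplications instead of 8, and applying it recursively to a 4x4 matrix viewed as a 2x2 grid of 2x2 blocks uses 7 × 7 = 49 multiplications. That is the 56-year-old record which AlphaEvolve's reported 48-multiplication scheme (stated by DeepMind for 4x4 complex-valued matrices) improves on. The sketch below illustrates only the classical Strassen baseline, not AlphaEvolve's new algorithm, which is not reproduced here.

```python
# Strassen's 2x2 scheme: 7 scalar multiplications instead of the naive 8.
# Applied recursively to a 4x4 matrix split into four 2x2 blocks, it needs
# 7 * 7 = 49 multiplications -- the baseline that AlphaEvolve reportedly
# reduced to 48.

def strassen_2x2(A, B):
    """Multiply 2x2 matrices A and B (nested lists) using 7 multiplications."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B

    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)

    return [
        [m1 + m4 - m5 + m7, m3 + m5],
        [m2 + m4,           m1 - m2 + m3 + m6],
    ]

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    # Matches the result of the ordinary 8-multiplication method.
    assert strassen_2x2(A, B) == [[19, 22], [43, 50]]
```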