Opinion: Make the Robot Your Colleague, Not Overlord


NDTV · 4 days ago

There's the Terminator school of perceiving artificial intelligence risks, in which we'll all be killed by our robot overlords. And then there's one where, if not friends exactly, the machines serve as valued colleagues. A Japanese tech researcher is arguing that our global AI safety approach hinges on reframing efforts to achieve this benign partnership.
In 2023, as the world was shaken by the release of ChatGPT, a pair of successive warnings came from Silicon Valley of existential threats from powerful AI tools. Elon Musk led a group of experts and industry executives in calling for a six-month pause in developing advanced systems until we figured out how to manage risks. Then hundreds of AI leaders - including Sam Altman of OpenAI and Demis Hassabis of Alphabet Inc.'s DeepMind - sent shockwaves with a statement that warned: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war."
Despite all the attention paid to the potentially catastrophic dangers, the years since have been marked by "accelerationists" largely drowning out the doomers. Companies and countries have raced toward being the first to achieve superhuman AI, brushing off the early calls to prioritize safety. And it has all left the public very confused.
But maybe we've been viewing this all wrong. Hiroshi Yamakawa, a prominent AI scholar from the University of Tokyo who has spent the past three decades researching the technology, is now arguing that the most promising route to a sustainable future is to let humans and AIs "live in symbiosis and flourish together, protecting each other's well-being and averting catastrophic risks."
Well, kumbaya.
Yamakawa hit a nerve because while he recognizes the threats noted in 2023, he argues for a workable path toward coexistence with super-intelligent machines - especially at a time when nobody is halting development over fears of falling behind. In other words, if we can't stop AI from becoming smarter than us, we're better off joining it as an equal partner. "Equality" is the sensitive part. Humans want to keep believing they are superior, not equal to the machines.
His statement has generated a lot of buzz in Japanese academic circles, attracting dozens of signatories so far, including some influential AI safety researchers overseas. In an interview with Nikkei Asia, he argued that cultural differences make Asia more likely to see machines as peers rather than adversaries. While the US has produced AI-inspired characters like the Terminator, the Japanese have envisioned friendlier companions like Astro Boy or Doraemon, he told the news outlet.
Beyond pop culture, there's some truth to this cultural embrace. In a global Ipsos survey last June, just 25% of Japanese respondents said products using AI make them nervous - the lowest share of any country surveyed, compared with 64% of Americans.
It's likely his comments will fall on deaf ears, though, like so many of the other AI risk warnings. Development has its own momentum. And whether the machines will ever get to a point where they could spur "civilization extinction" remains an extremely heated debate. It's fair to say that some of the industry's focus on far-off, science-fiction scenarios is meant to distract from the more immediate harm that the technology could bring - whether that's job displacement, allegations of copyright infringement or reneging on climate change goals.
Still, Yamakawa's proposal is a timely re-up on an AI safety debate that has languished in recent years. These discussions can't just rely on eyebrow-raising warnings and the absence of governance. With the exception of Europe, most jurisdictions have focused on loosening regulations in the hope of not falling behind. Policymakers can't afford to turn a blind eye until it's too late.
It also shows the need for safety research beyond just the companies trying to create and sell these products - a lesson from the social-media era, when platforms were obviously less incentivized to share their findings with the public. Governments and universities must prioritize independent analysis of large-scale AI risks.
Meanwhile, as the global tech industry has been caught up in a race to create computer systems that are smarter than humans, it's yet to be determined whether we'll ever get there. But setting godlike AI as the goalpost has created a lot of counter-productive fearmongering.
There might be merit in viewing these machines as colleagues and not overlords.


Related Articles

Elon Musk's Tesla Robotaxi service hits real roads—$4.20 rides begin in Austin with no drivers; but here's a catch

Time of India · 21 minutes ago

Tesla officially launched its long-anticipated driverless Robotaxi rides in Austin, Texas, on Sunday. The rollout is the company's first real-world trial of autonomous cars transporting paying passengers without anyone behind the wheel. The test is currently limited to a small section of the South Congress neighborhood. Each vehicle, a 2025 Model Y, is equipped with Tesla's updated Full Self-Driving (FSD) software and monitored by a Tesla employee seated in the front passenger seat. CEO Elon Musk posted that the launch was the result of 'a decade of hard work,' giving credit to Tesla's in-house software and AI chip teams. Though the Cybercabs Tesla unveiled in 2024 were not used, the Model Ys now feature a version of Tesla's so-called 'unsupervised' driving technology.

Only a handful of cars, limited operations

Currently, only ten vehicles are part of the trial, with operations restricted to the hours between 6 a.m. and midnight and paused during poor weather. The rides are priced at a flat $4.20, a likely nod to Musk's frequent internet references. Social media influencers were among the first to access the service through early invitations. Footage shared online shows the cars operating normally, though some videos capture abrupt braking, especially around law enforcement. Tesla has not clarified the extent of safety monitors' ability to intervene during rides. The company has also said in-ride monitoring is off by default unless an incident is reported.

Regulations, safety, and secrecy surround the launch

Ahead of the launch, Texas passed a new law governing autonomous vehicles. Effective September 1, the law requires AV operators to obtain state permits and meet Level 4 autonomy standards. It replaces a previous policy that prevented local authorities from regulating such vehicles; now, the Texas DMV can revoke permits if safety concerns arise. Governor Greg Abbott signed the legislation just before Tesla's Robotaxi rollout began.

Unlike competitors such as Waymo and Zoox, Tesla uses only cameras and neural networks, without lidar or radar, to power its system. While this approach may lower costs, some experts argue it introduces more risk. Carnegie Mellon's Philip Koopman described the launch as 'the end of the beginning,' highlighting that scaling such services citywide remains a challenge. Tesla has not disclosed many technical or operational details. In a letter to the Texas Attorney General, it declined to share specifics, citing trade secrets and confidential business data. As a result, most public knowledge has come from Musk's social media and promotional content. The service currently excludes riders under 18 and avoids difficult intersections or complex conditions. The company may suspend passengers who violate rules, such as smoking, drinking, or sharing ride footage that breaks guidelines. Cameras will scan the cabin after rides to ensure vehicles are ready for the next passenger.

Tesla's driverless ride service has finally hit real roads. But with tight controls, limited access, and lingering questions, the big test lies ahead.

Siddharth Pai: Meta is going all GPUs blazing to win the 'superintelligence' race

Mint · 22 minutes ago

Mark Zuckerberg is doing all he can to leapfrog Generative AI and develop machines that can 'think'. The challenge is of another order of magnitude, but the resources he's pouring into it mean he's in the race alright.

Meta's audacious pivot towards what it calls 'superintelligence' marks more than a renewal of its AI ambitions; it signals a philosophical recalibration. A few days ago, Meta unveiled a nearly $15 billion campaign to chase a future beyond conventional AI, an initiative that has seen the recruitment of Scale AI's prodigy founder Alexandr Wang and the launch of a dedicated 'superintelligence' lab under the CEO's own gaze. This is not merely an attempt to catch up; it is a strategic gambit to leapfrog competitors like OpenAI, Google DeepMind, Anthropic and xAI. Currently, Meta's AI offerings, its Llama family, primarily reside within the predictive and Generative AI paradigm. These systems excel at forecasting text sequences or generating images and dialogue, but they lack the structural scaffolding required for reasoning, planning and understanding the physical world.

Meta's chief AI scientist Yann LeCun has been eloquent on this front, arguing in a 2024 Financial Times interview that large language models, while powerful, are fundamentally constrained: they grasp patterns but not underlying logic, memory or causal inference. For LeCun and his team, superintelligence denotes AI that transcends such limitations and is capable of building internal world models and achieving reasoning comparable to, or exceeding, human cognition. This definition distances itself sharply from today's predictive AI, which statistically extrapolates from patterns, as well as from GenAI, which crafts plausible outputs such as text or images. Superintelligence, by contrast, aspires to general-purpose cognitive ability. Unsiloed and flexible, it will be able to plan hierarchically and form persistent internal representations.

Meta is not alone in this quest. Ilya Sutskever, the former chief scientist at OpenAI who believes powerful AI could harm humanity, has co-founded Safe Superintelligence. It has no plans to release products, but its stated mission is to build superintelligence and release the technology once it has been proven to be completely safe.

Meta has established a cadre of roughly 50 elite researchers, luring them with huge compensation packages, to work with Scale AI to create a vertically integrated stack of data labelling, model alignment and deployment. Zuckerberg's combative leadership style, marked by intense micromanagement and 24/7 messaging, hints at both the urgency and the stakes. In comparison with rivals, Meta lags on the AI developmental curve. Its Llama-4 release has faced delays and scrutiny, while its competitors have sped ahead: OpenAI moved quickly to GPT-4 and Google countered with Gemini-based multimodal agents.

Nevertheless, Meta brings distinctive assets to the table: its social graph, an enormous user base, its sprawling compute resources, which include hundreds of thousands of Nvidia H100 GPUs, and a renewed impetus underpinned by its Scale AI partnership. Yet beyond the material strength of its stack lies the more profound question: can Meta, with its social media heritage, really deliver on superintelligence? LeCun muses that a decade may pass before systems capable of hierarchical reasoning, sustained memory and world modelling come to fruition. Meta's pursuit is an investment in a bold vision as much as in engineering muscle.

The differences between predictive, generative and superintelligent systems are consequential. An AI tool that merely predicts or synthesizes text operates within a bounded comfort zone, finding patterns, optimizing loss and generating output. However, a superintelligent AI must contend with the open-ended unpredictability of real-world tasks: reasoning across contexts, planning with foresight and adapting to novel situations. It requires an architecture qualitatively different from pattern matching. In this sense, Meta is not joining the arms race to outdo competitors on generative benchmarks. Instead, it aims to leapfrog that race for a big stake in a future where AI systems begin to think, plan, learn and remember far better. The risk is high: billions of dollars are invested, talent battles are underway and there is no guarantee that such advancements will fully materialize. Critics note that AI today fails at some straightforward tasks that any competent Class 10 school student would pass with ease. But Meta views this as a strategic inflection point. Zuckerberg is personally setting the pace, scouting for top minds and restructuring teams to align with his lofty ambition.

If Meta can transition from crafting better chatbots to instilling AI with coherent, persistent models of the world, it just might recalibrate the AI hierarchy entirely. Whether this would mark Meta's renaissance remains to be seen. Yet the narrative shift is unmistakable. Where once Meta chased generative prowess, it now envisions cognitive machines that supposedly actually 'think.' The challenge lies not only in engineering capability, but in philosophical restraint. Superintelligent systems demand new ethics, not just new math. If Meta achieves its goal, it will not merely change AI; it will redefine our expectations of intelligence itself. In this quest, the company must navigate both technical intricacies and the social repercussions of creating minds that learn, adapt and may surpass us. Whether such minds can be safely steered is a question that no GPU cluster can answer definitively.

The author is co-founder of Siana Capital, a venture fund manager.

After admitting he uses ChatGPT, Narayana Murthy says management and technology graduates are same for him

India Today · an hour ago

As artificial intelligence continues to reshape the job landscape, confusion over which field is more relevant - technical or management - continues to linger in the minds of young graduates. While some argue that AI is driven by technical skills and therefore holds dominance, others lean towards the importance of management skills in creating a collaborative workspace between humans and AI. However, Infosys co-founder N.R. Narayana Murthy rejects this divide altogether. In his view, both fields are equally important and relevant in navigating the AI-led era.

In a recent interview with Moneycontrol, the 78-year-old software industry titan said he sees no meaningful distinction between the two educational streams. He argues that both fields simply approach problems from different angles. 'I do not see any difference between a management graduate and a technology graduate because they attack the problem at different levels,' Murthy said. 'One asks "what," while the other focuses on "how".'

Murthy also disagreed with the idea that AI is a threat to human jobs in the future. He believes AI is a tool that can significantly boost human productivity. 'It is all about improving productivity. It is all about solving problems that are beyond human effort,' he added. Sharing his own experience with AI, he revealed that ever since he started using ChatGPT to prepare lectures, the chatbot has significantly improved his productivity. What once took him up to 30 hours, he can now finish in just five. 'I improved my own productivity by as much as five times,' he noted, emphasising how AI can act as an assistive agent, not a replacement.

Murthy believes that AI will elevate, not eliminate, the role of the human worker. Instead of mass job losses, he anticipates that AI will bring about transformation and more jobs based on evolving skill sets. 'Everybody said when computers came to the banking sector, jobs would go away. But jobs have multiplied by a factor of 40 to 50,' he noted. Along the same lines, he suggests that AI will help people become smarter and work smarter. 'Our programmers and analysts will become smarter and smarter... They will solve bigger problems, more complex problems.'

What will change, however, according to Murthy, is the kind of thinking that will be required. He believes future professionals will need to become sharper at defining problems and crafting better, more complex questions. 'The smartness is in asking the right question,' he said. According to him, the true value of human input in jobs will lie not in routine execution, but in strategic thinking and creative problem-solving.
