
Colleagues or overlords? The debate over AI bots has been raging but needn't

Mint

3 hours ago


There's the Terminator school of perceiving artificial intelligence (AI) risks, in which we'll all be killed by our robot overlords. And then there's one where the machines serve, if not as friends exactly, as valued colleagues. A Japanese tech researcher argues that our global approach to AI safety hinges on reframing efforts to achieve this benign partnership.

In 2023, as the world was shaken by the release of ChatGPT, two successive warnings came from Silicon Valley about existential threats from powerful AI tools. Elon Musk led a group of experts and industry executives in calling for a six-month pause in developing advanced systems until we figured out how to manage the risks. Then hundreds of AI leaders, including Sam Altman of OpenAI and Demis Hassabis of Alphabet's DeepMind, sent shockwaves with a statement that warned: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war."

Despite all the attention paid to these potentially catastrophic dangers, the years since have been marked by AI "accelerationists" largely drowning out AI doomers. Companies and countries have raced to be the first to achieve superhuman AI, brushing off the early calls to prioritise safety. And it has all left the public very confused.

But maybe we've been viewing this all wrong. Hiroshi Yamakawa, a prominent AI scholar at the University of Tokyo who has spent the past three decades studying the technology, now argues that the most promising route to a sustainable future is to let humans and AIs "live in symbiosis and flourish together, protecting each other's well-being and averting catastrophic risks."
Yamakawa has hit a nerve because, while he recognizes the threats flagged in 2023, he argues for a workable path toward coexistence with super-intelligent machines, especially at a time when nobody is halting development for fear of falling behind. In other words, if we can't stop AI from becoming smarter than us, we're better off joining it as an equal partner. "Equality" is the sensitive part: humans want to keep believing they are superior to machines, not equal to them.

His statement has generated a lot of buzz in Japanese academic circles, attracting dozens of signatories so far, including some influential AI safety researchers overseas. In an interview with Nikkei Asia, he argued that cultural differences make Asia more likely to see machines as peers rather than as adversaries. While the United States has produced AI-inspired characters like the Terminator of the eponymous Hollywood movie, the Japanese have envisioned friendlier companions like Astro Boy or Doraemon, he told the news outlet.

Beyond pop culture, there's some truth to this cultural embrace. At just 25%, Japan had the lowest share of respondents who say products using AI make them nervous, according to a global Ipsos survey last June, compared with 64% of Americans.

It's likely his comments will fall on deaf ears, though, like so many of the other AI risk warnings. Development has its own momentum. And whether the machines will ever reach a point where they could spur "civilization extinction" remains an extremely heated debate. It's fair to say that some of the industry's focus on far-off, science-fiction scenarios is meant to distract from the more immediate harms the technology could bring, whether that's job displacement, alleged copyright infringement or reneging on climate change goals. Still, Yamakawa's proposal is a timely revival of an AI safety debate that has languished in recent years.
These discussions can't rest on eyebrow-raising warnings and an absence of governance. With the exception of Europe, most jurisdictions have focused on loosening regulations in the hope of not falling behind. Policymakers can't afford to turn a blind eye until it's too late.

The moment also shows the need for safety research beyond the companies trying to create and sell these products, as in the social-media era, when platforms had obvious disincentives to share their findings with the public. Governments and universities must prioritise independent analysis of large-scale AI risks.

Meanwhile, as the global tech industry races to create computer systems smarter than humans, it remains to be seen whether we'll ever get there. But setting godlike AI as the goalpost has created a lot of counterproductive fear-mongering. There might be merit in seeing these machines as colleagues, not overlords.

©Bloomberg

The author is a Bloomberg Opinion columnist covering Asia tech.


BLACKPINK's Jennie named most popular female K-pop star in May; IVE's Jang Wonyoung and aespa's Karina follow in top 30

Pink Villa

18-05-2025


The Korean Business Research Institute has unveiled its latest individual brand reputation rankings for female K-pop idols, covering data collected from April 18 to May 18, 2025. This monthly index evaluates a star's impact across several key areas: consumer participation, media exposure, communication activity, and community engagement.

Jennie of BLACKPINK clinched the top spot for May, dominating the chart with an impressive brand reputation index of 9,479,200. That marks a massive 42.78% increase from the previous month, reflecting a surge in public interest following her Met Gala appearance and her recent guest appearances. Jennie's strongest brand keywords include "like JENNIE," "Met Gala," and "You Quiz on the Block," while positive descriptors such as "unrivaled," "confident," and "honest" featured prominently in her related search terms. The sentiment surrounding her activity remains overwhelmingly positive, with a positivity score exceeding 90%.

Holding steady at No. 2 is IVE's Jang Wonyoung, whose star power continues to shine with a brand index of 6,210,896. Despite only a slight 1.95% increase since April, her consistent presence on the charts reflects her sustained influence in both the fashion and entertainment scenes.

In third place, BLACKPINK's Rosé makes a significant climb, overtaking several competitors with a brand score of 5,496,021. Her 13.51% month-over-month increase may be attributed to renewed interest in her solo music appearances, fashion editorials, and consistent social media engagement.

aespa's Karina claims the No. 4 position, continuing her momentum with a brand reputation score of 4,951,586. The leader and visual of aespa remains a favorite for brand endorsements, and she is frequently discussed online for her stage charisma and unique AI-inspired concept, which keeps her group relevant in the evolving K-pop landscape.
Rounding out the top five is IVE's An Yu Jin, whose rising visibility as both a performer and an MC has helped boost her brand score to 4,864,346. The slight 4.91% rise from last month cements her as one of the most well-rounded fourth-generation idols, praised for her leadership, vocals, and polished variety-show skills.

The rest of the top 30 is packed with some of K-pop's most recognizable and emerging names. aespa's Winter, Red Velvet's Joy, IVE's Rei, Red Velvet's Seulgi and ITZY's Yuna all made the top 10. LE SSERAFIM's Kim Chaewon, TWICE's Sana, and ILLIT's Minju ranked just outside the top 10, followed by Red Velvet's Irene, OH MY GIRL's Mimi, and Red Velvet's Wendy. aespa's Giselle and Ningning also secured spots on the list, along with Red Velvet's Yeri and IVE's Leeseo. TWICE members Nayeon, Jihyo, and Jeongyeon continued to show a strong presence, while cignature's Jeewon and ITZY's Ryujin also made the rankings. BLACKPINK's Lisa, OH MY GIRL's YooA, and FIFTY FIFTY's Yewon maintained visibility, as did Girls' Generation's Taeyeon and LE SSERAFIM's Sakura.
