Latest news with #InfocommMediaDevelopmentAuthority

Encountered a problematic response from an AI model? More standards and tests are needed, say researchers

CNBC

a day ago

  • Science
  • CNBC

As the use of artificial intelligence, both benign and adversarial, increases at breakneck speed, more cases of potentially harmful responses are being uncovered. These include hate speech, copyright infringement and sexual content. The emergence of these undesirable behaviors is compounded by a lack of regulation and insufficient testing of AI models, researchers told CNBC.

Getting machine learning models to behave the way they were intended to is also a tall order, said Javier Rando, a researcher in AI. "The answer, after almost 15 years of research, is, no, we don't know how to do this, and it doesn't look like we are getting better," Rando, who focuses on adversarial machine learning, told CNBC.

However, there are some ways to evaluate risks in AI, such as red teaming. The practice involves individuals testing and probing artificial intelligence systems to uncover and identify any potential harm, a modus operandi common in cybersecurity circles. Shayne Longpre, a researcher in AI and policy and lead of the Data Provenance Initiative, noted that there are currently not enough people working in red teams.

While AI startups now use first-party evaluators or contracted second parties to test their models, opening the testing to third parties such as normal users, journalists, researchers and ethical hackers would lead to more robust evaluation, according to a paper published by Longpre and fellow researchers. "Some of the flaws in the systems that people were finding required lawyers, medical doctors to actually vet, actual scientists who are specialized subject matter experts to figure out if this was a flaw or not, because the common person probably couldn't or wouldn't have sufficient expertise," Longpre said.

Adopting standardized 'AI flaw' reports, incentives and ways to disseminate information on these flaws in AI systems are some of the recommendations put forth in the paper. With this practice having been successfully adopted in other sectors such as software security, "we need that in AI now," Longpre added. Marrying this user-centred practice with governance, policy and other tools would ensure a better understanding of the risks posed by AI tools and their users, said Rando.

Project Moonshot is one such approach, combining technical solutions with policy mechanisms. Launched by Singapore's Infocomm Media Development Authority, Project Moonshot is a large language model evaluation toolkit developed with industry players such as IBM and Boston-based DataRobot. The toolkit integrates benchmarking, red teaming and testing baselines. There is also an evaluation mechanism that allows AI startups to ensure their models can be trusted and do no harm to users, Anup Kumar, head of client engineering for data and AI at IBM Asia Pacific, told CNBC.

Evaluation is a continuous process that should be done both before and after the deployment of models, said Kumar, who noted that the response to the toolkit has been mixed. "A lot of startups took this as a platform because it was open source, and they started leveraging that. But I think, you know, we can do a lot more." Moving forward, Project Moonshot aims to include customization for specific industry use cases and enable multilingual and multicultural red teaming.

Pierre Alquier, professor of statistics at the ESSEC Business School, Asia-Pacific, said that tech companies are currently rushing to release their latest AI models without proper evaluation.
"When a pharmaceutical company designs a new drug, they need months of tests and very serious proof that it is useful and not harmful before they get approved by the government," he noted, adding that a similar process is in place in the aviation sector. AI models need to meet a strict set of conditions before they are approved, Alquier added. A shift away from broad AI tools to developing ones that are designed for more specific tasks would make it easier to anticipate and control their misuse, said Alquier. "LLMs can do too many things, but they are not targeted at tasks that are specific enough," he said. As a result, "the number of possible misuses is too big for the developers to anticipate all of them." Such broad models make defining what counts as safe and secure difficult, according to a research that Rando was involved in. Tech companies should therefore avoid overclaiming that "their defenses are better than they are," said Rando.

Use AI to keep jobs, boost healthcare, curb climate change to maximise good: President Tharman

Straits Times

27-05-2025

  • Business
  • Straits Times

SINGAPORE - Artificial intelligence (AI) can enable displaced workers to find new, meaningful jobs in other sectors rather than resort to 'flipping burgers' after their previous jobs are disrupted by rapid technological developments, said President Tharman Shanmugaratnam.

Speaking at the opening gala of Singapore's fifth annual tech conference, Asia Tech x Singapore, on May 27, Mr Tharman noted the need for systematic training involving governments and industry to level up displaced workers' skills. 'If there are some people displaced in one sector because of creative disruption, how do they get deployed, not just into flipping burgers, but into new jobs in other sectors,' said Mr Tharman, urging the audience to think about productivity more broadly. 'It's productivity for the workforce at large... to maximise our potential to create good jobs for everyone who wishes to be in the workforce,' he said.

Speaking at the Fullerton Bay Hotel, Mr Tharman said that AI is driving productivity in factories, call centres and banks, but that this progress does not necessarily translate into new jobs that keep the wider workforce productive. Productivity for human society is one of three areas he cited where global consensus can be reached to maximise the good of AI and minimise the risk of its harms. In his half-hour speech to policymakers, tech leaders and industry guests, Mr Tharman said the other two areas are healthcare and climate change. Specifically, AI can aid in early disease detection and ease pressures on healthcare systems, and it has the potential to improve energy efficiency to curb climate change.

Organised by the Infocomm Media Development Authority (IMDA), the conference is expected to host 3,500 attendees from around the world, who will attend panels and discussions on AI governance and innovation in the technology sector from May 27 to 29. Executives from major tech companies such as OpenAI, Microsoft and Google are also scheduled to attend panel discussions that address pressing issues in tech.

Mr Tharman highlighted opportunities for AI to improve healthcare, such as spotting and treating diseases and supporting healthcare systems. He said: 'We need to take it much, much further and ensure that safety is ensured through regulation of AI in healthcare.' Tougher regulation is necessary to take AI's impact on the healthcare sector further while preserving trust in healthcare systems, said Mr Tharman, citing Singapore's efforts to introduce healthcare guidelines for developers and users. For instance, developers are obliged to gather feedback from clinicians and patients on their apps, to build confidence in using such systems.

Humanity's fight against climate change also stands to gain from AI innovation, even as intensifying AI use requires large amounts of water and energy for its computing. AI is key to monitoring levels of environmental degradation and to improving energy efficiency across the economy, such as by enabling more productive food systems so that forest resources are not depleted unsustainably, said Mr Tharman.

Despite the advantages that AI can bring, there are risks that society must come together to address. Mr Tharman said that among these risks is the use of AI-generated content, together with social media platforms and rogue actors, to spread disinformation that can erode trust in democracy.
Mr Tharman said: 'They are forcing people into bubbles and hardening divisions within society… We do not yet have a solution to this but it is a dangerous problem.' He also warned that AI risks transforming warfare for the worse, an urgent issue that the United States and China, in particular, must discuss to find ways to control the use of AI in war.

To achieve these objectives, industries must start to act sectorally, using AI to address the needs and spur innovation within fields such as agriculture, healthcare and climate change, Mr Tharman said. He suggested developing multilateral partnerships and coalitions among policymakers, scientists, tech players and civil society to iron out guidelines and common standards for AI.

Momentum for such coalitions in the tech industry is building, said Mr Tharman. Scientists and members of the global tech sector gathered in Singapore for the Singapore Conference on AI in April to discuss priorities for global AI safety research, which Mr Tharman described as a good example of what it takes to work together. Mr Tharman said: 'We need some form of calibration, of consensus-based guidance. Some way in which coalitions of the willing come together so that we can maximise the good and minimise the risk of the worst. We can't leave it to the jungle.'

Vivo X200 FE mobile launch in India soon, BIS certification listing spotted: What to expect

Hindustan Times

26-05-2025

  • Hindustan Times

The Vivo X200 FE is expected to launch in India soon, following its appearance on key certification platforms. The device, believed to be a rebranded version of the Vivo S30 Pro Mini that is set to launch in China on May 29, has been spotted on multiple regulatory sites, signalling its entry into several markets.

Recently, a device carrying the model number V2503 was listed on the Bureau of Indian Standards (BIS) website. This model number matches the Vivo X200 FE, though the BIS listing does not reveal detailed specifications or additional information about the handset. Alongside this, the smartphone has also received certification from Singapore's Infocomm Media Development Authority (IMDA), which indicates preparations for a launch in Southeast Asia. Earlier, the same model was recorded on Thailand's National Broadcasting and Telecommunications Commission (NBTC) website, further pointing to a release across the region.

Industry insiders, including tipster Yogesh Brar, suggest that Vivo will price the X200 FE between Rs. 50,000 and Rs. 60,000 in India. The launch is expected to happen by July, with the phone likely to be available in two colour options.

According to the rumour mill, the Vivo X200 FE is expected to feature a 6.31-inch LTPO OLED display with a 120Hz refresh rate and an in-display fingerprint sensor for security. The handset is set to run on MediaTek's Dimensity 9400e chipset, a recent addition to the market.

For photography, the device will come with a 50MP main sensor, identified in leaks as the Sony IMX921, alongside a 50MP telephoto camera with 3x optical zoom using the Sony IMX882 sensor and an 8MP ultrawide lens for additional photography options. Under the hood, the handset is expected to house a 6,500mAh battery with 90W fast charging support, giving the device a quicker recharge time.
