Latest news with #NatureMachineIntelligence


New York Post
7 days ago
- Science
- New York Post
AI is getting smarter every day — with thought processes already eerily similar to humans: study
Their powers go beyond AI-mpersonation. Artificial intelligence doesn't just look and act human — it supposedly thinks like us as well.

Chinese researchers found the first-ever evidence that AI models like ChatGPT process information similarly to the human mind, detailing the dystopian-seeming discovery in the journal 'Nature Machine Intelligence.'

'This provides compelling evidence that the object representations in LLMs (large language models), although not identical to human ones, share fundamental similarities that reflect key aspects of human conceptual knowledge,' wrote the team behind the study, a collaboration between the Chinese Academy of Sciences (CAS) and the South China University of Technology, the Independent reported.

The team reportedly wanted to see whether LLMs can 'develop humanlike object representations from linguistic and multimodal data' (data in different forms, such as text and audio). Prior to the study, many tech experts simply assumed that language models like ChatGPT mimicked human responses through pattern recognition.

To discover whether the AI thought process mirrors our cognition, researchers had OpenAI's ChatGPT-3.5 and Google's Gemini Pro Vision perform a series of 'odd-one-out' trials, in which the models were given three items and tasked with selecting the one that doesn't fit, the South China Morning Post reported.

Remarkably, the AI created 66 conceptual dimensions along which to sort the objects.

After comparing this cybernetic object sorting to human analysis of the same objects, the researchers found striking similarities between the models' 'perception' and human cognition — particularly when it came to language grouping. From this they deduced that our psychological doppelgangers 'develop human-like conceptual representations of objects.'

'Further analysis showed strong alignment between model embeddings and neural activity patterns' in the region of the brain associated with memory and scene recognition, the study found.

The researchers noted that the language-only LLMs fell short when categorizing visual aspects of objects such as shape or spatial properties. Meanwhile, research has shown that AI struggles with tasks requiring deeper levels of human cognition, such as analogical thinking — drawing comparisons between different things in order to reach a conclusion — and it remains unclear whether AI comprehends certain objects' significance or emotional value.

'Current AI can distinguish between cat and dog pictures, but the essential difference between this "recognition" and human "understanding" of cats and dogs remains to be revealed,' said He Huiguang, a professor at the CAS Institute of Automation.

Nonetheless, the scientists hope these findings will allow them to develop 'more human-like artificial cognitive systems' that can collaborate better with their flesh-and-blood brethren.
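To make the protocol concrete, here is a minimal, self-contained Python sketch of the odd-one-out triplet procedure the coverage describes. It is illustrative only: the toy pick_odd_one_out() stands in for a real query to ChatGPT-3.5 or Gemini Pro Vision, and the six objects and their categories are invented for the example. It shows how repeated triplet judgments accumulate into pairwise similarity counts, the raw material from which low-dimensional conceptual embeddings (such as the study's 66 dimensions) can later be derived.

```python
# Toy odd-one-out triplet protocol. A real run would replace
# pick_odd_one_out() with a prompt to an LLM such as:
#   "Which of these three is the odd one out: cat, dog, apple?"
from itertools import combinations
from collections import defaultdict

# Hand-labelled categories, standing in for the model's own judgment.
CATEGORY = {"cat": "animal", "dog": "animal", "apple": "food",
            "banana": "food", "hammer": "tool", "chair": "furniture"}

def pick_odd_one_out(a, b, c):
    """Stand-in for the LLM: pick the item whose category differs."""
    for odd, rest in ((a, (b, c)), (b, (a, c)), (c, (a, b))):
        if CATEGORY[rest[0]] == CATEGORY[rest[1]] != CATEGORY[odd]:
            return odd
    return a  # no two items share a category; a real model must still choose

# Each judgment implies the two items NOT chosen are the more similar pair.
similarity = defaultdict(int)
for a, b, c in combinations(CATEGORY, 3):
    odd = pick_odd_one_out(a, b, c)
    pair = tuple(sorted({a, b, c} - {odd}))
    similarity[pair] += 1

for pair, count in sorted(similarity.items(), key=lambda kv: -kv[1]):
    print(pair, count)  # e.g. ('cat', 'dog') survives together most often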


South China Morning Post
14-06-2025
- Science
- South China Morning Post
Chinese scientists find first evidence that AI could think like a human
Chinese researchers have confirmed for the first time that artificial intelligence large language models can spontaneously create a humanlike system to comprehend and sort natural objects, a process considered a pillar of human cognition.

It provides new evidence in a debate over the cognitive capacity of AI models, suggesting that artificial systems that reflect key aspects of human thinking may be possible.

'Understanding how humans conceptualise and categorise natural objects offers critical insights into perception and cognition,' the team said in a paper published in the peer-reviewed journal Nature Machine Intelligence on Tuesday. 'With the advent of large language models (LLMs), a key question arises: can these models develop humanlike object representations from linguistic and multimodal data?'

LLMs are AI models trained on a vast amount of text data – along with visual and audio data in the case of multimodal large language models (MLLMs) – to process tasks.


South China Morning Post
06-05-2025
- Science
- South China Morning Post
Scientists create smallest, lightest wireless robot that can transform to suit conditions
Chinese researchers have developed what they say is the world's smallest and lightest wireless robot that can change form to travel on land or in the air.

Inspired by Lego, it could potentially be used in complex environments like disaster rescue operations, according to the team from Tsinghua and Beihang universities.

Fundamental to the microrobot is a 'morphable actuator' – a component that converts energy into force. The researchers said their new actuator could also be used to make medical devices and components for virtual and augmented reality.

'We introduce a synergistic design concept of small-scale continuously morphable actuators,' the team said in a paper published in the peer-reviewed journal Nature Machine Intelligence on April 18. Drawing on the idea of Lego, the actuators can be customised to make versatile machines like robots that can morph between different modes.

'Compared with other known wireless [land-air] robots, our robot has the smallest size, lightest weight and fastest ground movement speed in the world,' said study author Zhang Yihui, a professor of engineering mechanics at Tsinghua University.

Yahoo
20-04-2025
- Science
- Yahoo
Breakthrough: China unveils AI eyes for the blind to move independently and safely
Imagine a world where people who cannot see can still move around confidently, without self-doubt or fear. That world isn't far away with the arrival of AI.

According to a study published in Nature Machine Intelligence, researchers from China unveiled a new wearable AI system that empowers blind and visually impaired individuals to navigate easily and independently. The system provides real-time guidance to users through a combination of video analysis, vibrations and audio prompts.

The AI system includes a camera, an AI processor and bone conduction headphones, and is mounted between the user's eyebrows. The camera captures live footage for the AI system to analyze; short audio cues are then delivered directly through the headphones without blocking ambient sounds.

The system was developed by a team from Shanghai Jiao Tong University in collaboration with the Shanghai Artificial Intelligence Laboratory, East China Normal University, the Hong Kong University of Science and Technology, and the State Key Laboratory of Medical Neurobiology at Fudan University.

'This research paves the way for user-friendly visual assistance systems, offering alternative avenues to enhance the quality of life for people with visual impairment,' the team wrote.

Lead researcher Gu Leilei, an associate professor at Shanghai Jiao Tong University, emphasised the team's commitment to making the system as practical and easy to use as possible. 'This system can partially replace the eyes,' he said.

'Lengthy audio descriptions of the environment can overwhelm and tire users, making them reluctant to use such systems,' Gu told the South China Morning Post. 'Unlike a car navigation system with detailed directions, our work aims to minimise AI system output, communicating information key for navigation in a way that the brain can easily absorb.'

The equipment is lightweight and compact, meaning it can be worn all day without discomfort and allows users to move naturally without feeling burdened.

The system has been tested indoors with 20 visually impaired volunteers. After just 20 minutes of practice, most of the users could operate it with ease, according to the study.

Setting a destination is simple: the user issues a voice command, and the AI finds a safe route, offering only essential prompts along the way. The system is currently trained to identify 21 objects, including beds, chairs, tables, sinks, televisions and food items, and the researchers plan to expand these capabilities further.

Gu said the team's next focus is refining the system for outdoor environments, where navigational challenges are far more complex. Enhancements could include improved object detection, dynamic route adaptation and integration with real-world GPS systems.

With further development, this AI-powered wearable system may offer millions of visually impaired people worldwide a new level of autonomy, and with it the confidence to live independently.
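As described, the pipeline is essentially a sense-plan-speak loop: analyse a camera frame, reduce the scene to one short instruction, and speak it through the bone conduction headphones. The Python sketch below is a hedged illustration of that loop, not the authors' implementation: detect_objects(), plan_step() and speak() are hypothetical stand-ins, and the half-second cadence is an assumption, since the article does not specify the system's internals.

```python
# Hedged sketch of the wearable's per-frame loop: camera in, one terse cue out.
import time

# Six of the 21 object classes the article says the system recognises.
KNOWN_OBJECTS = {"bed", "chair", "table", "sink", "television", "food"}

def detect_objects(frame):
    """Stand-in for the onboard vision model; returns (label, bearing) pairs."""
    detections = [("chair", "left"), ("table", "ahead")]  # placeholder output
    return [(label, bearing) for label, bearing in detections
            if label in KNOWN_OBJECTS]

def plan_step(detections, destination):
    """Stand-in for the planner: collapse the scene to a single short cue."""
    # A real planner would route toward `destination`; here we only dodge.
    for label, bearing in detections:
        if bearing == "ahead":
            return f"{label} ahead, step right"
    return "clear, go straight"

def speak(cue):
    # Real system: bone conduction headphones, leaving ambient sound audible.
    print(cue)

def navigate(destination, frames):
    for frame in frames:
        speak(plan_step(detect_objects(frame), destination))
        time.sleep(0.5)  # assumed cadence; the article gives no update rate

navigate("kitchen", frames=range(3))
```

Keeping plan_step()'s output to a few words mirrors Gu's stated design goal of minimising system output so that the brain can easily absorb it.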


South China Morning Post
17-04-2025
- Science
- South China Morning Post
Chinese scientists use AI to help visually impaired to ‘see', explore the world
Chinese scientists say they have developed a wearable artificial intelligence system that can help visually impaired people navigate the world around them.

An AI algorithm analyses real-time footage of the environment and gives the user concise directional prompts via bone conduction headphones, according to a paper published on Monday in the peer-reviewed journal Nature Machine Intelligence. Artificial skin sensors on each wrist monitor the user's surroundings and vibrate an alert if they detect potential obstacles on either side, it said.

The system was developed by engineers from Shanghai Jiao Tong University, the Shanghai Artificial Intelligence Laboratory, East China Normal University and the Hong Kong University of Science and Technology, in collaboration with researchers from the State Key Laboratory of Medical Neurobiology at Fudan University.

'This research paves the way for user-friendly visual assistance systems, offering alternative avenues to enhance the quality of life for people with visual impairment,' they wrote.

Lead author Gu Leilei, an associate professor with Shanghai Jiao Tong University's School of Electronic Information and Electrical Engineering, said the AI-powered system was designed to optimise the user's experience through intuitive cues.

'Lengthy audio descriptions of the environment can overwhelm and tire users, making them reluctant to use such systems,' Gu said.
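This report adds a detail the Yahoo piece above only touches on: guidance is split across two channels, with concise audio carrying route prompts and wrist-worn artificial skin vibrating when an obstacle appears on that side. Below is a minimal sketch of that side-alert logic; the Obstacle fields, the bearing threshold and the vibrate() interface are assumptions for illustration, not the paper's actual interfaces.

```python
# Sketch of the wrist-vibration channel: lateral hazards go to the wrists,
# leaving the audio channel free for route guidance.
from dataclasses import dataclass

@dataclass
class Obstacle:
    label: str
    bearing_deg: float  # 0 = straight ahead; negative = left, positive = right

def vibrate(side: str) -> None:
    print(f"vibrate {side} wrist")  # stand-in for driving the skin actuator

def alert(obstacles: list[Obstacle], side_threshold_deg: float = 15.0) -> None:
    """Buzz the wrist on whichever side an obstacle is detected."""
    for ob in obstacles:
        if ob.bearing_deg <= -side_threshold_deg:
            vibrate("left")
        elif ob.bearing_deg >= side_threshold_deg:
            vibrate("right")
        # Obstacles near dead ahead are left to the audio channel.

alert([Obstacle("doorframe", -30.0), Obstacle("chair", 40.0)])
```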