Latest news with #LiPing


Malay Mail
09-06-2025
- Science
- Malay Mail
PolyU-led research reveals that sensory and motor inputs help large language models represent complex concepts
A research team led by Prof. Li Ping, Sin Wai Kin Foundation Professor in Humanities and Technology, Dean of the PolyU Faculty of Humanities and Associate Director of the PolyU-Hangzhou Technology and Innovation Research Institute, explored the similarities between large language models and human representations, shedding new light on the extent to which language alone can shape the formation and learning of complex conceptual knowledge.

HONG KONG SAR - Media OutReach Newswire - 9 June 2025 - Can one truly understand what "flower" means without smelling a rose, touching a daisy or walking through a field of wildflowers? This question is at the core of a rich debate in philosophy and cognitive science. While embodied cognition theorists argue that physical, sensory experience is essential to concept formation, studies of the rapidly evolving large language models (LLMs) suggest that language alone can build deep, meaningful representations of the world.

By exploring the similarities between LLMs and human representations, researchers at The Hong Kong Polytechnic University (PolyU) and their collaborators have shed new light on the extent to which language alone can shape the formation and learning of complex conceptual knowledge. Their findings also revealed how the use of sensory input for grounding or embodiment – connecting abstract with concrete concepts during learning – affects the ability of LLMs to understand complex concepts and form human-like representations. The study, conducted in collaboration with scholars from Ohio State University, Princeton University and the City University of New York, was recently published in Nature Human Behaviour.

Led by Prof. Li, the research team selected conceptual word ratings produced by state-of-the-art LLMs, namely ChatGPT (GPT-3.5, GPT-4) and Google LLMs (PaLM and Gemini), and compared them with human-generated ratings of around 4,500 words across non-sensorimotor (e.g., valence, concreteness, imageability), sensory (e.g., visual, olfactory, auditory) and motor (e.g., foot/leg, mouth/throat) domains from the highly reliable and validated Glasgow Norms and Lancaster Norms datasets.

The research team first compared pairs of data from individual humans and individual LLM runs to discover the similarity between word ratings across each dimension in the three domains, using results from human-human pairs as the benchmark. This approach could, for instance, highlight to what extent humans and LLMs agree that certain concepts are more concrete than others. However, such analyses might overlook how multiple dimensions jointly contribute to the overall representation of a word. For example, the word pair "pasta" and "roses" might receive equally high olfactory ratings, but "pasta" is in fact more similar to "noodles" than to "roses" when appearance and taste are considered. The team therefore conducted a representational similarity analysis, treating each word as a vector along multiple non-sensorimotor, sensory and motor dimensions, for a more complete comparison between humans and LLMs.

The representational similarity analyses revealed that word representations produced by the LLMs were most similar to human representations in the non-sensorimotor domain, less similar for words in the sensory domain and most dissimilar for words in the motor domain.
This highlights the limitations of LLMs in fully capturing humans' conceptual understanding. While non-sensorimotor concepts are represented well, LLMs fall short when representing concepts that involve sensory information, such as visual appearance and taste, or body movement. Motor concepts, which are less fully described in language and rely heavily on embodied experience, are even more challenging for LLMs than sensory concepts such as colour, which can be learned from textual data.

In light of these findings, the researchers examined whether grounding would improve the LLMs' performance. They compared the performance of more grounded LLMs trained on both language and visual input (GPT-4, Gemini) with that of LLMs trained on language alone (GPT-3.5, PaLM), and discovered that the more grounded models incorporating visual input exhibited a much higher similarity with human representations.

Prof. Li Ping said, "The availability of both LLMs trained on language alone and those trained on language and visual input, such as images and videos, provides a unique setting for research on how sensory input affects human conceptualisation. Our study exemplifies the potential benefits of multimodal learning, a human ability to simultaneously integrate information from multiple dimensions in the learning and formation of concepts and knowledge in general. Incorporating multimodal information processing in LLMs can potentially lead to a more human-like representation and more efficient human-like performance in LLMs in the future."

Interestingly, this finding is also consistent with those of previous human studies indicating representational transfer. Humans acquire object-shape knowledge through both visual and tactile experiences, and seeing and touching objects activates the same regions of the human brain. The researchers pointed out that – as in humans – multimodal LLMs may use multiple types of input to merge or transfer representations embedded in a continuous, high-dimensional space.

Prof. Li added, "The smooth, continuous structure of the embedding space in LLMs may underlie our observation that knowledge derived from one modality could transfer to other related modalities. This could explain why congenitally blind and normally sighted people can have similar representations in some areas. Current limits in LLMs are clear in this respect."

Ultimately, the researchers envision a future in which LLMs are equipped with grounded sensory input, for example through humanoid robotics, allowing them to actively interpret the physical world and act accordingly. Prof. Li said, "These advances may enable LLMs to fully capture embodied representations that mirror the complexity and richness of human cognition, and a rose in an LLM's representation will then be indistinguishable from that of humans."

Hashtag: #PolyU #HumanCognition #LargeLanguageModels #LLMs #GenerativeAI

The issuer is solely responsible for the content of this announcement.
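As an illustration of the norming procedure described in the article above, here is a minimal sketch of how word ratings might be elicited from an LLM and averaged over repeated runs. The prompt wording, the 1-7 scale, the tiny word list and the choice of the OpenAI chat API are assumptions for illustration, not the researchers' actual protocol.

```python
# Illustrative sketch only: prompt wording, the 1-7 scale and the word list are
# assumptions, not the study's actual protocol.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

WORDS = ["flower", "pasta", "roses"]  # stand-ins for the ~4,500-word list
DIMENSION = "concrete"                # e.g., a Glasgow Norms-style dimension

def rate_word(word: str, dimension: str) -> float:
    """Ask the model for a 1-7 rating, mimicking a human norming questionnaire."""
    prompt = (
        f"On a scale from 1 (not at all) to 7 (extremely), how {dimension} "
        f"is the concept '{word}'? Reply with a single number."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return float(resp.choices[0].message.content.strip())

# Average several independent runs per word, much as one would average across
# individual human raters.
ratings = {w: sum(rate_word(w, DIMENSION) for _ in range(5)) / 5 for w in WORDS}
print(ratings)
```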

Associated Press
09-06-2025
- Science
- Associated Press
PolyU-led research reveals that sensory and motor inputs help large language models represent complex concepts
HONG KONG SAR - Media OutReach Newswire - 9 June 2025 - Can one truly understand what 'flower' means without smelling a rose, touching a daisy or walking through a field of wildflowers? This question is at the core of a rich debate in philosophy and cognitive science. While embodied cognition theorists argue that physical, sensory experience is essential to concept formation, studies of the rapidly evolving large language models (LLMs) suggest that language alone can build deep, meaningful representations of the world.

A research team led by Prof. Li Ping, Sin Wai Kin Foundation Professor in Humanities and Technology, Dean of the PolyU Faculty of Humanities and Associate Director of the PolyU-Hangzhou Technology and Innovation Research Institute, explored the similarities between large language models and human representations, shedding new light on the extent to which language alone can shape the formation and learning of complex conceptual knowledge.

By exploring the similarities between LLMs and human representations, researchers at The Hong Kong Polytechnic University (PolyU) and their collaborators have shed new light on the extent to which language alone can shape the formation and learning of complex conceptual knowledge. Their findings also revealed how the use of sensory input for grounding or embodiment – connecting abstract with concrete concepts during learning – affects the ability of LLMs to understand complex concepts and form human-like representations. The study, conducted in collaboration with scholars from Ohio State University, Princeton University and the City University of New York, was recently published in Nature Human Behaviour.

Led by Prof. Li, the research team selected conceptual word ratings produced by state-of-the-art LLMs, namely ChatGPT (GPT-3.5, GPT-4) and Google LLMs (PaLM and Gemini), and compared them with human-generated ratings of around 4,500 words across non-sensorimotor (e.g., valence, concreteness, imageability), sensory (e.g., visual, olfactory, auditory) and motor (e.g., foot/leg, mouth/throat) domains from the highly reliable and validated Glasgow Norms and Lancaster Norms datasets.

The research team first compared pairs of data from individual humans and individual LLM runs to discover the similarity between word ratings across each dimension in the three domains, using results from human-human pairs as the benchmark. This approach could, for instance, highlight to what extent humans and LLMs agree that certain concepts are more concrete than others. However, such analyses might overlook how multiple dimensions jointly contribute to the overall representation of a word. For example, the word pair 'pasta' and 'roses' might receive equally high olfactory ratings, but 'pasta' is in fact more similar to 'noodles' than to 'roses' when appearance and taste are considered. The team therefore conducted a representational similarity analysis, treating each word as a vector along multiple non-sensorimotor, sensory and motor dimensions, for a more complete comparison between humans and LLMs.

The representational similarity analyses revealed that word representations produced by the LLMs were most similar to human representations in the non-sensorimotor domain, less similar for words in the sensory domain and most dissimilar for words in the motor domain.
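To make the analysis concrete, below is a minimal representational similarity analysis (RSA) sketch. The four words, three dimensions and all rating values are toy numbers invented for illustration; the study compared roughly 4,500 words across the full set of Glasgow and Lancaster norm dimensions.

```python
# Toy representational similarity analysis (RSA): compare the *structure* of
# human and LLM word representations rather than individual ratings.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Rows are words; columns are rating dimensions (toy values, not study data):
# valence (non-sensorimotor), olfactory (sensory), foot/leg (motor).
words = ["pasta", "noodles", "roses", "daisies"]
human = np.array([
    [5.5, 5.8, 1.5],   # pasta
    [5.3, 5.6, 1.4],   # noodles
    [6.4, 6.2, 1.2],   # roses
    [6.1, 4.9, 1.3],   # daisies
])
llm = np.array([
    [5.2, 5.5, 2.4],
    [5.1, 5.4, 2.3],
    [6.0, 5.9, 2.0],
    [5.8, 4.6, 2.1],
])

def rdm(ratings: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix: pairwise distances between word vectors."""
    return pdist(ratings, metric="correlation")  # 1 - Pearson r per word pair

# RSA score: rank-correlate the two dissimilarity structures. A high rho means
# the LLM groups words (pasta near noodles, not near roses) the way humans do.
rho, _ = spearmanr(rdm(human), rdm(llm))
print(f"human-LLM representational similarity: rho = {rho:.2f}")
```

Running the same pipeline separately on non-sensorimotor, sensory and motor dimensions yields the domain-wise comparison reported below.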
These findings highlight the limitations of LLMs in fully capturing humans' conceptual understanding. While non-sensorimotor concepts are represented well, LLMs fall short when representing concepts that involve sensory information, such as visual appearance and taste, or body movement. Motor concepts, which are less fully described in language and rely heavily on embodied experience, are even more challenging for LLMs than sensory concepts such as colour, which can be learned from textual data.

In light of these findings, the researchers examined whether grounding would improve the LLMs' performance. They compared the performance of more grounded LLMs trained on both language and visual input (GPT-4, Gemini) with that of LLMs trained on language alone (GPT-3.5, PaLM), and discovered that the more grounded models incorporating visual input exhibited a much higher similarity with human representations.

Prof. Li Ping said, 'The availability of both LLMs trained on language alone and those trained on language and visual input, such as images and videos, provides a unique setting for research on how sensory input affects human conceptualisation. Our study exemplifies the potential benefits of multimodal learning, a human ability to simultaneously integrate information from multiple dimensions in the learning and formation of concepts and knowledge in general. Incorporating multimodal information processing in LLMs can potentially lead to a more human-like representation and more efficient human-like performance in LLMs in the future.'

Interestingly, this finding is also consistent with those of previous human studies indicating representational transfer. Humans acquire object-shape knowledge through both visual and tactile experiences, and seeing and touching objects activates the same regions of the human brain. The researchers pointed out that – as in humans – multimodal LLMs may use multiple types of input to merge or transfer representations embedded in a continuous, high-dimensional space.

Prof. Li added, 'The smooth, continuous structure of the embedding space in LLMs may underlie our observation that knowledge derived from one modality could transfer to other related modalities. This could explain why congenitally blind and normally sighted people can have similar representations in some areas. Current limits in LLMs are clear in this respect.'

Ultimately, the researchers envision a future in which LLMs are equipped with grounded sensory input, for example through humanoid robotics, allowing them to actively interpret the physical world and act accordingly. Prof. Li said, 'These advances may enable LLMs to fully capture embodied representations that mirror the complexity and richness of human cognition, and a rose in an LLM's representation will then be indistinguishable from that of humans.'

Hashtag: #PolyU #HumanCognition #LargeLanguageModels #LLMs #GenerativeAI

The issuer is solely responsible for the content of this announcement.
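As a footnote to the grounding result above, the comparison itself is simple once per-model RSA scores are in hand: average the scores of the language-only models against those of the language-plus-vision models. The scores below are placeholders, not the paper's numbers.

```python
# Placeholder RSA scores per model (NOT the paper's results), e.g. produced by
# the rdm/spearmanr pipeline sketched earlier and averaged over human raters.
import statistics

rsa_scores = {
    "GPT-3.5": 0.61, "PaLM": 0.58,    # trained on language alone
    "GPT-4": 0.72, "Gemini": 0.70,    # also trained on visual input
}
language_only = [rsa_scores["GPT-3.5"], rsa_scores["PaLM"]]
grounded = [rsa_scores["GPT-4"], rsa_scores["Gemini"]]

print(f"language-only mean RSA: {statistics.mean(language_only):.2f}")
print(f"grounded mean RSA:      {statistics.mean(grounded):.2f}")
```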


Forbes
24-04-2025
- Automotive
- Forbes
CATL Battery Billionaire And Wife Donate $137 Million To Fudan
Chinese EV battery billionaire Li Ping and his wife Liao Mei have decided to make a one-time donation of one billion yuan, or about $137 million, to Fudan University to support a new institute of advanced studies, the school said in an announcement on Tuesday. Li is a 1985 graduate of the elite Shanghai-headquartered school's department of materials who went on to become vice chairman of Contemporary Amperex Technology, or CATL, one of the world's largest makers of EV batteries. Liao graduated from Fudan's history department in 1986. The new institute, to be called the Xue Min Institute of Advanced Studies, aims to attract world-class scholars and focus on natural science and basic research, Fudan said.

Greater China, including mainland China and Hong Kong, accounted for nine of the top 20 billionaires in the world automotive industry on the 2025 Forbes Billionaires List released earlier this month. Among them, Li, a Hong Kong citizen, had an estimated fortune of $7.3 billion. Tesla's Elon Musk was the world's richest automotive industry billionaire, with an estimated fortune of $342 billion. CATL Chairman and CEO Robin Zeng came in second among the auto industry's richest with a fortune of $37.9 billion. Wang Chuanfu, chairman of China's BYD, the world's biggest EV maker, ranked third at $26.4 billion.


South China Morning Post
25-03-2025
- Science
- South China Morning Post
China in bid to challenge giant SpaceX by deploying maglev rocket launch pad by 2028
In a bid to disrupt the United States' long-held dominance in space exploration, China is quietly advancing a radical new rocket launch system – powered not by roaring engines but by electromagnetic force – that could propel satellites into orbit with unprecedented speed and efficiency.

At the heart of the ambitious project is Galactic Energy, a private aerospace company that plans to debut the world's first electromagnetic rocket launch pad by 2028, a project that could redraw the competitive lines of the global space industry. Developed in partnership with state-backed research institutes in Sichuan province in southwestern China, the system uses superconducting magnets to silently accelerate rockets to supersonic speeds before ignition, a process often compared to launching a maglev train vertically.

The Ziyang government in Sichuan and the China Aerospace Science and Industry Corporation (CASIC) are testing China's first electromagnetic launch verification platform with the ambitious goal of launching in three years, according to a report by Sichuan Radio and Television last week. Because rockets burn most of their fuel at the beginning of a flight, the platform would accelerate them to speeds above Mach 1 before ignition, pointing to a future in which launches could become as routine as high-speed rail departures.

The technology could double payload capacity and lower launch costs, said Li Ping, president of the Ziyang Commercial Space Launch Technology Research Institute. Li said the launch track would not require the maintenance needed for traditional launch pads, enabling more frequent launches.

If successful, the system could offer China the critical edge it seeks to challenge American giants such as SpaceX.
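Li's fuel argument can be sanity-checked with the Tsiolkovsky rocket equation. The exhaust velocity and delta-v budget below are generic textbook assumptions, not figures from the article.

```python
# Back-of-envelope check of the pre-ignition boost, via the Tsiolkovsky rocket
# equation m0/mf = exp(dv / ve). All numbers are generic assumptions.
import math

V_EXHAUST = 3000.0    # m/s, typical kerosene-engine effective exhaust velocity
DV_TO_ORBIT = 9400.0  # m/s, rough delta-v budget to low Earth orbit
BOOST = 340.0         # m/s, roughly Mach 1 imparted by the track before ignition

def mass_ratio(dv: float) -> float:
    """Initial-to-final mass ratio the rocket needs to supply a given delta-v."""
    return math.exp(dv / V_EXHAUST)

print(f"mass ratio without boost: {mass_ratio(DV_TO_ORBIT):.1f}")              # ~22.9
print(f"mass ratio with Mach-1 boost: {mass_ratio(DV_TO_ORBIT - BOOST):.1f}")  # ~20.5
```

Because payload is only a few per cent of a rocket's lift-off mass, even this modest cut in the required mass ratio frees a disproportionately large margin for payload, which is the direction, if not the magnitude, of the doubling Li describes.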