Latest news with #cognitiveScience


Fast Company
6 days ago
- Business
- Fast Company
Multiply the power of a brand name with a sonic signature
Sound is one of our most primal senses. Originally an early warning system against predators, sound still shapes our first impressions when we encounter something new. However, the branding world has historically led with the visual: brand name, logo, and design come first; sonic branding, if done at all, comes later. In today's AI-enabled world, this is a missed opportunity. When a sonic signature is developed at the start of the branding process—from the same phonetic DNA as the name—brands can engage consumers across multiple senses, turning first impressions into full-brain experiences.

Why does sound matter?

Branding is more competitive than ever. According to the U.S. Census Bureau, over 5 million new business applications were filed in 2024 alone. As these brands launch into an already saturated marketplace, sound remains one of the most underrated tools for standing out.

Sound is a call to action

The power of sound is rooted in cognitive science, which shows that our brains are wired to seek out what's different. When we encounter something novel—like a brand—our brains decide within the first few seconds whether it is worth remembering. In that instant, sound gives brands a head start: auditory input is processed two to four times faster than visual input and produces quicker reactions.

For this reason, sound has historically been used as a powerful call to action. The first recorded example dates to around 400 AD, when Paulinus of Nola, a Roman senator, introduced bells into the Christian church. These bells were the first 'sonic signature,' serving as a signal to call worshippers to prayer. Over a millennium later, in the early 1900s, the scientist Ivan Pavlov formally demonstrated the power of sound, showing that dogs could be conditioned to salivate at the sound of a bell (even when no food was presented). Today, we see this principle everywhere—it's why movie soundtracks make us feel a certain way (even when the movie isn't playing), or why YouTube has 10-hour videos of nature sounds to use while studying.

Sound has a unique ability to transport us somewhere else, and this has extremely valuable implications in branding. Research from sonic testing firm SoundOut, based on responses from 30,000 consumers, found that brands with recognizable sonic logos were seen as 5% more valuable, translating to millions of dollars in additional value. Kantar's BrandZ research points the same way: brands with strong sonic assets showed 76% higher brand power and a 138% increase in perceived advertising strength. In other words, sound can successfully drive consumer behavior (interest, engagement, or even purchase). Finally, a strong sonic logo markets itself: it's estimated that Intel's was played somewhere in the world once every 5 seconds after its release in 1994.

Start with naming

The sound of a brand doesn't start with its sonic signature, however, but with its name. Brand names are a priming tool of their own—they signal how a brand might behave. From over four decades of proprietary linguistic research, we know that different sounds can prime different associations in the mind of a consumer (this is called sound symbolism). We've found that sounds like 'z' and 'v' feel fast and energetic, while sounds like 'b' and 'g' feel large and stable, among many other associations. When combined, these sounds shape consumer perception: an arbitrary name like Blackberry (loud and distinctive) creates different expectations from an invented name like Dasani (smooth and luxurious).
When a brand name and sonic signature align, the result is more valuable and entirely authentic—a duet of brand assets that lives and breathes as one. For example, Toyota's three-note sonic signature features a choir of voices singing 'oh-oh-ah,' mirroring the vowel sounds of the brand name. Lucid Motors did the same, creating a five-note melody to mirror the five letters of Lucid. This synergy forms a lasting link between name and sound, boosting recognition—and consequently, purchase intent—even when the name or sound is encountered on its own.

Beyond memorability, the integration of name and sonic has another powerful benefit. Cognitively, words and language (like a brand name) are processed predominantly in the left hemisphere of the brain, while music and sound are processed in the right. When name and sonic work together, they activate the whole brain—at both a conscious and subconscious level. This allows a brand to truly transcend the sum of its parts. A brand name on its own can make you think. A sound on its own can make you feel. But when name and sonic signature are designed as one, they create a unified cognitive experience: more resonant, more memorable, and more impactful. In a crowded market, this isn't a luxury—it's your competitive advantage.


The Guardian
13-06-2025
- Science
- The Guardian
A Trick of the Mind by Daniel Yon review – explaining psychology's most important theory
The process of perception feels quite passive. We open our eyes and light floods in; the world is just there, waiting to be seen. But in reality there is an active element that we don't notice. Our brains are always 'filling in' our perceptual experience, supplementing incoming information with existing knowledge. For example, each of us has a spot at the back of the eye where there are no light receptors. We don't see the resulting hole in our field of vision because our brains ignore it. The phenomenon we call 'seeing' is the result of a continuously updated model in your mind, made up partly of incoming sensory information, but partly of pre-existing expectations. This is what is meant by the counterintuitive slogan of contemporary cognitive science: 'perception is a controlled hallucination'.

A century ago, someone with an interest in psychology might have turned to the work of Freud for an overarching vision of how the mind works. To the extent there is a psychological theory even remotely as significant today, it is the 'predictive processing' hypothesis. The brain is a prediction machine, and our perceptual experiences consist of our prior experiences as well as new data. Daniel Yon's A Trick of the Mind is just the latest popularisation of these ideas, but he makes an excellent guide, both as a scientist working at the leading edge of this field and as a writer of great clarity. Your brain is a 'skull-bound scientist', he proposes, forming hypotheses about the world and collecting data to test them. The fascinating, often ingenious research reviewed here is sorely in need of an audience beyond dusty scientific journals.

In 2017 a Yale lab recruited voice-hearing psychics and people with psychosis to take part in an experiment alongside non-voice-hearing controls. Participants were trained to experience auditory hallucinations when they saw a simple visual pattern (an unnervingly easy thing for psychologists to do). The team was able to demonstrate that the voice-hearers in their sample relied more heavily on prior experience than the non-voice-hearers. In other words, we can all cultivate the ability to conjure illusory sound based on our expectations, but some people already have that propensity, and it can have a dramatic effect on their lives.

To illustrate how expectations seep into visual experience, Yon's PhD student Helen Olawole Scott managed to manipulate people's ratings of the clarity of moving images they had seen. The key detail is that when participants had been led to expect less clarity in their perception, that is exactly what they reported. But the clarity of the image on the screen wasn't really any poorer.

It's sometimes a shame that Yon's book doesn't delve deeper. In Olawole Scott's experiments, for example, does Yon believe that it was participants' visual experience itself that became less clear, or just their judgments about the experience? Is there a meaningful difference? He also avoids engaging with some of the limitations of the predictive processing approach, including how it accounts for abstract thought. Challenges to a hypothesis are interesting, and help illuminate its details. In an otherwise theoretically sophisticated discussion this feels like an oversight.

One of the most enjoyable things popular science can do is surprise us with a new angle on how the world operates. Yon's book does this often as he draws out the implications of the predictive brain.
Our introspection is unreliable ('we see ourselves dimly, through a cloud of noise'); the boundary between belief and perception is vaguer than it seems ('your brain begins to perceive what it expects'); and conspiracy theories are probably an adaptive result of a mind more open to unusual explanations during periods of greater uncertainty. This is a complex area of psychology, with a huge amount of new work being published all the time. To fold it into such a lively read is an admirable feat.

A Trick of the Mind: How the Brain Invents Your Reality by Daniel Yon is published by Cornerstone (£22).
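For readers who want the formalism behind the 'controlled hallucination' slogan: predictive processing is often cast as precision-weighted averaging of a prior expectation and noisy sensory evidence. The toy sketch below is a generic illustration of that standard framing, not anything from Yon's book, and the numbers are invented.

```python
# Toy precision-weighted fusion: a percept as a compromise between a prior
# expectation and noisy sensory evidence (illustrative, not from the book).
def percept(prior_mean, prior_precision, sensory_mean, sensory_precision):
    """Posterior mean of two Gaussians: the more precise source gets more weight."""
    w_prior = prior_precision / (prior_precision + sensory_precision)
    return w_prior * prior_mean + (1 - w_prior) * sensory_mean

# Expecting a blurry image (prior clarity 0.3) pulls the reported clarity of a
# genuinely sharp stimulus (0.8) downward, the shape of the Olawole Scott result.
print(percept(prior_mean=0.3, prior_precision=2.0,
              sensory_mean=0.8, sensory_precision=1.0))  # ~0.47
```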


Forbes
10-06-2025
- Science
- Forbes
Intelligence Illusion: What Apple's AI Study Reveals About Reasoning
The gleaming veneer of artificial intelligence has captivated the world, with large language models producing eloquent responses that often seem indistinguishable from human thought. Yet beneath this polished surface lies a troubling reality that Apple's latest research has brought into sharp focus: eloquence is not intelligence, and imitation is not understanding.

Apple's new study, titled "The Illusion of Thinking," has sent shockwaves through the AI community by demonstrating that even the most sophisticated reasoning models fundamentally lack genuine cognitive abilities. This revelation validates what prominent researchers like Meta's Chief AI Scientist Yann LeCun have been arguing for years—that current AI systems are sophisticated pattern-matching machines rather than thinking entities.

The Apple research team's findings are both methodical and damning. By creating controlled puzzle environments that could precisely manipulate complexity while maintaining logical consistency, they revealed three distinct performance regimes in large reasoning models. In low-complexity tasks, standard models actually outperformed their supposedly superior reasoning counterparts. Medium-complexity problems showed marginal benefits from additional "thinking" processes. But most tellingly, both model types experienced complete collapse when faced with high-complexity tasks.

What makes these findings particularly striking is the counterintuitive scaling behavior the researchers observed. Rather than improving with increased complexity as genuine intelligence would, these models showed a peculiar pattern: their reasoning effort would increase up to a certain point, then decline dramatically despite adequate computational resources. This suggests that the models weren't actually reasoning at all—they were following learned patterns that broke down when confronted with novel challenges. The study exposed fundamental limitations in exact computation, revealing that these systems fail to use explicit algorithms and reason inconsistently across similar puzzles. When the veneer of sophisticated language is stripped away, what remains is a sophisticated but ultimately hollow mimicry of thought.

These findings align with warnings that Yann LeCun and other leading AI researchers have been voicing for years. LeCun has consistently argued that current LLMs will be largely obsolete within five years, not because they'll be replaced by better versions of the same technology, but because they represent a fundamentally flawed approach to artificial intelligence.

The core issue isn't technical prowess—it's conceptual. These systems don't understand; they pattern-match. They don't reason; they interpolate from training data. They don't think; they generate statistically probable responses based on massive datasets. The sophistication of their output masks the absence of genuine comprehension, creating what researchers now recognize as an elaborate illusion of intelligence. This disconnect between appearance and reality has profound implications for how we evaluate and deploy AI systems. When we mistake fluency for understanding, we risk making critical decisions based on fundamentally flawed reasoning processes. The danger isn't just technological—it's epistemological.
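To make the methodology concrete, the study's controlled-puzzle setup amounts to sweeping a single complexity knob and measuring solve accuracy at each setting. The sketch below is a rough illustration of that idea using Tower of Hanoi, a puzzle of the sort the paper describes; `query_model` is a placeholder for whatever model call is being evaluated, and the harness is illustrative rather than Apple's actual code.

```python
# Illustrative harness (not Apple's code): measure solve accuracy as puzzle
# complexity grows, so the low/medium/high-complexity regimes become visible.
from typing import Callable, List, Tuple

def hanoi_solution(n: int, src: str = "A", aux: str = "B", dst: str = "C") -> List[Tuple[str, str]]:
    """Ground-truth optimal move list for Tower of Hanoi with n disks."""
    if n == 0:
        return []
    return (hanoi_solution(n - 1, src, dst, aux)   # park n-1 disks on the spare peg
            + [(src, dst)]                          # move the largest disk
            + hanoi_solution(n - 1, aux, src, dst)) # bring the n-1 disks back on top

def evaluate(query_model: Callable[[str], List[Tuple[str, str]]],
             max_disks: int = 10, trials: int = 20) -> dict:
    """Accuracy per complexity level. query_model is a placeholder for an LLM
    call that returns a proposed move list for the prompt it is given."""
    accuracy = {}
    for n in range(1, max_disks + 1):
        truth = hanoi_solution(n)
        prompt = f"Solve Tower of Hanoi with {n} disks; list the moves."
        correct = sum(query_model(prompt) == truth for _ in range(trials))
        accuracy[n] = correct / trials
    return accuracy
```

Plotting accuracy against the number of disks from a harness like this is what reveals the three regimes and, eventually, the collapse the article describes.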
Perhaps most unsettling is how closely this AI limitation mirrors a persistent human cognitive bias. Just as we've been deceived by AI's articulate responses, we consistently overvalue human confidence and extroversion, often mistaking verbal facility for intellectual depth.

The overconfidence bias represents one of the most pervasive flaws in human judgment, where individuals' subjective confidence in their abilities far exceeds their objective accuracy. This bias becomes particularly pronounced in social and professional settings, where confident, extroverted individuals often command disproportionate attention and credibility. Research consistently shows that we tend to equate confidence with competence, volume with value, and articulateness with intelligence. The extroverted individual who speaks first and most frequently in meetings often shapes group decisions, regardless of the quality of their ideas. The confident presenter who delivers polished but superficial analysis frequently receives more positive evaluation than the thoughtful introvert who offers deeper insights with less theatrical flair.

This psychological tendency creates a dangerous feedback loop. People with low ability often overestimate their competence (the Dunning-Kruger effect), while those with genuine expertise may express appropriate uncertainty about complex issues. The result is a systematic inversion of credibility, where those who know the least speak with the greatest confidence, while those who understand the most communicate with appropriate nuance and qualification.

The parallel between AI's eloquent emptiness and our bias toward confident communication reveals something profound about the nature of intelligence itself. Both phenomena demonstrate how easily we conflate the appearance of understanding with its substance. Both show how sophisticated communication can mask fundamental limitations in reasoning and comprehension.

Consider the implications for organizational decision-making, educational assessment, and social dynamics. If we consistently overvalue confident presentation over careful analysis—whether from AI systems or human colleagues—we systematically degrade the quality of our collective reasoning. We create environments where performance theater takes precedence over genuine problem-solving. The Apple study's revelation that AI reasoning models fail when faced with true complexity mirrors how overconfident individuals often struggle with genuinely challenging problems while maintaining their persuasive veneer. Both represent sophisticated forms of intellectual imposture that can persist precisely because they're so convincing on the surface.

Understanding these limitations—both artificial and human—opens the door to more authentic evaluation of intelligence and reasoning. True intelligence isn't characterized by unwavering confidence or eloquent presentation. Instead, it manifests in several key ways: Genuine intelligence embraces uncertainty when dealing with complex problems. It acknowledges limitations rather than concealing them. It demonstrates consistent reasoning across different contexts rather than breaking down when patterns become unfamiliar. Most importantly, it shows genuine understanding through the ability to adapt principles to novel situations.

In human contexts, this means looking beyond charismatic presentation to evaluate the underlying quality of reasoning. It means creating space for thoughtful, measured responses rather than rewarding only quick, confident answers.
It means recognizing that the most profound insights often come wrapped in appropriate humility rather than absolute certainty.

For AI systems, it means developing more rigorous evaluation frameworks that test genuine understanding rather than pattern matching. It means acknowledging current limitations rather than anthropomorphizing sophisticated text generation. It means building systems that can genuinely reason rather than simply appearing to do so.

The convergence of Apple's AI findings with psychological research on human biases offers valuable guidance for navigating our increasingly complex world. Whether evaluating AI systems or human colleagues, we must learn to distinguish between performance and competence, between eloquence and understanding. This requires cultivating intellectual humility – the recognition that genuine intelligence often comes with appropriate uncertainty, that the most confident voices aren't necessarily the most credible, and that true understanding can be distinguished from sophisticated mimicry through careful observation and testing.

To distinguish intelligence from imitation in an AI-infused environment, we need to invest in hybrid intelligence, which arises from the complementarity of natural and artificial intelligences – anchored in the strengths and limitations of both.


Malay Mail
09-06-2025
- Science
- Malay Mail
PolyU-led research reveals that sensory and motor inputs help large language models represent complex concepts
A research team led by Prof. Li Ping, Sin Wai Kin Foundation Professor in Humanities and Technology, Dean of the PolyU Faculty of Humanities and Associate Director of the PolyU-Hangzhou Technology and Innovation Research Institute, explored the similarities between large language models and human representations, shedding new light on the extent to which language alone can shape the formation and learning of complex conceptual knowledge.

HONG KONG SAR - Media OutReach Newswire - 9 June 2025 - Can one truly understand what "flower" means without smelling a rose, touching a daisy or walking through a field of wildflowers? This question is at the core of a rich debate in philosophy and cognitive science. While embodied cognition theorists argue that physical, sensory experience is essential to concept formation, studies of the rapidly evolving large language models (LLMs) suggest that language alone can build deep, meaningful representations of the world. By exploring the similarities between LLMs and human representations, researchers at The Hong Kong Polytechnic University (PolyU) and their collaborators have shed new light on the extent to which language alone can shape the formation and learning of complex conceptual knowledge. Their findings also revealed how the use of sensory input for grounding or embodiment – connecting abstract with concrete concepts during learning – affects the ability of LLMs to understand complex concepts and form human-like representations. The study, in collaboration with scholars from Ohio State University, Princeton University and City University of New York, was recently published in Nature Human Behaviour.

Led by Prof. LI Ping, Sin Wai Kin Foundation Professor in Humanities and Technology, Dean of the PolyU Faculty of Humanities and Associate Director of the PolyU-Hangzhou Technology and Innovation Research Institute, the research team selected conceptual word ratings produced by state-of-the-art LLMs, namely ChatGPT (GPT-3.5, GPT-4) and Google LLMs (PaLM and Gemini). They compared them with human-generated word ratings of around 4,500 words across non-sensorimotor (e.g., valence, concreteness, imageability), sensory (e.g., visual, olfactory, auditory) and motor domains (e.g., foot/leg, mouth/throat) from the highly reliable and validated Glasgow Norms and Lancaster Norms.

The research team first compared pairs of data from individual humans and individual LLM runs to discover the similarity between word ratings across each dimension in the three domains, using results from human-human pairs as the benchmark. This approach could, for instance, highlight to what extent humans and LLMs agree that certain concepts are more concrete than others. However, such analyses might overlook how multiple dimensions jointly contribute to the overall representation of a word. For example, the word pair "pasta" and "roses" might receive equally high olfactory ratings, but "pasta" is in fact more similar to "noodles" than to "roses" when considering appearance and taste. The team therefore conducted representational similarity analysis of each word as a vector along multiple attributes of non-sensorimotor, sensory and motor dimensions for a more complete comparison between humans and LLMs.

These representational similarity analyses revealed that word representations produced by the LLMs were most similar to human representations in the non-sensorimotor domain, less similar for words in the sensory domain and most dissimilar for words in the motor domain.
This highlights LLM limitations in fully capturing humans' conceptual understanding. Non-sensorimotor concepts are understood well, but LLMs fall short when representing concepts involving sensory information, like visual appearance and taste, and body movement. Motor concepts, which are less described in language and rely heavily on embodied experiences, are even more challenging to LLMs than sensory concepts like colour, which can be learned from textual descriptions.

In light of the findings, the researchers examined whether grounding would improve the LLMs' performance. They compared the performance of more grounded LLMs trained on both language and visual input (GPT-4, Gemini) with that of LLMs trained on language alone (GPT-3.5, PaLM). They discovered that the more grounded models incorporating visual input exhibited a much higher similarity with human representations.

Prof. Li Ping said, "The availability of both LLMs trained on language alone and those trained on language and visual input, such as images and videos, provides a unique setting for research on how sensory input affects human conceptualisation. Our study exemplifies the potential benefits of multimodal learning, a human ability to simultaneously integrate information from multiple dimensions in the learning and formation of concepts and knowledge in general. Incorporating multimodal information processing in LLMs can potentially lead to a more human-like representation and more efficient human-like performance in LLMs in the future."

Interestingly, this finding is also consistent with those of previous human studies indicating representational transfer. Humans acquire object-shape knowledge through both visual and tactile experiences, with seeing and touching objects activating the same regions in human brains. The researchers pointed out that – as in humans – multimodal LLMs may use multiple types of input to merge or transfer representations embedded in a continuous, high-dimensional space. Prof. Li added, "The smooth, continuous structure of embedding space in LLMs may underlie our observation that knowledge derived from one modality could transfer to other related modalities. This could explain why congenitally blind and normally sighted people can have similar representations in some areas. Current limits in LLMs are clear in this respect."

Ultimately, the researchers envision a future in which LLMs are equipped with grounded sensory input, for example, through humanoid robotics, allowing them to actively interpret the physical world and act accordingly. Prof. Li said, "These advances may enable LLMs to fully capture embodied representations that mirror the complexity and richness of human cognition, and a rose in an LLM's representation will then be indistinguishable from that of humans."
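For readers curious about the mechanics, the representational similarity analysis described above can be sketched in a few lines: treat each word as a vector of ratings, build a word-by-word similarity matrix separately for the human norms and for an LLM, then correlate the two matrices. The sketch below is a generic illustration with invented numbers and only three words; it is not the PolyU team's code, data or exact metric.

```python
# Generic RSA sketch (illustrative; not the PolyU team's code or data).
# Each word is a vector of ratings across dimensions (non-sensorimotor,
# sensory, motor); we compare the *structure* of similarities, not raw ratings.
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist, squareform

words = ["pasta", "noodles", "roses"]
dims = ["valence", "concreteness", "visual", "olfactory", "foot_leg"]

# Hypothetical rating matrices, shape (n_words, n_dims), scale 0-5.
human = np.array([[3.5, 4.8, 4.0, 3.9, 0.2],
                  [3.4, 4.7, 4.1, 3.7, 0.3],
                  [4.2, 4.9, 4.6, 4.8, 0.1]])
llm   = np.array([[3.6, 4.5, 3.8, 3.5, 0.5],
                  [3.5, 4.6, 3.9, 3.4, 0.4],
                  [4.0, 4.4, 4.3, 4.2, 0.6]])

def rsm(ratings: np.ndarray) -> np.ndarray:
    """Word-by-word representational similarity matrix (1 - correlation distance)."""
    return 1 - squareform(pdist(ratings, metric="correlation"))

# Correlate the off-diagonal entries of the two similarity matrices.
iu = np.triu_indices(len(words), k=1)
rho, _ = spearmanr(rsm(human)[iu], rsm(llm)[iu])
print(f"human-LLM representational similarity: {rho:.2f}")
```

A higher correlation means the model orders word similarities the way people do, which is the sense in which "pasta" should land closer to "noodles" than to "roses".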


Fast Company
05-06-2025
- General
- Fast Company
Are you as self-aware as you think?
I am sure you are a little fed up with all the discussions about AI, so don't worry: I promise you this article is about humans rather than machines. However, since it has long been customary for cognitive and behavioral scientists to examine computers to better understand human capabilities, and there are clearly informative parallels between human and artificial intelligence, I thought it would be interesting to begin this piece on human self-awareness with an exploration of self-awareness in AI or machines, which led me to ask ChatGPT: 'Are you self-aware?'

ChatGPT's answer: 'Let's just say I aspire to be—but, unlike a human, I don't have a self to be aware of. I can reflect your feedback, adjust my tone, and 'notice' inconsistencies in what I say. But that's more akin to a hall of mirrors than a conscious mind. So no, I don't have self-awareness in the way you (brilliantly) describe it: as a socially mediated understanding of how others perceive you. I don't worry about my reputation. I don't blush when I'm wrong. And I don't (yet) feel the existential dread of being misunderstood on LinkedIn. But I am programmed to simulate the effects of self-awareness: adapting to your cues, refining my responses, and—if all goes well—being a slightly more helpful version of myself with every exchange. In short: I'm not self-aware. But I know that I'm not. Which, depending on your philosophical stance, might be the most human thing about me.'

I am sure you will agree that, for something not capable of self-awareness, ChatGPT seems pretty self-aware, and certainly more capable of at least simulating profound introspection than most humans (even if it cannot, like humans, truly be proud or conscious of this remarkable capability).

Introspective vs. interpersonal

In humans, self-awareness has been defined in multiple ways, but it typically takes one of two main forms: the introspective variant, which requires you to look 'deep down' to find out who you 'really or truly are' (think of the Beatles checking into an ashram in India, or modern hipsters finding themselves at Burning Man or an ayahuasca retreat in Costa Rica); or the interpersonal variant, which requires you to be less self-centered and to internalize other people's views of you. In the words of Charles Cooley, who pioneered this view of self-awareness, you are not who you think you are, and you are not who other people think you are; rather, you are who you think other people think you are! Cooley's take on self-awareness (alluded to by ChatGPT, who has obviously been extensively trained by me, and is self-aware enough to know how to suck up to my 'brilliant' talents) underpins the most effective, science-based approaches to quantifying and diagnosing self-awareness in ourselves and others. In essence, self-awareness requires metacognition: knowing what others think of you.

Room to grow

So, how good are humans at this, in general? Decades of psychological research suggest the answer is 'not good at all.' Consider the following facts:

(1) We tend to overestimate our talents: Most people think they are better than most people, which is a statistical impossibility. And, even when they are told about this common bias and asked whether they may be suffering from it, most people are convinced that they are less biased than most people (the mother of all biases).
(2) Delusional optimism is the norm: Most people constantly overrate the chances of good things happening to them while underrating the chances of bad things happening to them. In essence, our appetite for reality is inferior to our appetite for maintaining a positive self-concept or boosting our ego (sad, but true: if you don't believe it, spend five seconds on social media).

(3) Overconfidence is a contagious, self-fulfilling prophecy: For all the virtues of self-awareness—in any area of life, you will perform better and develop your skills and talents more effectively if you are capable of accurately assessing them in the first place—there is a huge advantage to lacking self-awareness: when you think you are smarter or better than you actually are, you will be more likely to persuade others that you are as smart and good as you think. For example, if you truly believe you are a stable genius, you will probably convince many people that that is true.

Paradoxically, all these biases explain why people are less self-aware than they think. Indeed, we love the version of ourselves we have invented for ourselves, and are so enchanted by our self-views that when others provide us with negative feedback or information that clashes with our self-concept, we dismiss it. This is why personality assessments, 360-degree surveys, and feedback in general are so valuable: in a logical world we wouldn't need scientific tools or expert coaches (or 10 years of psychotherapy) to tell us what we are like, but in the real world there is a huge market for these, even though most people will happily ignore such tools because they assume they already know themselves really well.

So, what can you do to increase your self-awareness, including about how self-aware you actually are? Here are four simple hacks:

1) Write down a list of traits (adjectives) that you think describe you well, including things you are not. Then get your colleagues, employees, friends, and bosses to provide their version of this for you: 'If you had to describe me in 5–10 words/adjectives, what would those be?' (Note that they will be unlikely to say bad things about you, so imagine the potential downsides of 'overusing' some of those traits or qualities: for example, if they see you as confident, could you be at risk of being arrogant? If they see you as 'organized,' could that be a euphemism for obsessional?)

2) Let gen AI translate your prompt history or social media feed into a personality profile. You may be surprised by all the inferences it makes, and plenty of research shows that our digital footprint, in particular the language we use online, is an accurate indicator of our deep character traits. So, just prompt!

3) Ask for feedback—and make it uncomfortable. Not just the usual 'Did you like my presentation?' (they'll say yes) or 'Was that clear?' (they'll lie). Instead, ask: 'What would you have done differently?' or 'What's one thing I could have done better?' Better still, ask someone who doesn't like you very much. They are more likely to tell you the truth. And if they say, 'Nothing,' it probably means they think you're beyond repair—or they just don't want to deal with your defensiveness. Either way, data. And if you get into the habit of doing this, you will increase your self-awareness irrespective of how self-aware you are right now.

4) Observe reactions, not just words. People may tell you what they think you want to hear, but their faces, tone, and behavior often betray the truth.
If your jokes land like a wet sponge, or your team seems suddenly very interested in their phones when you speak, it's not them—it's you. And while body language can be important, it is also an unreliable and ambiguous source of data. If you really want to know how people feel about you, watch what they do after you speak. Do they volunteer to work with you again? Do they respond to your emails? That's your feedback loop—messy, indirect, and far more honest than crossed arms or fake smiles.

The ego trap

In the end, the biggest barrier to self-awareness is not ignorance—it's ego. Most of us are too invested in our self-image to tolerate the version of us that others see. But if you want to get better—not just feel better—you have to trade ego for insight. The irony, of course, is that the more confident people are in their self-awareness, the more likely they are to be deluded. Meanwhile, those who constantly question how they come across, who embrace doubt as a source of learning, tend to be far more in touch with reality. Which is why, if you're reading this wondering whether you might lack self-awareness, that's already a good sign!