Latest news with #skepticism


Bloomberg
a day ago
- Science
- Bloomberg
Trust in AI Strongest in China, Low-Income Nations, Study Shows
A United Nations study found a sharp global divide on attitudes toward artificial intelligence, with trust strongest in low-income countries and skepticism high in wealthier ones. More than 6 out of 10 people in developing nations said they have faith that AI systems serve the best interests of society, according to a UN Development Programme survey of 21 countries seen by Bloomberg News. In two-thirds of the countries surveyed, over half of respondents expressed some level of confidence that AI is being designed for good.


Reuters
03-06-2025
- General
- Reuters
Expert testimony in an era of skepticism of expertise
June 03, 2025 - The public discourse in America surrounding the value of expertise — specialized knowledge in a particular subject matter gained over years of study and experience — has markedly shifted over the past several years. Where individuals once looked to so-called "traditional institutions" — academia, old-guard print media, books, or network TV — for news and information, many now look to social media or alternative news outlets that align with a certain viewpoint or ideology. This shift in news and information consumption aligns with a growing skepticism toward expertise in everyday life, including skepticism of scientific, medical, and legal experts. While American courtrooms have mechanisms that insulate them from the shift away from reliance on experts, the jury pool may still be affected by this change. Because expert testimony is a critical aspect of jury trials, we provide recommendations for tailoring expert testimony to accommodate jurors' changing preferences and to overcome the skepticism that they may bring to the courtroom.

The change in preferred news and information sources has resulted in a pronounced difference in the way that average Americans receive and digest information. Today, approximately one in five Americans say they regularly get news from news influencers on social media, according to the Pew Research Center. Unlike traditional formats, information shared on social media sites is chopped into seconds-long snippets and presented by individuals of largely unknown or unverified qualifications, as reported in The New York Times, "For Gen Z, TikTok Is the New Search Engine," Sept. 16, 2022. As a result, an individual with only anecdotal knowledge of a complex issue such as ADHD ("TikTok Misinformation Is Warping Young People's Understanding of ADHD," ScienceAlert, March 21, 2025) may be presented opining on the condition alongside — and apparently co-equal to — a Ph.D. psychologist with decades of experience.
This contrasts with the traditional-news format in which only vetted "experts" were given a platform to speak to the masses. Commensurate with the evolution in the ways Americans consume news and media, there has been a recent systemic departure from reliance on expertise in everyday life. With access to unlimited information and online encouragement to "do your own research," Americans are placing less value on expertise, which manifests in multiple ways.

Americans are losing trust in science. A 2023 survey by the Pew Research Center showed that 57% of Americans say science has a mostly positive effect on society, down from 73% in January 2019. This loss of public trust in science matters because "[p]eople with greater trust in scientists are more likely to align their own beliefs and actions with expert guidance and understanding," the report concluded. Americans have also demonstrated a shift away from reliance on experts in the medical field, a shift that was accelerated by the COVID-19 pandemic. The Association of American Medical Colleges attributes it to several factors: people are overwhelmed by information, the country is increasingly socially divided and politically polarized, and trust in traditional institutions is eroding.

Changes in the way average Americans consume information and the loss of trust in science mean the jury pool is changing. Today's jurors, unlike those of 30 years ago, each have a powerful computer in their pockets that is connected via the internet to virtually all human knowledge (not to mention the budding field of AI). These jurors are much more likely than their predecessors to view themselves as capable of researching complex questions to gain expertise on a given subject. Jurors are normally instructed not to use outside sources for information, and there have been instances where such use has led to mistrial. Against this backdrop, what is a trial attorney to do?
Experts are important in the courtroom. They are the only avenue by which a jury can be presented with opinions based on scientific, technical, or other specialized knowledge. (See Federal Rules of Evidence 701 and 702.) It is also the expert's job to make complicated and often dry technical material both accessible and engaging to lay jurors. And experts matter to cases and case outcomes. For example, in the high-profile 2021 murder trial of Derek Chauvin for the death of George Floyd, the medical experts are widely considered to have been key to guiding the jury's understanding of the case, particularly Dr. Martin Tobin, a pulmonologist and critical care specialist, as reported in The New York Times. Dr. Tobin's testimony guided the jury through his analysis of hours of video footage of the arrest of Floyd, highlighting critical details in the videos. He also provided an anatomy lesson on the structure of the airway and the operation of the lungs, with instructions for jurors to place their hands on their own necks to illustrate the areas he was describing. Other high-profile cases in which expert testimony has played a critical role include the O.J. Simpson murder trial (forensic scientists) and various opioid litigations (public health and pharmaceutical industry experts). Patent litigators need effective expert testimony in every single one of their cases.

How do trial lawyers meet this critical need for expert testimony given the current skepticism toward expertise? In some ways, the courtroom is uniquely insulated from the shift away from reliance on experts. Rule 701 of the Federal Rules of Evidence safeguards against parties offering unreliable opinions from lay witnesses, and Rule 702 requires courts to undertake rigorous analyses of the reliability and relevance of opinions offered by expert witnesses. See "The New Daubert Standard: Implications of Amended FRE 702," JDSupra, May 17, 2024.
But the courtroom is not immune to changes in the way that society prefers to receive and digest information. Jurors today bring their habits for consuming information into the courtroom with them. They may also have shorter attention spans and strong convictions that complicated issues are simple and that they can figure them out on their own. Trial attorneys must adjust to accommodate these changing preferences; indeed, they should adapt to use the changing jury pool to their advantage.

Do not rely on an expert's credentials alone. Academic degrees and experience are important in establishing an expert's credibility and the admissibility of their testimony, but attorneys cannot rely on an expert's qualifications alone to persuade jurors. Jurors are not going to believe an expert just because of their degrees or the number of papers they have published. Like social media news providers, the best experts connect with both the material they are presenting and the audience, which comes across as more authentic. One benefit of not relying on credentials alone is that it opens the door to junior, more enthusiastic experts who may previously have been dismissed as lacking the gravitas assumed to come with age.

Create relatable expert narratives. No one likes listening to a seemingly endless march through boring, technical material, but certain areas of law (patent, products liability, etc.) can require the presentation of large amounts of technical data. Even worse than boredom, inauthenticity renders obvious "hired guns" especially risky in this environment of skepticism. In contrast, skilled experts can tell a story that not only makes the technical information understandable and relatable to the jury, but also gives them a reason to care about the outcome. What can the expert provide that a juror could not get from his or her own internet research?
The best expert testimony incorporates opportunities for the expert to interject personal experiences with the technology or field of expertise to make it more relatable, such as research that they care about personally or that solved a problem they faced in their own career. Effective expert testimony will also incorporate engaging material, such as testing that the jury can see with their own eyes or personalized tutorials on the technical issues at hand, like the one presented by the pulmonologist in the Chauvin trial. When jurors expect a feeling of proximity to the source of information, connection and authenticity are paramount.

Incorporate expert testimony into a cohesive, resonant story. Great trial lawyers know that even the most technically challenging cases require a resonant story that incorporates ethos (is your case morally right?), pathos (does your case connect on an emotional level?), and logos (does your case make sense?). Often these thematic points are conveyed through narratives that highlight sympathetic parties, such as a scrappy inventor who toiled to bring about her invention or an innocent party harmed by another's actions. Strategic use of expert testimony can amplify these thematic points. For example, an expert with the right experience can not only explain the technical details of a case, but can also share first-hand knowledge, such as the challenges faced in the field, the historical context of the dispute, and the moral factors at play. By carefully connecting this information to the overall themes of the case, the trial team can highlight the ethos, pathos, and logos of the story.

Implementing these recommendations requires investment both in the selection of experts at the beginning of a case and in the detailed planning for expert testimony at trial.
The benefit of that investment is a compelling trial story that meets jurors where they are and presents critical expert testimony in a way that can overcome any skepticism they may bring to the courtroom.


Asharq Al-Awsat
30-05-2025
- Business
- Asharq Al-Awsat
Generative AI's Most Prominent Skeptic Doubles Down
Two and a half years since ChatGPT rocked the world, scientist and writer Gary Marcus remains generative artificial intelligence's great skeptic, offering a counter-narrative to Silicon Valley's AI true believers. Marcus became a prominent figure of the AI revolution in 2023, when he sat beside OpenAI chief Sam Altman at a Senate hearing in Washington as both men urged politicians to take the technology seriously and consider regulation, AFP said.

Much has changed since then. Altman has abandoned his calls for caution, instead teaming up with Japan's SoftBank and funds in the Middle East to propel his company to sky-high valuations as he tries to make ChatGPT the next era-defining tech behemoth. "Sam's not getting money anymore from the Silicon Valley establishment," and his seeking funding from abroad is a sign of "desperation," Marcus told AFP on the sidelines of the Web Summit in Vancouver, Canada.

Marcus's criticism centers on a fundamental belief: generative AI, the predictive technology that churns out seemingly human-level content, is simply too flawed to be transformative. The large language models (LLMs) that power these capabilities are inherently broken, he argues, and will never deliver on Silicon Valley's grand promises. "I'm skeptical of AI as it is currently practiced," he said. "I think AI could have tremendous value, but LLMs are not the way there. And I think the companies running it are not mostly the best people in the world."

His skepticism stands in stark contrast to the prevailing mood at the Web Summit, where most conversations among 15,000 attendees focused on generative AI's seemingly infinite promise. Many believe humanity stands on the cusp of achieving superintelligence, or artificial general intelligence (AGI): technology that could match and even surpass human capability. That optimism has driven OpenAI's valuation to $300 billion, unprecedented for a startup, with billionaire Elon Musk's xAI racing to keep pace.
Yet for all the hype, the practical gains remain limited. The technology excels mainly at coding assistance for programmers and text generation for office work. AI-created images, while often entertaining, serve primarily as memes or deepfakes, offering little obvious benefit to society or business. Marcus, a longtime New York University professor, champions a fundamentally different approach to building AI -- one he believes might actually achieve human-level intelligence in ways that current generative AI never will. "One consequence of going all-in on LLMs is that any alternative approach that might be better gets starved out," he explained. This tunnel vision will "cause a delay in getting to AI that can help us beyond just coding -- a waste of resources."

'Right answers matter'

Instead, Marcus advocates for neurosymbolic AI, an approach that attempts to rebuild human logic artificially rather than simply training computer models on vast datasets, as is done with ChatGPT and similar products like Google's Gemini or Anthropic's Claude. He dismisses fears that generative AI will eliminate white-collar jobs, citing a simple reality: "There are too many white-collar jobs where getting the right answer actually matters." This points to AI's most persistent problem: hallucinations, the technology's well-documented tendency to produce confident-sounding mistakes. Even AI's strongest advocates acknowledge this flaw may be impossible to eliminate. Marcus recalls a telling exchange from 2023 with LinkedIn founder Reid Hoffman, a Silicon Valley heavyweight: "He bet me any amount of money that hallucinations would go away in three months. I offered him $100,000 and he wouldn't take the bet."

Looking ahead, Marcus warns of a darker consequence once investors realize generative AI's limitations. Companies like OpenAI will inevitably monetize their most valuable asset: user data.
"The people who put in all this money will want their returns, and I think that's leading them toward surveillance," he said, pointing to Orwellian risks for society. "They have all this private data, so they can sell that as a consolation prize." Marcus acknowledges that generative AI will find useful applications in areas where occasional errors don't matter much. "They're very useful for auto-complete on steroids: coding, brainstorming, and stuff like that," he said. "But nobody's going to make much money off it because they're expensive to run, and everybody has the same product."

Yahoo
23-05-2025
- Health
- Yahoo
'My Father Told Me...': RFK Jr. Makes Wild Warning Undermining Expert Health Advice
Health and Human Services Secretary Robert F. Kennedy Jr. on Thursday said assessing health guidance is similar to researching baby strollers as a new mom, urging Americans to 'be skeptical of authority' while serving in a top Cabinet position.

CNN's Kaitlan Collins asked Kennedy if he stood by his earlier comment that people should not be taking medical advice from him, even though his job involves communicating health guidance and recommendations based on his department's expertise. 'Yeah, absolutely,' Kennedy said. 'I'm somebody who is not a physician... and they should also be skeptical about any medical advice. They need to do their own research.'

Kennedy added that when 'you're a mom, you do your own research on your baby carriage, on your baby bottles, on your baby formula,' suggesting a similar approach should be taken when assessing medical advice. When Collins pointed out that most mothers do not have medical degrees and would rather rely on their physicians, Kennedy claimed that health experts in a democracy 'are subject to all kinds of biases.' 'One of the responsibilities of living in a democracy is to do your own research and to make up your own mind,' he added.

Kennedy also recalled a piece of advice from his father, suggesting it was relevant to their discussion. 'I would say, be skeptical of authority. My father told me that when I was a young kid, people in authority lie,' Kennedy said, baselessly claiming that 'critical thinking was shut down' during the COVID-19 pandemic.

Kennedy, a prominent vaccine skeptic, was nominated to serve in one of the country's top jobs by President Donald Trump, raising eyebrows during a House subcommittee hearing last week with his answer to a question about whether he would vaccinate his children against measles if they were still young. 'I don't think people should be taking advice, medical advice from me,' he said.
'I think if I answer that question directly that it will seem like I'm giving advice to other people, and I don't want to be doing that,' he continued. Kennedy, though, has not held back from lending credence to debunked conspiracy theories, including falsely suggesting that vaccines are linked to autism. While his Making America Healthy Again report, released on Thursday, did not touch on that specific claim, it still hinted that the growth of the immunization schedule for children may be detrimental to them, even though childhood vaccination saves millions of lives every year. 'Vaccines benefit children by protecting them from infectious diseases. But as with any medicine, vaccines can have side effects that must be balanced against their benefits,' the report reads. 'Parents should be fully informed of the benefits and risks of vaccines.'