Latest news with #deepLearning
Yahoo
10 hours ago
- Yahoo
'Godfather of AI' believes it's unsafe – but here's how he plans to fix the tech
This week the US Federal Bureau of Investigation revealed two men suspected of bombing a fertility clinic in California last month allegedly used artificial intelligence (AI) to obtain bomb-making instructions. The FBI did not disclose the name of the AI program in question.

This brings into sharp focus the urgent need to make AI safer. Currently we are living in the 'wild west' era of AI, where companies are fiercely competing to develop the fastest and most entertaining AI systems. Each company wants to outdo competitors and claim the top spot. This intense competition often leads to intentional or unintentional shortcuts – especially when it comes to safety.

Coincidentally, at around the same time as the FBI's revelation, one of the godfathers of modern AI, Canadian computer science professor Yoshua Bengio, launched a new nonprofit organisation dedicated to developing a new AI model specifically designed to be safer than other AI models – and to target those that cause social harm. So what is Bengio's new AI model? And will it actually protect the world from AI-facilitated harm?

In 2018, Bengio, alongside his colleagues Yann LeCun and Geoffrey Hinton, won the Turing Award for groundbreaking research they had published three years earlier on deep learning. A branch of machine learning, deep learning attempts to mimic the processes of the human brain by using artificial neural networks to learn from computational data and make predictions.

Bengio's new nonprofit organisation, LawZero, is developing 'Scientist AI'. Bengio has said this model will be 'honest and not deceptive', and incorporate safety-by-design principles. According to a preprint paper released online earlier this year, Scientist AI will differ from current AI systems in two key ways. First, it can assess and communicate its confidence level in its answers, helping to reduce the problem of AI giving overly confident and incorrect responses. Second, it can explain its reasoning to humans, allowing its conclusions to be evaluated and tested for accuracy.

Interestingly, older AI systems had this feature. But in the rush for speed and new approaches, many modern AI models can't explain their decisions. Their developers have sacrificed explainability for speed.
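As a toy illustration of the first idea, consider selective prediction: a model that reports a confidence score alongside each answer and abstains below a threshold is one common way to curb overconfident responses. The sketch below is a minimal, hypothetical example; the `StubModel`, its `ask` method, and the threshold are assumptions for illustration, not details from LawZero's paper.

```python
from dataclasses import dataclass
import random

@dataclass
class Answer:
    text: str
    confidence: float  # the model's own estimate that its answer is correct

class StubModel:
    """Hypothetical stand-in for a model that reports calibrated confidence."""
    def ask(self, question: str) -> Answer:
        return Answer(text="42", confidence=random.uniform(0.5, 1.0))

def answer_with_abstention(model, question: str, threshold: float = 0.8) -> str:
    """Return the model's answer only if its reported confidence clears the
    threshold; otherwise abstain instead of guessing confidently."""
    ans = model.ask(question)
    if ans.confidence >= threshold:
        return f"{ans.text} (confidence {ans.confidence:.0%})"
    return f"Not confident enough to answer (confidence {ans.confidence:.0%})."

print(answer_with_abstention(StubModel(), "What is 6 x 7?"))
```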
Bengio also intends 'Scientist AI' to act as a guardrail against unsafe AI. It could monitor other, less reliable and harmful AI systems – essentially fighting fire with fire. This may be the only viable solution to improve AI safety. Humans cannot properly monitor systems such as ChatGPT, which handle over a billion queries daily. Only another AI can manage this scale. Using an AI system against other AI systems is not just a sci-fi concept – it's a common practice in research to compare and test different levels of intelligence in AI systems.

Large language models and machine learning are just small parts of today's AI landscape. Another key feature Bengio's team is adding to Scientist AI is a 'world model', which brings certainty and explainability. Just as humans make decisions based on their understanding of the world, AI needs a similar model to function effectively.

The absence of a world model in current AI models is clear. One well-known example is the 'hand problem': most of today's AI models can imitate the appearance of hands but cannot replicate natural hand movements, because they lack an understanding of the physics – a world model – behind them. Another example is how models such as ChatGPT struggle with chess, failing to win and even making illegal moves. This is despite simpler AI systems, which do contain a model of the 'world' of chess, beating even the best human players. These issues stem from the lack of a foundational world model in these systems, which are not inherently designed to model the dynamics of the real world.

Bengio is on the right track, aiming to build safer, more trustworthy AI by combining large language models with other AI technologies. However, his journey isn't going to be easy. LawZero's US$30 million in funding is small compared to efforts such as the US$500 billion project announced by US President Donald Trump earlier this year to accelerate the development of AI. Making LawZero's task harder is the fact that Scientist AI – like any other AI project – needs huge amounts of data to be powerful, and most data are controlled by major tech companies.

There's also an outstanding question. Even if Bengio can build an AI system that does everything he says it can, how is it going to be able to control other systems that might be causing harm?

Still, this project, with talented researchers behind it, could spark a movement toward a future where AI truly helps humans thrive. If successful, it could set new expectations for safe AI, motivating researchers, developers, and policymakers to prioritise safety. Perhaps if we had taken similar action when social media first emerged, we would have a safer online environment for young people's mental health. And maybe, if Scientist AI had already been in place, it could have prevented people with harmful intentions from accessing dangerous information with the help of AI systems.

Armin Chitizadeh is a Lecturer in the School of Computer Science at the University of Sydney. This article is republished from The Conversation under a Creative Commons license. Read the original article


Zawya
a day ago
- Science
- Zawya
AI and the crafting of parallel history
Used wisely, AI rekindles the Promethean spark, not to burn, but to illuminate the dark corridors of our shared past and guide us towards paths once unseen.

With the rise of intelligent algorithms capable of generating language and imagery in ways that mirror the human mind, a new realm of historical imagination has emerged, what can be called the "galaxy of historical possibilities". This domain of counterfactual history asks: what if events had unfolded differently? What if Julius Caesar had not been assassinated, or the Arabs had triumphed at pivotal battles? Such questions, once confined to speculative philosophy, have gained new legitimacy through artificial intelligence (AI) and its simulation capabilities.

AI, powered by deep learning and neural networks, can now be trained on massive datasets of historical, economic and demographic information. It can simulate countless alternate realities, tracking how a single altered event might cascade through time like a domino effect. These simulations do not recreate history as it was, but they revive possible histories grounded in plausible models and precise probabilities. While traditional historians rely on artefacts, documents and testimonies, AI adds a fourth dimension: the "simulated probability", a causality-based narrative framework that allows a tweak in one event to reveal systemic historical shifts.
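A toy sketch of that "tweak one event, watch the cascade" idea, purely illustrative: the events, probabilities and dependencies below are invented for the example and drawn from no historical model. It simply perturbs one upstream event and Monte Carlo-samples the downstream consequences.

```python
import random

def simulate(caesar_assassinated: bool, trials: int = 100_000) -> float:
    """Estimate P(republic collapses) under one fixed counterfactual premise.
    Each event's probability depends on whether its parent event occurred:
    a crude stand-in for the 'domino effect' of one altered event."""
    collapses = 0
    for _ in range(trials):
        civil_war = random.random() < (0.8 if caesar_assassinated else 0.4)
        collapse = random.random() < (0.7 if civil_war else 0.2)
        collapses += collapse
    return collapses / trials

if __name__ == "__main__":
    print("Actual premise:  ", simulate(caesar_assassinated=True))
    print("Counterfactual:  ", simulate(caesar_assassinated=False))
```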
Sceptics may argue this overstates AI's power. Indeed, AI does not possess conscious knowledge of the past; it merely generates outcomes from patterns. However, its value lies in offering a hypothetical mirror, an imaginative yet logical contrast to actual history, revealing how contingent the course of human events truly is.

This approach reshapes how societies perceive history in three ways. First, it breaks the illusion of historical determinism, revealing that major outcomes are not inevitable but the result of human choices. This awakens political agency, showing individuals that the present is not a dead end but an open frontier. Second, it empowers historically marginalised peoples. For nations colonised or erased from dominant narratives, counterfactual simulations provide moral consolation and restore symbolic justice. It's not about rewriting history but imagining the dignity that was denied. Third, in political science, these simulated models become testing grounds for policy, revealing dangers or opportunities before real-world decisions are made.

Such developments provoke a profound question: what is history? Is it a sequence of necessary outcomes dictated by natural and economic systems? Or is it a dance of probabilities around human decisions? AI cannot fully answer this, but it radically expands our perception. It allows us to envision history as a topological field of overlapping timelines, not a straight line. This view aligns with both philosophical critiques of linear progress and quantum physics' interpretation of reality as a wave function collapsing into one observable event. In this frame, parallel histories become a scientifically plausible concept.

However, access to 'alternative histories' is not always neutral. Governments or corporations may misuse such simulations to construct persuasive, pseudo-scientific propaganda. By manipulating data inputs or assumptions, they can present a desired narrative as the "most likely" future, shaping public opinion through visually and linguistically compelling stories. This creates a risk that AI becomes not a tool for knowledge, but a factory of illusions. Ethical protocols must be established, requiring transparency in data, clarity about assumptions and openness to peer review, to prevent such misuse.

Ultimately, humanity has given AI the unprecedented ability to dissect and recompose time. Not to escape the past, but to reinterpret it. Counterfactual simulations are more than narrative play, they are intellectual tools that reposition humans at the centre of historical agency. If, as philosopher Edmund Husserl said, philosophy is 'the science of absolute beginnings', then AI may be the technological key to rethinking history not as a record of what was, but as a spectrum of what could have been and what may yet still be. By transforming imaginative simulation into a mental laboratory, AI enhances our capacity to ask deeper questions, exercise creative freedom and prepare future generations to envision less tragic, more just futures.

Used wisely, AI rekindles the Promethean spark, not to burn, but to illuminate the dark corridors of our shared past and guide us towards paths once unseen.

© 2022 All rights reserved for Oman Establishment for Press, Publication and Advertising (OEPPA). Provided by SyndiGate Media Inc.


Forbes
02-06-2025
- Health
- Forbes
Stanford Initiative Leverages AI To Robustly Transform Mental Health Research And Therapy
In today's column, I explore the latest efforts to transform mental health research and therapy into being less subjective and more objective in its ongoing pursuits. This kind of transformation is especially spurred via the use of advanced AI, including leveraging deep learning (DL), machine learning (ML), artificial neural networks (ANN), generative AI, and large language models (LLMs). It is a vital pursuit well worth undertaking. Expectations are strong that great results and new insights will be gleaned accordingly. Let's talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject. There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS 60 Minutes, see the link here.

If you are new to the topic of AI for mental health, you might want to consider reading my recent analysis that also recounts a highly innovative initiative at the Stanford University Department of Psychiatry and Behavioral Sciences called AI4MH, see the link here. Indeed, today's discussion is substantively shaped around a recent seminar conducted by AI4MH.

Let's begin with a cursory unpacking of what is generally thought of as a type of rivalry or balance amid being subjective versus objective. The conceptualization of 'objective' consists of a quality or property intended to convey that there are hard facts, proven principles and precepts, clear-cut observations, reproducible results, and other highly tangible systemic elements at play. In contrast, 'subjective' is characterized as largely speculative, sentiment-based, open to interpretation, and otherwise less ironclad.

Where does the consideration of subjective vs. objective often arise? You might be surprised to know that the question of subjective versus objective has a longstanding root in two fields of endeavor, namely psychology and physics. Yes, it turns out that psychology and physics have historically been domains that richly illuminate the dialogue regarding subjective versus objective. The general sense is that people perceive physics as tilted more toward the objective and less toward the subjective, while the perception of psychology is that it is a field angled toward the subjective side more so than the objective.

Turn back the clock to the 1890s, in which the famed Danish professor Harald Høffding made these notable points about psychology and physics (source: 'Outlines of Psychology' by Harald Høffding, London: Macmillan, 1891): You might notice the rather stunning point that psychology and physics are themselves inclusive of everything that could potentially be the subject of human research. That's amazingly alluring to those in the psychology and physics fields, while perhaps not quite as affable for all other domains.

In any case, on the thorny matter of subjective versus objective in the psychology realm, we can recall Pavlov's remarks made in 1930: Pavlov's comments reflect a longstanding aspiration of the field of psychology to ascertain and verify bona fide means to augment the subjective aspects of mental health analysis with more exacting objective measures and precepts. The final word on this goes to Albert Einstein as to the heady matter: It's always an uphill battle to refute remarks made by Einstein, so let's take them as they are.

Shifting gears, the topic of psychology and the challenging properties of subjective vs. objective was a major theme during a recent seminar undertaken by Stanford University on May 28, 2025, at the Stanford campus. Conducted by the initiative known as AI4MH (Artificial Intelligence for Mental Health), see the link here, within the Stanford School of Medicine, Department of Psychiatry and Behavioral Sciences, the session was entitled 'Insights from AI4MH Faculty: Transforming Mental Health Research with AI' and a video recording of the session can be found at the link here. The moderator and the three speakers consisted of: I attended the session and will provide a recap and analysis here. In addition, I opted to look at various research papers by the speakers. I encompass selected aspects from the papers to further whet your appetite for learning more about the weighty insights provided during the seminar and based on their respective in-depth research studies. I'll proceed next in the same sequence as occurred during the seminar, covering each speaker one at a time, and then offer some concluding thoughts.

The human brain consists of around 86 billion neurons and approximately 100 trillion synapses. This elaborate organ in our noggin is often referred to in the AI field as the said-to-be wetware of humans. That's a cheeky send-up of computer-based hardware and software. Somehow, in ways that we still aren't quite sure of, the human brain or wetware gives rise to our minds and our ability to think. In turn, we are guided in what we do and how we act via the miracle of what's happening in our minds. For my related discussion about the Theory of Mind (ToM) and its relationship to the AI realm, see the link here.

In the presentation by Dr. Kaustubh Supekar, he keenly pointed out that the brain-mind indubitably is the source of our mental health and ought to be closely studied when trying to ascertain the causes of mental disorders. He and his team are using AI to derive brain fingerprints that can be associated with mental disorders. It's quite exciting to envision that we could eventually end up with a tight mapping between the inner workings of the brain-mind and how mental disorders manifest within the brain-mind. Imagine the incredible possibilities of anticipating, remedying, or at least aiding those incurring mental disorders.

In case you aren't familiar with the formal definition of what mental disorders consist of, I covered the DSM-5 guidelines in a posting on AI-driven therapy using DSM-5, see the link here, and included this definition from the well-known manual: DSM-5 is a widely accepted standard and is an acronym for the Diagnostic and Statistical Manual of Mental Disorders fifth edition, which is promulgated by the American Psychiatric Association (APA).
The DSM-5 guidebook or manual serves as a venerated professional reference for practicing mental health professionals.

In a recent research article that Dr. Kaustubh Supekar was the lead author of, entitled 'Robust And Replicable Functional Brain Signatures Of 22q11.2 Deletion Syndrome And Associated Psychosis: A Deep Neural Network-Based Multi-Cohort Study' by Kaustubh Supekar, Carlo de los Angeles, Srikanth Ryali, Leila Kushan, Charlie Schleifer, Gabriela Repetto, Nicolas Crossley, Tony Simon, Carrie Bearden, and Vinod Menon, Molecular Psychiatry, April 2024, these salient points were made (excerpts): The study aimed to find relationships between those having a particular chromosomal omission, known as DiGeorge syndrome or technically as 22q11.2 deletion syndrome (DS), and linking the brain patterns of those individuals to common psychosis symptoms. The brain-related data was examined via the use of an AI-based artificial neural network (a specialized version involving space-time or spatiotemporal analyses underlying the data, referred to as stDNN). This and other such studies are significant steps toward the long-sought goal of mapping brain-mind formulations to the nature of mental disorders.
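The column doesn't spell out the stDNN architecture, but the general shape of a spatiotemporal classifier over brain-imaging data can be sketched roughly as follows. This is a minimal stand-in, assuming region-of-interest (ROI) time series as input and a binary diagnosis label; none of the layer sizes, names, or shapes come from the study itself.

```python
import torch
import torch.nn as nn

class TinySTDNN(nn.Module):
    """Illustrative spatiotemporal classifier: 1D convolutions mix signals
    across brain regions (space) while sliding over time, then a linear head
    maps the pooled result to a class score. Not the study's actual model."""
    def __init__(self, n_regions: int = 246, n_classes: int = 2):
        super().__init__()
        self.temporal = nn.Sequential(
            nn.Conv1d(n_regions, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(64, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_regions, n_timepoints) -- one ROI signal per region
        return self.head(self.temporal(x).squeeze(-1))

# Smoke test on random data shaped like an fMRI ROI time series.
model = TinySTDNN()
scores = model(torch.randn(4, 246, 200))  # 4 scans, 246 regions, 200 timepoints
print(scores.shape)  # torch.Size([4, 2])
```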
Faithful readers might recall my prediction that ambient intelligence (AmI) would be a rapidly expanding field and will inevitably and dramatically change the nature of our lives, see the link here. What is ambient intelligence? Simply stated, it is a mishmash term depicting the use of AI to bring together data from electronic devices and do so with a focus on detecting and reacting to human presence. This catchphrase got its start in the 1990s when it was considered state-of-the-art to have mobile devices and the Internet of Things (IoT) was gaining prominence. It is a crucial aspect of ubiquitous computing.

Ambient intelligence has made strong strides due to advances in AI and advances in ubiquitous technologies. Costs are getting lower and lower. Embedded devices are here and there, along with the devices seemingly invisible to those within their scope. The AI enables adaptability and personalization.

In the second presentation of the AI4MH seminar, Dr. Ehsan Adeli notably pointed out that we can make use of exhibited behaviors to try and aid the detection and mitigation of mental health issues. But how can we capture exhibited behavior? One compelling answer is to lean into ambient intelligence.

In a research article that he served as a co-author of, entitled 'Ethical Issues In Using Ambient Intelligence In Healthcare Settings' by Nicole Martinez-Martin, Zelun Luo, Amit Kaushal, Ehsan Adeli, Albert Haque, Sara S Kelly, Sarah Wieten, Mildred K Cho, David Magnus, Li Fei-Fei, Kevin Schulman, and Arnold Milstein, Lancet Digital Health, December 2020, these salient points were made (excerpts): The idea is that by observing the exhibited behavior of a person, we can potentially link this to their mental health status. Furthermore, via the appropriate use of AI, the AI might be able to detect when someone is having mental health difficulties or perhaps incurring an actual mental health disorder. The AI could in turn notify clinicians or others, including the person themselves, as suitably determined. In a sense, this opens the door to undertaking continuous assessment of neuropsychiatric symptoms (NPS). Of course, as directly noted by Dr. Ehsan Adeli, the enabling of AmI for this purpose brings with it the importance of considering aspects of privacy and other AI ethics and patient ethics caveats underlying when best to use these growing capabilities.

Being evidence-based is a hot topic, aptly so. The trend toward evidence-based medicine and healthcare has been ongoing and aims to improve both research and practice, doing so in a classic less-subjective, more-objective systematic way. The American Psychological Association (APA) defines evidence-based practice in psychology (EBPP) as 'the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences.'

The third speaker in the AI4MH seminar was Dr. Shannon Wiltsey Stirman, a top researcher with a focus on how to facilitate the high-quality delivery of evidence-based psychosocial interventions (EBPs). Among her research work is a framework for identifying and classifying adaptations made to EBPs in routine care. On the matter of frameworks, Dr. Stirman's presentation included a discussion about a newly formulated framework associated with evaluating AI-based mental health apps. The innovative and much-needed framework had been devised with several of her fellow researchers.

In a co-authored paper entitled 'Readiness Evaluation for Artificial Intelligence-Mental Health Deployment and Implementation (READI): A Review and Proposed Framework' by Elizabeth Stade, Johannes Eichstaedt, Jane Kim, and Shannon Wiltsey Stirman, Technology, Mind, and Behavior, March 2025, these salient points were made (excerpts): Longtime readers know that I have been calling for an assessment framework like this for quite a while. For example, when OpenAI first allowed ChatGPT users to craft customized GPTs, there was a sudden surge in GPT-based applets that purportedly performed mental health therapy via the use of ChatGPT. In my review of those GPTs, I pointed out that many were not only vacuous, but they were at times dangerous in the sense that the advice being dispensed by these wantonly shaped ChatGPT applets was erroneous and misguided (see my extensive coverage at the link here and the link here). I have also repeatedly applauded the FTC for going after those who tout false claims about their AI for mental health apps (see my indication at the link here).

Just about anyone can readily stand up a generative AI app that they claim is suitable for mental health therapy. They might have zero experience, zero training, and otherwise be completely absent any credentials associated with a mental health professional. Meanwhile, consumers are at a loss to know which mental health apps are prudent and useful and which ones are problematic and ought to be avoided. It is for this reason that I have sought a kind of Consumer Reports scoring that might be used to differentiate AI mental health apps (see my discussion at the link here). The new READI framework is a substantial step in that profoundly needed direction.

Moving the needle on the subjective vs. objective preponderance in psychology is going to take earnest and undeterred energy and attention. Newbie researchers especially are encouraged to pursue these novel efforts. Seasoned researchers might consider adjusting their usual methods to also incorporate AI, when suitable. The use of AI can be a handy tool and demonstrative aid.
I've delineated the many ways that AI has already inspired and assisted psychology, and likewise, how psychology has aided and inspired advances in AI, see the link here for that discussion. There is a great deal at stake in terms of transforming psychology and the behavioral sciences as far forward as we can aim to achieve. Besides bolstering mental health, which certainly is crucial and laudable, Charles Darwin made an even grander point in his 'On the Origin of Species by Means of Natural Selection' in 1859: You see, the stakes also include revealing the origins of humankind and our storied history. Boom, drop the mic.

Some might say it is ironic that AI as a computing machine would potentially have a hand in the discovery of that origin, but it isn't that far-fetched, since AI is in fact made by the hand of humanity. It's our self-devised tool in an expanding toolkit to understand the world. And that toolkit gladly includes two very favored domains: the close and dear cousins of psychology and physics.


WIRED
01-06-2025
- Health
- WIRED
How to Make AI Faster and Smarter—With a Little Help from Physics
Jun 1, 2025 7:00 AM

Rose Yu has drawn on the principles of fluid dynamics to improve deep learning systems that predict traffic, model the climate, and stabilize drones during flight. Photograph: Peggy Peattie for Quanta Magazine

The original version of this story appeared in Quanta Magazine.

When she was 10 years old, Rose Yu got a birthday present that would change her life – and, potentially, the way we study physics. Her uncle got her a computer. That was a rare commodity in China 25 years ago, and the gift did not go unused. At first, Yu mainly played computer games, but in middle school she won an award for web design. It was the first of many computer-related honors. Yu went on to major in computer science at Zhejiang University, where she won a prize for innovative research. For her graduate studies, she chose the University of Southern California (USC), partly because the same uncle – who was the only person she knew in the United States – was then working at the Jet Propulsion Laboratory in nearby Pasadena. Yu earned her doctorate in 2017 with an award for best dissertation. Her most recent honor came in January, when President Joe Biden, in his last week in office, gave her a Presidential Early Career Award.

Yu, now an associate professor at the University of California, San Diego (UCSD), is a leader in a field known as 'physics-guided deep learning,' having spent years incorporating our knowledge of physics into artificial neural networks. The work has not only introduced novel techniques for building and training these systems, but it's also allowed her to make progress on several real-world applications. She has drawn on principles of fluid dynamics to improve traffic predictions, sped up simulations of turbulence to enhance our understanding of hurricanes, and devised tools that helped predict the spread of Covid-19. This work has brought Yu closer to her grand dream – deploying a suite of digital lab assistants that she calls AI Scientist. She now envisions what she calls a 'partnership' between human researchers and AI tools, fully based on the tenets of physics and thus capable of yielding new scientific insights. Combining inputs from a team of such assistants, in her opinion, may be the best way to boost the discovery process.

Quanta spoke with Yu about turbulence in its many guises, how to get more out of AI, and how it might get us out of urban gridlock. The interview has been condensed and edited for clarity.

Yu on the UCSD campus, where she is an associate professor. Photograph: Peggy Peattie for Quanta Magazine

When did you first try to combine physics with deep learning?

Rose Yu: It started with traffic. I was a grad student at USC, and the campus is right near the intersection of I-10 and I-110. To get anywhere, you have to go through a lot of traffic, which can be very annoying. In 2016, I began to wonder whether I could do anything about this. Deep learning – which uses multilayered neural networks to elicit patterns from data – was getting really hot back then. There was already a lot of excitement about applications in image classification, but images are just static things. I wondered whether deep learning could help with problems where things are constantly changing. I wasn't the first person to consider this, but my colleagues and I did find a novel way of framing the problem.

What was your new approach?

First, we thought of traffic in terms of the physical process of diffusion. In our model, the flow of traffic over a network of roads is analogous to the flow of fluids over a surface – motions that are governed by the laws of fluid dynamics. But our main innovation was to think of traffic as a graph, from the mathematical field of graph theory. Sensors, which monitor traffic on highways and other roads, serve as the nodes of this graph. And the edges of the graph represent the roads (and distances) between those sensors.

Yu's interest in computers began with a gift for her 10th birthday. Photograph: Peggy Peattie for Quanta Magazine

A graph provides a snapshot of the entire road network at a given time, telling you the average velocity of cars at every point on the graph. When you put together a series of these snapshots, spaced five minutes apart, you get a good picture of how traffic is evolving. From there, you can try to predict what will happen in the future.
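A rough sketch of that framing, not the production model: represent the sensor network as a weighted adjacency matrix and propagate speeds along the edges, a crude one-step "diffusion" of traffic state over the graph. Everything here (the sensors, edge weights, speeds, and blending factor) is invented for illustration.

```python
import numpy as np

# Toy road network: 4 sensors (nodes); entry [i, j] weights the road between
# sensors j and i, e.g. derived from the distance between them. Invented numbers.
A = np.array([
    [0.0, 0.5, 0.2, 0.0],
    [0.5, 0.0, 0.3, 0.1],
    [0.2, 0.3, 0.0, 0.4],
    [0.0, 0.1, 0.4, 0.0],
])
W = A / A.sum(axis=1, keepdims=True)  # row-normalize: a random-walk transition matrix

speeds = np.array([62.0, 45.0, 30.0, 58.0])  # mph at each sensor, one 5-minute snapshot

# One diffusion step: each node's next state blends its own speed with its
# neighbors' speeds -- the discrete analogue of fluid flowing on the graph.
alpha = 0.6  # how much of the current state to keep (assumed)
next_speeds = alpha * speeds + (1 - alpha) * W @ speeds
print(next_speeds)
```

In the actual research line this diffusion operation is learned rather than fixed and is stacked inside a recurrent forecasting network; the point here is only the graph-plus-snapshots framing.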
The big challenge in deep learning is that you need a lot of data to train the neural network. Fortunately, one of my advisers, Cyrus Shahabi, had worked for many years on the problem of traffic forecasting, and he'd accumulated a vast amount of LA traffic data that I had access to.

So how good were your predictions?

Prior to our work, people could only make traffic forecasts that were reliable for about 15 minutes. Our forecasts were valid for one hour – a big improvement. Our code was deployed by Google Maps in 2018. A bit later, Google invited me to become a visiting researcher.

That's about when you began working on climate modeling, right?

Yes, that started in 2018, when I gave a talk at the Lawrence Berkeley National Laboratory. Afterward, I spoke with scientists there, and we looked for a problem that would be a good testbed for physics-guided deep learning. We settled on predicting the evolution of turbulent flow, which is a key factor in climate models, as well as an area of major uncertainty. Familiar examples of turbulence are the swirling patterns you see after pouring milk into a cup of coffee and giving it a stir. In the oceans, swirls like this can span thousands of miles. Predictions of turbulent behavior that are based on solving the Navier-Stokes equation, which describes the flow of fluids, are considered the gold standard in this field. But the required calculations are very slow, which is why we don't have good models for predicting hurricanes and tropical cyclones.

The heavy congestion of Los Angeles first inspired Yu to model highway traffic as the flow of fluids. Photograph: Peggy Peattie for Quanta Magazine

And deep learning can help?

The basic idea is that deep neural networks that are trained on our best numerical simulations can learn to imitate – or as we say, 'emulate' – those simulations. They do that by recognizing properties and patterns buried within the data. They don't have to go through time-consuming, brute-force calculations to find approximate solutions. Our models sped up predictions by a factor of 20 in two-dimensional settings and by a factor of 1,000 in three-dimensional settings. Something like our turbulence prediction module might someday be inserted into bigger climate models that can do better at predicting things like hurricanes.
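The emulation recipe itself is simple to state: treat pairs of consecutive simulation snapshots as (input, target) examples and train a network to map one to the next. A minimal, hypothetical sketch follows; random tensors stand in for real solver output, and the architecture is an assumption, not Yu's model.

```python
import torch
import torch.nn as nn

# Stand-in for numerical-solver data: 256 pairs of consecutive 2D flow fields
# on a 64x64 grid. In practice these would come from a Navier-Stokes solver.
x_t = torch.randn(256, 1, 64, 64)          # state at time t
x_t1 = x_t + 0.1 * torch.randn_like(x_t)   # state at time t+1 (fake dynamics)

# A small convolutional emulator: learns the one-step map x_t -> x_{t+1}.
emulator = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(emulator.parameters(), lr=1e-3)

for step in range(200):  # training loop
    pred = emulator(x_t)
    loss = nn.functional.mse_loss(pred, x_t1)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At inference time the emulator is rolled out autoregressively:
with torch.no_grad():
    state = x_t[:1]
    for _ in range(10):      # 10 steps into the future, no solver needed
        state = emulator(state)
```

Rolling a one-step emulator out like this is where the speedups come from, and also where errors can compound, which is one reason building physics constraints into the network helps.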
Where else does turbulence show up?

It's pretty much everywhere. Turbulence in blood flow, for instance, can lead to strokes or heart attacks. And when I was a postdoc at Caltech, I coauthored a paper that looked into stabilizing drones. Propeller-generated airflows interact with the ground to create turbulence. That, in turn, can cause the drone to wobble. We used a neural network to model the turbulence, and that led to better control of drones during takeoffs and landings.

I'm currently working with scientists at UCSD and General Atomics on fusion power. One of the keys to success is learning how to control the plasma, which is a hot, ionized phase of matter. At temperatures of about 100 million degrees, different kinds of turbulence arise within the plasma, and physics-based numerical models that characterize that behavior are very slow. We're developing a deep learning model that should be able to predict the plasma's behavior in a split second, but this is still a work in progress.

Yu and doctoral student Jianke Yang in her office at UCSD. Photograph: Peggy Peattie for Quanta Magazine

Where did your AI Scientist idea come from?

In the past couple of years, my group has developed AI algorithms that can automatically discover symmetry principles from data. For example, our algorithm identified the Lorentz symmetry, which has to do with the constancy of the speed of light. Our algorithm also identified rotational symmetry – the fact, for example, that a sphere doesn't look any different regardless of how you rotate it – which is something it was not specifically trained to know about. While these are well-known properties, our tools also have the capability to discover new symmetries presently unknown to physics, which would constitute a huge breakthrough.
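One way to build intuition for "discovering symmetry from data" is to run the test in reverse: check numerically whether a function is invariant under a candidate transformation. A toy sketch, with the function invented for illustration:

```python
import numpy as np

def f(points: np.ndarray) -> np.ndarray:
    """A function of 2D points that happens to be rotation-invariant,
    since it depends only on squared distance from the origin."""
    return np.sin((points ** 2).sum(axis=1))

def rotation(theta: float) -> np.ndarray:
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 2))

# If f is rotation-invariant, f(Rx) == f(x) for every rotation R.
for theta in (0.3, 1.1, 2.7):
    gap = np.abs(f(pts @ rotation(theta).T) - f(pts)).max()
    print(f"theta={theta}: max |f(Rx) - f(x)| = {gap:.2e}")  # ~1e-16, i.e. invariant
```

Discovery algorithms run this logic in the other direction, searching over transformations that leave a learned model's predictions unchanged.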
It then occurred to me that if our tools can discover symmetries from raw data, why don't we try to generalize this? These tools could also generate research ideas or new hypotheses in science. That was the genesis of AI Scientist.

What exactly is AI Scientist – just a fancy kind of neural net?

It's not a single neural network, but rather an ensemble of computer programs that can help scientists make new discoveries. My group has already developed algorithms that can help with individual tasks, such as weather forecasting, identifying the drivers of global temperature rise, or trying to discover causal relationships like the effects of vaccination policies on disease transmission. We're now building a broader 'foundation' model that's versatile enough to handle multiple tasks. Scientists gather data from all types of instruments, and we want our model to include a variety of data types – numbers, text, images, and videos. We have an early prototype, but we want to make our model more comprehensive, more intelligent, and better trained before we release it. That could happen within a couple of years.

What do you imagine it could do?

AI can assist in practically every step of the scientific discovery process. When I say 'AI Scientist,' I really mean an AI scientific assistant. The literature survey stage in an experiment, for example, typically requires a massive data-gathering and organization effort. But now, a large language model can read and summarize thousands of books during a single lunch break. What AI is not good at is judging scientific validity. In this case, it can't compete with an experienced researcher. While AI could help with hypothesis generation, the design of experiments, and data analysis, it still cannot carry out sophisticated experiments.

How far would you like to see the concept go?

As I picture it, an AI Scientist could relieve researchers of some of the drudgery while letting people handle the creative aspects of science. That's something we're particularly good at. Rest assured, the goal is not to replace human scientists. I don't envision – nor would I ever want to see – a machine substituting for, or interfering with, human creativity.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.


Medscape
30-05-2025
- Health
- Medscape
Deep Learning Aids Diagnosis of Chronic Pancreatitis
A deep learning (DL)–based model achieved high accuracy in pancreas segmentation for patients with chronic pancreatitis (CP) and healthy individuals, a new study finds. The model showed robust performance across diverse scanning protocols and anatomic variations, although its accuracy was affected by visceral fat area and pancreas volume.

METHODOLOGY:

- Researchers developed a DL-based tool using the nnU-Net ('no new U-Net') architecture for the automated segmentation of retrospectively collected CT scans of the pancreas of healthy individuals and of patients with CP.
- Scans were obtained from one hospital each in Aalborg (n = 373; 223 patients with CP and 150 healthy individuals) and Bergen (n = 97 patients with CP), along with an online dataset from the National Institutes of Health (NIH; n = 80 healthy individuals).
- The tool was validated and tested using internal and external datasets, and its performance was compared with manual processing done by radiologists using the Sørensen-Dice index.
- The tool's performance was examined for potential correlation with factors including visceral fat area at the third lumbar level, pancreas volume, and CT scan parameters.

TAKEAWAY:

- The tool demonstrated strong performance, with mean Sørensen-Dice scores of 0.85 for the Aalborg test dataset, 0.79 for the Bergen dataset, and 0.79 for the NIH dataset.
- Sørensen-Dice scores were positively correlated with visceral fat area across datasets (correlation coefficient [r], 0.45; P < .0001) and with pancreas volume in the Aalborg test dataset (r, 0.53; P = .0002).
- CT scan parameters had no significant effect on model performance.
- The tool maintained accuracy across diverse anatomic variations, except in cases with severe pancreatic fat infiltration.

IN PRACTICE:

"This study presents a novel AI [artificial intelligence]–based pancreas segmentation model trained on both healthy individuals and CP [chronic pancreatitis] patients, demonstrating consistent and robust performance across internal and external test datasets that vary in patient characteristics and scanner parameters. The model has the potential to significantly enhance the efficiency and accuracy of pancreas segmentation in clinical practice and research, particularly for CP patients with complex anatomical features," the authors wrote.

SOURCE:

This study was led by Surenth Nalliah, Radiology Research Center, Department of Radiology, Aalborg University Hospital, Aalborg, Denmark. It was published online on May 14 in the European Journal of Radiology.

LIMITATIONS:

Comprehensive hyperparameter optimisation was not performed due to computational constraints. Additionally, architectures beyond nnU-Net and other segmentation methods were not explored. Post hoc visualisation methods were not studied. Small dataset sizes could have hindered model performance, and cases of severe pancreatic fat infiltration were not included.

DISCLOSURES:

Funding information was not provided for this study. One author reported receiving financial support from Health Hub, founded by the Spar Nord Foundation.
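For readers unfamiliar with the headline metric above: the Sørensen-Dice index measures the overlap between two segmentation masks, with 1.0 meaning perfect agreement. A minimal sketch of how it is computed over binary masks; the arrays here are invented toy data, not study scans.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Sørensen-Dice index: 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

# Toy 2D "segmentations": a model mask vs. a radiologist's manual mask.
model_mask = np.zeros((8, 8), dtype=int)
model_mask[2:6, 2:6] = 1
manual_mask = np.zeros((8, 8), dtype=int)
manual_mask[3:7, 2:6] = 1
print(f"Dice = {dice(model_mask, manual_mask):.2f}")  # 0.75: 12 shared pixels of 16 + 16
```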