AI tool scans faces to predict biological age and cancer survival
A simple selfie could hold hidden clues to a person's biological age, and even how long they'll live.
That's according to researchers from Mass General Brigham, who developed a deep-learning algorithm called FaceAge.
Using a photo of someone's face, the artificial intelligence tool estimates the subject's biological age, a measure of how quickly their body is aging, as opposed to their chronological age, the number of years since birth.
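The release does not describe FaceAge's internal architecture, but face-based age estimators of this kind are typically convolutional networks trained to regress a single age value from a photo. Below is a minimal, illustrative sketch of that general approach, not the researchers' actual model; the ResNet-50 backbone, preprocessing steps, untrained regression head and file name are assumptions for demonstration only.

```python
# Illustrative sketch only -- not FaceAge's actual architecture or weights.
# Assumes torch, torchvision and Pillow are installed; ResNet-50 is an assumed backbone.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

class FaceAgeRegressor(nn.Module):
    """Generic face-age regressor: CNN backbone plus a single-value regression head."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        backbone.fc = nn.Identity()          # drop the ImageNet classification head
        self.backbone = backbone
        self.head = nn.Linear(2048, 1)       # predict one number: estimated biological age

    def forward(self, x):
        return self.head(self.backbone(x)).squeeze(-1)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = FaceAgeRegressor().eval()            # in practice, trained weights would be loaded here
image = preprocess(Image.open("selfie.jpg").convert("RGB")).unsqueeze(0)  # placeholder file name
with torch.no_grad():
    predicted_age = model(image).item()
print(f"Estimated biological age: {predicted_age:.1f} years")
```

In a real system, a model like this would be trained on tens of thousands of labeled face photos before its output carried any clinical meaning.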
FaceAge also predicts survival outcomes for people with cancer, according to a press release from MGB.
The AI tool was trained on 58,851 photos of "presumed healthy individuals from public datasets," the release stated.
To test the tool's accuracy, the researchers used it to analyze photos of 6,196 cancer patients taken before radiotherapy treatment.
Among the people with cancer, the tool generated biological ages that were, on average, about five years higher than their chronological ages.
The researchers also tested the tool's ability to predict, from their photos, the life expectancy of 100 people receiving palliative care, then compared its predictions with those of 10 clinicians. FaceAge was found to be more accurate than the clinicians.
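The release does not spell out how accuracy was scored. One common way to compare a model against clinicians on survival prediction is to treat survival past a fixed horizon as a binary outcome and compare discrimination, for example area under the ROC curve, between the model's risk scores and the clinicians' calls. The sketch below uses entirely made-up numbers and an assumed six-month horizon purely to illustrate that kind of comparison; it does not reproduce the study's data or methods.

```python
# Toy comparison sketch -- made-up numbers, not the study's data or metric.
# Assumes scikit-learn; the six-month survival horizon is an assumption.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 100

# Hypothetical outcomes: 1 = alive at six months, 0 = not (simulated).
survived_6mo = rng.integers(0, 2, size=n)

# A model-style risk signal: the gap between face-predicted age and chronological age.
chronological_age = rng.normal(70, 8, size=n)
face_age = chronological_age + rng.normal(5, 4, size=n) - 6 * survived_6mo
face_age_gap = face_age - chronological_age   # larger gap -> assumed higher risk

# Clinicians' binary survival calls, simulated as correct about 65% of the time.
clinician_call = np.where(rng.random(n) < 0.65, survived_6mo, 1 - survived_6mo)

# Negate the gap so that higher scores correspond to better expected survival.
print("Face-age-gap AUC:", round(roc_auc_score(survived_6mo, -face_age_gap), 2))
print("Clinician AUC:   ", round(roc_auc_score(survived_6mo, clinician_call), 2))
```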
The researchers' findings were published in The Lancet Digital Health.
"We can use artificial intelligence to estimate a person's biological age from face pictures, and our study shows that information can be clinically meaningful," said co-senior and corresponding author Hugo Aerts, PhD, director of the Artificial Intelligence in Medicine (AIM) program at Mass General Brigham, in the release.
"This work demonstrates that a photo like a simple selfie contains important information that could help to inform clinical decision-making and care plans for patients and clinicians," he went on.
"How old someone looks compared to their chronological age really matters — individuals with FaceAges that are younger than their chronological ages do significantly better after cancer therapy."
The goal is for the tool to help eliminate bias that can creep into a doctor's care decisions based on subjective perceptions of a patient's appearance and age.
The researchers noted that more research is needed before the tool could be rolled out for clinical use.
Future studies will include different hospitals and cancer patients at various stages of the disease, according to the release. Researchers will also evaluate FaceAge's ability to predict diseases, general health status and lifespan.
"This opens the door to a whole new realm of biomarker discovery from photographs, and its potential goes far beyond cancer care or predicting age," said co-senior author Ray Mak, MD, a faculty member in the AIM program at Mass General Brigham, in the release.
"As we increasingly think of different chronic diseases as diseases of aging, it becomes even more important to be able to accurately predict an individual's aging trajectory. I hope we can ultimately use this technology as an early detection system in a variety of applications, within a strong regulatory and ethical framework, to help save lives."
Dr. Harvey Castro, a board-certified emergency medicine physician and national speaker on artificial intelligence based in Dallas, Texas, was not involved in FaceAge's development but shared his comments on the tool.
"As an emergency physician and AI futurist, I see both the promise and peril of AI tools like FaceAge," he told Fox News Digital.
"What excites me is that FaceAge structures the clinical instinct we call the 'eyeball test' — a gut sense of how sick someone looks. Now, machine learning can quantify that assessment with surprising accuracy."
Castro predicts that FaceAge could help doctors better personalize treatment plans or prioritize palliative care in oncology — "where resilience matters more than a birthdate."
The doctor emphasized, however, that caution is key.
"AI models are only as good as the data they're trained on," Castro noted. "If the training data lacks diversity, we risk producing biased results."
"While FaceAge may outperform clinicians in some survival predictions, it should augment human judgment, not override it."
Castro also cautioned about potential ethical concerns.
"Who owns the facial data? How is it stored? Do patients understand what's being analyzed? These questions matter as much as the technology itself," he said.
The tool could also have a psychological impact, Castro noted.
"Being told you 'look older' than your age could influence treatment decisions or self-perception in ways we don't yet fully understand," he said.
"We need clear consent, data privacy and sensitivity. No one wants to be told they look older without context."
The bottom line, according to Castro, is that AI can enhance a doctor's judgment, but cannot replace it.
"AI can enhance our care — but it cannot replace the empathy, context and humanity that define medicine."Original article source: AI tool scans faces to predict biological age and cancer survival