
4 principles for using AI to spot abuse—without making it worse
Artificial intelligence is rapidly being adopted to help prevent abuse and protect vulnerable people—including children in foster care, adults in nursing homes, and students in schools. These tools promise to detect danger in real time and alert authorities before serious harm occurs.
Developers are using natural language processing, for example—a form of AI that interprets written or spoken language—to try to detect patterns of threats, manipulation, and control in text messages. This information could help detect domestic abuse and potentially assist courts or law enforcement in early intervention. Some child welfare agencies use predictive modeling, another common AI technique, to calculate which families or individuals are most 'at risk' for abuse.
When thoughtfully implemented, AI tools have the potential to enhance safety and efficiency. For instance, predictive models have assisted social workers to prioritize high-risk cases and intervene earlier.
But as a social worker with 15 years of experience researching family violence—and five years on the front lines as a foster-care case manager, child abuse investigator, and early childhood coordinator—I've seen how well-intentioned systems often fail the very people they are meant to protect.
Now, I am helping to develop iCare, an AI-powered surveillance camera that analyzes limb movements—not faces or voices—to detect physical violence. I'm grappling with a critical question: Can AI truly help safeguard vulnerable people, or is it just automating the same systems that have long caused them harm?
New tech, old injustice
Many AI tools are trained to 'learn' by analyzing historical data. But history is full of inequality, bias, and flawed assumptions. So are the people who design, test, and fund AI.
That means AI algorithms can wind up replicating systemic forms of discrimination, like racism or classism. A 2022 study in Allegheny County, Pennsylvania, found that a predictive risk model used to score families—scores given to hotline staff to help them screen calls—would have flagged Black children for investigation 20% more often than white children, if used without human oversight. When social workers were included in decision-making, that disparity dropped to 9%.
Language-based AI can also reinforce bias. For instance, one study showed that natural language processing systems misclassified African American Vernacular English as 'aggressive' at a significantly higher rate than Standard American English—up to 62% more often, in certain contexts.
Meanwhile, a 2023 study found that AI models often struggle with context clues, meaning sarcastic or joking messages can be misclassified as serious threats or signs of distress.
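To see why context matters, consider a minimal sketch of a deliberately naive, keyword-based flagger—a toy illustration, not any vendor's actual system. It matches 'threatening' words with no sense of idiom or tone, so a harmless figure of speech triggers an alert while an oblique threat slips through:

```python
# Toy keyword-based "threat detector" (illustrative only): it matches words
# with no understanding of context, idiom, or tone.

THREAT_KEYWORDS = {"kill", "destroy", "hurt"}

def naive_flag(message: str) -> bool:
    """Flag a message if it contains any threat keyword, ignoring context."""
    words = {w.strip(".,!?'\"").lower() for w in message.split()}
    return bool(words & THREAT_KEYWORDS)

# A joking idiom is flagged as a threat...
print(naive_flag("I'm going to kill it at this interview!"))  # True (false positive)
# ...while an indirect threat with no keyword passes unnoticed.
print(naive_flag("You'd better watch your back"))             # False (missed threat)
```

Modern language models handle context better than keyword matching, but the studies above suggest they still make versions of the same mistake—just less visibly.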
These flaws can replicate larger problems in protective systems. People of color have long been over-surveilled in child welfare systems—sometimes due to cultural misunderstandings, sometimes due to prejudice. Studies have shown that Black and Indigenous families face disproportionately higher rates of reporting, investigation, and family separation compared with white families, even after accounting for income and other socioeconomic factors.
Many of these disparities stem from structural racism embedded in decades of discriminatory policy decisions, as well as implicit biases and discretionary decision-making by overburdened caseworkers.
Surveillance over support
Even when AI systems do reduce harm toward vulnerable groups, they often do so at a disturbing cost.
In hospitals and eldercare facilities, for example, AI-enabled cameras have been used to detect physical aggression between staff, visitors, and residents. While commercial vendors promote these tools as safety innovations, their use raises serious ethical concerns about the balance between protection and privacy.
In a 2022 pilot program in Australia, AI camera systems deployed in two care homes generated more than 12,000 false alerts over 12 months—overwhelming staff and missing at least one real incident. The program's accuracy did 'not achieve a level that would be considered acceptable to staff and management,' according to the independent report.
Children are affected, too. In U.S. schools, AI surveillance programs such as Gaggle, GoGuardian, and Securly are marketed as tools to keep students safe. Such programs can be installed on students' devices to monitor online activity and flag anything concerning.
But they've also been shown to flag harmless behaviors—like writing short stories with mild violence, or researching topics related to mental health. As an Associated Press investigation revealed, these systems have also outed LGBTQ+ students to parents or school administrators by monitoring searches or conversations about gender and sexuality.
Other systems use classroom cameras and microphones to detect 'aggression.' But they frequently misidentify normal behavior like laughing, coughing, or roughhousing—sometimes prompting intervention or discipline.
These are not isolated technical glitches; they reflect deep flaws in how AI is trained and deployed. AI systems learn from past data that has been selected and labeled by humans—data that often reflects social inequalities and biases. As sociologist Virginia Eubanks wrote in Automating Inequality, AI systems risk scaling up these long-standing harms.
Care, not punishment
I believe AI can still be a force for good, but only if its developers prioritize the dignity of the people these tools are meant to protect. I've developed a framework of four key principles for what I call 'trauma-responsive AI.'
Survivor control: People should have a say in how, when, and if they're monitored. Providing users with greater control over their data can enhance trust in AI systems and increase their engagement with support services, such as creating personalized plans to stay safe or access help.
Human oversight: Studies show that combining social workers' expertise with AI support improves fairness and reduces child maltreatment—as in Allegheny County, where caseworkers used algorithmic risk scores as one factor, alongside their professional judgment, to decide which child abuse reports to investigate.
Bias auditing: Governments and developers are increasingly encouraged to test AI systems for racial and economic bias. Open-source tools like IBM's AI Fairness 360, Google's What-If Tool, and Fairlearn assist in detecting and reducing such biases in machine learning models.
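The core check that toolkits like Fairlearn and AI Fairness 360 automate is straightforward to state: compare how often the model flags each demographic group, and measure the gap (often called the demographic parity difference). Here is a minimal, standard-library sketch of that audit on made-up data—real audits would use the toolkits above and many more metrics:

```python
from collections import defaultdict

def flag_rates(records):
    """Compute the fraction of cases flagged, per demographic group.

    records: list of (group, flagged) pairs — synthetic data for illustration.
    """
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, was_flagged in records:
        total[group] += 1
        flagged[group] += int(was_flagged)
    return {g: flagged[g] / total[g] for g in total}

# Hypothetical audit data: group "A" is flagged 30% of the time, "B" 10%.
records = ([("A", True)] * 30 + [("A", False)] * 70
           + [("B", True)] * 10 + [("B", False)] * 90)

rates = flag_rates(records)
disparity = max(rates.values()) - min(rates.values())
print(rates)      # {'A': 0.3, 'B': 0.1}
print(disparity)  # ~0.2 — the demographic parity difference
```

A gap this size would warrant exactly the kind of scrutiny the Allegheny County study applied: is the disparity driven by the data, the model, or the underlying decision process?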
Privacy by design: Technology should be built to protect people's dignity. Open-source tools like Amnesia, Google's differential privacy library, and Microsoft's SmartNoise help anonymize sensitive data by removing or obscuring identifiable information. Additionally, AI-powered techniques, such as facial blurring, can anonymize people's identities in video or photo data.
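One of the simplest ideas behind differential privacy libraries like the ones named above is the Laplace mechanism: add calibrated random noise to an aggregate statistic so that no single person's record can be inferred from the released number. The sketch below (illustrative parameters, standard library only) applies it to a counting query, which has sensitivity 1:

```python
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale): the difference of two i.i.d. exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    return true_count + laplace_noise(1.0 / epsilon)

# E.g., report roughly how many residents triggered an alert this month
# without exposing whether any particular individual did.
noisy = private_count(42, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy; production systems layer this with access controls and the anonymization tools mentioned above rather than relying on noise alone.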
Honoring these principles means building systems that respond with care, not punishment.
Some promising models are already emerging. The Coalition Against Stalkerware and its partners advocate to include survivors in all stages of tech development—from needs assessments to user testing and ethical oversight.
Legislation is important, too. On May 5, 2025, for example, Montana's governor signed a law restricting state and local government from using AI to make automated decisions about individuals without meaningful human oversight. It requires transparency about how AI is used in government systems and prohibits discriminatory profiling.