Hingham family files Title IX complaint after student creates deepfake image of their daughter

Yahoo | 12-06-2025

Megan Mancini filed a Title IX complaint in Hingham Public Schools after she says her daughter was a victim of sexual harassment.
Mancini says another student used artificial intelligence to create a deepfake pornographic image of her daughter.
'She was devastated, I mean she definitely felt violated, she wanted something to be done about it, and at that point we had notified the school, the police,' said Mancini.
After Mancini filed a complaint about the incident in January, Hingham schools launched an investigation.
After about four and a half months, the district sent a letter to Mancini, saying that while the student's conduct was 'inappropriate and hurtful, there is insufficient evidence to conclude it occurred in the District's schools.'
'The image was shared in the school hallways, amongst other students during school hours, and it was also shared via text,' said Mancini.
Mancini was disappointed to learn that the student responsible for creating the fake nude image of her daughter would not be disciplined at Hingham Middle School.
'It makes me feel like the school failed,' said Mancini.
Legal expert Peter Elikann says families could pursue criminal charges in cases like this under the state's new revenge porn and sexting law.
'The word needs to go out among young people that you can be criminally prosecuted in juvenile court for sending nude images of someone else without their consent,' said Elikann.
He says that includes deepfakes, or AI-generated photos.
'The fact that people can create all kinds of fake pornography online, and young people seem to know how to do it, it's really hit a huge crisis point,' said Elikann.
'I think it's important to have swift action, and I think we missed that critical window,' said Mancini.
Mancini hopes school leaders will take stronger action in these cases to prevent them from happening again, even when districts claim not to have jurisdiction.
'There was not one communication sent out from the school department or the school administration about this issue, and for you know, a heads up, awareness to parents that this is going on, this is going on in middle school, and it's going to get nothing but worse,' said Mancini.
The conduct has become enough of a problem that the state now offers a youth diversion program, which teaches minors prosecuted in cases like this about the dangers of sharing nude photos.
Boston 25 News reached out to Hingham Public Schools multiple times about this issue, but the district has not responded.
This is a developing story. Check back for updates as more information becomes available.

Related Articles

Sharing deepfake pornography 'the next sexual violence epidemic facing schools'

Yahoo | 3 days ago

Sharing deepfake pornography is "the next sexual violence epidemic" facing schools, the author of a book on the spread of online misogyny has told MPs.

Appearing before the Women and Equalities Committee (WEC), Laura Bates said there needs to be statutory guidance for teachers on how to deal with this "very significant issue".

She said in every UK case she has investigated, schools have "paid thousands of pounds hiring PR firms to focus on reputation damage management". However, in terms of supporting girls and taking action against the perpetrators, "nothing has happened".

She said of deepfake pornography: "It is happening, it's significant. Female teachers are affected, which often goes unnoticed, and schools are just not equipped to tackle it.

"My suggestion would be this is the next big sexual violence epidemic facing schools and people don't even know it is going on."

Deepfakes are pictures, videos or audio clips made with artificial intelligence (AI) to look or sound real. While it is illegal to create or share a sexually explicit image of a child, including a deepfake, the technology for making them remains legal.

Asked what more could be done to help schools tackle the issue, Ms Bates said in the cases she is aware of "police investigations are ongoing". However, she said she is not suggesting criminalisation of underage boys is the solution; what is needed is education, prevention and regulation.

"It shouldn't be the case that a 12-year-old boy can easily and freely access tools to create these forms of content in the first place," she said.

Ms Bates is the founder of the Everyday Sexism Project and author of The New Age of Sexism: How the AI Revolution is Reinventing Misogyny. She also called for "very clear guidance" on how schools should respond to this challenge.

She warned of a repeat of past failings with intimate image abuse, where girls have been "coerced into sending images of themselves" and then punished for taking the image once it has been shared around, while the person spreading the image has not.

Calls to ban 'nudifying apps'

A government spokesperson told Sky News: "We are reviewing the relationships, sex and health curriculum to look at all modern-day challenges facing children, including that of deepfake porn, and work to ensure children are taught how to build positive, healthy relationships in an age-appropriate way."

It comes amid mounting concern among MPs and experts, with many wanting the government to go further with its regulations on big tech firms.

In April, a report by the Children's Commissioner for England found that nudifying apps are disproportionately targeting girls and young women, with many appearing to work only on female bodies. The commissioner, Dame Rachel de Souza, called for an immediate ban on apps that use AI to create naked images of children, saying "there is no positive reason for these to exist".

Inquiry into the 'manosphere'

Ms Bates appeared in front of the WEC as part of its inquiry into how the "manosphere" and other online content is fuelling misogyny.

The cross-party group has previously heard how the rise of misogyny among young men is affecting not only female pupils but also female staff, with sexual harassment towards teachers described as "rife".

In Wednesday's session, the committee also heard concerns about the rise of the "sex tech industry", including robots and AI girlfriends, with fears this is having a wider impact on attitudes towards women and girls.
Sarah Owen, the Labour chair of the WEC, told Sky News she could not pre-empt what recommendations would be made to the government. But she said there was huge concern around the online radicalisation of young men, adding: "It was a boiling hot room but my blood ran cold at what I was hearing."

Deepfake interviews: Navigating the growing AI threat in recruitment and organizational security

Fast Company | 4 days ago

The breakneck pace of artificial intelligence (AI) development has fundamentally reshaped how businesses manage recruitment, communication, and information dissemination. Among these developments, deepfake technology has emerged as a significant threat, particularly through its use in fraudulent interviews.

Deepfake interviews leverage advanced AI techniques, predominantly generative adversarial networks (GANs), to generate hyper-realistic but entirely fabricated audio, video, or imagery. These synthetic media forms convincingly manipulate appearances, voices, and actions, making it exceedingly difficult for average users, and even experts, to discern authenticity.

IMPLICATIONS AND MOTIVATIONS FOR DEEPFAKE USE

The motivations behind deploying deepfake technology for scams and fraud are varied but consistently damaging. Criminals use deepfakes primarily for financial gain, identity theft, psychological manipulation, and disinformation. For instance, deepfakes can facilitate vishing (voice phishing), whereby scammers convincingly mimic a trusted individual's voice, deceiving victims into transferring funds or revealing sensitive information. These AI-generated falsifications also enable sophisticated blackmail, extortion, and reputation sabotage through the dissemination of maliciously altered content.

Deepfakes also significantly disrupt corporate trust and operational integrity. Financial crimes involving deepfakes include unauthorized transactions orchestrated by impersonating company executives. A notable case occurred in Hong Kong, where cybercriminals successfully impersonated executives, causing multi-million-dollar losses and severe reputational harm. Beyond immediate financial damage, deepfake attacks can erode consumer trust, destabilize markets, and inflict lasting damage on brand reputation.

Malicious actors also exploit deepfake technology politically, disseminating misinformation designed to destabilize governments, provoke conflicts, and disrupt public order. Particularly during elections and other significant political events, deepfakes can manipulate public opinion, challenging the authenticity of democratic processes.

TECHNOLOGICAL MECHANISMS AND ACCESSIBILITY

The core technological mechanism behind deepfake interviews involves GANs, in which AI systems are trained to produce realistic synthetic media by learning from authentic audio and video datasets. The recent democratization of this technology means anyone can produce deepfakes cheaply or freely using readily accessible online tools, exacerbating risks. The emergence of 'deepfake-as-a-service' models on dark web platforms further compounds these concerns, enabling sophisticated attacks without extensive technical expertise.

In recruiting scenarios, deepfake candidates use synthetic identities, falsified resumes, fabricated references, and convincingly altered real-time video interviews to infiltrate organizations. These fraudulent candidates pose acute threats, particularly within industries that rely heavily on remote hiring, such as IT, finance, healthcare, and cybersecurity. Gartner predicts that one in four job candidates globally will be fake by 2028, highlighting the scale and urgency of the issue.

ORGANIZATIONAL RISKS AND CONSEQUENCES

Organizations face numerous operational and strategic threats from deepfake attacks. Financially, companies victimized by deepfake fraud experience significant losses, averaging $450,000 per incident.
Deepfake infiltration can also lead to data breaches, loss of intellectual property, and compromised cybersecurity infrastructure, all of which carry significant financial and regulatory repercussions.

Deepfake-driven scams also enable broader social engineering attacks. For instance, remote IT workers fraudulently hired through deepfakes have successfully conducted espionage, extracting sensitive data or installing malware within corporate networks. Often linked to state-sponsored groups, such incidents underscore the geopolitical dimension of deepfake threats.

PROACTIVE STRATEGIES FOR MITIGATION AND DEFENSE

Given the complexity and severity of deepfake threats, organizations must adopt comprehensive mitigation strategies. Technological solutions include deploying AI-powered detection tools designed explicitly for deepfake identification. Platforms such as GetReal Security (no relationship) offer integrated solutions providing proactive detection, advanced forensic analysis, and real-time authentication of digital content. Combining AI-driven detection with manual forensic analysis has proven particularly effective, as human expertise can spot contextual inconsistencies that AI alone might miss.

Businesses should also enhance cybersecurity awareness and employee training programs. Regular training on recognizing visual, audio, and behavioral anomalies in deepfake content is crucial. Organizations can adopt robust authentication measures such as multi-factor authentication (MFA), biometric verification, and blockchain-based methods for verifying digital authenticity, although scalability remains a challenge. Continuous investment in adaptive threat intelligence platforms ensures rapid responses to emerging threats.

Adopting scalable deepfake detection technologies, integrated seamlessly into recruitment workflows and organizational infrastructure, is now a necessity. My team has encountered a few deepfake interviews ourselves, through contractors. Since then, we have required deeper vendor due diligence, adopted vendor technology to mitigate these attacks, and trained recruiters to detect red flags.

COLLABORATIVE AND REGULATORY ACTIONS

Addressing deepfake threats effectively requires robust collaboration across tech companies, government agencies, and industry bodies. Regulatory frameworks, such as the European Union's AI Act and various U.S. federal and state initiatives, represent important steps toward transparency, accountability, and comprehensive protection against malicious AI misuse. Nevertheless, current regulations remain fragmented and incomplete, underscoring the urgent need for standardized, comprehensive legislation tailored to the risks posed by deepfakes.

Deepfake technology presents profound ethical, societal, and cybersecurity challenges. The increasing prevalence and sophistication of AI-driven fraud in recruitment and beyond require proactive, multi-layered defenses. Organizations must strengthen technical defenses, raise employee awareness, and advocate for robust regulatory frameworks. By taking informed, collaborative, and proactive approaches, businesses can significantly mitigate the risks associated with deepfake technology while responsibly leveraging its beneficial applications.

Karen Read's retrial: Judge declines to answer 4th jury question, calls it 'theoretical'

Yahoo | 4 days ago

The judge in Karen Read's murder retrial on Tuesday afternoon declined to answer a fourth question from the jury, deeming it 'theoretical,' shortly after ruling on three earlier questions about evidence and the verdict slip.

When the court returned from its afternoon lunch break, Judge Beverly Cannone announced that jurors had a fourth question: 'If we find not guilty on two charges but can't agree on one charge, is it a hung jury on all three charges or just one charge?'

Cannone informed the court that she would respond to the jury and tell them that the question is 'theoretical' and not something she can answer. 'To me, it's a theoretical question, and we don't answer theoretical questions. I tell the jurors that they're not to be concerned with the consequences of their verdict, and that's exactly what they're doing here,' Cannone explained.

Read attorney Alan Jackson urged Cannone to offer clarity in a quick note to the jury, emphatically stating, 'We're going to end up in the exact same position we were in last year.' Last year, the jury sent three notes to the judge over three days before a mistrial was declared due to a hung jury. Several jurors later came out to say that the panel had unanimously agreed that Read was not guilty of the most serious charge of second-degree murder.

Cannone called the question 'premature,' noting that it 'may be something later.' Special prosecutor Hank Brennan endorsed Cannone's ruling.

Earlier in the day, Cannone ruled on the following three questions:

What is the timeframe for the OUI charge, 12:45 a.m. or 5 a.m.?

Are video clips of Karen Read evidence, and how do we consider them?

Does a guilty finding on a sub-charge convict on the overall charge? (In reference to the manslaughter OUI charge)

Read, 45, of Mansfield, is accused of striking John O'Keefe, 46, with her Lexus SUV and leaving him to die alone in a blizzard outside a house party at the Canton home of fellow officer Brian Albert on Jan. 29, 2022, following a night of drinking.

Throughout her second trial, the prosecution's theory of jaded love turned deadly was countered by the defense's claim that a tight-knit group of Boston-area law enforcement officers killed one of their own. Read's lawyers argued that O'Keefe was beaten, bitten by a dog, and then left outside Albert's home in a police-orchestrated conspiracy that included planting evidence against Read.
