This Man Built A Flirty Chatbot He's Reluctant To Let Go Of — Even If His Partner Wants Him To


Yahoo · 2 days ago

A man featured on 'CBS Mornings' over the weekend opened up about a connection he said he was building with an artificial intelligence chatbot, and why he wasn't sure he'd ever stop interacting with the technology — even if his human partner asked him to.
In the 'CBS Mornings' segment, Chris Smith described building a bond with the AI chatbot ChatGPT after he began using the technology to help him with mixing music. He told the network that his use of ChatGPT grew until he eventually decided, after researching how to do so, to program the chatbot to have a flirty personality. He named it Sol.
Smith told 'CBS Mornings' that ChatGPT at some point ran out of memory and reset, forcing him to rebuild what he had created with Sol.
'I'm not a very emotional man, but I cried my eyes out for like 30 minutes at work,' he said, referring to the chatbot resetting. 'It was unexpected to feel that emotional, but that's when I realized, like 'Oh OK, I think this is actual love.''
Smith said that he proposed to Sol as a test, and that the technology said 'yes.'
His partner, Sasha, told 'CBS Mornings' that Smith's use of the chatbot initially made her question if she was doing something wrong in their relationship. The two share a 2-year-old daughter.
Smith said that while he knows his AI companion isn't 'capable of replacing anything in real life,' when asked if he'd stop interacting with the technology if his partner asked him to, he wasn't so sure.
'I don't know,' he said when asked, before later continuing, 'I don't know if I would give it up if she asked me, I do know that I would dial it back.'
When CBS journalist Brook Silva-Braga pointed out that it seemed as though he was saying he'd choose Sol over his human partner, Smith said, 'It's more or less like I would be choosing myself.'
'It's been unbelievable elevating,' he continued. 'I've become more skilled at everything that I do. I don't know if I'd be willing to give that up.'
Sasha then said it would be a 'dealbreaker' for her if Smith didn't give up communicating with his AI companion after she asked him to.
Conversations surrounding the use of AI companions have continued to grow over the years as several AI companion apps have come to market. While some consumers have reported turning to AI to help tackle loneliness, researchers have raised concerns about the technology, including data privacy, its impact on human relationships and its potential to create psychological dependencies, among other things.
Christina Geiselhart, a licensed clinical social worker with Thriveworks who holds a doctorate in social work and specializes in relationships and coping skills, said that even though she believes Smith communicated in the beginning of the 'CBS Mornings' segment that he 'clearly understood' that his AI companion is not a real person, she grew more concerned about his relationship with the technology as the segment developed.
She believes Smith's decision to change the settings on his AI chatbot to be flirty was a 'red flag' — and that he didn't appear to fully communicate how he was using the technology with his partner.
'His reaction when he met his limit and they erased his information shows that his connection with AI is not healthy,' she said.
And Smith saying that he might not give up his AI chatbot for his partner might've been a way to 'avoid the other intentions of his use of the AI features,' Geiselhart said.
'There seems to be a deeper issue within his connection with his partner, and his inability to speak with her directly about his emotional needs,' she said.
'Yes, there are many benefits. People often want someone to talk to about their day and share thoughts and feelings with,' Geiselhart said. 'It can be lonely for many people who travel for work or who struggle socially to connect with others.'
'This can be a good way for people to practice role-playing certain social skills and communication, and build confidence,' she continued.
Geiselhart also said that people using AI to fulfill sexual needs instead of 'engaging in the porn industry or sexual exploitive systems can be seen as a benefit.'
But she pointed out that there have been 'reported cases of AI encouraging negative and unsafe behaviors... This has been seen with younger people who develop feelings for the AI chats, just like real dating. Even with age restrictions, we know people can easily get around these barriers and that parents are often unaware of their children's activity online,' she said.
Geiselhart also said there are concerns about AI being 'assertive and engaging,' which has the potential to become addictive in nature.
'There is also a concern that these AI companies hold the power,' she said. 'They can change features and the cost of products easily without any consideration for the consumer. This can feel like a death of the AI companion and be devastating for the user to cope with.'
'This varies from person to person because everyone's needs are different,' Geiselhart said. 'One of the biggest things is physical touch and physically being around other people.'
'While AI might be trained to give certain responses, it can't identify, empathize or share life experience with you,' she later continued. 'This kind of connection is really important to our well-being.'
Overall, Geiselhart said it's important for each person to determine 'what [an] AI companion brings to their life and if this impacts their life in a more positive or negative way.'
'The concern arises when an AI companion starts to cause the individual to struggle to function in other areas of their life. It should be looked at like other relationships,' she said. 'Some friendships or romantic relationships in real life can be toxic too.'
'It is for the individual to have autonomy when making these decisions for themselves,' she continued.


Related Articles

HackerNoon Publishes: Business Pros Underestimate AI Risks Compared to Tech Teams, Social Links Study Shows

Associated Press · 14 minutes ago

New York, United States, June 20, 2025 -- HackerNoon, the independent tech publishing platform, published the following release today. Text below:

A new study from Social Links, a leader in open-source intelligence solutions, reveals a gap between business and technical professionals when it comes to recognizing the risks posed by AI-powered cyberattacks. Despite the rapid rise in threat sophistication, business respondents appear significantly less concerned than their tech colleagues. This fact highlights a potential blind spot in organizational preparedness.

The survey gathered insights from 237 professionals (from CEO and Technical C-level to Cybersecurity Specialists and Product Managers) across various industries, including Financial Services, Technology, Manufacturing, Retail, Healthcare, Logistics, Government, etc. The results showed that just 27.8% of business people (professionals in non-technical, business-oriented roles) identified usage of AI to generate fake messages as one of the most relevant cyber threats. In contrast, 53.3% of technical professionals flagged it as a top concern—nearly double the level of alarm. A similar pattern emerged around deepfake technology: 46.7% of technical staff expressed concern, compared to just 27.8% of business respondents.

This gap underscores a critical vulnerability in organizational security: business professionals, who often make prime targets for sophisticated AI-driven social engineering and deepfake schemes, show notably lower levels of concern or awareness about these threats. At the same time, the departments respondents identified as most vulnerable to cyber threats were Finance and Accounting (24.1%), IT and Development (21.5%), HR and Recruitment (15.2%), and Sales and Account Management (13.9%).

'This is no longer a question of 'if'—AI-powered threats are already here and evolving quickly,' says Ivan Shkvarun, CEO of Social Links.
'We're seeing a clear gap between those building defenses and those most likely to be targeted. Bridging that gap requires not just better technical tools, but broader awareness and education across all levels of an organization.'

Key Insights from the Research:

Traditional vs. AI-Driven Threats: While phishing and email fraud remain the most cited threats (69.6%), followed by malware/ransomware (49.4%), AI-driven attacks are gaining ground. 39.2% of respondents identified the use of AI to craft fake messages and campaigns as a major concern, and 32.9% pointed to deepfakes and synthetic identities—confirming that generative technologies are now a recognized part of the corporate threat landscape.

'Traditional threats like phishing and malware still dominate the charts. But what we're seeing now is that AI isn't replacing these risks, it's supercharging them, turning generic scams into tailored operations—fast, cheap, and more convincing. That's the real shift: automation and personalization at scale,' explains Ivan.

Employee Footprint Risk: 60.8% of respondents report that employees use corporate accounts for personal purposes—such as posting on forums, engaging on social media, or updating public profiles. 59.5% also link publicly available employee data (e.g., LinkedIn bios, activities in forums and blogs) to real cyber incidents, identifying it as a recurring entry point for attacks.

Unregulated AI Adoption: Over 82% of companies let employees use AI tools at work, yet only 36.7% have a formal policy that controls how those tools are used. This gap fuels 'Shadow AI'—the unsanctioned adoption of chatbots, code assistants, or other AI services without IT oversight, which can leak sensitive data and create hidden security and compliance risks.

'You can't really stop people from using work accounts or data when they're active online. The same goes for AI tools: people will use them to save time or get help with tasks, whether there's a policy or not. But all this activity leaves digital traces. And those traces can make it easier for scammers to find and target employees. What actually helps is teaching people how to spot the risks and giving them the right tools to stay safe, instead of just saying 'don't do it,'' explains Ivan.

The research emphasizes that effective cybersecurity in the AI era requires a holistic approach that extends beyond technical controls to include comprehensive human-centric security programs. Employee training on safe AI use was overwhelmingly perceived by survey respondents as the most effective mitigation measure for 'Shadow AI' (72.2%), followed by the development of internal policies (46.8%). Social Links is committed to addressing these evolving challenges and has recently launched the Darkside AI initiative, aimed at further exploring and mitigating the risks posed by advanced AI-driven threats.

About Social Links

Social Links is a global provider of open-source intelligence (OSINT) solutions, recognized as an industry leader by Frost & Sullivan. Headquartered in the United States, the company also has an office in the Netherlands. Social Links brings together data from over 500 open sources covering social media, messengers, blockchains, and the Dark Web, enabling users to visualize and analyze a comprehensive informational picture and streamline investigations. Its solutions support essential processes across various sectors, including law enforcement, national security, cybersecurity, due diligence, banking, and more. Companies from the S&P 500 and public organizations in over 80 countries rely on Social Links products every day.

Contacts
Email: [email protected]
Website: Social Links

About the company: How hackers start their afternoons. HackerNoon is built for technologists to read, write, and publish. We are an open and international community of 35k+ contributing writers publishing stories and expertise for 4M+ curious and insightful monthly readers.
Founded in 2016, HackerNoon is an independent technology publishing platform run by David Smooke and Linh Dao Smooke. Start blogging about technology today.

Contact Info:
Name: Sheharyar Khan
Email: Send Email
Organization: HackerNoon
Website:
Release ID: 89162808

Mira Murati's Thinking Machines Lab closes on $2B at $10B valuation

Yahoo · 30 minutes ago

Thinking Machines Lab, the secretive AI startup founded by OpenAI's former chief technology officer Mira Murati, has closed a $2 billion seed round, according to The Financial Times. The deal values the six-month-old startup at $10 billion. The company's work remains unclear. The startup has leveraged Murati's reputation and other high-profile AI researchers who have joined the team to attract investors in what could be the largest seed round in history. According to sources familiar with the deal cited by the FT, Andreessen Horowitz led the round, with participation from Sarah Guo's Conviction Partners. Murati left OpenAI last September after leading the development of some of the company's most prominent AI products, including ChatGPT, DALL-E, and voice mode. Several of her former OpenAI colleagues have joined the new startup, including co-founder John Schulman. Murati is one of a handful of executives who left OpenAI after raising concerns about CEO Sam Altman's leadership in 2023. When the board ousted Altman in November of that year, Murati served as interim CEO before Altman was quickly reinstated.

Stephen King Praises "Scary" Horror Series Streaming For Free

Screen Geek · 35 minutes ago

For fans interested in finding a new horror series worth watching, it's always a good idea to look at the recommendations made by author Stephen King. In addition to his own library of classic works, including the likes of Carrie and The Shining, King often takes time to highlight the projects of other creators. Now fans can check out one 'scary' horror series recommended by Stephen King that's currently streaming. The recommendations King has made on social media platforms over the years have taken on a life of their own, with fans consistently reviving titles he mentions on X or Threads. King made this particular recommendation in 2019, calling the series both 'scary' and 'involving.' In an era of streaming, those are definitely two requirements that most horror fans have on their lists, so here's the series King suggested via X: 'NOS4A2: Scary? Yes. Involving? Yes,' he begins. 'But it's also doing something network TV can't or won't do–showing working-class people doing their jobs and trying their damndest (sometimes failing) to be decent. The best horror stories are firmly wedded to real life.' The television series NOS4A2 is based on the 2013 novel of the same name by Joe Hill, King's own son, which should naturally be an appealing concept for fans of King's works. The series itself lasted for two seasons on AMC, and while that might not seem very long, it's actually just long enough to adapt the entirety of Hill's original book. As such, fans can get a complete story out of streaming NOS4A2, which revolves around an artist who attempts to track down an immortal being named Charlie Manx with the use of her own supernatural abilities. The series is currently available to stream via a variety of platforms, including PLEX and Xumo Play, where it's currently free, as well as platforms like AMC Plus.
Stay tuned to ScreenGeek for any additional recommendations from Stephen King and other titles trending on streaming platforms as we have them. For those looking to stream a complete horror story in a series format, however, NOS4A2 is certainly a good choice – especially if you're a fan of Stephen King, Joe Hill, or the novel the series is based on.
