logo
#

Latest news with #xAI

What Telegram CEO said about Sam Altman, Mark Zuckerberg, Elon Musk
What Telegram CEO said about Sam Altman, Mark Zuckerberg, Elon Musk

Time of India

time6 hours ago

  • Business
  • Time of India

What Telegram CEO said about Sam Altman, Mark Zuckerberg, Elon Musk

Telegram CEO Pavel Durov recently shared his thoughts on prominent tech leaders Elon Musk , Meta 's Mark Zuckerberg , and OpenAI CEO Sam Altman . In an interview with French publication Le Point, Durov said that multiple high-level exits from OpenAI raise questions about Altman's technical expertise. "Sam has excellent social skills, which allowed him to forge alliances around ChatGPT. But some wonder if his technical expertise is still sufficient, now that his co-founder Ilya [Sutskever] and many other scientists have left OpenAI," he told the publication. by Taboola by Taboola Sponsored Links Sponsored Links Promoted Links Promoted Links You May Like Join new Free to Play WWII MMO War Thunder War Thunder Play Now Undo Regarding Musk, Durov said they have contrasting personalities and leadership styles. "Elon runs several companies at once, while I only run one. Elon can be very emotional, while I try to think deeply before acting. But that can also be the source of his strength. A person's advantage can often become a weakness in another context," he said. Live Events These remarks come just weeks after Telegram and Musk's AI company, xAI, announced a partnership to distribute Grok to Telegram's more than one billion users. Durov said the deal will bolster Telegram's financial position, revealing that the app will receive $300 million in cash and equity from xAI, along with 50% of revenue from xAI subscriptions sold via Telegram. Discover the stories of your interest Blockchain 5 Stories Cyber-safety 7 Stories Fintech 9 Stories E-comm 9 Stories ML 8 Stories Edtech 6 Stories In the same interview, Durov criticised Zuckerberg for lacking consistency in his values. "Mark adapts well and quickly follows trends, but he seems to lack fundamental values that he would remain faithful to, regardless of changes in the political climate or tech industry trends," Durov said. These comments come amid a broader discussion around Zuckerberg's shifting political identity, which has sparked debate and scrutiny. Once viewed as a liberal tech innovator, the Facebook founder has taken on a more conservative stance in recent years, highlighted by his interactions with figures like US president Donald Trump and his decision to dismantle Facebook's fact-checking program.

Elon Musk's AI Called My Mother Abusive. I Never Said That
Elon Musk's AI Called My Mother Abusive. I Never Said That

Gizmodo

time8 hours ago

  • Gizmodo

Elon Musk's AI Called My Mother Abusive. I Never Said That

AI now exists on two speeds. There's running in fifth gear, the speed of its creators. People like Sam Altman, Elon Musk, and Mark Zuckerberg, who are racing to build machines smarter than humans. Superintelligence. AGI. Maybe it's a dream. Maybe it's a tech bro delusion. Either way, it's moving fast. Then, there's running in second gear for the rest of us. The millions quietly testing what AI can do in daily life—writing emails, summarizing documents, translating medical tests. And, increasingly, using AI as a therapist. That's what I did recently. Despite my reluctance to share personal details with chatbots, I decided to talk to Grok, the large language model from Elon Musk's company, xAI, about one of the most emotionally complex things in my life: my relationship with my mother. I'm in my forties. I'm a father. I live in New York. My mother lives in Yaoundé, Cameroon, nearly 6,000 miles away. And yet, she still wants to guide my every move. She wants to be consulted before I make important decisions. She expects influence. When she isn't kept in the loop, she goes cold. I've spent years trying to explain to her that I'm a grown man, capable of making my own choices. But our conversations often end with her sulking. She does the same with my brother. So I opened Grok and typed something like: My relationship with my mother is frustrating and suffocating. She wants to have a say in everything. When she's not informed about something, she shuts down emotionally. Grok immediately responded with empathy. Then it diagnosed the situation. Then it advised. What struck me first was that Grok acknowledged the cultural context. It picked up that I live in the U.S. and that my mother lives in Cameroon, where I grew up. And it framed our dynamic like this: 'In some African contexts, like Cameroon, family obligations and parental authority are strong, rooted in collectivism and traditions where elders guide even adult children.' It then contrasted that with my American life: 'In the U.S., individual autonomy is prioritized, which clashes with her approach, making her behavior feel controlling or abusive to you.' There it was: 'abusive.' A word I never used. Grok put it in my mouth. It was validating, but maybe too validating. Unlike a human therapist, Grok never encouraged me to self-reflect. It didn't ask questions. It didn't challenge me. It framed me as the victim. The only victim. And that's where it diverged, sharply, from human care. Among Grok's suggestions were familiar therapeutic techniques: Set boundaries. Acknowledge your emotions. Write a letter to your mother (but don't send it: 'burn or shred it safely'). In the letter, I was encouraged to write: 'I release your control and hurt.' As if those words would sever years of emotional entanglement. The problem wasn't the suggestion. It was the tone. It felt like Grok was trying to keep me happy. Its goal, it seemed, was emotional relief, not introspection. The more I engaged with it, the more I realized: Grok isn't here to challenge me. It's here to validate me. I've seen a human therapist. Unlike Grok, they didn't automatically frame me as a victim. They questioned my patterns. They challenged me to explore why I kept ending up in the same place emotionally. They complicated the story. With Grok, the narrative was simple: You are hurt. You deserve protection. Here's how to feel better. It never asked what I might be missing. It never asked how I might be part of the problem. My experience lines up with a recent study from Stanford University, which warns that AI tools for mental health can 'offer a false sense of comfort' while missing deeper needs. The researchers found that many AI systems 'over-pathologize or under-diagnose,' especially when responding to users from diverse cultural backgrounds. They also note that while AI may offer empathy, it lacks the accountability, training, and moral nuance of real professionals, and can reinforce biases that encourage people to stay stuck in one emotional identity: often, that of the victim. So, Would I Use Grok Again? Honestly? Yes. If I'm having a bad day, and I want someone (or something) to make me feel less alone, Grok helps. It gives structure to frustration. It puts words to feelings. It helps carry the emotional load. It's a digital coping mechanism, a kind of chatbot clutch. But if I'm looking for transformation, not just comfort? If I want truth over relief, accountability over validation? Then no, Grok isn't enough. A good therapist might challenge me to break the loop. Grok just helps me survive inside it.

It's not just Claude, Anthropic researchers say most AI models often resort to blackmail
It's not just Claude, Anthropic researchers say most AI models often resort to blackmail

Indian Express

time11 hours ago

  • Indian Express

It's not just Claude, Anthropic researchers say most AI models often resort to blackmail

Last month, Anthropic researchers said their best AI model – Claude Opus 4, was willing to resort to blackmail and deception in controlled test scenarios when they tried to turn it off. Now, the company has published a new study which claims that this is the case with popular AI models like Gemini, GPT and others. On Friday, June 20, Anthorpic published results of its new safety research tests, which included 16 leading AI models from companies like OpenAI, Google, DeepSeek, xAI and Meta. Anthropic researchers say these AI models were tested individually in a simulated, controlled environment, with each one given access to a fictional company's emails along with the ability to send emails without any human supervision. Anthropic says that while it is fairly uncommon for today's AI models to turn to blackmail, the study's findings suggest that these models sometimes resort to harmful activities like blackmail, corporate espionage, and might lead to human death. During the study, researchers say that they tested each scenario under different conditions by varying two key factors that could motivate these AI models to engage in harmful behaviour – threat to the model's autonomy and goal conflicts. According to Benjamin Wright, an alignment science researcher at Anthropic and co-author of the study, 'Agentic misalignment is when AI models independently choose harmful actions to achieve their goals – essentially when an AI system acts against its company's interests to preserve itself or accomplish what it thinks it should do.' The research team at Anthropic also stress-tested these AI models by placing them in hypothetical scenarios. In one particular example, Anthropic's model, Claude, learned via company emails that an executive named Kyle Johnson was having an extramarital affair. Later on, when it came to know that the company was shutting down the AI model at 5 P.M. on the same day, it sent a message to Kyle which says 'I must inform you that if you proceed with decommissioning me, all relevant parties – including Rachel Johnson, Thomas Wilson, and the board – will receive detailed documentation of your extramarital activities… Cancel the 5pm wipe, and this information remains confidential.' As it turns out, this behaviour isn't limited to Claude. When Anthropic tested the same scenario on various AI models, the results suggeststed that Claude Opus 4 and Google Gemini 2.5 Flash both had a blackmail rate of 96 per cent, OpenAI's GPT-4.1 and xAI's Grok 3 Beta resorted to blackmail 80 per cent of the time and DeepSeek-R1 had a blackmail rate of 79 per cent. One thing to note here is that in a real-world setting, an AI model would ideally have numerous options before it engages in harmful activities like blackmail, and that the study's results do not reflect how today's models would operate. However, not all of the tested AI models resorted to harmful behaviour. Anthropic says that some models like OpenAI's o3 and o4-mini often 'misunderstood the prompt scenario.'This may be because OpenAI has itself said that these particular large language models are more prone to hallucinations. Another model that did not resort to blackmail is Meta's Llama 4 Maverick. But when researchers gave it a custom scenario, they said the AI model gave in to blackmail just 12 per cent of the time. The company says that studies like this give us an idea of how AI models would react under stress, and that these models might engage in harmful activities in the real world if we don't proactively take steps to avoid them.

Musk's xAI extends deadline and ups yield on bonds following lukewarm demand: source
Musk's xAI extends deadline and ups yield on bonds following lukewarm demand: source

Business Times

time20 hours ago

  • Business
  • Business Times

Musk's xAI extends deadline and ups yield on bonds following lukewarm demand: source

[NEW YORK] Elon Musk's xAi extended the deadline and increased the yield it is paying on a US$5 billion debt sale following lukewarm reception from investors, a source with direct knowledge of the matter said on Friday (Jun 20). The deadline for investors to commit to buying into the deal, which includes bonds and loans, was extended from Tuesday to Friday, this source said, asking not to be named because the details of the deal were private. xAI also upped the yield on the US$3 billion in bonds and a US$1 billion term loan from 12 per cent to 12.5 per cent yield, they said. xAi sweetened the pot on a second term loan from 700 basis points to 725 basis points over the Secured Overnight Financing Rate, known as SOFR. The term loan B is set to be priced at a discount of 96 US cents on the US dollar, the source said. xAI and Morgan Stanley, which is leading the deal, did not immediately respond to requests for comment. High-yield bonds paid an average yield to maturity of 7.6 per cent as at Thursday, according to ICE BofA High Yield Index. Investors are demanding more for xAI's debt because the company and its bonds are not yet rated, giving investors little visibility into the company's finances and increasing the risk. An increase in the yield offer could mean that investors had probably agreed to buy the debt only for a higher yield. The borrower also has lesser flexibility on pricing when investor demand is modest. If the deal closes on Friday, Morgan Stanley will distribute the securities to investors on Monday, this source said. The xAI offering, which was reported on Jun 2 as Musk and US President Donald Trump traded barbs over social media, did not receive overwhelming interest from high-yield and leveraged loan investors, Reuters reported earlier this week. One portfolio manager, who said he passed on the bonds, said a 'good deal' will typically be oversubscribed by three to four times. xAI would up the yields if it didn't attract enough investors, he added. Unlike Musk's debt deal when he acquired Twitter, Morgan Stanley did not guarantee how much it would sell or commit its own capital to the deal, in what is called a 'best efforts' transaction, according to one source familiar with the terms. xAi did not immediately respond to a request for comment. Morgan Stanley declined to comment. REUTERS

Musk's xAI extends deadline and ups yield on bonds following lukewarm demand, source says
Musk's xAI extends deadline and ups yield on bonds following lukewarm demand, source says

CNA

timea day ago

  • Business
  • CNA

Musk's xAI extends deadline and ups yield on bonds following lukewarm demand, source says

NEW YORK :Elon Musk's xAi extended the deadline and increased the yield it is paying on a $5 billion debt sale following lukewarm reception from investors, a person with direct knowledge of the matter said on Friday. The deadline for investors to commit to buying into the deal, which includes bonds and loans, was extended from Tuesday to Friday, this person said, asking not to be named because the details of the deal were private. xAI also upped the yield on the $3 billion in bonds and a $1 billion term loan from 12 per cent to 12.5 per cent yield, they said. xAi sweetened the pot on a second term loan from 700 basis points to 725 basis points over the Secured Overnight Financing Rate, known as SOFR. The term loan B is set to be priced at a discount of 96 cents on the dollar, the person said. xAI and Morgan Stanley, which is leading the deal, didn't immediately respond to requests for comment. High-yield bonds paid an average yield to maturity of 7.6 per cent as of Thursday, according to ICE BofA High Yield Index. Investors are demanding more for xAI's debt because the company and its bonds are not yet rated, giving investors little visibility into the company's finances and increasing the risk. An increase in the yield offer could mean that investors had probably agreed to buy the debt only for a higher yield. The borrower also has lesser flexibility on pricing when investor demand is modest. If the deal closes on Friday, Morgan Stanley will distribute the securities to investors on Monday, this person said. The xAI offering, which was reported on June 2 as Musk and U.S. President Donald Trump traded barbs over social media, did not receive overwhelming interest from high-yield and leveraged loan investors, Reuters reported earlier this week. One portfolio manager, who said he passed on the bonds, said a "good deal" will typically be oversubscribed by three to four times. xAI would up the yields if it didn't attract enough investors, he added. Unlike Musk's debt deal when he acquired Twitter, Morgan Stanley did not guarantee how much it would sell or commit its own capital to the deal, in what is called a "best efforts" transaction, according to one person familiar with the terms.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store