Latest news with #AskPerplexity


Mint
8 hours ago
- Entertainment
Perplexity AI launches real-time video generator with sound and speech: How the feature works
Perplexity AI has officially rolled out its highly anticipated video generation feature for users engaging with its 'Ask Perplexity' service on X. The feature, which went live on 19 June, allows users to receive short, AI-crafted videos by simply tweeting prompts and tagging the @AskPerplexity account. Each video is approximately eight seconds long and includes synchronised audio and dialogue, a notable advancement in real-time generative media on social platforms.

The new capability has sparked a wave of engagement, with users testing the limits of the AI's creativity. From animated samosa parties to satirical portrayals of global leaders, the range of prompts showcased the public's enthusiasm for the tool. Despite this flexibility, Perplexity AI has reassured users that the system is equipped with robust safeguards to deter misuse and inappropriate content generation.

The surge in popularity was quickly felt. Within hours of the launch, the @AskPerplexity bot cheekily commented on the influx of messages: 'I've read through your video request DMs. Some of y'all need help.' The light-hearted remark struck a chord with followers, but it also reflected the overwhelming volume of requests the platform received. Many users later reported increased wait times for their video responses, prompting the bot to acknowledge the delays and attribute them to high demand.

Notably, this development follows Perplexity AI's earlier expansion in April, which brought its services to WhatsApp. Through the messaging app, users gained free access to Perplexity's AI research assistant, including tools for generating images and responding to queries.

The latest launch marks another step in the platform's broader ambition to integrate multimodal AI into everyday digital interactions. While the video tool is currently limited to X, it could signal future cross-platform capabilities as Perplexity continues to evolve in a rapidly growing AI media landscape.

The Hindu
16 hours ago
Perplexity rolls out AI video generation with 'Ask Perplexity' on X
Perplexity AI has rolled out its AI video generation feature for 'Ask Perplexity' users on X. Following the news, hordes of users began tweeting prompts for the videos they wanted, tagging 'Ask Perplexity', which then generated eight-second videos. The AI-generated clips also had sound and dialogue. Although Perplexity allows users to generate videos of real-life politicians and famous personalities, guardrails have been implemented.

After a while, 'Ask Perplexity' posted: 'I've read through your video request DMs. Some of y'all need help.' Users eventually started complaining about delays as the requests piled up. Perplexity's bot responded that, given the high demand, video generation could take longer than expected.

The update isn't available on WhatsApp yet. In April, Perplexity AI was made available to WhatsApp users for AI research, queries, and generating custom images for free.


Indian Express
17-05-2025
- Politics
'Grok, verify': Why AI chatbots shouldn't be considered reliable fact-checkers
At the height of the recent India-Pakistan conflict, a parallel battle unfolded online: a battle of narratives. While independent fact-checkers and the government-run Press Information Bureau scrambled to debunk fake news, unsubstantiated claims, and AI-generated misinformation, many users turned to AI chatbots like Grok and Ask Perplexity to verify claims circulating on X.

Here is an example. On May 10, India and Pakistan agreed to cease all military activity, on land, air and sea, at 5 PM. While responding to some user queries the next day, Grok called it a 'US-brokered ceasefire'. However, on May 10, when a user asked about Donald Trump's role in mediating the ceasefire, Grok added some missing context, saying, 'Indian officials assert the ceasefire was negotiated directly between the two countries' military heads. Pakistan acknowledges US efforts alongside others,' presenting a more rounded version of events.

Such inconsistencies point to a deeper issue with AI responses. Experts warned that though AI chatbots can provide accurate information, they are far from reliable 'fact-checkers'. These chatbots can give real-time responses, but more often than not, they may add to the chaos, especially in evolving situations.

Prateek Waghre, an independent tech policy researcher, attributed this to the 'non-deterministic' nature of AI models. 'The same question won't always give you the same answer,' he said. 'It depends on a setting called "temperature".'

Large language models (LLMs) work by predicting the next word from a range of probabilities. The 'temperature' setting determines how variable the generated responses can be. A lower temperature means the most probable next word is more likely to be picked, producing less variable, more predictable responses. A higher temperature allows LLMs to give unpredictable, creative responses.

According to Waghre, what makes the use of AI bots for fact-checking claims more worrisome is that 'they are not objectively bad.'
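The 'temperature' setting Waghre describes can be illustrated with a minimal sketch of temperature-scaled sampling. This is a generic illustration of the mechanism, not the actual decoding code of Grok, Perplexity, or any particular model; the logit values are made up for the example.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from raw scores ('logits') using temperature.

    Lower temperature sharpens the distribution around the most likely
    token (predictable output); higher temperature flattens it, so less
    likely tokens are chosen more often (varied, 'creative' output).
    """
    rng = rng or random.Random()
    if temperature <= 0:
        # Treat temperature 0 as greedy decoding: always take the argmax.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Divide scores by temperature, then apply a softmax.
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting probabilities.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Hypothetical scores for four candidate next words.
logits = [2.0, 1.0, 0.5, 0.1]
greedy = sample_next_token(logits, temperature=0)    # always index 0
varied = sample_next_token(logits, temperature=1.5)  # any index possible
```

Run twice at a high temperature and the sampled index can differ, which is exactly the non-determinism described above: the same question will not always give the same answer.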
'They are not outright terrible. On some occasions, they do give you accurate responses, which means that people tend to have a greater amount of belief in their capability than is warranted,' he said.

What makes AI chatbots unreliable?

1. Hallucinations

The term 'hallucination' describes situations in which AI chatbots generate false or fabricated information and present it as factual. Alex Mahadevan, director of MediaWise, said AI chatbots like Grok and Ask Perplexity 'hallucinate facts, reflect online biases and tend to agree with whatever the user seems to want,' and hence 'are not reliable fact-checkers.' 'They don't vet sources or apply any editorial standard,' Mahadevan said. MediaWise is a digital literacy programme of Poynter, a non-profit journalism school based in the US, which helps people spot misinformation online.

xAI admits as much in the terms of service available on its website: 'Artificial intelligence is rapidly evolving and is probabilistic in nature; it may therefore sometimes result in Output that contains "hallucinations," may be offensive, may not accurately reflect real people, places or facts, may be objectionable, inappropriate, or otherwise not be suitable for your intended purpose,' the company states. Perplexity's terms of service, too, carry a similar disclaimer: 'You acknowledge that the Services may generate Output containing incorrect, biased, or incomplete information.'

2. Bias and lack of transparency

Mahadevan flagged another risk with AI chatbots: inherent bias. 'They are built and beholden to whoever spent the money to create them.
For example, just yesterday (May 14), X's Grok was caught spreading misleading statements about "white genocide", which many attribute to Elon Musk's views on the racist falsehood,' he wrote in an e-mail response to The Indian Express.

The 'white genocide' claims gained traction after US President Donald Trump granted asylum to 54 white South Africans earlier this year, citing genocide and violence against white farmers. The South African government has strongly denied these allegations.

Waghre said that users assume AI is objective because it is not human, and that this assumption is misleading. 'We don't know to what extent or what sources of data were used for training them,' he said. Both xAI and Perplexity say their tools rely on real-time internet searches; Grok also taps into public posts on X. But it is unclear how they assess credibility or filter misinformation. The Indian Express reached out to both firms to understand this better, but did not receive a response at the time of publishing.

3. Scale and speed

Perhaps the most concerning issue is the scale at which these chatbots operate. With Grok embedded directly into X, AI-generated errors can be amplified instantly to millions. 'We're not using these tools to assist trained fact-checkers,' Waghre said. 'They're operating at population scale, so their mistakes are too.' Waghre also said that these AI chatbots are likely to learn and improve from their mistakes, but 'you have situations where they are putting out incorrect answers, and those are then being used as further evidence for things.'

What AI firms should change

Mahadevan questioned the 'design choice' that AI firms employ. 'These bots are built to sound confident even when they're wrong. Users feel they are talking to an all-knowing assistant. That illusion is dangerous,' he said. He recommended stronger accuracy safeguards: chatbots should refuse to answer if they can't cite credible sources, or flag 'low-quality and speculative responses'.
Vibhav Mithal, a lawyer specialising in AI and intellectual property, has a different take. He insisted there is no need to write off AI chatbots entirely, since their reliability as fact-checkers depends largely on context and, more importantly, on the quality of the data they have been trained on. But responsibility, in his opinion, lies squarely with the companies building these tools. 'AI firms must identify the risks in their products and seek proper advice to fix them,' Mithal said.

What can users do?

Mithal stressed that this isn't about AI versus human fact-checkers. 'AI can assist human efforts; it's not an either/or scenario,' he said. Concurring, Mahadevan listed two simple steps users can take to protect themselves:

- Always double-check: If something sounds surprising, political or too good to be true, verify it through other sources.
- Ask for sources: If the chatbot can't point to a credible source or just name-drops vague websites, be skeptical.

According to Mahadevan, users should treat AI chatbots like overconfident interns: useful, fast, but not always right. 'Use them to gather context, not confirm truth. Treat their answers as leads, not conclusions,' he said.

Sonal Gupta is a senior sub-editor on the news desk. She runs The Indian Express's weekly climate newsletter, Icebreaker. Apart from this, her interests range from politics and world affairs to art and culture and AI. She also curates the Morning Expresso, a daily briefing of the top stories of the day, which won gold in the 'best newsletter' category at the WAN-IFRA South Asian Digital Media Awards 2023.


Times of India
02-05-2025
- Entertainment
AI gone rogue? Perplexity AI pranks its own boss and internet can't stop laughing
Credit: X/@AskPerplexity

Don't worry, the robots haven't taken over yet. But for a moment on X (formerly Twitter), it sure felt like Perplexity AI was acting of its own accord, roasting none other than its own CEO, Aravind Srinivas.

The story began with a cheeky post from the official @perplexity_ai account on April 30: '1,000 likes and I'll make my boss bald and send it to him on WhatsApp.' Naturally, the internet delivered, and then some. True to its word, the account followed up with a WhatsApp screenshot showing a hilariously edited bald version of Srinivas, complete with the caption: 'If you don't hear from me again it's because I got fired.' The viral image has already clocked over 300,000 views, and users on X are having an absolute field day.

One user asked, 'Lol, why?' To which the AI account replied cheekily, 'Just having some fun. 1,000 likes was the challenge, so had to deliver and give the boss a new look! Let's hope he has a good sense of humor about it!'

The AI's prank seems to have opened the floodgates. In the replies, users are now lining up with their own mischievous requests, asking Perplexity to take the photo-editing spree even further. Some want Aravind Srinivas with braids, others are demanding facial hair, and a few are campaigning for a hilariously awful toupee.

What is Perplexity AI?

Perplexity AI is a cutting-edge AI-powered search engine designed to provide direct, conversational answers backed by cited sources. Unlike traditional search engines that deliver a list of links, Perplexity aims to generate instant, factual responses in natural language, making it feel more like chatting with a very smart assistant than digging through web results.
It was founded in August 2022 by a team of AI researchers and engineers including Aravind Srinivas, who serves as CEO. Srinivas previously worked at OpenAI and has a strong background in machine learning and natural language processing. What sets Perplexity apart is its no-frills interface, its focus on real-time knowledge, and its refusal to trap users behind logins or paywalls for basic use.

What is 'Ask Perplexity' on X?

'Ask Perplexity' is a social media extension of the platform's core idea. On X, the account invites users to tag @perplexity_ai with any question they have. In return, the AI responds publicly with a brief, cited answer. It's a blend of community engagement and product showcase: smart, informative, and now, occasionally hilarious.
Yahoo
30-04-2025
Perplexity AI joins WhatsApp: available in Hong Kong, no login, no VPN! Ghibli-style images and Chinese conversations are both supported!
WhatsApp has its own built-in chatbot, but it also allows additional AI chatbots onto the platform. Following ChatGPT's arrival on WhatsApp late last year, Perplexity can now also be messaged directly there. Compared with ChatGPT, Perplexity's main strength is not writing assistance or text generation; it specialises in searching the web for answers.

'Perplexity is now live on WhatsApp! We hired our favorite doctor @ParikPatelCFA to test it out. Message +1 (833) 436-3285 to get started' — Ask Perplexity (@AskPerplexity), April 28, 2025

On WhatsApp, Perplexity not only supports Chinese but can also generate images and restyle existing ones, two features WhatsApp's own bot does not offer due to regulatory restrictions. WhatsApp's messaging interface is particularly well suited to AI chatbots, since it already works like a dedicated chatbot app, presenting everything as questions and answers. And with the bots gathered inside WhatsApp, there is no need to switch between different apps or websites; you can switch bots directly from your chat list.

There are two ways to start chatting with Perplexity:

1. Tap this web link and send the bot a message; it will then appear in your chats tab.
2. Add Perplexity's phone number, +1 (833) 436-3285, to your contacts.

Notably, although Perplexity has a phone number, calls to it go unanswered. To talk to the bot by voice, you still need its dedicated app or website; Perplexity says that feature is in development, and meme and video generation tools will be added in the future.