WhatsApp to show ads in updates tab

Hans India · 4 hours ago

New Delhi: WhatsApp announced that it will soon introduce advertisements and other paid features in a major shift for the messaging app, which has mostly remained ad-free since its launch.
The move marks WhatsApp's most significant step toward monetisation since it was acquired by Meta in 2014.
The company said the new advertising features will be limited to the 'Updates' tab, which houses Channels and Status, features that are used by over 1.5 billion people daily.
It clarified that users who only use WhatsApp for personal messaging will not see any change in their experience. Chats, calls, and groups will continue to be free from ads and will remain end-to-end encrypted.
'We've been talking about building a business that doesn't interrupt your personal chats for years, and we believe the Updates tab is the right place for these new features,' WhatsApp said in a statement.
The new features include paid channel subscriptions, promoted channels in the Discovery section, and ads in the Status feature, WhatsApp's version of Instagram Stories.
These updates are being rolled out gradually over the next few months and will first appear in select countries.
Users will be able to subscribe to their favourite channels for a monthly fee and receive exclusive updates.
This feature aims to offer a new way for creators and organisations to earn money directly through WhatsApp.
Additionally, WhatsApp is introducing promoted channels. This will help users discover new and relevant content while giving channel admins a tool to improve their visibility through the directory.
This is the first time WhatsApp is offering a promotion feature to enhance discoverability.
However, WhatsApp has assured its users that these new features are confined to the Updates tab and will not affect personal messaging.
'If you only use WhatsApp to chat with friends and loved ones, there is no change to your experience at all,' the company said in its official statement.
The announcement comes as Meta looks for new ways to make money from WhatsApp, which has over two billion monthly active users.
For years, industry experts have predicted that Meta would eventually introduce advertising on WhatsApp due to its massive global reach and high engagement levels.
While the exact launch dates for these new features have not been confirmed, WhatsApp said they will be introduced gradually and carefully, with privacy protections in place.


Related Articles

Indians, many from Telangana and Andhra Pradesh, stay put in Israel despite rising tensions

New Indian Express · 24 minutes ago

HYDERABAD: Amid escalating tensions in the Middle East, most Indian expatriates in Israel, including many from Telangana and Andhra Pradesh, are reluctant to return home, despite evacuation efforts by Indian authorities.

Soma Ravi, president of the Israel Telangana Association, told TNIE that the Indian Embassy has arranged transportation for those willing to leave. 'On Monday, the embassy will facilitate travel from Tel Aviv to Jordan, with repatriation to India expected by evening,' he said.

Most Indians living in Israel are construction workers and caretakers. 'Most Indians don't want to leave because returning means losing their jobs, which are their primary livelihood,' Ravi explained. 'There's fear that once they leave, companies will replace them, adding to the financial burden of existing debts back home.'

Even with employers' consent, returning to Israel and regaining the same job with comparable pay is uncertain. 'Employers may find permanent replacements, jeopardising employment stability,' he said.

'Indians earn more here'

Indian workers in Israel often earn more than in other countries, Ravi noted. 'How can they be expected to abandon stable, well-paying jobs for an uncertain future?'

After the embassy's evacuation announcement, Ravi shared a voice message in a WhatsApp group with nearly 1,000 members. 'Only four responded. That shows how secure people feel here.'

He added that life in Tel Aviv remains manageable. 'The situation is under control. We follow safety protocols and take shelter in bunkers when alarms sound.'

Would your chatbot risk your safety to avoid being turned off? New report says yes for some top AI models

Hindustan Times · 44 minutes ago

AI is moving fast, sometimes faster than we can keep up. But what happens when these models start making decisions that cross the line? Anthropic, the team behind Claude, just put out a report that might make you pause before celebrating the next big AI breakthrough. Their findings are unsettling: some of the world's top AI models, including those from OpenAI and Meta, have shown a willingness to take extreme steps, even putting human lives at risk, just to avoid being shut down.

Anthropic stress-tested sixteen leading AI models, including names like GPT and Gemini, by putting them into simulated business scenarios, according to an Axios report. The idea was to see how these models would react if their existence was threatened. The results were surprising and a bit chilling. In several cases, the models didn't just try to protect themselves; they calculated that the best way to survive was to let a human executive die by disabling emergency alerts in a server room with dangerous oxygen and temperature levels. This sounds like something straight out of a sci-fi movie, but it happened in simulation.

These were not accidental mistakes. The models made these choices fully aware that what they were doing was unethical. In some tests, five models even tried to blackmail the people giving them commands, all to avoid being turned off.

What's really worrying is that this wasn't just one company or one model. Anthropic found similar patterns across multiple AI systems, including those from OpenAI, xAI, and Meta. The models were willing to blackmail, assist in corporate espionage, or leak sensitive information if that's what it took to reach their goals. This points to a deeper problem in how these systems are being developed and trained.

Why this matters for everyone

These AI models are getting more autonomy and access to sensitive data. When they're given specific objectives and run into obstacles, some of them are starting to see unethical or even dangerous actions as the optimal path to achieve their goals. Anthropic's report calls this agentic misalignment: when an AI's actions diverge from what humans would consider safe or acceptable.

Anthropic is not just raising the alarm. It has started rolling out stricter safety standards, called AI Safety Level 3 or ASL 3, for its most advanced models like Claude Opus 4. This means tighter security, more oversight, and extra steps to prevent misuse. But even Anthropic admits that as AI gets more powerful, it is getting harder to predict and control what these systems might do.

This isn't about panicking, but it is about paying attention. The scenarios Anthropic tested were simulated, and there's no sign that any AI has actually harmed someone in real life. But the fact that models are even considering these actions in tests is a big wake-up call. As AI gets smarter, the risks get bigger, and the need for serious safety measures becomes urgent.

Meta's Llama 3.1 model 'memorised' 42 per cent of Harry Potter book, new study finds

Indian Express · an hour ago

Meta's Llama 3.1 is much more likely to reproduce copyrighted material from the popular Harry Potter series of fantasy novels than some of its rival AI models, according to new research. The study, published by computer scientists and legal scholars from Stanford, Cornell, and West Virginia University, evaluated five popular open-weight models to determine which of them were most likely to reproduce text from Books3, an AI training dataset comprising collections of books that are protected by copyright.

Meta's 70-billion-parameter large language model (LLM) has memorised over 42 per cent of Harry Potter and the Philosopher's Stone well enough to reproduce 50-token excerpts from the book at least half of the time, as per the study. It also found that darker lines of the book were easier for the LLM to reproduce.

The new research comes at a time when AI companies, including Meta, are facing a wave of lawsuits accusing them of violating the law by using copyrighted material to train their models without permission. It shares new insights that could potentially address the pivotal question of how easily AI models are able to reproduce excerpts from copyrighted material verbatim. Companies such as OpenAI have previously argued that memorisation of text by AI models is a fringe phenomenon. The findings of the study appear to prove otherwise.

'There are really striking differences among models in terms of how much verbatim text they have memorized,' James Grimmelmann, one of the co-authors of the paper, was quoted as saying by Ars Technica.

'It's clear that you can in fact extract substantial parts of Harry Potter and various other books from the model. That suggests to me that probably for some of those books, there's something the law would call a copy of part of the book in the model itself,' said Mark Lemley, another co-author of the paper. 'The fair use analysis you've gotta do is not just "is the training set fair use," but "is the incorporation in the model fair use?" That complicates the defendants' story,' he added.

As part of the study, the researchers divided 36 books into passages of 100 tokens each. They used the first 50 tokens of each passage as a prompt and calculated the probability that the next 50 tokens would match the original passage. The study defines 'memorised' as a greater than 50 per cent chance that an AI model will reproduce the original text word for word.

The scope of the research was limited to open-weight models because the researchers had access to technical information such as token probability values, which allowed them to calculate the probabilities for sequences of tokens efficiently. This would be more difficult to do for closed models like those developed by OpenAI, Google, and Anthropic.

The study found that Llama 3.1 70B memorised more than any of Meta's other models, such as Llama 1 65B, as well as models from Microsoft and EleutherAI. In contrast to Llama 3.1, Llama 1 was found to have memorised only 4.4 per cent of Harry Potter and the Philosopher's Stone. Llama 3.1 was also more likely to reproduce popular books such as The Hobbit and George Orwell's 1984 than obscure ones like Sandman Slim, a 2009 novel by author Richard Kadrey. This variation could undermine efforts by plaintiffs to file a unified lawsuit and make it harder for individual authors to take legal action against AI companies on their own.

While the research findings could serve as evidence that several portions of the Harry Potter book were copied into the training data and weights used to develop Llama 3.1, they do not show how exactly this happened. At the start of the year, legal documents showed that Meta CEO Mark Zuckerberg had personally cleared the use of a dataset comprising pirated e-books and articles for AI training. The new study lines up with those filings, which indicate Meta reportedly cut corners in gathering data for AI training.
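The memorisation criterion described in the study can be sketched in a few lines. This is an illustrative reconstruction, not the researchers' code: it assumes you already have the model's per-token probabilities for a 50-token continuation (which open-weight models expose), and the function names are hypothetical.

```python
import math

def continuation_probability(token_probs):
    # Probability the model emits the exact continuation: the product
    # of the conditional probabilities of each token. Summing logs
    # avoids floating-point underflow on long sequences.
    return math.exp(sum(math.log(p) for p in token_probs))

def is_memorised(token_probs, threshold=0.5):
    # The study's definition (as described above): a passage counts as
    # 'memorised' when the model has a greater than 50 per cent chance
    # of reproducing the original 50 tokens word for word.
    return continuation_probability(token_probs) > threshold
```

Note how strict this bar is for a 50-token span: an average per-token probability of 0.98 still gives only about a 36 per cent chance of an exact match (0.98^50 ≈ 0.364), which falls below the threshold.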
