
An Unexpected Move by Meta Changes the Rules of Artificial Intelligence
Meta, the social media giant, has launched its first standalone AI assistant app, a clear move to compete with platforms like ChatGPT by giving users direct access to its generative AI models.
Mark Zuckerberg, the company's founder and CEO, announced the launch in a video on Instagram, noting that over one billion users are already interacting with the 'Meta AI' system across the company's various apps. The new release comes in the form of a standalone app, offering users a personalized and direct experience.
Zuckerberg explained that the app is designed to serve as a personal assistant for each user, relying primarily on voice interaction and tailoring responses to individual interests. Initially, the app uses minimal contextual information, but over time—and with user consent—it will be able to learn more about users' habits and social circles through Meta's connected apps. The AI is based on the open-source generative model 'LLaMA,' which has garnered significant attention from developers and has been downloaded over a billion times, making it one of the most widely used models in its category.
The app features a design aligned with Meta's social nature, allowing users to share AI-generated posts and view them in a personalized feed. It's powered by a newer version of the model known as 'LLaMA 4,' which brings more personalized and flexible interactions. Users can also choose to save shared information to avoid repeating it in future conversations. Additionally, the app offers the ability to search within Facebook and Instagram content—provided prior permission is granted.
This app serves as an alternative to the 'Meta View' app used with Ray-Ban Meta smart glasses, enabling seamless interaction across glasses, mobile, and desktop platforms through a unified interface.
The launch comes at a time when major tech companies are racing to release intelligent assistants aimed directly at users, with OpenAI still leading the market through the ongoing development of ChatGPT and its continuous integration of advanced features.
Related Articles


Ya Biladi, 4 days ago
Internet tops news sources in Morocco, but trust remains low
The Digital News Report 2025, published by the Reuters Institute for the Study of Journalism, highlights notable changes in Morocco's media landscape. After years of criticism over limited press freedom and state control of media outlets, some positive signs are beginning to emerge. In recent months, Morocco has seen the release of detained and exiled journalists, along with the rise of new independent voices on digital platforms. These developments offer a glimmer of hope for a freer and more open media environment. However, this progress is unfolding in a digital ecosystem where trust remains fragile, and recent regulatory reforms have drawn mixed reactions.

The report also points to a sharp increase in content production. In August 2024 alone, over 136,000 articles were published, most of them online, representing a 23.7% year-on-year increase. This surge spans multiple languages, reflecting growing momentum as Morocco prepares to co-host the 2030 FIFA World Cup with Spain and Portugal.

Digital Platforms Dominate News Consumption

The internet has become the primary source of news for most Moroccans, with 78% of respondents saying they rely on it. Social media and messaging apps play a central role in this shift. YouTube is now the most-used news source (49%), followed by Facebook (47%), Instagram (32%), and TikTok (24%). WhatsApp groups are also widely used for news sharing (30%), alongside Telegram, which is gaining ground.

Yet this shift to digital has brought new challenges, chief among them the spread of misinformation. More than half of respondents (54%) say they struggle to tell real news from fake online. Digital influencers are seen as the main culprits (52%), followed by local politicians (30%). Social platforms and video apps have fueled the rise of a new generation of content creators who are reshaping how news is produced and consumed, particularly among young people.
YouTube, in particular, has become a hub for bloggers, political commentators, and influencers, some of whom test the limits of acceptable discourse in Moroccan public life. Despite the growth in digital engagement, trust in news remains low in Morocco, among the lowest globally. According to the report, only 28% of respondents said they trust the news. Many cite a lack of media independence and the tendency of outlets to avoid sensitive issues or echo official government positions.


Morocco World, 4 days ago
OpenAI Signs $200 Million Contract with US Pentagon, Raising Alarm
On Monday, June 16, the United States Department of Defense signed a $200 million contract with OpenAI to deploy generative Artificial Intelligence (AI) for military use, despite the company's previous commitments not to develop AI tools for warfare. According to the Pentagon, OpenAI—the US-based creator of ChatGPT—will 'develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains.' Under this cooperation, OpenAI plans to demonstrate how advanced AI can enhance administrative functions, such as healthcare for military service members and cyber defense.

The new deal follows revelations that OpenAI's Chief Product Officer, Kevin Weil, and two former AI executives have been commissioned as lieutenant colonels in the US Army. Similarly, the US military has recruited top executives from Meta and Palantir—a data analytics firm notorious for enabling surveillance—to form Detachment 201, a unit dedicated to embedding AI and tech expertise into military operations.

While OpenAI had collaborated with defense contractors before, this marks its first direct partnership with a government. OpenAI claims that all military applications will comply with its own usage guidelines—standards the company itself sets and which have failed to uphold consistent ethical principles. Initially, OpenAI had explicitly banned its AI tools from being used for military and warfare purposes. However, the explicit wording was quietly removed in January of last year. OpenAI later announced a partnership with defense contractor Anduril Industries to integrate its AI into counter-drone systems.

Palestine as a testing ground

These developments raise alarm over the rapid militarization and weaponization of AI, especially as these technologies are already deployed in the ongoing genocide against Palestinians in Gaza and the occupied West Bank.
OpenAI has been linked to the Israeli Occupation Forces (IOF) through collaborations with companies like Microsoft, contributing to the development and deployment of AI systems such as Gospel and Lavender. These systems have reportedly been used to identify, track, and target individuals and civilian structures in Gaza, including homes, residential buildings, and even aid workers—playing a direct role in facilitating Israel's genocide.

Meta has long enforced systemic censorship against pro-Palestinian content since October 2023. Human Rights Watch has documented how Meta's platforms—including Facebook and Instagram—have suppressed posts about Palestinian human rights, peaceful protests, and documentation of abuses, driven by flawed moderation policies, over-reliance on automated tools, and likely government influence.

Palantir Technologies has been implicated in the Gaza genocide by supplying advanced AI-powered surveillance and data analytics to the IOF, used to identify and preemptively detain Palestinians in Gaza and the West Bank, enabling gross violations of international humanitarian law. In January 2024, Palantir cemented its complicity by signing a strategic partnership with Israel's Ministry of Defense, with CEO Alex Karp publicly expressing pride in supporting Israel's 'war effort.'

These egregious violations of Palestinian rights and lives continue unchecked, largely due to Western indifference. Gaza and the occupied West Bank have long served as experimental grounds for the latest and deadliest warfare technologies—where AI-powered surveillance, automated targeting systems, and predictive policing tools are tested on a captive population under a brutal occupation and apartheid.
That US government agencies are now openly partnering with the very tech companies behind these systems further legitimizes the concern that the brutal tactics refined on Palestinians will be normalized and exported on a much wider scale, expanding state violence and repression under the guise of technological progress.


Morocco World, 14-06-2025
Moroccan Software Engineer Accuses UN of Whitewashing Genocide
Rabat – Moroccan software engineer and former Microsoft AI employee Ibtihal Aboussad is sounding the alarm about the United Nations' upcoming 'AI for Good' summit, scheduled for July 8–11. Aboussad accuses the UN of whitewashing tech companies' role in enabling Israel's AI-assisted genocide against Palestinians in Gaza by giving them a platform at the summit.

'These companies provide the cloud infrastructure and AI technologies that allow Israel to accelerate its genocide in Gaza and uphold its regime of apartheid against all Palestinians,' said Aboussad, naming Google, Microsoft, Amazon, Oracle, IBM, Cisco, and Palantir as examples of complicit corporations. She warned that unless these technologies are regulated, their weaponization poses a threat to all of humanity, denouncing the UN's collaboration with these firms as 'UNlawful, UNacceptable, and truly UNbelievable.'

Backed by the Palestinian-led Boycott, Divestment and Sanctions (BDS) movement, No Azure for Apartheid, and millions across the globe, Aboussad is calling for global pressure on the UN and its member states to end partnerships with genocide-enabling tech companies wherever possible, and to formally designate and regulate AI and cloud computing as dual-use technologies subject to international regulation. Dual-use designation would mean recognizing that these tools—often marketed as neutral or humanitarian—can serve both civilian and military purposes, including surveillance, targeting, and warfare, just like nuclear materials or chemical agents. Such a classification would subject them to legal controls, export restrictions, and transparency requirements.

'I'm appalled that the United Nations, which is supposed to uphold international law, is now partnering with corporations that are openly violating it,' Aboussad added, urging summit speakers and supporters to either publicly endorse these demands or withdraw if the UN refuses to meet its legal and ethical responsibilities.
This is not the first time Aboussad has made headlines for her outspoken support for Palestine. In April of this year, she was fired by Microsoft after publicly confronting company executives during a live presentation at their Redmond headquarters. Addressing Microsoft AI CEO Mustafa Suleyman directly, Aboussad declared, 'Mustafa, shame on you. You claim to care about using AI for good, but Microsoft sells AI weapons to the Israeli military. Fifty thousand people have died, and Microsoft powers this genocide in our region.'

Microsoft-enabled atrocities

Aboussad, who directly witnessed Microsoft AI's provision of tools to the Israeli Occupation Forces (IOF) and Israeli government to surveil and target Palestinians, called on UN Secretary-General Antonio Guterres to launch an investigation into corporate capture within the UN system and to sever ties with Microsoft's UN Affairs offices in Geneva and New York. 'Let's remind him that Microsoft knowingly provides Israel with customized technology, including AI, that enables its atrocious crimes against Palestinians,' she said.

BDS has identified Microsoft as one of the most complicit companies in Israel's apartheid regime and ongoing genocide in Gaza, accusing it of knowingly supplying technologies that facilitate war crimes, crimes against humanity, and grave human rights violations. Microsoft's complicity extends to deep collaborations with the IOF, Israeli ministries, and the Israeli prison system, which is notorious for documented, systematic torture of Palestinian detainees. 'Microsoft has failed its corporate obligation to prevent genocide, war crimes, and crimes against humanity. Its board of directors and executives may face criminal liability for this complicity,' BDS warned, citing the International Court of Justice's (ICJ) legally binding, provisional rulings.
Aboussad concluded by reaffirming her belief that AI can be used for the good of humanity—if and only if it is properly regulated and governed by enforceable legal and ethical frameworks that prevent its weaponization. 'Let's regulate AI before it's too late. Palestinians and humanity cannot wait any longer,' she said.

The AI for Good Global Summit brands itself as the UN's leading platform for showcasing how artificial intelligence can address pressing global challenges. First held in 2017, it is organized by the International Telecommunication Union (ITU) in collaboration with over 40 UN agencies and aims to promote AI applications aligned with the Sustainable Development Goals (SDGs)—from healthcare and poverty reduction to climate action and gender equality. This year's program includes the grand finale of the Robotics for Good Youth Challenge, pitch sessions for women entrepreneurs from the Global South, and panels on AI in brain health, including Alzheimer's treatments—noble causes that risk being undermined by the summit's silence on and whitewashing of AI's deployment in state violence and genocide.