The Week in AI: "All incumbents are gonna get nuked."


Globe and Mail, 17 hours ago

Welcome back to The Week in AI. I'm Kevin Cook, your field guide and storyteller for the fascinating arena of artificial intelligence.
On Friday, my colleague Ethan Feller and I ran through a dozen developments that are transforming the economy right before our eyes. Here are 7 of the highlights...
1) Jensen at NVIDIA GTC Paris: "We are going to sell hundreds of billions worth of GB200/300."
CEO Jensen Huang has forecast that spending on AI-enabled data centers will double to $2 trillion over the next four to five years. As Grace Blackwell systems deploy -- with 208 billion transistors per GPU, or nearly 15 trillion per GB200 NVL72 rack system -- NVIDIA NVDA engineers are building the roadmap for Rubin and Feynman systems that will likely offer orders of magnitude more power.
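That "nearly 15 trillion" rack figure is simply the per-GPU transistor count scaled by the 72 GPUs in an NVL72 rack. A minimal back-of-the-envelope check, using the figures as quoted above (not independently verified):

```python
# Sanity check of the rack-level transistor count quoted in the article.
transistors_per_gpu = 208e9   # Blackwell: 208 billion transistors per GPU
gpus_per_rack = 72            # GB200 NVL72: 72 GPUs per rack

rack_total = transistors_per_gpu * gpus_per_rack
print(f"{rack_total / 1e12:.2f} trillion transistors per NVL72 rack")
# -> 14.98 trillion, i.e. "nearly 15 trillion"
```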
This is something I've talked about repeatedly for the past year: Wall Street analysts and investors are vastly underestimating the potential of the AI economy and the upgrades in infrastructure that need to occur to support self-driving cars, humanoid robots, and other autonomous machines.
And this doesn't include sovereign nation-states that need to build their own AI infrastructure for security and growth.
If you ever need clarity about the AI Revolution, or just to recalibrate your expectations and convictions, there is one place you need to visit: the NVIDIA Newsroom -- especially around a GPU Tech Conference (GTC). (I show you where in the video.)
For last week's Paris GTC, they rolled out 6 press releases and 19 blogs covering new innovations and partnerships across industry, enterprise, science and healthcare.
Nobody Wanted AI GPUs in 2016
Jensen also retold the story of the first DGX-1 in 2016: a mini supercomputer about the size of a college dorm fridge that held 8 Pascal P100 GPUs with roughly 15 billion transistors each.
And nobody wanted it. Except a little startup called OpenAI.
I like to use this story as an example of how NVIDIA has been in a unique position ever since. They don't have to find "product-market fit" like most companies. Instead, they have been inventing a stack that developers didn't know they needed.
Get the whole story in the replay of last Friday's The Week in AI: The Reasoning Wars, Sam's Love Letter, Zuck's Land Grab.
Even if you don't have time for the 60-minute replay, at least do a quick scroll of the comments where I post all the relevant links to the topics we discussed.
With over 25 links, you are guaranteed to find something that answers your top questions about the AI revolution!
2) The New Civil War In AI: Not Safety, But Efficacy
There are many exciting debates going on in "the revolution" right now. A recent hot conflict is over whether LLMs (large language models) are doing real reasoning, or even thinking.
This one heated up after Apple AAPL researchers released their paper "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models."
We are amazed by the research, writing, pattern-finding and puzzle-solving of these models. But the Apple researchers found limits where the models appear to "give up" on larger problems, with accuracy collapsing as puzzle complexity grows.
And it's worth pondering if they are simply "token prediction" machines that eventually get wrapped around their own axles.
I've experienced this with some of the "vibe-coding" app developer tools like Replit and Bolt.
But other analysts and papers quickly responded by surfacing "limitations" of the Apple research itself, suggesting that the output budgets imposed on the models were the defining factor in their giving up.
One of the rebuttal papers was titled "The Illusion of the Illusion of Thinking." Again, all these links are in the comments section of The Week in AI.
3) Google Offers Buyouts: AI Headcount Crunch Beginning?
My third topic was once again about the employment impact from generative AI and agentic AI being adopted in corporations.
I ran a query on ChatGPT for the "top 100 jobs most likely to be disrupted" in the next 3 years. You can find the link in the comments of the X Space.
Another tangible angle on job displacement was the revolutionary ad during the NBA finals by the prediction market platform Kalshi. It was created with Google's new Veo 3 video generator by a filmmaker named PJ Ace.
Ethan and I discussed how this innovation is certain to disrupt advertising, marketing, and film, as the machines can do in minutes what used to take a team of people weeks.
And wait until you see the new Veo 3 ad from a Los Angeles dentist that is taking social media by storm. We'll talk about that in this Friday's Space.
Welcome to the Machine
But the most eye-opening news flash for me was the story on a company called Mechanize. While lots of job displacement will happen organically, this outfit is like a mercenary going after headcount.
The New York Times titled their article "This A.I. Company Wants to Take Your Job."
And here's how an X post described the piece about the startup that wants to automate white-collar work "as fast as possible"...
"Mechanize wants to abolish all jobs. They make no secret of this. They are developing an AI program that is extremely promising and is being financed by everyone from Google to Stripe."
Then there is Anthropic co-founder Ben Mann, saying we'll know AI is transformative when it passes the "Economic Turing Test":
"Give an AI agent a job for a month. Let the hiring manager choose: human or machine? When they pick the machine more often than not, we've crossed the threshold."
I have several posts in the comments of "The Week in AI" X Space on the employment wars. Plus, just about every post comes from a source of AI insight or expertise whose account you should be following on X.
4) Marc Andreessen: "All incumbents are gonna get nuked. Everything gets rebuilt."
Translation: AI isn't an economic upgrade. It's a total reset.
Which brings me to my favorite part of our Friday X Space...
Cooker's RANT of the WEEK: "The Magical AI Transformation Won't Be So Gentle." Here I take the other side of Sam Altman's blog post from last week titled "The Gentle Singularity."
I call it his "love letter" not to make fun of him, but to highlight his optimism in the face of brewing storms.
A few weeks ago it was Anthropic CEO Dario Amodei warning us about the rapid disruption of work and its impacts on citizens and families, not just the economy. Then the elder statesman of AI, Geoffrey Hinton, shared these sentiments in a recent interview...
The best-case future is a "symbiosis between people and AI" -- where machines handle the mundane, and humans live more interesting lives.
But in the kind of society we have now, he warns, AI won't free most people. It will concentrate power, and as massive productivity increases create joblessness, it will mostly benefit the rich.
This sober view instantly made me think of Yuval Noah Harari's 2016 book Homo Deus, in which the historian described how technology's benefits usually get concentrated in the hands of the rich and powerful. It's just how economics works, no matter the political flavor.
In this way, AI can move quickly beyond issues of personal safety, to those of economic security. In the X Space replay and the comments below it, I discuss the implications of "post-labor economics" as well as share more expert resources on these topics.
Be sure to catch the replay of The Week in AI to hear my sense of the "not-so-gentle" transition we are headed into.
5) Apple WWDC: The Non-Event of the Week in AI
For what to expect (or not) from Apple in AI innovation, I always turn to Robert Scoble on X @Scobleizer. Here were some of his summary posts...
Cynical take on Apple's WWDC: just doing things Microsoft did back in 2003. Liquid glass. Menus on tablets.
Dark take on it: it's way behind in AI, and didn't demonstrate any attempt to catch up.
Light take: Lots of new AI features, like your phone will wait on hold for you now.
Hopeful take: the new design joins Apple Vision Pro into its ecosystem, showing that the Apple Vision Pro is the future of Apple.
Scoble adds: I really hate the recorded product demos and the old people showing new features and attempting to be "hip."
On a more Apple-positive note, Scoble is looking forward to the next devices which should be coming in the AR space...
Later this year both Apple and Google are introducing heavyweight category wearables. Lighter than the first Vision Pro.
We will judge them by who has the best AI inside.
That is more important than resolution.
Google, today, looks like it is way ahead and pulling further away because this is a game of exponents.
I will buy both anyway. :-)
(end of @Scobleizer rants)
Many experts are sensing that Alphabet GOOGL is "firing on all cylinders across AI" as we've discussed previously. From Gemini 2.5 Pro and the astonishing new Veo 3 to building AI capacity with their own TPUs (instead of relying on NVIDIA GPUs), they're the only vertically-integrated player across all realms of tech.
Google will probably also figure out the shift from classic search to generative search, as Daniel Newman of the Futurum technology research group says. Reports of Google's demise have been greatly exaggerated, according to @DanielNewmanUV, and I wish I had been listening before I sold my shares during the last "search is dead" scare.
6) Zuck Splashes the Pot with $14.3 Billion
Meta Platforms META plunked down that amount for only 49% of a private company called Scale AI. But the price tag made it the biggest pure-AI deal yet, ahead of OpenAI's roughly $6.5 billion purchase of Jony Ive's device startup io.
Just like Sam wasn't waiting around to find out what AI-native device Apple will build, so too Zuck isn't waiting around for permission to have access to the premier company in the data supply chain -- what some are calling the oil refinery of the AI economy.
What does that mean? Well if you think about data as various grades of crude oil, it needs to be cleaned and prepped in a number of ways before it can be "mined and modeled" for quality results.
That's where Scale AI comes in with data prep and labeling, because major AI models need structured, labeled training data before they can generate useful knowledge and insights.
Scale AI is a San Francisco-based artificial intelligence company founded in 2016 by Alexandr Wang and Lucy Guo. The company specializes in providing high-quality data labeling, annotation, and model evaluation services that are essential for training advanced AI models, including large language models (LLMs) and generative AI systems.
Scale AI is known for its robust data engine, which powers AI development for leading tech firms, government agencies, and startups worldwide. Its research division, the Safety, Evaluation and Alignment Lab (SEAL), focuses on evaluating and aligning AI models for safety and reliability.
7) AMD Unveils AI Server Rack, Sam on Stage with Lisa
I am still shaking my head at all the stuff that happened last week! As if all of the above wasn't enough, Advanced Micro Devices AMD held its annual Advancing AI conference last Thursday with a product roadmap for hyperscale inferencing that caught investor attention.
In addition to leaps forward in performance for the existing Instinct MI350 Series GPU systems, AMD CEO Lisa Su unveiled the Helios AI Rack-scale architecture supporting up to 72 MI400 GPUs, with 432GB of HBM4 memory per GPU and 19.6 TB/sec bandwidth. Available in 2026, this is clearly an answer to NVIDIA's GB200/300 series rack systems.
AI Market Growth: CEO Lisa Su projected an 80% increase in AI inference demand by 2026, driven by the rapid adoption and expansion of AI applications in enterprise and cloud environments.
Roadmap: AMD reaffirmed its commitment to an annual cadence of AI chip releases, with the MI400 and MI450 series already in development and expected to challenge Nvidia's flagship offerings in 2026 and beyond.
And then Sam Altman showed up during Lisa's keynote. Since he clearly can't get enough compute or GPUs, he's as tight with Lisa as he is with Jensen.
Lisa welcomed the OpenAI founder and CEO on stage as a key design partner for AMD's upcoming MI450 GPU, one who will help shape the next generation of AMD's AI hardware. OpenAI will use AMD GPUs and Helios servers for advanced AI workloads, including ChatGPT.
And AMD's other happy customers continue to come back for more with Meta deploying AMD Instinct MI300X GPUs for Llama 3/4 inference and collaborating on future MI350/MI400 platforms.
Meanwhile Microsoft Azure runs proprietary and open-source models on AMD Instinct MI300X GPUs in production and Oracle Cloud Infrastructure will deploy zettascale AI clusters with up to 131,072 MI355X GPUs, offering massive AI compute capacity to customers.
This event made AMD shares a clear buy last week -- and this week if you can still grab some under $130!
OLD RANT: The Fundamental Difference
Finally, did you hear what another OpenAI co-founder said at the University of Toronto commencement address? Ilya Sutskever, the OpenAI architect and deep learning pioneer who in 2024 started his own model firm, Safe Superintelligence, spoke these words to the new grads...
"The day will come when AI will do all the things we can do. The reason is the brain is a biological computer, so why can't the digital computer do the same things?
"It's funny that we are debating if AI can truly think or give the illusion of thinking, as if our biological brain is superior or fundamentally different from a digital brain."
I had to dig out my old rant about the fundamental difference(s) between human brains and computer "thinking." If you haven't heard me on this, you owe it to yourself so you can easily explain the differences to other "intelligence experts" telling you how consciousness works.
Bottom line: To stay informed in AI, listen to The Week in AI replay, or just go to that post to see all the links and sources. And be sure to follow me on X @KevinBCook so you see the announcement for the new live Space every Friday.
Only $1 to See All Zacks' Buys and Sells
We're not kidding.
Several years ago, we shocked our members by offering them 30-day access to all our picks for the total sum of only $1. No obligation to spend another cent.
Thousands have taken advantage of this opportunity. Thousands did not - they thought there must be a catch. Yes, we do have a reason. We want you to get acquainted with our portfolio services like Surprise Trader, Stocks Under $10, Technology Innovators, and more, that closed 256 positions with double- and triple-digit gains in 2024 alone.
See Stocks Now >>
Want the latest recommendations from Zacks Investment Research? Today, you can download 7 Best Stocks for the Next 30 Days. Click to get this free report
Apple Inc. (AAPL): Free Stock Analysis Report
Advanced Micro Devices, Inc. (AMD): Free Stock Analysis Report
NVIDIA Corporation (NVDA): Free Stock Analysis Report
Alphabet Inc. (GOOGL): Free Stock Analysis Report
Meta Platforms, Inc. (META): Free Stock Analysis Report


Related Articles

Pope Leo XIV flags AI impact on kids' intellectual and spiritual development

Winnipeg Free Press, an hour ago

ROME (AP) — Pope Leo XIV warned Friday that artificial intelligence could negatively impact the intellectual, neurological and spiritual development of young people as he pressed one of the priorities of his young pontificate.

History's first American pope sent a message to a conference of AI and ethics, part of which was taking place in the Vatican in a sign of the Holy See's concern for the new technologies and what they mean for humanity.

In the message, Leo said any further development of AI must be evaluated according to the 'superior ethical criterion' of the need to safeguard the dignity of each human being while respecting the diversity of the world's population. He warned specifically that new generations are most at risk given they have never had such quick access to information.

'All of us, I am sure, are concerned for children and young people, and the possible consequences of the use of AI on their intellectual and neurological development,' he said in the message. 'Society's well-being depends upon their being given the ability to develop their God-given gifts and capabilities,' and not allow them to confuse mere access to data with intelligence. 'In the end, authentic wisdom has more to do with recognizing the true meaning of life, than with the availability of data,' he said.

Leo, who was elected in May after the death of Pope Francis, has identified AI as one of the most critical matters facing humanity, saying it poses challenges to defending human dignity, justice and labor. He has explained his concern for AI by invoking his namesake, Pope Leo XIII. That Leo was pope during the dawn of the Industrial Revolution and made the plight of workers, and the need to guarantee their rights and dignity, a key priority.

Toward the end of his pontificate, Francis became increasingly vocal about the threats to humanity posed by AI and called for an international treaty to regulate it.
Francis said politicians must take the lead in making sure AI remains human-centric, so that decisions about when to use weapons or even less-lethal tools always remain made by humans and not machines.

Associated Press religion coverage receives support through the AP's collaboration with The Conversation US, with funding from Lilly Endowment Inc. The AP is solely responsible for this content.

In the AI revolution, universities are up against the wall

Globe and Mail, an hour ago

Mark Kingwell is a professor of philosophy at the University of Toronto. His latest book is Question Authority: A Polemic About Trust in Five Meditations. It's convocation season. Bored graduates everywhere will be forced to listen to earnest speeches about how they should make their way in a world short on decent jobs. I've given a couple of those orations myself. Here's the one I won't be giving this year but would have if asked. Hey guys! You've probably heard that philosophers are in the habit of declaring their discipline dead. Thinkers are forever claiming that everyone before them had the wrong ideas about time, being, or knowledge. Great – it's a vibrant patricidal enterprise. But I'm here today to tell you that philosophy is dead for good this time. So is humanistic education in general, maybe academia itself. The murderous force isn't just anti-elitist, Trump-driven depredation. No, as Nietzsche said of the death of god, we have done the killing. Smartness destroys from the inside out: The AI revolution has signalled the demise of the university as we know it. After all, how do we teach undergraduates philosophy, history or anything else when it's now so easy to fake the whole process? Students still think it might be wrong, or maybe risky, to have an algorithm write their essays wholesale. But increasingly they don't see what's wrong with using programs to take notes, summarize readings and create or correct first drafts. Reading, meanwhile, is tedious and hard, and so the idea of assigning entire books – even novels – is sliding out of academic fashion. Average attentions spans have shrunk from several minutes to about 40 seconds. You won't counter that by putting Aquinas's Summa or Spinoza's Ethics on the syllabus. At the same time, these same students resent knowing that professors might use countervailing programs to grade their work. They also dislike the idea that somebody in authority might consider them cheaters. 
Indeed, some students now resort to surveillance-society mechanisms, once the bugbear of free citizens everywhere, to prove that they are not cheating, including YouTube videos of them composing their guaranteed-human-origin essays. So: programs for recording screen activity or documenting keystrokes are now being asked to view performative acts of being-watched. And programs for cheating on essays confront programs designed to catch cheaters but also programs designed to counter the need for human grading altogether. These countervailing programs produce and consume each other; they watch and are watched, cheat and are cheated, pursue grades and are duly graded. I'm not the first to notice that there is no further need for human middle men here. Students and professors alike are extraneous to the system. A techno-bureaucratic loop enfolds them, then snips them off as messy loose ends. We have created the ultimate state of frictionless exchange, a circulating economy of the already-thought, the banal, the pre-digested, where every Google search leads to a fabricated source that eventually bounces back to base. Peak efficiency, with net gains in eliminated boredom. Yay! So why resist assimilation? Recently I sat in a seminar organized by my colleagues to consider ways of testing students in class, as a foil to chatbot cheating. The proposed tests involved various small-scale fact-finding exercises, truncated arguments, and the logic-skills equivalent of a magazine puzzle page. One professor suggested that actual written essays should be reserved only for upper-level undergraduates and graduate students, if anyone. Fine, I suppose, but how would those upper-level students ever learn how to write in the first place, let alone write well? Forget AI essay cheating. Basic writing ability, always prone to deterioration, is now disappearing faster than map-reading skills and short-term memory. 
You can no longer assume that first-year students know how to compose even the most basic 'hamburger' essay (bun, lettuce, tomato, patty, bun). And still we believe – do we not? – that clear writing is the foundation of clear thought. Alas, that faith no longer seems so warranted. Writing seems more and more surplus to requirements. It can be off-loaded as a dreary chore, like so much dirty laundry sent out for cleaning. I recently wondered, not for the first time, if I had been labouring under a mistaken notion of philosophy, and teaching it, all along. If the subject can be distilled down to a roster of positions, specific argumentative moves and technical terms – which is how I believe some of my colleagues see it – then we can indeed dispense with sustained discursive engagement, and the clunky old-fashioned fraud-prone essay with it. But then, what would education be like? What would it be for? Good questions. Maybe the current proclaimed academic death-rattle is actually an opportunity to go back to first principles, inside the walls and out. In my discipline's case, the issue is not so much the end of philosophy, in other words, but the ends of philosophy. Like most teachers of the subject, I have long been conflicted about our mechanisms of assessment. Essays are a slog for everyone, even when they're legit products of individual minds. In-person final exams can control for essay cheating, most of the time, but they are a poor method of gauging the depth of philosophical insight. The old joke from Annie Hall makes the point: 'I was thrown out of college for cheating on the metaphysics exam,' it goes. 'I looked into the soul of the boy sitting next to me.' Like many philosophy professors, I prefer discussion in seminars, close reading of textual passages, and face-to-face assessment over both essays and exams. I ask for short, ungraded weekly reflection papers that my students seem to enjoy writing and I certainly enjoy reading. 
But these small-bore tools are not scaleable for our vast budget-driven enrolments. And always, grades loom far larger than they should over the whole enterprise. Once you start questioning assessment, you slide very quickly into uncomfortable thoughts about the larger purpose of any teaching. The irony is doubled because asking 'What is the use of use?' is one of those typical philosophical moves. Updated version for the age of neo-liberal overproduction: What is the use of asking what is the use of use, when large language models can do it for you?' I admit I get impatient when, at this stage of things, people invoke some vague notion of distinctive humanness, a form of species-centric superiority. I mean those hand-wavy claims that there is something about what we humans do that is just, well, different from AI versions of things. Different and better. No AI could ever match the uniqueness of the human spirit! Well, maybe. But let's be serious: This line of argument is ideological special pleading. There are some 8.2 billion unique human souls on the planet. Yes, a minority break free of the sludge of mediocrity, and we celebrate them. We also cherish the experience of our own lives, however mundane. But we're now forced to realize that some, even many, sources of human pride can be practised as well, if not better, by non-human mechanisms. Art and poetry fall before the machines' totalizing recombinative invention. Even athletics, apparently deeply wedded to the human form, are being colonized by cyborg technology. You might think this is just griping from another worker whose sector is destined for obsolescence. True, neoliberal overproduction and dire job prospects have likely produced more philosophy teachers – and many more student essays – than the world needs. From this angle, AI's great academic replacement is just a market correction. 
It completes a decades-long self-inflicted irrelevance program, those thousands of punishing essays that nobody reads, the best ones published in journals that are, more and more, pay-as-you-go online boondoggles. I still think those abstruse debates are important, though, and you should too. We are at a transitional point that demands every tool of critical reflection, human or otherwise. Anxiety about the future of work and life is pitched high, for good reason. For now we are still mostly able to spot uncanny AI slop, bizarre search-engine confabulations, and bot-generated recommendations for books that have been invented by bots – presumably so that other bots can then not-read them, scrape the data for future reconstitution, and maybe submit unread book reports for academic credit somewhere. We can even, for the moment, recognize that non-bot government bans on actual books, and state-sponsored punishment of legacy liberal education, pose a threat to everyone's freedom. But I still think we are losing, in the current murk, something that only philosophy can provide. It's something that has always been posthuman in the dual sense of transcendent and transformative. I don't just mean a critical-thinking skill set, or body of facts, or even the basics of media literacy and fallacy-spotting – though these are essential tools for life. I mean, rather, the things that animate the hundreds of students who still come to our classes: the value of self-given meaning and purpose, the pleasure of being good at hard things for their sake alone, a consuming joy in the free play of imagination. A desire to flourish, and to bend the arc of history toward justice. I don't know if those things are exclusive to humans; I do know that they are threatened and in short supply among existing humans. The love of wisdom can't really be taught, for it is a turning of the soul toward the beautiful and good. 
You can't justify the value of that turning to someone who has not yet felt the necessary shift in value. That's the paradox of all philosophy, and of all philosophy teaching. There will be no exam after this lecture, graduates. The real test is no more, but also no less, than life itself. You are a speck of dust in an indifferent universe. Now make the most of it.

Telegram founder Pavel Durov says all his 100+ children will receive share of his estate

CTV News, an hour ago

Telegram CEO Pavel Durov said he didn't want his children to fight over his estate. (Thomas Samson/AFP/Getty Images via CNN Newsource) Pavel Durov, the founder and CEO of instant messaging app Telegram, plans to leave his fortune to the more than 100 children he has fathered. The Russian-born tech tycoon has revealed that his estate will be split between his six children from relationships and the scores of others whom he fathered through sperm donation. In a wide-ranging interview published Thursday in French political magazine Le Point, 40-year-old Durov revealed that he does not differentiate between his legal children with three different women and those conceived with the sperm he donated. 'They are all my children and will all have the same rights! I don't want them to tear each other apart after my death,' he said, after revealing that he recently wrote his will. Durov revealed the number of children he has fathered on his social media last year. He said a doctor told him that it was his 'civic duty' to donate his 'high quality donor material,' which he did over the course of 15 years. According to Bloomberg, Durov is worth an estimated US$13.9 billion, but he dismissed such estimates as 'theoretical,' telling Le Point: 'Since I'm not selling Telegram, it doesn't matter. I don't have this money in a bank account. My liquid assets are much lower – and they don't come from Telegram: they come from my investment in bitcoin in 2013.' Regardless, his children will have a long wait for their inheritance. He said: 'I decided that my children would not have access to my fortune until a period of 30 years has elapsed, starting from today. I want them to live like normal people, to build themselves up alone, to learn to trust themselves, to be able to create, not to be dependent on a bank account. I want to specify that I make no difference between my children: there are those who were conceived naturally and those who come from my sperm donations.' 
When asked why he has written his will now, Durov, who lives in Dubai, said: 'My work involves risks – defending freedoms earns you many enemies, including within powerful states. I want to protect my children, but also the company I created, Telegram. I want Telegram to forever remain faithful to the values I defend.' Telegram, which has more than a billion monthly users, is known for its high-level encryption and limited oversight on what its users post. Last year, Durov was arrested in Paris on charges relating to a host of crimes, including allegations that his platform was complicit in aiding money launderers, drug traffickers and people spreading child pornography. Durov, who is Telegram's sole shareholder, has denied the charges, which he described as 'absurd.' 'Just because criminals use our messaging service among many others doesn't make those who run it criminals,' he told the French magazine. Lianne Kolirin, CNN
