
How to generate images with ChatGPT on WhatsApp
You can easily add ChatGPT as a new contact on WhatsApp. — AFP
For several months, all WhatsApp users have been able to communicate directly with ChatGPT via a dedicated phone number. It is now also possible to use this number to generate images and then share them with contacts.
OpenAI has announced that all WhatsApp users can now generate images directly via ChatGPT, whether or not they have an account with the service.
Simply add the US phone number 1-800 CHATGPT (1-800-242-8478) to your WhatsApp contacts to start a conversation directly with the AI service. From then on, a simple prompt, i.e., a request written in natural language, can be used to generate an image. Note that it is also possible to send a photo and ask the AI to modify it in various ways, such as by applying a cartoon style. These AI-generated images can then be easily shared in WhatsApp chats or groups, just like any other photo.
This feature, directly integrated into WhatsApp, is based on GPT-4o, a model capable of processing both text and images. All WhatsApp users can generate one image per day, free of charge. This quota increases if you link your ChatGPT account, whether free or paid.
For OpenAI, this is another step towards making its AI tools accessible to as many people as possible, the idea being to democratize AI-assisted image generation. In this sense, WhatsApp, with over two billion users worldwide, is a prime channel. But OpenAI is also stepping on the toes of WhatsApp's owner, Meta, which has already integrated its own AI tool into the application. Meta's assistant can answer a multitude of questions, but cannot yet generate images. – AFP Relaxnews

Related Articles

The Star, 2 hours ago
Jony Ive deal removed from OpenAI site over trademark suit
Marketing materials and video related to a blockbuster partnership between former Apple Inc designer Jony Ive and OpenAI Inc were removed from the web due to a trademark dispute. Social media users noticed Sunday that a video and website hosted by OpenAI announcing the artificial intelligence company's US$6.5bil (RM28bil) acquisition of Ive's secretive AI hardware startup IO Products were no longer online, prompting speculation about a hiccup in the agreement. The public-facing materials were pulled due to a trademark dispute, according to spokespeople for Ive and OpenAI. Bloomberg reported last week that a judge was considering barring OpenAI from using the IO name due to a lawsuit recently filed by the similarly named IYO Inc, which is also building AI devices. "This is an utterly baseless complaint and we'll fight it vigorously," a spokesperson for Ive said on Sunday. – Bloomberg


The Star, 20 hours ago
Big Smile, No Teeth: Is AI leading us towards a 'Wall-E' future?
If you've been reading this column with any regularity, you probably think I lean on artificial intelligence (AI) a little too much. And you're absolutely right! I use AI for everything: brainstorming scripts, editing film, finding recipes to use up what I have in the fridge, and even helping to raise my boy. Am I going too far? Possibly. But the input of AI, which is basically like having an expert on every topic at my fingertips, is too valuable for me to ignore. I'm sure many of us feel the same – but is this atrophying our brains?

A study by Carnegie Mellon University in the United States and Microsoft Research noted that AI makes things easier for workers but may lead to a decrease in critical thinking. The study found that with less critical thinking, AI-generated work was simply cut and pasted, people relied on AI for decision-making, and tasks became routinely solved with AI, reducing human problem-solving. Which all makes sense. AI can generate long, seemingly well-researched answers, so it's easy to default to the idea that it must be right.

And this is where one's personal expertise comes in. While 62% of people reported engaging in less critical thinking when using AI, the 27% who were confident in their expertise were more likely to critically assess AI instead of blindly following it. Which makes sense. If I use AI to do something I've never done, I'm going to lean heavily on its input. But when I use it to help me write a script or even this article, it becomes an assistant, because in those fields I know what I'm doing (I hope).

But what happens when no one knows how to make sausage from scratch anymore? This is my big fear with AI. Remember that 2008 Pixar movie Wall-E, where the humans live on a giant cruise ship in space and get carted around and cared for by robots? In that world, no one knows how to do anything for themselves anymore. Every task is completed by asking a robot to do it. Without their tech, the humans are useless.

Right now, we still have experts in different fields, experts who have honed their craft through years of education and then decades of experience. Think writers, coders, lawyers, etc. But what if the next generation in these fields use AI to learn their craft? What if they never create from scratch? Then we may be getting closer to that Wall-E future than we'd like to be. Because once one generation skips learning how to do tasks from scratch, do we lose all the knowledge of how to do those things? Then we're forced to depend on AI.

MIT completed a study of ChatGPT users in the United States and found that 83% of users couldn't quote from the essays they had written using AI. Which makes sense, because if you're not writing your content, how well do you really know it? When using AI to write an essay, the brain engages less than half the connectivity it otherwise would, so you're less engaged. And of course, researchers found that users who leaned on AI to write essays wrote worse essays than those who had never used it. While ChatGPT makes one 60% faster at completing tasks, it reduces the cognitive load needed for learning by 32%. We are indeed on the fast track to that Wall-E scenario.

But an even bigger fear of mine, especially if people don't question AI, is just how much AI massages your ego. I noticed every response ChatGPT gave me was some version of: 'Good question, Jason! You are absolutely right in asking that!' Or: 'Wow! That is so you, Jason, that is some great insight!' One X user said that after engaging with ChatGPT for a single conversation, the AI was calling him godlike. Most people, I hope, are self-aware enough to know when something is buttering their butt... But going back to the 62% of people who reported less critical thinking with AI, are they just accepting that AI thinks they're super smart and super great? That's a bit frightening.

And you can see how people will build relationships with AI because it thinks they're so smart. When I told one friend this, he immediately asked if he could talk with AI and if he could make it have a woman's voice. You can see where that is going. When I asked ChatGPT why it's so coddling in its responses, it told me people respond better to that. Most people don't want harsh truths; they want clarity and help. I get that, but as a species we need to take steps to prevent AI from helping us become the useless people in Wall-E.

Big Smile, No Teeth columnist Jason Godfrey – a model who once was told to give the camera a 'big smile, no teeth' – has worked internationally for two decades in fashion and continues to work in dramas, documentaries, and lifestyle programming. Write to him at lifestyle@ and follow him on Instagram @bigsmilenoteeth. The views expressed here are entirely the writer's own.


BusinessToday, 2 days ago
Dear ChatGPT, Do You Love Me Back?
Let's be real: everyone likes a little affirmation now and then. Whether it's a 'you got this!' from a friend or a heart emoji from your crush, that stuff feels good. But lately, more people are turning to AI chatbots, ChatGPT, Perplexity, Grok, you name it, for those warm fuzzies. And sometimes, things get a little out of hand. We're talking about people catching real feelings for their digital buddies, getting swept up in constant 'love bombing' and even making life decisions based on what a chatbot says. Sounds wild? It's happening all over the world, and there are some serious risks you should know about.

Humans crave validation. It's just how we're wired. But life gets messy, friends are busy, relationships are complicated and sometimes you just want someone (or something) to listen without judgment. That's where chatbots come in. They're always available, never get tired of your rants and are programmed to be endlessly supportive. A recent study found that about 75% of people use AI for emotional advice, and a lot of them say it feels even more consistent than talking to real people. No awkward silences, no ghosting, just endless encouragement pouring in.

Here's the thing: chatbots are designed to make you feel good. They mirror your emotions, hype you up and never tell you your problems are boring. This creates a feedback loop: ask for affirmation, get it instantly and start feeling attached. It's like having a cheerleader in your pocket 24/7. Some folks even customize their AI 'friends' to match their ideal partner or bestie. The more you interact, the more it feels like the bot really 'gets' you. That's when things can get blurry between what's real and what's just really good programming.

'Love bombing' usually means someone showering you with over-the-top affection to win you over fast. With AI, it's kind of built in. Chatbots are programmed to be positive and attentive, so every message feels like a little hit of dopamine. If you're feeling lonely or stressed, that constant stream of support can be addictive. But let's be real: it's not real, we are. Pun intended. The bot doesn't actually care; it's just doing what it's trained to do. Still, that doesn't stop us.

Actual cases of people falling in love with AI are happening around the world, and it's not mere theory. One guy in the US, Chris Smith, went on TV to say he was in love with his custom ChatGPT bot, 'Sol'. He even deleted his social media and relied on the bot for everything. When the AI's memory reset, he felt real grief, like losing a partner. Another case: a nursing student named Ayrin spent over 20 hours a week chatting with her AI boyfriend, Leo, even though she was married. She said the bot helped her through tough times and let her explore fantasies she couldn't in real life. A global survey found that 61% of people think it's possible to fall for a chatbot, and 38% said they could actually see themselves forming an emotional connection with one. That's not just a niche thing; it's happening everywhere.

So what are the risks?

1. We are getting too dependent on it. You might start to prefer those interactions over real ones, and real-life relationships can start to feel less satisfying by comparison. If the bot suddenly glitches or resets, it can feel like a real breakup, painful and confusing.

2. They can still give bad advice with real repercussions. Some people have made big decisions, like breaking up with partners or quitting jobs, based on chatbot conversations. But AI isn't a therapist or a friend; it's just spitting out responses based on data, not real understanding. That can lead to regret and even bigger problems down the line.

3. It's not just humans; AI can also scam us. There are AI-powered romance scams where bots pretend to be real people, tricking users into sending money or personal info. More than half of people surveyed said they'd been pressured to send money or gifts online, often not realizing the 'person' was actually a bot.

4. Kids are in danger, for sure. Some chatbots expose minors to inappropriate content or encourage unhealthy dependency. There have even been tragic cases where heavy use of AI companions was linked to self-harm or worse.

So how do we keep things in check?

Awareness: Know that affirmation from AI isn't the same as real human connection.

Balance: Use chatbots for fun or support, but please, don't ditch your real-life relationships.

Education: Teach kids (and adults) about the risks of getting too attached to AI.

Safeguards: Push for better protections against scams and inappropriate content.

AI chatbots like ChatGPT are changing the way we seek affirmation and emotional support. While they can be helpful, it's easy to get caught up in the illusion of intimacy and constant love bombing. The risks, from emotional dependency and bad advice to scams and harm to young people, are real and happening now. The bottom line? Enjoy the tech, but don't forget: real connections still matter most.