Latest news with #GPT


Mint
6 hours ago
ChatGPT now lets you create and edit images on WhatsApp, here's how to get started and what to expect
Cross-service ChatGPT integration just got a serious upgrade, and WhatsApp is now part of the action. If you have ever wanted to use GPT's image capabilities to create or edit pictures without leaving your chat window, that is now possible.

What does this OpenAI update mean? You can generate images right inside WhatsApp. There is no need to install extra apps or switch between tabs: just start a conversation and watch your ideas take shape. The feature is available for free in regions where ChatGPT is officially supported on WhatsApp, and you can interact with the chatbot using text, images or even voice notes. The process is designed to be simple and accessible for anyone who wants to try their hand at AI-powered creativity.

There are a few things to know before you jump in. Free users can create one image per day; after that, you will need to wait about 24 hours before you can try again. A paid ChatGPT subscription gets you a higher daily limit. Not everyone can link their account yet, and the process can sometimes be a bit slow, as OpenAI is still rolling out the feature and making improvements.

Getting started is straightforward. Here is what you need to do:

- Save the official ChatGPT WhatsApp number, +1 800 242 8478, to your contacts
- Open WhatsApp and send a greeting to start the chat
- When prompted, link your OpenAI account by following the secure link and logging in
- Send a prompt describing the image you want, or share a photo for a creative twist, like turning your selfie into a Studio Ghibli-style illustration
- Wait a few minutes and your generated image will appear in the chat

ChatGPT on WhatsApp is not just about images. You can ask for recipes, get help with writing, or even upload photos for quick descriptions. It is a handy productivity boost that fits right into your daily conversations. Whether you need a social media caption or want to try something creative, this tool is built to make things easier.
OpenAI is not the only one bringing AI to WhatsApp. Meta, which owns WhatsApp, has its own Meta AI assistant with image generation. Perplexity is another tool offering similar features. So if you are curious, you have plenty of options to explore. If you want to see what AI can really do, this new WhatsApp feature is worth a try. Your next chat could become a mini art project or just a bit more fun than usual.


The Hill
11 hours ago
- Science
ChatGPT use linked to cognitive decline: MIT research
ChatGPT can harm an individual's critical thinking over time, a new study suggests. Researchers at MIT's Media Lab asked subjects to write several SAT essays and separated them into three groups: one using OpenAI's ChatGPT, one using Google's search engine and one using nothing, which they called the 'brain-only' group. Each subject's brain activity was monitored across multiple brain regions through electroencephalography (EEG).

They discovered that subjects who used ChatGPT over a few months had the lowest brain engagement and 'consistently underperformed at neural, linguistic, and behavioral levels,' according to the study. The ChatGPT group initially used the large language model, or LLM, to ask structural questions for their essays, but near the end of the study they were more likely to simply copy and paste. Those who used Google's search engine showed moderate brain engagement, while the 'brain-only' group showed the 'strongest, wide-ranging networks.'

The findings suggest that using LLMs can harm a user's cognitive function over time, especially in younger users. The study comes as educators continue to navigate teaching at a time when AI is increasingly accessible for cheating.

'What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, 'let's do GPT kindergarten.' I think that would be absolutely bad and detrimental,' the study's lead author, Nataliya Kosmyna, told TIME. 'Developing brains are at the highest risk.'

However, the use of AI in education doesn't appear to be slowing down. In April, President Trump signed an executive order that aims to incorporate AI into U.S. classrooms.
'The basic idea of this executive order is to ensure that we properly train the workforce of the future by ensuring that school children, young Americans, are adequately trained in AI tools, so that they can be competitive in the economy years from now into the future, as AI becomes a bigger and bigger deal,' Will Scharf, White House staff secretary, said at the time.
Yahoo
15 hours ago
Once You Notice ChatGPT's Weird Way of Talking, You Start to See It Everywhere
It's not written by humans, it's written by AI. It's not useful, it's slop. It's not hard to find, it's everywhere you look.

As AI-generated text becomes increasingly ubiquitous on the internet, some distinctive linguistic patterns are starting to emerge, perhaps none more so than the pattern of negating statements typified by "it's not X, it's Y." Once you notice it, you start to see it everywhere.

One teacher on Reddit even noticed that certain AI phrase structures are making the jump into spoken language. "Comments and essays (I'm a teacher) are the obvious culprits, but I've straight up noticed the 'that's not X, it's [Y]' structure being said out loud more often than it used to be in video essays and other similar content," they wrote.

It's a fascinating observation that makes a striking amount of AI-generated text easily identifiable. It also raises some interesting questions about how AI chatbot tech is informing the way we speak — and how certain stylistic choices, like the em-dash in this very sentence, are becoming looked down upon for resembling the output of a large language model.

"Now I know that linguistic style existed before GPT, and it was common enough, but now I just can't unsee or unhear it," the Reddit user wrote, saying they now "assume AI was involved" when they see it. "Makes me grimace just a bit on the inside," they added.

Others quickly chimed in, agreeing and riffing on the phenomenon. "You're not just seeing it — you're saying something," one user wrote in a tongue-in-cheek comment, imitating ChatGPT. "And that's not illusion — that's POWER." "It's almost as if AI use is becoming the preferred way of communication," another user commented. "It's not just frustrating — it's insulting."

Beyond a prolific use of em-dashes, which have quickly become a telltale sign of AI-generated text, others pointed out the abundant use of emojis, including green checkboxes and a red X.
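As a toy illustration, the "it's not X, it's Y" construction described above can be flagged with a short regular expression. This is a rough heuristic invented for this sketch, not a real AI-text detector or anything used by actual detection tools:

```python
import re

# Rough heuristic: match "it's/that's/this is not <X>, it's/that's/this is <Y>".
# Purely illustrative; real AI-text detection is far harder than this.
PATTERN = re.compile(
    r"\b(?:it'?s|that'?s|this is)\s+not\s+(?:just\s+)?([^,;.]+)[,;]\s*"
    r"(?:it'?s|that'?s|this is)\s+",
    re.IGNORECASE,
)

def flag_negation_pattern(text: str) -> bool:
    """Return True if the text contains the 'not X, it's Y' structure."""
    return bool(PATTERN.search(text))

print(flag_negation_pattern("It's not written by humans, it's written by AI."))
```

Note that such a heuristic would miss variants joined by an em-dash and would inevitably flag plenty of ordinary human prose, which is precisely the article's point: the pattern predates LLMs.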
It's a particularly pertinent topic now that a majority of students admit to using tools like ChatGPT to generate essays or do their homework. Even teachers are using the tech for grading, closing the loop on a trend that experts warn could prove incredibly destructive in the field of education. Tech companies have struggled to come up with trustworthy and effective AI detection tools, more often than not leaving educators to their own devices.

And the stakes are as high as they've ever been. The internet is being flooded with AI slop, drowning out text actually authored by humans.

AI's oddly stunted use of language isn't surprising. After all, large language models are trained on enormous datasets and employ mad-libs-style tricks to calculate the probability of each successive word. In other words, LLMs are imitators of human speech that attempt to form the sentences most likely to be appreciated by the person writing the prompts, sometimes to an absurd degree. The result is an unnerving transition to a different, and consistently error-laden, way of writing that simply doesn't mesh with the messiness of human language.

It's gotten to the point where teachers have become incredibly wary of submitted work that sounds too polished. To many, it's enough to call for messier writing to quell the surge in low-effort AI slop. "GPT is always going to sound polished," one Reddit user offered. "It's a machine that rewards coherence, which is why incoherence has never been more precious." "We need the rough edges," they added. "The voice cracks. The unexpected pause. The half-formed metaphor that never quite lands. Because that's how you can tell a human is still in there, pushing back."

More on AI chatbots: AI Chatbots Are Becoming Even Worse At Summarizing Data
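The "probability of each successive word" mechanism that the article alludes to can be sketched in a few lines. This is a deliberately minimal toy, a softmax over hand-written scores, nothing like a production LLM; the vocabulary and scores below are invented for illustration:

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution that sums to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, rng=random.random):
    """Pick the next token in proportion to its probability."""
    probs = softmax(logits)
    r, cumulative = rng(), 0.0
    for token, p in zip(vocab, probs):
        cumulative += p
        if r < cumulative:
            return token
    return vocab[-1]  # guard against floating-point rounding

# A real model would compute these scores from the preceding context;
# here they are simply made up.
vocab = ["the", "a", "slop"]
scores = [2.0, 1.0, 0.1]
print(sample_next_token(vocab, scores))
```

Because the sampler always leans toward the highest-probability continuations, the output tends toward the "polished coherence" the Reddit users complain about, rather than the rough edges of human prose.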


Indian Express
a day ago
- Business
‘GPT-5 is arriving this summer’: Sam Altman reveals OpenAI's roadmap
Sam Altman seems to be giving interviews one after the other. On Thursday, June 19, the CEO of OpenAI appeared on the company's podcast for an extended conversation with host Andrew Mayne. In the pilot episode, which lasted about 40 minutes, Altman laid out a roadmap focused on offering a unified experience with the release of GPT-5, which is slated for this summer.

With GPT-5, OpenAI appears to be working on a way to unify its array of model offerings. Based on Altman's remarks, the AI powerhouse wants to fix its confusing product line-up: reportedly, the next generation of GPT will essentially consolidate ChatGPT's diverse models into one streamlined user experience. Talking about OpenAI's next frontier AI model, Altman said it will probably arrive this summer. He also admitted that the current state of model choice is a 'whole mess', and said the goal is to get back to a simple progression (GPT-5, GPT-6) and do away with the confusing variant names such as GPT-4 and GPT-4o.

Altman explained that the future goal is a unified model that can handle everything seamlessly, from instant questions to complex, multi-step tasks using reasoning and agent-like tools such as Deep Research. This would essentially eliminate the need to switch modes within the ChatGPT interface. He also said there is an internal debate over the naming strategy for the upcoming model to convey clarity.

He briefly mentioned Elon Musk, and how the SpaceX chief tried to use his influence in the government to compete unfairly. Altman said the shift towards a unified user experience is happening because AI has evolved beyond being a bot that gives instant answers; he is surprised to find that, for hard problems, users are willing to wait for a 'great answer'.
According to him, this insight is driving the development of more thoughtful reasoning models that could perform like a human expert, taking their time before answering. This is Altman's second podcast interview this week. On the Uncapped podcast, uploaded to YouTube on June 17, Altman said that Meta had offered his employees $100 million bonuses to recruit them as part of the social media giant's recent push to ramp up its AI strategy.


NDTV
a day ago
- Science
ChatGPT making people dumb, brains of youngsters "at highest risk": Study
Researchers at the Massachusetts Institute of Technology (MIT) found alarming trends when they analysed the impact of ChatGPT on the human brain. The artificial intelligence chatbot makes humans 60% faster at completing tasks, but it also reduces "germane cognitive load" by 32%. Germane load refers to the effort needed to use memory and intelligence to process information into schemas.

The researchers used EEG brain scans on 54 participants, aged between 18 and 39 years, over a period of four months. The paper tracked alpha waves, beta waves and neural connectivity patterns. The subjects were divided into three groups to compare the findings. The researchers revealed that ChatGPT users had the lowest brain engagement and "consistently underperformed at neural, linguistic, and behavioral levels."

The paper is not yet peer-reviewed and the sample size is relatively small, but the lead author, Nataliya Kosmyna, said she released the findings early to highlight concerns with the usage of large language models (LLMs), a type of AI programme that can recognise and generate text. "What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, 'let's do GPT kindergarten.' I think that would be absolutely bad and detrimental," Time quoted Kosmyna as saying. "Developing brains are at the highest risk."

The study revealed that more than 80% of ChatGPT users couldn't quote from essays they had written minutes earlier. Essays written using ChatGPT were also extremely similar to one another. When teachers were asked to check them, they said they could feel "something was wrong": the essays were "soulless", "empty with regard to content" and "close to perfect language while failing to give personal insights." Higher neural connectivity was seen in people with strong cognitive baselines, compared to regular AI users.
The study was shared on X by Alex Vacca, who reacted: "You're trading long-term brain capacity for short-term speed."

"83.3% of ChatGPT users couldn't quote from essays they wrote minutes earlier. Let that sink in. You write something, hit save, and your brain has already forgotten it because ChatGPT did the thinking." — Alex Vacca (@itsalexvacca) June 18, 2025

"Every shortcut you take with AI creates interest payments in lost thinking ability," Vacca added.