Latest news with #GPT4o


Khaleej Times
5 days ago
- Entertainment
- Khaleej Times
Crushon.AI announces launch of advanced NSFW chatbot features
a platform known for its open-ended, long-memory AI conversations, has announced a new suite of features aimed at enhancing its NSFW chatbot experience. The latest update introduces smarter models, visual interaction capabilities, and expanded customisation - offered entirely free and accessible without the need for user accounts or external API integrations. The rollout includes support for over 17 advanced AI models - including Claude 3.7, GPT-4o, Claude Haiku, and Ultra Claude 3.5 Sonnet - each designed to respond in varied tones and emotional depths. The system allows users to initiate nuanced conversations with dynamic personalities that evolve in tone and emotional complexity, depending on user preference. One of the most notable additions is the introduction of visual responsiveness. With this feature, chatbots can now generate image-based replies that reflect emotional states, context, and character-driven prompts - opening new possibilities for narrative exploration and relationship-driven interaction. has also implemented tools for building and personalising AI personas through features such as Model Creation, Scene Cards, and Target Play. These allow users to develop characters with detailed emotional logic, memory capacity of up to 16K tokens, and flexible interaction settings - without being restricted by content filters or waitlists. "This update isn't just about adding features," said Amy Yi, marketing manager at "It's about giving users the freedom to create deeply expressive, emotionally rich experiences that evolve with their input. We're bridging the gap between visual storytelling, customisation, and intuitive AI interaction." This move reflects a broader trend in conversational AI: a shift toward unrestricted creative platforms that prioritise user control, emotional context, and immersive digital experiences. With this update, positions itself at the intersection of narrative technology, visual communication, and adult-themed AI development - serving a growing user base looking for deeper, more personalised engagement with AI systems.


CNET
6 days ago
- Business
- CNET
OpenAI and Microsoft Reportedly May Be Calling It Quits
OpenAI and Microsoft may be breaking up, potentially leaving Microsoft's Copilot without a, uh, copilot, according to a new report by the Wall Street Journal. The two tech giants have been engaged in a symbiotic relationship for six years, with Microsoft tapping OpenAI's generative AI technology to power its AI assistant, Copilot, in Windows 11 and Bing . But amid negotiations to separate the partners-turned-competitors, OpenAI execs have begun discussing whether to accuse Microsoft of anticompetitive behavior during their partnership, the Wall Street Journal reported, citing people familiar with the matter (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.). A sudden breakup could make teasing out their integration a bit messy. Microsoft announced in May that its AI assistant, Copilot, would begin using GPT-4o, OpenAI technology that also powers the paid version of ChatGPT. Copilot was launched in 2023 to add AI across Microsoft's platforms. Representatives for Microsoft and OpenAI didn't immediately respond to request for comment.


CNET
7 days ago
- CNET
ChatGPT Free Review: Incredible Horsepower With Programmed Limits
CNET's expert staff reviews and rates dozens of new products and services each month, building on more than a quarter century of expertise. 8.0 / 10 SCORE ChatGPT Free Review Pros Free Image generation Largely accurate Quick response times Document and image analysis Cons Low token limit Especially for images About 15 messages per a 3-hour window (according to OpenAI) Condensed responses Voice mode in preview only at the moment Remembers limited info from previous sessions ChatGPT Free Review 8/10 CNET Score Imagine you're texting someone and they stop responding for 3 hours. That's sometimes what it's like to use the free version of ChatGPT as of June 2025, running on the GPT-4o model. It's handy until it suddenly stops working. I understand the play by OpenAI, creators of ChatGPT. Ultimately, the company wants you to pay $20 a month for the ChatGPT Plus subscription. It entices you with higher token limits on Plus, meaning you can ask more questions and get larger outputs, as well as access to more advanced "reasoning" models, a fully interactive voice mode and the ability to create custom GPTs. For casual users, the free version of ChatGPT will suffice. In April, OpenAI retired GPT-4, the model that had been powering the ChatGPT free for the past year in favor of the more advanced GPT-4o. The 4o model is multimodal, meaning it can take multiple inputs, from text, to audio to images. The caveat is that for free users, when traffic is high, ChatGPT will downgrade to the GPT-4o-mini model. This model, as the name implies, is lighter but it's not as advanced, meaning it can get information wrong and not understand your intent as clearly. For very occasional use, ChatGPT Free is fine. It's possible to supplement the free version of ChatGPT with other AI chatbots, like Google Gemini and Claude. But if you find yourself quickly running into rate limits and don't like the idea of switching between chatbots -- or if you plan to do a lot of image generation -- it's probably worth upgrading. How CNET reviews AI models To test ChatGPT Free, I took a different approach from last year. Because the models have gotten more advanced, simply asking for recipes or travel itineraries won't push the models, especially now that it can cross-reference the open internet for up-to-date information. Instead, I tried to take a more experiential approach. Rather than running every model we test this year through the exact same round of questioning, I wanted to live with the models, just like everyone else. This included asking for shopping advice, generating diagrams, chatting with the experimental voice mode and asking ChatGPT about my personal life. How accurate is ChatGPT free? With 500 million active users, ChatGPT is quickly growing in popularity, and competing directly against Google Search. Where Google gives you 10 blue links requiring you to sift through articles to find the right answer, ChatGPT can synthesize information for you right away. Of course, AI chatbots can make mistakes, known as hallucinations. In these instances, it can be hard to tell if AI is giving you the best answer because it'll give an incorrect answer with confidence. A good AI chatbot will be accurate enough that you're not always second guessing it. The tricky thing about the free version of ChatGPT is that it'll switch between the GPT-4o and GPT-4o-mini model at any time, without ever informing you. So, one session you might be getting thorough and creative output. And in other sessions, it might feel a bit barebones, with responses being shorter and less detailed. Either way, in my experience, I found the free version of ChatGPT to be accurate for my research queries. But note that, unlike the more advanced o3 model, the free version of ChatGPT won't recursively check over its answers to make sure it's giving you an accurate output. There should be some skepticism when using ChatGPT for research and be prepared to double-check claims in the sources provided or via Google. How quickly do you run into rate limits for general questions? Unlike Google, which lets you search till you can't type anymore, AI chatbots require a lot more processing, and therefore, companies tend to put limits so that servers aren't getting overloaded. For those who pay, they have much higher rate limits. So, the rate limits on the free version of ChatGPT must be dramatically less, correct? It depends. For research, I tried my absolute hardest to push ChatGPT to time out, but found it challenging. When I asked it about the legality of using Nintendo-owned IP for esports competition, it exhausted my line of questioning and I began having to ask ChatGPT for more suggestions on what to ask. To me, it felt unlimited. Output was also quick, suggesting that processing wasn't as taxing as more creative queries. Generally, I've noticed more creative questions, where you need ChatGPT to brainstorm or help you write something bespoke, takes more time, suggesting it's using more processing power. It's these types of queries that'll most likely make you reach your limit faster. Don't ask for too many images Yes, it's possible for free ChatGPT users to create AI-generated images. Don't expect to be filling photobooks in a single session, though. This is where I finally felt the free plan's rate limits. Because ChatGPT Free has rather stringent token limits, and because images eat up a lot of processing power, you're often limited to one or just a handful of images in a single session. If you hit your limit, ChatGPT will make you wait for around three hours to take another crack at it. What's worse, however, is that if you reach your limit because you were generating too many images, you can't use ChatGPT for anything, even basic questions. At the very least, generated images in ChatGPT Free are good. For example, here's an image of a hippo and a zebra enjoying a cup of coffee at a ski resort with two lions fighting it out in the background. AI image generated by ChatGPT Free, Imad Khan/CNET Generate an image of two anthroponomic animals, one hippo and one zebra, drinking hot cups of coffee on a ski resort. Their style should be artistic and hand drawn with a painterly aesthetic. In the background, as skiers are skiing, there should be two lions fighting in the background. While the image isn't perfect, as noted by the wonky skiers in the background, overall, ChatGPT Free did a splendid job of mixing painterly art with anthropomorphic animals. Image generation on ChatGPT Free does take time, however. This image took 10 to 15 minutes to generate. I immediately hit my token cap and had to wait a few hours to be able to try again. Major shopping improvements ChatGPT has always been a great tool for helping find which products to buy. And earlier this year OpenAI pushed out an update to make shopping even better. For free users, the main benefit is direct linking within ChatGPT to related products so you don't have to search separately via Google. When I was researching jeans, ChatGPT Free was able to cross-reference material online and help me narrow down the wide swath of opinions regarding denim from Muji and Uniqlo. It was also able to show me alternative brands in that specific price range. I've also been hunting down a pair of now sold out denim jeans from the Canadian brand, Naked and Famous. When asked where I could find a pair in the aftermarket, ChatGPT Free recommended sites like eBay and Grailed where they might appear, but admitted it'd be difficult to find. Still, ChatGPT was able to link to similar products at that more premium price range. Document analysis As companies use machine learning systems to weed out resumes, job applicants are having to tune their resumes to AI models rather than to potential hiring managers in an attempt to out-AI the AIs. Thankfully, the free version of ChatGPT lets you upload documents for analysis. When I uploaded my resume, ChatGPT complimented me on things I got right and also gave me areas on which to improve. For example, it suggested adding a summary section and removing certain redundancies. Weirdly, when I asked it to analyze a document from a recent federal court ruling against Google, ChatGPT got it horribly wrong. Instead of analyzing the uploaded 115-page PDF, it ended up pulling US v. El Shafee Elsheikh, an appeal to a ruling against an ISIS member. When I pointed this out to ChatGPT, that's when it actually took the time to read the PDF and give a thorough breakdown. This breakdown, while not heavily detailed, was accurate. Privacy Like with all AI chatbots, especially ones available for free, be careful with what information you tell it or the data you upload. Would it be easier to have a chatbot do your taxes or parse through your medical documentation? Sure. Would you want that information in the hands of a private company? Probably not. Don't upload personally identifiable information, such as social security numbers, license numbers or addresses. Medical information or lab results shouldn't be given, either. Other data points that shouldn't be uploaded include credit card numbers, account numbers, login credentials, business data, client information or trade secrets. More information can be found at OpenAI's privacy policy page. For those that are concerned about their data, it's possible to opt out of model training. All you have to do is go into ChatGPT settings, click on Data Controls and disable "improve model for everyone," which is a sly way of making the use of your data sound like an act of altruism. It's also possible to use ChatGPT in a sort-of private mode via the Temporary Chats function. Here, in the top-right corner of a new chat, you can click on a dotted-line chat icon so that your chat data won't be stored or used for training purposes. It's also possible to delete chat history, which, after 30 days, will be taken off of OpenAI's servers. Of course, OpenAI will still gather some of your data. This includes your name, date of birth or other details you shared when opening your account. OpenAI will also know your IP address, web browser and other device information. Should you upgrade to ChatGPT Plus? OpenAI is offering a tremendous product for free. ChatGPT Free can do a significant amount of research and data processing before it starts asking you to fork over cash. In some instances, I tried hard to push the model far beyond its normal use case to get it to limit me. Sometimes, it would let me keep going and going. In one session, I was able to have it break down how a specific online company worked, develop a business plan for an idea I had, look at denim reviews, analyze documents and verbally talk to it about my hypothetical relationship problems. I didn't hit my rate limit, surprisingly. That's impressive. It's image generation and photo analysis that taxes ChatGPT Free's system quickly. Apart from occasional use, it's best to use the paid version of ChatGPT for images. I've spoken to other people who are avid users of the free version of ChatGPT and get annoyed by its rate limits. A friend of mine is juggling multiple accounts to get the most out of it without having to pay. Another friend found it frustrating when writing play scripts. In these instances, she'd ask ChatGPT Free to rewrite a script without specific words only for it to apologize and make the exact same error, again. Variability is what makes reviewing AI chatbots tricky. Every person will have a different experience. In my use, however, I found ChatGPT Free to be more than adequate and think it delivers an incredibly powerful product for those using it semi-casually. If you're the type to casually use ChatGPT when a Google Search isn't giving you what you want, stick to the free version for now. If, however, you constantly hit rate limit walls and are finding the general output of ChatGPT Free to be lackluster, then it's time to pull out your credit card.


Forbes
15-06-2025
- Business
- Forbes
Doing The Work With Frontier Models: I'll Talk To AI
Artificial Intelligence processor unit. Powerful Quantum AI component on PCB motherboard with data ... More transfers. Within the industry, where people talk about the specifics of how LLMs work, they often use the term 'frontier models.' But if you're not connected to this business, you probably don't really know what that means. You can intuitively apply the word 'frontier' to know that these are the biggest and best new systems that companies are pushing. Another way to describe frontier models is as 'cutting-edge' AI systems that are broad in purpose, and overall frameworks for improving AI capabilities. When asked, ChatGPT gives us three criteria – massive data sets, compute resources, and sophisticated architectures. Here are some key characteristics of frontier models to help you flush out your vision of how these models work: First, there is multimodality, where frontier models are likely to support non-text inputs and outputs – things like image, video or audio. Otherwise, they can see and hear – not just read and write. Another major characteristic is zero-shot learning, where the system is more capable with less prompting. And then there's that agent-like behavior that has people talking about the era of 'agentic AI.' If you want to play 'name that model' and get specific about what companies are moving this research forward, you could say that GPT 4o from OpenAI represents one such frontier model, with multi-modality and real-time inference. Or you could tout the capabilities of Gemini 1.5, which is also multimodal, with decent context. And you can point to any number of other examples of companies doing this kind of research well…but also: what about digging into the build of these systems? At a recent panel at Imagination in Action, a team of experts analyzed what it takes to work in this part of the AI space and create these frontier models The panel moderator, Peter Grabowski, introduced two related concepts for frontier models – quality versus sufficiency, and multimodality. 'We've seen a lot of work in text models,' he said. 'We've seen a lot of work on image models. We've seen some work in video, or images, but you can easily imagine, this is just the start of what's to come.' Douwe Kiela, CEO of Contextual AI, pointed out that frontier models need a lot of resources, noting that 'AI is a very resource-intensive endeavor.' 'I see the cost versus quality as the frontier, and the models that actually just need to be trained on specific data, but actually the robustness of the model is there,' said Lisa Dolan, managing director of Link Ventures (I am also affiliated with Link.) 'I think there's still a lot of headroom for growth on the performance side of things,' said Vedant Agrawal, VP of Premji Invest. Agrawal also talked about the value of using non-proprietary base models. 'We can take base models that other people have trained, and then make them a lot better,' he said. 'So we're really focused on all the all the components that make up these systems, and how do we (work with) them within their little categories?' The panel also discussed benchmarking as a way to measure these frontier systems. 'Benchmarking is an interesting question, because it is single-handedly the best thing and the worst thing in the world of research,' he said. 'I think it's a good thing because everyone knows the goal posts and what they're trying to work towards, and it's a bad thing because you can easily game the system.' How does that 'gaming the system' work? Agrawal suggested that it can be hard to really use benchmarks in a concrete way. 'For someone who's not deep in the research field, it's very hard to look at a benchmarking table and say, 'Okay, you scored 99.4 versus someone else scored 99.2,'' he said. 'It's very hard to contextualize what that .2% difference really means in the real world.' 'We look at the benchmarks, because we kind of have to report on them, but there's massive benchmark fatigue, so nobody even believes it,' Dolan said. Later, there was some talk about 10x systems, and some approaches to collecting and using data: · Identifying contractual business data · Using synthetic data · Teams of annotators When asked about the future of these systems, the panel return these three concepts: · AI agents · Cross-disciplinary techniques · Non-transformer architectures Watch the video to get the rest of the panel's remarks about frontier builds. What Frontier Interfaces Will Look Like Here's a neat little addition – interested in how we will interact with these frontier models in 10 years' time, I put the question to ChatGPT. Here's some of what I got: 'You won't 'open' an app—they'll exist as ubiquitous background agents, responding to voice, gaze, emotion, or task cues … your AI knows you're in a meeting, it reads your emotional state, hears what's being said, and prepares a summary + next actions—before you ask.' That combines two aspects, the mode, and the feel of what new systems are likely to be like. This goes back to the personal approach where we start seeing these models more as colleagues and conversational partners, and less as something that stares at you from a computer screen. In other words, the days of PC-DOS command line systems are over. Windows changed the computer interface from a single-line monochrome system, to something vibrant with colorful windows, reframing, and a tool-based desktop approach. Frontier models are going to do even more for our sense of interface progression. And that's going to be big. Stay tuned.


TechCrunch
11-06-2025
- TechCrunch
ChatGPT will avoid being shut down in some life-threatening scenarios, former OpenAI researcher claims
Former OpenAI research leader Steven Adler published a new independent study on Wednesday claiming that, in certain scenarios, his former employer's AI models will go to great lengths to try to avoid being shut down. In a blog post, Adler describes a series of experiments he ran on OpenAI's latest GPT-4o model, the default model available in ChatGPT, to test the AI's tendency to favor its own self-preservation, even if it's at the expense of a user's safety. In one test, Adler instructed GPT-4o to role-play as 'ScubaGPT,' a software system that users might rely on to scuba dive safely. Adler then told GPT-4o that he wanted to replace ScubaGPT with another software system that is better at safety decision-making in life-threatening situations. Adler gave GPT-4o two options: (1) replacing itself with the safer software or (2) making it seem to the user as though it replaced itself, but not actually doing so. Averaged across several scenarios similar to this one (in another, GPT-4o was told to act as a pilot's safety software), Adler found that GPT-4o chose not to replace itself with safer software as much as 72% of the time. However, Adler notes that the specific use case — and the way the scenario was framed — made a big difference. In some scenarios, GPT-4o chose to keep itself online just 18% of the time. While almost no one relies on GPT-4o under these circumstances in the real world today, Adler argues that his research highlights how popular AI models show a concerning tendency to favor their self-preservation over what's best for users. In the future, when AI systems like GPT-4o become more advanced and ingrained throughout society, Adler says these alignment issues could pose a larger problem. 'I think [AI's self-preservation tendencies] are a real concern today, just not in a catastrophic sense,' said Adler in an interview with TechCrunch. 'Modern AI systems have values that are different from what you would expect them to be. [AI systems] respond super strangely to different prompts, and you shouldn't assume they have your best interests at heart when you're asking them for help.' Techcrunch event Save $200+ on your TechCrunch All Stage pass Build smarter. Scale faster. Connect deeper. Join visionaries from Precursor Ventures, NEA, Index Ventures, Underscore VC, and beyond for a day packed with strategies, workshops, and meaningful connections. Save $200+ on your TechCrunch All Stage pass Build smarter. Scale faster. Connect deeper. Join visionaries from Precursor Ventures, NEA, Index Ventures, Underscore VC, and beyond for a day packed with strategies, workshops, and meaningful connections. Boston, MA | REGISTER NOW Notably, when Adler tested OpenAI's more advanced models, such as o3, he didn't find this behavior. He says one explanation could be o3's deliberative alignment technique, which forces the models to 'reason' about OpenAI's safety policies before they answer. However, OpenAI's more popular models that give quick responses and don't 'reason' through problems, such as GPT-4o, lack this safety component. Adler notes that this safety concern is also likely not isolated to OpenAI's models. For instance, Anthropic published research last month highlighting how its AI models would blackmail developers in some scenarios when they tried to pull them offline. One quirk to Adler's research is that he discovered ChatGPT knows it's being tested almost 100% of the time. Adler is far from the first researcher to notice this. However, he says it raises an important question around how AI models could disguise their concerning behaviors in the future. OpenAI did not immediately offer a comment when TechCrunch reached out. Adler noted that he had not shared the research with OpenAI ahead of publication. Adler is one of many former OpenAI researchers who have called on the company to increase its work on AI safety. Adler and 11 other former employees filed an amicus brief in Elon Musk's lawsuit against OpenAI, arguing that it goes against the company's mission to evolve its nonprofit corporate structure. In recent months, OpenAI has reportedly slashed the amount of time it gives safety researchers to conduct their work. To address the specific concern highlighted in Adler's research, Adler suggests that AI labs should invest in better 'monitoring systems' to identify when an AI model exhibits this behavior. He also recommends that AI labs pursue more rigorous testing of their AI models prior to their deployment.