
OpenAI's New AI Agent Requires Lots of Adult Supervision
Welcome to Tech In Depth, our revamped daily newsletter with reporting and analysis about the business of tech from Bloomberg's journalists around the world. Today, Rachel Metz tests out whether OpenAI's new agent can be the personal assistant she's always dreamed of having.
Mark Zuckerberg's confidence: The Meta chief executive officer predicted a 'really big year' in which the social media company's 'highly intelligent and personalized AI assistant' will reach more than 1 billion people on its platforms.

Related Articles


Bloomberg
34 minutes ago
Odd Lots: Giuseppe Paleologo on Quant Investing at Multi-Strat Hedge Funds
Quantitative investing is one of those terms you hear all the time, but there are various explanations of what it actually means, or how quants actually make money. And of course, the term means different things in different contexts. In this live episode, recorded at the Bloomberg Equity Intelligence Summit on June 12, we speak again with Giuseppe Paleologo, the head of quantitative research at Balyasny Asset Management. We talk about his role, what quant investing actually is, and what the future of the space entails.

Yahoo
an hour ago
Using AI bots like ChatGPT could be causing cognitive decline, new study shows
A new pre-print study from the Massachusetts Institute of Technology (MIT) found that using OpenAI's ChatGPT could lead to cognitive decline. Researchers at the MIT Media Lab split participants into three groups and asked them to write essays using only ChatGPT, only a search engine, or no tools at all. Brain activity was recorded with an electroencephalogram (EEG) during the task, and the essays were then evaluated by both humans and artificial intelligence (AI) tools.

The study showed that the ChatGPT-only group had the lowest neural activation in parts of the brain and had a hard time recalling or recognising their own writing. The brain-only group, which used no technology, was the most engaged, showing both cognitive engagement and memory retention.

The researchers then ran a second session in which the ChatGPT group was asked to do the task without assistance. In that session, those who had used ChatGPT performed worse than their peers, with writing that was 'biased and superficial'. The study found that repeated GPT use can come with 'cognitive debt' that reduces long-term learning performance and independent thinking. In the long run, people with cognitive debt could be more susceptible to 'diminished critical inquiry, increased vulnerability to manipulation and decreased creativity,' as well as a 'likely decrease' in learning skills.

'When participants reproduce suggestions without evaluating their accuracy or relevance, they not only forfeit ownership of the ideas but also risk internalising shallow or biased perspectives,' the study continued.

The study also found higher rates of satisfaction and brain connectivity among the participants who wrote all essays with just their minds compared with the other groups. Those in the other groups felt less connected to their writing and were unable to quote from their essays when asked by the researchers. The authors recommend that more studies be done on how any AI tool affects the brain 'before LLMs are recognised as something that is net positive for humans.'