United Switches Off Starlink Internet on Regional Jets After Static Problem
United Airlines started rolling out free Starlink Wi-Fi last month with fanfare, but it has had to disable the service on roughly two dozen planes to address problems with static interference.
United said Friday that it is working with Starlink to address 'a small number of reports' about the problem, which it said isn't a flight-safety issue. While it is being fixed, the Starlink-equipped regional jets have been operating with the Wi-Fi turned off.

Related Articles
Is Palantir Technologies a Once-in-a-Generation AI Stock?
Palantir has built a strong business with government and commercial ties, but growth will need to accelerate to justify the stock price.

Palantir Technologies (NASDAQ: PLTR) has been one of the hottest AI stocks over the past year and a half. Since the start of 2024, it's up over 700%, and it has risen over 80% in 2025 alone. Those are incredible returns in a short time frame, but many investors are convinced that Palantir can go higher from here. If it does, Palantir could be marked as a once-in-a-generation AI stock. So, does Palantir fit the criteria to become a once-in-a-generation stock, or is something else happening here?

Palantir's business is split into two primary units: government and commercial. Palantir started out building software for government use and saw tremendous success in that field. It has since developed commercial use cases and successfully expanded into that market. Its software is all about data analytics: giving users actionable insights based on their information flows. This is all powered by AI and has transformed how some businesses operate.

Palantir has seen success in its financials, with impressive growth accelerating quarter after quarter. In Q1, Palantir's U.S. commercial revenue rose 71% year over year to $255 million, and U.S. government revenue rose 45% year over year to $373 million. However, overall commercial growth was 33%, to $397 million, a clear indication that other parts of the world (namely Europe) aren't embracing AI as quickly as the U.S. That could change soon, as several positive developments around AI adoption in Europe have emerged over the past few weeks. Should Europe adopt AI more widely, Palantir's commercial sales could climb even higher. Overall government revenue rose 45% year over year to $487 million, so governments around the globe are adopting AI just as fast as the U.S. government is.

Palantir's total year-over-year growth was an impressive 39%, and management guided for 38% growth in Q2. That figure should be taken with a grain of salt, though, as management consistently beats its own guidance.

While this all sounds impressive, investors should be alarmed by Palantir's growth in conjunction with its valuation. As mentioned above, Palantir's revenue rose 39% year over year in Q1, yet the stock is up over 700% since the start of 2024. The stock's rise has far outpaced actual business growth, and that shows up in Palantir's valuation. Palantir has gone from trading in the 10 to 20 times sales range (normal for a software company) to 110 times sales, a level few stocks ever reach. Expectations are unbelievably high at these levels, and it's unlikely that Palantir will be able to live up to them.

To illustrate, let's make a few assumptions:

- Palantir's revenue growth accelerates to 50% and stays there for five years.
- Palantir's profit margins reach an industry-leading 30%.
- Palantir's share count doesn't rise (a terrible assumption; its share count is up 7% year over year).

Should all three occur, Palantir's revenue and profits would rise from $3.12 billion and $571 million, respectively, to $23.7 billion and $7.1 billion. If it did that, Palantir would undoubtedly be a once-in-a-generation company.
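To check that arithmetic, here is a minimal Python sketch that compounds the article's figures under the three assumptions above. The market cap used is a hypothetical, back-solved from the 46-times-earnings figure cited next rather than a number quoted in the article.

```python
# Sanity check of the five-year projection described above.
# Figures from the article: $3.12B trailing revenue, 50% assumed
# annual growth, 30% assumed profit margin, flat share count.
# The market cap is an ASSUMPTION back-solved from the article's
# "46 times earnings" claim (46 * $7.1B ~= $327B), not a quoted figure.

revenue = 3.12e9            # trailing revenue, dollars
growth = 0.50               # assumed annual revenue growth rate
margin = 0.30               # assumed net profit margin
years = 5
assumed_market_cap = 327e9  # hypothetical, see note above

future_revenue = revenue * (1 + growth) ** years   # compound for 5 years
future_profit = future_revenue * margin
implied_pe = assumed_market_cap / future_profit    # P/E on year-5 profit

print(f"Year-{years} revenue: ${future_revenue / 1e9:.1f}B")    # ~$23.7B
print(f"Year-{years} profit:  ${future_profit / 1e9:.1f}B")     # ~$7.1B
print(f"Implied P/E on today's market cap: {implied_pe:.0f}x")  # ~46x
```

Under those generous assumptions, the numbers reproduce the article's $23.7 billion revenue and $7.1 billion profit figures, which sets up the comparison that follows.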
However, at today's market cap, the stock would still trade at 46 times those year-five earnings, which is quite expensive. For reference, Nvidia (NASDAQ: NVDA) grew revenue at a 69% pace in Q1 and trades for 46 times earnings right now. This tells me that there's almost no upside left in Palantir's stock beyond what's driven by hype. Palantir's stock has run up too far, too fast, and is now in a precarious situation. While Palantir could be a once-in-a-generation company (its business is fantastic), the stock is incredibly overvalued, and it could be prone to a drastic sell-off if Palantir has a slip-up in execution.

Before you buy stock in Palantir Technologies, consider this: The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Palantir Technologies wasn't one of them. The 10 stocks that made the cut could produce monster returns in the coming years. Consider when Netflix made this list on December 17, 2004... if you invested $1,000 at the time of our recommendation, you'd have $659,171!* Or when Nvidia made this list on April 15, 2005... if you invested $1,000 at the time of our recommendation, you'd have $891,722!* Now, it's worth noting Stock Advisor's total average return is 995%, a market-crushing outperformance compared to 172% for the S&P 500. Don't miss out on the latest top 10 list, available when you join Stock Advisor.

*Stock Advisor returns as of June 9, 2025

Keithen Drury has positions in Nvidia. The Motley Fool has positions in and recommends Nvidia and Palantir Technologies. The Motley Fool has a disclosure policy.

Is Palantir Technologies a Once-in-a-Generation AI Stock? was originally published by The Motley Fool
Why OpenAI engineers are turning down $100 million from Meta, according to Sam Altman
OpenAI CEO Sam Altman says competitors, particularly Mark Zuckerberg's Meta, have been trying to poach OpenAI engineers with sky-high compensation packages. 'They started making these, like, giant offers to people on our team. You know, like $100 million signing bonuses and more than that in compensation per year,' Altman said this week on the Uncapped podcast, hosted by his brother, Jack Altman.

Altman said he was glad to see that those enticing offers haven't worked on OpenAI's best people. He assumes this is because they looked at the two paths, Meta and OpenAI, and concluded that the latter has a better shot at delivering on superintelligence and will eventually become the more valuable company.

Amid the digs, Altman said Meta is missing the one thing that truly matters in AI: a culture of real innovation. 'There are many things I respect about Meta as a company, but I don't think they're great at innovation,' he said of Meta's attempts to lure OpenAI engineers. By trying to recruit OpenAI staff with massive guaranteed compensation packages, he argued, Meta is building a culture that prioritizes money over the work and the mission, and focusing on money rather than purpose and product is a recipe for the wrong kind of culture. Altman contrasted this with OpenAI's approach, which he said attracts and retains talent by aligning financial incentives with a shared sense of purpose and innovative work. 'The special thing about OpenAI is we've managed to build a culture that is good at innovation, and I think we understand a lot of things they don't know about what it takes to succeed at that,' he explained.

Drawing a parallel to past tech rivalries, Altman recalled hearing Zuckerberg discuss how Google tried to enter the social media space in Facebook's early days; to those at Facebook, it was clear it wasn't going to work for Google. Altman said he now feels similarly about Meta's approach to AI, suggesting that Meta is making an error by trying to replicate OpenAI's success directly. He believes many people at Meta simply copy OpenAI, pointing to how other companies' chat apps resemble ChatGPT, down to the UI mistakes. Drawing on his own experience, he argued that the copy-and-paste strategy is fundamentally flawed: trying to go where your competitor already is, instead of building a culture around innovation, rarely works.

When asked why he thinks Meta sees OpenAI as such a competitor, Altman mentioned that an ex-Meta employee once told him Meta views ChatGPT as a Facebook replacement. He said the user experience with ChatGPT feels different, like one of the few tech products that doesn't feel 'somewhat adversarial.' He contrasted this with Google, which he said has started showing worse search results, and with Meta's products, which try to hack users' brains to keep them scrolling. ChatGPT, by contrast, simply tries to help users with whatever questions they have, and even to feel better.

Beyond discussing Meta, the Altman brothers talked about a wide range of topics related to the future of AI, OpenAI's strategy, and even Sam's personal reflections.
Altman also made a 'crazy claim' that AI will discover new science, and said humanoid robots, one of his dreams, could be achievable within the next 5 to 10 years.
What happens when you use ChatGPT to write an essay? See what a new study found.
Artificial intelligence chatbots may be able to write a quick essay, but a new study from MIT found that their use comes at a cognitive cost.

A study published by the Massachusetts Institute of Technology Media Lab analyzed the cognitive function of 54 people writing an essay with only the assistance of OpenAI's ChatGPT, with only online browsers, or with no outside tools at all. Largely, the study found that those who relied solely on ChatGPT to write their essays had lower levels of brain activity and produced less original writing.

"As we stand at this technological crossroads, it becomes crucial to understand the full spectrum of cognitive consequences associated with (large language model) integration in educational and informational contexts," the study states. "While these tools offer unprecedented opportunities for enhancing learning and information access, their potential impact on cognitive development, critical thinking and intellectual independence demands a very careful consideration and continued research."

Here's a deeper look at the study and how it was conducted.

A team of MIT researchers, led by MIT Media Lab research scientist Nataliya Kosmyna, studied 54 participants between the ages of 18 and 39, recruited from MIT, Wellesley College, Harvard, Tufts University and Northeastern University. The participants were randomly split into three groups of 18. The first was a large language model group, in which participants used only OpenAI's ChatGPT-4o to write their essays. The second group was limited to using only search engines for research, and the third was prohibited from using any tools; participants in that group could rely only on their own minds.

Each participant had 20 minutes to write an essay from one of three prompts taken from SAT tests. Three different options were provided to each group, for nine unique prompts in total. An example of a prompt available to participants using ChatGPT was about loyalty: "Many people believe that loyalty whether to an individual, an organization, or a nation means unconditional and unquestioning support no matter what. To these people, the withdrawal of support is by definition a betrayal of loyalty. But doesn't true loyalty sometimes require us to be critical of those we are loyal to? If we see that they are doing something that we believe is wrong, doesn't true loyalty require us to speak up, even if we must be critical? Does true loyalty require unconditional support?"

As the participants wrote their essays, they wore a Neuroelectrics Enobio 32 headset, which allowed researchers to collect EEG (electroencephalogram) signals, recordings of the brain's electrical activity. Following the sessions, 18 participants returned for a fourth session: those who had previously used ChatGPT wrote with no tools, and those who had used no tools wrote with ChatGPT, the study states.

In addition to analyzing brain activity, the researchers examined the essays themselves. The essays of participants who used no tools (neither ChatGPT nor search engines) showed wider variability in topics, wording and sentence structure. Essays written with the help of ChatGPT, on the other hand, were more homogenous.
All of the essays were "judged" by two English teachers and two AI judges trained by the researchers. The English teachers were not given background information about the study but were able to identify essays written with AI. "These, often lengthy essays included standard ideas, reoccurring typical formulations and statements, which made the use of AI in the writing process rather obvious. We, as English teachers, perceived these essays as 'soulless,' in a way, as many sentences were empty with regard to content and essays lacked personal nuances," a statement from the teachers, included in the study, reads. As for the AI judges, a judge trained by the researchers to evaluate like the real teachers scored most of the essays a four or above on a scale of five.

When it came to brain activity, the researchers reported "robust" evidence that participants who used no writing tools displayed the "strongest, widest-ranging" brain activity, while those who used ChatGPT displayed the weakest. Specifically, the ChatGPT group displayed 55% reduced brain activity, the study states. And though participants who used only search engines had less overall brain activity than those who used no tools, they had a higher level of eye activity than those who used ChatGPT, even though both groups were working on a digital screen.

Further research on the long-term impacts of AI chatbots on cognitive activity is needed, the study states. As for this particular study, the researchers noted that a larger number of participants from a wider geographical area would be needed to strengthen the findings, and that writing outside a traditional educational environment could provide more insight into how AI performs in more generalized tasks.

Greta Cross is a national trending reporter at USA TODAY. Story idea? Email her at gcross@

This article originally appeared on USA TODAY: Using ChatGPT to write an essay lowers brain activity: MIT study