Egg Drop Challenge: Physicists' surprising finding on how eggs break
Is a raw egg more fragile when it falls upright or lying on its side? It's a question that's relevant not only for kitchen mishaps, but above all for anyone taking part in the so-called Egg Drop Challenge.
This popular classroom experiment is often used in physics lessons. The challenge: students are tasked with using everyday items like straws, paper and string to build a protective capsule for the egg, allowing it to be dropped from various heights without breaking.
To settle the matter, a team of researchers from the Massachusetts Institute of Technology (MIT) in Cambridge has systematically addressed the question of whether an egg breaks more easily when it falls upright or on its side. To investigate, they dropped eggs 180 times from different heights.
The findings, published in May by the team in the journal Communications Physics, reveal that eggs are less fragile when they fall horizontally rather than vertically, contrary to what is often taught in physics classrooms.
"We contest the commonly held belief that an egg is strongest when dropped vertically on its end," the authors write, arguing that they have disproved a widespread assumption found in tutorials and physics teaching materials.
In the experiment, more than half of the eggs that fell upright from a height of eight millimetres broke, regardless of which end of the egg was pointing downwards. In contrast, fewer than 10% of the eggs that fell from a horizontal position broke.
Even at slightly greater heights, the proportion of broken eggs was significantly smaller when the eggs were aligned horizontally. The team also conducted additional tests using a specialized device to determine the amount of pressure required to break the eggs.
The researchers explained the observed effect by noting that eggs are more flexible in the middle, allowing them to absorb more energy before breaking.
On average, eggs can absorb about 30% more energy when falling horizontally, according to the study. By the study's definition, this makes them tougher.
The team believes that confusing these two physical properties, stiffness and toughness, is one reason for the common misconception that eggs are more stable when oriented vertically.
Most physics teachers understand that an egg is stiffer in one direction, the authors say. "But they equate this with 'strength' in all other senses. However, eggs need to be tough, not stiff, in order to survive a fall."
Humans know this instinctively when jumping from a height.
"When we fall we know to bend our knees rather than lock them straight, which could lead to injury. In a sense, our legs are 'weaker', or more compliant, when bent, but are tougher, and therefore 'stronger' during impact, experiencing a lower force over a longer distance."
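The force-over-distance idea in the quote above can be sketched with a simple calculation: for a fixed impact energy, the average stopping force scales inversely with the deformation distance, so a more compliant impact spreads the same energy over a longer distance and produces a lower force. The numbers below (egg mass, drop height, crush distances) are illustrative assumptions, not measurements from the study.

```python
# Illustrative sketch of "lower force over a longer distance".
# Work-energy theorem: stopping an object over distance d absorbs
# energy E = F_avg * d, so the average force is F_avg = E / d.
# All numeric values here are assumed for illustration only.

def average_impact_force(energy_joules: float, stop_distance_m: float) -> float:
    """Average force needed to absorb the given energy over the given distance."""
    return energy_joules / stop_distance_m

# A ~50 g egg dropped from 10 mm carries roughly E = m * g * h of energy.
m, g, h = 0.05, 9.81, 0.01
energy = m * g * h  # joules

# Assume the stiffer (vertical) orientation crushes over a shorter distance
# than the more compliant (horizontal) one.
stiff_force = average_impact_force(energy, 0.0001)      # ~0.1 mm stop
compliant_force = average_impact_force(energy, 0.0002)  # ~0.2 mm stop

print(f"impact energy: {energy:.4f} J")
print(f"stiff impact (short stop):      {stiff_force:.1f} N")
print(f"compliant impact (longer stop): {compliant_force:.1f} N")
```

Doubling the stopping distance halves the average force for the same impact energy, which is the sense in which a compliant structure, like a horizontally landing egg or a bent knee, is "stronger" during impact.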
