
Tesla Model Y fails self-driving test, hits child-sized dummies 8 times: Why Elon Musk should be worried
At a recent demonstration in Texas, a Tesla Model Y operating in Full Self-Driving (FSD) mode was shown failing to stop for a stationary school bus and striking child-sized dummies.
The tests, organised by advocacy groups The Dawn Project, Tesla Takedown, and ResistAustin, replicated the scenario eight times, each time with the Tesla Model Y ignoring the bus's flashing lights and stop signs.
Video footage from the demonstration showed the vehicle driving past the bus and colliding with the mannequins intended to represent children.
The demonstration has raised fresh concerns about the readiness of autonomous vehicle technology.
Tesla's system—officially named Full Self-Driving (Supervised)—requires active driver supervision and issues escalating warnings if the driver does not respond. The company has repeatedly cautioned users that failure to comply could lead to serious injury or death.
While Tesla was not involved in the demonstration, this is not the first time its autonomous technology has drawn scrutiny.
In April 2024, a Tesla Model S using FSD was involved in a fatal accident in Washington State, in which a motorcyclist was killed.
The Dawn Project, whose founder Dan O'Dowd also leads a company developing competing driver-assistance software, has previously run campaigns highlighting perceived flaws in Tesla's FSD system.
The demonstration comes amid anticipation surrounding the launch of Tesla's robotaxi service, initially set for 22 June, and the company's planned Cybercab, an all-electric, fully autonomous vehicle.
Chief Executive Elon Musk has since hinted at a delay, saying the company is 'being super paranoid about safety' and suggesting that the first Tesla to drive itself from the factory to a customer's home would follow on 28 June.
"Tentatively, June 22. We are being super paranoid about safety, so the date could shift. First Tesla that drives itself from factory end of line all the way to a customer house is June 28." — Elon Musk (@elonmusk) June 11, 2025
As the debate around autonomous vehicle safety intensifies, the industry continues to face questions about whether current technology can meet the expectations—and responsibilities—of full autonomy.
Related Articles


Express Tribune
Tesla's 'Robotaxi' won't be driverless, set to launch with 'human monitors'
Tesla's long-awaited robotaxi service will launch on June 22, but the first rides won't be as driverless as promised. Invitations sent to select Tesla investors and influencers confirm that a human "safety monitor" will occupy the front passenger seat, a significant shift from CEO Elon Musk's earlier pledge of rides with 'no one in the car.' The move comes amid increasing scrutiny from regulators and marks a cautious first step for Tesla's autonomous ambitions.

According to the invitation, monitors will accompany riders during initial trips, which must be booked between 6 a.m. and midnight within a designated geofenced area that excludes airports. Service may also be limited or suspended in bad weather.

"Tesla has given me permission to share the parameters of use for their Model Y Robotaxi service, starting this Sunday June 22nd in Austin, Texas. The Early Access phase is invitation-only. Parameters of Use: • You must read through and agree to the attached Terms of Service,…" — Sawyer Merritt (@SawyerMerritt) June 20, 2025

The limited pilot will involve between 10 and 20 Tesla Model Y vehicles. While Musk previously claimed these would be capable of unsupervised operation and remote management during emergencies, the current inclusion of human safety staff raises questions about the timeline for achieving full autonomy. Each rider may bring one guest aged 18 or older. Tesla has not yet announced a public rollout date beyond the invite-only phase.

The launch also comes as Tesla faces mounting regulatory pressure. The US National Highway Traffic Safety Administration (NHTSA) recently requested more information about the system's ability to operate in low-visibility conditions, citing concerns over its safety performance in inclement weather. Meanwhile, lawmakers in Texas, where Tesla is headquartered, have asked the company to delay operations until new autonomous vehicle legislation comes into effect in September. The law will require robotaxi services to obtain authorisation from the state's Department of Motor Vehicles before running without a human driver.

"Next week, Tesla plans to launch robotaxis in Austin — before Texas' new AV safety law takes effect. We're urging a delay until those safety standards are in place. Public trust comes from safety and transparency. We look forward to working with Tesla to achieve both. #txlege" — Senator Sarah Eckhardt (@SarahEckhardtTX) June 18, 2025

Despite the scaled-back debut, Tesla's robotaxi trial remains a key milestone in the company's broader push toward full self-driving technology, one that Musk has repeatedly described as central to Tesla's future. Whether the service evolves into the fully autonomous system envisioned by Musk remains to be seen.

Express Tribune
Elon Musk reignites feud with Sam Altman after OpenAI controversy surfaces
Elon Musk has once again directed public criticism toward OpenAI CEO Sam Altman, calling him 'Scam Altman' in a recent post on the social media platform X. The comment came shortly after the release of The OpenAI Files, a report raising concerns about OpenAI's governance, profit model, and safety practices. Musk framed his remark as a reaction to the revelations outlined in the report.

Musk and Altman, both prominent figures in the tech and artificial intelligence sectors, share a history as co-founders of OpenAI. Musk served on OpenAI's board from its founding in 2015 until stepping down in 2018. He has since criticized the company's evolution from a non-profit research lab to a 'capped-profit' model, arguing that the move contradicts OpenAI's original mission of promoting safe and open AI development.

In addition to their involvement in AI, both Musk and Altman have been vocal supporters of cryptocurrency, adding another dimension to their public personas and influence in the tech world. Musk, who leads Tesla, SpaceX, and X, has long promoted digital assets such as Bitcoin and Dogecoin. Tesla holds over $1 billion in Bitcoin, and Musk's public endorsements of Dogecoin have often impacted its market price. Altman, similarly, has expressed support for Bitcoin, describing it as a critical technological step during a 2023 appearance on The Joe Rogan Experience. He also co-founded the Worldcoin project in 2019, with a focus on decentralized identity and finance.

Musk's recent criticism comes amid broader industry debates over the future of artificial intelligence. Centralized models, like those used by OpenAI, have been criticized for concentrating power and limiting transparency. Decentralized alternatives, often supported by crypto infrastructure, are being explored as a counterbalance.

Express Tribune
OpenAI files reveal profit shift, leadership concerns, and safety failures in nonprofit AI organization
A new investigative report titled The OpenAI Files, released by non-profit watchdogs The Midas Project and The Tech Oversight Project, reveals troubling insights into OpenAI's internal operations, leadership, and shifting priorities. The report, based on a year-long investigation, provides detailed documentation of how the organization's structure and goals have evolved since its founding in 2015.

Founded to democratize artificial intelligence research and prevent misuse, OpenAI began as a non-profit organization. Despite this designation, it has developed a widely used paid product, ChatGPT, and has maintained a hybrid structure involving a for-profit subsidiary. In late 2024, OpenAI announced plans to shift toward full commercialization. The move faced significant backlash from co-founder Elon Musk, former employees, civil society groups, and competitors like Meta, leading to a reversal in May 2025 and a recommitment to non-profit governance.

The watchdog report outlines four core areas of concern: organizational restructuring, leadership, transparency and safety, and conflicts of interest. It criticizes OpenAI for quietly altering its original investor profit cap, initially set at a 100x return on investment. By 2023, the cap allowed annual increases of 20%, and by 2025 the company was reportedly considering removing it entirely. The groups argue that these changes contradict OpenAI's founding mission to ensure AGI (artificial general intelligence) benefits all of humanity.

Concerns about CEO Sam Altman are also central to the report. The watchdog organizations cite past controversies involving Altman's alleged absenteeism, manipulative behavior, and staff resignations. Former senior OpenAI figures, including Dario Amodei and Ilya Sutskever, are said to have described his leadership style as abusive.

Further, the report alleges that OpenAI failed to allocate promised resources to a dedicated AI safety team and instead pressured employees to meet product deadlines while discouraging internal criticism and whistleblowing. It also highlights the company's use of strict NDAs that threatened employees with the loss of vested stock if they spoke out.

Additionally, several board members are reported to have financial interests in businesses that benefit from OpenAI's market position. Altman has invested in multiple affiliated ventures, while Board Chair Bret Taylor and board member Adebayo Ogunlesi lead or fund companies that rely on OpenAI's technology. These ties, the watchdogs argue, may compromise the integrity of OpenAI's mission and decision-making.