Latest news with #safeguards


Telegraph
13 hours ago
- Health
- Telegraph
Everywhere assisted dying is introduced, the safeguards never prove effective
The emotion of watching the progress of the assisted dying Bill through Parliament will differ for every person. For some, it will elicit grief or perhaps fear. For others, hope. For me, watching from afar, it's déjà vu. Before assisted dying was legalised in New Zealand three and a half years ago, it was me speaking in Parliament against its passage. And the debate here is all too spookily reminiscent of what we saw. We, like British MPs, were promised that the eligibility criteria would be tight and that claims of a slippery slope were a 'fallacy'.

My work on this issue began when I chaired the New Zealand Parliament's Health Committee inquiry into assisted dying. It was the largest inquiry ever undertaken by the New Zealand Parliament and our nation's most detailed public discussion of this topic. Again and again, I asked questions and probed for the details of how the proposed safeguards, which promised to ensure no mistakes were made, would in fact do so. Each time, I was assured that the laws would function to ensure horror cases simply could not occur, that the criteria would never be relaxed, and that this law would be the safest in the world.

Listening now to the UK debate, the lines from supporters here are redolent of what I heard back then. Kim Leadbeater, were it not for her broad Yorkshire tones, could well play a Kiwi parliamentarian, breezily dismissing the concerns voiced by my Committee and me. And if the language matches closely, the proposed 'safeguards' are near-identical. Central to the safeguards in the Kiwi law was a body called the End of Life Review Committee. It broadly mirrors Leadbeater's proposals for a supervisory review body following the removal of the High Court safeguard. In New Zealand, three experts – two health practitioners and one medical ethicist – sit on this Committee. Their role is to review assisted deaths and to scrutinise complex cases where something may have gone wrong.
But things did not work out as they were supposed to. One of the Committee's original members resigned over serious concerns about its ability to supervise the implementation of assisted suicide and euthanasia. Another member was pushed out, it is thought, because she was raising too many concerns about the operation of the new law. Two out of three members were gone. Both subsequently went public and stated that the Committee's oversight of the law was so limited that wrongful deaths could go undetected. They said they were 'extremely concerned' about how little information they received relating to patients' deaths, leaving them feeling 'constrained to the point of irrelevance'. In one deeply troubling case, the Review Committee was able to establish that a dementia patient, who did not speak English, was approved for assisted dying despite not having an interpreter present for their assessment.

The New Zealand experience is closely mirrored in every country where similar laws have been introduced in recent years. In our Antipodean neighbour, Australia, several states have legalised assisted dying in the last few years. Queensland's law was said to have taken extra time to progress through parliament to make sure it would guarantee that every death was 'truly voluntary', 'without coercion', and with the strictest safeguards. In fact, patients there have killed themselves with others' drugs and, in a scathing judgement, a coroner ruled that it was in fact 'not a well-considered law', but rather had 'inadequate' safeguards that had taken just '107 days to be exposed'. In another case, a woman appeared in court last week, charged with her husband's murder, having admitted to her family that she had administered three lethal doses of drugs to him after he told doctors he wanted to 'go on' rather than die via assisted dying. She denies murder, and the case continues.
Meanwhile, Oregon – one of the first jurisdictions to legalise assisted dying – has seen its eligibility criteria stretched to include patients with anorexia, diabetes, or arthritis. Around half of those opting for assisted suicide there now cite feeling like a 'burden' on others as a motivating factor. None of the parliamentarians who voted for those laws believed they were dangerous. In New Zealand, my colleagues certainly did not. They had repeatedly been assured that the safeguards were absolute, inviolable, and complete. But if the experience of those who have passed these laws is anything to go by, British parliamentarians should think very carefully before passing the assisted dying Bill. Safeguards so often promised have proven so rarely effective. If British MPs are not certain that they will work here, my urgent advice, having seen this play out before, would be to reject this Bill today.


BBC News
12-06-2025
- Sport
- BBC News
Safeguards 'non-existent' when kickboxer, 15, died
A 15-year-old three-time world kickboxing champion died from a severe traumatic brain injury after an unsanctioned fight which had no safeguards, a coroner has ruled. Alex Eastwood suddenly collapsed after the charity bout in Wigan against a 17-year-old opponent and died three days later, on 29 June last year. Coroner Michael Pemberton said the fight was unsanctioned and the safeguards that were meant to exist "simply didn't". He described the regulation of kickboxing as "chaotic and disjointed". He said emergency services did everything they could to try and save Alex.


BBC News
12-06-2025
- Sport
- BBC News
Study recommends mandatory four-week break for players
Professional footballers should be allowed at least a four-week off-season break, plus a minimum four-week retraining period before a return to competition, according to a new study. The study, released on Thursday by global players' union Fifpro, led to 70 medical and performance experts agreeing on 12 safeguards they want to introduce in a bid to protect players from exhaustion and excessive workload. Fifpro said it represents the most extensive expert consensus to date on safeguards against excessive workload in professional football. More than 75% agreement was required among participants to establish each recommendation. Among the recommendations are:
- a four-week off-season break
- mandatory mid-season breaks
- a minimum four-week retraining period before a return to competitive action
- mandatory consideration of the travel burden on players
"This study presents safety standards based on the considered and independent opinions of medical and performance experts working in professional football who understand the mental and physical strain placed on players," Professor Doctor Vincent Gouttebarge, Fifpro medical director, said. "If we can all agree that health comes first, then we should take steps to implement these safeguards."
The release of the report comes just days before the opening game of the Fifa Club World Cup in the United States. Premier League sides Manchester City and Chelsea are both involved in the tournament, which concludes on 13 July. Should either reach the final, players will have a gap of just five weeks before the Premier League season begins on 15 August. Chelsea's first game of the Club World Cup takes place on 16 June, just 19 days after their Conference League final triumph over Real Betis.
In September, a week prior to suffering an ACL injury, City midfielder Rodri said players were close to going on strike because of the increase in games, while team-mate Manuel Akanji suggested he would have to retire at 30 years old as a result of the lack of breaks. In October, Fifpro filed a legal complaint with the European Commission over what it said was Fifa's "abuse of dominance", which was specifically related to the Club World Cup.


The Guardian
11-06-2025
- Health
- The Guardian
Two more Labour MPs suggest they could vote against assisted dying bill
Two more Labour MPs have expressed significant doubts about the assisted dying bill, suggesting they would now oppose the legislation. The former health minister Andrew Gwynne, who previously abstained, wrote to his constituents in Gorton and Denton to say: 'To date I don't think that the bill has been strengthened enough and that safeguards should go much further.' Paul Foster, the Labour MP for South Ribble, who previously voted in favour, told constituents this week he also had serious concerns about the bill's safeguards, suggesting he too could vote against it when it returns to the Commons for its final vote next week. He said that following the alarm voiced by the Royal College of Psychiatrists, he was 'seriously concerned about the adequacy of the revised safeguards, particularly the removal of judicial oversight and the wider implications for vulnerable individuals'. He said: 'As we approach the final stages of this bill, I want to be clear that I will not support the legislation at third reading unless I am absolutely assured that robust and enforceable safeguards are in place to protect people from harm, pressure or coercion.'

About 14 MPs who backed the bill or abstained at its second reading in November have said they are likely to vote against it. At least two others have said they will change their positions to vote for the bill, including the technology minister Chris Bryant, who previously abstained, and fellow Labour MP Jack Abbott, who previously voted against. Labour's Debbie Abrahams, the chair of the work and pensions select committee, and Josh Fenton-Glynn, who both abstained previously, say they will now vote against, and Karl Turner, who voted in favour, has said he will abstain. Those who say they plan to switch from voting yes to voting against also include the former Conservative minister George Freeman and fellow Tory MPs Mike Wood and Andrew Snowden. The Tory MP Charlie Dewhirst, who previously abstained, says he will vote against.
Two Liberal Democrat MPs have also switched, including the party's work and pensions spokesperson, Steve Darling, and Brian Mathew, the Melksham and Devizes MP, who said that scrutiny of the plans had left 'several concerns I feel have been inadequately answered'. The Reform UK chief whip, Lee Anderson, and his former party colleague Rupert Lowe withdrew support publicly when the bill's sponsor, Kim Leadbeater, removed the need for a high court judge to approve each procedure, instead giving this authority to an expert panel. The bill passed with a majority of 55 in November, but the numbers are expected to be significantly tighter when it returns to the Commons for third reading, scheduled for 20 June. This Friday, MPs will debate amendments to the bill for a second day. The first day of debates on amendments drawn up during a lengthy committee stage resulted in some changes being agreed, including an opt-out for all healthcare workers from being involved in assisted dying, extending the exemption that previously would have been available only to doctors. The bill drawn up by Leadbeater would allow terminally ill patients in England and Wales to end their lives if they have less than six months to live, contingent on the agreement of two doctors and an expert panel including a senior lawyer, psychiatrist and social worker.


Fox News
06-06-2025
- Business
- Fox News
Federal AI power grab could end state protections for kids and workers
Just as AI begins to upend American society, Congress is considering a move that would sideline states from enforcing commonsense safeguards. Tucked into the recently passed House reconciliation package is Section 43201, a provision that would pre-empt nearly all state and local laws governing "artificial intelligence models," "artificial intelligence systems," and "automated decision systems" for the next 10 years. Last night, the Senate released its own version of the moratorium that would restrict states from receiving federal funding for broadband infrastructure if they don't fall in line. Supporters argue that a moratorium is needed to avoid a patchwork of state rules that could jeopardize U.S. AI competitiveness. But this sweeping approach threatens to override legitimate state efforts to curb Big Tech's worst abuses—with no federal safeguards to replace them. It also risks undermining the constitutional role of state legislatures to protect the interests and rights of American children and working families amid AI's far-reaching social and economic disruptions. In the absence of Congressional action, states have been the first line of defense against Big Tech. Texas, Florida, Utah, and other states have led the way to protect children online, safeguard data privacy, and rein in platform censorship. Section 43201 puts many of those laws—even those not directly related to AI—at risk. The provision defines "automated decision systems" broadly, potentially capturing core functions of social media platforms, such as TikTok's For You feed or Instagram's recommendation engine. At least 12 states have enacted laws requiring parental consent or age verification for minors accessing these platforms. However, because these laws specifically apply to social media platforms, they could easily be construed as regulating "automated decision systems"— and thus be swept up in the moratorium. 
Further, Section 43201 might also block provisions of existing state privacy laws that restrict the use of algorithms—including AI—to predict consumer behavior, preferences, or characteristics. Even setting aside concerns with the moratorium's expansive scope, it suffers from a more fundamental flaw. The moratorium threatens to short-circuit American federalism by undermining state laws that ensure AI lives up to the promise outlined by Vice President J.D. Vance. Speaking at the Paris AI Summit, he warned against viewing "AI as a purely disruptive technology that will inevitably automate away our labor force." Instead, Vance called for "policies that ensure that AI… make[s] our workers more productive" and rewards them with "higher wages, better benefits, and safer and more prosperous communities." That vision is nearly impossible without state-level action. Legislators, governors, and attorneys general from Nashville to Salt Lake City are already advancing creative, democratically accountable solutions. Tennessee's novel ELVIS Act protects music artists from nonconsensual AI-generated voice and likeness cloning. Utah's AI consumer protection law requires that generative AI model deployers notify consumers when they are interacting with an AI. Other states, including Arkansas and Montana, are building legal frameworks for digital property rights with respect to AI models, algorithms, data, and model outputs. All of this is now at risk. As laboratories of democracy, states are essential to navigating the inevitable and innumerable trade-offs entailed by the diffusion of emerging technologies. Federalism enables continuous experimentation and competition between states—exposing the best and worst approaches to regulation in highly dynamic environments. That's critical when confronting AI's vast and constantly evolving sphere of impact on children and employment—to say nothing of the technology's wider socio-economic effects. 
Sixty leading advocacy and research organizations have warned that AI chatbots pose a significant threat to kids. They cite harrowing stories of teens who have been induced to suicide, addiction, sexual perversion, and self-harm at the hands of Big AI. Even industry leaders are sounding alarms: Anthropic CEO Dario Amodei estimates that AI could force up to 20% unemployment over the next five years. Innovation inherently brings disruption—but disruption without guardrails can harm the very communities AI is purportedly meant to uplift. That's why 40 state attorneys general, Democrats and Republicans alike, signed a letter opposing Section 43201, warning that it would override "carefully tailored laws targeting specific harms related to the use of AI." To be sure, not all laws are drafted equal. States like California and Colorado are imposing European-style AI regulations particularly detrimental to "Little Tech" and open-source model developers. But Congress shouldn't throw out federalism with the "doomer" bathwater. Rather than a blanket pre-emption, it should consider narrow, targeted limits carefully tailored to stymie high-risk bills—modeled on California and Colorado's approach—that foist doomer AI standards on the rest of the nation. Absent a comprehensive federal AI framework, states must retain freedom to act—specifically, to ensure that AI bolsters American innovation and competitiveness in pursuit of a thriving middle class. America's AI future has great potential. But our laboratories of democracy are key to securing it.