Latest news with #legal


The Verge
an hour ago
- Politics
- The Verge
Fax spam is apparently still a thing.
The Supreme Court sent a fax spam case back to a lower court after determining that the lower court erred in deferring to the Federal Communications Commission's legal interpretation. After the FCC said the law didn't cover online fax services, the lower court decertified a class of fax recipients seeking damages for receiving unsolicited ads. SCOTUS says the court should have reached its own interpretation of the statute, a ruling that could be meaningful for medical professionals who still use faxes.


News24
3 hours ago
- News24
Pretoria woman, 81, found dead at home, allegedly strangled and stabbed with garden fork
Jade-Lee Natasha Smit and Lucas Mmnonwa briefly appeared in the Pretoria Magistrate's Court on Friday, where their matter was postponed to next week.

Yahoo
4 hours ago
- Health
- Yahoo
How artificial intelligence controls your health insurance coverage
Over the past decade, health insurance companies have increasingly embraced artificial intelligence algorithms. Unlike doctors and hospitals, which use AI to help diagnose and treat patients, health insurers use these algorithms to decide whether to pay for the treatments and services that a patient's physicians recommend.

One of the most common examples is prior authorization, in which your doctor must obtain payment approval from your insurance company before providing your care. Many insurers use an algorithm to decide whether the requested care is 'medically necessary' and should be covered. These AI systems also help insurers decide how much care a patient is entitled to — for example, how many days of hospital care a patient can receive after surgery.

If an insurer declines to pay for a treatment your doctor recommends, you usually have three options. You can try to appeal the decision, but that process can take a lot of time, money and expert help; only 1 in 500 claim denials is appealed. You can agree to a different treatment that your insurer will cover. Or you can pay for the recommended treatment yourself, which is often unrealistic given high health care costs.

As a legal scholar who studies health law and policy, I'm concerned about how insurance algorithms affect people's health. As with the AI algorithms used by doctors and hospitals, these tools can potentially improve care and reduce costs. Insurers say that AI helps them make quick, safe decisions about what care is necessary and avoids wasteful or harmful treatments. But there is strong evidence that the opposite can be true: these systems are sometimes used to delay or deny care that should be covered, all in the name of saving money.

Presumably, companies feed a patient's health care records and other relevant information into coverage algorithms and compare that information with current medical standards of care to decide whether to cover the patient's claim. However, insurers have refused to disclose how these algorithms work in making such decisions, so it is impossible to say exactly how they operate in practice.

Using AI to review coverage saves insurers time and resources, especially because fewer medical professionals are needed to review each case. But the financial benefit to insurers doesn't stop there. If an AI system quickly denies a valid claim and the patient appeals, that appeal process can take years. If the patient is seriously ill and expected to die soon, the insurance company might save money simply by dragging out the process in the hope that the patient dies before the case is resolved.

This creates the disturbing possibility that insurers might use algorithms to withhold care for expensive, long-term or terminal health problems, such as chronic or other debilitating disabilities. One reporter put it bluntly: 'Many older adults who spent their lives paying into Medicare now face amputation or cancer and are forced to either pay for care themselves or go without.'

Research supports this concern: patients with chronic illnesses are more likely to be denied coverage and suffer as a result. In addition, Black and Hispanic people and those of other nonwhite ethnicities, as well as people who identify as lesbian, gay, bisexual or transgender, are more likely to experience claims denials. Some evidence also suggests that prior authorization may increase rather than decrease health care system costs.
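Because insurers have not disclosed how their systems work, the mechanics can only be illustrated hypothetically. The Python sketch below shows what a simplified rule-based prior-authorization check of the kind described above might look like; every name in it (PriorAuthRequest, COVERAGE_CRITERIA, the specific codes and thresholds) is invented for illustration and does not reflect any real insurer's system.

```python
# Hypothetical illustration only: no insurer has disclosed how its coverage
# algorithms actually work. All names, criteria and thresholds are invented.
from dataclasses import dataclass

@dataclass
class PriorAuthRequest:
    procedure_code: str                # requested treatment, CPT-style code
    diagnosis_code: str                # patient's diagnosis, ICD-style code
    prior_conservative_care_days: int  # days of cheaper treatment already tried
    requested_hospital_days: int       # inpatient days the doctor is requesting

# Invented stand-in for an insurer's internal 'medical necessity' criteria.
COVERAGE_CRITERIA = {
    "97110": {  # physical therapy (hypothetical rule)
        "allowed_diagnoses": {"M54.5", "M25.561"},
        "min_conservative_care_days": 14,
        "max_hospital_days": 0,
    },
}

def review_request(req: PriorAuthRequest) -> tuple[bool, str]:
    """Return (approved, reason) for a single prior-authorization request."""
    rule = COVERAGE_CRITERIA.get(req.procedure_code)
    if rule is None:
        return False, "no coverage rule for procedure; route to human review"
    if req.diagnosis_code not in rule["allowed_diagnoses"]:
        return False, "diagnosis not on approved list for this procedure"
    if req.prior_conservative_care_days < rule["min_conservative_care_days"]:
        return False, "conservative care requirement not yet met"
    if req.requested_hospital_days > rule["max_hospital_days"]:
        return False, "requested hospital days exceed plan limit"
    return True, "criteria met"

approved, reason = review_request(PriorAuthRequest("97110", "M54.5", 10, 0))
print(approved, "-", reason)  # False - conservative care requirement not yet met
```

Even this toy version makes the concerns above concrete: the criteria are set unilaterally by the insurer, a denial falls out of a table lookup rather than clinical judgment, and nothing in the pipeline requires outside validation.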
Insurers argue that patients can always pay for any treatment themselves, so they're not really being denied care. But this argument ignores reality. These decisions have serious health consequences, especially when people can't afford the care they need.

Unlike medical algorithms, insurance AI tools are largely unregulated. They don't have to go through Food and Drug Administration review, and insurance companies often say their algorithms are trade secrets. That means there's no public information about how these tools make decisions, and there's no outside testing to see whether they're safe, fair or effective. No peer-reviewed studies exist to show how well they actually work in the real world.

There does seem to be some momentum for change. The Centers for Medicare & Medicaid Services, or CMS, the federal agency in charge of Medicare and Medicaid, recently announced that insurers in Medicare Advantage plans must base decisions on the needs of individual patients – not just on generic criteria. But these rules still let insurers create their own decision-making standards, and they still don't require any outside testing to prove those systems work before they are used. Plus, federal rules can only regulate federal public health programs like Medicare; they do not apply to private insurers that do not provide federal health program coverage.

Some states, including Colorado, Georgia, Florida, Maine and Texas, have proposed laws to rein in insurance AI. A few have passed new laws, including a 2024 California statute that requires a licensed physician to supervise the use of insurance coverage algorithms. But most state laws suffer from the same weaknesses as the new CMS rule. They leave too much control in the hands of insurers to decide how to define 'medical necessity' and in what contexts to use algorithms for coverage decisions. They also don't require those algorithms to be reviewed by neutral experts before use. And even strong state laws wouldn't be enough, because states generally can't regulate Medicare or insurers that operate outside their borders.

In the view of many health law experts, the gap between insurers' actions and patient needs has become so wide that regulating health care coverage algorithms is now imperative. As I argue in an essay to be published in the Indiana Law Journal, the FDA is well positioned to do so. The FDA is staffed with medical experts who have the capability to evaluate insurance algorithms before they are used to make coverage decisions. The agency already reviews many medical AI tools for safety and effectiveness. FDA oversight would also provide a uniform, national regulatory scheme instead of a patchwork of rules across the country.

Some people argue that the FDA's power here is limited. For the purposes of FDA regulation, a medical device is defined as an instrument 'intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease.' Because health insurance algorithms are not used to diagnose, treat or prevent disease, Congress may need to amend that definition before the FDA can regulate them. If the FDA's current authority isn't enough to cover insurance algorithms, Congress could change the law to give it that power. Meanwhile, CMS and state governments could require independent testing of these algorithms for safety, accuracy and fairness.
That might also push insurers to support a single national standard, such as FDA regulation, rather than face a patchwork of state rules. The move toward regulating how health insurers use AI in coverage determinations has clearly begun, but it still awaits a robust push. Patients' lives are literally on the line.

This article is republished from The Conversation, a nonprofit, independent news organization. It was written by Jennifer D. Oliva, Indiana University. Oliva receives funding from NIDA to research the impact of pharmaceutical industry messaging on the opioid crisis among U.S. military veterans. She is affiliated with the UCSF/University of California College of the Law, San Francisco Consortium on Law, Science & Health Policy and the Georgetown University Law Center O'Neill Institute for National & Global Health Law.
Yahoo
5 hours ago
- Business
- Yahoo
BBC Hits AI Startup Perplexity With Legal Threat Over Content Scraping Concerns
The BBC has sent a legal threat to Perplexity, alleging that the AI startup is scraping the British national broadcaster's content. In one of its first major copyright interventions of the AI age, the BBC claimed that Perplexity's ChatGPT-style search tool was 'trained using BBC content.'

The corporation outlined its concerns in a letter seen by the Financial Times. The BBC confirmed that a legal warning had been issued but declined to comment beyond the contents of the letter. The broadcaster demanded that San Francisco-based Perplexity cease its use of BBC content, delete copies of material, and offer 'financial compensation' for the alleged IP infringement.

Perplexity has been approached for comment. The company told the FT that the BBC's claims were 'manipulative and opportunistic,' showcasing a 'fundamental misunderstanding' of technology and IP law. Perplexity added that the BBC's legal letter shows 'how far the BBC is willing to go to preserve Google's illegal monopoly for its own self-interest.'

The BBC argued that elements of its content were being regurgitated verbatim by Perplexity and that links to its website appeared in search results. It added that some information was reproduced with factual inaccuracies and missing context. The BBC letter said: 'It is therefore highly damaging to the BBC, injuring the BBC's reputation with audiences — including UK licence fee-payers who fund the BBC — and undermining their trust in the BBC.'

In an interview at Bloomberg's Tech Summit this month, Perplexity CEO Aravind Srinivas said the AI tool handled 30 million queries a day and that its growth had been 'phenomenal.' 'Give it a year, we'll be doing, like, a billion queries a week if we can sustain this growth rate,' Srinivas said. Perplexity's primary source of revenue is subscriptions, with users asked to pay $20 a month for Perplexity Pro.
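As a quick sanity check on Srinivas's figures (a back-of-the-envelope calculation, not from the article): 30 million queries a day is roughly 210 million a week, so reaching a billion queries a week within a year implies volume growing nearly fivefold.

```python
# Back-of-the-envelope check of the growth implied by the quoted figures.
current_per_day = 30_000_000
current_per_week = current_per_day * 7          # 210,000,000
target_per_week = 1_000_000_000
growth_factor = target_per_week / current_per_week
print(f"{current_per_week:,} queries/week today")
print(f"~{growth_factor:.1f}x growth needed to reach a billion/week")
# Output:
# 210,000,000 queries/week today
# ~4.8x growth needed to reach a billion/week
```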

News.com.au
10 hours ago
- Politics
- News.com.au
Disgraced former MP Maguire guilty of lying in ICAC probe
Former MP Daryl Maguire is facing the prospect of jail after being found guilty of giving misleading evidence to ICAC. Maguire, 66, fronted a Sydney court on Friday, where a magistrate found him guilty of one count of giving false or misleading evidence at a public inquiry. He faces a maximum penalty of two years in jail.

Maguire, who was the MP for Wagga Wagga from 1999 to 2018, faced a hearing in the Downing Centre Local Court earlier this year, where he pleaded not guilty to the lone count. The case centred on Maguire's evidence before the Independent Commission Against Corruption (ICAC) in July 2018. During the ICAC probe, he denied asking to receive a financial benefit for brokering a property deal at Canterbury. However, recorded phone conversations led him to admit he had asked for a slice of the profits if the multimillion-dollar deal with a Chinese developer was finalised. Maguire had argued that he had not given misleading evidence and had answered questions to the best of his ability. Magistrate Clare Farnan handed down her judgment on Friday.

The magistrate also rejected an application by Maguire's legal team on Friday afternoon for a non-publication order, which would have kept the verdict under wraps. Maguire's lawyers sought the order to guard against possible prejudice to a jury ahead of a separate upcoming trial: Maguire is also fighting allegations relating to an alleged visa and migration fraud. He has pleaded not guilty to one count of conspiracy and is due to face a District Court trial later this year.

Confusion reigned at the Downing Centre John Maddison Tower court complex on Friday. Maguire was listed in one courtroom, and when his legal team and the prosecution arrived at court, they were told the matter had to be adjourned until October. Maguire and his lawyers then left the complex. However, they were later called back to another court, where Ms Farnan handed down her judgment. The court was told the confusion arose because the Downing Centre court complex has been closed, with the closure expected to last four weeks, due to damage to electrical infrastructure after the basement was flooded earlier this week, throwing thousands of legal matters into turmoil. Maguire returns to court in August.

The former Wagga Wagga MP resigned from the NSW parliament in 2018 after ICAC launched a separate investigation into his conduct while in office. The inquiry revealed he had been in a secret five-year 'close personal relationship' with ex-Premier Gladys Berejiklian. She resigned from her position in September 2021 after ICAC announced it would investigate whether she had breached the ministerial code of conduct. The commission found in July 2023 that both Maguire and Ms Berejiklian engaged in serious corrupt conduct.