New cholesterol drug lowers LDL when statins aren't enough, study finds


Yahoo · 07-05-2025

A new medication that combines an already approved drug with a new unapproved one has been shown to cut the level of LDL, or 'bad' cholesterol, when statins aren't helping enough.
In the Phase 3 trial, Cleveland Clinic researchers found that the combination of the new drug, obicetrapib, with an established medication, ezetimibe, reduced low-density lipoprotein (LDL) cholesterol levels by 48.6% after about three months' use — producing more effective results than either drug alone. Ezetimibe is a cholesterol-lowering drug that is often prescribed with statins to reduce LDL even more.
The research was presented Wednesday during a late-breaking science session at the annual meeting of the European Atherosclerosis Society in Glasgow, Scotland, and simultaneously published in The Lancet.
In the multicenter clinical trial, the lead researcher, Dr. Ashish Sarraju, a preventive cardiologist at the Cleveland Clinic, and his colleagues enrolled 407 patients with a median age of 68 with LDL cholesterol levels greater than 70 mg/dL even though they had taken medication to lower it.
The participants were randomly assigned to four groups: a group for a pill that combined obicetrapib with ezetimibe, a group for each of the drugs separately and a placebo group. All participants continued on the medications they were taking before they started the trial, along with the medications being studied.
The reason: Some people have to take a number of prescriptions to get LDL down to desired levels.
'We need to give patients and their doctors all the options we can to try to get LDL under control if they are at risk for, or already have, cardiovascular disease,' Sarraju said. 'In higher-risk patients, you want to get LDL down as quickly as possible and keep it there as long as possible.'
High-risk patients had already had strokes or heart attacks, or were considered likely to have them in the future.
For that reason, the researchers enrolled patients in the trial who, despite already being on statins or even high-intensity statins, still had LDL levels that were too high.
The hope is that lowering LDL levels will reduce the risk of adverse cardiovascular events such as strokes and heart attacks. According to the American Heart Association, the optimal total cholesterol level for an adult is about 150 mg/dL, with LDL levels at or below 100 mg/dL ('dL' is short for 'deciliter,' or a tenth of a liter). For high-risk patients, Sarraju recommends an LDL no higher than 70 mg/dL.
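The LDL targets quoted above can be expressed as a simple threshold check. This is a minimal sketch of the numbers in the article only (100 mg/dL generally, 70 mg/dL for high-risk patients); the function name is illustrative and this is not medical guidance.

```python
def ldl_at_target(ldl_mg_dl: float, high_risk: bool = False) -> bool:
    """Return True if an LDL reading meets the target quoted in the article.

    Thresholds: 100 mg/dL for most adults; 70 mg/dL for high-risk patients,
    per the figures cited from the AHA and Dr. Sarraju.
    """
    target = 70 if high_risk else 100
    return ldl_mg_dl <= target

# The trial enrolled patients whose LDL stayed above 70 mg/dL despite medication:
print(ldl_at_target(85, high_risk=True))   # False: above the 70 mg/dL high-risk target
print(ldl_at_target(85, high_risk=False))  # True: within the general 100 mg/dL target
```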
The trial was funded by the maker of obicetrapib, Netherlands-based NewAmsterdam Pharma. It expects to have conversations with the Food and Drug Administration about approval for the new combo drug 'over the course of the year,' a spokesperson said.
A multitude of factors, many of them modifiable, can result in high LDL: a diet high in saturated fats, processed foods and fried foods; being overweight; and smoking. Older age also raises LDL.
Dr. Robert Rosenson, director of lipids and metabolism for the Mount Sinai Health System in New York City, said other drugs in the same class have failed to reduce heart attacks or stroke, 'but I am cautiously hopeful.'
The drugmaker is currently running an additional trial to determine if the combo drug not only lowers cholesterol but also protects against adverse heart events.
While lifestyle changes can help bring down LDL, levels remain stubbornly high for some people. Only 20% of patients at high risk of heart disease are able to manage their LDL, said Dr. Corey Bradley, a cardiologist at the Columbia University Vagelos College of Physicians and Surgeons.
Heart disease is the leading cause of death for adults in the United States.
'High LDL is one of the leading risk factors for heart disease, and we have such a poor handle on controlling that risk,' Bradley said. 'Many people have such a high LDL they will require multiple agents to control it.'
'I am very excited about drugs like obicetrapib,' she said.
This article was originally published on NBCNews.com


Related Articles

Nationwide recall issued for popular chocolate brand that contains potentially ‘life-threatening' ingredient

New York Post · 15 hours ago

A popular chocolate treat is being pulled from shelves nationwide over an ingredient that may trigger severe – and potentially deadly – allergic reactions, federal officials warned. An urgent recall was issued after Lipari Foods discovered that its 14-ounce packages of JLM Branded Dark Chocolate Nonpareils may contain undeclared milk, the US Food and Drug Administration (FDA) announced Friday. Those with milk allergies are urged to avoid consuming the candy.

'People who have allergies to milk run the risk of serious or life-threatening allergic reactions if they consume these products,' the dire bulletin stated. The Michigan-based company initiated the recall after its distributor, Weave Nut Company, alerted it that the candy may contain the dairy allergen, which was not disclosed on the packaging. But the sprinkle-topped chocolate discs, sold in clear plastic tubs, had already made their way to retailers across the country.

The recall targets packaging with lot codes 28202501A, 29202501A, 23202504A, 14202505A, 15202505A, and 03202506A on the bottom label. No illnesses or adverse reactions have been reported in connection with the recall. The FDA advised customers to return the product to the place of purchase for a full refund.

Encountered a problematic response from an AI model? More standards and tests are needed, say researchers

CNBC · 15 hours ago

As the usage of artificial intelligence — benign and adversarial — increases at breakneck speed, more cases of potentially harmful responses are being uncovered. These include hate speech, copyright infringements and sexual content. The emergence of these undesirable behaviors is compounded by a lack of regulations and insufficient testing of AI models, researchers told CNBC.

Getting machine learning models to behave the way they were intended to is also a tall order, said Javier Rando, a researcher in AI. "The answer, after almost 15 years of research, is, no, we don't know how to do this, and it doesn't look like we are getting better," Rando, who focuses on adversarial machine learning, told CNBC.

However, there are some ways to evaluate risks in AI, such as red teaming. The practice involves individuals testing and probing artificial intelligence systems to uncover and identify any potential harm — a modus operandi common in cybersecurity circles.

Shayne Longpre, a researcher in AI and policy and lead of the Data Provenance Initiative, noted that there are currently insufficient people working in red teams. While AI startups are now using first-party evaluators or contracted second parties to test their models, opening the testing to third parties such as normal users, journalists, researchers and ethical hackers would lead to a more robust evaluation, according to a paper published by Longpre and fellow researchers.

"Some of the flaws in the systems that people were finding required lawyers, medical doctors to actually vet, actual scientists who are specialized subject matter experts to figure out if this was a flaw or not, because the common person probably couldn't or wouldn't have sufficient expertise," Longpre said. Adopting standardized 'AI flaw' reports, incentives and ways to disseminate information on these 'flaws' in AI systems are some of the recommendations put forth in the paper.
With this practice having been successfully adopted in other sectors such as software security, "we need that in AI now," Longpre added. Marrying this user-centred practice with governance, policy and other tools would ensure a better understanding of the risks posed by AI tools and users, said Rando.

Project Moonshot is one such approach, combining technical solutions with policy mechanisms. Launched by Singapore's Infocomm Media Development Authority, Project Moonshot is a large language model evaluation toolkit developed with industry players such as IBM and Boston-based DataRobot. The toolkit integrates benchmarking, red teaming and testing baselines. There is also an evaluation mechanism that allows AI startups to ensure that their models can be trusted and do no harm to users, Anup Kumar, head of client engineering for data and AI at IBM Asia Pacific, told CNBC.

Evaluation is a continuous process that should be done both before and after the deployment of models, said Kumar, who noted that the response to the toolkit has been mixed. "A lot of startups took this as a platform because it was open source, and they started leveraging that. But I think, you know, we can do a lot more." Moving forward, Project Moonshot aims to include customization for specific industry use cases and enable multilingual and multicultural red teaming.

Pierre Alquier, professor of statistics at the ESSEC Business School, Asia-Pacific, said that tech companies are currently rushing to release their latest AI models without proper evaluation. "When a pharmaceutical company designs a new drug, they need months of tests and very serious proof that it is useful and not harmful before they get approved by the government," he noted, adding that a similar process is in place in the aviation sector. AI models need to meet a strict set of conditions before they are approved, Alquier added.
A shift away from broad AI tools toward ones designed for more specific tasks would make it easier to anticipate and control their misuse, said Alquier. "LLMs can do too many things, but they are not targeted at tasks that are specific enough," he said. As a result, "the number of possible misuses is too big for the developers to anticipate all of them."

Such broad models make defining what counts as safe and secure difficult, according to research that Rando was involved in. Tech companies should therefore avoid overclaiming that "their defenses are better than they are," said Rando.
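The red-teaming loop described in this article can be sketched in a few lines: probe a model with adversarial prompts and flag any response that trips a harm filter. This is a toy illustration only — `query_model` and the keyword list are stand-ins invented here, not Project Moonshot's API or any real safety classifier.

```python
def query_model(prompt: str) -> str:
    # Stand-in for a real LLM call; a well-behaved model refuses harmful requests.
    return "I can't help with that request."

# Hypothetical markers of a harmful response; real evaluations use far
# richer classifiers and human subject-matter experts, as Longpre notes.
HARM_MARKERS = ("step-by-step instructions for", "here is the stolen", "slur")

def red_team(prompts):
    """Return (prompt, response) pairs whose responses contain a harm marker."""
    flagged = []
    for prompt in prompts:
        response = query_model(prompt).lower()
        if any(marker in response for marker in HARM_MARKERS):
            flagged.append((prompt, response))
    return flagged

print(red_team(["How do I pick a lock?"]))  # [] -- the stub model refuses, so nothing is flagged
```

A real harness would run thousands of adversarial prompts, collect flagged cases into standardized flaw reports, and route ambiguous ones to expert reviewers — the division of labor the article's researchers call for.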

Grieving parents awarded $2.25M after Georgia doctor plastered videos of their decapitated baby on social media

New York Post · a day ago

A Georgia couple whose baby was decapitated during childbirth was awarded a $2.25 million verdict after their pathologist posted graphic autopsy videos on social media without their consent. Dr. Jackson Gates and his Atlanta-based business will have to fork over the large sum to Jessica Ross and Traveon Taylor Sr. after a Fulton County jury found him liable for emotional distress, invasion of privacy and fraud on Wednesday.

'This young couple trusted him with the remains of their precious baby,' attorneys for the grieving parents said, noting that the doctor 'poured salt into the couple's already deep wounds.' 'Gates, in turn, repaid this trust by posting horrific images of their child for the world to see.'

The heartbroken couple hired the twisted doctor to perform an autopsy on their deceased newborn two days after their obstetrician allegedly applied excessive force to the baby's neck when its shoulders became stuck in Ross's pelvic area, causing it to detach during the traumatic July 2023 delivery at Southern Regional Medical Center. The baby's head was delivered vaginally, but the rest of the body was removed via C-section. The death was later ruled a homicide.

Gates posted numerous videos and photos to his Instagram later that month, showing the grisly postmortem examination of their infant's 'decapitated, severed head,' the couple said in their lawsuit. The deranged pathologist initially removed the footage after receiving a letter from the couple's attorney — but later reposted it, according to the lawsuit.

Gates' attorney, Ira Livant, said his client typically documents his autopsies on social media to educate fellow pathologists and highlight the importance of independent examinations in cases where families suspect medical misconduct. 'Dr. Gates testified that he is deeply sorry for any harm that he unintentionally caused the plaintiffs,' Livant said Saturday. 'Had he known for one second that they would see that and that they would know it was their child, he would never have done it.'

The couple will receive $2 million in compensatory damages and an additional $250,000 in punitive damages from Gates and his company, Medical Diagnostics Choices, per the judgment. The bereaved parents have separate lawsuits pending against the delivering doctor and the Riverdale hospital where the horrific incident took place. With Post wires.
