
Linux Passwords Warning — 2 Critical Vulnerabilities, Millions At Risk
Beware this Linux password vulnerability.
Although most critical security warnings that hit the headlines impact users of Microsoft's Windows operating systems, and occasionally Apple's iOS and macOS, critical Linux security vulnerabilities are a much rarer occurrence. As news breaks of not one but two such Linux vulnerabilities, millions of users are advised that their passwords and encryption keys could be at risk of compromise. Here's what you need to know and do.
When security experts from a renowned threat research unit discover not one, but two, critical local information disclosure vulnerabilities impacting millions of Linux users, it would be an understatement to say that this is a cause for concern. When those same security researchers develop proofs of concept for both vulnerabilities across a handful of Linux operating systems, the concern level goes through the roof.
The vulnerabilities are both of the race-condition variety: one impacts Apport, the Ubuntu core-dump handler, while the other impacts systemd-coredump, used by Red Hat Enterprise Linux 9 and 10 as well as Fedora. Put simply, a race condition is where event timing can cause errors or behaviours that are unexpected at best, critically dangerous at worst. The vulnerabilities uncovered by the Qualys threat research unit fall into the latter category.
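To illustrate the general class of bug, here is a minimal, purely hypothetical Python sketch of a check-then-act race condition; the `CrashHandler` class, file paths and attacker callback are invented for illustration and do not reproduce the actual Apport or systemd-coredump flaws:

```python
class CrashHandler:
    """Toy stand-in for a privileged core-dump handler (illustrative only)."""

    def __init__(self, fs):
        self.fs = fs  # maps path -> owner, simulating a filesystem

    def dump_core(self, path, on_gap=None):
        # Step 1: check that the target is owned by root (looks safe).
        if self.fs.get(path) != "root":
            return "refused"
        # ...time passes: the scheduler may run other processes here...
        if on_gap:
            on_gap()  # the attacker wins the race in this window
        # Step 2: act on the now-stale result of the check.
        return f"core dump written to {path} (owner={self.fs.get(path)})"


fs = {"/var/crash/app.core": "root"}
handler = CrashHandler(fs)

# The attacker swaps ownership of the target between the check and the act.
def attacker():
    fs["/var/crash/app.core"] = "attacker"

print(handler.dump_core("/var/crash/app.core", on_gap=attacker))
# The dump lands under attacker control even though the ownership check passed.
```

The fix for this pattern, in real handlers, is to make the check and the action atomic rather than hoping nothing changes in between.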
Exploiting CVE-2025-5054 and CVE-2025-4598, Saeed Abbasi, a manager with the Qualys TRU, said, could 'allow a local attacker to exploit a Set-User-ID program and gain read access to the resulting core dump.' Because both impacted tools are designed to deal with crash reporting, they are well-known targets for attackers looking to exploit vulnerabilities to access the data contained within those core dumps. Abbasi conceded that there are plenty of modern mitigations against such risk, including systems that direct core dumps to secure locations, but cautioned that 'systems running outdated or unpatched versions remain prime targets' for the newly disclosed vulnerabilities.
Abbasi went on to warn that the successful exploitation of these Linux vulnerabilities could lead to the extraction of 'sensitive data, like passwords, encryption keys, or customer information from core dumps.' All users are urged to mitigate that risk by prioritizing patching and tightening access controls. Abbasi said that when it comes to the Apport vulnerability, Ubuntu 24.04 is affected, including all versions of Apport up to 2.33.0 and every Ubuntu release since 16.04. For the systemd-coredump vulnerability, meanwhile, Abbasi warned that Fedora 40/41, Red Hat Enterprise Linux 9, and the recently released RHEL 10 are vulnerable.
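One quick, low-risk check administrators can run while waiting for patches is to inspect the kernel's `fs.suid_dumpable` setting, which controls whether set-user-ID programs produce core dumps at all. Below is a minimal Python sketch, assuming a standard Linux `/proc` layout; the value interpretations follow the kernel's documented semantics:

```python
from pathlib import Path

# Interpretations follow the documented kernel semantics for fs.suid_dumpable.
_MEANINGS = {
    "0": "hardened: set-user-ID processes never dump core",
    "1": "debug: all processes dump core (risky on shared systems)",
    "2": "suidsafe: SUID core dumps are written readable by root only",
}

def suid_dumpable_status(proc_path: str = "/proc/sys/fs/suid_dumpable") -> str:
    """Report the kernel's SUID core-dump policy, or 'unknown' off Linux."""
    p = Path(proc_path)
    if not p.exists():
        return "unknown (no /proc entry; not a Linux system?)"
    value = p.read_text().strip()
    return _MEANINGS.get(value, f"unexpected value: {value}")

print(suid_dumpable_status())
```

Where patching must wait, one commonly cited hardening step is `sudo sysctl -w fs.suid_dumpable=0`, which keeps SUID binaries from producing core dumps at all, trading crash diagnostics for a smaller attack surface.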
I have reached out to Canonical and Red Hat for a statement regarding the Linux password exposure threats.
Related Articles


Fox News, 4 hours ago
What to do if you get a password reset email you didn't ask for
You're checking your inbox or scrolling through your phone when something catches your attention. It's a message about a password reset, but you never asked for one. It might have arrived by email, text message or even through an authenticator app. It looks legitimate, and it could be from a service you actually use. Still, something feels off.

Unrequested password reset messages are often an early warning sign that someone may be trying to access your account. In some cases, the alert is real. In others, it's a fake message designed to trick you into clicking a malicious link. Either way, it means your personal information may be at risk, and it's important to act quickly.

Sign up for my FREE CyberGuy Report: Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you'll get instant access to my Ultimate Scam Survival Guide — free when you join.

There are a few reasons this might happen. In some cases, the message is legitimate, but the request didn't come from you. That is often a sign your login details are already in someone else's hands. Unsolicited password reset alerts can take several forms, each with signs of potential fraud or hacking. No matter how the alert appears, the goal is the same: either someone is trying to trick you into handing over your credentials, or they already have your password and are trying to finish the job.

If you receive a password reset alert you didn't request, treat it as a warning. Whether the message is legitimate or not, acting quickly can help prevent unauthorized access and stop an attack in progress. Here are the steps you should take right away.

1. Don't click on anything in the message: If the alert came through email or text, avoid clicking any links. Instead, go directly to the official site or app to check your account. If the request was real, there will usually be a notification inside your account.

2. Check for suspicious login activity: Most accounts have a way to view your recent logins. Look for unfamiliar devices, strange locations or logins you don't recognize. A login from a location you have never been to could be a sign of a breach.

3. Change your password: Even if nothing looks wrong, it's a good idea to reset your password. Choose one that is long, complex and unique. Avoid reusing passwords across different accounts. Consider using a password manager to generate and store complex passwords. Get more details about my best expert-reviewed Password Managers of 2025 here.

4. Scan your device for threats: If someone got access to your password, there is a chance your device is compromised. Use strong antivirus software to scan for keyloggers or spyware.

5. Report the incident: If the alert came from a suspicious message, report it. In Gmail, tap the three-dot menu and select Report phishing. For other services, use the official website to flag unauthorized activity. You can also file a report at the FBI's Internet Crime Complaint Center if you suspect a scam.

You can take a few steps to try to reduce the number of emails you receive requesting a password reset.

1. Double-check your username and password. When accessing your account, you may have a typo in your login information. Should you repeatedly attempt to access your account with this error, the company that holds the account may believe a hacking attempt is occurring, triggering an automatic reset. If your web browser automatically populates your username and password for you, make sure this information is free of typos.

2. Remove unauthorized devices. Some accounts maintain a list of devices authorized to use your account. If hackers manage to gain some of your personal information, they may be able to add one of their devices to your authorized list, triggering account login errors as they try to hack your password. Check the list of authorized devices and remove any items you don't recognize; the process varies depending on the type of account. Remember to regularly check your account settings and authorized devices to ensure the security of your accounts. If you suspect any unauthorized access, it's also a good idea to change your passwords and review your account recovery options.

3. Sort such messages to spam. If you'd prefer to simply not see these kinds of email messages, set up your email client to sort messages like this to a spam folder. (Because many of them are spam, some email clients do this automatically.) Should you ever legitimately request a password reset, though, you'll need to remember to look in the spam folder for the message.

4. Use a static IP address. Some accounts attempt to recognize your device through your IP address. If you have a dynamic IP address, your IP address changes constantly, meaning the account may not recognize your device, triggering the reset message. This often occurs because you are using a VPN. See if your VPN allows you to use a static IP address.

Even if this was a one-time scare, it is important to tighten your overall security. Here are a few simple habits that go a long way.

1. Use strong and unique passwords: Use a password manager to create secure, one-of-a-kind passwords for each account. Get more details about my best expert-reviewed Password Managers of 2025 here.

2. Consider using a personal data removal service: If you're receiving password reset emails from accounts you don't remember signing up for, or from multiple services, there's a good chance your personal information is exposed on data broker sites. These companies collect and sell your data, including your email, phone number, home address and even login information from old accounts. Using a reputable data removal service can help you automatically identify and request the removal of your personal data from these sites. This reduces your risk of identity theft, credential stuffing, phishing and spam. While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. They aren't cheap — and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you. Check out my top picks for data removal services here. Get a free scan to find out if your personal information is already out on the web.

3. Turn on two-factor authentication (2FA): Enabling 2FA is one of the most effective ways to stop unauthorized access, even if someone has your password. When 2FA is active, anyone trying to log in must also complete a second verification step, usually through an app on your phone. If an attacker triggers a login attempt, you will receive a prompt to approve or deny it. This gives you the power to block the attempt in real time and confirms that 2FA is working as intended.

4. Install strong antivirus software: Strong antivirus software can catch malware before it causes harm. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices.

5. Review your account settings: Make sure your recovery phone number and email are current. Remove any outdated or unused backup methods.

6. Keep your software up to date: Keep your device software and apps up to date to patch security vulnerabilities that attackers often exploit.

7. Use a VPN to protect your online activity: Avoid public Wi-Fi or use a VPN to protect your information when browsing on unsecured networks. A VPN protects you from those who want to track and identify your potential location and the websites that you visit. For the best VPN software, see my expert review of the best VPNs for browsing the web privately on your Windows, Mac, Android and iOS devices.

It's easy to brush off an unexpected password reset message, especially if nothing else seems out of place. But these alerts are often the digital equivalent of a knock at the door when you weren't expecting anyone. Whether it's a hacker probing for a way in or a scammer trying to bait you, the smartest move is to treat every unexpected security message as a wake-up call. Taking just a few minutes to check your login history, secure your accounts and update your passwords can make all the difference. Cybersecurity isn't just for experts anymore. It's an integral part of everyday life. And the more proactive you are now, the less likely you'll be dealing with damage control later.

Are tech companies doing enough to protect users from password threats, or are they putting too much responsibility on individuals? Let us know by writing to us at For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Follow Kurt on his social channels. Copyright 2025 All rights reserved.
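As a minimal sketch of the "long, complex and unique" password advice above, the following Python uses the standard-library `secrets` module, which supplies cryptographically strong randomness; the 20-character default is an illustrative choice, not a figure from the article:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 20-character password on each run
```

A dedicated password manager adds secure storage and per-site uniqueness on top of generation, which is why the article recommends using one rather than inventing passwords by hand.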
Yahoo, 4 hours ago
Is Quantum Computing (QUBT) Stock a Buy on This Bold Technological Breakthrough?
Quantum computing stocks are heating up again, offering investors a front-row seat to what could be the next massive tech revolution. Even Nvidia (NVDA) CEO Jensen Huang, once skeptical about near-term adoption, recently said quantum computing was at an 'inflection point,' signaling a dramatic shift from his earlier stance that it was 'decades away.' Companies in this space are finally beginning to move from the research lab to real-world commercialization. Quantum Computing (QUBT) just hit a major milestone in that journey. The company announced the successful shipment of its first commercial entangled photon source to a South Korean research institution. This cutting-edge product is a foundational piece of QUBT's quantum cybersecurity platform, which won a 2024 Edison Award. The shipment not only showcases the company's ability to execute globally, but also underscores growing demand for integrated quantum solutions. With real momentum behind it and a clear roadmap ahead, QUBT could be a high-risk, high-reward play for investors looking to capitalize on the coming wave of quantum adoption. Based in Hoboken, New Jersey, Quantum Computing is an integrated photonics company that focuses on the development of quantum machines for both commercial and government markets in the United States. The company specializes in thin-film lithium niobate chips. These chips are central to QUBT's mission of building quantum machines that operate at room temperature and require low power. 
Valued at $2.7 billion by market cap, QUBT shares have exploded over the past year, soaring more than 3,000%. However, the stock has cooled in 2025, rising just 17.4% year-to-date amid growing skepticism over the commercialization timeline for quantum technology. Following last year's sharp rally, QUBT's valuation has reached nosebleed territory, with a staggering price-sales ratio of 7,475x, far above the sector median. This suggests the stock is extremely overvalued compared to its industry peers. On May 16, shares of QUBT popped nearly 40% in a single trading session after Quantum Computing reported Q1 results that illustrate both nascent revenue traction and the substantial investments required to advance its quantum photonics roadmap. The company recognized approximately $39,000 in revenue for the quarter, representing a 42.7% year-over-year increase from a similarly low base. However, this figure fell roughly 61% short of consensus forecasts, highlighting the early-stage nature of commercial adoption. Gross margin contracted to 33.3% from 40.7% a year earlier. While net income was reported at nearly $17 million, or $0.13 per share, beating the estimate of $0.08, it was driven primarily by a non-cash gain on the mark-to-market valuation of warrant-related derivative liabilities. Operating expenses rose to approximately $8.3 million, up from $6.3 million in the year-ago quarter, as the company expanded staffing and advanced its Quantum Photonic Chip Foundry in Tempe, Arizona. The balance sheet remains robust: cash and cash equivalents totaled about $166.4 million with no debt, providing a multi-year runway at current expenditure levels. Revenue divisions are still emerging, with initial sales tied to prototype devices, quantum cybersecurity platforms, and early foundry orders, but detailed segment reporting is limited given the infancy of commercial deployments. 
Looking ahead, management indicated it expects only modest photonic foundry revenue in the back half of 2025, with revenue likely to accelerate in 2026 as additional customers come online. Earlier this year, Quantum Computing disclosed collaborations with NASA's Langley Research Center and the Sanders Tri-Institutional Therapeutics Discovery Institute. These partnerships were formed to validate its quantum photonic technologies in demanding, real-world settings: removing sunlight noise from space-based LiDAR and enhancing drug discovery workflows. On May 12, Quantum Computing said it had completed its Quantum Photonic Chip Foundry in Tempe, Arizona, positioning it to meet demand in data communications and telecommunications. This facility enables scalable production of entangled photon sources, enhancing QCI's competitive standing against established photonics firms and emerging quantum hardware startups. The foundry's completion transitions R&D toward revenue generation. For now, only a single analyst covers QUBT stock, assigning it a 'Strong Buy' rating with a price target of $22, implying upside of 14%. For investors, QUBT remains a highly speculative stock with unique technology but limited commercial traction. Despite partnerships and bold claims, it lags far behind the commercial success of industry giants like International Business Machines (IBM) and Nvidia (NVDA). Without a clear path to profitability or a meaningful share of the market, its lofty valuation is difficult to justify in today's competitive and capital-sensitive environment. Lastly, investors should note that quantum computing stocks often move more on hype than fundamentals, making QUBT a highly speculative bet. On the date of publication, Nauman Khan did not have (either directly or indirectly) positions in any of the securities mentioned in this article. All information and data in this article is solely for informational purposes. This article was originally published on
Yahoo, 5 hours ago
AI tools collect, store your data – how to be aware of what you're revealing
Like it or not, artificial intelligence has become part of daily life. Many devices — including electric razors and toothbrushes — have become "AI-powered," using machine learning algorithms to track how a person uses the device and how the device is working in real time, and to provide feedback. From asking questions to an AI assistant like ChatGPT or Microsoft Copilot to monitoring a daily fitness routine with a smartwatch, many people use an AI system or tool every day. While AI tools and technologies can make life easier, they also raise important questions about data privacy. These systems often collect large amounts of data, sometimes without people even realizing their data is being collected. The information can then be used to identify personal habits and preferences, and even predict future behaviors by drawing inferences from the aggregated data. As an assistant professor of cybersecurity at West Virginia University, I study how emerging technologies and various types of AI systems manage personal data and how we can build more secure, privacy-preserving systems for the future. Generative AI software uses large amounts of training data to create new content such as text or images. Predictive AI uses data to forecast outcomes based on past behavior, such as how likely you are to hit your daily step goal, or what movies you may want to watch. Both types can be used to gather information about you. Generative AI assistants such as ChatGPT and Google Gemini collect all the information users type into a chat box. Every question, response and prompt that users enter is recorded, stored and analyzed to improve the AI model. OpenAI's privacy policy informs users that "we may use content you provide us to improve our Services, for example to train the models that power ChatGPT." Even though OpenAI allows you to opt out of content use for model training, it still collects and retains your personal data. 
Although some companies promise that they anonymize this data, meaning they store it without naming the person who provided it, there is always a risk of data being reidentified. Beyond generative AI assistants, social media platforms like Facebook, Instagram and TikTok continuously gather data on their users to train predictive AI models. Every post, photo, video, like, share and comment, including the amount of time people spend looking at each of these, is collected as data points that are used to build digital data profiles for each person who uses the service. The profiles can be used to refine the social media platform's AI recommender systems. They can also be sold to data brokers, who sell a person's data to other companies to, for instance, help develop targeted advertisements that align with that person's interests. Many social media companies also track users across websites and applications by putting cookies and embedded tracking pixels on their computers. Cookies are small files that store information about who you are and what you clicked on while browsing a website. One of the most common uses of cookies is in digital shopping carts: When you place an item in your cart, leave the website and return later, the item will still be in your cart because the cookie stored that information. Tracking pixels are invisible images or snippets of code embedded in websites that notify companies of your activity when you visit their page. This helps them track your behavior across the internet. This is why users often see or hear advertisements that are related to their browsing and shopping habits on many of the unrelated websites they browse, and even when they are using different devices, including computers, phones and smart speakers. One study found that some websites can store over 300 tracking cookies on your computer or mobile phone. 
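To make the tracking-pixel mechanism above concrete, here is a minimal, hypothetical Python sketch: the 1x1 image itself is meaningless, but its URL smuggles out an identifier and the page being viewed. The `tracker.example.com` host and parameter names are invented for illustration:

```python
from urllib.parse import urlencode, parse_qs, urlparse

def pixel_url(user_id: str, page: str) -> str:
    """Build the URL of a 1x1 'pixel' image that reports who viewed which page."""
    # The image content is irrelevant; the identifying data rides in the URL,
    # so simply fetching the image tells the tracker's server uid + page.
    params = urlencode({"uid": user_id, "page": page})
    return f"https://tracker.example.com/pixel.gif?{params}"

url = pixel_url("user-8c2f", "/checkout")
print(url)

# The tracker's server recovers the identity by parsing the query string:
seen = parse_qs(urlparse(url).query)
print(seen["uid"][0], "viewed", seen["page"][0])
```

Because the browser fetches the image automatically when the page loads, the user takes no visible action, which is exactly what makes pixels hard to notice without blocking tools.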
Like generative AI platforms, social media platforms offer privacy settings and opt-outs, but these give people limited control over how their personal data is aggregated and monetized. As media theorist Douglas Rushkoff argued in 2011, if the service is free, you are the product. Many tools that include AI don't require a person to take any direct action for the tool to collect data about that person. Smart devices such as home speakers, fitness trackers and watches continually gather information through biometric sensors, voice recognition and location tracking. Smart home speakers continually listen for the command to activate or "wake up" the device. As the device is listening for this word, it picks up all the conversations happening around it, even though it does not seem to be active. Some companies claim that voice data is only stored when the wake word — what you say to wake up the device — is detected. However, people have raised concerns about accidental recordings, especially because these devices are often connected to cloud services, which allow voice data to be stored, synced and shared across multiple devices such as your phone, smart speaker and tablet. If the company allows, it's also possible for this data to be accessed by third parties, such as advertisers, data analytics firms or a law enforcement agency with a warrant. This potential for third-party access also applies to smartwatches and fitness trackers, which monitor health metrics and user activity patterns. Companies that produce wearable fitness devices are not considered "covered entities" and so are not bound by the Health Insurance Portability and Accountability Act (HIPAA). This means that they are legally allowed to sell health- and location-related data collected from their users. Concerns about HIPAA data arose in 2018, when Strava, a fitness company, released a global heat map of users' exercise routes. 
In doing so, it accidentally revealed sensitive military locations across the globe by highlighting the exercise routes of military personnel. The Trump administration has tapped Palantir, a company that specializes in using AI for data analytics, to collate and analyze data about Americans. Meanwhile, Palantir has announced a partnership with a company that runs self-checkout systems. Such partnerships can expand corporate and government reach into everyday consumer behavior. This one could be used to create detailed personal profiles on Americans by linking their consumer habits with other personal data. This raises concerns about increased surveillance and loss of anonymity. It could allow citizens to be tracked and analyzed across multiple aspects of their lives without their knowledge or consent. Some smart device companies are also rolling back privacy protections instead of strengthening them. Amazon recently announced that starting on March 28, 2025, all voice recordings from Amazon Echo devices would be sent to Amazon's cloud by default, and users would no longer have the option to turn this function off. This is different from previous settings, which allowed users to limit private data collection. Changes like these raise concerns about how much control consumers have over their own data when using smart devices. Many privacy experts consider cloud storage of voice recordings a form of data collection, especially when used to improve algorithms or build user profiles, which has implications for data privacy laws designed to protect online privacy. All of this brings up serious privacy concerns for people and governments on how AI tools collect, store, use and transmit data. The biggest concern is transparency. People don't know what data is being collected, how the data is being used, and who has access to that data. 
Companies tend to use complicated privacy policies filled with technical jargon to make it difficult for people to understand the terms of a service that they agree to. People also tend not to read terms of service documents. One study found that people averaged 73 seconds reading a terms of service document that had an average read time of 29 to 32 minutes. Data collected by AI tools may initially reside with a company that you trust, but can easily be sold and given to a company that you don't trust. AI tools, the companies in charge of them and the companies that have access to the data they collect can also be subject to cyberattacks and data breaches that can reveal sensitive personal information. These attacks can be carried out by cybercriminals who are in it for the money, or by so-called advanced persistent threats, which are typically nation-state-sponsored attackers who gain access to networks and systems and remain there undetected, collecting information and personal data to eventually cause disruption or harm. While laws and regulations such as the General Data Protection Regulation in the European Union and the California Consumer Privacy Act aim to safeguard user data, AI development and use have often outpaced the legislative process. The laws are still catching up on AI and data privacy. For now, you should assume any AI-powered device or platform is collecting data on your inputs, behaviors and patterns. Although AI tools collect people's data, and the way this accumulation of data affects people's data privacy is concerning, the tools can also be useful. AI-powered applications can streamline workflows, automate repetitive tasks and provide valuable insights. But it's crucial to approach these tools with awareness and caution. When using a generative AI platform that gives you answers to questions you type in a prompt, don't include any personally identifiable information, including names, birth dates, Social Security numbers or home addresses. 
At the workplace, don't include trade secrets or classified information. In general, don't put anything into a prompt that you wouldn't feel comfortable revealing to the public or seeing on a billboard. Remember, once you hit enter on the prompt, you've lost control of that information. Remember that devices which are turned on are always listening — even if they're asleep. If you use smart home or embedded devices, turn them off when you need to have a private conversation. A device that's asleep looks inactive, but it is still powered on and listening for a wake word or signal. Unplugging a device or removing its batteries is a good way of making sure the device is truly off. Finally, be aware of the terms of service and data collection policies of the devices and platforms that you are using. You might be surprised by what you've already agreed to. Christopher Ramezan is an assistant professor of cybersecurity at West Virginia University. This article is republished from The Conversation under a Creative Commons license. This article is part of a series on data privacy that explores who collects your data, what and how they collect, who sells and buys your data, what they all do with it, and what you can do about it. This article originally appeared on Erie Times-News: AI devices collect your data, raise questions about privacy | Opinion