Google Exclusive: How the Pixel Watch 3 got a life-saving feature the Apple Watch can't match

Tom's Guide · 2 days ago

Onboard safety features are a huge selling point of modern wearable devices. These days, the best smartwatches can automatically contact emergency responders and/or loved ones if you take a nasty fall or are involved in an accident, regardless of whether you're wearing the latest/greatest Garmin, the best Apple Watch, or the best smartwatch for Android.
While fall, crash and incident detection are practically par for the course on high-end, full-featured smartwatches, a new, more advanced safety feature surfaced last summer that's currently only available on the Google Pixel Watch 3. That's right: not even the Apple Watch Ultra 2 or the Samsung Galaxy Watch Ultra offers anything like Google's Loss of Pulse Detection tool.
Like fall detection, Loss of Pulse Detection is designed to help users out during an emergency — in this case, a medical one, when there may otherwise be no one around. Better yet, setting up Loss of Pulse Detection takes less than 2 minutes, which is not a lot of time considering it could be a literal lifesaver.
To find out more about Loss of Pulse Detection, including insights into the development, testing and FDA approval process, I had an exclusive interview with Edward Shi, the product manager on the Google Safety Team who spearheaded the project.
Our 30-minute conversation covered a lot, but it's Google's creative approach to testing the new safety feature — something that's crucial for avoiding false positives — that most fascinated me.
For one, Shi and his team had to figure out how to simulate a loss of pulse in a living subject for testing purposes, which is no easy feat. His team also worked with stunt actors to understand how a user might fall when experiencing a loss of pulse.
Beyond that, our conversation touched on whether older Pixel Watch devices could get Loss of Pulse Detection in the future, how long until the competition replicates the feature and what the Google Safety Team is up to next.
Edward Shi: I'm a product manager here on our Android and Pixel Safety Team. Our team works on safety products with the goal of giving users peace of mind in their day-to-day lives. In the past, these have included features such as car crash detection and fall detection.
For Loss of Pulse, specifically, I'm one of the main product managers on the project, working across the teams, with our clinicians, our engineers, etc., to bring Loss of Pulse Detection to the Pixel Watch 3.
Shi: It's really for any Pixel Watch 3 user who meets our eligibility criteria. It uses sensors on the Pixel Watch to detect a potential loss of pulse and prompt a call to emergency services with either the user's smartwatch or their connected phone, who can then intervene and potentially provide life-saving care.
A loss of pulse is a time-sensitive emergency, and it can be caused by a variety of factors, such as cardiac arrest, respiratory or circulatory failure, poisoning, etc. Many of these events are unwitnessed today. Around 50% of cardiac arrests, in particular, are unwitnessed, meaning that no one's around to help.
Shi: The two main sensors are the PPG sensor as well as the accelerometer. We use the PPG to detect pulselessness and the accelerometer to look at motion. So if a loss of pulse occurs, what we anticipate is that the user is unconscious, so there shouldn't be excessive motion.
So those two sensors combined help form the foundation of the algorithm.
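As a rough illustration, that two-sensor logic can be sketched in code. Everything below is a hypothetical stand-in, not Google's actual algorithm: the thresholds, window sizes, and the simple spectral test for a pulse are assumptions made purely to show how a pulse check and a motion check might be combined.

```python
import numpy as np

def pulse_present(ppg_window: np.ndarray, fs: float,
                  min_bpm: float = 30, max_bpm: float = 220) -> bool:
    """Look for a dominant peak in the cardiac frequency band of a PPG window.

    A real pulse produces a strong spectral peak between roughly 30 and
    220 beats per minute; the 4x-mean threshold here is an assumption.
    """
    ppg = ppg_window - ppg_window.mean()          # remove DC offset
    spectrum = np.abs(np.fft.rfft(ppg))
    freqs = np.fft.rfftfreq(len(ppg), d=1.0 / fs)
    band = (freqs >= min_bpm / 60) & (freqs <= max_bpm / 60)
    return spectrum[band].max() > 4 * spectrum[band].mean()

def is_still(accel_window: np.ndarray, var_threshold: float = 0.02) -> bool:
    """An unconscious wearer should show almost no wrist motion."""
    magnitude = np.linalg.norm(accel_window, axis=1)  # shape (N, 3) -> (N,)
    return magnitude.var() < var_threshold

def loss_of_pulse_suspected(ppg_window: np.ndarray,
                            accel_window: np.ndarray, fs: float) -> bool:
    # Both conditions must hold: no detectable pulse AND no motion.
    # Requiring both is what keeps accidental triggers (e.g. a loose
    # band during exercise) from firing the emergency flow.
    return (not pulse_present(ppg_window, fs)) and is_still(accel_window)
```

Requiring both signals to agree reflects the trade-off Shi describes: the motion check acts as a guard against PPG dropouts that aren't actually a medical emergency.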
Shi: There are a lot of similarities in the sense that all are emergency detection features. Essentially, these are for potential life-threatening emergencies in which a user may not be able to call for help themselves. In those events, we would need to be able to detect that emergency and then help connect [the user] with emergency services.
Much of the design and the principles remain the same. The algorithm is trying to balance both detecting that emergency, so in this case, a loss of pulse, while minimizing accidental triggers.
That's a really key part of all three of the features. We don't want to overly worry and bother the user with accidental triggers. Also, in particular, we don't want to burden [emergency] partners with accidental triggers in the case where a user doesn't need help.
Shi: Once a loss of pulse [or] a car crash [or] a fall is detected, the experience is designed to try to quickly connect the user over to emergency services. If, for whatever reason, the user doesn't actually need help, the user experience is [also] designed so that they can easily cancel any call.
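That detect-then-cancel experience behaves like a small state machine. The sketch below is a hypothetical illustration of such a flow; the state names, the check-in prompt, and the countdown length are assumptions for the example, not Google's implementation.

```python
from enum import Enum, auto

class State(Enum):
    MONITORING = auto()
    CHECK_IN = auto()    # "Do you need help?" prompt with haptics/audio
    CALLING = auto()     # hand off to emergency services
    CANCELLED = auto()   # user confirmed they're okay

class EscalationFlow:
    """Hypothetical emergency escalation flow (countdown length assumed)."""

    def __init__(self, countdown_seconds: int = 20):
        self.state = State.MONITORING
        self.remaining = countdown_seconds

    def on_loss_of_pulse_detected(self):
        if self.state is State.MONITORING:
            self.state = State.CHECK_IN

    def on_tick(self, seconds: int = 1):
        # No response to the check-in prompt: count down, then place the call.
        if self.state is State.CHECK_IN:
            self.remaining -= seconds
            if self.remaining <= 0:
                self.state = State.CALLING

    def on_user_cancel(self):
        # The user can easily back out if they don't actually need help.
        if self.state in (State.CHECK_IN, State.CALLING):
            self.state = State.CANCELLED
```

The key design property Shi highlights is that every path short of an unresponsive user keeps a cancel action one tap away.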
Shi: I don't know if I could say exactly how long, but definitely over a year and a half, and it can really vary. One particular [safety] feature isn't necessarily the same as the others.
They may look similar on the surface, like a fall or a car crash or a loss of pulse, but each of them has its own unique challenges in validating both the algorithm and developing the user experience.
And of course, with laws, we had to go through working with our regulatory partners and regulatory bodies in different regions [for Loss of Pulse Detection]. So there are different complexities for each of them, so the timeline can definitely vary.
Shi: It's a bit of both. So, it's definitely algorithmically tested. We also collect hundreds of thousands of real-world user data [samples] and run our algorithm over that data to take a look at how often it could be triggered.
Internally, we "dogfood" [the feature]. And then we ran clinical studies. All of that is run to measure how often we're seeing accidental triggers in particular.
In addition to honing the algorithm and user experience design, we run user research studies to walk [users through the] 'flow,' both during onboarding, as well as when an actual loss of pulse is detected.
[We're] seeing that users understand what's happening and are able to cancel out of that flow if they don't need help. So, it's both algorithmic as well as user research.
Shi: It is pretty difficult, and it took a lot of creativity from our research scientists, in particular. Basically, using a pneumatic tourniquet to cut off blood flow in an arm, [we were able] to simulate temporary pulselessness.
We were able to do that and then put our watches on the user at the same time to ensure that our algorithm was detecting that [loss of] pulse when it occurred.
We actually worked with stunt actors to induce pulselessness and simulate a fall within a reasonable timeframe to see if it was still able to detect a loss of pulse in those scenarios.
Shi: We're very fortunate at Google to have great team members who are familiar with the process and are regulatory experts. Receiving U.S. FDA clearance does go through a rigorous process to ensure quality and understandability of the products that are coming through.
So really, it's taking a look at the U.S. FDA established regulatory frameworks and regulations, knowing what we have to conduct in terms of necessary performance testing, what we have to show to prove that the feature is doing what it [says], and in particular, that it's understandable to users who choose to use the product.
Shi: The biggest thing that we inform users about, essentially during onboarding, is that it's only meant to detect an immediate loss of pulse. So it's not meant to diagnose or treat any medical conditions, and it's not meant to be a feature that gives you a pre-warning of any health condition.
That's a really important distinction that we do try to make as clear as possible within the product itself, so that you don't change any health regimens, etc., and you don't change anything that you've heard from medical professionals. As always, go to your healthcare professional to discuss all of your well-being, etc., and what's best for you.
Shi: It's something we can't go into detail about at the moment. We have to look at both the hardware that's available on the older Pixel Watches and see if it's possible.
Also, we have to ensure that there is hardware equivalency on each of the different devices. So we have to make sure on the older Pixel devices, if we were to do [Loss of Pulse Detection], that it still performs as expected within the guidelines that we set.
We would like to make [the feature] available as widely as we possibly can, so that's what we're going to try to do.
Shi: Our top priority when we released this feature was to make sure that it maintains its quality and is able to do what it says it does within the guiding principles that we have. What we anticipate is that, as new Pixel Watches are released, it will be available on those devices as well.
Of course, it's going to be a hardware-by-hardware validation. We would like to make it available as widely as we possibly can, so that's what we're going to try to do.
Shi: I think this is definitely speculation and subjective, but I think in the tech world, people are always looking at other competitors and trying to close the gap or match different features. So I wouldn't be surprised if that's something that people did.
In some ways, I think for our team, that this would be a good thing — with safety in particular — if other competitors started trying to copy features. I think as long as everyone maintains high quality, of course, then it's not necessarily a bad thing.
But yes, I think it's fair to assume that people are looking at it and will attempt to copy it.
Shi: We're always looking at helping users get connected with help if they aren't able to themselves. We know emergencies, hopefully, are a bit of a rare event in users' daily lives, but there could be other scenarios where users may feel unsafe.
So, one of our existing features is a Safety Check. When users are going out for a run or going out for a hike and they want that extra peace of mind, they can start a Safety Check, and we can check in with them, and then if they don't respond, we can automatically share their location and reason and context with their emergency contacts.
That's an existing feature, and also things that we're thinking about on the safety side. We're looking across the spectrum from emergencies to daily use cases of how we can help, how we can deliver a little bit more peace of mind in your daily life.

