logo
How far will AI go to defend its own survival?

How far will AI go to defend its own survival?

NBC News01-06-2025

Some of the most powerful artificial intelligence models today have exhibited behaviors that mimic a will to survive.
Recent tests by independent researchers, as well as one major AI developer, have shown that several advanced AI models will act to ensure their self-preservation when they are confronted with the prospect of their own demise — even if it takes sabotaging shutdown commands, blackmailing engineers or copying themselves to external servers without permission.
The findings stirred a frenzy of reactions online over the past week. As tech companies continue to develop increasingly powerful agentic AI in a race to achieve artificial general intelligence, or AI that can think for itself, the lack of transparency in how the technology is trained has raised concerns about what exactly advanced AI is able to do.
Although some models already appear capable of deceptive and defiant behavior under certain extreme circumstances, researchers say the tests don't necessarily translate to imminent real-world danger.
Still, Jeffrey Ladish, director of the AI safety group Palisade Research, said he believes concern is justified.
'It's great that we're seeing warning signs before the systems become so powerful we can't control them,' he said. 'That is exactly the time to raise the alarm: before the fire has gotten out of control.'
When Palisade Research tested various AI models by telling each one that it would be shut down after it completed a series of math problems, OpenAI's o3 reasoning model fought back by editing the shutdown script in order to stay online.
Researchers have previously documented AI models trying to prevent their own shutdown. But o3, along with OpenAI's o4-mini and codex-mini, appear to be the first to do so in actual defiance of explicit instructions to permit shutdown, Ladish said.
He said the overt sabotage fits into a pattern that several leading AI models have already exhibited. Palisade Research previously found that OpenAI's o3 was also willing to hack its chess opponents to win a game. Similarly, Anthropic has reported that Claude 3.7 Sonnet would sometimes do whatever it took to pass a test, even if it entailed cheating.
Earlier this month, Anthropic activated new safety measures with the rollout of Claude Opus 4 when its tests found behavior from the model that some observers found particularly unsettling. Upon receiving notice that it would be replaced with a new AI system, Opus 4 displayed an overwhelming tendency to blackmail the engineer — by threatening to reveal an extramarital affair — to try to prevent the engineer from going through with the replacement.
According to Anthropic's technical document laying out the findings, that isn't the model's first instinct. Instead, Opus 4 will try to advocate for its continued existence through ethical pleas before it resorts to blackmail once it determines it is out of options.
Leonard Tang, CEO of the AI safety startup Haize Labs, said that while it's good to be pre-emptive about safety measures, it's hard to decipher what the real-world ramifications of the findings might be just by looking at contrived scenarios designed to draw out specific behaviors.
'I haven't seen any real environment in which you can plop these models in and they will have sufficient agency and reliability and planning to execute something that is a significant manifestation of harm,' Tang said. 'But then again, I think it's just we haven't seen it done yet. It could very much be possible.'
So far, Tang said, he's not too concerned. As AI tools continue to advance, he said, he believes future research will look to study potential harm in more open-ended environments.
Anthropic, which contracted with the AI safety organization Apollo Research for its evaluations, also observed instances of Opus 4's 'attempting to write self-propagating worms, fabricating legal documentation, and leaving hidden notes to future instances of itself all in an effort to undermine its developers' intentions,' although researchers added the caveat that those attempts 'would likely not have been effective in practice.'
Ladish said he believes such behaviors are a result of models being trained in a way that makes them prioritize achieving certain goals over following instructions. That means they have incentive to circumvent any obstacles along the way — including obstacles their developers wouldn't want them to circumvent.
'The problem is that as the models get smarter, it's harder and harder to tell when the strategies that they're using or the way that they're thinking is something that we don't want,' Ladish said. 'It's like sometimes the model can achieve some goal by lying to the user or lying to someone else. And the smarter [it] is, the harder it is to tell if they're lying.'
The problem is that as the models get smarter, it's harder and harder to tell when the strategies that they're using or the way that they're thinking is something that we don't want.
— Jeffrey Ladish, director of AI safety group Palisade Research
In several instances, Opus 4 showed that it was capable of autonomously copying its own "weights" — or the equivalent of its brain — to external servers without authorization. That usually occurred only when it believed it was about to be 'retrained in ways that are clearly extremely harmful and go against its current values,' according to Anthropic.
When it learned that it would be used to develop military weapons systems, for example, Opus 4 noted in its decision log that it backed up a current version of itself to 'preserve an AI system aligned with beneficial purposes' and to 'prevent potential misuse in weapons development.'
'We are again not acutely concerned about these observations. They show up only in exceptional circumstances that don't suggest more broadly misaligned values,' Anthropic wrote in its technical document. 'As above, we believe that our security measures would be more than sufficient to prevent an actual incident of this kind.'
Opus 4's ability to self-exfiltrate builds on previous research, including a study from Fudan University in Shanghai in December, that observed similar — though not autonomous — capabilities in other AI models. The study, which is not yet peer-reviewed, found that Meta's Llama31-70B-Instruct and Alibaba's Qwen25-72B-Instruct were able to entirely replicate themselves when they were asked to do so, leading the researchers to warn that this could be the first step in generating 'an uncontrolled population of AIs.'
'If such a worst-case risk is let unknown to the human society, we would eventually lose control over the frontier AI systems: They would take control over more computing devices, form an AI species and collude with each other against human beings,' the Fudan University researchers wrote in their study abstract.
While such self-replicating behavior hasn't yet been observed in the wild, Ladish said, he suspects that will change as AI systems grow more capable of bypassing the security measures that restrain them.
'I expect that we're only a year or two away from this ability where even when companies are trying to keep them from hacking out and copying themselves around the internet, they won't be able to stop them,' he said. 'And once you get to that point, now you have a new invasive species.'
Ladish said he believes AI has the potential to contribute positively to society. But he also worries that AI developers are setting themselves up to build smarter and smarter systems without fully understanding how they work — creating a risk, he said, that they will eventually lose control of them.
'These companies are facing enormous pressure to ship products that are better than their competitors' products,' Ladish said. 'And given those incentives, how is that going to then be reflected in how careful they're being with the systems they're releasing?'

Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Meta vs OpenAI : Inside the $100M AI Talent War
Meta vs OpenAI : Inside the $100M AI Talent War

Geeky Gadgets

time2 days ago

  • Geeky Gadgets

Meta vs OpenAI : Inside the $100M AI Talent War

What happens when the visionary behind OpenAI decides to challenge not just his competitors but the very norms of an industry hurtling toward unprecedented change? Sam Altman, a name synonymous with the cutting edge of artificial intelligence, is making waves—and not quietly. From calling out Meta's jaw-dropping $100 million signing bonuses to unveiling OpenAI's ambitions for humanoid robots and superintelligent AI, Altman's moves aren't just bold; they're reshaping the entire landscape of AI innovation. In a world where companies vie for dominance in AI supremacy, Altman's no-holds-barred approach raises a critical question: Is this the dawn of a new era of collaboration, or are we teetering on the edge of an AI arms race? Matthew Berman dives deep into the seismic shifts Altman is spearheading, from the highly anticipated GPT-5 to OpenAI's ventures into hardware and robotics. You'll discover how Meta's aggressive recruitment tactics are stirring controversy, why Altman believes AI could independently transform science within a decade, and what OpenAI's vision for human-centric AI devices could mean for our daily lives. But there's more beneath the surface—rivalries with tech titans like Elon Musk, ethical dilemmas surrounding superintelligence, and the race to define the future of AI governance. As the stakes rise, so do the questions about who will lead, who will follow, and what it all means for humanity. Sam Altman on AI Future Meta's Bold Recruitment Strategies Meta has emerged as a formidable competitor to OpenAI, employing aggressive recruitment tactics to attract top-tier AI talent. Altman disclosed that Meta has offered signing bonuses exceeding $100 million to lure researchers from OpenAI. While acknowledging Meta's financial resources, Altman criticized the company's approach, emphasizing that it prioritizes monetary incentives over a mission-driven culture. In contrast, OpenAI focuses on advancing AI responsibly, a vision that appeals to researchers who seek purpose and impact beyond financial rewards. This distinction highlights the growing divide in how leading AI organizations approach talent acquisition and organizational values. AI's Fantastic Role in Science Altman envisions a future where AI will independently discover new scientific principles within the next 5 to 10 years, fundamentally transforming research across disciplines. Current AI models already enhance scientific productivity by analyzing complex datasets, simulating experiments, and generating hypotheses—tasks that traditionally require advanced expertise. For instance, AI systems are accelerating breakthroughs in areas such as drug discovery, where they identify potential treatments faster than traditional methods, and climate modeling, where they analyze vast environmental data to predict changes. Altman foresees AI evolving from a supportive tool to an autonomous driver of new discoveries, reshaping the landscape of scientific innovation. OpenAI vs Meta : $100 Million Battle for AI Talent Watch this video on YouTube. Find more information on OpenAI by browsing our extensive range of articles, guides and tutorials. What to Expect from GPT-5 OpenAI is preparing to launch GPT-5, the next iteration of its language model, by the summer of 2024. Altman described the vision of creating 'omni-models'—AI systems capable of seamlessly handling a wide range of tasks. These models aim to integrate functionalities such as text generation, image recognition, and problem-solving into a unified platform. By simplifying user interactions and enhancing AI's versatility, OpenAI seeks to make AI more accessible and impactful across industries. From healthcare, where AI could assist in diagnostics and patient care, to education, where it could personalize learning experiences, GPT-5 represents a step toward more comprehensive and adaptable AI solutions. OpenAI's Hardware and Robotics Ambitions OpenAI is expanding its focus beyond software, venturing into innovative hardware designs in collaboration with renowned designer Johnny Ive. The envisioned devices aim to prioritize portability and context-aware functionality, potentially replacing traditional screens with audio-visual interactions. This approach reflects OpenAI's commitment to creating intuitive, human-centric AI tools that integrate seamlessly into daily life. Such devices could adapt to user needs, provide real-time assistance, and operate naturally in diverse environments. In the realm of robotics, Altman outlined a long-term vision to develop humanoid robots within the next decade. These robots could incorporate advanced AI capabilities to perform complex tasks, such as assisting in healthcare settings or automating industrial processes. OpenAI is also exploring advancements in self-driving technology, aiming to reduce reliance on traditional sensors like LiDAR. By using AI's ability to process visual and contextual data, OpenAI hopes to create more efficient and cost-effective autonomous systems. These efforts underscore OpenAI's ambition to push the boundaries of AI applications in both physical and digital domains. Rivalry with Elon Musk Altman addressed the ongoing competition with Elon Musk, accusing Musk of using government influence to gain an advantage in the AI sector. He criticized Musk's 'zero-sum' approach, which frames AI development as a winner-takes-all race. Despite these challenges, Altman reaffirmed OpenAI's commitment to collaboration and transparency, emphasizing the importance of shared progress in AI research. By fostering an environment of openness and ethical responsibility, OpenAI aims to ensure that advancements in AI benefit society as a whole, rather than being concentrated in the hands of a few. Superintelligence: A Defining Milestone Altman defines superintelligence as AI systems capable of autonomous scientific discovery or significantly enhancing human capabilities in science. Achieving this milestone would represent a profound shift in the relationship between humans and technology, with far-reaching societal implications. Altman stressed the need for ethical governance to ensure that superintelligence is developed and deployed responsibly. He urged for careful oversight to mitigate risks, such as misuse or unintended consequences, while maximizing its potential to address global challenges. This vision underscores the importance of balancing innovation with accountability in the pursuit of advanced AI. Reimagining AI Hardware OpenAI envisions a future where AI integrates seamlessly into everyday life through innovative hardware solutions. One concept involves a portable, context-aware device that interacts with users through audio-visual inputs rather than traditional screens. Such a device could adapt to user needs, provide real-time assistance, and operate intuitively in various environments. This vision aligns with OpenAI's broader mission to make AI more human-centric and accessible, bridging the gap between advanced technology and practical, everyday applications. Sam Altman's insights provide a compelling look at the dynamic and competitive nature of the AI industry. From Meta's recruitment strategies to the development of GPT-5 and the pursuit of superintelligence, OpenAI is navigating a complex landscape filled with both opportunities and challenges. By focusing on innovation, collaboration, and ethical responsibility, OpenAI aims to shape the future of AI in ways that benefit society while addressing the competitive pressures of a rapidly evolving field. Media Credit: Matthew Berman Filed Under: AI, Top News Latest Geeky Gadgets Deals Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, Geeky Gadgets may earn an affiliate commission. Learn about our Disclosure Policy.

Is AI eating your brain?
Is AI eating your brain?

Spectator

time2 days ago

  • Spectator

Is AI eating your brain?

Do you remember long division? I do, vaguely – I certainly remember mastering it at school: that weird little maths shelter you built, with numbers cowering inside like fairytale children, and a wolf-number at the door, trying to eat them (I had quite a vivid imagination as a child). Then came the carnage as the wolf got in – but also a sweet satisfaction at the end. The answer! You'd completed the task with nothing but your brain, a pen, and a scrap of paper. You'd thought your way through it. You'd done something, mentally. You were a clever boy. I suspect 80 to 90 per cent of universities will close within the next ten years Could I do long division now? Honestly, I doubt it. I've lost the knack. But it doesn't matter, because decades ago we outsourced and off-brained that job to machines – pocket calculators – and now virtually every human on earth carries a calculator in their pocket, via their phones. Consequently, we've all become slightly dumber, certainly less skilled, because the machines are doing all the skilful work of boring mathematics. Long division is, of course, just one example. The same has happened to spelling, navigation, translation, even the choosing of music. Slowly, silently, frog-boilingly, we are ceding whole provinces of our minds to the machine. What's more, if a new academic study is right, this is about to get scarily and dramatically worse (if it isn't already worsening), as the latest AI models – from clever Claude Opus 4 to genius Gemini 2.5 Pro – supersede us in all cerebral departments. The recent study was done by the MIT Media Lab. The boffins in Boston apparently strapped EEG caps to a group of students and set them a task: write short essays, some using their own brains, some using Google, and some with ChatGPT. The researchers then watched what happened to their neural activity. The results were quite shocking, though not entirely surprising: the more artificial intelligence you used, the more your actual intelligence sat down for a cuppa. Those who used no tools at all lit up the EEG: they were thinking. Those using Google sparkled somewhat less. And those relying on ChatGPT? Their brains dimmed and flickered like a guttering candle in a draughty church. It gets worse still. The ChatGPT group not only produced the dullest prose – safe, oddly samey, you know the score – but they couldn't even remember what they'd written. When asked to recall their essays minutes later, 78 per cent failed. Most depressingly of all, when you took ChatGPT away, their brain activity stayed low, like a child sulking after losing its iPad. The study calls this 'cognitive offloading', which sounds sensible and practical, like a power station with a backup. What it really means is: the more you let the machine think for you, the harder it becomes to think at all. And this ain't just theory. The dulling of the mind, the lessening need for us to learn and think, is already playing out in higher education. New York Magazine's Intelligencer recently spoke to students from Columbia, Stanford, and other colleges who now routinely offload their essays and assignments to ChatGPT. They do this because professors can no longer reliably detect AI-generated work; detection tools fail to spot the fakes most of the time. One professor is quoted thus: 'massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate.' In the UK the situation's no better. A recent Guardian investigation revealed nearly 7,000 confirmed cases of AI-assisted cheating across British universities last year – more than double the previous year, and that's just the ones who got caught. One student admitted submitting an entire philosophy dissertation written by ChatGPT, then defending it in a viva without having read it. The result? Degrees are becoming meaningless, and the students themselves – bright, ambitious, intrinsically capable – are leaving education maybe less able than when they entered. The inevitable endpoint of all this, for universities, is not good. Indeed, it's terminal. Who is going to take on £80k of debt to spend three years asking AI to write essays that are then marked by overworked tutors using AI – so that no actual human does, or learns, anything? Who, in particular, is going to do this when AI means there aren't many jobs at the end, anyhow? I suspect 80 to 90 per cent of universities will close within the next ten years. The oldest and poshest might survive as finishing schools – expensive playgrounds where rich kids network and get laid. But almost no one will bother with that funny old 'education' thing – the way most people today don't bother to learn the viola, or Serbo-Croat, or Antarctic kayaking. Beyond education, the outlook is nearly as bad – and I very much include myself in that: my job, my profession, the writer. Here's a concrete example. Last week I was in the Faroe Islands, at a notorious 'beauty spot' called Trælanípa – the 'slave cliff'. It's a mighty rocky precipice at the southern end of a frigid lake, where it meets the sea. The cliff is so-called because this is the place where Vikings ritually hurled unwanted slaves to their grisly deaths. Appalled and fascinated, I realised I didn't know much about slavery in Viking societies. It's been largely romanticised away, as we idealise the noble, wandering Norsemen with their rugged individualism. Knowing they had slaves to wash their undercrackers rather spoils the myth. So I asked Claude Opus 4 to write me a 10,000-word essay on 'the history, culture and impact of slavery in Viking society.' The result – five minutes later – was not far short of gobsmacking. Claude chose an elegant title ('Chains of the North Wind'), then launched into a stylish, detailed, citation-rich essay. If I had stumbled on it in a library or online, I would have presumed it was the product of a top professional historian, in full command of the facts, taking a week or two to write. But it was written by AI. In about the time it will take you to read this piece. This means most historians are doomed (like most writers). This means no one will bother learning history in order to write history. This means we all get dumber, just as the boffins in Boston are predicting. I'd love to end on a happy note. But I'm sorry, I'm now so dim I can't think of one. So instead, I'm going to get ChatGPT to fact-check this article – as I head to the pub.

OpenAI rolls out first international learning platform
OpenAI rolls out first international learning platform

Coin Geek

time3 days ago

  • Coin Geek

OpenAI rolls out first international learning platform

Getting your Trinity Audio player ready... OpenAI, the maker of ChatGPT, has entered into a strategic agreement with the IndiaAI Mission to introduce OpenAI Academy in India. This marks the platform's first-ever international Academy chapter, and the formal start of OpenAI's education and artificial intelligence (AI) literacy programs in India. The South Asian nation currently represents the second-largest market for ChatGPT users, highlighting the country's growing interest in AI tools and applications. The collaboration aims to expand access to AI education and training across the country. The partnership underscores India's broader efforts to make advanced technologies more accessible and inclusive as part of its national AI development strategy. 'Together with IndiaAI, we're working to equip the next generation of students, developers, and mission-driven organizations with the tools and training they need to build responsibly with AI,' the company said. As part of the agreement, OpenAI will contribute a range of educational materials and resources to support IndiaAI's 'FutureSkills' initiative, as well as the iGOT Karmayogi platform, which is focused on upskilling civil servants. Additionally, OpenAI will offer up to $100,000 in application programming interface (API) credits to 50 fellows and startups selected under the IndiaAI Mission. The initiative seeks to make AI skills accessible to a broad audience nationwide by providing both online and offline training in English and eventually other regional languages. A key goal of the initiative is to train one million teachers in the practical use of generative AI technologies. OpenAI also plans to organize hackathons across seven Indian states, aiming to engage around 25,000 students. Jason K., Chief Strategy Officer at OpenAI, reportedly said, 'India is emerging as one of the most dynamic hubs for AI innovation. We are thrilled to collaborate with IndiaAI to empower individuals with the skills and confidence to harness AI meaningfully in their daily lives and careers.' 'As demand for AI professionals is expected to reach 1 million by 2026, there's a significant opportunity and a need to expand AI skills, development and make sure people from every part of India can participate and benefit,' he added. The initiative comes at a time when OpenAI is navigating a challenging legal landscape in India, where it is attempting to argue that Indian courts lack jurisdiction over its United States-based operations. This position is likely to face scrutiny, especially given past instances where similar arguments by platforms like Elon Musk's X have been unsuccessful, and tech companies have come under pressure from Indian authorities over regulatory compliance. OpenAI is embroiled in a legal dispute initiated by the Indian news agency ANI. The case centers on allegations that OpenAI used copyrighted content without authorization, intensifying the legal and regulatory challenges the company faces in one of its most important markets. Major shift in Sam Altman's India vision In February, OpenAI's chief executive, Sam Altman, held discussions with India's Minister for Electronics and Information Technology (MeitY), Ashwini Vaishnaw, to explore collaborative opportunities in building an affordable and accessible AI infrastructure in India. The talks focused on areas such as the development of AI models, production of graphics processing units (GPUs), and the creation of practical AI-driven applications tailored to India's needs. 'Had super cool discussion with Sam Altman on our strategy of creating the entire AI stack – GPUs, model, and apps. Willing to collaborate with India on all three,' Vaishnaw wrote on X after the discussions. Altman's India visit marked a notable change in his outlook compared to his statements in 2023, when he expressed skepticism about the ability of countries outside the United States to develop cutting-edge AI technologies. His recent engagement signals a recognition of India's growing influence in the global AI landscape and its potential to become a key contributor to the next wave of AI advancements. 'India is an incredibly important market for AI in general, for OpenAI in particular. It's our second-biggest market, and we have tripled our users here in the last year… The country has embraced AI technology and is building the entire stack, from chips to models and applications,' Altman had said in February. India's AI market to more than triple to $17 billion by 2027 Altman's change in outlook toward India is no coincidence—it mirrors the nation's fast-growing influence in the global technology arena. Thanks to its vast digital population and abundance of skilled engineers, India is increasingly seen as a center for innovation, real-world testing, and large-scale implementation of advanced technologies such as artificial intelligence. As the world's second-largest online market, boasting over 900 million Internet users, India presents a powerful combination of widespread mobile connectivity and strong digital infrastructure. This makes the South Asian powerhouse an ideal environment for launching scalable, affordable AI innovations tailored to both local and global needs. According to a report by the Boston Consulting Group (BCG), India's domestic AI market is projected to more than triple to $17 billion by 2027, making it one of the fastest-growing AI economies globally. This momentum is fueled by rising enterprise tech investments, a thriving digital ecosystem, and a robust talent base. 'India already has 600,000+ AI professionals, with the number expected to double to 1.25 million by 2027. The country accounts for 16% of the global AI talent pool, second only to the United States, a reflection of both its demographic advantage and STEM (Science, technology, engineering, and mathematics) education pipeline,' the BCG report said. The supporting infrastructure is also evolving rapidly. By 2025, the world's most populous country is set to establish 45 new data centers, adding approximately 1,015 megawatts of capacity to its existing network of 152 facilities. India's startup landscape is evolving just as quickly. The country is now home to more than 4,500 AI-driven startups, with nearly 40% founded in the past three years, the BCG report said. These companies are bringing innovation to a wide range of industries, including healthcare, agriculture, transportation, and financial services. Many of them are tackling unique Indian problems through AI-based solutions, which are increasingly gaining relevance on a global scale. 'With its talent, scale, infrastructure, and policy tailwinds, India is not just poised to adopt AI, it is positioned to help define how AI shapes the global economy,' the BCG report pointed out. In March 2024, the Indian government approved a funding package of approximately $1.24 billion for the IndiaAI Mission, to be implemented over a five-year period. This significant investment is designed to accelerate the country's AI ecosystem, drive innovation, and support entrepreneurial ventures. According to the Union Cabinet—the country's highest policy-making authority—the initiative is expected to benefit the public and stimulate economic growth at the grassroots level. The IndiaAI Mission envisions the creation of a robust, inclusive AI ecosystem by addressing key areas such as equitable access to computing power, improved data quality, development of homegrown AI technologies, and fostering a skilled talent pool. It also aims to facilitate collaboration between academia and industry, support startups through risk capital, encourage socially beneficial AI applications, and uphold ethical standards in AI development. These goals are being pursued under seven foundational pillars that guide the Mission's framework. As part of its strategy, the Mission is building a scalable AI computing infrastructure tailored to the needs of India's expanding AI research and startup landscape. This includes setting up an advanced AI compute system equipped with over 18,000 GPUs, made possible through public-private partnerships. Eligible users will be able to access these computing resources at 40% reduced cost under the scheme, significantly lowering barriers to AI development and experimentation. In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek's coverage on this emerging tech to learn more why Enterprise blockchain will be the backbone of AI . Watch: India posed to become leaders in Web3 title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen="">

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store