logo
Gemini Forged like Achilles: Google Shields Gemini, with Powerful Security Advancements.

Gemini Forged like Achilles: Google Shields Gemini, with Powerful Security Advancements.

Time of India2 days ago

What are these threats that AI has grown vulnerable to?
Live Events
How does Google protect their Gemini models from such threats?
Delving a bit deeper into these methods of increasing security:
Automated Red Teaming (ART) and Adversarial Fine Tuning:
Instruction-Data Separation:
Constant Evaluation:
As AI expands itself into more industries, the need for AI to be forged like a modern-day Achilles becomes increasingly necessary. For, the more its branches extend into various sectors, the more vulnerable it shall become to the ever-evolving minefield of digital threats. Google DeepMind has unveiled an upgrade to the security safeguards, with a goal to protect its Gemini models.In its newly launched 'White Paper,' Google has laid down its strategic blueprint for combating 'indirect prompt injections' that make agentic AI tools supported by advanced large language models vulnerable to attacks of such kind. Google has made its wants clear to the public, wherein they choose to create AI tools that aren't just capable but also tools that are secure.AI agents are known for their ability to fulfill straightforward tasks in a moment's time; completion of said tasks, however, involves garnering access to information from various means, such as documents, calendars, or external websites. 'Indirect prompt injection' infects these data sources with 'malicious instructions designed to trick AI into sharing private data or misusing its permissions.'Indirect prompt Injection has become an emerging cybersecurity challenge, AI falls short on identifying the difference between instructions from a genuine user and manipulative commands embedded within the data they retrieve.Indirect prompt injection attacks tend to be complex and often require constant surveillance with the requirement of multiple layers of defense. Rather than combatting these challenges manually, evoking a slow and inefficient result, Google has built an automated system that relentlessly strengthens Gemini's defenses.The strategy essentially involves the internal team constantly attacking Gemini in relevant ways to pinpoint security weaknesses that Gemini possesses. This technique, in comparison to the others elaborately mentioned in the 'white paper,' helped significantly to increase Gemini's protection rate against 'indirect prompt injection' attacks while the tool is being used by the users.What makes modern-day cyberattacks malicious is their adaptive nature. Upon being struck once by the security safeguards, they return again, with adaptive measures, ensuring they make palpable damage. Hence, basic security measures work well against non-adaptive cyberattacks, the kind that stands in contrast to the one explained previously.Therefore, to combat complex attacks such as this, Gemini's security enhancements ensure focus on both proactive and reactive strategies:'ART generates effective indirect prompt injections targeting sensitive information.' This essentially mimics the tactics of real-world adversaries, which in turn teaches Gemini to ignore such malicious embedded instructions and follow the original user request, resulting in the model only providing the correct and secure answer it is meant to provide. Further, this form of training lets the model innately understand how it must handle compromised information that evolves over time as part of adaptive attacks.This safeguard helps Gemini differentiate between a command fed to the model by the genuine user and prompts that are embedded with malicious instructions. This is deemed to be an essential defense line against prompt injection.Considering the attacks' adaptive capabilities, it asks for constant surveillance; therefore, the system is tested using a dynamic feedback loop of continuous evaluations.Google understands that this isn't a 'solved' problem but merely a step forward into the minefield. As generative AI becomes a pivot to search, productivity tools, assistance, and more, the stakes for secure and trustworthy AI are higher than ever. Therefore, Gemini's upgrade marks a key milestone in this AI race, where they ensure that powerful tools such as these remain loyal to their wielder.

Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Google's Audio Overview can turn those boring documents into engaging podcasts
Google's Audio Overview can turn those boring documents into engaging podcasts

Mint

time23 minutes ago

  • Mint

Google's Audio Overview can turn those boring documents into engaging podcasts

The New Normal: The world is at an inflexion point. Artificial Intelligence is set to be as massive a revolution as the Internet has been. The option to just stay away from AI will not be available to most people, as all the tech we use takes the AI route. This column series introduces AI to the non-techie in an easy and relatable way, aiming to demystify and help a user to actually put the technology to good use in everyday life. The first time I heard an article I had written being discussed, I sat up and listened in utter surprise. Two people I had never come across before were deep in conversation about what I'd written. This man and woman team went through everything, making up a slick podcast. These were AI voices that sounded totally natural and pleasant. This kind of conversation is generated by a feature called Audio Overview. To experience it immediately, download the Gemini app on your phone. Tap the plus sign at the bottom and navigate to one of your documents. Once uploaded, see the tab on top of it, click - and go make yourself a cup of coffee. By the time you get back with your streaming cup, the Audio Overview should be ready. Click, as indicated, and sit back to listen. The two AI hosts will now talk about your content. And they do so with impressive clarity and skill. It's no gimmick or party trick. Also read: Why India is so far behind in the fight for AI supremacy Listening to content can be a great way of absorbing it. Anyone can get tired of reading, since we have to do so much of it each day. As long as you have content that is in a Word file, plain text, a PDF, or Google doc, you can feed it to Gemini to turn it into an Audio Overview. I was putting off going through an 83-page document, when I figured I could quickly get the general gist of it with an Audio Overview. At work this can really help productivity. It's also great for just giving your eyes a rest. If you happen to have a visual impairment, the feature is a relief as you can get so much more done. NotebookLLM: podcasts from anything Audio Overview can be even more magical in its original home, Google's NotebookLM. To find that, go to your browser on any device and type NotebookLM in the search bar. Sign in with your Google account and you're in. Add up to 50 items of content including articles, notes, YouTube videos, presentations and more, to make up a notebook. All of these will be combined into an Audio Overview or a more full-fledged Deep Dive conversation through the Chat and Studio tabs. This does take a few minutes, so find something else to do for a bit. Once the conversation is ready you can listen in the browser, or download for later. Or even share it. This amazing audio feature gives you more control in NotebookLM than it does in Gemini. NotebookLM does have an app, but that doesn't seem to have all the features. You can select the playback speed, the length of the conversation, and incredibly even the language the AI hosts should speak. And yes, Hindi is on the list, making it possible to reach a wider audience with that content. It's easy enough to imagine the feature being used for training and education, making it so much more widely useful. Also read: AI didn't take the job. It changed what the job is. As if all this weren't impressive enough already, here's another way you can control the conversation. In NotebookLM you'll also find a Customise tab for the Deep Dive audio. Here, you can actually describe what you want the hosts to focus on. Request a focus on some selected aspect of the content, or ask to keep the language simple or technical. You have the option of deleting the conversation and re-generating it with fresh instructions. You can easily create a conversation in multiple languages for use with different audiences, or change the difficulty level. If you visit aistudio via the browser, you'll see that Google is experimenting with users being able to change the accent or style of speaking in a feature called Native Speech Generation. There's no announcement to the effect but one can easily see how this could be added to Audio Overview sometime. It works very well and is fascinating to try out. Join the conversation Another impressive but experimental feature lets you actually 'join' the podcast, by tapping a button. Interrupt the hosts and ask a question or make them change focus or ask for a comment on your opinion on the subject. This is a little slow and you'll be left wondering if the hosts heard you at all, but I fully expect it to become more fluid in the future as Google adds new features quite frequently. Also read | Mary Meeker's AI report: Decoding what it signals for India's tech future Audio Overview isn't flawless, but chances of getting things wrong are minimised because it's you giving the content. The feature has worked well enough for Google to have brought it to Search, where it will give you AI Overview in audio form – being tried out in the US first. Mala Bhargava is most often described as a 'veteran' writer who has contributed to several publications in India since 1995. Her domain is personal tech and she writes to simplify and demystify technology for a non-techie audience.

AI didn't take the job. It changed what the job is.
AI didn't take the job. It changed what the job is.

Mint

timean hour ago

  • Mint

AI didn't take the job. It changed what the job is.

Over the past few weeks, I've been on the road. Parbhani, Pune, Chennai, Jaipur. In small-town labs and factory floors, I saw jobs that still exist, but don't look like they used to In Parbhani I met Dr. Chaitanya, who runs a 24-hour diagnostics lab above a heart clinic. He told me he's failed to detect cancer before—not out of neglect, but because he was worn out. Now, when something doesn't feel right, he runs the slide through a machine. It doesn't get distracted. It doesn't get tired. It caught leukaemia in a boy whose report looked normal at first glance. In Jaipur I spent time inside Wipro's factories. I met Chandni—just out of college, far from home—running a CNC machine built for someone twice her size. The platform was raised to fit her. Sensors pause the line if she skips a step. She's not fighting the machine. She's learning to work with it. And then I came back to Bengaluru. Over the weekend, I caught up with a few junior engineers—entry-level coders, recently let go. We sat in a noisy café near HSR, talking about layoffs. Some of their friends—older, with fatter salaries—had been let go, too, from well-known names on Outer Ring Road. Most of them hadn't told their families yet. Someone joked their severance would go into a 'detox trip". But the silence after that said more. Also read | Mary Meeker's AI report: Decoding what it signals for India's tech future I kept thinking about all of it. From Parbhani to Jaipur to Bengaluru, I've seen AI reshape work—but in such unsettling ways. In some places, it keeps people going. In others, it shuts the door. And I've come back with questions I can't truly answer. Who gets to stay in the game? Who gets to rewrite their role? And who just disappears? *** We've spent years asking the wrong question. It's never been just 'Will AI take jobs?" That's the headline version—the one that misses what's actually unfolding on the ground. What I've seen is something slower and harder to name: jobs are shifting shape. The work still exists, but it doesn't look like it used to. Doctors don't just rely on training—they rely on machines to catch what their fatigue might miss. Factory workers aren't lifting metal—they're supervising systems. Engineers aren't writing code—they're managing what the agents spit out. In some places, people are being lifted. In others, pushed out. This isn't about replacement. It's about redefinition. And not everyone is getting the chance to adapt. *** In Parbhani, Dr. Chaitanya isn't trying to be some AI-era pathologist. He just doesn't want to miss a sign of cancer again. He bought the scanner not because anyone sold him a pitch-deck future, but because he was tired. Because late at night, after hours of non-stop samples, the eyes slip. And he knows what that costs. The machine doesn't replace his judgment – it just doesn't lose focus when he does. In Jaipur, Wipro didn't automate Chandni out. They built the floor to fit her. She's running a CNC machine designed for someone taller, stronger—but they raised the platform instead. Her job wasn't taken. It was made possible. She oversees the system now. And when she sends money home, there's no debate anymore about whether girls can handle mechanical work. Also read: Indian companies lag in workforce upskilling amid AI disruption, job cuts And then there's Bengaluru. The coders I met had barely started. A few months in, then gone. Not for bad performance. Just… gone. Their work was handed to tools they weren't trained to supervise. Their seniors—some drawing seven-figure salaries—were asked to leave too. One of them said most of his severance would go into a detox trip. We all laughed. But it didn't feel funny. Same tool. But in Parbhani, it buys time. In Jaipur, it makes the job possible. In Bengaluru, it ends it. **** There's something I've been noticing everywhere lately—in factories, hospitals, GCCs, even small startups. Someone in the room knows how to work with the AI. Not just use it, but shape it. Prompt it right. Catch when it's wrong. That person sets the tone for how work flows. And then there's everyone else. Trying to keep up. Hoping they're not left behind. It's not just a skill gap. It's who gets the confidence to speak up. Who gets the permission to push back when the machine's answer doesn't feel right. Who gets to set the rules for how AI shows up—and who's left cleaning up after it. One founder told me straight: 'We're not hiring another ops exec. We're hiring someone to manage the agents." The job still exists. It just looks different now. And the person who knows how to talk to the machine gets to decide how everyone else works around it. That's the shift I can't ignore. It's not about mass layoffs. It's about brutal sidelining. Not fired. Still on payroll. But it is no longer in the loop. *** I keep coming back to something Andy Grove once said. Intel was stuck in the memory chip business, losing ground fast. Grove turned to CEO Gordon Moore and asked, 'If we were fired, and the board brought in someone new, what do you think they'd do?" Moore said, 'They'd get us out of memories." Grove paused, then said, 'Then why don't we walk out the door, come back in, and do it ourselves?" And that's what they did. They walked back in and changed the company. Also read: Microsoft envisions a web driven by AI agents. What will it look like? What stayed with me wasn't the decision itself—it was the mindset. They gave themselves permission to reset. Same chairs. Same table. Just a different way of thinking. Most people I meet don't get to do that. In every workplace I've visited lately—factories, hospitals, GCCs—there's always someone who gets to reframe the game. The person who speaks up, shapes the tool, sets the tone. Everyone else is just trying to stay in the room. Or figuring out the exit. *** I asked Dr. Chaitanya if he ever worries AI will take over his work. He didn't hesitate. 'I just don't want to miss what matters," he said. 'Let the machine help with the rest." Chandni said the same thing, in different words. 'If it helps us do the work better, why fear it?" Neither of them were trying to protect their turf. They just wanted the tools to hold up when it counted. When they're tired. When something's easy to miss. When a mistake can't be undone. They weren't talking about AI as a threat. They weren't talking about it as the future. They were talking about the work—what it asks of them, what it gives back, and what they still want to hold on to. ***** So yes, people will need to learn. New tools, new ways of working, new habits. That's always been part of work. But before any of that, they need a little space to figure things out. To ask questions without sounding slow. To try, to fumble, to not know right away—and not be punished for it. Because the bigger risk isn't that AI takes your job. Also read: Why AI is central to the new browser wars It's that you're still in the role, still showing up every day—but slowly pushed out of the decisions. Not because you can't contribute. But because no one gave you the chance to learn how. And by the time you notice what's changed, the work has already moved on—without your voice in the room. Pankaj Mishra is a journalist and co-founder of FactorDaily. He has spent over two decades reporting on technology, startups, and work in India with a focus on the people and places often left out of the spotlight.

How Bill Gates and Melinda Gates reacted 'very differently' when their daughter announced her startup
How Bill Gates and Melinda Gates reacted 'very differently' when their daughter announced her startup

Time of India

timean hour ago

  • Time of India

How Bill Gates and Melinda Gates reacted 'very differently' when their daughter announced her startup

Microsoft co-founder Bill Gates ' daughter Phoebe Gates launched an ecommerce app earlier this year. Phoebe co-founded the e-commerce app Phia with her former Stanford roommate Sophia Kianni. The platform uses AI to help users compare prices of new and second-hand products across more than 40,000 websites. A web browser/app, Phia aims to be the of fashion, offering an instant price comparison from thousands of e-commerce sites for any item, new or used. Phoebe Gates is the youngest child of Bill Gates and his ex-wife, Melinda French Gates. When she told her father that she and Ms. Kianni wanted to get into the e-commerce space, his reaction, he said, was 'Wow, a lot of people have tried, and there's some big guys in there.' However, he was reportedly worried she might ask for money. In an interview published by the New York Times, soon after Phia's web browser and app went live, Gates said: 'I thought, 'Oh boy, she's going to come and ask.' 'I would have kept her on a short leash and be doing business reviews, which I would have found tricky,' the businessman explained of his hesitance. 'And I probably would have been overly nice but wondered if it was the right thing to do.' by Taboola by Taboola Sponsored Links Sponsored Links Promoted Links Promoted Links You May Like Memperdagangkan CFD Emas dengan salah satu spread terendah? IC Markets Mendaftar Undo Bill Gates: Phoebe is the most different than I am Her mother Melinda Gates , who Phoebe calls her 'rock,' told her she had to raise the capital on her own. 'She saw it as a real opportunity for me to, like, learn and fail,' she said. Talking further about Phoebe, Gates said that among all her children, she is 'the most different than I am". "Because she's so good with people. When we would go on family vacations, we would find some part of the beach to just be off on our own, and Phoebe would go down the beach and meet people and bring them back to introduce them to us,' Gates explained. AI Masterclass for Students. Upskill Young Ones Today!– Join Now

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store