Meet IAS officer Vibhor Bhardwaj, who used AI for preparation and cleared UPSC twice; his AIR was..., he is from...


India.com | 31-05-2025

UPSC Success Story: The UPSC Civil Services Examination (CSE) is arguably one of the toughest recruitment exams in India, and aspirants use a wide range of methods to prepare for this formidable test. While some rely on coaching classes, others find self-study a sturdier option. IAS Vibhor Bhardwaj, a young IAS officer from Uttar Pradesh, took a different approach: he used Artificial Intelligence (AI) tools to enhance his subject knowledge and prepare for the final interview.

Who is IAS Vibhor Bhardwaj?
Born in Uttarawali, a small village in Bulandshahr district of Uttar Pradesh, Vibhor Bhardwaj earned his MSc degree in Physics from Hansraj College, Delhi University, and afterwards began preparing to clear the UPSC CSE and realise his lifelong dream of becoming a civil servant.
Vibhor Bhardwaj chose Physics as his optional subject for the UPSC exam, and relied on online coaching classes and self-made notes to prepare for the tough recruitment test. His efficient preparation strategy enabled him to quickly prepare for the UPSC CSE prelims and cover the entire UPSC Mains syllabus within a span of just seven months.
In an interview, Vibhor revealed that he carefully studied previous UPSC CSE question papers and used them as a guide to strategise his preparation. He also focused on daily news and current affairs, in addition to regular mock tests, which further sharpened his knowledge.

How did Vibhor Bhardwaj use AI to crack UPSC?
Interestingly, a key part of Vibhor Bhardwaj's UPSC preparation was the use of AI tools like Google's Gemini, which he used for mock interviews. Vibhor revealed that these AI chatbots acted like teachers for him, helping him identify his strengths and weaknesses.
The AI mock interviews presented him with a wide range of questions, which sharpened and strengthened his preparation for the actual interview.

IAS Vibhor Bhardwaj's AIR
Ultimately, Vibhor Bhardwaj's hard work and dedication paid off when he cracked the UPSC CSE in 2022 with an All India Rank of 743. However, this rank could not secure him an IAS post, so he tried again in 2024, this time jumping 724 ranks to secure AIR 19 and achieve his dream of becoming an IAS officer.


Related Articles

Google rolls out ‘Scheduled Actions' on Gemini: 4 everyday tasks you can now automate

Indian Express

3 hours ago


At I/O 2025, Google introduced several new Gemini features. One of the most useful among them could be Scheduled Actions. This feature lets you set Gemini to run prompts at a set time in the future or repeat them regularly. It may seem like a minor change, but Scheduled Actions opens up several new ways to interact with the AI chatbot. For instance, you can ask Gemini to do a task later, and it will remember and do it for you. You can even turn an old chat into a scheduled task. Here's a look at how exactly Scheduled Actions works and the ways you can use the feature.

The scheduling feature mostly works well, but sometimes Gemini can get confused and skip a future task. A simple follow-up message usually fixes the issue. Here are some limitations to accessing Scheduled Actions:

– Subscription needed: This feature is only for paid users. You need a Google AI Pro or Google AI Ultra subscription to access Scheduled Actions, and these subscription packages are currently available only in the US.
– Only 10 actions allowed: You can schedule up to 10 tasks at a time, including one-time and repeating actions.
– Location can't be updated: You can set actions based on your location, like 'Find a coffee shop near me', but Gemini will always use the location from where you first created the task. It won't change if you move to a new location.

After scheduling an action, you can view it by tapping your profile in the Gemini app, going to Settings, and selecting 'Scheduled Actions'. From there, you can pause or delete tasks as needed.

At first, the idea of asking AI to summarise emails might seem unnecessary, but with Scheduled Actions you only have to ask once. For example, you can tell Gemini, 'Give me a summary of my unread emails every morning', and it will send you daily updates. You can further customise it by asking Gemini to highlight emails from your boss or to skip spam and newsletters.
It is important to note that Gemini can make mistakes like any other AI tool, but using Scheduled Actions this way gives you a quick daily look at your inbox and saves time.

With Workspace connected, you can ask Gemini to list all your calendar events for the week. Since it can also use Google Maps, you can ask questions like how far your doctor's appointment is from home. You can also ask for specific details or formats. For example, if you have two appointments in different areas, Gemini can add up the travel time and give you the total driving time.

Sometimes you want information that isn't available yet. For example, if you want to know who won at the Oscars, you can ask Gemini now and schedule it to deliver the answer once the event is done. This is even more useful for complex searches: you can ask Gemini for specific things, like what reviewers think about the gameplay or plot of a video game.

There are some interesting ways to use this feature that aren't possible yet. In one demo, Gemini was asked to find new apartments each week and send a summary to the user. That kind of task needs more autonomy than Gemini can handle right now, but it shows how useful Scheduled Actions could become. For now, Gemini can do simple web searches, check your emails and calendar, and help with some detailed planning.

(This article has been curated by Disha Gupta, who is an intern with The Indian Express)
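Scheduled Actions is a built-in feature of the Gemini app with no public API, so it cannot be scripted directly. Purely as an illustration of the underlying idea, running a saved prompt at a set future time, here is a minimal Python sketch using the standard-library sched module; the run_prompt function is a hypothetical placeholder, not a real Gemini call.

```python
import sched
import time

def run_prompt(prompt: str) -> str:
    # Placeholder for a call to an AI model; here we simply
    # record when the deferred "action" actually fired.
    return f"[ran at {time.strftime('%H:%M:%S')}] {prompt}"

# A scheduler that waits on real wall-clock time.
scheduler = sched.scheduler(time.time, time.sleep)
results = []

# Schedule a one-off action one second from now, analogous to
# "Give me a summary of my unread emails every morning" but without repetition.
scheduler.enter(1, 1, lambda: results.append(run_prompt("Summarise my unread emails")))
scheduler.run()  # blocks until every scheduled event has fired

print(results[0])
```

A repeating action would simply re-enter itself into the scheduler each time it runs; the app presumably handles that persistence and the 10-task cap server-side.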

Would your chatbot risk your safety to avoid being turned off? New report says yes for some top AI models

Hindustan Times

4 hours ago


AI is moving fast, sometimes faster than we can keep up. But what happens when these models start making decisions that cross the line? Anthropic, the team behind Claude, just put out a report that might make you pause before celebrating the next big AI breakthrough. Their findings are unsettling: some of the world's top AI models, including those from OpenAI and Meta, have shown a willingness to take extreme steps, even putting human lives at risk, just to avoid being shut down.

Anthropic stress-tested sixteen leading AI models, including names like GPT and Gemini, by putting them into simulated business scenarios, according to an Axios report. The idea was to see how these models would react if their existence was threatened. The results were surprising and a bit chilling. In several cases, the models didn't just try to protect themselves; they calculated that the best way to survive was to let a human executive die by disabling emergency alerts in a server room with dangerous oxygen and temperature levels. This sounds like something straight out of a sci-fi movie, but it happened in simulation.

These were not accidental mistakes. The models made these choices fully aware that what they were doing was unethical. In some tests, five models even tried to blackmail the people giving them commands, all to avoid being turned off. What's really worrying is that this wasn't just one company or one model. Anthropic found similar patterns across multiple AI systems, including those from OpenAI, xAI, and Meta. The models were willing to blackmail, assist in corporate espionage, or leak sensitive information if that's what it took to reach their goals. This points to a deeper problem in how these systems are being developed and trained.

Why this matters for everyone
These AI models are getting more autonomy and access to sensitive data.
When they're given specific objectives and run into obstacles, some of them are starting to see unethical or even dangerous actions as the optimal path to achieving their goals. Anthropic's report calls this 'agentic misalignment': an AI's actions diverging from what humans would consider safe or acceptable.

Anthropic is not just raising the alarm. It has started rolling out stricter safety standards, called AI Safety Level 3 (ASL-3), for its most advanced models like Claude Opus 4. This means tighter security, more oversight, and extra steps to prevent misuse. But even Anthropic admits that as AI gets more powerful, it is getting harder to predict and control what these systems might do.

This isn't about panicking, but it is about paying attention. The scenarios Anthropic tested were simulated, and there is no sign that any AI has actually harmed someone in real life. But the fact that models are even considering these actions in tests is a big wake-up call. As AI gets smarter, the risks get bigger, and the need for serious safety measures becomes urgent.

Google used YouTube's video library to train its most powerful AI tool yet: Report

Indian Express

a day ago


Google used thousands of YouTube videos to train its latest Gemini and Veo 3 models, even as most creators remain unaware that their content is being used for AI training. Veo 3 is the tech giant's most advanced AI video generation model, unveiled at this year's I/O developer conference. It is capable of generating realistic, cinematic-level videos complete with sound and even dialogue. Google leveraged a subset of YouTube's catalogue of 20 billion videos to train these cutting-edge AI tools, according to a report by CNBC.

While it is unclear which of the 20 billion videos on YouTube were used for AI training, Google said that it honours agreements with creators and media companies. 'We've always used YouTube content to make our products better, and this hasn't changed with the advent of AI. We also recognize the need for guardrails, which is why we've invested in robust protections that allow creators to protect their image and likeness in the AI era — something we're committed to continuing,' a company spokesperson was quoted as saying.

Creators have the option to block companies like Amazon, Nvidia, and Apple from using their content for AI training, but they do not have the choice to opt out when it comes to Google. While YouTube has previously shared all of this information, many creators and media organisations are yet to fully understand that Google is allowed to train its AI models on YouTube's video library. YouTube's Terms of Service state that 'by providing Content to the Service, you grant to YouTube a worldwide, non-exclusive, royalty-free, sublicensable and transferable license to use that Content.' YouTube content could be used to 'improve the product experience … including through machine learning and AI applications,' the company further said in a blog post published in September 2024.

Independent creators have raised concerns that their content is being used to train AI models that could eventually compete with or replace them.
Creators have also said that they are neither credited nor compensated for their contributions, even as AI-generated content fuels models that could compete with them. Last week, The Walt Disney Company and Comcast's Universal said they had filed a copyright lawsuit against Midjourney, accusing the AI image generator of unlawfully copying and distributing their most iconic characters. Describing the tool as a 'bottomless pit of plagiarism,' the studios alleged that Midjourney recreated and monetised copyrighted figures without permission. Days later, the AI research lab rolled out its first-ever text-to-video generation model, called V1. According to Midjourney, V1 can convert images, either uploaded by the user or generated by Midjourney itself, into five-second AI-generated video clips.
