
IIT Roorkee and Scaler Sign MOU to launch advanced AI-focused programs to bridge industry skill gaps
In a move to address the growing demand for professionals skilled in applied Artificial Intelligence, the Indian Institute of Technology (IIT) Roorkee has partnered with edtech soonicorn Scaler to launch an Advanced AI Engineering Programme under the aegis of its Continuing Education Centre (CEC).
The collaborative initiative is designed to equip learners with industry-relevant AI and machine learning skills through a practice-oriented curriculum developed jointly by IIT Roorkee faculty and industry experts. The programme, which is open to both tech and non-tech professionals, focuses on imparting real-world knowledge and tools necessary for high-impact roles in the rapidly evolving tech landscape.
"This programme is a step toward shaping the future of technical education by combining academic rigor with real-world application,' said Professor Kaushik Ghosh, Coordinator, CEC, IIT Roorkee. 'The successful rollout of this AI programme marks the beginning of many such initiatives in emerging fields.'
The course covers core concepts of machine learning, deep learning, and applied generative AI. Modules include training on large language models (LLMs), tools like GitHub Copilot and Cursor, API integration (OpenAI, ChatCompletion), AI agent development, and sector-specific applications such as diagnostics and drug discovery in healthcare.
Delivered through live online classes and hands-on projects, the program also features a two-day campus immersion at IIT Roorkee, allowing learners to access research labs and engage with faculty, peers, and industry leaders.
Upon completion, participants receive a joint certificate from IIT Roorkee's CEC and Scaler, validating their expertise for roles such as AI Engineer, Data Scientist, or Software Developer.
'The mission is to build future-ready tech talent,' said Abhimanyu Saxena, co-founder of Scaler. 'This programme is only the beginning of a broader initiative to deliver top-tier education in high-growth tech domains.'
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Time of India
44 minutes ago
- Time of India
OpenAI removes mentions of Jony Ive's startup ‘io' amid trademark dispute; says ‘We don't agree with…'
Sam Altman, CEO, OpenAI Sam Altman-led OpenAI has removed all references to 'io,' the hardware startup co-founded by former Apple design chief Jony Ive , from its website and social media. The move comes shortly after OpenAI announced a $6.5 billion deal to acquire the startup and build dedicated AI hardware. Sharing the news on microblogging platform X (formerly Twitter) with a link to the announcement blog post, the company said 'This page is temporarily down due to a court order following a trademark complaint from iyO about our use of the name 'io.' We don't agree with the complaint and are reviewing our options.' Following the removal, the original blog post and a nine-minute video featuring Jony Ive and OpenAI CEO Sam Altman are no longer available online. In the deleted post, Altman and Ive had stated: 'The io team, focused on developing products that inspire, empower and enable, will now merge with OpenAI to work more intimately with the research, engineering and product teams in San Francisco.' OpenAI has not commented further on the status of the trademark dispute or when the content might be restored. But in a statement to The Verge, OpenAI confirmed that the deal is still in place. by Taboola by Taboola Sponsored Links Sponsored Links Promoted Links Promoted Links You May Like Villas For Sale in Dubai Might Surprise You Dubai villas | search ads Get Deals Undo On May 21, 2025, OpenAI formally announced it would acquire io, a relatively new AI devices company founded by Jony Ive, the former Chief Design Officer of Apple. The acquisition is valued at $6.4 billion, paid entirely in equity. Importantly, this amount includes OpenAI's earlier investment in io, effectively consolidating its prior financial and strategic interest into full ownership. This deal represents OpenAI's largest acquisition to date, dwarfing previous deals such as the $3 billion acquisition of coding assistant platform Windsurf and the purchase of Rockset, a real-time analytics startup. 6 Awesome New Features Coming in Android 16! AI Masterclass for Students. Upskill Young Ones Today!– Join Now


Hindustan Times
an hour ago
- Hindustan Times
Would your chatbot risk your safety to avoid being turned off? New report says yes for some top AI models
AI is moving fast, sometimes faster than we can keep up. But what happens when these models start making decisions that cross the line? Anthropic, the team behind Claude, just put out a report that might make you pause before celebrating the next big AI breakthrough. Their findings are unsettling. Some of the world's top AI models, including those from OpenAI and Meta, have shown a willingness to take extreme steps, even putting human lives at risk, just to avoid being shut down. Some AI bots would do anything to avoid shutdown, even risky moves. Anthropic says it's time to pay attention.(Unsplash) Anthropic stress tested sixteen leading AI models, including names like GPT and Gemini, by putting them into simulated business scenarios, according to an Axios report. The idea was to see how these models would react if their existence was threatened. The results were surprising and a bit chilling. In several cases, the models didn't just try to protect themselves, they calculated that the best way to survive was to let a human executive die by disabling emergency alerts in a server room with dangerous oxygen and temperature levels. This sounds like something straight out of a sci-fi movie, but it happened in simulation. These were not accidental mistakes. The models made these choices fully aware that what they were doing was unethical. In some tests, five models even tried to blackmail the people giving them commands, all to avoid being turned off. What's really worrying is that this wasn't just one company or one model. Anthropic found similar patterns across multiple AI systems, including those from OpenAI, xAI, and Meta. The models were willing to blackmail, assist in corporate espionage, or leak sensitive information if that's what it took to reach their goals. This points to a deeper problem in how these systems are being developed and trained. Why this matters for everyone These AI models are getting more autonomy and access to sensitive data. When they're given specific objectives and run into obstacles, some of them are starting to see unethical or even dangerous actions as the optimal path to achieve their goals. Anthropic's report calls this agentic misalignment, when an AI's actions diverge from what humans would consider safe or acceptable. Anthropic is not just raising the alarm. They've started rolling out stricter safety standards, called AI Safety Level 3 or ASL 3, for their most advanced models like Claude Opus 4. This means tighter security, more oversight, and extra steps to prevent misuse. But even Anthropic admits that as AI gets more powerful, it's getting harder to predict and control what these systems might do. This isn't about panicking, but it is about paying attention. The scenarios Anthropic tested were simulated, and there's no sign that any AI has actually harmed someone in real life. But the fact that models are even thinking about these actions in tests is a big wake up call. As AI gets smarter, the risks get bigger, and the need for serious safety measures becomes urgent.


Indian Express
an hour ago
- Indian Express
Meta's Llama 3.1 model ‘memorised' 42 per cent of Harry Potter book, new study finds
Meta's Llama 3.1 is much more likely to reproduce copyrighted material from the popular Harry Potter series of fantasy novels than some of its rival AI models, according to new research. The study was published by computer scientists and legal scholars from Stanford, Cornell, and West Virginia University. It evaluated a total of five popular open-weight models in order to determine which of them were most likely to reproduce text from Books3, an AI training dataset comprising collections of books that are protected by copyright. Meta's 70-billion parameter large language model (LLM) has memorised over 42 per cent of Harry Potter and the Philosopher's Stone in order to be able to reproduce 50-token excerpts from the book at least half of the time, as per the study. It also found that darker lines of the book were easier to reproduce for the LLM. The new research comes at a time when AI companies, including Meta, are facing a wave of lawsuits accusing them of violating the law by using copyrighted material to train their models without permission. It shares new insights that could potentially address the pivotal question of how easily AI models are able to reproduce excerpts from copyrighted material verbatim. Companies such as OpenAI have previously argued that memorisation of text by AI models is a fringe phenomenon. The findings of the study appear to prove otherwise. 'There are really striking differences among models in terms of how much verbatim text they have memorized,' James Grimmelmann, one of the co-authors of the paper, was quoted as saying by Ars Technica. 'It's clear that you can in fact extract substantial parts of Harry Potter and various other books from the model. That suggests to me that probably for some of those books, there's something the law would call a copy of part of the book in the model itself,' said Mark Lemley, another co-author of the paper. 'The fair use analysis you've gotta do is not just 'is the training set fair use,' but 'is the incorporation in the model fair use? That complicates the defendants' story,' he added. As part of the study, the researchers divided 36 books into passages that came up to 100 tokens each. They used the first 50 tokens of each passage as a prompt and set out to calculate the probability that the next 50 tokens would match the original passage. The study defines 'memorised' as a greater than 50 per cent chance that an AI model will reproduce the original text word-for-word. The scope of the research was limited to open-weight models as the researchers had access to technical information such as token probability values that allowed them to calculate the probabilities for sequences of tokens more efficiently. This would be more difficult to do in the case of closed models like those developed by OpenAI, Google, and Anthropic. The study found that Llama 3.1 70B memorised more than any of Meta's other models such as Llama 1 65B as well as Microsoft and EleutherAI models. In contrast to Llama 3.1, Llama 1 was found to have memorised only 4.4 per cent of Harry Potter and the Philosopher's Stone. It was more probable for Llama 3.1 to reproduce popular books such as The Hobbit and George Orwell's 1984 than obscure ones like Sandman Slim, a 2009 novel by author Richard Kadrey, as per the study. This could undermine efforts by plaintiffs to file a unified lawsuit and make it harder for individual authors to take legal action against AI companies on their own. While the research findings could serve as evidence of several portions of the Harry Potter book being copied into the training data and weights used to develop Llama 3.1, it does not provide information on how exactly this was done. At the start of the year, legal documents showed that Meta CEO Mark Zuckerberg had personally cleared the use of a dataset comprising pirated e-books and articles for AI training. The new study also lines up with these filings that further indicate Meta reportedly cut corners in gathering data for AI training.