
AI Samarth: New curriculum brings AI to classrooms across Delhi schools
A new curriculum for teaching Artificial Intelligence (AI) to school children will be introduced in Delhi schools on May 8. The AI Samarth curriculum has been drafted for students in Classes 6 to 10. The Central Square Foundation (CSF) will launch it jointly with the Wadhwani School of Data Science and AI at the Indian Institute of Technology, Madras.

The syllabus is aimed at children aged 11 to 14. It is designed to help them understand what AI is, how it works, and how it is applied in real life, and to teach the responsible and safe use of AI tools. The programme includes lessons not only for students but also for teachers.
The material will be made available online and will include training on how to use it. It will initially be offered in five languages: Hindi, Marathi, Bengali, Odia, and English, with more languages to be added later. The content will be rolled out over six months.

According to CSF, the framework aims to explain the fundamental concepts of AI and its influence on society. It will address issues including ethics, bias, privacy, and responsibility in the use of AI.

The content is open source, so schools can use it as it stands or modify it to suit their requirements. The framework is designed to be adaptable, so it can be applied across different levels of learning and local contexts.

With this initiative, the creators aim to build greater awareness of AI in schools across the country, so that students and educators alike are equipped to use AI tools in the future.
Related Articles


Mint
Colleagues or overlords? The debate over AI bots has been raging but needn't
There's the Terminator school of perceiving artificial intelligence (AI) risks, in which we'll all be killed by our robot overlords. And then there's one where, if not friends exactly, the machines serve as valued colleagues. A Japanese tech researcher is arguing that our global AI safety approach hinges on reframing efforts to achieve this benign partnership.

In 2023, as the world was shaken by the release of ChatGPT, a pair of successive warnings came from Silicon Valley about existential threats from powerful AI tools. Elon Musk led a group of experts and industry executives in calling for a six-month pause in developing advanced systems until we figured out how to manage the risks. Then hundreds of AI leaders, including Sam Altman of OpenAI and Demis Hassabis of Alphabet's DeepMind, sent shockwaves with a statement that warned: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war."

Despite all the attention paid to the potentially catastrophic dangers, the years since have been marked by AI 'accelerationists' largely drowning out AI doomers. Companies and countries have raced to be the first to achieve superhuman AI, brushing off the early calls to prioritise safety. And it has all left the public very confused.

But maybe we've been viewing this all wrong. Hiroshi Yamakawa, a prominent AI scholar at the University of Tokyo who has spent the past three decades studying the technology, is now arguing that the most promising route to a sustainable future is to let humans and AIs "live in symbiosis and flourish together, protecting each other's well-being and averting catastrophic risks."

Yamakawa hit a nerve because, while he recognises the threats noted in 2023, he argues for a working path toward coexistence with super-intelligent machines, especially at a time when nobody is halting development over fears of falling behind. In other words, if we can't stop AI from becoming smarter than us, we're better off joining it as an equal partner. 'Equality' is the sensitive part: humans want to keep believing they are superior, not equal, to machines.

His statement has generated a lot of buzz in Japanese academic circles, receiving dozens of signatories so far, including some influential AI safety researchers overseas. In an interview with Nikkei Asia, he argued that cultural differences in Asia make it more likely that machines will be seen as peers rather than adversaries. While the United States has produced AI-inspired characters like the Terminator from the eponymous Hollywood movie, the Japanese have envisioned friendlier companions like Astro Boy or Doraemon, he told the news outlet.

Beyond pop culture, there's some truth to this cultural embrace. Japanese respondents had the lowest share, at just 25%, who said products using AI make them nervous, according to a global Ipsos survey last June, compared to 64% of Americans.

It's likely his comments will fall on deaf ears, though, like so many of the other AI risk warnings. Development has its own momentum. And whether the machines will ever get to a point where they could spur 'civilization extinction' remains an extremely heated debate.
It's fair to say that some of the industry's focus on far-off, science-fiction scenarios is meant to distract from the more immediate harms the technology could bring, whether that's job displacement, allegations of copyright infringement or reneging on climate change goals. Still, Yamakawa's proposal is a timely revival of an AI safety debate that has languished in recent years. These discussions can't just rely on eyebrow-raising warnings and the absence of governance.

With the exception of Europe, most jurisdictions have focused on loosening regulations in the hope of not falling behind. Policymakers can't afford to turn a blind eye until it's too late. There is also a need for more safety research beyond the companies trying to create and sell these products, as in the social-media era, when those platforms were obviously less incentivised to share their findings with the public. Governments and universities must prioritise independent analysis of large-scale AI risks.

Meanwhile, as the global tech industry has been caught up in a race to create computer systems that are smarter than humans, it's yet to be determined whether we'll ever get there. But setting godlike AI as the goalpost has created a lot of counter-productive fear-mongering. There might be merit in seeing these machines as colleagues and not overlords.

©Bloomberg. The author is a Bloomberg Opinion columnist covering Asia tech.


News18
Reddit Co-founder Uses AI To 'Hug' Late Mother, Internet Warns Of 'False Memory'
Ohanian used an old picture of himself with his mother and converted it into a short video clip.

Reddit co-founder Alexis Ohanian used artificial intelligence to create a moment with his late mother, and the results left him quite emotional. Ohanian, the husband of Serena Williams, often uses his social media handles to show the importance of his family. He is frequently spotted spending time with Serena and their daughters, Olympia and Adira.

Using AI, Ohanian took an old photograph of himself and his mother and converted it into a moving image that shows the mother and son hugging each other. "Damn, I wasn't ready for how this would feel. We didn't have a camcorder, so there's no video of me with my mom. I dropped one of my favourite photos of us in Midjourney as a 'starting frame for an AI video' and wow… This is how she hugged me. I've rewatched it 50 times," he wrote in the caption.

The post includes the original picture of the Reddit co-founder with his mother, uploaded alongside the AI-generated video showing the heartwarming moment. Although it was an emotional creation for him, the video drew mixed reactions online, with many warning that creating fake memories with the help of AI could affect his mental health.

One user wrote, "Be careful with this. Human memories are very malleable, and you'll remember what the AI shows you, whether it's true or not," while another added, "I've tried this. But be cautious. At least do it repeatedly until you get one that perfectly matches your current memory of them, and delete the rest. Otherwise, you may create false memories. Be aware of what you're doing, potentially rewriting history in a way that can't be reversed."

Another comment read, "It's not how she hugged you. You've been given a false memory," while one more added, "feels like I'm watching a bunch of people about to jump into a pit of despair following and worshipping the AI output."

Amid the strong criticism from social media users, Ohanian shared another message in a follow-up tweet on Monday, June 23. Noting that he lost his mother almost 20 years ago, the businessman wrote, "I lost my mom almost 20 years ago. Trolls can rest assured that I've grieved sufficiently. My family couldn't afford a camcorder, and using tech to generate a few seconds of animation from a still is the equivalent of using AI to stabilise a poorly recorded video — or fill in the gaps of a deteriorated video — of her (if we'd had it). It's not a replacement for a loved one, nor should it be."

Despite the negative remarks on the use of AI, a section of users appeared pleased, with some even creating their own videos of their parents and other loved ones.


Time of India
Zen Technologies acquires 55% stake in Raghu Vamsi Group's defence drone company TISA Aerospace
HYDERABAD: With the spotlight on the defence drone sector post Operation Sindoor, Hyderabad-based anti-drone tech player Zen Technologies Limited has acquired a majority stake in emerging defence technology player TISA Aerospace Pvt Ltd, which specialises in indigenously developing loitering munitions and unmanned aerial vehicles (UAVs).

Zen Technologies will acquire around 54.7% of Hyderabad-based TISA Aerospace for a consideration of nearly Rs 6.6 crore, through the purchase of 2,06,518 equity shares from existing shareholders. It will also acquire 4 lakh 6% compulsorily convertible debentures of TISA, which was set up by Raghu Vamsi Group promoter Vamsi Vikas Ganesula and Kiran Kumar Vagga in December 2020.

News of the acquisition powered Zen Technologies shares to the 5% upper circuit at Rs 1,995.30 apiece on the Bombay Stock Exchange on Monday, compared with the previous session's close of Rs 1,900.30.

The move marks Zen Technologies' foray into the rapidly growing domain of loitering munitions, UAVs and precision-guided weaponry. Zen Technologies chairman and managing director Ashok Atluri said the move would strengthen the company's presence in the rapidly evolving defence drone sector. 'TISA's expertise in loitering munitions provides us with immediate access to advanced technologies and platforms that align with the emerging operational requirements of the armed forces,' he said.

Pointing out that TISA has achieved significant R&D milestones, including the successful execution of a project for DRDO with critical design assistance from IIT Madras, Atluri said: 'By integrating these capabilities with our existing strengths in anti-drone systems and propulsion technologies, we are building a broader and more future-ready defence portfolio.'

He said the move is in line with India's 'urgent need' for self-reliance in defence capabilities, particularly in drones and loitering munitions. 'We see strong potential in product integration across platforms, enabling us to scale faster and compete more effectively in both domestic and global markets,' Atluri added.

TISA Aerospace is focused on the design, development and manufacture of advanced loitering munitions and UAVs tailored for defence applications. The company has delivered loitering munitions meeting DRDO specifications and is developing new variants for the Indian Army.