‘Veena’ guides candidates through UG form fill-up, subject selection

Time of India · 5 hours ago

Kolkata: Veena, the new chatbot on the centralised undergraduate admission portal, interacted with more than 7,236 users and answered 19,844 questions over four days till Saturday evening, higher education department data showed.
From helping applicants select colleges and courses to answering queries on the admission process, document verification and payments, Veena, with its multilingual support, appears to have made it easier for applicants to fill up their forms.
TOI had reported on the chatbot's debut on the centralised portal on May 15; it was launched to guide candidates through the process and clear their doubts in real time, which in turn was expected to reduce errors in the forms.
"As the chatbot utilises Domain-Specific Natural Language Processing techniques to understand and process queries in natural language, students can interact in a conversation-like manner in multiple languages, including Bengali, Hindi and English.
Some applicants asked questions in Bengali though they wrote it in English font. The chatbot gave the answer in Bengali font," said an official. "The chatbot maintained 35%-38% Bengali support on all days."
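For illustration only, the behaviour the official describes, detecting whether a query arrived in Bengali script, romanised Bengali or English and replying in Bengali script for the first two, can be sketched as below. This is a minimal keyword heuristic, not Veena's actual domain-specific NLP pipeline, which the report does not detail; the lexicon, canned answers and function names are hypothetical.

```python
# Illustrative sketch only: a keyword heuristic standing in for the chatbot's
# domain-specific NLP. The lexicon, answers and names here are hypothetical.

# A few common romanised-Bengali tokens; a real system would use a trained
# language-identification / transliteration model rather than a word list.
ROMANISED_BENGALI_HINTS = {"kobe", "kivabe", "kothay", "korbo", "ache", "hobe"}

def detect_language(text: str) -> str:
    """Classify a query as 'bn' (Bengali script), 'bn-latn' (romanised Bengali) or 'en'."""
    # Bengali script occupies the Unicode block U+0980-U+09FF.
    if any("\u0980" <= ch <= "\u09ff" for ch in text):
        return "bn"
    tokens = {t.strip("?.,!") for t in text.lower().split()}
    if tokens & ROMANISED_BENGALI_HINTS:
        return "bn-latn"
    return "en"

# Hypothetical canned answers keyed by intent and reply language.
ANSWERS = {
    "deadline": {
        "en": "The application window closes on the date announced on the portal.",
        "bn": "পোর্টালে ঘোষিত তারিখে আবেদন গ্রহণ বন্ধ হবে।",
    },
}

def answer(query: str, intent: str) -> str:
    # Queries typed in Bengali script or in romanised Bengali both get a
    # Bengali-script reply, mirroring the behaviour described in the report.
    reply_lang = "bn" if detect_language(query) in ("bn", "bn-latn") else "en"
    return ANSWERS[intent][reply_lang]

if __name__ == "__main__":
    print(answer("Form ta kobe joma dite hobe?", "deadline"))           # romanised Bengali -> Bengali reply
    print(answer("When does the application window close?", "deadline"))  # English -> English reply
```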
Another official pointed out that, on average, 2.8 interactions with the chatbot took place per user, indicating "strong engagement". "User growth increased in the first few days, and the applicants rated their experience with the chatbot highly, marking a score between 4.5 and 5, 5 being the maximum," said the official.
The chatbot was introduced after many of last year's applicants said they had found it difficult to understand the course combinations, and many had even made basic mistakes.
"But the chatbot helped clear doubts and minimise errors. The queries are changing from quantitative questions, like the number of seats, eligibility criteria and documents required, to qualitative ones, like comparisons between courses and colleges or and best-suited subject combinations.
The chatbot is answering the queries precisely and instantly," said an official.
The higher education department said the first few days' interactions showed remarkable evolution in the AI system, with the precision of answers improving to almost 99% accuracy. The interaction data and feedback showed that the AI-driven tool had the capacity for self-learning, improving its answers and enhancing the admission process, said an official. "As the chatbot handles a large number of users at the same time, the call waiting time has come down. It is ensuring reliability while complying with privacy and security norms," he said.


Related Articles

MIT study warns how ChatGPT weakens critical thinking

Hans India · 17 hours ago

A new study from MIT's Media Lab is raising red flags about the impact of generative AI tools like ChatGPT on human cognition, particularly among students. The study suggests that using ChatGPT for academic work may reduce brain activity, diminish creativity, and impair memory formation.

The experiment involved 54 participants aged 18 to 39, who were divided into three groups: one using ChatGPT, another using Google Search, and a control group using neither. Each group was asked to write multiple SAT-style essays while wearing EEG devices to measure brain activity across 32 regions. Results showed ChatGPT users exhibited the lowest neural engagement, underperforming across behavioral, linguistic, and cognitive measures. Their essays were also deemed formulaic and lacking originality by English teachers. Alarmingly, as the study progressed over several months, many in the ChatGPT group abandoned active writing altogether, opting instead to copy-paste AI-generated responses with minimal editing.

Lead author Nataliya Kosmyna explained her urgency to publish the findings ahead of peer review, saying, 'I'm afraid in 6-8 months some policymaker will propose 'GPT for kindergarten.' That would be absolutely detrimental to developing brains.'

In contrast, the group that relied solely on their own brainpower showed stronger neural connectivity in the alpha, theta, and delta bands, which are linked with creativity, memory, and semantic processing. These participants felt more ownership over their work and reported higher satisfaction. The Google Search group also demonstrated high engagement and satisfaction, suggesting traditional web research supports more active learning than LLM use.

In a follow-up test, participants had to rewrite a previous essay, this time without their original tool. ChatGPT users struggled, barely recalling their previous responses, and showed weaker brain wave activity. In contrast, the brain-only group, now using ChatGPT for the first time, exhibited increased cognitive activity, suggesting that AI can support learning, but only when foundational thinking is already in place.

Kosmyna warns that heavy AI use during critical learning phases could impair long-term brain development, particularly in children. Psychiatrist Dr. Zishan Khan echoed this concern: 'Overreliance on LLMs may erode essential neural pathways related to memory, resilience, and deep thinking.'

Ironically, the paper itself became a case study in AI misuse. Some users summarized it using ChatGPT, prompting hallucinated facts, such as falsely stating that the version of ChatGPT used was GPT-4o. Kosmyna had anticipated this and included 'AI traps' in the document to test such behavior.

MIT researchers are now expanding their work into programming and software engineering, and early results are even more troubling, suggesting broader implications for industries seeking to automate entry-level tasks. While previous studies have highlighted AI's potential to boost productivity, this research underscores the urgent need for responsible AI use in education, backed by policies that balance efficiency with brain development. OpenAI did not respond to a request for comment. Meanwhile, the debate on the role of AI in learning continues, with growing calls for regulation, transparency, and digital literacy.

'Who are you?' Mysterious AI voices answer calls of Iranians; diaspora feels 'helpless' as communication with family disrupted

Time of India · 18 hours ago

(Image: Iran after it was reportedly struck by an Israeli airstrike. Image credits: AP)

As tensions escalate between Iran and Israel, Iranians living abroad are encountering an unsettling new challenge: robotic voices answering their calls home. Since Israel launched airstrikes on Iran a week ago, targeting nuclear and military sites, communication with loved ones inside the country has become nearly impossible, news agency AP reported. The Iranian government has imposed a widespread internet and phone blackout, leaving families abroad desperate for any news.

Ellie, a 44-year-old British-Iranian woman, was shocked when she tried to call her mother in Tehran. Instead of hearing her mother's voice, a robotic female voice responded in broken English: 'Who you want to speak with? I'm Alyssia. Do you remember me? I think I don't know who are you,' as cited by AP. The same experience has been reported by eight other Iranians in the UK and US. 'Calling your mom and expecting to hear her voice and hearing an AI voice is one of the scariest things I've ever experienced,' said a woman in New York.

The robotic messages range from eerie to oddly comforting. One caller heard a voice calmly saying: 'Life is full of unexpected surprises, and these surprises can sometimes bring joy while at other times they challenge us.' Another message told callers to imagine peaceful places like forests or seashores, even as their families remain unreachable in a country under attack.

Iranian cybersecurity experts suggest these diversions could be a government tactic to prevent hacking or spread confusion. In the early days of the conflict, mass voice and text messages were sent to Iranian phones warning citizens to prepare for emergencies. The ministry of information and communications technology oversees Iran's phone systems, and the country's intelligence services are believed to monitor conversations. One expert said it would be difficult for anyone but the government to implement such a large-scale voice diversion system. However, some experts also speculate that Israel could be behind it, referencing similar tactics used in past military operations in Lebanon and Gaza.

For many Iranians abroad, these strange voices are not calming; they are haunting reminders of how disconnected they are from their families during a time of crisis. 'The only feeling it gives me,' said a woman in the UK, 'is helplessness.'

Elon Musk announced that his satellite internet service, Starlink, has been activated in Iran, where a limited number of people are believed to be using it despite its illegal status. Authorities have been urging citizens to report neighbors possessing the devices amid an ongoing crackdown on suspected espionage. Some Iranians also rely on illegal satellite dishes to access international news.
