HCLTech partners with Just Energy to enhance operations and customer experience

Business Upturn | 3 days ago

By Aman Shukla | Published on June 19, 2025, 14:24 IST
HCLTech, a leading global technology company, has been selected by US-based energy supplier Just Energy to elevate its operations and customer experience through advanced digital solutions. As part of the partnership, HCLTech will implement its cutting-edge Digital Process Outsourcing (DPO) suite and GenAI platform, AI Force, to optimize IT, finance, analytics, customer service, sales, and renewals functions.
To further enhance workforce collaboration and streamline business processes, HCLTech will deploy digitalCOLLEAGUE, a unified, role-specific interface, along with Toscona, its business process optimization suite. These tools are designed to boost productivity and service quality across enterprise functions.
Scott Fordham, Chief Operating Officer of Just Energy, stated, "We are confident that HCLTech's proven expertise and commitment to service excellence will help us achieve our key business objectives relating to operational efficiency and service improvements."
This collaboration underscores HCLTech's leadership in driving digital transformation in the energy sector using AI, automation, and next-gen tech platforms.
Aman Shukla is a post-graduate in mass communication and a media enthusiast with a strong command of communication, content writing and copywriting. Aman is currently working as a journalist at BusinessUpturn.com.


Related Articles

NTPC commissions final 52 MW at Nokh Solar Project, total commercial capacity hits 60,318 MW

Business Upturn | 37 minutes ago

By Aman Shukla | Published on June 22, 2025, 10:47 IST

NTPC Ltd., India's largest power utility, has announced the successful commissioning of the final 52 MW of capacity at its Nokh Solar PV Project (Plot-3), taking the project's operational output to its full planned capacity of 245 MW. The latest segment, commissioned commercially at midnight on June 22, 2025, marks the second and last part of the Plot-3 facility. The first part, comprising 193 MW, had already achieved commercial operation on June 9, 2025.

With this milestone, NTPC's total standalone commercial capacity has risen to 60,318 MW, while its group commercial capacity now stands at 81,420 MW. The company's total installed capacity has reached 60,978 MW (standalone) and 82,080 MW (group).

Located in Rajasthan, the Nokh Solar PV Project is part of NTPC's broader push towards renewable energy, in line with India's clean energy transition goals. The 3×245 MW facility represents a significant investment in solar infrastructure, reinforcing NTPC's role in the nation's energy transformation.

Europe Frets About US Pullout After NATO Allies Bolster Spending

Bloomberg | 2 hours ago
NATO's European allies are focused on getting through this week's summit unscathed. But even if President Donald Trump is satisfied with fresh pledges to ramp up spending, anxiety is growing about the US military presence in the region. Only after the June 24-25 summit meeting in The Hague – where North Atlantic Treaty Organization members will pledge to spend 5% of GDP on defense – will the US present its military review, which will spell out the scope of what are likely significant reductions in Europe.

Encountered a problematic response from an AI model? More standards and tests are needed, say researchers

CNBC | 2 hours ago

As the usage of artificial intelligence, both benign and adversarial, increases at breakneck speed, more cases of potentially harmful responses are being uncovered. These include hate speech, copyright infringement and sexual content. The emergence of these undesirable behaviors is compounded by a lack of regulation and insufficient testing of AI models, researchers told CNBC.

Getting machine learning models to behave the way they were intended to is also a tall order, said Javier Rando, a researcher in AI. "The answer, after almost 15 years of research, is, no, we don't know how to do this, and it doesn't look like we are getting better," Rando, who focuses on adversarial machine learning, told CNBC.

However, there are some ways to evaluate risks in AI, such as red teaming. The practice involves individuals testing and probing artificial intelligence systems to uncover and identify potential harms, a modus operandi common in cybersecurity circles.

Shayne Longpre, a researcher in AI and policy and lead of the Data Provenance Initiative, noted that there are currently too few people working in red teams. While AI startups now use first-party evaluators or contracted second parties to test their models, opening the testing to third parties such as ordinary users, journalists, researchers and ethical hackers would lead to more robust evaluation, according to a paper published by Longpre and fellow researchers.

"Some of the flaws in the systems that people were finding required lawyers, medical doctors to actually vet, actual scientists who are specialized subject matter experts to figure out if this was a flaw or not, because the common person probably couldn't or wouldn't have sufficient expertise," Longpre said.

Adopting standardized "AI flaw" reports, along with incentives and channels for disseminating information about these flaws, is among the recommendations put forth in the paper.

With this practice having been successfully adopted in other sectors such as software security, "we need that in AI now," Longpre added. Marrying this user-centred practice with governance, policy and other tools would ensure a better understanding of the risks posed by AI tools and their users, said Rando.

Project Moonshot is one such approach, combining technical solutions with policy mechanisms. Launched by Singapore's Infocomm Media Development Authority, Project Moonshot is a large language model evaluation toolkit developed with industry players such as IBM and Boston-based DataRobot. The toolkit integrates benchmarking, red teaming and testing baselines. There is also an evaluation mechanism that allows AI startups to ensure their models can be trusted and do no harm to users, Anup Kumar, head of client engineering for data and AI at IBM Asia Pacific, told CNBC.

Evaluation is a continuous process that should be carried out both before and after the deployment of models, said Kumar, who noted that the response to the toolkit has been mixed. "A lot of startups took this as a platform because it was open source, and they started leveraging that. But I think, you know, we can do a lot more." Moving forward, Project Moonshot aims to include customization for specific industry use cases and to enable multilingual and multicultural red teaming.

Pierre Alquier, Professor of Statistics at the ESSEC Business School, Asia-Pacific, said that tech companies are currently rushing to release their latest AI models without proper evaluation. "When a pharmaceutical company designs a new drug, they need months of tests and very serious proof that it is useful and not harmful before they get approved by the government," he noted, adding that a similar process exists in the aviation sector. AI models should likewise meet a strict set of conditions before they are approved, Alquier added.

A shift away from broad AI tools towards ones designed for more specific tasks would make it easier to anticipate and control misuse, said Alquier. "LLMs can do too many things, but they are not targeted at tasks that are specific enough," he said. As a result, "the number of possible misuses is too big for the developers to anticipate all of them." Such broad models also make it difficult to define what counts as safe and secure, according to research Rando was involved in. Tech companies should therefore avoid overclaiming that "their defenses are better than they are," said Rando.
