
Latest news with #GoogleAIStudio

This WMass college is offering free course in AI essentials

Yahoo

13-06-2025


HOLYOKE — Holyoke Community College and the nonprofit CanCode Communities will partner to offer a free course on artificial intelligence this summer. 'AI Essentials,' a real-time, instructor-led online training program, will run on Tuesdays and Thursdays, June 24 to Sept. 11, from 5:45 to 8:45 p.m. The class is free for eligible Massachusetts residents.

Over 12 weeks, participants will learn the fundamentals of AI, including prompt engineering, tokenization, embeddings, model structures, retrieval-augmented generation, agency, compute and ethics. The course emphasizes practical applications, leveraging tools such as Google AI Studio, n8n, and OpenWebUI to explore how AI models are built, trained, and deployed in the real world.

'Along the way, participants will gain valuable professional development experience, enhancing their technical skills and problem-solving abilities,' said Arvard Lingham, HCC executive director of community education and corporate training.

Limited seats are available. Laptops and WiFi hotspots for Internet access will be provided for students who need them. Funding for the program comes from the Western Mass Alliance for Digital Equity. To sign up for classes, send an email to admissions@ or go to and choose 'AI Essentials.' Read the original article on MassLive.

Holyoke Community College to offer free course in AI essentials

Yahoo

11-06-2025


HOLYOKE, Mass. (WWLP) – Holyoke Community College (HCC) is offering a free 12-week training course this summer on artificial intelligence. The program, titled 'AI Essentials,' is being launched in partnership with the non-profit organization CanCode Communities. The class will run on Tuesdays and Thursdays, June 24 to September 11, from 5:45 p.m. to 8:45 p.m.

Participants will get the opportunity to learn about the practical applications of AI, such as prompt engineering, tokenization, model structures, ethics, and more. They will also learn to use tools including Google AI Studio, n8n, and OpenWebUI to delve further into how AI models are built and trained for real-world use.

'Along the way, participants will gain valuable professional development experience, enhancing their technical skills and problem-solving abilities,' said Arvard Lingham, HCC Executive Director of Community Education and Corporate Training.

The class is free to eligible Massachusetts residents, with tuition assistance available for qualified residents age 18 and older. Limited seats are offered, and laptops and WiFi hotspots for Internet access will be provided for students who require them. The program is funded by the Western Mass Alliance for Digital Equity. Those interested in signing up for the class can email admissions@ or visit

WWLP-22News, an NBC affiliate, began broadcasting in March 1953 to provide local news, network, syndicated, and local programming to western Massachusetts. Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

AI may turn legacy codebases into liabilities: Zoho founder Sridhar Vembu

Time of India

23-05-2025


The foundations of modern software development may be under threat as generative artificial intelligence evolves rapidly, according to Sridhar Vembu, founder of software-as-a-service (SaaS) company Zoho. In a series of posts on social media platform X, Vembu said that large, existing codebases, long considered prized assets, could soon become burdens.

'If AI makes us 100x productive, why not rewrite the whole thing with AI help?' Vembu asked, highlighting a potential shift in enterprise software strategy. With generative AI tools now able to write new code at high velocity, companies may find it more efficient to start from scratch rather than maintain legacy systems.

This view counters a common concern in the tech industry that AI is currently poor at navigating complex, existing code structures. Vembu acknowledged the limitation but suggested it will not last. 'It is not there yet, but perhaps not for long,' he said. 'Viewed that way, large existing codebases may no longer be assets. They may be liabilities.'

He pointed to AI-powered tools such as Zoho Creator and Google AI Studio, which are now capable of building standard CRUD (create, read, update, delete) applications with minimal human input.

Having recently stepped back from his role as chief executive officer to focus on research, Vembu has been vocal about AI's growing role in software development. He has predicted that generative AI could eventually handle up to 90% of coding tasks, especially the boilerplate code that consumes much of a developer's time. However, he noted that essential complexity, such as innovative and creative work, would still require human input.

The Zoho founder warned developers not to count on high salaries or long-term job security.
As AI reshapes the software development landscape, he urged the industry to stay vigilant or risk becoming obsolete.

Google unveils Gemini 2.5 upgrades for reasoning & security

Techday NZ

22-05-2025


Google has provided a series of updates to its Gemini 2.5 model series, with enhancements spanning advanced reasoning, developer capabilities and security safeguards.

The company reported that Gemini 2.5 Pro is now the leading model on the WebDev Arena coding leaderboard, holding an Elo score of 1415. It also leads across all leaderboards in LMArena, a platform that measures human preferences in multiple dimensions. Additionally, Gemini 2.5 Pro's 1 million-token context window was highlighted as supporting strong long-context and video understanding performance.

Integration with LearnLM, a family of models developed with educational experts, reportedly made Gemini 2.5 Pro the foremost model for learning. According to Google, in direct comparisons focusing on pedagogy and effectiveness, Gemini 2.5 Pro was favoured by educators and experts over other models in a wide range of scenarios. The model outperformed others based on the five principles of learning science used in AI system design for education.

Gemini 2.5 Pro introduced an experimental capability called Deep Think, which is being tested to enable enhanced reasoning by allowing the model to consider multiple hypotheses before responding. The company said, "2.5 Pro Deep Think gets an impressive score on 2025 USAMO, currently one of the hardest math benchmarks. It also leads on LiveCodeBench, a difficult benchmark for competition-level coding, and scores 84.0% on MMMU, which tests multimodal reasoning."

Safety and evaluation measures are being emphasised with Deep Think. "Because we're defining the frontier with 2.5 Pro DeepThink, we're taking extra time to conduct more frontier safety evaluations and get further input from safety experts. As part of that, we're going to make it available to trusted testers via the Gemini API to get their feedback before making it widely available," the company reported.
Google announced improvements to 2.5 Flash, describing it as the most efficient model in the series, tailored for speed and cost efficiency. This version now reportedly uses 20-30% fewer tokens in evaluations and delivers improved performance across benchmarks for reasoning, multimodality, code, and long-context tasks. The updated 2.5 Flash is now available for preview in Google AI Studio, Vertex AI, and the Gemini app.

New features have also been added to the Gemini 2.5 series. The Live API now offers a preview version supporting audio-visual input and native audio output, designed to create more natural and expressive conversational experiences. According to Google, "It also allows the user to steer its tone, accent and style of speaking. For example, you can tell the model to use a dramatic voice when telling a story. And it supports tool use, to be able to search on your behalf."

Early features in this update include Affective Dialogue, where the model can detect and respond to emotions in a user's voice; Proactive Audio, which enables the model to ignore background conversations and determine when to respond; and enhanced reasoning in Live API use. Multi-speaker support has also been introduced for text-to-speech capabilities, allowing audio generation with two distinct voices and support for over 24 languages, including seamless transitions between them.

Project Mariner's computer use capabilities are being integrated into the Gemini API and Vertex AI, with multiple enterprises testing the tool. Google stated, "Companies like Automation Anywhere, UiPath, Browserbase, Autotab, The Interaction Company and Cartwheel are exploring its potential, and we're excited to roll it out more broadly for developers to experiment with this summer."

On the security front, Gemini 2.5 includes advanced safeguards against indirect prompt injections, which involve malicious instructions embedded in retrieved data.
According to disclosures, "Our new security approach helped significantly increase Gemini's protection rate against indirect prompt injection attacks during tool use, making Gemini 2.5 our most secure model family to date."

Google is introducing new developer tools with thought summaries in the Gemini API and Vertex AI. These summaries convert the model's raw processing into structured formats with headers and action notes. Google stated, "We hope that with a more structured, streamlined format on the model's thinking process, developers and users will find the interactions with Gemini models easier to understand and debug."

Additional features include thinking budgets for 2.5 Pro, allowing developers to control the model's computation resources to balance quality and speed, or to completely disable the model's advanced reasoning capability if desired.

Model Context Protocol (MCP) support has been added for SDK integration, aiming to enable easier development of agentic applications using both open-source and hosted tools.

Google affirmed its intention to sustain research and development efforts as the Gemini 2.5 series evolves, stating, "We're always innovating on new approaches to improve our models and our developer experience, including making them more efficient and performant, and continuing to respond to developer feedback, so please keep it coming! We also continue to double down on the breadth and depth of our fundamental research — pushing the frontiers of Gemini's capabilities. More to come soon."
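As a rough illustration of how the thinking budgets mentioned above are expressed in practice, the Gemini API accepts the setting inside a request's generation config. The fragment below is a hedged sketch, not an official example: it assumes the public v1beta REST field names (`generationConfig`, `thinkingConfig`, `thinkingBudget`, `includeThoughts`), and the prompt text is purely illustrative.

```json
{
  "contents": [
    {
      "role": "user",
      "parts": [{ "text": "Explain indirect prompt injection in two sentences." }]
    }
  ],
  "generationConfig": {
    "thinkingConfig": {
      "thinkingBudget": 1024,
      "includeThoughts": true
    }
  }
}
```

On this reading, `thinkingBudget` caps the tokens the model may spend on internal reasoning before it answers, and a budget of 0 would correspond to the article's option of disabling the advanced reasoning step entirely.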

Google announces major Gemini AI upgrades & new dev tools

Techday NZ

22-05-2025


Google has unveiled a range of updates to its developer products, aimed at improving the process of building artificial intelligence applications. Mat Velloso, Vice President, AI/ML Developer at Google, stated, "We believe developers are the architects of the future. That's why Google I/O is our most anticipated event of the year, and a perfect moment to bring developers together and share our efforts for all the amazing builders out there. In that spirit, we updated Gemini 2.5 Pro Preview with even better coding capabilities a few weeks ago. Today, we're unveiling a new wave of announcements across our developer products, designed to make building transformative AI applications even better."

The company introduced an enhanced version of its Gemini 2.5 Flash Preview, described as delivering improved performance on coding and complex reasoning tasks while optimising for speed and efficiency. This model now includes "thought summaries" to increase transparency in its decision-making process, and its forthcoming "thinking budgets" feature is intended to help developers manage costs and exercise more control over model outputs. Both Gemini 2.5 Flash versions and 2.5 Pro are available in preview within Google AI Studio and Vertex AI, with general availability for Flash expected in early June, followed by Pro.

Among the new models announced is Gemma 3n, designed to function efficiently on personal devices such as phones, laptops, and tablets. Gemma 3n can process audio, text, image, and video inputs and is available for preview on Google AI Studio and Google AI Edge.

Also introduced is Gemini Diffusion, a text model that reportedly generates outputs at five times the speed of Google's previous fastest model while maintaining coding performance. Access to Gemini Diffusion is currently by waitlist.

The Lyria RealTime model was also detailed. This experimental interactive music generation tool allows users to create, control, and perform music in real time.
Lyria RealTime can be accessed via the Gemini API and trialled through a starter application in Google AI Studio.

Several additional variants of the Gemma model family were announced, targeting specific use cases. MedGemma is described as the company's most capable multimodal medical model to date, intended to support developers creating healthcare applications such as medical image analysis. MedGemma is available now via the Health AI Developer Foundations programme. Another upcoming model, SignGemma, is designed to translate sign languages into spoken-language text, currently optimised for American Sign Language to English. Google is soliciting feedback from the community to guide further development of SignGemma.

Google outlined new features intended to facilitate the development of AI applications. A new, more agentic version of Colab will enable users to instruct the tool in plain language, with Colab subsequently taking actions such as fixing errors and transforming code automatically. Meanwhile, Gemini Code Assist, Google's free AI coding assistant, and its associated code review agent for GitHub, are now generally available to all developers. These tools are now powered by Gemini 2.5 and will soon offer a two-million-token context window for standard and enterprise users on Vertex AI.

Firebase Studio was presented as a new cloud-based workspace supporting rapid development of AI applications. Notably, Firebase Studio now integrates with Figma via a plugin, supporting the transition from design to app. It can also automatically detect and provision necessary back-end resources.

Jules, another tool now generally available, is an asynchronous coding agent that can manage bug backlogs, handle multiple tasks, and develop new features, working directly with GitHub repositories and creating pull requests for project integration.
A new offering called Stitch was also announced, designed to generate frontend code and user interface designs from natural language descriptions or image prompts, supporting iterative and conversational design adjustments with easy export to web or design platforms.

For those developing with the Gemini API, updates to Google AI Studio were showcased, including native integration with Gemini 2.5 Pro and optimised use with the GenAI SDK for instant generation of web applications from input prompts spanning text, images, or videos. Developers will find new models for generative media alongside enhanced code editor support for prototyping.

Additional technical features include proactive video and audio capabilities, affective dialogue responses, and advanced text-to-speech functions that enable control over voice style, accent, and pacing. The model updates also introduce asynchronous function calling to enable non-blocking operations, and a Computer Use API that will allow applications to browse the web or utilise other software tools under user direction, initially available to trusted testers.

The company is also rolling out URL context, an experimental tool for retrieving and analysing contextual information from web pages, and announcing support for the Model Context Protocol in the Gemini API and SDK, aiming to facilitate the use of a broader range of open-source developer tools.
