
Google Gemini now supports video uploads for analysis
Washington: Google has rolled out an exciting update to its Gemini app, allowing users to upload videos for analysis.
This feature enables users to ask questions about video content or have Gemini describe clips, The Verge reports.
Although the update hasn't been universally rolled out yet, users on iOS and Android devices may already have access to this functionality.
Key features of video upload and analysis include:
- Video Analysis: Gemini can analyse uploaded video files and provide insights or answers to user queries.
- Question Answering: Users can ask questions about specific video content, such as identifying objects, actions, or text within the video.
- Video Player Interface: The uploaded video appears above the chat interface, allowing users to watch the clip again if needed.
Availability and limitations of the feature include:
- Platform Support: The video upload feature is currently available on iOS and Android devices, with varying availability across accounts and devices.
- Web Support: This feature is not yet live on the web version of Gemini, with users encountering a "File type unsupported" message.
- Camera Limitation: The built-in Gemini camera still doesn't support capturing video.

Related Articles


Observer
The ethics of using AI to predict patient choices

I recently attended a conference on bioethics in Switzerland, where professionals from different countries met to discuss recent topics in medical ethics, the main theme of this year's conference. Among the highlights of the meeting were several talks about the inclusion of artificial intelligence in decision-making and its ethical impact.

What caught my attention was a talk about the Personalised Patient Preference Predictor, or P4, a tool that uses machine learning to predict an individual patient's healthcare preferences. The idea is that in situations where a person is incapacitated (for example, found unconscious with no advance directive) the AI would comb through their digital footprint, including tweets, Instagram and Facebook posts, and possibly even emails, to infer their likely wishes. The system would then create a virtual copy of the individual's personality, known as a 'psychological twin', which would communicate decisions to the medical team on the person's behalf.

While this concept is technologically fascinating, it raises several pressing ethical concerns. First, it assumes that our social media presence accurately reflects our core values and long-term preferences. However, people's views are dynamic, shaped by their emotional state, life experiences, and personal growth. A sarcastic tweet or a momentary opinion shared online may not represent someone's actual end-of-life wishes.

Second, the use of AI risks introducing or amplifying bias, especially against the elderly and individuals from ethnic or religious minorities. AI systems often generalise from large datasets, which can lead to 'one-size-fits-all' assumptions that disregard cultural, spiritual, or personal nuances.

Another critical question is whether AI can truly understand or navigate the emotional and moral complexity of disagreements among family members and healthcare providers. Would it possess the empathy required to mediate a delicate conversation, or would it deliver cold logic such as: 'Grandpa is too old, his survival chances are low, so resources would be better allocated elsewhere'?

Furthermore, relying on AI for such deeply human decisions risks the deskilling of health professionals. Ethical decision-making is an essential skill developed through experience, reflection, and dialogue. If AI takes over these roles, clinicians may gradually lose the ability, or the confidence, to engage in these vital discussions.

The speaker, who advocated for the use of P4, admitted he did not fully understand how the AI makes its decisions. This lack of transparency is alarming. If we are to entrust a machine with life-or-death recommendations, we must first demand clarity and accountability in its design and operation.

In my view, while AI has a growing role in healthcare, ethical decision-making remains a human responsibility. These discussions are often fraught with disagreement, cultural sensitivity, and intense emotion, particularly when they involve questions of life and death. We are not yet ready to hand this task over to machines.


Times of Oman
Oman to enhance AI contributions to national economy
Times News Service

Muscat: Oman is actively working to enhance the contribution of artificial intelligence (AI) to the national economy by increasing the number of specialised startups and expanding research and scientific investment in this vital field.

In September 2024, the Council of Ministers approved the National Programme for Artificial Intelligence and Advanced Digital Technologies as part of a comprehensive strategic plan built on three main pillars:
1. Promoting the adoption of AI across economic and developmental sectors.
2. Localising AI technologies by supporting homegrown solutions and developing national capabilities so that Oman becomes a producer and developer of digital technologies.
3. Governing AI applications with a human-centric vision, creating a flexible regulatory environment that ensures the ethical and effective use of emerging technologies.

Hassan bin Fada Hussein Al Lawati, Head of the National Programme for AI and Advanced Digital Technologies at the Ministry of Transport, Communications and Information Technology, highlighted that the programme benefits key economic, development, and service sectors that directly affect citizens' quality of life.

In a statement to the Oman News Agency, Al Lawati noted that Oman advanced five spots in the Oxford Insights Government AI Readiness Index, ranking 45th globally out of 193 countries in 2024, up from 50th in 2023. Regionally, Oman ranks 5th in the MENA region and 4th among GCC states, with ambitions to join the global top 30.

The programme targets a 20% annual increase in the number of AI-focused tech startups, which have already grown from fewer than 10 at the programme's inception to over 25 today. Cumulative investments in AI projects have reached approximately OMR60 million over the past four years, with plans to increase investment by 20% annually.

The Ministry has also launched the 'AI Innovators' initiative in collaboration with the University of Technology and Applied Sciences to promote AI knowledge production and honour top researchers, scientific papers, and projects. Additionally, Al Lawati mentioned the 'Engineer IT with AI' competition, designed to localise and encourage generative AI innovation, empower national talent, and increase economic returns through startup creation and performance benchmarks. A specialised initiative titled 'Humanising AI' has also been introduced to ensure a balanced approach that integrates technological empowerment with human-centred service delivery and inclusive access for all segments of society.


Observer
AI's arrival at work reshaping employers' hunt for talent
TOM BARFIELD

Predictions of imminent AI-driven mass unemployment are likely overblown, but employers will seek workers with different skills as the technology matures, a top executive at global recruiter ManpowerGroup said at Paris's Vivatech trade fair.

The world's third-largest staffing firm by revenue ran a startup contest at Vivatech in which one of the contenders was building systems to hire out customisable autonomous AI 'agents', rather than humans. Their service was reminiscent of a warning last month from Dario Amodei, head of American AI giant Anthropic, that the technology could wipe out half of entry-level white-collar jobs within one to five years.

For ManpowerGroup, AI agents are "certainly not going to become our core business any time soon," the company's Chief Innovation Officer Tomas Chamorro-Premuzic said. "If history shows us one thing, it's most of these forecasts are wrong."

An International Labour Organization (ILO) report published in May found that around "one in four workers across the world are in an occupation with some degree of exposure" to generative AI models' capabilities. "Few jobs are currently at high risk of full automation," the ILO added. But the UN body also highlighted the "rapid expansion of AI capabilities since our previous study" in 2023, including the emergence of 'agentic' models more able to act autonomously or semi-autonomously and to use software such as web browsers and email.

Soft skills

Chamorro-Premuzic predicted that the introduction of efficiency-enhancing AI tools would put pressure on workers, managers and firms to make the most of the time they will save. "If what happens is that AI helps knowledge workers save 30, 40, maybe 50 per cent of their time, but that time is then wasted on social media, that's not an increase in net output," he said. Adoption of AI could give workers "more time to do creative work" or impose "greater standardisation of their roles and reduced autonomy," the ILO said.
There is general agreement that interpersonal skills and an entrepreneurial attitude will become more important for knowledge workers as their daily tasks shift towards corralling AIs. In a ManpowerGroup survey of over 40,000 employers across 42 countries published this week, employers identified ethical judgement, customer service, team management and strategic thinking as the top skills AI could not replace.

Nevertheless, training that reflects those new priorities has not increased in step with AI adoption, Chamorro-Premuzic lamented. "For every dollar you invest in technology, you need to invest eight or nine on HR, culture transformation, change management," he said.

AI hiring AI?

One of the areas where AI is transforming the world of work most rapidly is ManpowerGroup's core business of recruitment. But here candidates are adopting the tools just as quickly as recruiters and companies, disrupting the old way of doing things from the bottom up. "Candidates are able to send 500 perfect applications in one day, they are able to send their bots to interview, they are even able to game elements of the assessments," Chamorro-Premuzic said.

That extreme picture was not borne out in a survey of over 1,000 job seekers released this week by recruitment platform TestGorilla, which found just 17 per cent of applicants admitting to cheating on tests, and only some of those to using AI.

Jobseekers' use of consumer AI tools is matched by recruiters doing the same: the same TestGorilla survey found that almost two-thirds of the more than 1,000 hiring decision-makers polled used AI to generate job descriptions and screen applications.

Where employers today focus on candidates' skills over credentials, Chamorro-Premuzic predicted that "the next evolution is to focus on potential, not even skills: even if I know the skills you bring to the table today, they might be obsolete in six months." — AFP