Latest news with #computerVision
Yahoo
3 days ago
- Health
- Yahoo
Remarkable new AI can tell your age by looking at your eyes
One of the most impressive capabilities of generative AI software like ChatGPT right now is enhanced computer vision: AI that can understand and interpret data from images. That's why we now have such advanced image and video generation models in ChatGPT, Gemini, Firefly, and other AI software. Models like ChatGPT o3 can accurately guess the location of an image by analyzing its details. Google offers advanced photo editing tools in its Photos app, and also directly in Gemini. These tools let you alter real photos in ways that weren't possible before.

These image-related AI capabilities aren't just used to generate memes or melt OpenAI's servers. Researchers are developing AI models that can interpret images for various purposes, including medicine. The latest study showing such advancements comes from China, where researchers from several universities have been able to determine a person's age with high accuracy by having AI inspect an image of their retina. The readings also showed differences between the person's age and the eye's age. The researchers found that the retinal age gap the AI provides can be especially helpful for women: a simple retinal scan might help doctors offer better support to couples trying to conceive and to women at risk of early menopause.

Retinal fundus imaging, or a photo of the back of the eye, lets doctors see microvascular features that reflect systemic aging. An AI trained on thousands of images can then estimate the eye's age and compare it to the person's actual age. The scientists used an AI called Frozen and Learning Ensemble Crossover (FLEX) to 'predict retinal age from fundus images with high precision.' They fed FLEX over 20,000 eye photos from more than 10,000 adults of all ages to teach it how the back of the eye looks as people age. FLEX also analyzed over 2,500 images from nearly 1,300 pre-menopausal women. The AI was then able to estimate a person's age by examining a retinal fundus photo. If the eye appears older than the woman's actual age, the retinal age gap is positive. That could also mean other organs in the body are aging faster.

The implications for reproductive health are clear. Fertility and menopause issues could benefit directly from such an AI screening tool. The researchers linked a larger retinal age gap to lower blood levels of anti-Müllerian hormone (AMH), a marker of ovarian reserve; the lower the AMH value, the harder it is for older women to conceive. The scientists studied women ages 40 to 50 and found that each additional retinal year raised the risk of a low AMH result: by 12% in the 40-44 age group and by 20% in the 45-50 group. The study also found that having more childbirths at younger ages was associated with lower-than-average AMH levels. Each additional retinal year increased the risk of developing menopause before age 45 by 36%, according to the paper. We're still in the early days of using AI for medical imaging, but the study shows promise for using a simple, non-invasive technique to improve reproductive health protocols.
Imagine getting a retinal scan in your late 20s or early 30s to help decide whether to get pregnant or freeze your eggs. Similarly, women over 40 concerned about pre-menopause or menopause could use an eye scan to check their retinal age and assess the risk of early symptoms. This might help them prepare for the years ahead with hormonal therapies to delay or ease symptoms. For any of this to happen, the conclusions from Hanpei Miao and colleagues would need to be confirmed by further research. Separately, the FLEX AI model used in this study could be explored for other health conditions where eye scans might serve as early indicators of age-related health risks. The full study is available in Nature.
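The reported figures boil down to simple arithmetic: a retinal age gap (predicted retinal age minus chronological age) and a per-retinal-year relative risk increase. As a rough, hypothetical illustration only, and not the authors' code or the FLEX model, the Python sketch below computes a gap and scales the risk percentages quoted above; the function names, the baseline risk, and the compounding assumption are ours.

```python
# Illustrative sketch (not the FLEX model): compute a retinal age gap and
# scale the per-retinal-year risk increases reported in the article.

def retinal_age_gap(predicted_retinal_age: float, chronological_age: float) -> float:
    """Positive gap means the retina looks older than the person's actual age."""
    return predicted_retinal_age - chronological_age

def scaled_risk(baseline_risk: float, gap_years: float, increase_per_year: float) -> float:
    """Compound a per-retinal-year relative risk increase over the age gap (assumed multiplicative)."""
    return baseline_risk * (1.0 + increase_per_year) ** max(gap_years, 0.0)

if __name__ == "__main__":
    # Hypothetical values: a 44-year-old whose retina is predicted to be 47.5 years old.
    gap = retinal_age_gap(predicted_retinal_age=47.5, chronological_age=44.0)
    # Article figures: +12% per retinal year (ages 40-44) and +20% (ages 45-50) for a low
    # AMH result, +36% per retinal year for menopause before 45. Baseline risk of 10% is assumed.
    print(f"Retinal age gap: {gap:.1f} years")
    print(f"Low-AMH risk (40-44 band): {scaled_risk(0.10, gap, 0.12):.1%}")
    print(f"Early-menopause risk:      {scaled_risk(0.10, gap, 0.36):.1%}")
```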


Forbes
13-06-2025
- Business
- Forbes
What Is ‘Physical AI'? Inside The Push To Make AI Understand The Real World
What happens when AI enters the physical world — predicting actions, spotting risks and transforming how machines understand real-time events? For years, AI has been great at seeing things. It can recognize faces, label objects and summarize the contents of a blurry image better than most humans. But ask it to explain why a person is pacing nervously near a fence, or to predict what might happen next in a crowded room, and suddenly the illusion of intelligence falls apart. Add to this the fact that AI largely remains a black box, with engineers still struggling to explain why models behave erratically or how to correct them, and you can see the big dilemma facing the industry today.

That's where a growing wave of researchers and startups believe the next leap lies: not just in faster model training or flashier generative outputs, but in machines that truly understand the physical world — the way it moves, reacts and unfolds in real time. They're calling it 'physical AI'. The term was popularized by Nvidia CEO Jensen Huang, who has called physical AI the next AI wave, describing it as 'AI that understands the laws of physics,' moving beyond pixel labeling to an awareness of space, motion and interaction. At its core, physical AI merges computer vision, physics simulation and machine learning to teach machines cause and effect. It enables AI systems not just to recognize objects or people, but to understand how they interact with their surroundings — like how a person's movement might cause a door to swing open or how a ball might bounce off a wall.

At Lumana, a startup backed by global venture capital and growth equity firm Norwest, that phrase isn't just branding; it's a full-blown product shift. Known for AI video analytics, the company is now training its models not only to detect motion, but to recognize human behavior, interpret intent and automatically generate real-time alerts. 'We define physical AI as the next evolution of video intelligence,' Lumana CEO Sagi Ben-Moshe said in an interview. 'It's no longer just about identifying a red car or a person in a hallway — it's about inferring what might happen next, and taking meaningful action in real-world conditions.' In one real-world deployment, Lumana's system flagged a possible assault after detecting unusual body language and close proximity between two men and a pair of unattended drinks, prompting an alert that allowed staff to step in before anything escalated. In another case, it caught food safety violations in real time, including workers skipping handwashing, handling food without gloves and leaving raw ingredients out too long. These weren't issues discovered after the fact, but ones the system caught as they unfolded. This kind of layered inference, Ben-Moshe explained, transforms cameras into 'intelligent sensors.'

It's no coincidence that Huang has linked physical AI to embodied intelligence and real-world simulation. It reflects a broader industry shift toward AI systems that better understand the laws of physics and can reason more intelligently. Physics, in this context, is shorthand for cause and effect — the ability to reason about motion, force and interaction, not just appearances. That framing resonated with investors at Norwest, who incubated Lumana during its earliest phase. 'You can't build the future of video intelligence by just detecting objects,' said Dror Nahumi, a general partner at Norwest.
'You need systems that understand what's happening, in context, and can do it better than a human watching a dozen screens. In many cases, businesses also need this information in real time.'

Norwest isn't alone. Other players, from Hakimo to Vintra, are exploring similar territory — using AI to spot safety violations in manufacturing, detect loitering in retail, or prevent public disturbances before they escalate. For example, Hakimo recently built an autonomous surveillance agent that prevented assaults, identified vandalism and even saved a collapsed individual using live video feeds and AI. At Nvidia GTC in March, Nvidia demoed robotic agents learning to reason about gravity and spatial relationships directly from environment-based training, echoing the same physical reasoning that Lumana is building into its surveillance stack. And just yesterday, Meta announced the release of V-JEPA 2, 'a self-supervised foundation world model to understand physical reality, anticipate outcomes and plan efficient strategies.' As Michel Meyer, group product manager at the Core Learning and Reasoning arm of the company's Fundamental AI Research group, noted on LinkedIn, quoting Meta chief AI scientist Yann LeCun: 'this represents a fundamental shift toward AI systems that can reason, plan, and act through physical world models. To reach advanced machine intelligence, AI must go beyond perception and understand how the physical world works — anticipating dynamics, causality, and consequences. V-JEPA 2 does just that.'

When asked what the real-world impact of physical AI might look like, Nahumi noted that it's more than mere marketing. 'Anyone can detect motion, but if you want real AI in video surveillance, you must go beyond that to understand context.' He sees Lumana's full-stack, context-driven architecture as a foundation and not a vanity pitch. 'We think there's a big business here and the technology is now reliable enough to augment and outperform humans in real time,' he told me.

The reality is that the success of physical-AI systems will not be just about the technology. As AI continues to advance, it's becoming much clearer that the success of most AI systems largely hinges on ethics, trust and accountability. Put differently, trust is the currency of AI success. And the big question that companies must continue to answer is: can we trust your AI system to be safe? In a security context, false positives can shut down sites or wrongly accuse innocent people. In industrial settings, misinterpreted behavior could trigger unnecessary alarms. Privacy is another concern. While many physical AI systems operate on private premises — factories, campuses, hotels — critics warn that real-time behavior prediction, if left unchecked, could drift into mass surveillance. As Ben-Moshe himself acknowledged, this is powerful technology that must be used with guardrails, transparency and explicit consent. But, according to Nahumi, Lumana's multi-tiered model delivers actionable alerts while also protecting privacy and supporting seamless integration into existing systems. 'Lumana engineers systems that layer physical AI on current infrastructure with minimal friction,' he noted, 'ensuring operators aren't overwhelmed by false positives.'

Despite these questions, demand is accelerating. Retailers want to track foot traffic anomalies. Municipalities want to prevent crime without expanding staff. Manufacturers want safety compliance in real time, not post-event reviews.
In every case, the challenge is the same: too many cameras, too little insight. And that's the business case behind physical AI. As Norwest's Nahumi put it, 'We're seeing clear ROI signals — not just in avoided losses, but in operational efficiency. This is no longer speculative deep tech. It's a platform bet.'

That bet hinges on systems that are scalable, adaptable and cost-effective. Lumana's approach, which layers physical AI on top of existing camera infrastructure, avoids the 'rip-and-replace' problem and keeps adoption friction low. Nahumi pointed to rising enterprise demand across retail, manufacturing, hospitality and public safety — fields where video footage is ubiquitous, but analysis remains manual and inefficient. And even across boardrooms and labs, the appetite for machines that 'understand' rather than 'observe' is growing. That's why companies like Norwest, Nvidia, Hakimo and Lumana are doubling down on physical AI. 'In five years,' Ben-Moshe envisions, 'physical AI will do more than perceive — it will suggest actions, predict events and give safety teams unmatched visibility.' This, he noted, is about systems that not only see, but also act.

Ultimately, the goal of physical AI isn't just to help machines see better — it's to help them understand what they're seeing. It's to help them perceive, understand and reason in the messy physical world we inhabit. Ben-Moshe envisions a future where physical AI suggests actions, prevents escalation and even predicts incidents before they unfold. 'Every second of video should generate insight,' he said. 'We want machines to reason about the world as a system — like particles tracing possible paths in physics — and highlight the most likely, most helpful outcome.'

That's a far cry from today's basic surveillance. From thwarting crime and preventing accidents to uncovering new operational insights and analyzing activity trends, reasoning engines over cameras promise real, demonstrable value. But scaling them is where the real work is. It'll require systems that are accurate, ethical, auditable and trustworthy. If that balance is struck, we could enter a world where AI won't just help us see what happened, but help us know what matters most.
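To make the 'layered inference' idea concrete, here is a minimal, hypothetical sketch of the pattern described above: object and person detections feed a second layer that interprets behavior in context and raises alerts. It uses plain Python with hand-written rules standing in for learned models; the labels, thresholds, and scenario are invented, and this is not Lumana's architecture or code.

```python
# Hypothetical sketch of layered inference: detections -> behavior interpretation -> alert.
# Hand-written rules stand in for the learned detection and behavior models.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str            # e.g. "person", "unattended_drink"
    x: float               # position in the frame (arbitrary units)
    y: float
    dwell_seconds: float    # how long this object has been tracked

def proximity(a: Detection, b: Detection) -> float:
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5

def behavior_alerts(detections: list[Detection]) -> list[str]:
    """Second layer: interpret detections in context and emit alert messages."""
    alerts = []
    people = [d for d in detections if d.label == "person"]
    drinks = [d for d in detections if d.label == "unattended_drink"]
    # Rule standing in for a learned model: a person lingering close to an unattended drink.
    for p in people:
        for drink in drinks:
            if proximity(p, drink) < 2.0 and p.dwell_seconds > 60:
                alerts.append("possible tampering near unattended drink")
    # Rule: anyone loitering near the perimeter (small y) for several minutes.
    for p in people:
        if p.y < 1.0 and p.dwell_seconds > 300:
            alerts.append("loitering near perimeter")
    return alerts

if __name__ == "__main__":
    frame = [
        Detection("person", 3.0, 0.5, 400.0),
        Detection("person", 5.0, 4.0, 90.0),
        Detection("unattended_drink", 5.5, 4.2, 120.0),
    ]
    for alert in behavior_alerts(frame):
        print("ALERT:", alert)
```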
Yahoo
10-06-2025
- Business
- Yahoo
Trigo Retail Launches Computer Vision-AI Powered Loss Prevention Solution
Redefining Shrinkage Control In-Store

LONDON, June 10, 2025 /PRNewswire/ -- Trigo Vision Ltd., a leading provider of computer vision AI technology, today announces the launch of its AI-driven loss prevention solution. The solution addresses the growing challenges of retail theft and inventory shrinkage, which result in estimated losses of over $130 billion annually, with shoplifting incidents up 93% compared to pre-COVID levels. Building on its proven computer vision technology—already deployed at autonomous stores by some of the world's largest retailers, including Tesco in the UK and REWE in Germany—Trigo's latest solution offers a capex-free approach to combat retail loss. The solution uniquely compares each scanned item with what the shopper picks up, identifying any mismatches in real time. If an item is taken but not scanned, an alert is triggered at checkout—maintaining customer privacy and a frictionless shopping experience.

Enhanced Detection That Improves the Customer Experience

While many retailers have "eyes"—CCTV cameras—Trigo's computer vision AI acts as a brain. The platform tracks shoppers as anonymised figures and identifies which items are picked up—particularly from high-theft areas—then cross-references them against what's scanned at checkout, whether the items are visible or concealed. Unlike traditional systems that focus only on checkout, Trigo addresses a key blind spot: most shoplifters conceal items in-store, long before reaching the tills. Trigo is the only solution that identifies these actions in real time, delivering instant alerts to store security—across all checkout methods, including self-checkout, manned tills, or Scan&Go—while the shopper is still on-site.

Privacy-First Approach

Designed with privacy as a top priority, Trigo's solution never uses, collects, or stores any biometric data. Most importantly, the technology is frictionless, ensuring a seamless shopping experience for honest shoppers.

Rapid Deployment Using Existing Infrastructure

Trigo's solution leverages existing CCTV infrastructure within stores, eliminating the need for significant capital investment in new hardware. Implementation is straightforward, requiring only a connection to existing Network Video Recorder (NVR) systems and integration with the Point of Sale (POS) system. This enables rapid deployment, minimal disruption to operations, and instant ROI.

Daniel Gabay, CEO of Trigo, commented: "Trigo's mission is to empower retailers with cutting-edge Computer Vision AI technology to address the sector's biggest challenges. With retail theft on the rise, we are proud to launch a solution that integrates easily into existing estates and delivers quick and efficient loss prevention, along with an improved experience for both retailers and customers."

About Trigo: Trigo Retail is a world leader in Computer Vision AI technology, working with leading retailers to tackle some of the sector's most complex challenges. Powered by proprietary technology, Trigo's platform processes over 5 million shopping activities every month with unmatched accuracy—all while maintaining a strict privacy-by-design approach. The Company's CVaaP (Computer Vision as a Product) platform offers vision-based and data-driven advanced retail solutions, such as loss prevention, retail intelligence, fully autonomous stores, and more.
SOURCE Trigo Retail
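The core check the release describes, comparing what a shopper picks up against what is scanned, reduces to a multiset difference. The sketch below is a simplified, hypothetical illustration in Python rather than Trigo's implementation; the item identifiers and the alert hook are assumptions.

```python
# Simplified illustration of the pick-vs-scan comparison described above.
# Not Trigo's implementation; item IDs and the alert mechanism are made up.
from collections import Counter

def checkout_mismatches(picked_items: list[str], scanned_items: list[str]) -> Counter:
    """Return items (and counts) that were picked up but never scanned."""
    return Counter(picked_items) - Counter(scanned_items)

def checkout(picked_items: list[str], scanned_items: list[str]) -> None:
    missing = checkout_mismatches(picked_items, scanned_items)
    if missing:
        # In a real deployment this would notify store security while the shopper is still on-site.
        print("ALERT: unscanned items detected:", dict(missing))
    else:
        print("Basket matches scan; no action needed.")

if __name__ == "__main__":
    checkout(
        picked_items=["razor_blades", "whisky_0.7l", "bread"],
        scanned_items=["bread"],
    )
```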

Associated Press
09-06-2025
- Business
- Associated Press
Zedge's DataSeeds.AI Releases Foundational Dataset for Computer Vision and Generative AI in Collaboration with Perle.ai and Émet Research
Dataset establishes new AI training benchmarks as detailed in accompanying research paper

NEW YORK, NY / ACCESS Newswire / June 9, 2025 / Zedge, Inc. (NYSE American:ZDGE), $ZDGE, a leader in digital marketplaces and interactive games that provide content, enable creativity, empower self-expression and facilitate community, today announced the release of a new foundational image dataset, the DataSeeds.AI Sample Dataset (DSD), purpose-built for computer vision and generative AI model training. The dataset was created in partnership with Perle.ai and Émet Research and represents a major step forward in data-centric AI image development.

Jonathan Reich, CEO of Zedge, commented: 'The DSD release marks a significant milestone for DataSeeds.AI, whose goal is to become a major supplier to enterprises that create foundational models in need of rights-cleared, high-quality images. The DSD annotation delivers measurable improvements over legacy solutions like AWS Rekognition, setting a new benchmark for high-quality, human-aligned AI training data. DataSeeds.AI was able to assemble the DSD by leveraging GuruShots' tightly knit photographer community and their wide-ranging portfolio of photographs for high-quality AI training data. This release not only underscores the commercial potential of DataSeeds.AI as a serious contender in the evolving B2B marketplace arena for AI datasets but also highlights the natural synergies that exist with our creators across both GuruShots and the Zedge Premium marketplace. It validates our ability to turn user-generated content into scalable, enterprise-grade datasets that can generate new revenue sources for Zedge.'

The DSD comprises over 7,800 high-quality photos sourced from players of Zedge's leading photography game, GuruShots. Every image in the dataset was ranked by players of the game, and each image was subsequently annotated by expert reviewers who provided detailed descriptions of the image content. The DSD release marks a major step in building the kind of real-world, human-reviewed data that improves the veracity of modern AI models. The introduction of the DSD highlights DataSeeds.AI's capacity to meet custom image demand promptly by launching relevant GuruShots photo challenges and/or by accessing existing images from GuruShots' massive catalog. Whether it is improving generative AI models, analyzing scenes or handling edge cases, the platform offers a scalable pipeline supported by tens of thousands of photographers that can provide diverse and rights-protected images.

Ahmed Rashad, CEO of Perle, remarked: 'The partnership allowed us to apply our methodologies, which leverage domain expertise and AI, for high-quality data annotation while validating the results through comprehensive benchmarking research. We are thankful for Zedge's partnership and the meaningful contribution that DataSeeds.AI is making to the AI community. DSD is a milestone for human-aligned dataset creation.'

Freeman Lewin, CEO of Émet Research, said: 'We're deeply grateful to Zedge, DataSeeds.AI, and Perle for enabling this release. Together, we've not only demonstrated the power of data-centric AI but also introduced a best-in-class model for data to be used for AI training.
We're excited to keep supporting important AI research efforts in conjunction with industry leaders like Zedge and DataSeeds.AI.'

The release of the DSD is accompanied by an evaluative research paper titled 'Peer-Ranked Precision: Creating a Foundational Dataset for Fine-Tuning Vision Models from DataSeeds' Annotated Imagery,' which shows how training AI models with the DSD yields 70% better results compared to using typical benchmark datasets. The dataset, model weights and paper are now available to the public. The DSD was labeled through a multi-tiered process in which human experts described scenes in natural language and even outlined certain objects down to the pixel. This helps AI learn in a way that's closer to how people view and explain the world.

Technical Deep Dive: Research Findings and Differentiators

The DSD was designed to serve as a reproducible benchmark for training and fine-tuning multimodal vision-language models. It includes 7,843 high-resolution, rights-cleared photographs sourced from GuruShots, each selected through a unique in-game peer-ranking system that reflects aesthetic and compositional quality validated by a global photography community. Each image was then enhanced with multi-tiered human annotation through an expert-in-the-loop pipeline. This combination of peer review, expert annotation and visual diversity enables the DSD to provide context-rich training data that improves model grounding and multimodal comprehension. Key empirical findings are detailed in the research paper.

What makes the DSD and DataSeeds.AI uniquely valuable? This foundation positions Zedge's platform as a differentiated supplier of high-fidelity, human-reviewed datasets tailored to the evolving needs of the generative AI ecosystem.

Access the research paper here Access the DSD here

About Zedge: Zedge empowers tens of millions of consumers and creators each month with its suite of interconnected platforms that enable creativity, self-expression and e-commerce and foster community through fun competitions. Zedge's ecosystem of product offerings includes the Zedge Marketplace, a freemium marketplace offering mobile phone wallpapers, video wallpapers, ringtones, notification sounds, and pAInt, a generative AI image maker; GuruShots, 'The World's Greatest Photography Game,' a skill-based photo challenge game; and Emojipedia, the #1 trusted source for 'all things emoji.' For more information, visit: Follow us on X: @Zedge Follow us on LinkedIn

About DataSeeds.AI: DataSeeds.AI offers both on-demand and off-the-shelf image and video datasets enriched with detailed metadata, perfectly suited for AI model training. By leveraging a vast global network of creators and an extensive catalog, we provide rapid data collection and diverse content, ensuring swift, scalable solutions that accelerate AI training. For more information, visit:

About Perle: Perle provides expert data annotation and enrichment services for AI development. Leveraging a curated global network and AI-assisted workflows, Perle delivers high-quality, multimodal datasets built for real-world performance.

About Émet Research: Émet Research provides sourcing, annotation, evaluation, research, compliance, licensing, liquidity, and sales solutions for data suppliers and AI labs around the world. Through its deep partnerships and its own marketplace, Brickroad, Émet Research helps bring high-fidelity, proprietary datasets to market.

Contact: Brian Siegel, IRC, MBA, Senior Managing Director, Hayden IR, (346) 396-8696, [email protected]

SOURCE: Zedge, Inc. press release
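As a rough, hypothetical illustration of how a peer-ranked, expert-captioned dataset like the DSD might be prepared for fine-tuning, the sketch below filters image records by a peer-ranking score and pairs each image with its expert caption. The field names, file name, and threshold are assumptions; this is not the released DSD schema or the paper's pipeline.

```python
# Hypothetical sketch: select peer-ranked, expert-captioned images for a
# vision-language fine-tuning corpus. Field names and file layout are assumptions,
# not the actual DSD schema.
import csv
from pathlib import Path

def load_finetuning_pairs(metadata_csv: Path, min_peer_rank: float = 0.8):
    """Yield (image_path, caption) pairs whose peer-ranking score clears a threshold."""
    with metadata_csv.open(newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if float(row["peer_rank_score"]) >= min_peer_rank:
                yield Path(row["image_path"]), row["expert_caption"]

if __name__ == "__main__":
    meta = Path("dsd_metadata.csv")  # hypothetical metadata file
    if meta.exists():
        pairs = list(load_finetuning_pairs(meta))
        print(f"Selected {len(pairs)} image-caption pairs for fine-tuning")
    else:
        print("No metadata file found; this sketch only illustrates the selection step.")
```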


TechCrunch
29-05-2025
- Business
- TechCrunch
Buildots raises $45M to help companies track construction progress
In the construction industry, managers can easily become disconnected from what's happening on-site. Among the many tasks to juggle are staying apprised of costs, communicating with all stakeholders, and assessing risk related to aspects like contractor billing and performance. Buildots wants to change all of that through AI and computer vision.

Founded in 2018 by Roy Danon, Aviv Leibovici and Yakir Sudry, the Chicago startup offers a platform that tracks construction progress by processing images captured from 360-degree cameras mounted on managers' hard hats. The system doesn't just observe; it also forecasts. Teams can use a chatbot to ask questions about a project's status, and check a predictive tool that alerts them to possible delay risks or pacing issues that could turn into costly problems. 'It's transformative for site managers, construction executives, and other stakeholders,' said Danon, Buildots' CEO, who tells TechCrunch the company's clients include Intel and around 50 construction firms. '[They're] able to make informed decisions based on real, measurable data as opposed to information trickling in at different times from different sources and with different levels of reliability.'

To build on its momentum, Buildots has raised $45 million in a Series D funding round led by Qumra Capital, with participation from OG Venture Partners, TLV Partners, Poalim Equity, Future Energy Ventures, and Viola Growth. The new cash brings the company's total raised to $166 million. According to Danon, the capital will mainly be used to expand Buildots' product to 'cover more stages of the construction lifecycle.' The plan is to use historical data to train AI models to further benchmark — and optimize — construction project performance.

Buildots isn't the only company applying AI in the construction domain. Others include BeamUp, which is developing an AI-powered building design platform, and Versatile, which — like Buildots — captures and analyzes data across the construction site to provide a picture of construction progress.

With over 230 employees, Buildots ranks among the larger players in the space — and it's planning to expand its North American operations this year, with a focus on growing its R&D teams. '[Our] differentiation is strong due to our operations-focused platform and our approach to performance management in construction,' Danon said. 'The funding will accelerate all of [our] initiatives, but more importantly, it validates that the market is ready for the transformation that we're bringing.'
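For readers curious how a predictive pacing alert like the one described above could work in principle, here is a minimal, hypothetical sketch: given planned versus detected completion per work zone, it flags zones that would miss their deadline at the current pace. The zone data, field names, and thresholds are invented for illustration and are unrelated to Buildots' actual models.

```python
# Hypothetical sketch of a pacing/delay-risk check: compare planned vs. detected
# completion per work zone and flag zones that will miss their deadline at the
# current pace. Data and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class ZoneProgress:
    zone: str
    detected_done: float    # fraction complete, e.g. from camera-based tracking
    planned_done: float     # fraction that should be complete by now
    days_elapsed: float
    days_remaining: float

def at_risk(z: ZoneProgress, tolerance: float = 0.05) -> bool:
    """Flag a zone if its pace so far cannot finish the remaining work on time,
    or if it has already fallen behind the plan by more than the tolerance."""
    pace = z.detected_done / max(z.days_elapsed, 1e-9)       # work completed per day so far
    projected = z.detected_done + pace * z.days_remaining    # completion at that pace
    return projected < 1.0 - tolerance or z.detected_done + tolerance < z.planned_done

if __name__ == "__main__":
    zones = [
        ZoneProgress("Level 3 drywall", detected_done=0.40, planned_done=0.55,
                     days_elapsed=20, days_remaining=15),
        ZoneProgress("Lobby MEP", detected_done=0.70, planned_done=0.65,
                     days_elapsed=20, days_remaining=15),
    ]
    for z in zones:
        status = "AT RISK" if at_risk(z) else "on track"
        print(f"{z.zone}: {status}")
```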