I see, I hear, I speak, I read

Time of India, 3 days ago

Malaya Rout works as Director of Data Science with Exafluence in Chennai. He is an alumnus of IIM Calcutta. Before Exafluence, he worked with TCS, LatentView Analytics and Verizon. He takes pride in sharing his knowledge and insights on diverse topics in Data Science with colleagues and aspiring data scientists.

I am amused at how we have started referring to traditional AI/ML as 'traditional'. I am equally amazed at the existence of 'traditional' LLMs. How fast do you want us to move? The so-called traditional LLMs are entirely text-based. The not-so-traditional LLMs are multimodal by nature: they handle images, video and audio as well as textual inputs and outputs.

When you ask an LLM to write a poem and it generates a creatively crafted one, that is unimodal (such a model is called a text-only LLM; 'unimodal' is the technically correct term). When you upload an image and ask the LLM whether a person appears in it, that is multimodal (the output is text). When you upload a picture and ask the LLM to change the background from red to yellow, and it returns the edited image, that is multimodal (the output is an image). When you instruct the LLM to create a specific image for you, that is also multimodal. Using audio or video instead of an image is likewise multimodal.

Encoders take text, images and audio and transform them into a mathematical format the AI can work with. This is akin to translating everything into a common language. The fusion module then uses an input projector to integrate the various types of processed information into a single, unified representation. The numeric representation of a cat's image, the numeric representation of the word 'cat', the numeric representation of the sound of someone saying 'cat', and that of a description of what a cat does are all related. A multimodal LLM is closer to reality than a unimodal one.
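The encode-project-fuse pipeline described above can be sketched in a few lines of plain Python. This is a toy illustration only: the random matrices stand in for learned projection weights, and all names and dimensions below are assumptions for the sketch, not any real model's API.

```python
import math
import random

random.seed(0)

DIM_TEXT, DIM_IMAGE, DIM_AUDIO = 6, 12, 9   # raw encoder output sizes (toy)
DIM_SHARED = 4                               # size of the common embedding space

def random_matrix(rows, cols):
    """Stand-in for a learned projection matrix (here: random weights)."""
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def project(vec, matrix):
    """Input projector: map a modality-specific vector into the shared space."""
    return [sum(v * w for v, w in zip(vec, row)) for row in matrix]

def cosine(a, b):
    """Cosine similarity: how related two embeddings are in the shared space."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Pretend these vectors came from real encoders, each seeing 'cat' in its modality.
text_vec  = [random.gauss(0, 1) for _ in range(DIM_TEXT)]
image_vec = [random.gauss(0, 1) for _ in range(DIM_IMAGE)]
audio_vec = [random.gauss(0, 1) for _ in range(DIM_AUDIO)]

# One projector per modality, all landing in the same DIM_SHARED space.
P_text, P_image, P_audio = (random_matrix(DIM_SHARED, d)
                            for d in (DIM_TEXT, DIM_IMAGE, DIM_AUDIO))

shared = [project(text_vec, P_text),
          project(image_vec, P_image),
          project(audio_vec, P_audio)]

# Fusion: here, a simple element-wise mean over the projected vectors.
fused = [sum(col) / len(shared) for col in zip(*shared)]

print("fused representation:", [round(x, 3) for x in fused])
print("text-image similarity:", round(cosine(shared[0], shared[1]), 3))
```

In a trained model the projectors are learned so that the 'cat' image, the word 'cat', and the spoken 'cat' land near each other in the shared space; with the random weights here, only the plumbing is demonstrated, not the semantic alignment.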
Human beings deal with multimodality in their day-to-day lives, and the context provided to and extracted from a multimodal large language model (LLM) is correspondingly richer. The downside? Multimodality requires intensive computing to process the different types of data. We immediately notice the difference in inference speed between text-only and multimodal inputs and outputs. I would think twice before uploading a three-minute video to a paid LLM API service and asking it to explain what it sees in the video; I would exhaust my modest credit limits in no time. I would be far more willing to do the same with an open-source LLM downloaded and run locally.

Multimodal LLMs are widely used in various applications. In content moderation, for example, they flag plagiarism, explicit content, toxic content, self-harm and drug use, graphic terrorism, racial abuse, obscene gestures, legal compliance issues, political bias, and Personal Identifiable Information (PII). Multimodal LLMs can also power chatbots that answer questions about a repository of videos, audio, or text: a user might ask the bot to summarise a video, or to determine the presence or absence of something specific, and the bot can jump straight to the exact position in the artefact that contains the object of interest. They can help health professionals diagnose abnormalities in reports, X-rays, and other medical imaging. They can serve highly creative areas such as music composition and video editing.

At this juncture of technical advancement, multimodality takes us toward more human-like artificial intelligence. The shift from text-only to multimodal LLMs is often underestimated: it has made AI more human-like by embodying a rich interplay of sight, sound, and language.
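Of the moderation checks listed above, PII flagging is the easiest to sketch. The snippet below is a deliberately simplified, text-only illustration: a real multimodal moderator would run learned classifiers over text, images, and audio, and the regular-expression patterns here are illustrative assumptions, not an exhaustive PII detector.

```python
import re

# Toy patterns for a few common PII categories (illustrative, not exhaustive).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_pii(text):
    """Return the names of the PII categories detected in the text."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

print(flag_pii("Reach me at jane.doe@example.com or 555-867-5309."))
# prints ['email', 'phone']
```

An image- or audio-aware moderator would add OCR or speech-to-text in front of a check like this, or use a classifier trained end-to-end on the raw modality.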
Can I say that here we have an AI that thinks, sees, hears, speaks, and reads? Haven't we moved multiple steps closer to AGI (Artificial General Intelligence) with this? Think about it. Don't delegate all your thinking to AI, because we don't want the only sharp brain to be the AI's.

Disclaimer: Views expressed above are the author's own.

Black-market groups on Facebook are reportedly selling Uber and DoorDash accounts that let people bypass background checks and driver's license provisions

Yahoo (Business), 15-04-2025

A new report says Facebook is riddled with groups that allow people to buy or rent driver accounts for ride-share and delivery services, which could present safety issues for customers. The Tech Transparency Project says it found 80 groups with up to 800,000 members. Facebook said it is reviewing the report and will remove any content that violates its policies.

The report from the Tech Transparency Project (TTP) raises fresh safety concerns about Facebook, accusing the social media company of hosting several "black market" pages where people can buy or rent driver accounts for a number of consumer-facing companies. As many as 800,000 Facebook users belong to the 80 groups the TTP identified that let users trade driver accounts for Uber, DoorDash, and other ride-share and delivery companies. That lets people acquire the accounts without going through the screening process drivers are normally subjected to, and it raises both safety concerns for customers and fears of possible identity theft for riders or people who place orders.

"Renting" an Uber Eats account, according to a Jan. 3, 2025, post, went for as little as $65. Most people listing accounts asked prospective renters or buyers to contact them directly, to keep the dealings out of public view. Meta told Fortune it is reviewing the report and will remove any content that violates its policies, which prohibit the "buying, selling, or trading of Personal Identifiable Information."

TTP said the existence of these groups indicated content moderation on Facebook was not being properly enforced, as many of the groups had obvious names such as "Doordash & Uber Account For Rent And Sell Group." (Facebook denied the issue had anything to do with the enforcement changes it announced in January.)

"Meta's recent announcement that it would scale back policy enforcement on its platforms made clear that its automated systems would continue to root out illegal activity—including fraud," the group wrote. "But TTP's latest investigation found that Facebook is not meeting this lower bar for content moderation. It is allowing a thriving black market in driver accounts for Uber, DoorDash, and other rideshare and delivery services."

To become a driver for most ride-share or delivery services, you need a valid ID or driver's license and insurance, and you must undergo a background check. Uber and DoorDash policies prohibit sharing driver accounts.

This isn't the first time the TTP has accused Facebook of allowing the sale of materials that should not be available. In 2022, it issued a report that found sellers offering packages of documents used for account verification, including government ID cards, passports, and Social Security cards. Those accounts were later deleted by Meta. And last year, the group released a report detailing over 450 ads running on Instagram and Facebook that were selling pharmaceutical and other drugs, including images of "piles of pills and powders, or bricks of cocaine."

This story was originally featured on
