AI doesn't know me. Good, let's keep it that way.


Yahoo | 5 hours ago

The curtain of anonymity can produce amusing results. But there is a downside in the age of AI. (Photo illustration by Alexander Castro/Rhode Island Current)
My name is a catfish. Or so I've been told. When you hear the name Jamie Jung, you might wonder if the face behind this article is a Korean girl or the great-grandson of Swiss psychiatrist Carl Jung. A name so mysterious, perhaps even AI would struggle to decipher my identity in my job application.
When the pandemic pivoted classrooms to Zoom, I hid behind a faceless black square, only two words revealing my name. When calling attendance, many of my teachers would pronounce my last name with a German J, as 'Yoong,' and refer to me with the pronouns 'he' or 'him.' Others would sound it out phonetically as 'Jung' and refer to me as 'she' or 'her.'
As much as I enjoyed the curtain of anonymity, I have come to recognize there is a downside.
Companies such as Microsoft and Amazon delegate resume screening to AI tools in order to sift through countless applications from job-seekers. AI tools continue to evolve, but more attention should be paid to the flaws in algorithmic analysis, such as oversimplification and evaluation bias.
In 2014, Amazon attempted to automate its hiring process by building a computer program that would review applicants' resumes and spit out a list of the top candidates. The computers were trained to assess applicants by observing resumes submitted to the company over a 10-year period. The problem? A majority of the applicants were men, which unintentionally taught the algorithm that male candidates were superior.
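To see how that can happen, here is a minimal, hypothetical sketch in Python. This is not Amazon's actual system, and the data, feature names, and the 'womens_college' proxy are illustrative assumptions only; it simply shows how a model trained on historically skewed hiring decisions can absorb a bias that has nothing to do with ability.

```python
# Hypothetical illustration: a classifier trained on historical hiring
# outcomes where past recruiters favored male applicants. All data below
# is synthetic and the feature names are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Features: years of experience, plus a proxy feature that correlates
# with gender (e.g., attended a women's college) but not with ability.
experience = rng.normal(5, 2, n)
womens_college = rng.integers(0, 2, n)

# Historical labels: past hiring decisions quietly penalized the proxy,
# so the training data encodes the old bias.
hired = (experience + rng.normal(0, 1, n) - 2.0 * womens_college) > 5

X = np.column_stack([experience, womens_college])
model = LogisticRegression().fit(X, hired)

# The learned weight on 'womens_college' comes out negative: the model has
# reproduced the historical bias rather than measured merit.
print("learned weights:", model.coef_)
```

Nothing in the code asks the model to discriminate; the bias arrives entirely through the labels it is trained to imitate.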
The impact of algorithmic bias is not limited to gender. A 2024 study from the University of Washington reported computer models favored white-associated names in 85.1% of cases and female-associated names in only 11.1% of cases. In 2017, the University of Toronto released a study that revealed applicants with Asian names had a 28% reduced likelihood of receiving interviews compared to applicants with Anglo names.
This pattern of discrimination, present even in recruitment processes managed solely by humans, establishes a foundation already tainted with bias. Despite the growing diversity of the American workforce, the lack of leadership opportunities given to underrepresented communities serves as evidence of the lasting effects of systemic discrimination. According to the National Library of Medicine, although 74% of health care professionals are women, only 33% of management positions are filled by women. Similarly, while Black employees comprise 14% of all U.S. employees, only 7% of managers are Black.
AI has the potential to revolutionize the workplace. Automating monotonous tasks within the hiring process allows employees to maximize productivity, and many human resource managers have recognized these benefits.
But by analyzing existing demographics of the workforce, algorithms can deduce that 'top' applicants who fit the standard are white men. As long as this foundation remains skewed, AI will continue to exclude talented applicants based on an outdated algorithm.
A CareerBuilder survey found that 55% of HR managers expect AI to become a regular part of human resources within the next five years. Although the prospect of a more efficient recruitment process is appealing, managers must evaluate the current state of their workforce before integrating AI algorithms in order to provide a fair opportunity for all applicants.
By prioritizing equal representation even before implementing AI, companies will be able to use algorithms with fewer worries about bias. The innovation of AI begins with human reflection and revision.
AI assumes that I am only what my name allows me to be, ignoring the scope of my accomplishments. I only started going by Jamie in my freshman year of high school, and I thrived under this new ambiguous identity: that year, I became the social media manager of two clubs, was selected to present a TEDx Talk, and was awarded 'Freshman Writer of the Year' by my conservatory's director. When I introduced myself in person the next school year, I was amused by the look of surprise on many of my teachers' and classmates' faces. It was clear I was not who they expected me to be.
What's in a name? According to AI algorithms, a name is the reflection of our identities and the face behind these words. I wonder: if I had introduced myself as Jaehee Jung, or if I had turned on my camera to reveal my true identity, would I have had the opportunities I did?
Maybe. My name is a gift I gave myself in search of belonging. Now I am searching to make this name my own. Not with recognition or achievements, but with the person I am behind the black square. And only I hold the power to decide when to turn it on or off.


Emeet reveals 4K webcam for streamers, vloggers and professionals
Emeet reveals 4K webcam for streamers, vloggers and professionals

USA Today

time25 minutes ago

  • USA Today

Emeet reveals 4K webcam for streamers, vloggers and professionals

Cameras have seamlessly woven their way into our daily lives, whether for remote work on a webcam, streaming content or creating for social media. Even my doorbell boasts a camera these days, though its quality often leaves much to be desired, and don't even get me started on the lackluster webcam built into my laptop. Thankfully, Emeet is now addressing these common frustrations with the release of the world's first dual-camera 4K webcam, promising a significant leap in video quality and versatility. Whether you're vlogging, streaming or collaborating with your work team, the Emeet PIXY webcam is designed to perform better. Even sweeter, for a limited time only, you can save an additional 6% off when you use the promo code EMEETPIXY at checkout on Amazon or the Emeet website. This promo code is valid through Friday, July 11. Check out below how you can shop the Emeet PIXY to bring your video calls into the 21st century. Emeet's PIXY: The World's First Dual-Camera AI PTZ 4K Webcam EMEET PIXY AI-Powered 4K webcam Combine 4K video with crystal clear audio and AI-powered tracking for your streaming, content and work needs. Shop Emeet at Amazon GAMING NEEDS: From a Nintendo Switch to a beverage fridge, your gaming must-haves are on sale at Amazon Emeet is calling PIXY the most versatile webcam ever. Here's what features it includes: Shop Emeet PIXY at Amazon

Forging A Responsible, Secure Way Forward For Open-Source AI
Forging A Responsible, Secure Way Forward For Open-Source AI

Forbes

time30 minutes ago

  • Forbes

Forging A Responsible, Secure Way Forward For Open-Source AI

Dirk-Peter van Leeuwen is the CEO of SUSE, a global leader in innovative, reliable and secure enterprise open source solutions. As AI adoption continues to accelerate, the focus is shifting from experimentation to execution. How do businesses harness AI's potential in ways that are practical, principled and at scale? The answer lies in an open approach. Organizations can achieve responsible, transparent and secure AI through open-source tools and open platforms that foster innovation without locking it behind proprietary walls. With large language models and other AI technologies evolving at lightning speed, agility and adaptability are essential—and an open, collaborative ecosystem can keep up. My conviction comes from observing previous technical revolutions that have moved from experimentation to scale. Large-scale adoption while keeping the pace of innovation only really happened when open source (and its community) was at its heart. Linux stands as a powerful testament to the value of open-source development. By making its code freely available, Linux invited a global community of developers to collaborate, test and innovate—resulting in one of the most secure and scalable operating systems in the world. Today, it runs everything from smartphones to supercomputers, proving that openness doesn't hinder progress—it accelerates it. Kubernetes is the backbone of modern cloud infrastructure. It enables scalable, vendor-neutral application deployments as well as open internet protocols, which enable the interoperable, decentralized growth of the internet. This proves that open standards can scale globally and empower billions without centralized control. As we chart the future of AI, embracing similar openness can ensure the technology benefits from collective expertise and serves the broader good. Understanding Open-Source AI Before diving into what an open approach to AI looks like in practice, we must first understand what 'open-source AI' really means. Defining it is a complex and evolving effort involving plenty of debate, but creating a standard is helpful for providing clear guidelines, promoting transparency and trust, and accelerating innovation and collaboration. One formal definition that has emerged comes from the Open Source Initiative (OSI): the Open Source AI Definition (OSAID), which is a work in progress that we endorse. At a high level, the OSAID defines open-source AI as an AI system that allows users to: • 'Use the system for any purpose and without having to ask for permission.' • 'Study how the system works and inspect its components.' • 'Modify the system for any purpose, including to change its output.' • 'Share the system for others to use with or without modifications, for any purpose.' I will be referencing the OSAID when discussing an open approach to AI. Ultimately, whichever definition you endorse, openness remains essential to building trust and driving AI progress. Building Blocks For Trustworthy And Open AI There are three cornerstones for a future where AI innovation is both fast-moving and fundamentally trustworthy: open datasets, open infrastructure and regulatory guardrails/legal frameworks. How to treat datasets used for training is a subject of debate because, unlike source code, data influences models through patterns and can include proprietary or sensitive information. Finding the right balance between openness and legal protections and practical restrictions is key. 
This is why the OSAID emphasizes making training data accessible to the public and ensuring transparency about all datasets and the process for cleaning and labeling them. Robust, open infrastructure is another critical factor for quick and secure GenAI deployments when security and observability are built into the platform. The benefits of this approach include flexibility and interoperability; applications can seamlessly integrate with a variety of technologies and platforms. Open infrastructure also allows for customization and avoids vendor lock-in. Organizations are under stricter scrutiny for their risk and security management, and that also applies to how they use AI. Regulatory guardrails and legal frameworks are essential to ensuring secure and responsible AI use against threats like data leakage, prompt injection attacks and modern manipulation—all of which can result in costly repercussions. How Businesses Can Adopt An Open Approach To AI In reality, most businesses can't achieve complete open source for AI use given legal and practical constraints like data privacy laws, intellectual property protection and the risk of misuse or security threats. Additionally, the complexity of scaling AI infrastructure and potential legal liability make companies cautious about releasing powerful models without safeguards. So what does an open approach to AI look like in practice? Leveraging the building blocks I mentioned earlier, let's look at some of the key implementation strategies: This first step is key to IP protection, as depending on the sensitivity of the information, different levels of openness may be applied. Your organization's guidelines for AI access and usage should prioritize data governance and security and proportion restrictions by risk level. Consider zero-trust security frameworks and conduct regular auditing and monitoring of AI systems using observability tools to detect and nip problems in the bud. Open weights refer to the final parameters of a trained AI model made publicly available. They differ from open-source AI in that they usually don't allow access to the model's architecture, training data and code. Since training AI models is a time-consuming, resource-intensive and expensive process, it's more realistic for most organizations to use, distribute and modify open weights instead of training a model from scratch. Open Source: The Future Of AI AI innovation and execution are moving at breakneck speed, but it requires an open approach to truly succeed through fostering transparency and trust and focusing on flexibility and security. Embracing open-source principles for AI models, data and infrastructure is key to unlocking AI's full potential while also ensuring it remains agile, trustworthy and, ultimately, beneficial for all. Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?

Senators urge FTC to investigate Spotify's higher-priced bundled subscription
Senators urge FTC to investigate Spotify's higher-priced bundled subscription

Yahoo

time30 minutes ago

  • Yahoo

Senators urge FTC to investigate Spotify's higher-priced bundled subscription

Two U.S. senators have requested that the Federal Trade Commission (FTC) investigate Spotify due to allegations that the company bundled its music streaming and audiobook services into a more expensive subscription without obtaining user consent, while also reducing royalty payments to creators in the process. On Friday, June 20, U.S. Senators Marsha Blackburn and Ben Ray Luján wrote a letter to the FTC, claiming that Spotify converted standard premium subscriptions into higher-cost bundled subscriptions without informing consumers. They also highlighted that existing U.S. regulations permit digital music providers to pay a reduced music royalty rate if the subscription is bundled with other legitimate offerings. 'Spotify's intent seems clear—to slash the statutory royalties it pays to songwriters and music publishers. Not only has this harmed our creative community, but this action has also harmed consumers,' the letter states. Last year, the Mechanical Licensing Collective (MLC) sued Spotify for allegedly undercompensating songwriters and publishers, but the lawsuit was dismissed in January. In March 2024, Spotify restructured its Premium tiers to include 15 hours of audiobooks, raising the price to $12 for individuals and $20 for families. Users have to manually opt out of the plan. This change has reportedly caused publishers to lose $230 million in the first year, according to Danielle Aguirre, executive vice president of the National Music Publishers' Association. In a statement shared with Variety, a Spotify spokesperson noted that users were notified a month in advance about the price increase and the platform offers 'easy cancellations as well as multiple plans for users to consider.' Error in retrieving data Sign in to access your portfolio Error in retrieving data Error in retrieving data Error in retrieving data Error in retrieving data