Is your job application being rejected by AI? We asked 7 big companies.
It's the great mystery facing frustrated job seekers: Who — or what — is rejecting my application?
As more companies turn to AI to boost productivity, applicants often tell Business Insider that they wonder whether a human ever reviewed their résumé. We reached out to seven major companies and found that AI's role in the hiring process varies widely.
Mark Grimwood, Salesforce's SVP of Recruiting, said the company received "tens of thousands" of applications for account executive roles in the first quarter of this year — a position the company is investing heavily in.
Grimwood said two key factors help manage this volume: skilled recruiters who know how to spot talent with the right skills and experience, and Agentforce — the company's AI-powered tool. He said Agentforce helps recruiters scan for valued skills like collaboration, storytelling, and AI literacy, and identify promising candidates.
"Our human recruiters are overseeing this process from start to finish, but using AI in our hiring processes really helps our recruiters be more productive and prioritize their time on the most relevant candidates," he said.
Grimwood said the company's recruiters strive to give every application the attention it deserves, but not every one is reviewed by a human.
"The sheer volume we see — especially in areas like sales, where we are really growing and investing — means we have to be strategic," he said.
AI is playing a growing role in the hiring process. Some job seekers have used AI tools to optimize their résumés, submit hundreds of applications, and navigate interviews, while some businesses are using AI-assisted applicant tracking systems to evaluate and prioritize candidates.
While AI has helped streamline parts of the process, it's also created headaches on both sides: Some applicants have told Business Insider they worry they're being rejected by algorithms with little or no human review, while companies are overwhelmed by AI-generated applications that aren't always accurate or well-crafted.
While job seekers' concerns are understandable, most companies haven't offloaded their entire application review process to AI, though many are using it to assist. Business Insider asked seven companies — Salesforce, Google, Kraft Heinz, McKinsey, Verizon, Exelon, and Allstate — what role AI plays in evaluating applicants.
How companies are using AI to evaluate candidates
Some companies are trying to strike a delicate balance: using AI to help evaluate applicants without relying on it too heavily, and ensuring substantial human involvement. Google, Allstate, Kraft Heinz, and Exelon all said recruiters still review every application and decide who moves forward.
Sean Barry, Allstate's vice president of talent acquisition, said the company uses technology to pinpoint strong candidates, which has helped speed up the early stages of the hiring process. Following up with promising candidates — asking for details like location and salary expectations — used to take about 22 days, he said, but now happens in just 11.
"When you get 1,000 people applying for a single job, we use the technology not to decide who's the right fit, but to figure out which, say, 50 look like they could potentially be the right 50 to begin screening," he said.
However, Barry said that every application is still reviewed by a human, and that humans continue to decide which candidates move forward and who ultimately gets hired.
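To picture the shortlisting step Barry describes, here is a minimal, purely illustrative sketch: score each application against a role's stated skills, surface the top batch for recruiters to screen first, and leave every decision to a human. This is not Allstate's system; the skills, weights, and candidate data below are assumptions made for the example.

```python
# Illustrative sketch only: a toy top-k shortlister, NOT any company's actual tool.
# The skills, weights, and candidate data are invented for this example.
from dataclasses import dataclass


@dataclass
class Application:
    name: str
    skills: set[str]
    years_experience: int


# Assumed criteria for a hypothetical role.
ROLE_SKILLS = {"claims handling", "customer service", "negotiation"}


def score(app: Application) -> float:
    """Score = fraction of required skills present, plus a small capped experience bonus."""
    skill_match = len(app.skills & ROLE_SKILLS) / len(ROLE_SKILLS)
    experience_bonus = min(app.years_experience, 10) / 100
    return skill_match + experience_bonus


def shortlist(applications: list[Application], k: int = 50) -> list[Application]:
    """Return the top-k applications for recruiters to screen first.
    Humans still review every application and make every advancement decision."""
    return sorted(applications, key=score, reverse=True)[:k]


if __name__ == "__main__":
    apps = [
        Application("A. Example", {"claims handling", "negotiation"}, 4),
        Application("B. Example", {"customer service"}, 1),
    ]
    for app in shortlist(apps, k=1):
        print(app.name, round(score(app), 2))
```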
A Google spokesperson said the company's recruiting teams are exploring ways to make the application review process more efficient, and AI is a part of that effort.
"We use machine learning to suggest candidates for open roles based on their skills and experience, which in turn, frees up recruiters to focus more on building relationships with the best candidates," they said.
While this technology helps prioritize candidates, the spokesperson said every application submitted to Google is still reviewed by a human.
Denise Galambos, chief people and equity officer at Exelon, said the company uses AI to help rank candidates based on various criteria, but a recruiter looks at every résumé.
"We are not using AI to just right off the bat, exclude people," she said.
Some companies are still relying heavily on recruiters
Some companies have been slower to adopt AI for candidate evaluation, or have focused on other ways to apply the technology in hiring.
Spring Lacy, Verizon's vice president of talent acquisition, said the company doesn't use AI tools to filter or rank applications — that job still falls to its recruiters.
She said Verizon is open to using AI to make hiring more efficient, potentially freeing up recruiters to spend more time with top candidates. But any technology, she said, would need to function properly.
"We want to make sure that any tools that we use are fair, and that there's no bias in the AI," she said. "That it can accurately and equitably screen résumés based on our qualifications."
Blair Ciesil, a partner in global talent attraction at McKinsey, said the company doesn't use AI to rank applicants during the screening process. Applications are reviewed by humans who evaluate candidates against a set of criteria.
"We do not use AI to evaluate cover letters or résumés," she said, adding that AI's primary role in the hiring process is a "candidate bot" that helps employees prepare to interview applicants for open roles.
Allstate is also exploring alternative ways to use AI in hiring — including to revisit past applicants. Barry said the company adopted a tool last year that helps flag qualified candidates who were initially turned down and recommends them for other roles. Through this process, Allstate has hired more than 100 people, many of them for claims roles.
"While they might've been a no-go for that role at that time, it certainly doesn't mean that they're not a fit for the company and potentially a fit for another need," Barry said.