
Latest news with #bias

New York State Regents review's definition of Zionism draws the ire of many on Long Island

CBS News

13 hours ago

  • Politics
  • CBS News

New York State Regents review's definition of Zionism draws the ire of many on Long Island

There is controversy on Long Island over a New York State Regents exam study packet that some say inaccurately defines Zionism and includes factual errors. A mother told CBS News New York on Tuesday she wants more than an apology.

What the review packet says

Michelle Herman of Melville gives her daughter's 10th grade Global History Regents review packet at Half Hollow Hills East High School a failing grade. It defines Zionism as "an example of extreme nationalism."

"To call Zionism extreme nationalism is propaganda. I consider myself a Zionist. There is nothing extreme about Zionism. It is loving my country," Herman said.

The review includes historical inaccuracies and bias, including statements like, "Every war ended with Israel gaining more land" and "Jews taking land away from Palestinians."

"It's incorrect. It's propaganda. It's biased," Herman said.

Anti-Defamation League frustrated by the review packet

The Anti-Defamation League says the nine-line summary is similar to what it has seen taught elsewhere.

"Completely ignoring the first intifada, the second intifada, and many, many decades of terrorist incidents. Before that, all the wars that Israel fought were wars of defense," the ADL's Scott Richman said. "It's skewed to show the Israelis as aggressors, as criminals, and Palestinians are completely innocent actors, and none of that is true."

It set off a firestorm of comments at a recent board meeting. No one spoke in favor of the handout.

"As a Jewish student sitting in a classroom and seeing the history of my people distorted and mocked was heartbreaking," one student said.

Brian Conboy, the interim superintendent of the Half Hollow Hills School District, acknowledged it "contained language and ideas that were factually incorrect and offensive" and was not created by anyone in the district. He added the curriculum going forward will be vetted by experts.

"On behalf of our district, we want you all to know that offensive and inaccurate material such as this do not meet our standards of excellence and are not something we take lightly. We can and will do better moving forward," Conboy said.

Origin of the review packet remains a mystery

The source of the study sheet is still a mystery. The New York State Education Department says it had no hand in it. A spokesperson for the school district said it has not found the origin.

Herman said she wonders how many prior grades received the lesson. "We have been indoctrinating our own children and giving them the wrong information," she said.

She is calling for accountability -- not to punish, but to educate -- and she wants what is wrong to be righted.

The Education Department said it's taking the matter seriously and will continue to monitor and take appropriate action as needed.

AI, Bias, And Empathy: How To Ensure Fairness In An AI World

Forbes

3 days ago

  • Business
  • Forbes

AI, Bias, And Empathy: How To Ensure Fairness In An AI World

Algorithms have not only enhanced how we work; they are reshaping how we hire, assess value, and define success. But what happens when they also absorb our deepest biases, judging people before they even get a chance to show up?

AI can be a powerful tool for improving how organizations perform and how productive employees are. But when algorithms are biased, they can undermine fair hiring practices, leading to discrimination based on gender, race, age, or even faith. A Scientific Reports paper shows that when employees were evaluated by AI systems (like algorithms or automated tools) instead of human managers, they were more likely to feel disrespected or devalued. Biases in the datasets used to train AI models can skew both recommendations and the decision-making processes of the leaders who use them.

In the past, hiring teams handled tasks like reviewing resumes, onboarding new employees, and conducting performance evaluations. These moments created opportunities to build connection, show curiosity, and develop meaningful workplace relationships. But in today's AI-driven workplace, many of these tasks can be automated to save time and boost efficiency. The downside? As we replace human interaction with algorithms, we risk losing those moments of genuine connection. And while human decision-making can be biased, whether consciously or not, AI can carry those same biases too, just in less visible ways.

In a previous article, I explored whether empathy is still essential in the age of AI, or if we can simply outsource it. While the benefits of using AI in the workplace are clear, there are some challenges it can't fix, like the biases built into AI systems and the crucial role empathy plays in addressing them.

Empathy is a vital first step toward simply understanding how people feel about AI and the future of work. A recent report from the Pew Research Center highlights a striking divide: '73% of AI experts surveyed say AI will have a very or somewhat positive impact on how people do their jobs over the next 20 years.' In contrast, among U.S. adults, 'that share drops to 23%.' An empathetic leader will get curious about the inverse of this statistic: the 77% of U.S. adults who don't believe AI will have a positive impact on how they do their jobs now and in the future. An empathetic leader wants to acknowledge hardship and listen to the perspectives that often go unspoken — without fear.

In an episode of The Empathy Edge podcast, speaker, author, and filmmaker Minter Dial highlights the key questions we need to ask to bring heart into AI and the workplace: 'What is your intention? Before you bring in the AI, what are you trying to achieve? Is it linked to your strategy? Or is it just linked to saving money, cutting corners, getting rid of the hassle of dealing with people?' Until we get clear on both our relational and business goals, we can't truly embed empathy into the way we use AI. Here are three ways leaders can embed empathy in AI-driven hiring, performance, and decision-making processes.

Transparency Over Opacity

Transparency is the foundation of an empathetic workplace. It's essential for both hiring managers and job seekers to understand required skills and pay scales. Leaders want insight into what their teams need, which benefits attract top candidates, where to find great talent, and what skills are worth developing. At the same time, employees deserve to know how AI is being used in HR. The more we know, the more confidently we can make decisions.

The same principle applies to AI. Recruitment algorithms should be transparent and easy to audit. As David Paffenholz writes for the Forbes Technology Council, 'algorithms must account for gaps in candidate data and use systems to evaluate passive and active candidates equitably. This inclusivity ensures your AI tools identify the best talent rather than the most visible talent.'

Create Diverse Development Teams

Empathy starts in the design room. A PwC report on algorithmic bias and trust emphasizes that involving people from diverse backgrounds in developing and testing AI systems is key to building trust. When teams include a mix of races, genders, ages, economic backgrounds, education levels, and abilities, they're better equipped to spot and address different types of bias. As the report notes, 'Building diverse teams helps reduce the potential risk of bias falling through the cracks,' because 'each will have their own view of the threat of bias and how to help mitigate it.'

Juji, an AI company pioneering human-centered agents that combine generative and cognitive AI to automate complex, nuanced business interactions, aims to create empathic AI solutions. Co-founder and CEO Dr. Michelle Zhou, in her interview on The Empathy Edge podcast, explains that while AI is designed to identify patterns and similarities, becoming more empathetic means learning to recognize differences too, not just what's common.

Still, as the Pew Research Center report shows, public trust in AI, especially in the workplace, is far from guaranteed. That's why human oversight remains critical for sensitive decisions. Even if humans can't process vast datasets as quickly, for employees who are cautious or skeptical of AI, knowing there's a person involved in final hiring and performance decisions can make all the difference.

Conduct Empathy Audits First

Effective people management starts with putting people first, and management second. According to Businessolver's 2024 State of Workplace Empathy Executive Report, leaders need to regularly reflect on whether they're truly meeting employees' needs and expectations. That also means being open and honest about where they may be falling short. From this place of transparency, empathy can be practiced, not just by supporting employees as professionals, but as whole people, embedded in broader communities.

When leaders tune into the human dynamics within their organizations, especially how past decisions have affected different groups, they gain valuable insight into their own internal biases. This kind of reflection doesn't just benefit workplace culture; it also informs better practices for AI audits. As an Emerald Insights report on AI bias auditing explains, involving diverse stakeholders and community voices is essential to building rigorous, inclusive audit processes. In this way, empathy audits are more than just a tool for supporting teams; they lay the foundation for human-centered, bias-aware AI systems.

As AI transforms the workplace, empathy must remain at the center. It's not just about smarter systems; it's about fairer, more human ones. By leading with empathy, prioritizing transparency, and involving diverse voices, we can design AI that supports both performance and people. The future of work should be efficient, yes. But never at the cost of connection or equity.
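To make the transparency and human-oversight points concrete, here is a minimal sketch of what an auditable, human-in-the-loop screening record could look like. It is an illustration only: the class, field names, and example values are hypothetical assumptions, not drawn from any vendor's API or from the tools discussed above.

```python
# Minimal sketch of an auditable, human-in-the-loop screening record.
# All names and values here are hypothetical illustrations.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CandidateReview:
    candidate_id: str
    model_score: float                  # advisory score from the screening model
    features_used: list[str]            # inputs the model saw, kept for later audits
    model_version: str                  # which model produced the score
    human_decision: str | None = None   # final call is recorded only by a person
    reviewed_at: str | None = None

def record_human_decision(review: CandidateReview, decision: str) -> CandidateReview:
    """Attach the human reviewer's final decision and a UTC timestamp."""
    review.human_decision = decision
    review.reviewed_at = datetime.now(timezone.utc).isoformat()
    return review

# Example: the model suggests a score, but a recruiter makes the final call.
review = CandidateReview(
    candidate_id="c-1042",                        # hypothetical identifier
    model_score=0.62,
    features_used=["years_experience", "skills_match", "assessment_score"],
    model_version="screening-model-v3",           # hypothetical model tag
)
record_human_decision(review, decision="advance_to_interview")
print(review)
```

The design choice is simply that the model's output is stored alongside the inputs it saw, and no final decision exists until a person signs off, which is the kind of visibility both candidates and auditors can reasonably ask about.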

Can AI Change The Legal Profession Forever?

Forbes

3 days ago

  • Business
  • Forbes

Can AI Change The Legal Profession Forever?

Sajal Singh is a Consulting Partner at Kyndryl Nordics, Global Innovation Expert for UN Compact and a board member at IE Business School, Spain.

In 2020, Detroit resident Robert Williams was wrongfully arrested after an AI-powered facial recognition system misidentified him as a suspect despite the system's known limitations and warnings. He spent hours in custody before the mistake was uncovered—a stark reminder that AI bias isn't theoretical but deeply consequential. Similar cases have occurred elsewhere, and tools like the COMPAS algorithm have been shown to falsely label Black defendants as high-risk nearly twice as often as white defendants. These stories reveal a disturbing truth: AI can automate and amplify existing biases, leading to real-world injustice.

The question we must ask is: When algorithms make mistakes in the legal system, who is held accountable, and how do we ensure fairness and oversight? These are deep questions that require much more than an article. But before we ever get to that stage, AI has many questions to answer for itself.

Nevertheless, the legal industry is taking early advantage of AI despite concerns about secular adoption, as I have noted in my prior analyses of AI industry trends. According to the 2024 edition of the American Bar Association's Legal Technology Survey Report, AI adoption within the legal profession nearly tripled year-over-year, from 11% in 2023 to 30% in 2024. This growth spans all firm sizes, though larger firms are implementing AI at a faster pace.

This trend is particularly noteworthy when compared to previous technology transitions. A 2024 survey by ACEDS and Everlaw found that legal professionals in the U.S. are adopting generative AI roughly five times faster than they did cloud-based eDiscovery software. This unprecedented rate of adoption underscores the transformative potential that legal professionals see in AI technologies. Market data further illustrates this growth trajectory. The global legal AI market was valued at $1.45 billion in 2024 and is projected to grow at a compound annual growth rate (CAGR) of 17.3% from 2025 to 2030.

The rapid adoption of AI in legal practice is driven by compelling efficiency and performance gains. Research comparing large language models (LLMs) to traditional legal invoice reviewers revealed striking efficiency differences: While human lawyers take 194 to 316 seconds per invoice review, LLMs can complete reviews in as little as 3.6 seconds. This represents a 98% reduction in processing time. Cost efficiencies are equally impressive. The same report shows that this reduction in review time adds up to 99.97% in saved expenses. Similar efficiency gains are being observed in contract review processes, where LLMs complete reviews in mere seconds compared to the hours required by human reviewers.

So, for a mid-sized firm reviewing 5,000 invoices annually, AI could slash labor costs from $21,350 (human reviewers) to $0.15 (AI systems). This 99.97% cost reduction directly boosts margins by lowering operational expenses. Manual legal work carries inherent error rates of 15% to 20% in tasks like contract clause identification. AI systems can reduce errors by 60%, minimizing costly revisions and liability risks. For a firm generating $10 million annually, a 5% reduction in error-related losses preserves $500,000 in revenue.

As for usage, according to the 2025 State of Contracting Survey, the leading use case for AI adoption is contract review. The survey found that 14% of legal teams are now using AI for contract review, up from 8% in early 2024. When asked about key advantages, legal teams cited three primary benefits: faster turnaround times, time savings and reduced tedious work.

We're seeing legal professionals also actively integrating AI tools for document review and discovery purposes. This represents a clear evolution from cautious exploration to broader deployment throughout 2024. Key benefits driving this adoption include improved service delivery, competitive differentiation and cost savings.

As AI adoption accelerates, regulatory frameworks are evolving to address the unique challenges and opportunities these technologies present. In March 2024, the European Parliament formally adopted the EU Artificial Intelligence Act ("AI Act"), establishing the first comprehensive regulatory framework for AI globally. Shortly after, the United Nations General Assembly unanimously adopted the first global resolution on artificial intelligence, designed to encourage the protection of personal data, risk monitoring and human rights safeguards. These regulatory developments are not deterring adoption but are encouraging responsible innovation. A stable regulatory framework reduces uncertainty and promotes investment in AI research and development, particularly in industries with stringent requirements such as legal services.

The data suggests that AI adoption in legal practice has reached a critical inflection point. Law students are increasingly recognizing the importance of AI skills for their future careers. A survey conducted from July to August 2024 found that law students view AI competency as essential not only for operational effectiveness but also for helping formulate future legal frameworks that will regulate this technology across industries.

AI is fundamentally transforming the legal profession through unprecedented adoption rates, significant efficiency gains and expanding applications across multiple areas of practice. As the technology continues to mature and regulatory frameworks evolve, AI will likely shift from being a competitive advantage to an essential component of legal practice. This transformation extends beyond simple automation of routine tasks. It's clear that AI can enable legal professionals to deliver services more efficiently, accurately and cost-effectively, allowing them to focus on more strategic and complex aspects of legal work.

But can justice ever be on autopilot? Disrupting the scales of justice through AI seems to be some time away. But when justice is coded, who can be held accountable for mistakes or discrimination? These are the larger questions that lie well beyond what industry has adopted until now and will require a larger cross-section of society to deliberate. Law and AI seem to be less interesting than law and AGI put together. That is when we as a society will have to consider if, sometimes, it is just better to be old-fashioned.
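As a rough back-of-the-envelope check on the efficiency and cost figures quoted above, the arithmetic can be reproduced in a few lines. The inputs are the article's illustrative numbers (194 to 316 seconds per human review, 3.6 seconds per LLM review, $21,350 versus $0.15 per year, and a 5% cut in error-related losses on $10 million of revenue); nothing here is an independent benchmark.

```python
# Reproducing the article's illustrative figures; none of these inputs are measured data.
human_seconds_per_invoice = (194 + 316) / 2      # midpoint of the quoted 194-316 s range
llm_seconds_per_invoice = 3.6

time_reduction = 1 - llm_seconds_per_invoice / human_seconds_per_invoice
print(f"Per-invoice time reduction: {time_reduction:.0%}")   # ~99%, in line with the cited ~98%

human_annual_cost = 21_350.00    # quoted annual cost for human review of 5,000 invoices
ai_annual_cost = 0.15            # quoted annual cost for the AI system
cost_reduction = 1 - ai_annual_cost / human_annual_cost
print(f"Cost reduction: {cost_reduction:.3%}")               # effectively the quoted 99.97%+

annual_revenue = 10_000_000
error_loss_reduction = 0.05      # a 5% cut in error-related losses
print(f"Revenue preserved: ${annual_revenue * error_loss_reduction:,.0f}")   # $500,000
```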

How SaaS Companies Can Reduce AI Model Bias

Forbes

4 days ago

  • Business
  • Forbes

How SaaS Companies Can Reduce AI Model Bias

As businesses realize the high value of artificial intelligence in improving operations, understanding customers, setting and meeting strategic goals, and more, embedding AI into their products is moving from a 'nice to have' feature to a competitive necessity for software-as-a-service companies. However, it's essential to tread carefully; SaaS companies must be aware of the risk that both implicit and explicit bias can be introduced into their products and services through AI. Below, members of Forbes Business Council share strategies to help better detect and minimize bias in AI tools. Read on to learn how SaaS companies can ensure fairness and inclusivity within their products and services—and protect their customers and brand reputation.

To build AI tools that people trust, businesses must embed ethical AI principles into the core of product development. That starts with taking responsibility for training data. Many AI products rely on open, Web-scraped content, which may contain inaccurate, unverified or biased information. Companies can reduce exposure to this risk by using closed, curated content stored in vector databases. - Peter Beven, iEC Professional Pty Ltd

It is impossible to make AI unbiased, as humans are biased in the way we feed it with data. AI only sees patterns in our choices, whether they are commonly frowned-upon patterns, like race and location, or not-so-obvious patterns, like request time and habits. Like humans, different AI models may come to different conclusions depending on their training. SaaS companies should test AI models with their preferred datasets. - Ozan Bilgen

You can't spot bias if your test users all look and think the same. Diverse testers help catch real harms, but trying to scrub every point of view just creates new blind spots. GenAI's power is in producing unexpected insights, not sanitized outputs. Inclusivity comes from broadening inputs, not narrowing outcomes. - Jeff Berkowitz, Delve

Evaluations are key. SaaS businesses cannot afford expensive teams to validate every change when change is happening at a breakneck speed. Just like QA in software engineering has become key, every business must implement publicly available evaluations to validate bias. This is the most thorough and cost-effective solution out there. - Shivam Shorewala, Rimble

Using third-party AI tools for independent audits is key to spotting and correcting bias. This approach helps SaaS companies stay competitive and maintain strong client trust by ensuring fairness, transparency and accountability in their AI-driven services. - Roman Gagloev, PROPAMPAI, INC.

SaaS companies need to extend prelaunch audits with real-time bias monitoring of live interactions. For example, one fintech customer reduced approval gaps by 40% by allowing users to flag biases within the app, dynamically retraining models. Ethical AI requires continuous learning and fairness built up through user collaboration, not solely code. - Adnan Ghaffar, LLC

SaaS companies can reduce bias by diversifying their training data and using interdisciplinary teams when developing an AI model. They should also implement routine audits to verify that algorithms are fair and transparent, ensuring their AI is inclusive and equitable. This is essential to avoid alienating customers and damaging brand equity, as biased AI systems lead to inequity. - Maneesh Sharma, LambdaTest

Bias starts with who's at the table. If your team doesn't reflect the people you're building for, neither will your model. Audit your data before you code. Fairness isn't a feature you add later, but one that should be baked into the build. If you get that wrong, the harm done is on you. Inclusivity is a strategy, not charity. If your strategy's biased, so is your bottom line. - Aleesha Webb, Pioneer Bank

We embed fairness audits at each stage of model development—data curation, training and output testing—using diverse datasets and human-in-the-loop validation. For SaaS, where scale meets intimacy, unchecked bias can harm thousands invisibly. Building trust starts with building responsibly. - Manoj Balraj, Experion Technologies

In the age of social media, the best way to minimize bias is to let the users tell you about it. Collecting user-generated opinions through testing, MVPs and feedback forms is the best way to ensure your product is free from developer or even marketer biases. Just make sure you have a good number of users to test your AI product. - Zaheer Dodhia

One powerful way SaaS companies can tackle bias in AI models is by rigorously testing them against open-source and indigenous datasets curated specifically to spotlight underrepresented groups. These datasets act like a mirror, reflecting how inclusive or exclusive your AI really is. By stepping outside the echo chamber of standard data, companies gain a reality check. - Khurram Akhtar, Programmers Force

Most teams focus on fixing bias at the data level, but the real signs often surface through day-to-day product use. I tell SaaS companies to loop in support and success teams early. They're closest to the users and usually flag issues first. Their feedback should feed directly into model reviews to catch blind spots that don't show up in training data. - Zain Jaffer, Zain Ventures

SaaS companies should simulate edge-case users, including small sellers, niche markets, nonnative speakers and more, to test how AI performs for them. Real inclusivity means optimizing for the exceptions, not just the averages. If your product works for those on the edges, it'll work for everyone. - Lior Pozin, AutoDS

Integrate diverse voices at every stage, from design and data to deployment. Uncovering bias begins with owning our blind spots, so use honesty as a guide. Inclusive AI isn't just ethical—it's also essential for relevance, reach and trust in today's diverse world. - Paige Williams, AudPop

SaaS companies should establish a continuous feedback loop with external experts, such as ethicists and sociologists, to review AI model outcomes. These experts can identify unintended consequences that technical teams might miss, ensuring the AI model serves all communities fairly. This proactive approach helps avoid costly mistakes, improves user satisfaction and strengthens long-term brand credibility. - Michael Shribman, APS Global Partners Inc.

Treat bias like a security bug by documenting it, learning from it and making spotting it everyone's job rather than just the AI team's responsibility. Build bias reports into internal processes and reward early detection. Building operational systems around bias detection keeps products fair, inclusive and trusted. - Ahva Sadeghi, Symba

What finally shifted things for us was bringing real users from underserved communities into our QA process. We stopped pretending to know what fairness looks like for everyone. It turns out, when you ask the people most likely to be excluded, they'll tell you exactly how to fix it. - Ran Ronen, Equally AI

One way SaaS companies can detect and minimize bias in their AI models is by conducting equity-focused impact assessments. These assessments can evaluate whether the model produces better, worse or neutral outcomes for each user group. This is important because equity ensures that users from different backgrounds receive fair and appropriate outcomes, promoting true inclusivity and preventing systemic disadvantage. - Ahsan Khaliq, Saad Ahsan - Residency and Citizenship

One way SaaS companies can better detect and minimize bias in their AI models is by actively inputting their own unique ideas and diverse perspectives into the system. In this way, the AI can be guided to develop solutions that reflect true inclusivity, ensuring that the outcomes are fair and representative of a wide range of users. - Jekaterina Beljankova, WALLACE s.r.o

SaaS companies must shift from a 'software as a service' mindset to a 'service as software' mindset to recognize AI as a dynamic, evolving system. This mindset encourages continuous bias audits, inclusive datasets and real-world feedback loops, which are essential for fairness, trust and long-term relevance in diverse markets. - Kushal Chordia, VaaS - Visibility as a Service
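Several of the suggestions above (routine audits, equity-focused impact assessments, real-time monitoring of outcomes per user group) reduce in their simplest form to comparing outcome rates across groups. The sketch below is a minimal, hypothetical illustration of that comparison, using the common "four-fifths" rule of thumb as a flagging threshold; the function names, data, and threshold are assumptions for illustration, not a compliance standard or any contributor's actual tooling.

```python
# Minimal sketch of a group-outcome disparity check; all data and thresholds are illustrative.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparities(rates, threshold=0.8):
    """Flag groups whose rate falls below threshold x the best-performing group's rate."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Made-up example data: group B's approval rate is well below group A's.
decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 55 + [("B", False)] * 45
rates = approval_rates(decisions)
print(rates)                    # {'A': 0.8, 'B': 0.55}
print(flag_disparities(rates))  # {'B': 0.55} -- 0.55 < 0.8 * 0.8 = 0.64, so B is flagged
```

In practice a team would run a check like this per model version and per decision type, and feed any flagged disparities back into the audit and retraining loops the contributors describe.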

Sheku judge accused of 'torpedoing' his own independence amid private meetings with the victim's family

Daily Mail

12-06-2025

  • Politics
Daily Mail

Sheku judge accused of 'torpedoing' his own independence amid private meetings with the victim's family

An inquiry judge has 'torpedoed' his own independence with the 'spectacularly ill-advised' decision to meet with the family of a man who died in police custody, it has been claimed.

The Sheku Bayoh Inquiry has been plunged into crisis amid an extraordinary bias row involving chairman Lord Bracadale, who held five private meetings with the family of the 31-year-old after he died while being restrained by officers in Kirkcaldy, Fife, nearly a decade ago.

The Scottish Police Federation and the officers at the centre of the probe have now demanded Lord Bracadale step down as inquiry chairman. But the retired High Court judge, who chaired a hearing into his own conduct, claimed that Mr Bayoh's family would have 'walked out' had he not met them.

The landmark statutory inquiry - which began in November 2020 and is nearing its closing stages - aims to find out whether racism played a part in the death of the father-of-two in 2015. But the future of the chairman is now in doubt as a special two-day hearing, said to cost around £2million, is held in Edinburgh.

Roddy Dunlop KC, representing the Scottish Police Federation, told the hearing Lord Bracadale had to go and that the meetings he held were 'in almost their entirety completely inappropriate'. The KC added: 'They were doubtless well-meaning, they were doubtless arranged out of the best of intentions, but, with the greatest of respect, they were spectacularly ill-advised and they have torpedoed the independence of the chair.'

Lord Bracadale told the inquiry, in a written statement, that the meetings were needed. He said: 'Given the fragility of the confidence of the families in the Inquiry at various stages, I consider that meeting them on an annual basis did contribute to obtaining and retaining their confidence in the Inquiry and securing their evidence.

'I consider that, if I had not had meetings with them, there is a high probability that they would have stopped participating and would have walked out of the Inquiry.'
