How to use AI for good

Fast Company, 14-05-2025

Social media was mankind's first run-in with AI, and we failed that test horribly, according to tech ethicist Tristan Harris, whom The Atlantic called 'the closest thing Silicon Valley has to a conscience.' A recent survey found nearly half of Gen Z respondents wished social media had never been invented. Yet, 60% still spend at least four hours daily on these platforms.
Bullying, social anxiety, addiction, polarization, and misinformation—social media has become a cocktail of disturbing discourse. With GenAI, we have a second chance to ensure technology is used responsibly.
But this is proving difficult. Major AI companies are now adopting collaborative approaches to governance challenges. Recently, OpenAI announced it would implement Anthropic's Model Context Protocol (MCP), an open standard for connecting AI models to data sources that is rapidly becoming an industry norm, with Google following suit.
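To give a sense of what implementing MCP involves, here is a minimal sketch of an MCP server written with the official Python SDK; the server name, tool, and canned forecast are hypothetical stand-ins for a real data source, not part of any actual deployment.

```python
# Minimal MCP server sketch using the official Python SDK (the "mcp" package).
# The server name, tool, and canned data below are illustrative assumptions.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather-demo")  # hypothetical server name

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a forecast for a city; a real server would query a live data source."""
    return f"Forecast for {city}: sunny, 22°C"

if __name__ == "__main__":
    mcp.run()  # speaks the protocol over stdio so an MCP-aware model host can connect
```

Because the protocol is shared, any MCP-aware host can discover and call a tool like this one regardless of which vendor built the model, which is exactly what makes it attractive as a common standard.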
With any new technology, there are unexpected benefits and consequences. As Harris put it, 'whatever our power is as a species, AI amplifies it to an exponential degree.'
While GenAI helps us accomplish more than ever before, dangers exist. A seemingly safe large language model (LLM) can be manipulated by bad actors to create harmful content or be jailbroken to write malicious code. How do we avoid these harmful use cases while benefiting from this powerful technology? Three approaches are possible, each with its own merits and drawbacks.
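Before weighing those approaches, it helps to see what a first-line defense even looks like. Below is a deliberately naive sketch of the guardrail pattern: screen both the prompt and the completion against a policy before anything reaches the user. The blocklist and function names are hypothetical, and real deployments rely on trained safety classifiers rather than substring matching.

```python
# Toy guardrail wrapper around an LLM call. Everything here is illustrative:
# production systems use trained classifiers and layered policies, not substring checks.
from typing import Callable

BLOCKED_TERMS = {"build a weapon", "write ransomware"}  # hypothetical policy list

def violates_policy(text: str) -> bool:
    """Crude stand-in for a safety classifier: flag any blocked phrase."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def guarded_generate(prompt: str, model_call: Callable[[str], str]) -> str:
    """Check the prompt before the model runs, and the completion before it is returned."""
    if violates_policy(prompt):
        return "Request refused by policy."
    completion = model_call(prompt)  # plug in whatever LLM client you already use
    if violates_policy(completion):
        return "Response withheld by policy."
    return completion
```

The point of the sketch is the shape, two checkpoints wrapped around the model, not the filter itself, which a determined jailbreaker could trivially evade.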
3 ways to benefit from AI while avoiding harm
Option #1: Government regulation
The automobile brought both convenience and tragedy. We responded with speed limits, seatbelts, and regulations—a process spanning over a century.
Legislators worldwide are attempting similar safeguards for AI. The European Union leads with its AI Act, which entered into force in August 2024. Implementation is phased, with some provisions active since February 2025, banning systems that pose 'unacceptable risk,' such as social scoring and the untargeted scraping of facial images to build facial-recognition databases.
However, these regulations present challenges. European tech leaders worry that punitive EU measures could trigger backlash from the Trump administration. Meanwhile, U.S. regulation develops as a patchwork of state and federal initiatives, with states like Colorado enacting their own comprehensive AI laws.
The EU AI Act's implementation timeline illustrates this complexity: The first bans took effect in February 2025, codes of practice follow nine months after entry into force, rules on general-purpose AI apply at the 12-month mark, and high-risk systems have 36 months to comply.
A real concern exists: Excessive regulation might simply shift development elsewhere. Building a functional LLM costs only hundreds of millions of dollars, a sum within reach for many countries.
While regulation has its place, the process is currently too flawed to produce good rules. AI evolves too quickly, and the industry attracts too much investment. The resulting regulations risk either stifling innovation or having no meaningful impact.
So, if government regulation isn't the panacea for AI's dangers, what will help?
Option #2: Social discourse
Educators are struggling with GenAI and academic honesty. Some want to block AI entirely, while others see opportunities to empower students who struggle with traditional pedagogy.
Imagine having a perpetually available tutor answering any question—but one that can also complete your assignments. As Satya Nadella put it recently on the Dwarkesh Podcast, his new workflow is to 'think with AI and work with my colleagues.' This collaborative approach to AI usage could be a model for educational settings, where AI serves as a thinking partner rather than a replacement for learning.
In homes, schools, online forums, and government, society must reckon with this technology and decide what's acceptable. Everyone deserves a voice in these conversations. Unfortunately, internet discussions often devolve into trading sound bites without context or nuance.
For meaningful conversations, we must educate ourselves. We need effective channels for public input, perhaps through grassroots movements guiding people toward safe and effective AI usage.
Option #3: Third-party evaluators
Before the 2008 financial crisis, credit rating agencies assigned AAA ratings to securities built on subprime mortgages, contributing to economic disaster. The problem? Industry-wide self-interest.
When it comes to AI evaluators, of course, we run the risk of an incestuous revolving door that does more harm than good. But that doesn't have to be the case.
Meaningful and thoughtful research is going into AI certifications and third-party evaluators. In the paper 'AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries,' Peter Cihon et al. make several key points.
First, because AI technology is advancing so quickly, AI certification should emphasize evergreen principles, such as ethics for AI developers.
Second, AI certification today lacks nuance for particular circumstances, geographies, or industries. Not only is certification homogeneous, but many programs treat AI as a 'monolithic technology' rather than acknowledging its diverse forms, such as facial recognition, LLMs, and anomaly detection.
Finally, to see good results, customers must demand high-quality certifications. They have to be educated about the technology and the associated ethics and safety concerns.
The path forward
The way forward requires multistakeholder, multifaceted conversations about societal goals and preventing AI dangers. If government becomes the default regulator, we risk an uninvestable marketplace or meaningless rubber-stamping.
Independent third-party evaluators, combined with informed social discourse, offer the best path forward. But we must educate ourselves about this powerful technology's dangers and realities, or we'll repeat social media's errors on a grander scale.
Peter Wang is chief AI and innovation officer at Anaconda.
