
Innovative parks aren't just bold urban design—they lower the temperature in cities
Cities, and those who live in them, are clamoring for more green space and the benefits that parks, trees, and recreation areas provide. The Trust for Public Land's annual ParkScore report found that nearly a quarter of Americans in the 100 largest cities don't live within a 10-minute walk of a park or green space.
While few cities have acres and acres of space to transform into parkland, they do have opportunities to create new types of urban parks: elevated parks, pocket parks fashioned from vacant lots, rails-to-trails conversions, and highway caps that open up new green space. New research, including an exclusive project analysis for Fast Company, finds that these projects have a significant cooling effect, showing how such infrastructure interventions can bring much-needed relief to some of the densest parts of urban America.
A study conducted by Climate Central on behalf of the High Line found that New York City's iconic linear park offers substantial cooling and shading benefits, in addition to the social and environmental benefits of adding park space.
'We always had a suspicion that we can also make our community healthier and more livable, and we wanted data around it,' says Alan van Capelle, executive director of Friends of the High Line.
Researchers started by tracking the urban heat island intensity (UHII) of the areas surrounding the High Line in Manhattan. This measurement captures the additional heat created in urban environments by buildings, pavement, and density. Some neighborhoods near the High Line exhibited a UHII of 12.9°F, among the highest intensities Climate Central has found in its analysis of 65 U.S. cities.
But along many stretches of the park, the UHII dropped to just 4.7°F, a cooling effect of roughly eight degrees. Some of that comes from the shade cast by the structure itself, but even more comes from the additional shading, transpiration, and overall cooling provided by so many added trees and plants.
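To make that comparison concrete, here is a minimal sketch of the arithmetic in Python. It assumes the simple reading of UHII as the extra warmth of an urban site over a non-urban baseline; the function and variable names are illustrative, not Climate Central's methodology.

```python
# Illustrative sketch: a park's "cooling impact" read as the drop in urban
# heat island intensity (UHII) between nearby built-up blocks and stretches
# of the park itself. Figures are the ones reported in this article.

def cooling_impact(uhii_without_park_f: float, uhii_with_park_f: float) -> float:
    """Reduction in urban heat island intensity, in degrees Fahrenheit."""
    return uhii_without_park_f - uhii_with_park_f

neighborhood_uhii = 12.9  # °F of extra heat in some blocks near the High Line
park_uhii = 4.7           # °F along many stretches of the park

print(f"{cooling_impact(neighborhood_uhii, park_uhii):.1f}°F cooler")
# Prints "8.2°F cooler", roughly the eight-degree difference cited above.
```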
There was variance along the High Line: areas that are primarily rock and shrubs showed a less pronounced cooling effect, underscoring that it isn't shading alone that makes the difference. It's not exactly news that parks provide cooling benefits to cities. But evidence that adaptive-reuse parks in the midst of dense urban areas can achieve such pronounced temperature differences suggests they can be an important tool for urban cooling.
Climate Central found that other such parks have similar effects. In exclusive research for Fast Company, Jennifer Brady, senior data analyst at Climate Central, applied existing data and research to a number of newer urban parks across the country and found comparable cooling impacts.
Chicago's 606, an elevated rails-to-trails project on the city's near northwest side, may cool adjacent neighborhoods by 6°F to 8°F, depending on the precise building types and density. Klyde Warren Park in Dallas, which caps a highway adjacent to downtown and runs through one of the city's hottest neighborhoods, yields temperatures approximately 4°F to 6°F cooler.
The Lafitte Greenway in New Orleans and Railroad Park in Birmingham, Alabama, both located in relatively cooler parts of their respective cities, still cool adjoining areas by 4°F.
The design of these parks, including shade structures, the shade cast by bridges and overhangs, and of course plants and tree cover, can make a big difference, says Brady. It also helps that much of this kind of abandoned industrial infrastructure, with its concrete and old buildings, adds to the heat, so simply removing it reduces urban heat gain.
But it also shows that cities can target particularly dense areas with the most pronounced heat island effect and make a dramatic difference. There has always been a strong case for transforming vacant and leftover lots in areas without park access, from a recreation and health perspective as well as a public safety one. Adding cooling and climate resilience to the list should make an even stronger case for investing in these kinds of industrial-reuse park projects.
Last year, the nation's 100 largest cities invested a record $12.2 billion in parks; steering more of that funding toward these types of projects could have a serious resilience impact in an era of heightened climate change.
Van Capelle says there are currently 49 other such reuse park projects underway across North America as part of the High Line Network, an advocacy group for these kinds of green space projects. He sees heat island mitigation as just another reason to advocate for and invest in these projects.
'Being able to step out of your apartment and go into a cool location, being able to know that in the summertime, when the city can become uncomfortable, there's a place like the High Line that runs along a number of neighborhoods, is vitally important,' says van Capelle.