Ethereum Community Releases Comprehensive Report Outlining Ether's Bull Case to Institutional Investors
New York, United States:
Report Underscores ETH's Value Proposition as the 'Digital Oil' Powering a Global Digital Economy
Report's Contributors Include Etherealize's Vivek Raman, ether.fi Founder and CEO Mike Silagadze, and Other Leading Figures from the Ethereum Community
Members of the Ethereum community today announced the release of a new report targeting institutional investors that presents 'The Bull Case for ETH.' The report, which can be found in full here, represents a collaborative effort among many prominent leaders and researchers from the Ethereum ecosystem, with contributors including Etherealize co-founders Vivek Raman, Danny Ryan, and Grant Hummer, as well as ether.fi Founder and CEO Mike Silagadze.
The report outlines why ETH – the native asset underpinning Ethereum's transformative ecosystem – is among the most significantly mispriced assets in global markets, offering one of the largest asymmetric upside opportunities across all asset classes today. With the global financial system poised for a generational transformation as more and more institutions begin to put their assets onchain, Ethereum has emerged as the most viable base layer for a fully digital and composable financial ecosystem – already hosting over 80% of all tokenized assets and serving as the default platform for stablecoins and institutional blockchain infrastructure. As the report explains, ETH is more than just a store of value – it is the fuel, collateral, and reserve asset of the financial system of the future. ETH is the digital oil powering the digital economy.
The report details why ETH should be considered a core allocation in institutional strategies that prioritize long-term value creation, technology exposure, and future-proof financial infrastructure – laying out its case across three core sections:
Understanding ETH: The Digital Oil of the Digital Economy – Explores the relationship between Ethereum and ETH, ETH's utility and unique properties, proper valuation frameworks for assessing ETH's value as an asset, and the reasons it is currently undervalued and underrepresented in the portfolios of institutional investors looking for asymmetric opportunities and productive stores of value.
Ethereum: The Infrastructure Driving ETH's Ascent – Covers the structural, technological, and economic drivers behind the Ethereum network's growing momentum, and presents a case for why Ethereum's likely position as the foundational layer of the global digital financial system will support and amplify ETH's economic importance.
Ethereum & AI: The Economic Engine of the Autonomous Economy – Evaluates Ethereum's potential role and value in a financial system powered by autonomous agents.
'We've reached a tipping point where Ethereum and ETH are no longer optional for traditional finance,' said Vivek Raman, co-founder and CEO of Etherealize. 'ETH is becoming the indispensable asset at the heart of a new, digitally native financial system, where tokenization and onchain infrastructure are the norm, not the future. Simply put, ETH is digital oil: the essential fuel for tomorrow's global financial rails. The chance to be early in this transformation and to harness ETH's unmatched value is more powerful than ever. Our goal with this report is to educate institutions at this critical moment.'
'Institutional investors have been so focused on Bitcoin and its narrative as a store of value that they have overlooked an asset with far greater growth potential,' said Joseph Lubin, Founder and CEO of Consensys and Co-founder of Ethereum. 'ETH not only shares the same store of value properties that made Bitcoin popular, but it also has extensive utility, offers more predictable scarcity, and provides a regular yield, positioning it as the ultimate productive reserve asset. As Ethereum further entrenches itself as the backbone of the digital economy, ETH becomes more indispensable – not only as the fuel powering Ethereum, but as a strategic investment in the infrastructure of the future.'
The report's full list of contributors includes: Danny Ryan, Vivek Raman, Grant Hummer, Zach Obront, Rodrigo Vazquez, Ryan Berckmans, Leo Lanza, Hanniabu, Mike Silagadze, Anthony Sassano, Ryan Sean Adams, Andrew Keys, Tim Lowe, Maria Shen, Ken Deeter, Amanda Cassatt, Aftab Hossain, William Mougayar, Mariano Di Pietrantonio, Agustin do Rego, and Valeria Salazar.
About Etherealize
Etherealize is a project focused on accelerating Ethereum adoption and bringing institutional assets onto the network. Founded in 2024, Etherealize serves as a bridge between traditional and decentralized finance, offering research, educational content, and products that support the integration of real-world assets onto the Ethereum blockchain. Etherealize aims to help establish Ethereum as the digital back office for Wall Street – enabling a new era of financial infrastructure that is digital, programmable, transparent, and open to the world.
For more information, visit www.etherealize.com.
About ether.fi
ether.fi is a liquid staking protocol that allows stakers to retain control of their keys while delegating validator operations to a node operator. Formed under a shared vision of what DeFi should be, ether.fi offers stakers a decentralized, non-custodial staking solution that can serve as a building block for web3 infrastructure.
View source version on businesswire.com: https://www.businesswire.com/news/home/20250610570293/en/
Disclaimer: The above press release comes to you under an arrangement with Business Wire. Business Upturn takes no editorial responsibility for the same.