
The AI race just shifted gears — LQMs are now driving real-world results
A little over two years since OpenAI kicked off the AI race, the launch of DeepSeek's R1 model added a new dimension to the conversation, bringing fresh attention to the role of smaller models, open-source innovation, and compute optimisation. Virtually overnight, the industry's focus shifted from sheer model size to efficiency, accessibility, and practical deployment. Now, with an increasing number of high-performance models available as open source, traditional barriers to entry are eroding. This is forcing AI providers to rethink their value propositions and is setting a new direction for those seeking an AI advantage.
Yes, LLMs remain a powerful tool, and the diversity of thought and approach afforded by the open-source community will only accelerate innovation. However, increasing access to powerful models also signals the need for differentiation beyond raw model capability. This shift is already pushing the industry toward new frontiers, where AI innovation is no longer just about who can build the largest model, but about who can design and deploy AI solutions that solve complex business challenges with measurable results. Among the most promising of these frontiers are Large Quantitative Models (LQMs).
AI's exciting new avenue
The key difference between LLMs and LQMs lies in the data they are trained on and the problems they are designed to solve. LLMs are built on vast amounts of textual data, enabling them to understand and generate human language, making them ideal for tasks like answering questions, generating content, and facilitating natural interactions. LQMs, on the other hand, are trained on numerical data, leveraging machine learning to analyse complex datasets, identify patterns, and drive data-driven decision-making in fields like finance, healthcare, and scientific research.
For the UAE and its GCC neighbours, where economic vision projects emphasise homegrown innovation, LQMs present significant opportunities. In areas such as pharmaceutical discovery and petrochemical R&D, these models offer advanced analytical capabilities that can accelerate breakthroughs and enhance decision-making.
From near-horizon to here-now
LQMs are not just the future of AI — they are already delivering real-world impact for industry pioneers today. However, broader market adoption has been hindered by the complexity of implementation. Unlike LLMs, LQMs require a blend of deep domain expertise, sophisticated software engineering, and robust data management capabilities. This makes in-house development challenging unless organisations are prepared to make significant upfront investments. But securing such investment depends on a compelling business case, which requires leaders to identify high-value applications within their operations. Fortunately, decision-makers can draw inspiration from existing real-world deployments where LQMs have demonstrated clear advantages over conventional AI approaches.
Accelerated drug discovery
The UAE is forging a reputation as a medical tourism hub, having drawn widespread respect for its decisiveness during the COVID crisis by being among the first to market with life-saving treatments and running one of the world's most successful vaccination programmes. The country is eager to bolster its drug-discovery credentials, and LQMs can play a crucial role by establishing links between the chemical structure of compounds and their biological activity, allowing researchers to optimise drug candidates more effectively. Unlike traditional AI models, LQMs excel at capturing intricate relationships within complex datasets, enabling more precise predictions and deeper insights — key advantages in pharmaceutical R&D, where accuracy and efficiency are paramount. They can model molecular interactions, predict protein folding, and accelerate hypothesis testing, significantly reducing the time and cost associated with bringing new therapies to market.
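The underlying idea of linking a compound's structure to its activity can be sketched in miniature. The toy below is not an LQM: it fits a simple linear structure-activity relationship from a single numeric descriptor to an activity score, with every value invented purely to illustrate the kind of quantitative mapping involved.

```python
# Hypothetical structure-activity sketch: each compound is reduced to one
# numeric descriptor (say, a lipophilicity-like value), and we fit
# activity = slope * descriptor + intercept by least squares.
# All numbers below are invented for illustration.

descriptors = [0.5, 1.0, 1.5, 2.0, 2.5]   # hypothetical descriptor values
activities  = [1.1, 2.0, 3.1, 3.9, 5.1]   # hypothetical measured activities

n = len(descriptors)
mean_x = sum(descriptors) / n
mean_y = sum(activities) / n

# Closed-form simple linear regression.
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(descriptors, activities))
var = sum((x - mean_x) ** 2 for x in descriptors)
slope = cov / var
intercept = mean_y - slope * mean_x

def predict_activity(descriptor):
    """Predict activity for a new candidate compound's descriptor value."""
    return slope * descriptor + intercept
```

Real models in this space work over thousands of descriptors and non-linear relationships, but the principle is the same: a numeric mapping from structure to predicted behaviour that can be queried far faster than a wet-lab experiment.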
These same characteristics make LQMs valuable in other high-stakes, data-intensive fields, such as materials science, where they can identify novel compounds with desirable properties, or financial risk modelling, where they can uncover complex patterns in high-noise, low-signal economic data. As adoption grows, industries that rely on deep scientific or strategic reasoning will increasingly see LQMs drive breakthroughs.
Fuelling advancements in oil & gas
LQMs are also being used by the region's petrochemical sector as its players pursue growth within the confines of net-zero and sustainability commitments. Saudi Aramco is currently developing a differentiable computational fluid dynamics (CFD) solver for use in oil and gas processing facilities. LQMs can simulate how gases and liquids interact, allowing Aramco to optimise a critical business process while still reducing emissions and waste.
What makes LQMs particularly well suited to enhancing the petrochemical production chain is their ability to model complex chemical reactions and process optimisations with high fidelity, even when data is sparse or highly specialised. By analysing reaction kinetics, refining efficiencies, and material properties, LQMs help drive breakthroughs in catalyst design, fuel formulation, and carbon capture technologies.
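The "differentiable" part is what makes such solvers useful for optimisation. As a stripped-down, hypothetical illustration (not Aramco's solver), suppose a process simulation exposes the gradient of an operating cost with respect to a single parameter; that parameter can then be tuned by gradient descent instead of exhaustive trial runs.

```python
# Toy illustration of differentiable process optimisation.
# The "simulator" is an invented stand-in: cost as a function of one
# operating temperature, with the analytic derivative a differentiable
# solver would provide automatically.

def cost(temperature):
    # Hypothetical trade-off: deviating from an ideal operating point
    # wastes energy or yield; minimum sits at temperature = 350.
    return (temperature - 350.0) ** 2 + 100.0

def dcost_dt(temperature):
    # Gradient of the cost; a differentiable solver supplies this.
    return 2.0 * (temperature - 350.0)

t = 300.0                       # initial operating point
learning_rate = 0.1
for _ in range(200):
    t -= learning_rate * dcost_dt(t)   # plain gradient descent step
# t has now converged to the cost-minimising operating point (~350).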
Such advantages translate to other industries that require precise modelling of intricate physical systems, such as advanced manufacturing, where they can optimise production workflows, or aerospace engineering, where they can enhance aerodynamics and materials performance.
Impetus for ideation
Effectively leveraging LQMs requires a clear understanding of their capabilities and the challenges they are best suited to address. Organisations should begin by identifying high-impact problems that rely on quantitative analysis. Industries such as biopharma, energy, and aerospace frequently require scientific precision — whether in predicting molecular behaviour in drug discovery or simulating battery performance in energy storage.
Once a quantifiable problem has been defined, the next step is to evaluate the availability of high-fidelity data. LQMs can both perform simulations and utilise simulation-generated data, making them particularly effective in domains where experimental testing is costly or impractical. However, the quality and relevance of this data are critical — models must be trained on datasets that accurately reflect the systems they are designed to analyse. A robust data pipeline is essential to ensure consistency and reliability.
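The interplay described above, where a model both consumes and stands in for simulation output, can be sketched with a toy surrogate. The "simulator" below is an invented formula standing in for an expensive physics run; the surrogate is a cheap piecewise-linear lookup built from a coarse grid of simulator outputs.

```python
import bisect

def slow_simulator(x):
    # Stand-in for an expensive physics simulation (invented formula).
    return x ** 3 - 2 * x

# Offline step: run the expensive simulator once on a coarse grid.
grid = [i / 10 for i in range(0, 21)]            # x in [0.0, 2.0]
samples = [(x, slow_simulator(x)) for x in grid]
xs = [p[0] for p in samples]

def surrogate(x):
    """Cheap piecewise-linear surrogate queried instead of the simulator."""
    i = bisect.bisect_right(xs, x) - 1
    i = max(0, min(i, len(samples) - 2))
    (x0, y0), (x1, y1) = samples[i], samples[i + 1]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
```

Production-grade surrogates are learned models rather than interpolation tables, but the pipeline shape is the same: costly, trusted simulation runs generate the training data, and a fast approximation then serves the thousands of queries an optimisation loop demands. This is also why data quality dominates: the surrogate can only be as faithful as the samples it was built from.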
The ultimate measure of an LQM's effectiveness is its ability to generate actionable insights with measurable business impact. Some LQMs can predict key performance metrics — such as battery efficiency — in a fraction of the time required by conventional approaches, leading to accelerated R&D cycles. By enabling faster iteration and deeper optimisation, these models not only provide a competitive edge but also open the door to transformative breakthroughs that can reshape entire industries.
Opportunity abounds
PwC estimates that AI could generate $320 billion in economic value for the Middle East by 2030, but capturing this opportunity requires strategic investment in the right technologies. LQMs stand out as one of the most effective tools in the AI arsenal, offering a level of precision and adaptability that traditional models struggle to match. However, their impact hinges on business leaders recognising where they can drive the most value. The organisations that move swiftly to understand and deploy LQMs in the right areas will be the ones best positioned to capitalise on AI's economic promise.
The writer is Head of AI Strategy & Partnerships at SandboxAQ.
