Lleida.net increased its active customer base by more than twenty-five percent in the year's first quarter.

Yahoo · 26-05-2025

Madrid, May 26.- Spanish technology company Lleida.net (BME: LLN) (EPA:ALLLN) (OTCQX:LLEIF) increased its active client base by more than 25% in the first quarter of the year.
As of March 31, 2025, the company had 6,053 active clients, compared to 4,725 on the same date last year.
"This increase represents the largest growth in clients since 2020, demonstrating that digital trust services, which were barely novel five years ago, have now become a fundamental part of the economy," explained Sisco Sapena, the company's CEO.
Most of Lleida.net's customers come from outside Spain, and the company now generates 51.96% of its revenue internationally, compared with 48.05% from clients based in Spain.
The company considers a client active if it has been invoiced at least once in the past 24 months.
According to information sent today to BME Growth and Euronext Growth, the average ticket of the company's main clients has increased from around 25,200 euros per quarter to slightly above 29,600 euros in the year's first quarter.
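For context, a quick back-of-the-envelope check (using only the figures quoted in this release; the percentages below are simple period-over-period changes, not company-reported metrics) bears out the headline growth rate:

# Back-of-the-envelope check of the figures quoted in this release.
# All inputs come from the text above; the percentages are simple
# period-over-period changes, not company-reported metrics.
active_clients_mar_2024 = 4_725    # active clients as of March 31, 2024
active_clients_mar_2025 = 6_053    # active clients as of March 31, 2025

avg_ticket_before = 25_200         # approx. quarterly average ticket, in euros
avg_ticket_q1_2025 = 29_600        # approx. quarterly average ticket, in euros

client_growth = (active_clients_mar_2025 - active_clients_mar_2024) / active_clients_mar_2024
ticket_growth = (avg_ticket_q1_2025 - avg_ticket_before) / avg_ticket_before

print(f"Active client growth: {client_growth:.1%}")   # ~28.1%, i.e. "more than 25%"
print(f"Average ticket growth: {ticket_growth:.1%}")  # ~17.5%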
The company has observed that its largest customer segment consists of clients that spend more than 5,000 euros per quarter on its trust services.
"Our clients have grown thanks to us. We've already observed an interesting evolution showing that our largest clients are the ones who use our services, such as certified electronic signature and notification, most recurrently," Sapena explained.
The company recently presented the best quarterly results in its history, resulting from the effective execution of its recovery plan.
The company holds more than 350 international patents in over 64 countries and has been listed on various international stock exchanges for ten years.
SAFE HARBOR STATEMENT
This press release contains statements regarding the future of the company and its innovations. Forward-looking statements may be accompanied by words such as "anticipate", "believe", "estimate", "expect", "intend", "may", "plan", "potential", the use of the future tense and other terms of similar meaning. No undue reliance should be placed on these statements. They involve risks and uncertainties that could cause actual results to differ materially from those reflected in such statements, including uncertainty about the company's commercial success, its ability to protect its intellectual property rights, and other risks. These statements are based on current beliefs and forecasts and speak only as of the date of this press release. The company assumes no obligation to publicly update its forward-looking statements, whether as a result of new information, future events or any other circumstance.

Related Articles

ChatGPT Has Already Polluted the Internet So Badly That It's Hobbling Future AI Development

Yahoo · 3 hours ago

The rapid rise of ChatGPT — and the cavalcade of competitors' generative models that followed suit — has polluted the internet with so much useless slop that it's already kneecapping the development of future AI models. As the AI-generated data clouds the human creations that these models are so heavily dependent on amalgamating, it becomes inevitable that a greater share of what these so-called intelligences learn from and imitate is itself an ersatz AI creation.

Repeat this process enough, and AI development begins to resemble a maximalist game of telephone in which not only is the quality of the content being produced diminished, resembling less and less what it's originally supposed to be replacing, but in which the participants actively become stupider. The industry likes to describe this scenario as AI "model collapse."

As a consequence, the finite amount of data predating ChatGPT's rise becomes extremely valuable. In a new feature, The Register likens this to the demand for "low-background steel," or steel that was produced before the detonation of the first nuclear bombs, starting in July 1945 with the US's Trinity test.

Just as the explosion of AI chatbots has irreversibly polluted the internet, so did the detonation of the atom bomb release radionuclides and other particulates that have seeped into virtually all steel produced thereafter. That makes modern metals unsuitable for use in some highly sensitive scientific and medical equipment. And so, what's old is new: a major source of low-background steel, even today, is WW1- and WW2-era battleships, including a huge naval fleet that was scuttled by German Admiral Ludwig von Reuter in 1919.

Maurice Chiodo, a research associate at the Centre for the Study of Existential Risk at the University of Cambridge, called the admiral's actions the "greatest contribution to nuclear medicine in the world."

"That enabled us to have this almost infinite supply of low-background steel. If it weren't for that, we'd be kind of stuck," he told The Register. "So the analogy works here because you need something that happened before a certain date."

"But if you're collecting data before 2022 you're fairly confident that it has minimal, if any, contamination from generative AI," he added. "Everything before the date is 'safe, fine, clean,' everything after that is 'dirty.'"

In 2024, Chiodo co-authored a paper arguing that there needs to be a source of "clean" data not only to stave off model collapse, but to ensure fair competition between AI developers. Otherwise, the early pioneers of the tech, after ruining the internet for everyone else with their AI's refuse, would boast a massive advantage by being the only ones that benefited from a purer source of training data.

Whether model collapse, particularly as a result of contaminated data, is an imminent threat is a matter of some debate. But many researchers have been sounding the alarm for years now, including Chiodo.

"Now, it's not clear to what extent model collapse will be a problem, but if it is a problem, and we've contaminated this data environment, cleaning is going to be prohibitively expensive, probably impossible," he told The Register.

One area where the issue has already reared its head is with the technique called retrieval-augmented generation (RAG), which AI models use to supplement their dated training data with information pulled from the internet in real time. But this new data isn't guaranteed to be free of AI tampering, and some research has shown that this results in the chatbots producing far more "unsafe" responses.

The dilemma is also reflective of the broader debate around scaling, or improving AI models by adding more data and processing power. After OpenAI and other developers reported diminishing returns with their newest models in late 2024, some experts proclaimed that scaling had hit a "wall." And if that data is increasingly slop-laden, the wall would become that much more impassable.

Chiodo speculates that stronger regulations like labeling AI content could help "clean up" some of this pollution, but this would be difficult to enforce. In this regard, the AI industry, which has cried foul at any government interference, may be its own worst enemy.

"Currently we are in a first phase of regulation where we are shying away a bit from regulation because we think we have to be innovative," Rupprecht Podszun, professor of civil and competition law at Heinrich Heine University Düsseldorf, who co-authored the 2024 paper with Chiodo, told The Register. "And this is very typical for whatever innovation we come up with. So AI is the big thing, let it go and fine."

More on AI: Sam Altman Says "Significant Fraction" of Earth's Total Electricity Should Go to Running AI
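To make the collapse dynamic concrete, here is a deliberately oversimplified toy sketch in Python (an illustration of the general idea only; the Gaussian setup and the numbers are assumptions, not code from the article or any cited paper). A "model" that is just a Gaussian fit is retrained, generation after generation, solely on samples produced by its own predecessor; with small training sets, its spread tends to drift toward zero, and the diversity of what it can produce withers.

# Toy illustration of the "model collapse" dynamic described above: each
# generation of a "model" (here just a Gaussian fit) is trained only on
# samples produced by the previous generation. With small samples, the
# fitted spread tends to drift toward zero, i.e. diversity is lost.
import random
import statistics

random.seed(0)

SAMPLES_PER_GEN = 20          # a small "training set" exaggerates the effect
GENERATIONS = 50

# Generation 0 is trained on "human" data: a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(SAMPLES_PER_GEN)]

for gen in range(GENERATIONS + 1):
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    if gen % 10 == 0:
        print(f"generation {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")
    # The next generation sees only synthetic data from the current fit.
    data = [random.gauss(mu, sigma) for _ in range(SAMPLES_PER_GEN)]

Real language models are vastly more complicated, but the feedback loop is the same: each generation can, at best, reproduce only what the previous one handed it.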

OpenAI Concerned That Its AI Is About to Start Spitting Out Novel Bioweapons

Yahoo · 3 hours ago

OpenAI is bragging that its forthcoming models are so advanced, they may be capable of building brand-new bioweapons.

In a recent blog post, the company said that even as it builds more and more advanced models that will have "positive use cases like biomedical research and biodefense," it feels a duty to walk the tightrope between "enabling scientific advancement while maintaining the barrier to harmful information." That "harmful information" includes, apparently, the ability to "assist highly skilled actors in creating bioweapons."

"Physical access to labs and sensitive materials remains a barrier," the post reads — but "those barriers are not absolute."

In a statement to Axios, OpenAI safety head Johannes Heidecke clarified that although the company does not necessarily think its forthcoming AIs will be able to manufacture bioweapons on their own, they will be advanced enough to help amateurs do so.

"We're not yet in the world where there's like novel, completely unknown creation of biothreats that have not existed before," Heidecke said. "We are more worried about replicating things that experts already are very familiar with."

The OpenAI safety czar also admitted that while the company's models aren't quite there yet, it expects "some of the successors of our o3 (reasoning model) to hit that level."

"Our approach is focused on prevention," the blog post reads. "We don't think it's acceptable to wait and see whether a bio threat event occurs before deciding on a sufficient level of safeguards."

As Axios notes, there's some concern that the very same models that assist in biomedical breakthroughs may also be exploited by bad actors. To "prevent harm from materializing," as Heidecke put it, these forthcoming models need to be programmed to "near perfection" to both recognize and alert human monitors to any dangers. "This is not something where like 99 percent or even one in 100,000 performance is sufficient," he said.

Instead of heading off such dangerous capabilities at the pass, though, OpenAI seems to be doubling down on building these advanced models, albeit with ample safeguards. It's a noble enough effort, but it's easy to see how it could go all wrong. Placed in the hands of, say, an insurgent agency like the United States' Immigration and Customs Enforcement, it would be easy enough to use such models for harm. If OpenAI is serious about so-called "biodefense" contracting with the US government, it's not hard to envision a next-generation smallpox blanket scenario.

More on OpenAI: Conspiracy Theorists Are Creating Special AIs to Agree With Their Bizarre Delusions

Why AI Literacy Is Essential For Success In An AI-Driven Economy

Forbes · 3 hours ago

While students across America master algebra and essay writing, they're graduating without understanding the technology that's reshaping every industry. Artificial intelligence now powers everything from customer service chatbots to medical diagnoses, yet most high schools treat AI literacy as optional—if they address it at all. This educational gap creates a serious disadvantage for young people entering the workforce.

AI literacy should be taught in schools just like math, science, and English—as a fundamental subject necessary for navigating the modern world. Without this foundation, students miss critical opportunities to understand and leverage the technology that will define their careers.

Why Schools Must Teach AI Literacy Now

The numbers make the case for urgent educational reform. McKinsey research indicates that generative AI could contribute between $2.6 trillion and $4.4 trillion annually to the global economy. Companies successfully implementing AI see up to 40% performance increases across their workforce. Despite this massive economic impact, most students graduate without basic AI literacy.

This educational oversight creates real consequences. Young people instinctively understand AI's potential but lack the structured learning that would help them use it effectively and ethically.

Schools that ignore AI literacy are doing their students a disservice. Just as computer literacy became essential in the 1990s, AI literacy is becoming mandatory for professional success. Educational institutions must integrate AI training into their core curricula, rather than treating it as an elective or afterthought.

This requires more than simply allowing AI tools in classrooms. Schools need to hire qualified instructors or partner with AI education specialists to develop comprehensive training programs. They must create AI usage plans that outline how AI will be used across subjects, what ethical guidelines students should follow, and how AI education aligns with broader learning objectives. Educators themselves need thorough training on the school's AI plan to implement it consistently and effectively. Parents and students deserve transparency about these policies, including which AI tools are approved for use and how schools plan to prepare students for an AI-driven future.

How AI Skills Connect to Future Career Success

The modern workplace demands AI literacy across industries and roles. Marketing professionals use AI for content creation and customer analysis. Healthcare workers rely on AI-powered diagnostic tools. Financial advisors leverage AI for investment research and risk assessment. Even creative fields like graphic design and music production now incorporate AI tools as standard practice.

For young people entering this job market, AI literacy provides three critical advantages:

Enhanced Problem-Solving Capabilities

AI tools amplify human intelligence by processing vast amounts of data and identifying patterns humans might miss. Workers who understand how to leverage these capabilities become more effective problem-solvers and strategic thinkers.

Increased Productivity and Efficiency

AI can automate routine tasks, allowing workers to focus on higher-value activities that require creativity and critical thinking. Employees who master AI tools often outperform those who rely solely on traditional methods.

Adaptability to Technological Change

The pace of AI development continues to accelerate. Workers with strong AI literacy can adapt to new tools and applications as they emerge, while those without this foundation struggle to keep pace with technological evolution.

How Teen Entrepreneurs Are Leading AI Adoption

Teen entrepreneurs demonstrate what's possible when young people embrace AI literacy. Teen entrepreneurs in WIT (Whatever It Takes), which I founded in 2009, have a distinct advantage because we teach AI usage through our own AI platform, WITY. We show students how AI can be used for good—to enhance their capabilities and solve real problems—but not to replace their unique thoughts, creativity, and authentic voice.

AI-Powered Market Research

Teen entrepreneurs utilize AI to analyze social media trends, comprehend customer sentiment, and pinpoint market gaps. These tools process thousands of data points in minutes, providing insights that would take weeks to gather manually.

AI-Enhanced Content Creation

Young business owners leverage AI for writing product descriptions, creating marketing materials, and generating social media content. They use these tools for initial drafts and brainstorming, then add their voice and expertise to create authentic, engaging content.

AI-Driven Business Operations

Teen entrepreneurs implement AI tools for inventory management, customer service automation, and financial tracking. This allows them to operate efficiently with small teams while competing against much larger businesses.

AI-Assisted Customer Engagement

Young entrepreneurs utilize AI chatbots for initial customer inquiries, AI-powered email marketing for personalized communications, and AI analytics to gain insights into customer behavior patterns and preferences.

The success of these teen entrepreneurs proves that age isn't a barrier to AI adoption. Their willingness to experiment with new technologies gives them significant advantages over competitors using traditional methods.

Building AI Skills for Future Success

For students and young professionals who haven't yet developed AI literacy, the path forward involves practical, hands-on learning:

Start with Immediate Applications

Begin by using AI tools for tasks you already perform. If you write frequently, experiment with AI writing assistants. If you manage social media, try AI-powered content creation tools. This approach fosters familiarity while addressing real-world problems.

Understand Industry-Specific AI Use Cases

Research how AI is transforming your field of interest. Healthcare students should explore AI diagnostic tools. Business majors should understand the applications of AI in marketing and operations. This knowledge helps you speak intelligently about AI's role in your chosen career.

Practice Ethical AI Implementation

Learn to recognize AI limitations, bias, and ethical considerations. Understanding when AI might produce inaccurate or biased results helps you use these tools responsibly while maintaining credibility with colleagues and customers.

Experiment with Different AI Tools

Try various AI platforms to understand their strengths and limitations. Experience with multiple tools enables you to select the most suitable AI solution for specific tasks and comprehend how various approaches function.

Preparing for an AI-Driven Future

AI literacy isn't just about understanding current tools—it's about developing the mindset needed to adapt as technology continues advancing. The students graduating today will work in careers where AI plays an increasingly central role. Those with strong AI foundations will thrive, while those without these skills may struggle to remain relevant.

The solution requires a coordinated effort between educators, policymakers, and industry leaders. Schools require updated curricula, teacher training, and resources to integrate AI education effectively. Students need access to AI tools and guidance on using them ethically and effectively.

The question isn't whether AI will transform education and careers—it already has. The question is whether our educational system will adapt quickly enough to prepare students for this new reality.
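As one hands-on starting point of the kind described above under "Start with Immediate Applications", here is a minimal sketch that asks an AI writing assistant for a first draft to be reworked in the writer's own voice. It assumes the openai Python package (v1 or later) and an OPENAI_API_KEY environment variable; the model name and prompt are illustrative choices, not recommendations from the article.

# Minimal sketch: get a first draft from an AI writing assistant, then edit it
# yourself. Assumes the `openai` package (v1+) and an OPENAI_API_KEY variable;
# the model name and prompt below are illustrative, not endorsed by the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

task = (
    "Draft a three-sentence product description for a reusable water bottle "
    "aimed at college students. Keep the tone friendly and concrete."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": task}],
)

print(response.choices[0].message.content)  # treat this as a draft, not final copy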
