Latest news with #AIOpportunitiesActionPlan


Ottawa Citizen
4 days ago
- Business
- Ottawa Citizen
Feds partner with Canadian firm to accelerate AI use in public service
The Government of Canada has partnered with Cohere, a Canadian AI firm, to accelerate the adoption of artificial intelligence in the public service.

In a joint statement published Sunday, Prime Minister Mark Carney said the federal government has signed memorandums of understanding (MOUs) with Cohere and the United Kingdom to 'deepen and explore new collaborations on frontier AI systems to support our national security.'

The statement also said Cohere will build data centres across Canada and expand its presence in the U.K. to support that country's AI Opportunities Action Plan.

'The government of Canada has been working closely with Cohere, one of Canada's — and one of the world's — leading AI companies,' Carney said at a pooled press event Sunday with U.K. Prime Minister Keir Starmer.

'We're absolutely thrilled that a partnership is developing between the United Kingdom and Cohere … The U.K. — and Canada, we like to say as well — have been one of the pioneers in not just AI development, but also the safety and security of the applications of AI, to really realize the full potential. And we're deepening the collaboration between Canada's AI Safety Institute and the new U.K. Security Institute, and this is going to help to realize the full potential for all our citizens.'

Aidan Gomez, Cohere's CEO, said the company will work on accelerating the adoption of AI in the public sector. Gomez has been participating in discussions with Carney and Starmer, including promises to make government more productive and efficient, according to a company blog post.

'We're super excited to be partnering with both governments … Cohere is excited to strengthen the innovation of both of our countries, as well as the sovereignty. Thank you for your partnership, and your support,' Gomez said at Sunday's event.


New Statesman
13-06-2025
- Business
- New Statesman
Does the UK need an AI Act?
Britain finds itself at a crossroads with AI. The stakes are heightened by the fact that our closest allies appear to be on diverging paths. Last year, the EU passed its own AI Act, seeking controlled consensus on how to regulate new technologies. The US, meanwhile, is pursuing a lighter-touch approach to AI – perhaps reflecting the potential financial rewards its Big Tech companies could lose if stifled by regulation. Prime Minister Keir Starmer and Science Secretary Peter Kyle seem to be mirroring the US strategy. At the January launch of the government's AI Opportunities Action Plan, Kyle said he wanted Britain to 'shape the AI revolution rather than wait to see how it shapes us'. Many have called for the government to bring forward an AI act, to lay the foundation for such leadership. Does Britain need one, and if so, how stringent should it be? Spotlight reached out to sectoral experts to give their views.

'An AI act would signal that Britain is serious about making technology work for people'
Gina Neff – Professor of responsible AI at Queen Mary University of London

This government is betting big on AI, making promises about turbo-charging innovation and investment. But regulatory safeguards are fragmented, public trust remains uncertain, and real accountability is unclear. Charging forward without a clear plan means AI will be parachuted into industries, workplaces and public services with little assurance that it will serve the people who rely on it. An AI act would signal that Britain is serious about making AI work for people, investing in the places that matter for the country, and harnessing the power of AI for good. An AI act would create oversight where there is ambiguity, insisting on transparency and accountability. It could provide the foundation to unlock innovation for public benefit by answering key questions: who is liable when AI fails? When AI systems discriminate? When AI is weaponised?

Starmer's government borrows from Silicon Valley's logic, positioning AI regulation as the opposite of innovation. Such logic ignores a crucial fact: the transition to AI will require a major leap for workers, communities and societies. Government must step in where markets won't or can't: levelling the playing field so powerful companies do not dominate our future, investing in education and skills so more people can benefit from opportunities, ensuring today's laws and regulations continue to be fit for purpose, and building digital futures with companies and civil society.

Under Conservative governments, the UK took a 'proportionate', 'pro-innovation' approach outlined in the AI White Paper, suggesting responsibility for safe and trustworthy AI rests with the country's existing 90 regulators. That was always envisioned as a wait-and-see stop-gap before new measures. The AI Opportunities Action Plan sketches out support for the UK's AI industry, but does not go far enough on how to manage the social, cultural and economic transitions we face. With worries about the impact on entry-level jobs, on our children, on information integrity, on the environment, on the UK's creative sector, on growing inequality, and on fair yet efficient public services, there is a long list of jobs now for government to do. Lack of action will only create confusion for businesses and uncertainty about rights and protections for workers, consumers and citizens.
Without an AI act to help shore it up, the good work that is already happening in the UK won't be able to fully power benefits for everyone. An AI act must go beyond data protections to establish transparency requirements and accountability provisions, outline safeguards for intellectual property, and set clearer rules around, and recourse for, automated decision-making. These are responsibilities that tech companies are largely evading. Who can blame them? They have cornered global markets and will gain handsomely from our new investments in AI. A UK AI act could empower regulators with stronger enforcement tools to right the imbalance of power between British society and the world's biggest players in this sector.

An AI act would give real structure to this country's ambitions for AI. The UK needs clarity on what AI can and cannot do, and that won't come from piecemeal guidance – it will come from leaders with vision helping us build the society that we all so rightly deserve.

'The government's hesitancy to regulate seems borne out of the fear of hobbling a potential cash cow'
Marina Jirotka and Keri Grieman – Professor of human-centred computing at the University of Oxford; research associate, RoboTIPS project

The EU AI Act entered into force not even a year ago, and there is already serious discussion of whether to reduce enforcement and simplify requirements on small and medium enterprises in order to reduce burdens on companies in a competitive international marketplace. The US House of Representatives has narrowly approved a bill that blocks states from enforcing AI regulations for ten years, while forwarding one bipartisan federal act that criminalises AI deepfakes but does not address AI on a broader level. Large language model updates are rolled out faster than the speed of subscription-model billing. AI is invading every corner of our lives, from messaging apps to autonomous vehicles – some used to excellent effect, others to endless annoyance.

The British government has chosen a policy of investment in AI – investing in the industry itself, in skill-building education and in attracting foreign talent. Its hesitancy to regulate seems borne out of the fear of hobbling a potential cash cow. However, this leaves the regulatory burden on individual sectors: piecemeal, often siloed and without enough regulatory AI experts to go around, with calls coming from inside the house – the companies themselves – for a liability system.

The UK needs clarity: for industry, for public trust and for the prevention of harm. There are problems that transcend individual industries: bias, discrimination, over-hype, environmental impact, intellectual property and privacy concerns, to name a few. A regulator is one way to tackle these issues, but it can have varying levels of impact depending on structure: coordinating between industry bodies or taking a more direct role; working directly with companies or at arm's length; cooperative investigation or more bare-bones enforcement. But whatever the UK is to do, it needs to provide regulatory clarity sooner rather than later: the longer the wait, the more potential harms go unaddressed, and the further we fall behind in market share as companies choose not to bet the bank on a smaller market with an unclear regulatory regime.

'Growth for whom? Efficiency to what end?'
Baroness Beeban Kidron – House of Lords member and digital rights activist

All new technology ends up being regulated. On arrival it is greeted with awe.
Claims are made for its transformative nature and exceptionality. Early proponents build empires and make fortunes. But sooner or later, those with responsibilities for our collective good have a say. So here we are again with AI. Of course we will regulate, but it seems that the political will has been captured. Those with their hands on the technology are dictating the terms – terms that waver between nothing meaningful and almost nothing at all – while government valorises growth and efficiency without asking: growth for whom? Efficiency to what end?

In practical terms, an AI act should not seek to regulate AI as a technology but rather regulate its use across domains: in health (where it shows enormous benefit); in education (where its claims outweigh its delivery by an unacceptable margin); in transport (where insurers are calling the shots); and in information distribution (where its deliberate manipulation, unintended hallucination and careless spread damage more than they explain). If we want AI to be a positive tool for humanity then it must be subject to the requirements of common goods. But in a world of excess capital restlessly seeking the next big thing, governments bent over to do the bidding of the already-too-powerful, and lobbyists who simultaneously claim it is too soon and too late, we see the waning of political will.

Regulation can be good or bad, but we are in troubling times where the limit of our ambition is to do what we can, not what we should – which gives regulation a bad name. And governments – including our own – legislate to hardwire the benefits of AI into the ever-increasing concentration of power and wealth of Silicon Valley. Tech companies, AI or otherwise, are businesses. Why not subject them to corporate liability, consumer rights, product safety, anti-trust laws, and human and children's rights? Why exempt them from tax, or from the full whack for their cost to planet and society? It's not too soon and it is not too late – but it needs independence and imagination to make AI a public good, not wilful blindness to an old-school playbook of obfuscation and denial while power and money accumulate. Yes, we need regulation, but we also need political will.

'The real test of a bill will be if it credibly responds to the growing list of everyday harms we see'
Michael Birtwistle – Associate director, Ada Lovelace Institute

AI is everywhere: our workplaces, public services, search engines, social media and messaging apps. The risks of these systems are made clear in the government's International AI Safety Report. Alongside long-standing harms like discrimination and 'hallucination' (where AI confidently generates false information), systemic harms such as job displacement, environmental costs and the capacity of newer 'AI agents' to misinform and manipulate are rapidly coming to the fore. But there is currently no holistic body of law governing AI in the UK. Instead, developers, deployers and users must comply with a fragmented patchwork of rules, with many risks going unmanaged. Crucially, our current approach disincentivises those building AI systems from taking responsibility for harms they are best placed to address; regulation tends to look only at downstream users. Our recent national survey showed 88 per cent of people believe it is important that the government or regulators have powers to stop the use of a harmful AI product.
Yet more than two years on from the Bletchley summit and its commitments, it is AI developers who decide whether to release unsafe models, according to criteria they set themselves. The government's own market research has said this 'wild west' is lowering business confidence to adopt. These challenges can only be addressed by legislation, and now is a crucial time to act. The government has announced an AI bill, but its stated ambition (regulating 'tomorrow's models not today's') is extremely narrow. For those providing scrutiny in parliament, press and beyond, the real test of a bill will be whether it credibly responds to the growing list of everyday harms we see today – such as bias, misinformation, fraud and malicious content – and whether it equips government to manage them upstream at source.

'There's a temptation to regulate AI with sweeping, catch-all Bills. That impulse is mistaken'
Jakob Mökander – Director of science and technology policy, Tony Blair Institute for Global Change

As AI transforms everything from finance to healthcare, the question is not whether to regulate its design and use – but how to do it well. Rapid advances in AI offer exciting opportunities to boost economic growth and improve social outcomes. However, AI poses risks, from information security to surveillance and algorithmic discrimination. Managing these risks will be key to building public trust and harnessing the benefits.

Globally, there's an understandable temptation to regulate AI with sweeping, catch-all Bills that signal seriousness and ease public concern. However, this impulse is mistaken. Horizontal legislation is a blunt tool that struggles to address the many different risks AI poses in various real-world contexts. It could also end up imposing overly burdensome restrictions even on safe and socially beneficial use cases. If the UK government is serious about implementing the AI Opportunities Action Plan, it should continue its pro-innovation, sector-specific approach: steering a middle ground between the overly broad EU AI Act and the US's increasingly deregulatory approach. This way, supporting innovation can go hand in hand with protecting consumer interests, human rights and national security.

Regulators like the CMA, FCA, Ofcom and HSE are already wrestling with questions related to AI-driven market concentration, misinformation and bias in their respective domains. Rather than pursuing a broad AI bill, the government should continue to strengthen these watchdogs' technical muscle, funding and legal tools. The £10m already allocated to this effort is welcome – but it should go much further.

Of course, some specific security concerns may be insufficiently covered by existing regulation. To address this gap, the government's proposal for a narrow AI bill to ensure the safety of frontier AI models is a good starting point. The AI Security Institute has a crucial role to play in this – not as a regulator, but as an independent centre to conduct research, develop standards and evaluate models. Its long-term legitimacy should continue to be served by clear independence from both government and industry, rather than the distraction of enforcement powers.

Britain has an opportunity to set a distinctive global example: pro-innovation, sector-specific, and grounded in actual use cases. Now is the time to stay focused and continue forging that path.

This article first appeared in our Spotlight on Technology supplement of 13 June 2025.


New Statesman
13-06-2025
- Business
- New Statesman
Digital sovereignty should sit at the core of the UK's AI strategy
With the Prime Minister's AI Opportunities Action Plan, the UK government pledged to turbocharge the economy by infusing AI throughout the public sector. From hospitals leveraging AI for faster diagnoses to public sector teams freed from administrative drudgery, the goal is to use AI as the engine of British progress. But as the government throws its weight behind this technological revolution, several crucial questions arise: who owns and controls the digital foundations upon which our AI-powered future will be built? What tools, platforms and companies make up the digital supply chains of public and private sector services? And how can we ensure that homegrown innovations in AI are scalable?

Ultimately, the challenge lies in establishing 'digital sovereignty' – ensuring the UK can secure and govern the foundations of its AI-driven future. In times of global unrest and economic uncertainty, digital sovereignty is a necessity, not a luxury. It means the UK retaining control over its critical technological infrastructure, data and algorithms. It's about ensuring that the tools underpinning our public services and industries are not black boxes managed from afar, but transparent, accountable systems shaped by our values.

The risks of dependency are real. Over-reliance on foreign-owned platforms can expose our institutions to security vulnerabilities, regulatory misalignment and loss of control over sensitive data. And yet, pragmatism will need to be practised: technological supply chains will undoubtedly cross international lines. Achieving digital sovereignty therefore requires a balanced approach: ensuring transparency so the public can understand these supply chains, prioritising domestic and European technology solutions, and working with a carefully vetted group of international partners. This approach will also help the UK tackle one of its biggest challenges with AI: scaling projects from proofs of concept to delivering value more quickly and widely. Digital sovereignty empowers the UK to set its own standards, foster innovation within a trusted ecosystem and maintain control over the process of moving AI projects from concept to widespread implementation.

With this in mind, consider Humphrey, the UK government's new AI assistant, which is being trialled in 25 local authorities to streamline administrative tasks such as planning, archive searches and transcription. Early results are promising. Government pilots found that Humphrey's 'Minute' notetaking tool saved officials an hour of admin for every 60-minute meeting, freeing staff to focus on higher-value work and improving morale. Other components, like 'Consult', can analyse thousands of public consultation responses far faster than human teams, with comparable accuracy and significant projected cost savings across the civil service. If the platform can continue to deliver such results as its adoption scales, Humphrey may serve as a valuable case study for public sector AI implementation. At the same time, with increasing attention on how governments manage and govern AI tools, providing clear information about the platform's technical underpinnings – from the large language models powering it to its hosting setup – will help build confidence and set standards for future initiatives.

Digital sovereignty fits into a wider framework of responsible digitalisation – a guiding principle for Netcompany.
It means deploying technology in ways that are ethical, transparent and aligned with societal needs. Our experience delivering large-scale digital projects across the UK and European public sector has shown that responsible digitalisation is not only possible but essential for building trust and ensuring long-term impact. Whether deploying a digital patient registration service used by 98 per cent of English GPs or developing an AI-powered delay prediction tool for rail networks across Europe, we let our customers take control of their processes and data, foster collaboration and commit to re-using technologies, never developing the same tools twice.

The same goes for the EASLEY AI platform. Developed by Netcompany, EASLEY is a secure, model-independent generative AI solution for both public and private sector organisations. Unlike many off-the-shelf AI products, EASLEY puts data privacy and organisational control at its core. It integrates seamlessly with existing systems, allowing clients to switch between AI models as technology evolves – without relinquishing control over their data or processes. In practice, this means a local authority can automate document processing or improve citizen services with confidence, knowing its data never leaves UK or European jurisdiction.

Legacy IT systems are silent saboteurs of digital progress. Across the UK and Europe, outdated infrastructure drains budgets and stifles innovation, with up to 80 per cent of IT budgets spent just keeping these obsolete systems running – resources that could otherwise fund better digital services, innovation and security. In June 2025, we announced Feniks AI, a pioneering tool that accelerates the transition from legacy systems to modern, open architectures – cutting delivery times by up to 60 per cent. In short, what once took years can now be completed in months. The tool has already delivered promising results in three large-scale public sector projects in Denmark, and we look forward to bringing it to the UK. Feniks AI is built on Netcompany's methodology and platforms, developed through 25 years of experience delivering large-scale, business-critical projects across the public and private sectors in Europe. By embracing such solutions, we can help our customers break free from decades of digital debt and lay the foundations for a more innovative and secure future.

As the UK charts its course towards an AI-powered future, cross-sector collaboration is key to delivering digital transformation at scale. Partnerships focused on transparency, scalability and pragmatic digital sovereignty will best position the UK to become a leader in the development and deployment of AI. In doing so, we can shape a digital landscape that is not only world-leading but also serves the needs and aspirations of our citizens.


Asia Times
11-06-2025
- Business
- Asia Times
Nvidia pushes hardware but experts say UK AI needs something else
There's disagreement over what it would take to turn the United Kingdom into an artificial intelligence powerhouse. Nvidia, the world's largest supplier of graphics processing units (GPUs), has called on the UK to boost hardware investment to catch up with the United States and China in the global AI race.

During his recent visit to the country, Jensen Huang, co-founder and chief executive of Nvidia, told UK Prime Minister Keir Starmer that the UK could become the world's third-largest AI ecosystem.

'The UK has the third-largest AI venture capital (VC) investment in the world. The two largest are the US and China, which is fairly obvious,' Huang said in a panel discussion with Starmer at London Tech Week on Monday. 'The UK has one of the richest AI communities anywhere on the planet, the deepest thinkers, and the best universities … and you're rich with great computer scientists. It's a fantastic place for venture capital to invest.'

He said the UK is in a 'Goldilocks circumstance', or a 'just right' situation, where the country has both the investors and the scientists to develop AI. (For any reader who doesn't know the children's story 'Goldilocks and the Three Bears': a young girl named Goldilocks wanders into the home of three bears and finds that one of the bowls of porridge on the table is too hot, another too cold – but the bowl for the small bear is at just the right temperature.)

Huang then pointed out that the UK lacks the hardware infrastructure to create an AI ecosystem that can compete with the US and China. 'If you are a particle physicist, you need a linear accelerator. If you are an astronomer, you need a radio telescope,' he said. 'If you're in the world of AI, you can't do machine learning without a machine.'

Huang said he was among a group of technology entrepreneurs, including Google's former chief executive Eric Schmidt, Wayve's chief executive Alex Kendall and executives of Synthesia and ElevenLabs, at an event hosted by Starmer on June 8. He said the UK government is committed to AI development. 'We're going to invest in helping start the AI ecosystem' in the UK, he said. 'Infrastructure will enable more research, breakthroughs and companies … Then the flywheel will start taking off. It's already quite large, but we've just got to get that flywheel going.'

On Monday, on the same stage, Starmer announced an additional £1 billion ($1.3 billion) of funding to boost the country's AI compute power by 20 times. He also announced the government's plan to invest £185 million and partner with 11 companies to train 7.5 million UK workers, one-fifth of the country's workforce, in essential AI skills by 2030.

In January this year, the Labour government unveiled the AI Opportunities Action Plan, saying it would set out a long-term plan for the UK's AI infrastructure needs, backed by a 10-year investment commitment. It said it had attracted £39 billion of private investment to the AI sector since taking office in July 2024.

The Joint Academic Data Science Endeavour (JADE) consortium, comprising 20 UK universities and the Turing Institute, uses Nvidia's technologies for its AI development. For example, the University of Manchester uses the Nvidia Earth-2 platform to develop pollution-flow models, and the University of Bristol's Isambard-AI supercomputer focuses on climate modeling and next-generation science.
While Huang says insufficient hardware is the main obstacle to forming the UK's AI ecosystem, a research report published by Tech Nation, a unit of the Founders Forum Group, said the problem lies in the lack of growth funds and exit opportunities.

The report said the UK is home to more than 17,000 VC-backed startups, and that UK technology startups have raised over $7 billion in VC investment so far this year, 43% of which originated from funds in the US.

'UK founders rate the UK as a good place to start a tech company, but they are less positive about scaling or exiting their companies in the UK,' the report said, citing its survey of more than a thousand UK technology firms. It said 43% of the UK founders it surveyed are considering relocating their company's headquarters outside the UK. 'Almost all of the founders we surveyed who are considering relocating are targeting the US,' it added. 'Of those, more than one in three are looking for better funding availability, exit opportunities and access to a larger market outside the UK.'

According to the survey, half of the founders suggested that the UK government provide them with tax credits or use a sovereign wealth fund or a co-investment fund to support the growth of their businesses. The National Wealth Fund, the UK's sovereign wealth fund, mainly invests in green hydrogen, carbon capture, ports, gigafactories and green steel.

Balderton Capital, a UK-based venture capital firm, said AI startups in the UK raised $15.9 billion last year, compared with $3.1 billion four years ago. According to PitchBook Data, AI startups in the US raised a record $97 billion last year. VCs poured $209 billion into US startups in 2024, compared with $61.6 billion in Europe and $75.9 billion in Asia. It's unclear whether the Starmer administration's £1 billion of long-term funding can change the game for the UK in the global race.

Meanwhile, China seems to be at the other extreme – too much investment and too little experience. A study by the Stanford Center on China's Economy and Institutions showed that China's government VC funds invested $912 billion in startups in the country from 2013 to 2023, about 23% of which was directed to 1.4 million AI-related firms. That amounted to about $209 billion in total, or roughly $150,000 each. The study said 4,115 AI firms received both government and private VC investments from 2000 to 2023; most of these firms initially received government VC investment and later sought private investment.

A report published by the MIT Technology Review in March this year said that China had built hundreds of AI data centers in recent years, but many of them have become 'distressed assets' after many AI projects failed.
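For readers tallying the Stanford figures, the arithmetic as reported is internally consistent; a minimal back-of-envelope sketch, using only the numbers quoted in the article (not the study itself), confirms it:

```python
# Back-of-envelope check of the Stanford Center figures as reported above.
# Inputs are the article's quoted numbers; this is a reader's sketch, not the study's own data.

total_gov_vc_usd = 912e9   # government VC invested in startups, 2013-2023
ai_share = 0.23            # "about 23%" directed to AI-related firms
ai_firm_count = 1.4e6      # "1.4 million AI-related firms"

ai_total_usd = total_gov_vc_usd * ai_share    # ~$209.8bn, matching "about $209 billion"
per_firm_usd = ai_total_usd / ai_firm_count   # ~$150,000, matching the reported average

print(f"AI-directed total: ${ai_total_usd / 1e9:.1f}bn")  # AI-directed total: $209.8bn
print(f"Average per firm:  ${per_firm_usd:,.0f}")         # Average per firm:  $149,829
```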
Yahoo
10-06-2025
- Business
- Yahoo
Student funding to create generation of AI pioneers
A new scholarship aimed at developing the next generation of artificial intelligence "pioneers" will open to applicants in spring 2026. The Spärck AI scholarships, named after pioneering British computer scientist Karen Spärck Jones, will give master's degree students access to industry-leading firms as the Government looks to boost the UK's AI credentials. The University of Bristol will be one of nine universities to offer the fully funded Government scholarship, alongside universities in Newcastle, Manchester and Edinburgh. The university said it would "relish the opportunity", with the grant covering both students' tuition and living costs.

The scheme has been developed in line with the Government's 'AI Opportunities Action Plan', with more than £17m of funding from Westminster. Alongside master's places, 100 scholars will receive placements at leading AI companies, as well as mentorship from industry experts.

In 2024, Bristol was named 'AI University of the Year' and has developed the Isambard-AI supercomputer, the most powerful of its kind in the country. Vice-chancellor Evelyn Welch said the scholarships would give Bristol the chance to "explore bold new ideas and nurture exceptional talent". It is hoped the scholarships will give students "unparalleled access" to the fast-moving industry.

Technology Secretary Peter Kyle MP said he believed the scheme would help students secure "highly skilled jobs" and build "a workforce fit for the future". Finn Stevenson, co-founder and chief executive of Flok Health, said the company was "delighted" to be part of the scheme, adding that attracting the "world's best talent" was vital to advancing AI in the UK. AI talent acquisition firm Beamery said the scholarships would support its goal of creating "equal access to work" and connecting "talent to opportunity".

Applications open in spring 2026, with the first cohort beginning their studies the following autumn.