
Latest news with #dataScientists

Five essential skills for building AI-ready teams

Entrepreneur

09-06-2025

  • Business
  • Entrepreneur

Five essential skills for building AI-ready teams

AI is developing at a rapid pace and transforming the way global industries operate. As companies accelerate AI adoption in order to stay competitive and reap its potential benefits, the urgency for building AI-ready capabilities in the organisation is increasing. The true value of AI-based solutions depends on the teams who understand, challenge, embrace and integrate them wisely. In my new book, Artificial Intelligence For Business, I highlight the impact of AI on the future of work, specifically the skills gaps and job displacements, as well as the essential skills that global organisations will require. For business leaders, building AI-ready teams means more than just hiring technical experts or data scientists. Success in the AI business landscape means upskilling the workforce to develop five essential capabilities that will enable people to thrive.

AI literacy and understanding

Knowledge and understanding of AI can seem overwhelming, particularly to those in non-technical roles who may struggle with the constant flood of information about large language models, Python code and AI platform functionalities. While not everyone needs a deep technical understanding of how to develop AI-based solutions, everyone should understand what AI can do and where its limitations lie. AI literacy goes beyond a mere understanding of AI technologies. It involves building a foundational understanding of its context and value, as well as the ability to question its design and implementation. AI literacy should be developed across all teams in an organisation, covering how AI works, the different types of AI solutions, how data is used, where bias can creep in, and what real-world applications look like in the relevant industry. Building AI literacy begins with organisational education and training programs that offer executive-level understanding of AI capabilities, limitations and risks, as well as industry-specific applications. Additionally, hands-on experience and real-world applications are critical in developing an understanding of AI in a business context. The aim is to raise the level of understanding to ensure every AI-related business decision is made with awareness and purpose.

Critical thinking and data scepticism

As we increasingly apply AI-based technologies in our daily business, the outcomes can be quite compelling. The potential productivity gains and scale of benefit are driving organisations to implement AI-based solutions across various business functions. The outputs of AI tools may appear clean and professional, but they may not always be rooted in accuracy or truth. In addition, there may be hidden biases that could be detrimental, particularly if the outputs are used in critical decision-making processes. AI-ready teams need to develop critical thinking skills – the ability to analyse AI outputs, identify anomalies or biases, and make well-informed decisions relating to their use. As organisations increasingly use AI-based systems, there is a risk of over-reliance on and trust in their output, without truly understanding how the outcomes are derived. This is where critical thinking becomes indispensable. Building internal capabilities in 'data scepticism' – the ability to challenge assumptions, examine how models are trained, and identify potential errors, anomalies or biases in the output – is critical for organisations.

Although a certain level of technical competency may be required to dive deep into an AI system's capabilities, what organisations need across all teams interacting with AI solutions and outputs is a basic level of confidence to raise concerns and questions. Deep technical training is not required for this. More importantly, leadership teams should prioritise building an organisational culture where employees are encouraged to question and analyse AI-generated insights. For example, establishing scenario-based exercises, diverse team discussions and formalised feedback loops will help sharpen critical thinking skills across the organisation.

Human-machine collaboration

As the capabilities of AI-based technologies rapidly advance, the question of whether to replace human resources with AI is becoming increasingly dominant in the global business landscape. In recent months, we have seen several global organisations make headlines as the decision to replace laid-off workers with AI and automation takes centre stage. This includes brands such as Klarna, UPS, Duolingo, Google and Salesforce, among many others. In my experience, the integration of new technologies does not automatically mean replacing people. As we have observed over decades of industrial revolution, technology enables shifts in working environments, taking over tasks and pushing human resources towards more complex or different types of work. Although AI development is significantly more rapid and its capabilities enable more sophisticated tasks, the cycle of shifting work remains the same. In the AI age, this means creating new kinds of teams where humans and intelligent systems collaborate effectively to deliver cohesive and sophisticated work at an accelerated pace. To support this, companies should focus on role redesign, process mapping, and experimentation with AI tools in real workflows, and encourage cross-functional collaboration – between business, tech and data teams – to break down silos and co-create solutions. The key is to help people see AI as an assistant, not a threat.

Ethical reasoning and responsible innovation

With the rise of AI application in business comes a surge of ethical concerns and risks, including bias, data privacy and over-reliance on AI for critical decision-making. To leverage AI-based technologies effectively, organisations cannot afford to overlook these concerns, particularly considering the developing regulatory scrutiny and the fragility of consumer trust. Every team should receive education and training on the ethical concerns and challenges of AI application in business, including the ability to recognise biases in data and outputs, understand explainability requirements, and make inclusive decisions. Responsible use of AI should be a foundational part of the organisational culture. Realistically, this goes beyond formal training programs. Transparent communication, open dialogue, best practices and use cases are needed to explore potential unintended consequences and ensure responsible use is top of mind for all teams. Ethical reasoning should not be designed to slow innovation, but to ensure that innovation is able to flourish within the space of safe and responsible use for the business.

Adaptive learning and growth mindset

One of the most foundational skills for an AI-ready team is adaptability. Exponential technologies, particularly AI, are developing rapidly and constantly changing. The most valuable skill in an AI-ready organisation is not knowing everything, but being curious, open to change and continuously willing to learn. Embedding this growth mindset in how teams work and collaborate gives employees permission to explore new capabilities, learn quickly from failure, and experiment with new tools and solutions within a safe environment. In the current AI age, organisations need to prioritise investment in microlearning platforms that encourage continuous, rapid learning and knowledge sharing, and that reward curiosity. Critically, leadership teams should model this mindset, demonstrating a willingness to evolve and to rethink traditional assumptions and limitations. Adaptability will ensure the organisation does not just survive the era of AI transformation, but thrives in it.

AI-readiness goes beyond training programs, certifications and tool proficiency. It is truly a team-wide capability that requires sustainable investment in people. The future of work is shaped not only by the rapid development of AI, but by how intelligently organisations prepare the workforce to embrace it responsibly.

Predictive AI Must Be Valuated – But Rarely Is. Here's How To Do It

Forbes

27-05-2025

  • Business
  • Forbes

Predictive AI Must Be Valuated – But Rarely Is. Here's How To Do It

Most predictive AI projects neglect to estimate the potential profit – a practice known as ML valuation – and that spells project failure. Here's the how-to.

To be a business is to constantly work toward improved operations. As a business grows, this usually leads to the possibility of using predictive AI, which is the kind of analytics that improves existing, large-scale operations. But the mystique of predictive AI routinely kills its value. Rather than focusing on the concrete win that its deployment could deliver, leaders get distracted by the core tech's glamor. After all, learning from data to predict is sexy. This in turn leads to skipping a critical step: forecasting the operational improvement that predictive AI operationalization would deliver. As with any kind of change to large-scale operations, you can't move forward without a credible estimate of the business improvement you stand to gain – in straightforward terms like profit or other business KPIs. Not doing so makes deployment a shot in the dark. Indeed, most predictive AI launches are scrubbed.

So why do most predictive AI projects fail to estimate the business value, much to their own demise? Ultimately, this is not a technology fail – it's an organizational one, a glaring symptom of the biz/tech divide. Business stakeholders delegate almost every aspect of the project to data scientists. Meanwhile, data scientists as a species are mostly stuck on arcane technical metrics, with little attention to business metrics. The typical data scientist's training, practice, shop talk and toolset omit business metrics. Technical metrics define their comfort zone.

Estimating the profit or other business upside of deploying predictive AI – aka ML valuation – is only a matter of arithmetic. It isn't the "rocket science" part, the ML algorithm that learns from data. Rather, it's the much-needed prelaunch stress-testing of the rocket.

Say you work at a bank processing 10 million credit card and ATM card transactions each quarter. With 3.5% of the transactions fraudulent, the pressure is on to predictively block those transactions most likely to fall into that category. With ML, your data scientists have developed a fraud-detection model that calculates a risk level for each transaction. Within the most risky 150,000 transactions – that is, the 1.5% of transactions that the model considers most likely to be fraud – 143,000 are fraudulent. The other 7,000 are legitimate. So, should the bank block that group of high-risk transactions? Sounds reasonable off the cuff, but let's actually calculate the potential winnings.

Suppose that those 143,000 fraudulent transactions represent $18,225,000 in charges – that is, they're about $127 each on average. That's a lot of fraud loss to be saved by blocking them. But what about the downside of blocking them? If it costs your bank an average of $75 each time you wrongly block due to cardholder inconvenience – which would be the case for each of the 7,000 legit transactions – that will come to $525,000. That barely dents the upside, with the net win coming to $17,700,000. So yeah, if you'd like to gain almost $18 million, then block those 1.5% most risky transactions. This is the monetary savings of fraud detection, and a penny saved is a penny earned.
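To make that arithmetic concrete, here is a minimal sketch in Python of the same calculation. The figures (143,000 caught frauds worth $18,225,000, 7,000 wrongly blocked legitimate transactions, a $75 assumed cost per wrong block) come straight from the example above; the function name and structure are purely illustrative, not any standard tool or API.

```python
# Minimal sketch of the net-savings arithmetic from the example above.
# The inputs are the article's hypothetical figures; nothing here is a
# library call -- it is plain arithmetic.

def net_savings(fraud_charges_blocked: float,
                wrong_blocks: int,
                cost_per_wrong_block: float) -> float:
    """Estimated net win from blocking a group of high-risk transactions."""
    return fraud_charges_blocked - wrong_blocks * cost_per_wrong_block

# Blocking the 1.5% most risky transactions:
saved = net_savings(
    fraud_charges_blocked=18_225_000,  # fraud charges avoided (143,000 frauds)
    wrong_blocks=7_000,                # legitimate transactions blocked
    cost_per_wrong_block=75,           # assumed inconvenience cost per wrong block
)
print(saved)  # 17700000 -- the net win quoted above
```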
But that doesn't necessarily mean that 1.5% is the best place to draw the line. How much more might we save by blocking even more? The more we block, the more lower-risk transactions we block – and yet the net value might continue to increase if we go a ways further. Where to stop? The 2% most risky? The 2.5% most risky? To navigate the range of predictive AI deployment options, you've just got to look at it:

[Chart: A savings curve comparing the potential money saved by blocking the most risky payment card transactions with fraud-detection models. The performance of three competing models is shown.]

This shows the monetary win for a range of deployment options. The vertical axis represents the money saved with fraud detection – based on the same kind of calculations as those in the previous example – and the horizontal axis represents the portion of transactions blocked, from most risky (far left) to least risky (far right). This view has zoomed into the range from 0% to 15%, since a bank would normally block at most only the top, say, two or three percent. The three colors represent three competing ML models: two variations of XGBoost and one random forest (these are popular ML methods). The first XGBoost model is the best one overall. The savings are calculated over a real collection of e-commerce transactions, as were the previous example's calculations.

Let's jump to the curve's peak. We would maximize the expected win to more than $26 million by blocking the top 2.94% most risky transactions according to the first XGBoost model. But this deployment plan isn't a done deal yet – there are other, competing considerations. First, consider how often transactions would be wrongly blocked. It turns out that blocking that 2.94% would inconvenience legit cardholders an estimated 72,000 times per quarter. That adverse effect is already baked into the expected $26 million estimate, but it could incur other intangible or longer-term costs; the business doesn't like it. But the relative flatness you can see near the curve's peak signals an opportunity: if we block fewer transactions, we could greatly reduce the expected number wrongly blocked with only a small decrease in savings. For example, it turns out that blocking 2.33% rather than 2.94% cuts the number of estimated bad blocks roughly in half, to 35,000, while still capturing an expected $25 million in savings. The bank might be more comfortable with this plan.
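A curve like this is straightforward to compute once each transaction has a model risk score: sort by score, sweep the fraction blocked, and apply the same per-transaction economics as in the arithmetic above. The sketch below does exactly that on simulated data; the scores, amounts and fraud labels are invented for illustration (the article's real e-commerce dataset and its XGBoost and random forest models are not reproduced here), and the function also exposes, as parameters, the two business assumptions questioned next.

```python
import numpy as np

# Hedged sketch: compute a savings curve over candidate block fractions from
# per-transaction risk scores. The data below are simulated stand-ins; only the
# ~3.5% fraud rate and the $75 wrong-block cost are taken from the article.
rng = np.random.default_rng(0)
n = 100_000
is_fraud = rng.random(n) < 0.035                  # ~3.5% fraud rate
amount = rng.gamma(2.0, 60.0, n)                  # invented transaction amounts
score = is_fraud + rng.normal(0.0, 0.7, n)        # stand-in for a model's risk score

order = np.argsort(-score)                        # most risky first
fractions = np.linspace(0.005, 0.15, 30)          # 0.5% .. 15% of transactions blocked

def savings_curve(fraud_cost_fraction=1.0, cost_per_wrong_block=75.0):
    """Estimated savings at each block fraction, under given business assumptions."""
    out = []
    for frac in fractions:
        top = order[: int(frac * n)]              # the transactions we would block
        fraud_saved = amount[top][is_fraud[top]].sum() * fraud_cost_fraction
        wrong_blocks = (~is_fraud[top]).sum()
        out.append(fraud_saved - wrong_blocks * cost_per_wrong_block)
    return np.array(out)

curve = savings_curve()
k = int(np.argmax(curve))
print(f"peak of the simulated savings curve: ${curve[k]:,.0f} "
      f"at {fractions[k]:.2%} of transactions blocked")

# The same function answers the what-if questions discussed below, e.g.:
#   savings_curve(fraud_cost_fraction=0.8)     # fraud costs only 80% of the amount
#   savings_curve(cost_per_wrong_block=50.0)   # an "apology" scheme cuts the cost to $50
```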
As compelling as these estimated financial wins are, we must take steps to shore up their credibility, since they hinge on certain business assumptions. After all, the actual win of any operational improvement – whether driven by analytics or otherwise – is only certain after it's been achieved, in a "post mortem" analysis. Before deployment, we're challenged to estimate the expected value and to demonstrate its credibility.

One business assumption within the analysis described so far is that unblocked fraudulent transactions cost the bank the full magnitude of the transaction. A $100 fraudulent transaction costs $100 (while blocking it saves $100), and a $1,000 fraudulent transaction indeed costs ten times as much. But circumstances may not be that simple, and they may be subject to change. For example, certain enforcement efforts might serve to recoup some fraud losses by investigating fraudulent transactions even after they were permitted. Or the bank might hold insurance that covers some losses due to fraud. If there's uncertainty about exactly where this factor lands, we can address it by viewing how the overall savings would change if such a factor changed.

Here's the curve when fraud costs the bank only 80% rather than 100% of each transaction amount:

[Chart: The same chart, except with each unblocked fraudulent transaction costing only 80% of the amount of the transaction, rather than 100%.]

It turns out the peak decreases from $26 million down to $20 million. This is because there's less money to be saved by fraud detection when fraud itself is less costly. But the position of the peak has moved only a little: from 2.94% to 2.62%. In other words, not much doubt is cast upon where to draw the decision boundary.

Another business assumption we have in place is the cost of wrongly blocking, currently set at $75 – since an inconvenienced cardholder is likely to use their card less often (or cancel it entirely). The bank would like to decrease this cost, so it might consider taking measures accordingly. For example, it could consider providing a $10 "apology" gift card each time it realizes its mistake – an expensive endeavor, but one that might turn out to decrease the net cost of wrongly blocking from $75 down to $50. Here's how that would affect the savings curve:

[Chart: The same chart, except with each wrongly blocked transaction costing only $50, rather than $75.]

This increases the peak estimated savings to $28.6 million and moves that peak from 2.94% up to 3.47%. Again, we've gained valuable insight: this scenario would warrant a meaningful increase in how many transactions are blocked (drawing the decision boundary further to the right), but would only increase profit by $2.6 million. Considering that this guesstimated cost reduction is a pretty optimistic one, is it worth the expense, complexity and uncertainty of even testing this kind of "apology" campaign in the first place? Perhaps not.

For a predictive AI project to defy the odds and stand a chance at successful deployment, business-side stakeholders must be empowered to make an informed decision as to whether, which and how: whether the project is ready for deployment, which ML model to deploy, and with what decision boundary (the percentage of cases to be treated versus not treated). They need to see the potential win in terms of business metrics like profit, savings or other KPIs, across a range of deployment options. And they must see how certain business factors that could be subject to change or uncertainty affect this range of options and their estimated value.

We have a name for this kind of interactive visualization: ML valuation. This practice is the main missing ingredient in how predictive AI projects are typically run. ML valuation stands to rectify today's dismal track record for predictive AI deployment, boosting the value captured by this technology up closer to its true potential. Given how frequently predictive AI fails to demonstrate a deployed ROI, the adoption of ML valuation is inevitable. In the meantime, it will be a true win for professionals and stakeholders to act early, get out ahead of it and differentiate themselves as value-focused practitioners of the art.

AI is being used by British Airways planes to avoid bad weather and flight delays

The Sun

14-05-2025

  • Business
  • The Sun

AI is being used by British Airways planes to avoid bad weather and flight delays

BRITISH Airways' £7billion investment in AI has led to the airline's flight punctuality soaring to record levels - with 86 per cent of jets now taking off on time. The impressive first-quarter 2025 figures compare to a punctuality record of just 46 per cent in 2008, thanks to new cutting-edge technology first revealed in The Sun.

BA services hit more than 90 per cent on-time departures on 38 of the 89 operational days. And across April, two-thirds of all the airline's Heathrow departures left ahead of time - more than double the 2023 figures.

Last year The Sun first revealed BA's £7bn investment programme - including £100m developing digital tools and apps to boost operational performance. More than 100 data scientists are now employed by the airline.

BA can now allocate aircraft landing at Heathrow to stands based on a live analysis of the onward travel plans of customers on any given flight - cutting missed connections and disruption to onward journeys. This has saved 160,000 minutes of delays. A real-time weather program proactively reroutes aircraft to avoid problems. This has saved 243,000 minutes of delays. New apps for pilots, cabin crew teams and aircraft dispatch teams will help speed up aircraft departures.

The Sun was the first media outlet invited inside the airline's new security-restricted nerve centre to showcase the cutting-edge technology making flying better. We revealed how six separate computer systems for different areas of BA have been jettisoned for one giant global interface - dubbed 'Mission Control' - which unites the airline, streamlining services and tracking aircraft movements. The live data is beamed into BA's Heathrow hub on huge screens, allowing bosses to take pre-emptive action to limit operational hazards.

"It's like an elaborate game of computer puzzle Tetris," Richard Anderson, the airline's Director of Global Operations, told The Sun. Ground-breaking immediate responses to limit disruption are now the norm – meaning a smoother, seamless travel experience for flyers.

Thrilled BA chairman Sean Doyle said yesterday that 'AI, forecasting, optimisation and machine learning' have transformed the airline's operational performance. At an aviation summit in Pittsburgh in the US, he said: "Improving operational performance is a key part of our investment programme because we know the impact delays and disruption can have on our customers.

"Whilst disruption to our flights is often outside of our control, our focus has been on improving the factors we can directly influence and putting in place the best possible solutions for our customers when it does happen.

"That's why we've invested £100m in our own operational resilience, putting funding into technology and tools, and devising a better way of working on the ground at Heathrow, as well as creating an additional 600 operational roles at the airport.

"The tech which colleagues have at their fingertips has been a real game-changer for performance, giving them the confidence to make informed decisions for our customers based on a rapid assessment of vast amounts of data.

"It's exciting that our industry is able to harness this capability, which will develop even further in the months and years to come."

Why Companies Are Losing AI Talent — And How Leaders Can Stop It

Forbes

12-05-2025

  • Business
  • Forbes

Why Companies Are Losing AI Talent — And How Leaders Can Stop It

Companies keep blaming the AI talent shortage on competition and compensation. But the real problem may lie within – in rigid cultures, outdated leadership and a failure to build environments where AI professionals actually want to stay.

'We can't find enough AI talent.' That's one of the major dilemmas in boardrooms around the world right now, as AI continues on an upward trajectory. The job postings are live, compensation is competitive and tools are top-tier. Yet still, machine learning engineers and data scientists walk away – or worse, never apply for these roles. But what if this isn't a hiring crisis at all? What if it's a leadership one?

While the spotlight has been on salaries and skills shortages, some experts argue that it isn't just that AI professionals are hard to hire, but also that they're easy to lose. The argument is that this phenomenon isn't because these professionals aren't engaged with the work, but because the environment they're asked to work in is often fundamentally misaligned with how AI innovation thrives. 'AI professionals' rare expertise gives them unprecedented leverage in today's market,' noted Erika Glenn, a C-suite executive and board advisor. 'They can command high compensation while prioritizing workplace flexibility elsewhere. Many companies maintain rigid policies under leadership that rarely understands AI culture's unique needs – and that disconnect pushes experts to leave.'

The case today, at least for a large chunk of the industry, is that AI talent isn't chasing ping-pong tables or inflated job titles. They're going after meaning, autonomy and a future-focused mission. When they don't find that, they leave – often to start their own ventures or join smaller companies with more adaptive cultures.

According to Michelle Machado, a neurochange solutions consultant and global educator, the deeper issue lies with legacy mindsets. 'Too many leaders are still operating with 20th-century thinking while trying to compete in a 21st-century AI race,' she told me in an interview. 'It's like watching companies in the year 2000 debate whether they needed a website.' Machado pointed to a telling stat: nearly 40% of companies are failing at AI implementation because leadership doesn't understand its potential. This misunderstanding manifests in all the wrong ways – treating AI like a side project, demanding office-based routines for remote-ready work, or imposing waterfall processes on what should be experimental systems.

Glenn added that many leaders 'still treat AI development like traditional software engineering, enforcing rigid schedules and micromanagement that stifle innovation.' That kind of control-heavy approach repels the very minds companies are desperate to retain. Worse, it builds resentment. When leadership demands agility from tech teams but clings to bureaucracy in its own decision-making, AI experts read the signal loud and clear: this is not a place where real innovation is welcome.

A common misconception is that AI professionals are simply poached by bigger paychecks. But Machado challenges that. 'Unless leaders build a culture of experimentation, collaboration, and future-focused thinking, even the best AI hires won't stay,' she said. 'It's culture, not just compensation, that ultimately attracts and retains top talent.' Glenn agrees, noting that great leaders 'foster cultures of open dialogue and shared incentives, where controversial viewpoints are welcomed without repercussion.'
They balance autonomy with accountability, shield teams from politics and reward experimentation, even when it fails. That environment is rare. But when it exists, it creates gravity that retains talent. And the organizations drawing and keeping the brightest AI minds are the ones with that kind of gravity – not necessarily those with the most advanced models.

When it comes to retaining talent, Machado's advice is that transparency is what fuels trust. 'People stay when they understand the impact of their work and how it connects to broader business outcomes,' she said. In a field as cross-functional and fast-paced as AI, where models must touch operations, compliance, customer data and ethics, that transparency must be baked into every layer of leadership. It also requires vulnerability: a willingness to admit what the company doesn't yet know and a commitment to build that knowledge together. 'When people feel seen, heard and valued,' Machado explained, 'they don't just contribute – they commit.' This is especially vital in large enterprises, where AI efforts often suffocate under organizational silos. 'Silos don't just slow innovation,' she added. 'They stall transformation.'

Losing a top AI engineer doesn't just mean opening another job requisition – it sets off a chain reaction. Projects stall, morale dips and, perhaps worst of all, institutional knowledge walks out the door. 'Replacing technical professionals can cost between one-half to two times their annual salary,' said Glenn, citing Gallup. SHRM confirms these costs across industries, especially in high-skill domains like AI and cybersecurity. But the true impact isn't financial alone. 'Team morale deteriorates, skillset imbalances emerge, and product development suffers,' she warned. Machado put it bluntly: 'Failing to retain AI talent comes at a steep price, not just in turnover, but in missed relevance.' She compared it to the cautionary tales of Kodak and Blockbuster – companies that didn't fail for lack of talent, but for lack of leadership readiness. 'In this market, you either evolve or dissolve. There is no middle ground.'

Machado's argument isn't exaggerated at all, according to the stats. In a 2024 Bain & Company survey, 75% of executives admitted they don't fully understand how to scale AI within their organizations. And that uncertainty at the top trickles down – creating friction, confusion and, eventually, flight.

So what makes AI talent stay? Both Glenn and Machado agree that it's not just about technical ability but about how leaders show up. 'The best leaders create environments of genuine autonomy,' Glenn said. 'They demonstrate problem-solving engagement, regardless of their technical depth, shield their teams from politics, balance accountability with empowerment and treat failure as an important part of the process.' For Machado, great leadership begins with trust and human connection. 'AI may run on data, but exceptional outcomes still run on trust,' she said. 'When leaders share purpose, invite diverse perspectives and celebrate progress over perfection, teams move from compliance to commitment.' In these types of environments, AI professionals don't just build better models – they build momentum, innovate and, most importantly, stay.

The bottom line is that there's no AI strategy without a talent strategy – and no talent strategy without leadership. Yes, compensation still matters and the global shortage of AI professionals is real. But throwing more money at the problem won't fix a culture that's broken.
Attracting and retaining AI talent is not just about who you hire, but more about how you lead. The AI talent gap, according to Machado, isn't simply a hiring problem — it's a leadership one. She added that 'this problem at its core is about trust: trust in your people, in your strategy and in your capacity to lead through change.' If AI companies want to stay competitive, the message from Glenn and Machado is that they'll need more than advanced models. They'll need leaders who can think forward, act with empathy and build environments where AI professionals can thrive. 'Innovation stalls when leadership fails. But with the right leadership? AI becomes a force multiplier, not a flight risk,' Glenn said.

Cassava Technologies and Zindi collaborate to showcase African Artificial Intelligence (AI) innovation

Zawya

12-05-2025

  • Business
  • Zawya

Cassava Technologies and Zindi collaborate to showcase African Artificial Intelligence (AI) innovation

Cassava Technologies, a global technology leader of African heritage, is pleased to announce that it has signed a Memorandum of Understanding (MOU) with Zindi, the largest professional network for data scientists in Africa, to deliver artificial intelligence (AI) solutions and GPU-as-a-Service (GPUaaS) across the African continent. This partnership represents a significant step in accelerating digital transformation in Africa and will see the two organisations collaborate on several initiatives. These include using Cassava's GPUaaS capabilities for Zindi's AI solution development and identifying opportunities for both organisations to leverage one another's platforms and ecosystems.

'For Africa's AI ecosystem to grow and thrive, it is essential to provide platforms and resources for the continent's developers and start-ups. Combining our data centres' advanced GPU capabilities with Zindi's innovative data science platform creates a powerful foundation for digital advancement. Cassava is proud to support local stakeholders as they develop digital solutions to some of Africa's most pressing problems,' said Hardy Pemhiwa, President and Group CEO of Cassava Technologies.

As Africa's pioneering AI challenge platform, Zindi collaborates with companies, non-profit organisations, and government institutions to develop, curate, and prepare data-driven challenges. This partnership underscores the two organisations' mutual commitment to nurturing AI talent and innovation throughout the continent.

'Zindi is thrilled at the opportunity to partner with Cassava Technologies to strengthen African datasets, address local problems with locally developed solutions, and help more African AI builders access the resources they need to succeed. Collaborating on the launch of a challenge specifically aimed at nurturing Africa's AI talent will not only expose entrepreneurs and innovative solutions; it will help build new skills and create employment opportunities,' said Zindi CEO and Co-Founder, Celina Lee.

With the signing of this MOU, Cassava and Zindi are set to make significant inroads in Africa's AI landscape. The partnership supports Cassava's objective of providing world-class digital solutions and advancing responsible AI adoption, innovation, and growth in Africa. It follows the recent announcement of Cassava's plans to build Africa's first AI factory and the 2024 launch of Cassava's AI business.

Distributed by APO Group on behalf of Cassava Technologies.

About Cassava Technologies: Cassava Technologies is a global technology leader of African heritage providing a vertically integrated ecosystem of digital services and infrastructure that enables digital transformation. Headquartered in the UK, Cassava has a presence across Africa, the Middle East, Latin America and the United States of America. Through its business units – namely Cassava AI, Liquid Intelligent Technologies, Liquid C2, Africa Data Centres, and Sasai Fintech – the company provides its customers with products and services in 94 countries. These solutions drive the company's ambition of establishing itself as a leading global technology company of African heritage.

About Zindi: Zindi is the leading global AI community platform, established to make data science and artificial intelligence (AI) skills accessible to all.
With a network of over 80,000 data scientists registered on the platform, from more than 180 countries worldwide, Zindi helps data practitioners of all levels learn, grow and access professional opportunities while building real-world solutions for global organisations.
