Latest news with #dataScience


Associated Press
12 hours ago
- Business
- Associated Press
Mississippi partners with tech giant Nvidia for AI education program
The state of Mississippi and technology giant Nvidia have reached a deal for the company to expand artificial intelligence training and research at the state's education institutions, an initiative to prepare students for a global economy increasingly driven by AI, Gov. Tate Reeves announced Wednesday. The memorandum of understanding, a nonbinding agreement between Mississippi and the California-based company, will introduce AI programs across the state's community colleges, universities and technical institutions. The initiative will aim to train at least 10,000 Mississippians using a curriculum designed around AI skills, machine learning and data science. Mississippi now joins Utah, California and Oregon, which have signed on to similar programs with Nvidia.

'This collaboration with Nvidia is monumental for Mississippi. By expanding AI education, investing in workforce development and encouraging innovation, we, along with Nvidia, are creating a pathway to dynamic careers in AI and cybersecurity for Mississippians,' Reeves said. 'These are the in-demand jobs of the future — jobs that will change the landscape of our economy for generations to come. AI is here now, and it is here to stay.'

The agreement does not award any tax incentives to Nvidia, but Reeves said the state would provide funding for the initiative. Still, he said he did not foresee having to call a special legislative session in order to pay for it. Reeves said officials and Nvidia were still determining the exact dollar figure the project would require, but that the state would spend as much as it took to reach its goal of training at least 10,000 Mississippians. Some of the funding may come from $9.1 million in grants to state institutions of higher learning through the Mississippi AI Talent Accelerator Program, which Reeves announced last week.

Nvidia designs and supplies graphics processing units (GPUs), and the Mississippi program will focus on teaching people to work with them. The company has seen growing demand for its semiconductors, which are used to power AI applications. Now the world's most valuable chipmaker, Nvidia announced in April that it will produce its AI supercomputers in the United States for the first time.

Louis Stewart, head of strategic initiatives for Nvidia's global developer ecosystem, said the Mississippi program is part of a larger effort to bolster the United States' position as the global leader in artificial intelligence. 'Together, we will enhance economic growth through an AI-skilled workforce, advanced research, and industry engagement, positioning Mississippi as a hub for AI-driven transformation to the benefit of its communities,' he said.

___

This story was originally published by Mississippi Today and distributed through a partnership with The Associated Press.


Forbes
3 days ago
- Business
- Forbes
Data products and services are playing a new role in business
Data mechanics isn't an industry. Discussion in this space tends to gravitate around the notion of 'data science' as a more pleasing umbrella term. The term perhaps borrows half its name from the core practice of computer science (the label usually put on university studies designed to qualify software application developers who want to program), and there is a constant push for development in the data mechanics and management space, even though we're now well over half a century on from the arrival of the first database systems.

Although data mechanics may not be an industry, it is (in various forms) a company. Data-centric cloud platform company NetApp acquired Data Mechanics back in 2021. Data Mechanics was known as a managed platform provider for big data processing and cloud analytics, and NetApp wanted the company to help it capitalize on the growing interest in Apache Spark, the open source distributed processing system for big data workloads. But the story doesn't end there: NetApp has since sold off some of the acquisitions working at this end of the data mechanics space to Flexera, which makes some sense, as NetApp is known for its storage competencies and positions itself as the intelligent data infrastructure company, after all. Interestingly, NetApp confirmed that a divestiture of technologies at this level will often leave a residual amount of software engineering competency (and, on occasion, some intellectual property) within the teams it still operates, so these actions have two sides to them.

NetApp is now turning its focus to expanding its work with some major technology partners to provide data engineering resources for the burgeoning AI industry. This means it is working with Nvidia on its AI Data Platform reference design via the NetApp AIPod service to (the companies both hope) accelerate enterprise adoption of agentic AI. It is also now offering NetApp AIPod Mini with Intel, a joint technology designed to streamline enterprise adoption of AI inferencing. That 'data for AI' thought is fundamental.

If there's one very strong theme surfacing in data mechanics right now, it's simple to highlight. The industry is asking: okay, you've got data, but does your data work well for AI? As we know, AI is only as smart as what you tell it, so nobody wants garbage in, garbage out. This theme won't be going away this year, and it will be explained and clarified by organizations, foundations, evangelists and community groups spanning every sub-discipline of IT, from DevOps specialists to databases to ERP vendors and everybody in between.

Operating as an independent business unit of Hitachi, Pentaho calls it 'data fitness' for the age of AI. The company is now focusing on expanding the capabilities of its Pentaho Data Catalog for this precise use. Essentially a data operations management service, this technology helps data scientists and developers know what and where their data is. It also helps monitor, classify and control data for analytics and compliance.

"The need for strong data foundations has never been higher and customers are looking for help across a whole range of issues. They want to improve the organization of data for operations and AI. They need better visibility into the 'what and where' of data's lifecycle for quality, trust and regulations. They also want to use automation to scale management with data while also increasing time to value," said Kunju Kashalikar, product management executive at Pentaho.
There's a sense of the industry wanting to provide back-end automations that shoulder the heavy infrastructure burdens associated with data wrangling on the data mechanic's workshop floor. Because organizations are now using a mix of datasets (some custom-curated, some licensed, some anonymized, some just plain old data), they will want to know which ones they can trust at what level for different use cases. Pentaho's Kashalikar suggests that those are the factors the company's platform has been built to address. He points to its ability to now offer machine learning enhancements for data classification (which can also cope with unstructured data), designed to improve the ability to automate and scale how data is managed across expanding data ecosystems. These tools also offer integration with model governance controls, which increases visibility into how and where models are accessing data, for both appropriate use and proactive governance.

The data mechanics (or data science) industry tends to use industrial factory terminology throughout its nomenclature. The idea of the data pipeline is intended to convey the 'journey' for data that starts its life in a raw and unclassified state, where it might be unstructured. The pipeline progresses through various filters that might include categorization and analytics. It might be coupled with another data pipeline in some form of join, or some of it may be threaded and channelled elsewhere. Ultimately, the data pipe reaches its endpoint, which might be an application, another data service or some form of machine-based data ingestion point. Technology vendors who lean on this term are fond of laying claim to so-called end-to-end data pipelines; the phrase is meant to convey breadth and span.

Proving that this part of the industry is far from done or static, data platform company Databricks has open sourced its core declarative extract, transform and load framework as Apache Spark Declarative Pipelines. Databricks CTO Matei Zaharia says that Spark Declarative Pipelines tackles one of the biggest challenges in data engineering: making it easy for data engineers to build and run reliable data pipelines that scale. He said end-to-end too, obviously. Spark Declarative Pipelines provides a route to defining data pipelines for both batch (i.e. overnight) and streaming ETL workloads across any Apache Spark-supported data source. That means data sources including cloud storage, message buses, change data feeds and external systems. Zaharia calls it a 'battle-tested declarative framework' for building data pipelines, one that addresses complex pipeline authoring, manual operations overhead and siloed batch or streaming jobs.

'Declarative pipelines hide the complexity of modern data engineering under a simple, intuitive programming model. As an engineering manager, I love the fact that my engineers can focus on what matters most to the business. It's exciting to see this level of innovation now being open sourced, making it more accessible,' said Jian Zhou, senior engineering manager for Navy Federal Credit Union.
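To make that pipeline journey concrete, here is a minimal sketch in plain PySpark rather than the new Spark Declarative Pipelines API, whose exact surface isn't detailed here. The file paths and column names (events.json, user_id, event_type) are illustrative assumptions, not anything drawn from NetApp, Pentaho or Databricks.

    # A minimal batch ETL pipeline sketch in plain PySpark.
    # Paths and column names are illustrative assumptions only.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

    # 1. Raw, unclassified data enters the pipeline (semi-structured JSON).
    raw = spark.read.json("/data/raw/events.json")

    # 2. Filtering and categorization: drop malformed rows, tag each event.
    cleaned = (
        raw.dropna(subset=["user_id", "event_type"])
           .withColumn(
               "category",
               F.when(F.col("event_type") == "purchase", "revenue")
                .otherwise("engagement"),
           )
    )

    # 3. Analytics: aggregate events per category.
    summary = cleaned.groupBy("category").agg(F.count("*").alias("events"))

    # 4. Endpoint: write refined data where an application or another
    #    data service can pick it up.
    summary.write.mode("overwrite").parquet("/data/curated/event_summary")

The declarative approach the article describes aims to let engineers state which tables should exist and how they derive from their sources, leaving the ordering and operational plumbing of a hand-built sequence like this one to the engine.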
A large part of the total data mechanization process is unsurprisingly focused on AI and the way we handle large language models and the data they churn. What this could mean for data mechanics is not just new toolsets, but new workflow methodologies that treat data differently. This is the view of Ken Exner, chief product officer at search and operational intelligence company Elastic.

'What IT teams need to do to prepare data for use by an LLM is focus on the retrieval and relevance problem, not the formatting problem. That's not where the real challenge lies,' said Exner. 'LLMs are already better at interpreting raw, unstructured data than any ETL or pipeline tool. The key is getting the right private data to LLMs, at the right time… and in a way that preserves context. This goes far beyond data pipelines and traditional ETL, it requires a system that can handle both structured and unstructured data, understands real-time context, respects user permissions, and enforces enterprise-grade security. It's one that makes internal data discoverable and usable – not just clean.'

For Exner, this is how organizations will successfully be able to grease the data mechanics needed to make generative AI happen: by unlocking the value of the mountains of (often siloed) private data they already own, data that's scattered across dozens (spoiler alert: it's actually often hundreds) of enterprise software systems.

As noted here, many of the mechanics playing out in data mechanics are aligned to the popularization of what the industry now agrees to call a data product. As data becomes a more tangible 'thing' in enterprise technology alongside servers, applications and maybe even keyboards, we can consider its use as more than just information; it has become a working component on the factory floor.
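To close with something concrete, here is a small, purely illustrative sketch of the retrieval-first pattern Exner describes. The tiny in-memory "index" and the stand-in call_llm function are assumptions made for the example; they are not Elastic's API or any real model endpoint. The point is the ordering he argues for: fetch relevant, permitted context first, then hand it to the model along with the question.

    # Illustrative sketch of retrieval-first use of private data with an LLM.
    # PRIVATE_DOCS, search_private_data() and call_llm() are hypothetical
    # stand-ins, not any vendor's real API.

    PRIVATE_DOCS = [
        {"snippet": "Travel policy: economy class for flights under six hours.",
         "allowed": {"alice"}},
        {"snippet": "Payroll runs on the 25th of each month.",
         "allowed": {"alice", "bob"}},
    ]

    def search_private_data(query: str, user: str, top_k: int = 5) -> list[dict]:
        # Relevance here is naive keyword overlap; permissions are a set check.
        words = query.lower().split()
        hits = [
            doc for doc in PRIVATE_DOCS
            if user in doc["allowed"]
            and any(word in doc["snippet"].lower() for word in words)
        ]
        return hits[:top_k]

    def call_llm(prompt: str) -> str:
        # Stand-in for a real model call.
        return "[model answer would be generated from]\n" + prompt

    def answer_with_private_data(question: str, user: str) -> str:
        docs = search_private_data(question, user)
        context = "\n\n".join(doc["snippet"] for doc in docs)
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
        return call_llm(prompt)

    print(answer_with_private_data("when does payroll run?", "bob"))

In a real system the naive keyword match would be a search or vector index and the stub would be an actual model call, but the shape of the flow, retrieve permitted context and only then generate, is the part the quote emphasizes.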

Wall Street Journal
3 days ago
- Health
- Wall Street Journal
Defending RFK Jr.'s Adviser
In your editorial 'Meet RFK Jr.'s Vaccine Advisers' (June 13), you write of the secretary's new plans for the Advisory Committee on Immunization Practices: 'One appointee, Retsef Levi, is an MIT business school professor of operations management. What does he know about vaccines?' Allow me to rise to my colleague's defense. The members of ACIP don't develop vaccines. They are charged with assessing their safety and efficacy. That is done through statistics and data science, areas in which Mr. Levi excels. His assessments will be conducted through vaccine trials, using a statistically significant number of participants and correlating the efficacy with a host of variables such as age, comorbidity, gender and dosage. The primary tool available to the researchers is statistical modeling. Only after discovering correlations may medical researchers try to explain them, but that is a secondary part of the trials.


Forbes
4 days ago
- Business
- Forbes
Africa: An Unexpected Hot Hub For IT Talent
Chris Barbin, CEO & Founder, Tercera.

AI may be taking up all the mindshare these days, but when it comes to designing, developing and delivering technology (especially AI), people will always be part of the equation. That's why tech and tech services leaders need to keep a finger on the pulse of where global talent is. Tariffs and geopolitical issues may be weighing on the world's businesses and influencing investment decisions, but great talent will always be needed. And that talent is increasingly global.

So, where is the next up-and-coming global talent destination? If you guessed Southeast Asia or Latin America, you were wrong. I believe it's Africa. Africa holds enormous potential for IT services, especially as companies go on the hunt for data scientists and AI engineers. With wages on the rise in traditional hotspots like Latin America, Canada and Eastern Europe, leaders are starting to explore Africa as another option.

Africa boasts one of the world's youngest, fastest-growing populations, which is projected to nearly double in the next quarter century. By 2050, the UN estimates that Africans will make up a quarter of the global population and more than a third of the world's 15- to 24-year-olds, most of whom are growing up as digital natives. Africa's growth, the continued graying of workforces in more mature economies and the rising need for data and AI talent will make the continent harder to ignore.

My firm has made understanding Africa's talent ecosystems a priority. We've researched which markets make sense for our portfolio companies, and which have homegrown firms that can support next-generation IT services. We've conducted dozens of meetings with founders and investors, university leaders, VC-backed start-ups and organizations focused on building talent. We've met leaders from Nigeria, Ghana, Senegal, Kenya, South Africa, Egypt, Rwanda, Côte d'Ivoire, Ethiopia, Tunisia and Mauritius. While these efforts have only scratched the surface of understanding this unique, diverse continent, here are a few of my early takeaways.

Africa is experiencing an explosion of tech-focused boot camps. Corporate and government-sponsored training programs and tech-focused boot camps are supplementing universities to produce professionals with the technical and business skills firms seek. In Kenya, for instance, tech-staffing firm Tana equips entry-level tech talent with in-demand skills, offering international clients a sustainable way to scale. And across the continent, the ALX Foundation trains people on Salesforce, data, AI and other skills.

This is partly to counteract the soaring unemployment across the continent. More than 8 million African youth will enter the labor market annually in the coming decades, yet only about 3 million formal wage jobs are being created each year. For Africans aged 18 to 35, unemployment tops the list of problems they want their governments to address, and national leaders are under intense pressure to compete for business opportunities and create jobs. Today, trained resources still far exceed available jobs. Nonetheless, these programs will pay dividends in the long term, especially as they shift focus to training and staffing the 'missing middle'—those mid-level roles with the technical and soft skills needed in the AI era.

A steady flow of tech giants, accounting firms and global systems integrators (GSIs) has already moved operations onto the continent. Microsoft and Amazon were some of the first big tech companies to invest in the area.
It was a team in South Africa that helped develop what became AWS cloud computing. Google launched an AI research center in Ghana in 2018, and more recently opened a product development center in Kenya. And ServiceNow, which has seen tremendous growth, is in the early stages of exploring how it builds out a base of certified talent there and invests in services businesses in the region.

As for large services firms, we looked at a representative sample of 25 GSIs and found that 16 had an Africa presence—including Accenture, Deloitte, PwC, EY and Infosys. Ten had offices in multiple countries. The African markets with the most GSIs are South Africa, Morocco, Egypt, Kenya, Nigeria and Tunisia.

Companies looking to build out their talent base in Africa should consider the distinct nuances of the different countries. On one end of the spectrum are countries like South Africa, Egypt and Kenya that have mature tech ecosystems and economies that I believe can more easily support new entrants. In the middle are countries like Nigeria, Ghana, Rwanda and Senegal that have growing economies and large young workforces worthy of investment, but where navigating regulations, economic factors, language and cultural differences may require deeper investment and a more nimble local partner. On the other end of the spectrum are countries like Cameroon and the Democratic Republic of Congo (DRC) that harbor fast-growing populations of talented young people, but may also have political, security and infrastructure challenges that might hinder immediate investment.

As always, when investing in any new area, it's important for leaders to know how a regional investment fits into the broader business and operational strategy, and to go in with eyes wide open, fully understanding the opportunities and risks. Certain technical and soft skills may be scarce, given the younger and less-developed status of the labor market in parts of Africa. And leaders tell us that once individuals have been trained to an internationally competitive level, keeping them engaged in the local market can be challenging.

Africa won't be the answer for every firm, nor will it be the only region where companies look to find scarce technical talent in the AI era. For example, I fully expect India to see a surge in investment as well, especially from U.S. companies looking to navigate a growing trade war in other areas of the world. However, those firms that can get into the right market early, with the right level of oversight, may just outpace their competitors that play it safe.


Forbes
08-06-2025
- Science
- Forbes
Are We Paying Too Much Attention To Machines?
Are we paying too much attention to machines? As we delve into everything that artificial intelligence can do today, we also run into some questions about what we choose to offer to the technology. In some ways, you could boil this down to talking about the attention mechanism.

Stephen Wolfram is a renowned data scientist and mathematician who often talks about the ways that AI curates human attention, or the ways that we can direct it to focus on what's useful to us. Here's a bit of what he said in a recent talk that I wrote about a few weeks ago: 'Insofar as we humans still have a place, so to speak, it's defining what it is that we want to do, and that's something that you can specify that more precisely computationally,' Wolfram said. 'That's how you get to (the answer) that is precisely what we want.'

Interested in the intersection of human and AI attention, I typed the following question into Microsoft Copilot: 'are we paying too much attention to machines?' Here are the five sources that the model used to reply.

The first one is from one of our own authors at Forbes, Curt Steinhorst, who asked: how will we keep people at the center of business? 'We seem to believe that we are only one 'life hack' away from limitless productivity, that the skilled use of human focus can be reduced to a productivity system, and that if we simply want it bad enough, we can beat the machines at their own game,' Steinhorst writes. 'But this attitude amounts to a passive indictment of our innate humanity, and it is a problem. We will never catch machines and digital tools in the ways they excel—but there is reason to believe that technology will never catch up to humanity in the ways that we excel. The key is to understand and nurture the differences, rather than pursue the parallels.'

The second source Copilot shows is a scientific paper in the International Journal of Information Management that asks: what is it about humanity that we can't give away to intelligent machines? I'm going to quote from the conclusions of the study: 'Humans must retain the role of meaningful, responsible critique of the design and application of AI, and the intelligent machines it can create. Critique is a vital concept that humanity can retain as a means to ensure liberation from intelligent machines. Suppose intelligent machines are used to help shape decision processes in life-changing situations, such as criminal court proceedings, or to aid emergency care first responders in disaster situations. In that case, they should serve only as referees or expert guides and not as decision-makers. It is critical that such machine 'referees' or 'guides' should be subject to constant human critique. Second, a human must be kept in the loop of intelligent machine decision-making processes. This involvement is vital to preserve our ability to systematically reflect on the decisions we make, which ultimately influence our individuality, a central feature of humanism.' I think that's useful, too.

The third source is a LinkedIn piece from Shomila Malik noting that the brain looks for information about 4 times per second, and talking about how our human attention is paid. I think this leads toward the piece that I'll summarize next: here, there's sort of an emphasis on prolific media and stimulus 'flooding the zone' and overwhelming our human attention spans. There's an interesting proposition in that fourth link I found, talking about the recent work of pioneers like Ezra Klein.
The author also reveals a theory from professor of psychiatry Joel Nigg. In a nutshell, it's that our attention is being degraded through attentional deficits caused by things like a pathogenic environment, inadequate sleep, unhealthy diets, air pollution, lack of physical activity, other health conditions, overwork, excessive stress, early trauma, relationship strains, and smoking cigarettes.

In the last of the links, at the New York Times, Stephen Hawking is quoted saying artificial intelligence could be a real danger and explaining the problem this way: 'It could design improvements to itself and outsmart us all,' Hawking theorized. I'll let that comment speak for itself. (Be sure to check out Hawking's words on 'killer machines' and frightening scenarios, and remember, this guy is a renowned scientist.)

In a recent talk at Imagination in Action, David Kenny talked about applying lessons from IBM Watson's performance on Jeopardy, and other landmarks of AI design. In general, he noted, we're moving from the era of inductive reasoning to one of deductive and affective reasoning. He mentioned a weather app giving probabilities in percentages, rather than a clear answer, and the need to prompt engineer in order to get results from LLMs, instead of just accepting whatever they say the first time. A new generation, he said, is generally becoming more trustful of AI for data on medical conditions, relationships, financial strategies, and more.

'There's just been an enormous trust put in this,' he said. 'It's all working for them on a very personalized basis. So we find that there are people getting their own information.'

Human interactions, he said, like dating and marriage, are declining, and people trusting the machines more can be good, or in his words, 'super-dangerous.'

'(Humans need to) build critical thinking skills, build interpersonal skills, do things like this that bring them together with each other, spend time with each other in order to take full advantage of the AI, as opposed to ceding our agency to it,' he said. 'So while the last 15 years were largely about technical advances, and there's a lot of technical advances we're going to see today and invest in, I think it's even more urgent that we work on the human advances, and make sure that technology is actually bringing communities back together, having people know how to interact with each other and with the machine, so that we get the best answer.'

And then he went back to that thesis on inductive versus deductive reasoning. 'It takes a humility of being able to understand that we're no longer about getting the answer, we're about getting to the next question,' Kenny said.

For sure, there's a need to celebrate the human in the loop, and the inherent value of humanity. We can't give everything away to machines. All of the above tries to draw some through lines between what we can give away and what we can keep. Maybe it's a little like that Marie Kondo thing, where if it sparks joy, we reserve it for human capability, and if we need help, we ask a machine. But this is going to be one of the balancing acts that we have to do in 2025 and beyond, as we reckon with forces that are, in human terms, pretty darn smart.