
Latest news with #EdTech

Why Cybersecurity is Now Critical for Schools

TECHx

2 days ago


Emad Fahmy, Systems Engineering Director at NETSCOUT, examines the cybersecurity risks of EdTech, including outages, breaches, and AI gaps threatening digital learning.

EdTech is transforming education through AI tutors, VR classrooms, and digital tools, but as its use grows, so do the risks. Data breaches, outages, and system failures can quickly disrupt learning. The challenge today isn't adopting new technology: it's protecting it.

EdTech's growing role and risks

EdTech blends educational theory with software innovation to enhance teaching via immersive digital experiences, online lectures, collaborative tools such as Google Workspace, and accessibility solutions for diverse learning needs. Market research from Arizton projects the global EdTech market will reach USD 738 billion by 2029. New cloud-based learning management systems (LMSs) emerge almost yearly to support coursework, while student information systems (SISs) continue to evolve. But, as with any technology, network and application performance issues can disrupt learning and strain IT teams.

A quick look at StatusGator shows several major EdTech outages in January 2025 alone, from login failures to full system crashes cutting off schools from critical platforms. In severe cases, outages can block emergency alerts, as seen during the CrowdStrike update that disrupted IT systems in U.S. schools.

EdTech adoption is accelerating worldwide, driven by both government strategies and private innovation. In the UAE, the Ministry of Education partnered with Google in 2024 to launch a national AI literacy programme. The Digital School and Alef Education also introduced an AI tutoring pilot that personalises learning through machine learning.

The PowerSchool breach is a wake-up call for digital learning

Federal laws such as the Family Educational Rights and Privacy Act (FERPA) set privacy standards but do not require breach notifications; instead, state laws and contracts govern whether schools or EdTech providers must disclose breaches or outages. In December 2024, PowerSchool, a widely used SIS platform across North America, experienced a major data breach. Attackers reportedly exploited a compromised credential to access sensitive student and staff records via the company's support portal. The breach potentially affected tens of millions of individuals, exposing names, addresses, academic performance, medical history, and other personal identifiers. Without real-time visibility into threats or consistent reporting standards, such incidents often go undetected until substantial damage has already occurred.

The underlying issues of centralised platforms, inadequate credential security, and lack of real-time alerting are not unique to one region. In the Gulf, where cloud-based tools are being integrated rapidly into national education systems, these risks are prompting pre-emptive action. Saudi Arabia's AI usage guidelines, for example, include clear restrictions on access to generative AI tools for students under 13 and require parental consent for use by those under 18, underscoring the role of policy in mitigating unintended exposure and misuse.

Why schools are a top target for cyberattacks

Schools have become a preferred target for cybercriminals, largely because their infrastructure often lags behind that of other public or private institutions. Phishing, ransomware, and credential theft remain prevalent, with schools offering rich repositories of personal data that can be exploited for years. The Internet Crime Complaint Center (IC3) continues to report increased frequency of cyberattacks targeting educational institutions. Unlike adults, students rarely monitor their credit reports, making them particularly susceptible to long-term identity fraud. At the same time, overstretched IT departments may lack the capacity to implement comprehensive security controls, particularly in schools without a dedicated chief information security officer (CISO).

Globally, the risk landscape is growing alongside investment. In the Middle East and Africa, schools are deploying everything from smart classrooms to immersive VR labs, such as those now being piloted in UAE public schools. But the speed of implementation is not always matched by readiness. Where AI is concerned, the development of policy frameworks, as seen in Saudi Arabia's three-part AI guidebook, can provide foundational safeguards, but ongoing implementation, oversight, and adaptation remain critical. Technology migrations, misconfigurations, inconsistent policy enforcement, and third-party dependencies remain some of the most common causes of downtime and data exposure. Without integrated visibility across networks and applications, many schools remain reactive rather than proactive in the face of digital risk.

By Emad Fahmy, Systems Engineering Director at NETSCOUT

AI First? Make Sure Your People Understand It First

Forbes

4 days ago


AI-first thinking doesn't just spring out of a vacuum. Leaders and employees need to adopt an AI-first mindset that prepares everyone for the changes ahead. This makes training and education about AI more important than anything else – and it is where any AI-first effort is most likely to get bogged down.

Among students, 65% say they have not had the opportunity to take an AI-specific or AI-inclusive course at their universities, according to a student-run survey published in EdTech. Only three percent felt very confident that their education would help them secure a job in a field involving AI.

AI education is still lacking for current employees as well. While the percentage of workers using AI for their jobs increased from eight percent in 2023 to more than one-third (35%) as of this spring, only 31% said their employer provided training on AI tools, according to a survey released by Jobs for the Future. In addition, AI use appears to be an individual endeavor, with a majority (60%) reporting that they use AI primarily for self-directed learning.

The importance of education and training in preparing organizations for an AI future is emphasized by Adam Brotman and Andy Sack in their latest book, AI First: The Playbook for a Future-Proof Business and Brand. An AI-first policy cannot move forward without education and training, said Brotman, former chief digital officer at Starbucks, and Sack, former adviser to Microsoft CEO Satya Nadella. 'An AI-first mindset requires a commitment to ongoing education about AI technologies and their potential applications," they wrote. "It encourages experimentation and learning from both successes and failures, ensuring that teams stay ahead of technology advancements.' Such efforts should begin with programs 'to build proficiency across the organization. These programs should cover AI basics, applications, and potential impacts on various business functions.'

Ultimately, AI education and training smooth the way for 'proper governance and process for scaling AI within your company," they added. "You can't effectively advise the company on an appropriate AI use policy or help prioritize potential AI pilots if you don't have a basic understanding of how the foundational AI systems work, versus still needing to improve, or the variety of capabilities and workflows that stem from AI."

Brotman and Sack outline a progression for both individuals and their organizations – from experimenting with AI to building an AI-first culture. Notably, an AI-first mindset also borrows from the 'lean' approach to management, emphasizing 'continuous improvement and innovations by building products that customers want through interactive cycles of build, measure, and learning,' Brotman and Sack pointed out. AI-first lean thinking 'starts with identifying the core problem that needs solving and developing a minimum viable product to test hypotheses. Lean thinking is about reducing waste in processes, understanding customer needs through direct feedback, and pivoting strategies based on data and insights.'

Republicans Must Say No to the AI Regulation Moratorium

Newsweek

4 days ago


In the earliest days of Donald Trump's second term, there were exciting signs that the administration was going to chart what we might call a "human-first" course on technology. Voters who were angry over how smartphones, social media, app stores, and EdTech had metastasized into something resembling a conspiracy against children, and who were anxious that automation might take their jobs, helped the president retake the White House.

But hopes for a human-first tech policy are already dimming. In its all-consuming efforts to beat China in the A.I. race, the Republican Party has fallen into its old libertarian habits of deferring to Big Tech's interests, failing to protect children and families from predatory uses of emerging technology, and deregulating the industry so that it can operate without any concern for consumer welfare.

It's not too late, though. In the administration's earliest days, the president sided with longshoremen against efforts to make union members redundant via automation. Also, in his January 25, 2025 executive order, the president committed to A.I. policy that pursues "human flourishing." Vice President JD Vance declared at February's A.I. Action Summit in Paris that the administration would "fight for policies that ensure that AI" will lead to "higher wages, better benefits, and safer and more prosperous communities." This is the road that most Americans want the administration to take.

But since then, the Republican Party has taken one huge step backward. Last month the House of Representatives approved an amendment to the "Big Beautiful Bill" that, if ratified by the Senate, would shield A.I. companies from state regulation and liability for ten whole years. Such a move shows astounding disregard for how ungoverned technologies can undermine human flourishing—and it would unbridle Big Tech's power.

The moratorium would void a law in Utah, for instance, that prohibits mental health chatbots from targeting users with advertising, a policy that removes companies' incentives to exploit a suffering audience. It would also block a proposed law in Texas that would require a "human operator," i.e., a human driver, to accompany an autonomous long-haul truck as it transports its freight. And it would block several laws that have been introduced around the country, including in blue states like California and New York, that would require so-called "A.I. companions"—an Orwellian bit of Big Tech branding—to clarify that they are not human beings.

Republicans must learn from Congress' past mistakes, such as when, in 1996, it passed the ignominious Section 230 of the Communications Decency Act. Section 230 was touted as necessary to guard the innovative potential of the nascent online service industry from death by regulation. But, by granting platforms sweeping immunity for any content posted by third parties, it disincentivized them from making good-faith efforts to protect kids. Section 230 dug a legal moat around Big Tech, from behind which the industry waged war on America's children. The ten-year moratorium on A.I. regulation portends a similar legacy. It indicates that Congress, especially Republican leadership, has failed to reckon with how immunizing technological power from liability threatens human flourishing.

To its credit, by including human flourishing in its A.I. policy framework, the administration recognizes the possibility of promoting A.I. innovation without sacrificing other human goods. Human flourishing as an explicit policy objective underscores that "acceleration," as the techno-libertarian Right calls it, is an oversimplified paradigm, and that tech policy needs to pursue a broader suite of values, especially the good of the human person and the family.

As we have argued elsewhere, the achievement of human flourishing in the age of A.I. (as in every age) depends on deliberate policy choices. Technological innovation, no matter how beneficial to economic prosperity or national security, should never come at the expense of the family or the human person. And there are ways to balance these interests. We have called upon the Trump administration, for instance, to establish a Working Group on Technology and the Family that would directly assist in the formation of policy to guide technology toward family empowerment, and away from legislation—like the moratorium—that would put families in the crosshairs.

In February 2019, the first Trump administration released an executive order that committed the federal government to securing "public trust" and "public confidence" in its A.I. policy. It acknowledged that protecting "American values" was a critical objective, even as it worked to advance "American leadership in AI." That is what an administration committed to human flourishing sounds like; and it is what the second Trump administration sounded like at its start. A ten-year moratorium on state regulation, by contrast, is just a retread of the tired libertarian playbook that trades American values and public trust for technological power and financial gain.

Fortunately, a groundswell of opposition has emerged among Republican senators such as Josh Hawley (Mo.), Marsha Blackburn (Tenn.), Ron Johnson (Wisc.), and Rick Scott (Fla.), who publicly oppose the moratorium. Representative Marjorie Taylor Greene (R-Ga.) has done likewise, and more may join them. So, the die is not yet cast. The word is not yet final. The future is still ahead. The Trump administration can still make a human-first A.I. policy. But the time for choosing is now.

Michael Toscano is director of the Family First Technology Initiative at the Institute for Family Studies. Jared Hayden is a policy analyst for the Family First Tech Initiative at the Institute for Family Studies. The views expressed in this article are the writers' own.

Entrepreneur UK's London 100: Perlego

Entrepreneur

13-06-2025


Industry: EdTech

Perlego is flipping the textbook game with its "Spotify for Textbooks" model — giving students unlimited access to 1.5m+ academic titles via subscription. Its AI tool, Dialogo, levels up learning with smart, verified insights, while keeping publishers paid. Its first US academic partnership with Westcliff University is already making waves, and with reader engagement up 217%, Perlego is proving that digital textbooks can actually work for students.

UAE: How parents can support their children through new AI school curriculum

Khaleej Times

13-06-2025


As the UAE strives to be a global leader in artificial intelligence (AI) development, a new educational mandate will revolutionise learning, impacting both students and parents. Starting in the 2025-2026 academic year, AI will be integrated into the school curriculum from kindergarten to 12th grade, covering foundational concepts, ethical considerations, and real-world applications. This move aligns the UAE with other forward-thinking nations like China, which are also introducing AI education early on.

For UAE parents, this presents both an exciting opportunity and a unique challenge. While AI will, undeniably, shape their children's future, many adults may feel they are playing catch-up with technology. With a compulsory AI curriculum, a crucial question arises: How can parents effectively support their children and navigate this new frontier of AI education and EdTech tools? This is the essence of 'smart parenting' in the AI age.

Dubai-based parent of two and AI brand and content strategist Abha Malpani Naismith observes, 'As a mum and AI advocate, introducing AI from age four is timely and essential. Our children are growing up in a world reshaped by AI. Teaching AI literacy ensures they are not just passive users but informed, responsible creators and problem-solvers, equipped to thrive in an AI-integrated future.'

Understanding the curriculum

The UAE's new curriculum will be comprehensive, aiming to provide a deep understanding of AI. It encompasses ethical awareness and practical applications, not just coding or robotics. Parents should familiarise themselves with the basics of AI concepts like machine learning and ethical implications to engage in meaningful conversations with their children and support their learning.

Dr Naomi Tyrrell, AI trainer and consultant, and mother of two, adds, 'Children often learn new technology faster than parents. This offers an opportunity for intergenerational learning. Safety and ethical use are paramount, so discussing AI use and ethics alongside online safety is crucial. Parents must not solely rely on schools to inform children about all risks and dangers.'

Engaging with children about AI

Parental engagement is key. Through open discussion of what children are learning in AI classes, parents can ask about projects, tools, and what they find interesting or challenging. Because AI is evolving, this dialogue must be ongoing. Parents can encourage questions, explore online resources together, and make it a shared learning experience. This reinforces school learning and strengthens the parent-child bond.

Malpani Naismith notes, 'Introducing AI in schools doesn't necessarily mean more screen time. It's about age-appropriate exposure that builds curiosity, critical thinking, and responsible use. Parents must manage screen time outside school, ensuring children unplug, play, explore, and build offline experiences.' She further suggests focusing on 'valuable' screen time that involves using AI for problem-solving, building, or expressing creativity.

Age-appropriateness and developmental stages

Dr Tyrrell explains, 'Parental support must be appropriate to children's ages and abilities. Developing awareness of different AI types and models will help support children's learning. Generative AI allows instant access to knowledge.' What a kindergartener needs to learn differs vastly from a 12th grader. Parents should understand the scope of each stage and advocate for a balanced curriculum. Concerns about complexity or different approaches can be discussed with the school.

Real-world applications and home learning

One of the strengths of the UAE's curriculum is its focus on 'real-world applications' of AI. Children might be learning how AI is used in healthcare, transportation, or environmental conservation. Parents can reinforce these concepts at home: for example, they can discuss how smart devices use AI, explore AI-powered apps together, or even conduct simple AI-related experiments. Encouraging children to think critically about how AI impacts their daily lives can deepen their understanding.

'To optimise children's AI learning, parents should familiarise themselves with mainstream AI tools – learn what they can do well and what they are not so good at! Learning how AI tools work – even at the simplest level – can be helpful to support children's learning, as they are likely to ask questions,' says Dr Naomi Tyrrell.

'The responsibility isn't just on schools; it's on us as parents to grow with our children. We need to stay informed, understand the tools they're learning, and create a healthy balance at home. That means reflecting on our own screen habits and asking schools the right questions about how they are bringing AI into the curriculum. And it's not enough for us to just be using AI; we need to also work on our AI literacy — our ability to understand, use, and critically engage with artificial intelligence in a meaningful and responsible way,' says Abha Malpani Naismith.

Balancing technology and traditional methods

While AI and EdTech offer great opportunities, maintaining balance is crucial. Over-reliance on technology can hinder other developmental aspects. Parents must ensure children still engage in traditional learning methods like reading physical books, handwriting, and face-to-face interactions.

Dr Tyrrell suggests preparing children for change by discussing how technology has evolved. 'Explore AI's possibilities and what it means for your family, community, and the world. Discuss critical questions together, like access, environmental implications, and ethical concerns.' She also advises using parental controls, limiting screen time, and encouraging off-screen activities, and she warns against over-reliance on AI, which can deskill us and erode real-life social interactions.

Digital literacy and ethical awareness

Digital literacy is a non-negotiable skill. Parents must prioritise teaching their children how to navigate the online world safely and responsibly. This includes understanding online safety, protecting personal information, and identifying misinformation. With the inclusion of 'ethical awareness' in the AI curriculum, parents should engage their children in conversations about responsible AI use, data privacy, and the potential biases within AI systems. These critical discussions will shape how the next generation interacts with technology and, importantly, encourage critical thinking, an understanding of how our data is used, and the ability to work through privacy concerns, among others.

Choosing reliable EdTech

The abundance of educational apps and online tools can be overwhelming. Parents must be discerning when selecting EdTech resources. Consulting with teachers and other parents can also provide valuable insights. British schools in Cambridgeshire and Middlesex involved in a new AI pilot programme, part of a group of 20 across Britain using the specially developed AI writing programme Writer's Toolbox, have highlighted significant gains in student confidence and engagement. While teachers had reservations, including the need for children to learn how to handwrite properly, the software adapts to where each child is individually, not just by age but also by progress, and provides instant feedback. This is encouraging self-confidence and motivation, especially in boys, participating schools found.

Dr Ian Hunter, founder of Writer's Toolbox and former university professor, says: 'We're encouraged by the early results of the British pilot. Schools and educators have been immensely supportive. I think one of the things the teachers are telling us is that, in the midst of the genuine concerns around AI, if we carefully construct purpose-built AI for the education sector, it can help amplify the work of the classroom teacher and provide customised learning at scale.'

AI and EdTech can be powerful tools for supporting diverse learning styles — particularly where platforms offer personalised learning experiences, adapting to a child's pace and learning preferences. 'It could be argued that AI and EdTech tools "level the playing field" in education,' says Dr Naomi Tyrrell. 'Features like real-time translation, speech recognition, and closed captioning improve accessibility for students with additional language or literacy needs.'

Concerns and misconceptions

Malpani Naismith highlights the need to guide children in using AI wisely and to nurture soft skills like empathy and critical thinking. The UAE's AI agenda will significantly impact future opportunities. By embracing AI education, the country is preparing its youth for future careers. Parents can support this by fostering a mindset of continuous learning and adaptability.

Dr Naomi Tyrrell concludes: 'Common concerns parents may have about AI in education are that AI will replace teachers, reduce critical thinking, compromise data privacy, or expose children to biased or inappropriate content. Some may worry that reliance on AI will make learning impersonal or that children will become too dependent on technology and screens. To address these concerns, it's important to emphasise that AI tools used in the right way can enhance rather than replace – they can provide more tailored support and free up time for more meaningful interaction with teachers and peers. The ethics and risks are not being ignored — educators and developers are increasingly embedding ethical safeguards, data privacy protections, and bias-awareness into AI tools – because they know they have to! Parents can play a key role by staying informed, guiding their children's use of AI responsibly, and maintaining open dialogue with schools and children about how these tools are being used to enhance, rather than replace, human-led learning.'
