
SICE 2025 urges responsible AI use in higher education
Dual-campus conference collaboration
Day One of the conference took place at the University of Sharjah (UoS), while Day Two was held at the American University of Sharjah (AUS) in the presence of Her Excellency Sheikha Bodour bint Sultan Al Qasimi, President of AUS. The two days brought together academics, researchers and industry leaders to explore the expanding role of artificial intelligence (AI) in higher education and workforce development.
Opening remarks emphasise partnerships
The day began with a welcome address by Her Excellency Sheikha Bodour, followed by remarks from Professor Esameldin Agamy, Chancellor of the University of Sharjah, both underscoring the importance of academic partnerships in advancing innovation across the UAE's education sector.
Sheikha Bodour urges thoughtful AI integration
Addressing the conference, Sheikha Bodour said:
"We have a shared responsibility as educators and innovators to adapt to artificial intelligence and to thoughtfully shape its integration in ways that uplift our students – and society..."
AUS leadership stresses frameworks for AI
Her comments echoed remarks made on Day One by Dr. Tod Laursen, Chancellor of AUS, who said:
"SICE 2025 served as a timely platform to engage with how AI is redefining the role and responsibilities of higher education institutions..."
Keynote speakers explore strategic AI adoption
The Day Two program featured two keynote presentations, each offering distinct insights into the evolving role of AI in higher education.
Khadish Franklin, Managing Director and Head of Research Advisory Services at EAB, discussed how institutions can establish an AI posture to support strategic transformation. Dr. Jassim Al Awadhi, Senior Director and Digital Transformation Principal in the telecom sector, examined AI's implications for graduate readiness and future labor market demands, while Dr. Sami Nejri, an invited speaker, explored AI's cognitive and interdisciplinary dimensions, encouraging institutions to rethink traditional academic boundaries.
Panel discusses higher education's role in bridging the AI skills gap
A panel discussion titled 'Bridging the AI Skills Gap: Higher Education's Role in Shaping the Future Workforce' brought together leaders from New York University Abu Dhabi, Mohamed bin Zayed University of Artificial Intelligence, the American University in Cairo, and Amazon Web Services. Moderated by Dr. Fadi Aloul, Dean of the AUS College of Engineering, the discussion focused on the need to integrate AI-related competencies into curricula while maintaining academic rigor and relevance.
Research presentations highlight AI's transformative role
Over the course of the day, more than 30 peer-reviewed research papers were presented across six thematic tracks, covering topics such as AI in engineering and design education, blended learning and gamification, multilingual instruction, AI in legal writing, personalized learning models, faculty development and data-driven academic research. Presenters showcased a range of applications demonstrating how AI is reshaping pedagogy, assessment, engagement and institutional planning.
Poster session showcases faculty and student innovations
A poster session held in the AUS Main Building Rotunda provided faculty and student researchers with a platform to share projects focused on AI-generated content, classroom technologies, learning analytics and collaborative digital tools.
Expert insights on academic readiness for AI integration
Dr. James Griffin, Vice Provost for Undergraduate Affairs and Instruction at AUS, said: "The technical depth of the conference was especially impactful... These insights offer a practical roadmap for how universities can approach AI adoption with academic rigor and innovation."
AUS faculty honored for excellence in research
During the conference, six distinguished AUS faculty members were recognized with the 2025 Excellence in Research Awards, honoring exceptional contributions in creative works, humanities and social sciences, and STEM. In the Creative Works category, Faysal Tabbarah, Associate Professor of Architecture, received first prize for his innovative architectural research addressing regional and environmental contexts, while Dr. Sohail Dahdal, Head of the Department of Media Communication, earned second prize for his immersive storytelling projects that integrate artistic excellence with digital technology. In the Humanities and Social Sciences category, Dr. John Katsos, Professor of Management, was awarded first prize for his influential and widely recognized research in business and peace studies, including Nobel Peace Prize nominations and publications in leading journals, while Dr. Ahmed Ali, Professor of Translation and Head of the Department of Arabic and Translation Studies, received second prize for his impactful work in Arabic linguistics and translation. In the STEM category, Dr. Mostafa Shaaban, Associate Professor of Electrical Engineering and Director of the Energy, Water and Sustainable Environment Research Center, received first prize for his leadership in smart grids, energy resilience and electric vehicles, and Dr. Farid Abed, Professor of Civil Engineering, was honored with second prize for his advancements in structural engineering and sustainable construction materials.
SICE 2025 reinforces Sharjah's leadership in AI and education
SICE 2025 reflected a shared commitment by AUS and UoS to advancing academic dialogue on the integration of artificial intelligence in higher education. Through this collaboration, the conference reinforced Sharjah's position—and that of the wider UAE—as a leading center for research, innovation and cross-sector engagement in shaping the future of education.