
Vianeos unveils InCarCall and AI Content Moderation platform at CABSAT
Also making its debut at CABSAT is Vianeos' AI Content Moderation platform. This solution uses artificial intelligence to automatically detect and filter inappropriate language and sensitive visuals in both live and on-demand video streams. The technology is aimed at helping video platforms deliver safer, more compliant viewing experiences, in line with both regulatory and brand requirements.
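Vianeos has not published how the platform works internally, so purely as an illustration of the flag-and-threshold pattern such systems typically use, here is a minimal Python sketch of a moderation pass over caption text. Every name, term, and threshold in it is a hypothetical stand-in, not part of the Vianeos product.

    # Illustrative sketch only -- not the Vianeos implementation.
    # Flags caption text whose "risk score" crosses a configurable threshold.
    import re

    # Hypothetical block-list mapping flagged terms to severity weights.
    BLOCKLIST = {"slur_a": 1.0, "slur_b": 0.8, "profanity_x": 0.4}

    def moderate_caption(text: str, threshold: float = 0.5) -> dict:
        tokens = re.findall(r"[a-z_]+", text.lower())
        hits = [t for t in tokens if t in BLOCKLIST]
        score = max((BLOCKLIST[t] for t in hits), default=0.0)
        return {
            "flagged": score >= threshold,  # True -> hold for review or mask
            "score": score,
            "matches": hits,
        }

    # Example: screen a line of subtitle text before it is delivered.
    result = moderate_caption("an ordinary caption with profanity_x in it")
    print(result)  # {'flagged': False, 'score': 0.4, 'matches': ['profanity_x']}

In practice the dictionary scorer would be a trained classifier, and a parallel model would screen the video frames themselves; the sketch only shows where a configurable threshold separates "deliver" from "hold for review".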
Alongside these two flagship launches, Vianeos is showcasing a comprehensive portfolio of scalable solutions at the event. These include targeted advertising tools, robust video delivery infrastructure, customizable white-label mobile communication apps, and streaming platforms tailored for the hospitality sector. Each solution is designed to help media companies expand their audience reach and deliver high-quality, reliable content experiences.
Commenting on the company's participation, Frédéric Fellague, Head of Products & Marketing at Vianeos, said: 'We're excited to connect with partners and innovators at CABSAT and showcase how Vianeos is shaping the future of content delivery and connected experiences. With groundbreaking solutions like InCarCall and AI Content Moderation, we're continuing to redefine what's possible for our clients across industries.'

Related Articles


UAE Moments
7awi Media Group Launches AI Content Code of Conduct
June 23, 2025, Dubai, United Arab Emirates — 7awi Media Group has introduced its official Code of Conduct for AI-Generated Content, reinforcing its commitment to editorial quality and responsible content practices. This initiative reflects 7awi's ongoing efforts to adapt to evolving content creation technologies while maintaining trust and transparency with its audiences across platforms spanning fashion, wellness, lifestyle, and more.

As artificial intelligence (AI) becomes increasingly integrated into the media landscape, 7awi recognizes the urgent need for clear editorial and ethical guidelines to preserve credibility and protect audiences from misinformation.

'AI is not just a tool—it's a force that will shape the future of media across our region. At 7awi, we're committed to leading this transformation responsibly, ensuring that our AI journey is guided by ethics, creativity, and a deep respect for the trust our audiences place in us.' – Anas Abbar, Co-Founder & CEO, 7awi Media Group

The full Code of Conduct for AI-Generated Content, outlining the principles guiding AI content use across 7awi's brands, is available here: Code of Conduct

This code will serve as a cornerstone of 7awi's evolving editorial strategy, empowering its teams to innovate responsibly while safeguarding public trust.


UAE Moments
7awi Media Group: Code of Conduct for AI-Generated Content
At 7awi Media Group, we believe in the power of storytelling and embrace technology that helps us tell those stories more effectively. As Artificial Intelligence continues to reshape the world of content, we see it not as a replacement but as a tool that supports our creativity, enhances our workflows, and allows us to serve our audiences more efficiently. That said, there are lines we won't cross. This AI Code of Conduct reflects our commitment to quality, credibility, and editorial responsibility across all 7awi platforms—whether in fashion and beauty, wellness, automotive, or lifestyle. It's our promise to our readers, our partners, and ourselves that no matter the tool, integrity comes first.

1. Truth First: Accuracy & Credibility
No medical, scientific, or technical content is published without verifying it through reliable, recognized sources. Sensitive topics must be reviewed by qualified human experts. AI may assist, but never replace, human judgment in validating facts.

2. We're Transparent With Our Readers
If AI helped shape a piece of content, we'll say so—especially when it matters. Misleading our audience by passing off AI-generated material as fully human-written is off-limits.

3. Editorial Voice Matters
Every piece must reflect the unique tone and editorial spirit of the 7awi platform it appears on. Awkward language, repetition, or robotic phrasing? Not acceptable. Disrespectful, biased, or discriminatory content—AI or not—has no place here.

4. Respect Intellectual Property
We do not publish plagiarized material. Period. AI content is subject to the same copyright standards as any other. Quoting studies or reports? Sources must always be credited.

5. Originality Is Non-Negotiable
Copy-pasting AI text with no added value is lazy, and it won't fly here. Our headlines must inform, not mislead. We don't do clickbait.

6. Every AI-Assisted Article Gets Human Eyes
Nothing generated by AI goes live without a thorough editorial review. Our editors are trained to spot issues, elevate quality, and ensure every story meets our standards.

7. Privacy Is Personal
We never use real personal data in AI-generated content without explicit consent. Fictionalized content must never blur the line between reality and make-believe when it involves real individuals or organizations.

8. We Evolve With the Technology
This Code isn't static. As AI tools evolve, so will our practices. Our content teams remain accountable for applying this policy, consistently and thoughtfully.

9. Empowering Our Teams
We invest in training so our editors and writers understand AI's strengths—and its limitations. AI is a support system, not a shortcut. We use it to enhance quality, never to compromise it.

10. A Note on Accountability
We're not perfect—and neither is AI. Mistakes will happen. But at 7awi, we own them. We're committed to correcting errors transparently and learning as we go. This is a journey, and we're here for it—with honesty, humility, and an unwavering focus on earning our audience's trust.

At 7awi, we celebrate innovation, but never at the cost of trust. This Code is more than a set of rules—it's part of our editorial DNA. Let's use technology the right way. With integrity. With purpose. With people at the center of it all.


Gulf Business
Why the Turing Test is still the best benchmark to assess AI
'A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.' – Alan Turing

We have come a long way since the beginning of modern AI in the 1950s, and especially in the last few years. I believe we are now at a tipping point where AI is changing the way we do research and the way industry interacts with these technologies. Politics and society are having to adjust, making sure that AI is used in an ethical and secure way and that privacy concerns are addressed. While AI has a lot of potential, there are still a number of issues and concerns; if we manage to address these, we can look ahead to good things from AI.

Alan Turing (1912–1954) was a British mathematician and computer scientist, widely known as the father of theoretical computer science and AI. Among his notable contributions, he introduced the concept of a theoretical computing machine, now known as the Turing machine, which laid the foundation for modern computer science. He worked on the design of early computers at the National Physical Laboratory and later at the University of Manchester, where I'm based, and his pioneering work continues to be influential in contemporary computer science. He also developed the Turing test, which measures the ability of a machine to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.

The Turing Test: Why it's relevant

The Turing test is still used today. Turing introduced it as what's known as the imitation game, in which a human interrogator interacts with two hidden entities – one human and the other a machine – through text-based communication, much like ChatGPT. The interrogator cannot see or hear the participants and must rely solely on the text conversation to judge which is the machine and which is the human. The machine's objective is to generate responses indistinguishable from those of a human, while the human participant aims to convince the interrogator of his or her humanity. If the interrogator cannot reliably distinguish between machine and human, the machine is said to have passed the Turing test. It sounds very simple, but it is an important test because it has become a classic benchmark for assessing AI. There are, however, criticisms of and limitations to the test.
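To make that set-up concrete, here is a toy harness for the imitation game – an illustrative Python sketch, not a standard benchmark implementation. The two respondents are deliberately identical stubs, so the interrogator is forced to guess; everything here, including the questions and the length heuristic, is a hypothetical placeholder.

    import random

    # Toy respondents: in a real run these would be a person and a chatbot.
    def human_respondent(question: str) -> str:
        return "Honestly, I'd have to think about that one."

    def machine_respondent(question: str) -> str:
        return "Honestly, I'd have to think about that one."  # mimics the human

    def interrogate(respond_a, respond_b) -> str:
        # The interrogator sees only text and must name which side is the machine.
        for q in ["What did you have for breakfast?", "Tell me a childhood memory."]:
            a, b = respond_a(q), respond_b(q)
            if a != b:                    # any detectable difference -> guess on it
                return "a" if len(a) < len(b) else "b"
        return random.choice(["a", "b"])  # indistinguishable: forced to guess

    # Hide the machine behind a random label and see how often it is identified.
    trials, caught = 1000, 0
    for _ in range(trials):
        machine_is_a = random.random() < 0.5
        pair = (machine_respondent, human_respondent) if machine_is_a \
            else (human_respondent, machine_respondent)
        guess = interrogate(*pair)
        caught += (guess == "a") == machine_is_a

    print(f"machine identified in {caught / trials:.0%} of trials")
    # Near 50% means chance-level guessing: the machine "passes" this toy test.

A result near 50 per cent means chance-level guessing, which is exactly the passing condition Turing described; a real evaluation would of course use human interrogators and far richer conversations.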
As we mark Alan Turing Day 2024, I can say that AI is moving closer to passing the Turing test – but we're not quite there yet. A recent paper stated that ChatGPT had passed it. ChatGPT is a natural language processing model that generates responses to the questions we pose that look like responses from a human, and certainly for short conversations it does quite a good job. But as the conversation grows longer, flaws and weaknesses start to show. So I think ChatGPT is probably the closest we currently get to passing the Turing test. Many researchers and companies are working on improving the current version of ChatGPT, and what I would like to see is a machine that understands what it produces. At the moment, ChatGPT produces a sequence of words suitable for a particular query, but it doesn't understand the meaning of those words.

If ChatGPT comes to understand the true meaning of a sentence – which requires contextualising a particular response or query – I think we will then be in a position to say that, yes, it has passed the Turing test. I would have hoped we'd be past this stage by now, but I expect we will reach this point in a few years' time, perhaps around 2030.

At the University of Manchester, we are working on various aspects of AI in healthcare – better, cheaper, or quicker treatment is in the interest of society. It starts with drug discovery. Can we find drugs that are more potent, have fewer side effects, and are ideally cheaper to manufacture than those currently available? We use AI to help guide us through the search space of different drug combinations, and the AI tells us, for example, which drugs we should combine and at which dose. We also work with the UK National Health Service and have come up with fairer reimbursement schemes for hospitals. In one case we use what's called sequential decision making; in another, techniques based on decision trees. So we use different methods and look at different applications of AI within healthcare.

A particular area of cyber security I'm working on is secure source code. Source code – a sequence of instructions telling a computer what to do – is one of the fundamental levels at which we humans interact with a computer. If it is of poor quality, it can open up security vulnerabilities that hackers could exploit. We use verification techniques combined with AI to scan through source code, identify security issues of different types, and then fix them. We have shown that doing so increases the quality of the code and improves the resilience of the software. We generate a lot of code, and we want to make sure it is safe, especially for a business in a high-stakes sector such as healthcare, defence or finance.
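The article doesn't detail how that scanning works, but the flag-and-report half of the idea can be sketched in a few lines. The rule set below is a hypothetical example rather than the Manchester team's actual method; as described above, a real pipeline would combine such pattern checks with formal verification and a learned model.

    import re

    # Hypothetical rule set: pattern -> issue description.
    RULES = {
        r"\beval\(": "eval() on untrusted input allows code injection",
        r"\bos\.system\(": "shell command built from strings invites injection",
        r"password\s*=\s*[\"']": "hard-coded credential in source",
    }

    def scan_source(source: str) -> list[tuple[int, str]]:
        findings = []
        for lineno, line in enumerate(source.splitlines(), start=1):
            for pattern, issue in RULES.items():
                if re.search(pattern, line):
                    findings.append((lineno, issue))
        return findings

    sample = 'password = "secret123"\nresult = eval(user_input)\n'
    for lineno, issue in scan_source(sample):
        print(f"line {lineno}: {issue}")
    # line 1: hard-coded credential in source
    # line 2: eval() on untrusted input allows code injection

Only the detection half is shown here; the "fix" step the author mentions would then rewrite or reject the flagged constructs.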
AI in sport

There's a lot of scope and potential for AI in creativity and sport. In football, we have data about match action – where the ball is, who has it, and the positioning of the players. It is genuinely big data, and we can analyse it to refine a strategy against a particular opponent by looking at past performance and player style, then use the data to adjust our approach. This would be very tough without AI because of the sheer amount and complexity of the data.

We are also looking at music education, helping people learn an instrument by creating virtual music teachers. We can combine AI with other technologies, such as virtual and augmented reality, to project a tutor; if you wear VR goggles, you can actually interact with the tutor. This is quite revolutionary and potentially opens up music to everyone on the planet.

At the moment, AI is exceptionally good at specific tasks, and we are making very good progress on general AI – AI that behaves in a similar way to humans and that we can interact with. This is a game changer, made possible by ChatGPT and other systems, and industry is already using the technology for completely new business ideas we haven't even thought of.

A vision and strategy for AI is crucial. The UAE National Strategy for AI 2031 is a very good example of an ambitious vision covering education and reskilling, investment in research, and the translation of research into practice. The strategy even addresses ethical AI development, making sure AI is used ethically and securely and that privacy concerns are mitigated. I think it has all the components needed to be successful, and we can all learn a lot from this approach.

The writer is Professor of Applied Artificial Intelligence and Associate Dean for Business Engagement, Civic & Cultural Partnerships (Humanities) at Alliance Manchester Business School.