Latest news with #AIInstitute


Irish Examiner
23-05-2025
- Business
Workplace Wellbeing: Embracing AI's work-enhancing capabilities to help us work smarter
There's a new sense of anxiety in the workplace. It's called FOBO, the fear of becoming obsolete: the worry that artificial intelligence (AI) and new technologies will eventually make us all redundant. A 2024 survey of 14,000 workers in 14 countries found that half believed their skills would no longer be required in five years. Another study last year reported that 46% of employees in the US feared machines would perform their jobs within the next five years, with another 29% expecting to be superseded even sooner. In Ireland, Government research revealed that approximately 30% of employees worked in occupations at risk of being replaced by technology.
Historically, such concerns might have been limited to factory workers, but the research shows that modern-day FOBO affects almost everyone, including those working in finance, insurance, information technology, and communications.
'Almost all businesses, from the smallest start-ups to the largest organisations, are using AI-driven technology now,' says Maryrose Lyons, founder of the AI Institute, which runs training programmes. 'It's impacting most careers. The main ways are through generating content and ideas, automating repetitive administrative tasks and enhanced data analysis.'
Realising that AI has infiltrated their workplace in these ways unsettles some people, making them question their professional significance. According to career and counselling psychologist Sinéad Brady, it can undermine their sense of identity. 'For many of us, what we do at work plays an important part in how we see ourselves and how we imagine others see us,' she says. 'If we think that a machine or computer programme can do what we do, we can begin to doubt our own value. This doubt can cause huge anxiety.' The ever-escalating pace of change can further exacerbate this anxiety.
'We all have a different capacity for change,' says the work and organisational psychologist Leisha Redmond McGrath. 'Some love it while others prefer stability. But what's true for most of us is that we cope better with change if we feel we have some control over it. It's when we believe there's nothing we can do — that change is a wave coming at us, but we don't know when or how it will hit — that we feel most fearful.'
Face up to FOBO
So, what can we control when it comes to FOBO? Brady suggests facing the fear and reframing how we perceive this new technology. 'We've done it before,' she says. 'Many of us were afraid of computers when they were first introduced to the workplace, but we faced that fear.
'When Word and Excel, for example, took away some aspects of some jobs, they didn't make us obsolete. We learned to use them as tools in our work. We can do the same with AI.'
While we cannot predict how AI will develop or be integrated into the workplace, Lyons argues that it is an accessible tool for most in its current format. 'Just as you learned to master the likes of Excel, Outlook, and other software platforms when you first entered the workforce, you now have to learn AI,' she says. 'The American professor Ethan Mollick, a leading academic who studies the effects of AI on work, estimates that it takes an average of 10 hours of using AI tools before they start to come naturally.'
Brady points out that AI can enhance productivity and performance. 'By removing the need for some tasks, it gives us extra time for more challenging creative work,' she says. 'These days, I use AI to spellcheck and edit documents. When preparing talks, I ask it to present me with a counterargument so that I can address those points in my talk.
'Using AI in these ways makes me quicker and better at my job than someone who isn't using it.'
Brady also encourages us to concentrate on the human skills that AI will never replicate: 'I don't think AI will ever be able to communicate effectively, think creatively, or critically solve problems,' she says. 'A good tactic to counter FOBO would be to lean into those aspects of our work.'
Lyons gives some examples of how this might work in practice. 'If AI frees up six extra hours in your week, use them to engage in critical thinking, researching and coming up with ideas or building relationships with other humans, none of which AI can do,' she says. 'Have more off-site meetings with clients or sit down with an AI tool to brainstorm new ideas.'
Fight or flight
For those who are overcome by FOBO, despite the reassurances, Redmond McGrath looks at the psychological reasons behind it. 'It's terrifying to think you could lose your job and not have money to pay bills,' she says. 'If you identify with your work, it can feel threatening to learn that you might be usurped by technology. There's something called amygdala hijack that can occur when we experience threat in this way.
'A primitive part of our brain is activated, and we go into fight or flight mode, which can make us more sensitive and less rational.'
To prevent such negative reactions to FOBO, she suggests focusing on the 'building blocks' of wellbeing. 'Make sure you get enough rest, sleep, movement, and exercise,' she says. 'Eat well. Spend time on your relationships with others and with yourself. Connecting with nature or something bigger than yourself will give you a sense of perspective. And if you're feeling overwhelmed, talk to someone about it.
It will calm your nervous system and you'll be more likely to figure out more rational and proactive ways of responding to FOBO, especially if you're someone whose sense of identity and purpose has been bound by your work.'
Talking to coworkers means you might also learn what they are doing to adapt to technology. 'Instead of trying to figure out the way forward on your own, which is daunting, or putting your head in the sand, which isn't advisable, finding out what others are doing and how employers and professional bodies are supporting people like you to retrain could help you capitalise on the positive benefits of technology,' says Redmond McGrath.
Don't be afraid to ask younger colleagues for support, too. Having grown up with technology, Redmond McGrath says they are often better able to use it and will likely be happy to share their expertise with you.
Whatever you do, try not to be afraid of technology. 'It's just a tool and it's possible to play with it,' says Brady. 'Ask ChatGPT to do something small and inconsequential for you. That could be the entry point that gets you over your initial fear.'
While noting the many benefits, Brady strikes a note of caution. 'The information it provides you with is based on data that isn't always accurate and that can be biased,' she says. 'AI and all modern technology are only ever as good as the information fed to them, which is why we should always question it for accuracy, assess it for quality, and not rely on it too much.'
Despite AI's limitations, Lyons urges people to overcome their FOBO and explore what it offers. 'There are so many tools that are being used in all sorts of jobs and they are changing how people work for the better,' she says. 'It could be career-ending to ignore these tools.
My advice is to engage and find out how this new technology can help us perform better and gain more satisfaction from our work.'
Yahoo
17-04-2025
- Science
Popular AIs head-to-head: OpenAI beats DeepSeek on sentence-level reasoning
ChatGPT and other AI chatbots based on large language models are known to occasionally make things up, including scientific and legal citations. It turns out that measuring how accurate an AI model's citations are is a good way of assessing the model's reasoning abilities.
An AI model 'reasons' by breaking down a query into steps and working through them in order. Think of how you learned to solve math word problems in school.
Ideally, to generate citations an AI model would understand the key concepts in a document, generate a ranked list of relevant papers to cite, and provide convincing reasoning for how each suggested paper supports the corresponding text. It would highlight specific connections between the text and the cited research, clarifying why each source matters.
The question is, can today's models be trusted to make these connections and provide clear reasoning that justifies their source choices? The answer goes beyond citation accuracy to address how useful and accurate large language models are for any information retrieval purpose.
I'm a computer scientist. My colleagues (researchers from the AI Institute at the University of South Carolina, Ohio State University and the University of Maryland, Baltimore County) and I have developed the Reasons benchmark to test how well large language models can automatically generate research citations and provide understandable reasoning. We used the benchmark to compare the performance of two popular AI reasoning models, DeepSeek's R1 and OpenAI's o1. Though DeepSeek made headlines with its stunning efficiency and cost-effectiveness, the Chinese upstart has a way to go to match OpenAI's reasoning performance.
The accuracy of citations has a lot to do with whether the AI model is reasoning about information at the sentence level rather than the paragraph or document level.
Paragraph-level and document-level citations can be thought of as throwing a large chunk of information into a large language model and asking it to provide many citations. In this process, the large language model overgeneralizes and misinterprets individual sentences. The user ends up with citations that explain the whole paragraph or document, not the relatively fine-grained information in the sentence.
Further, reasoning suffers when you ask the large language model to read through an entire document. These models mostly rely on memorizing patterns, which they are typically better at finding at the beginning and end of longer texts than in the middle. This makes it difficult for them to fully understand all the important information throughout a long document.
Large language models get confused because paragraphs and documents hold a lot of information, which affects citation generation and the reasoning process. Consequently, reasoning from large language models over paragraphs and documents becomes more like summarizing or paraphrasing.
The Reasons benchmark addresses this weakness by examining large language models' citation generation and reasoning. Following the release of DeepSeek R1 in January 2025, we wanted to examine its accuracy in generating citations and its quality of reasoning and compare it with OpenAI's o1 model.
To start our test, we developed a small test bed of about 4,100 research articles around four key topics related to human brains and computer science: neurons and cognition, human-computer interaction, databases and artificial intelligence. We then created a paragraph that had sentences from different sources, gave the models individual sentences from this paragraph, and asked for citations and reasoning.
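The test setup just described can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' actual code: the function name `build_test_items` and the data layout are assumptions, but the idea follows the article: each sentence in a mixed-source paragraph keeps a link to the paper it came from, and the model is then queried one sentence at a time.

```python
def build_test_items(sources):
    """Flatten a mapping of paper id -> sentences drawn from that paper
    into per-sentence test items, each carrying its gold citation."""
    items = []
    for paper_id, sentences in sources.items():
        for sentence in sentences:
            items.append({"sentence": sentence, "gold_citation": paper_id})
    return items

# Two of the benchmark's four topic areas, with made-up example sentences:
sources = {
    "paper_A": ["Neurons encode information through spike timing."],
    "paper_B": ["Eye tracking reveals user attention in interfaces."],
}

items = build_test_items(sources)

# The mixed-source paragraph shown to document-level baselines;
# sentence-level evaluation instead poses each item individually
# and compares the model's returned citation against gold_citation.
paragraph = " ".join(item["sentence"] for item in items)
```

Posing each sentence separately is what lets the benchmark detect the overgeneralization failure described above: a model that only "reads" at paragraph level will tend to return the same blanket citations for every sentence.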
We evaluated the models using two measures: the F-1 score, which measures how accurate the provided citation is, and the hallucination rate, which measures how sound the model's reasoning is; that is, how often it produces an inaccurate or misleading response.
Our testing revealed significant performance differences between OpenAI o1 and DeepSeek R1 across different scientific domains. OpenAI's o1 did well connecting information between different subjects, such as understanding how research on neurons and cognition connects to human-computer interaction and then to concepts in artificial intelligence, while remaining accurate. Its performance metrics consistently outpaced DeepSeek R1's across all evaluation categories, especially in reducing hallucinations and successfully completing assigned tasks.
OpenAI o1 was better at combining ideas semantically, whereas R1 focused on making sure it generated a response for every attribution task, which in turn increased hallucination during reasoning. OpenAI o1 had a hallucination rate of approximately 35% compared with DeepSeek R1's rate of nearly 85% in the attribution-based reasoning task.
In terms of accuracy and linguistic competence, OpenAI o1 scored about 0.65 on the F-1 test, which means it was right about 65% of the time when answering questions. It also scored about 0.70 on the BLEU test, which measures how well a language model writes in natural language. These are pretty good scores. DeepSeek R1 scored lower, with about 0.35 on the F-1 test, meaning it was right about 35% of the time. Its BLEU score was also low, at about 0.2, which means its writing wasn't as natural-sounding as OpenAI's o1. This shows that o1 was better at presenting information in clear, natural language.
On other benchmarks, DeepSeek R1 performs on par with OpenAI o1 on math, coding and scientific reasoning tasks.
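For readers unfamiliar with these two metrics, here is a minimal sketch of how a per-sentence F-1 score and a hallucination rate could be computed. The helper names and the set-overlap formulation are illustrative assumptions; the Reasons benchmark's exact scoring may differ.

```python
def citation_f1(predicted, gold):
    """F-1 for one sentence: the harmonic mean of precision (fraction of
    predicted citations that are correct) and recall (fraction of gold
    citations that were predicted)."""
    predicted, gold = set(predicted), set(gold)
    if not predicted or not gold:
        return 0.0
    true_positives = len(predicted & gold)
    if true_positives == 0:
        return 0.0
    precision = true_positives / len(predicted)
    recall = true_positives / len(gold)
    return 2 * precision * recall / (precision + recall)

def hallucination_rate(judgments):
    """Fraction of responses judged inaccurate or misleading.
    `judgments` is a list of booleans, True meaning hallucinated."""
    return sum(judgments) / len(judgments)

# The model cites papers A and B; the correct citation set is B and C.
# Precision and recall are both 0.5, so the F-1 score is 0.5.
score = citation_f1(["paper_A", "paper_B"], ["paper_B", "paper_C"])

# 7 hallucinated responses out of 20 gives a rate of 0.35, in the
# ballpark of the roughly 35% reported for o1 above.
rate = hallucination_rate([True] * 7 + [False] * 13)
```

The set-overlap view makes the reported trade-off concrete: a model that answers every attribution task regardless of confidence (as R1 is described as doing) inflates its predicted set, which drags precision, and with it F-1, down while pushing the hallucination rate up.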
But the substantial difference on our benchmark suggests that o1 provides more reliable information, while R1 struggles with factual consistency.
Though we included other models in our comprehensive testing, the performance gap between o1 and R1 specifically highlights the current competitive landscape in AI development, with OpenAI's offering maintaining a significant advantage in reasoning and knowledge integration capabilities. These results suggest that OpenAI still has a leg up when it comes to source attribution and reasoning, possibly due to the nature and volume of the data it was trained on.
The company recently announced its deep research tool, which can create reports with citations, ask follow-up questions and provide reasoning for the generated response. The jury is still out on the tool's value for researchers, but the caveat remains for everyone: Double-check all citations an AI gives you.
This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Manas Gaur, University of Maryland, Baltimore County.
Manas Gaur receives funding from USISTEF Endowment Fund.


Arab News
09-02-2025
- Business
Boston Dynamics founder not concerned about robot takeover, warns against overregulation
RIYADH: The idea that robots could take over the world is not a 'serious concern,' said the founder of advanced robotics company Boston Dynamics, as he warned against excessive regulation at a Riyadh technology conference on Sunday.
'There's some fear that robots are going to somehow get out of hand and take over the world and eliminate people. I don't really think that's a serious concern,' Marc Raibert said during the fourth edition of the LEAP summit.
While regulation is necessary, Raibert believes that excessive restrictions could slow progress. He expressed his concern about 'overregulation stopping us from having the benefits of AI and robotics that could develop because robots can solve problems that we face in addition to causing problems.' He added that while regulating mature applications makes sense, limiting the technology too early could hinder its potential.
His comments were made during a fireside chat titled 'The Future of Robotics and AI,' in which he highlighted the role of artificial intelligence-powered robots in elderly care and assistance for people with disabilities. 'We have a couple of teams working on physical designs, but more importantly on the intelligence and perception needed to be able to do those kinds of tasks,' Raibert said.
Beyond industrial use, robotics is expected to play an important role in healthcare, supporting patient care, people with disabilities, and elderly assistance, according to Raibert, who founded the leading robotics company in 1992.
'I think cognitive intelligence, AI, is going to help us make it a lot easier to communicate with the robot, but also for the robot to understand the world, so that they can do things more easily without having everything programmed in detail,' he added.
Raibert also introduced a project at his AI Institute called 'Watch, Understand, Do,' which aims to improve robots' ability to learn tasks by observing human workers.
The initiative focuses on on-the-job training, where a robot can watch a worker perform a task — such as assembling a component in a factory — and gradually replicate it. While this process is intuitive for humans, it remains a technical challenge for robots, requiring advancements in machine perception and task sequencing.
He pointed out that while humanoid robots are gaining attention, true human-like capabilities go beyond having two arms and two legs. He emphasized that intelligence, problem-solving skills, and the ability to interact effectively with the environment will define the next generation of AI-driven robotics.
Raibert discussed the differences between robotics adoption in workplaces and homes, explaining that industrial environments offer a structured setting where robots can operate more efficiently. He noted that robots are likely to become more common in workplaces before being integrated into homes. However, integrating robots into homes presents additional challenges, including safety, cost, and adaptability to unstructured environments. He said while home robots will eventually become more common, their widespread adoption will likely follow the expansion of industrial and commercial robotics.
As part of LEAP, the Saudi Data and Artificial Intelligence Authority is gathering global AI leaders at its DeepFest platform during the fourth edition of the summit. With more than 150 speakers, 120 exhibitors, and an expected attendance of over 50,000 people from around the world, DeepFest showcases a range of cutting-edge AI technology. The event explores emerging technologies, fosters collaboration, exchanges expertise, and builds partnerships, contributing to innovation and strengthening cooperation among experts across diverse industries.