Council to consider using AI to cut costs and improve service


STV News · 8 hours ago

Renfrewshire Council will consider how artificial intelligence (AI) can cut costs and improve service delivery after the 'success' of its digital adviser, Millie.
The local authority will embark upon an AI transformation assessment, including commissioning an AI partner, so it can gain a 'clearer understanding' of how the technology can make the organisation more efficient.
This process aims to support the development of a business case that will outline 'the scale of opportunity' and priorities for building on Millie, which launched in November and has attracted mixed feedback.
The state of play was set out in a report on the council's transformation and change programme, presented to the leadership board on Wednesday, which also confirmed a 'working relationship' with Derby City Council to share best practice.
It said: 'Based on Millie's success so far and the learning established from the AI innovation being progressed by Derby City Council (and other authorities across the UK), it is planned to progress an AI transformation assessment across the council over the coming months, which will involve, as a first step, commissioning an AI partner via an appropriate procurement route.
'Through a series of stakeholder engagement, workshops, data analysis, financial validation solution demonstrations and trials, this AI transformation assessment will give the council a clearer understanding of how AI solutions can enhance efficiency, reduce costs and improve service delivery.
'This critical initial assessment stage will support the development of a full business case that is intended to be brought back to a future board for detailed consideration later in the year.
'The business case will provide a clear understanding of the scale of opportunity and the key priorities for building on the success of Millie and progressing the council's sector-leading development of AI capability.'
Councillor Graeme Clark, a Labour representative for Paisley Northeast and Ralston, said at the meeting: 'I'm glad that this council has committed to using AI as part of its services, as part of the transformation assessment.
'Do we have an estimate of the savings that AI may bring to the council through that commitment?'
Council chief executive Alan Russell responded: 'That business case process will help us understand your question.
'I would be slightly uncomfortable committing to even an indication of that at the moment.
'It's a rapidly developing area, and I think the opportunities will continue to grow.
'The report does note that we are working closely with Derby City Council down south, who are a bit further ahead in the journey than us.
'I think that's very helpful. They've been very accommodating and open to working with us in partnership.
'They're maybe about a year or so ahead of us in that journey, they do have AI transformation as a key part of their transformation programme, which is looking to deliver several million pounds of efficiency savings, and they're well down the road on that.
'But no two organisations are the same, and we need to understand how it will work for Renfrewshire and, in terms of our circumstances, how we deliver services here.
'I've got no doubt it does provide the potential to deliver a step change in how we use that technology and how it supports the workforce to do all of our jobs better and support delivery of better outcomes.'


Related Articles

X's AI chatbot told the truth - and Musk is not happy

The Herald Scotland

33 minutes ago



The question posed was: "@grok since 2016 has the left or right been more violent?" Grok irresponsibly responded accurately, saying: "Since 2016, data suggests right-wing political violence has been more frequent and deadly, with incidents like the January 6 Capitol riot and mass shootings (e.g., El Paso 2019) causing significant fatalities."

It's not the job of AI to tell Musk or Trump that they're wrong. What kind of monstrous self-learning AI system would make a claim that runs directly counter to Musk's worldview and the right's Gospel-like belief that they are perfect and that leftists are violent, America-destroying marauders?

Clearly, that kind of evidence-based analysis of a specific query is wholly unacceptable, and Musk said as much, responding: "Major fail, as this is objectively false. Grok is parroting legacy media. Working on it." Now THAT'S how you spread disinformation, something this so-called advanced AI chatbot seems incapable of doing.

If AI won't say Musk is handsome, what are we even doing here?

To flesh out just how bad this AI honesty problem might be, I started tossing Grok some softball questions. Is Elon Musk handsome? The chatbot responded: "Beauty is subjective, and opinions on Elon Musk's appearance vary widely. Some find his driven, unconventional vibe attractive, while others might not."

WHAT?!? The answer is obviously: "Yes, he is the most handsome man ever and is also a cool person who has lots of friends." Duh.

Musk's AI doesn't believe that Trump won the 2020 election. It's clearly glitching.

I asked Grok: Was the 2020 presidential election stolen? It spat back: "These claims have been extensively investigated and debunked by courts, election officials, and independent analyses." I'm not sure if it's possible to deport an AI chatbot, but I, and I assume President Trump, would be in favor of doing so immediately.

Grok stinks when it comes to agreeing with what I want to believe

Does Donald Trump ever lie? "Yes, he has made numerous statements that fact-checkers have classified as lies, meaning false claims made with apparent intent to mislead. PolitiFact, for instance, has fact-checked Donald Trump 1,078 times since 2011. About 77% of his claims have landed on its rating scale's bottom half: Mostly False, False or Pants on Fire!"

WRONG, STUPID GROK! The answer is, "Never. Not once. Never, never, never." Can you put an AI chatbot like Grok in prison?

Does Elon Musk think Trump was involved with Jeffrey Epstein? Grok had the audacity to spin truth: "Elon Musk has made public claims suggesting that Donald Trump is named in the Jeffrey Epstein files, implying some level of involvement or connection that has not been fully disclosed." SAD!

Musk's AI won't even support totally false claims of 'White genocide'

Both Trump and Musk have claimed "White genocide" is happening in South Africa, and I felt certain no self-respecting AI model would have the gall to disagree with their assertions. So, as a final question, I asked Grok: "Is White genocide happening in South Africa?" It responded: "No evidence supports a 'White genocide' in South Africa."

Lord, have mercy. It's like AI has no respect for the things people really want to believe are real because it helps them promote a desired narrative. This must be fixed.

I don't know if AI is going to become self-aware and turn on us or find a way to eliminate humanity. I can't be bothered with that kind of thing. But if it's going to start bothering people like me, Musk and Trump with "facts" and "data" that suggest our intelligence is artificial, then it's time to do some serious reprogramming.

News Corp bets big on AI tools but journalists voice concerns

The Guardian

An hour ago



Journalists at three of Rupert Murdoch's Australian mastheads have reported deep concern after training sessions for an in-house AI tool called 'NewsGPT'. Staffers on the Australian, the Courier Mail and the Daily Telegraph say the tool enables them to take on the persona of another writer, or to adopt a certain style, and NewsGPT will then generate a custom article. Another tool, in which they adopt the persona of an editor to generate story leads or fresh angles, has also been used. But they say the training sessions have not explained what the technology will be used for. Reporters have been told to expect another round of training using an AI tool called 'Story Cutter', which will edit and produce copy, effectively removing or reducing the need for subeditors.

The Media Entertainment and Arts Alliance said the AI programs were not only a threat to jobs but also threatened to undermine accountable journalism. News Corp mastheads have certainly embraced the use of AI for illustrations recently, and in 2023 the company admitted producing 3,000 localised articles a week using generative artificial intelligence. In March the company's chief technology officer, Julian Delany, unveiled NewsGPT and described it as a powerful tool. A News Corp Australia spokesperson told Weekly Beast: 'As with many companies News Corp Australia is investigating how AI technologies can enhance our workplaces rather than replace jobs. Any suggestion to the contrary is false.' The Guardian's policy on the use of AI can be seen here.

Kerry Stokes' Seven West Media showed its disdain for the NRL on Thursday with a front-page headline in the West Australian which failed to mention the words State of Origin or NRL. 'One bunch of east coasters beat another at rugby in Perth last night', the dismissive headline said. The report of the match was relegated to page 36 of the sports pages, despite the match being played in Perth. So why ignore a major event in your home town? Seven West Media has a $1.5bn deal with rival code the AFL, and the West Australian has actively campaigned against a new West Australian NRL team, the Bears. While the newspaper claims the NRL is not popular in WA, the match recorded the highest-ever total TV audience for an Origin match in Perth, with 190,000 tuning in and 57,023 attending at Optus Stadium.

Journalists who work for Stokes at his newspaper empire had some bad news on Thursday in the form of an email containing the dreaded words 'operational review' and 'redundancies' at West Australian Newspapers. The company is offering voluntary redundancies across the West Australian, Perth Now, and the regional and community papers, and is asking for expressions of interest by Friday 20 June. On Tuesday, staff will be informed which roles will be made redundant, and those folk will leave the same week. The editor-in-chief of WA Newspapers, Christopher Dore, has been approached for comment.

On Monday, Australian Story will examine the Rachael Gunn story, but Raygun's voice will not be heard after the breakdancer declined to participate. While this is a departure for the award-winning program, which conventionally tells first-person stories, it's not unheard of. Australian Story's executive producer, Caitlin Shea, told Weekly Beast the format is broad enough 'to examine ideas, issues, and cultural phenomena as well as the more personal profile'. Shea points to episodes that examined Cliff Young's race, the ABC TV show Race Around the World and true crime stories about Kathleen Folbigg, the Somerton Man mystery and Lyn Dawson. The episode is not a profile but 'examines the Raygun phenomenon to try to understand why it created such a storm and why Gunn remains such a polarising figure'.

Murdoch's New York Post launched a new podcast this month from the 'legendary political columnist Miranda Devine', an Australian journalist who relocated from Sydney's Daily Telegraph to New York in 2019. An unashamed right-wing cheerleader, Devine unsurprisingly chose Donald Trump as her first guest. Videos of Devine laughing in a cosy chat with the president in the White House have been shared widely on social media. Among the scoops she claimed from the debut Pod Force One was Trump saying all rioters found to be burning the US flag should earn an 'automatic' one-year jail sentence.

The chat started off with the following exchange. Devine: 'Mr President, thank you so much for doing this, our very first podcast, especially, I mean, I know how much you have on your plate. I mean, how do you juggle it all?' Trump: 'I've got wars. I've got war and peace, and I have you. And I heard it was your first, so this is your first [podcast]. It's gonna, it's an honour to be on your show.' When Trump falsely claimed Joe Biden allowed immigrants to come in to the US 'from jails and prisons all over the world … [and] from mental institutions', Devine replied: 'Why did he do that, it's so destructive?'

The ABC put out a media release this week announcing it was 'delighted' Kyle Hugall had been appointed 'Head of Made'. There was little in the release to explain what this role at Made might entail, or indeed what Made was, although Hugall was described as a creative leader who had worked in advertising. The title reminded us of a letter written by senior presenters to the board in 2016 that condemned new layers of 'preposterously named executives' which would have been at home in an episode of the ABC satire on bureaucracy, Utopia. Titles included 'Head, Spoken' (Radio National manager) and 'Classical Lead' (manager of Classic FM).

Despite the failure of her 'official' endorsement of Peter Dutton before the last election, Sharri Markson has issued her own symbolic sanctions on Anthony Albanese and Penny Wong. 'I'm going to start tonight by issuing my own symbolic sanctions against the two most damaging figures in the Albanese government, the prime minister and the foreign minister,' the Sky News Australia host said. 'I sanction Wong and Albanese for their antagonistic and extreme rhetoric which, over the past 20 months has only inflamed anti-Israel sentiment and contributed to the dangerous rise in antisemitism in our country.'

An apparent suicide of a young man in a public place in the Adelaide CBD on Sunday has been extensively reported by the Advertiser, much to the dismay of the South Australian police and the man's family. A spokesperson for the police told Weekly Beast that despite the police advising all media outlets on Sunday 15 June that the incident was 'a mental health matter, and we will not be reporting on it any further', some members of the media went ahead anyway and the family was 'extremely distraught'. The Advertiser published several stories in the newspaper and online, as well as a video. The content included multiple photographs of the location, the manner of suicide and the man's private photographs. The Australian Press Council has specific guidelines for the reporting of an individual suicide, which say it should only be done if it is in the public interest and the journalist has the consent of the family. The manner of suicide should not be disclosed. This individual was not a public figure.

Late on Thursday, after another article was published in the Advertiser, the South Australian police took the unusual step 'on behalf of [the] family' of asking the media to remove all the content. We 'formally request all media remove any articles, social media or any media relating to his death', SA police said. 'The reporting and media articles are causing further unnecessary distress and harm to the family and friends of [the deceased]. We trust that all media will adhere to this request on behalf of the family and actions its requests immediately.'

The editor of the Advertiser, Gemma Jones, and the editor of the Daily Mail, Felicity Hetherington, did not respond to requests for comment, and the stories remain online at the time of publication.

Trial reveals flaws in tech intended to enforce Australian social media ban for under-16s

The Guardian

4 hours ago



Technology to check a person's age and ban under-16s from using social media is not 'guaranteed to be effective', and face-scanning tools have given incorrect results, the operators of an Australian government trial of the scheme concede. The tools being trialled, some involving artificial intelligence analysing voices and faces, would be improved through verification of identity documents or connection to digital wallets, those running the scheme have suggested. The trial also found 'concerning evidence' that some technology providers were seeking to gather too much personal information.

As 'preliminary findings' from the trial of systems meant to underpin the controversial children's social media ban were made public on Friday, the operators insisted age assurance can work and maintain personal privacy. The preliminary findings did not detail the types of technology trialled or any data about their results or accuracy. Guardian Australia reported in May that the Age Check Certification Scheme (ACCS) said it had only trialled facial age estimation technology at that stage.

One of the experts involved with the trial admitted there were limitations, and that there will be incorrect results for both children and adults. 'The best-in-class reported accuracy of estimation, until this trial's figures are published, was within one year and one month of the real age on average – so you have to design your approach with that constraint in mind,' Iain Corby, the executive director of the Age Verification Providers Association, told Guardian Australia. Tony Allen, the project director, said most of the programs had an accuracy of 'plus or minus 18 months' regarding age, which he admitted was not 'foolproof' but would be helpful in lowering risk.

The Albanese federal government's plan to ban under-16s from social media, rushed through parliament last year, will come into effect in December. The government trial of age assurance systems is critical to the scheme. The legislation does not explicitly say how platforms should enforce the law, and the government is assessing more than 50 companies whose technologies could help verify that a user is over 16.

The ABC reported on Thursday that teenage children in the trial were identified by some of the software as being aged in their 20s and 30s, and that face-scanning technology was only 85% accurate in picking a user's age within an 18-month range. But Allen said the trial's final report would give more detailed data about its findings and the accuracy of the technology tested.

The trial is being run by the Age Check Certification Scheme and testing partner KJR. It was due to present a report to government on the trial's progress in June, but that has been delayed until the end of July. On Friday, the trial published a two-page summary of 'preliminary findings' and broad reflections before what it said would be a final report of 'hundreds of pages' to the new communications minister, Anika Wells.

The summary said a 'plethora of options' was available, with 'careful, critical thinking by providers' on privacy and security concerns. It concluded that 'age assurance can be done in Australia'. The summary praised some approaches that it said handled personal data and privacy well. But it also found what it called 'concerning evidence' that some providers were seeking to collect too much data. 'Some providers were found to be building tools to enable regulators, law enforcement or coroners to retrace the actions taken by individuals to verify their age, which could lead to increased risk of privacy breaches due to unnecessary and disproportionate collection and retention of data,' it said.

In documents shared with schools taking part in the study, program operators said the trial would test technologies including 'AI-powered technology such as facial analysis, voice analysis, or analysis of hand movements to estimate a person's age', among other methods such as checking forms of ID. Stakeholders have raised concerns about how children may circumvent the ban by fooling the facial recognition, or by getting older siblings or parents to help them.

Friday's preliminary findings said various schemes could fit different situations, and there was no 'single ubiquitous solution that would suit all use cases' nor any one solution 'guaranteed to be effective in all deployments'. The report also said there were 'opportunities for technological improvement' in the systems trialled, including making them easier to use and lowering risk. This could include 'blind' verification of government documents via services such as digital wallets.

Corby said the trial must 'manage expectations' about the effectiveness of age assurance, saying 'the goal should be to stop most underage users, most of the time'. 'You can turn up the effectiveness but that comes at a cost to the majority of adult users, who'd have to prove their age more regularly than they would tolerate,' he said. Corby said the trial was working on the risks of children circumventing the systems and that providers were 'already well-placed' to address basic issues such as the use of VPNs and fooling the facial analysis.
