
Latest news with #CambridgeAnalytica

Inside AI Assisted Software Development and why tools are not enough (Part 1): By John Adam

Finextra

10 hours ago



The recent squeeze on funding and margins is by no means only being felt in the financial services and fintech sectors. But it is fair to say the pinch is particularly hard there, and the need to innovate quickly and effectively is simultaneously more pressing than ever. The good news is that new AI tools can speed up delivery and improve the quality of software projects without adding to headcount. But even if that general statement is true, simply using tools is not enough, especially in a regulated industry like financial services.

If there is no pre-approved list of tools, and of how and where they are applied in the SDLC (software development lifecycle), organisations face governance, observability, measurability and consistency issues. If 'real' gains are not measured by benchmarking against 'before', do they really exist? Like the tree falling in a forest, certainly not in a way that can be scaled across or up an organisation. There is no clear business case, just intuition. Are the tools, and where and how they are being used, compliant with organisational policy and regulatory frameworks? Has anyone read the privacy policies? I'm personally convinced that a big AI company having its Facebook/Cambridge Analytica moment is a case of 'when, not if'. And when the first big AI privacy scandal does break, you don't want your organisation published in a list in a newspaper.

To benefit from and scale the gains of an AI-assisted SDLC, organisations need a framework for structured, consistent integration plus governance, observability and measurability. Tools alone are not enough.

Realistic gains from an AI-assisted SDLC

It's important to note that, at the time of writing, we are in a period of rapid change in AI tooling. A good framework operates a level or two higher than specific tools and allows them to be interchangeable as upgrades arrive. The market most of us operate in is at a point in its cycle where resources are at a premium.
Most of the organisations I work with are expected to deliver more with less compared with pre-2023. In that context, banking the productivity gains achievable with AI tooling is non-negotiable: organisations are demanding greater, better output despite fewer resources. Getting it right is also non-negotiable, and that means marrying increased productivity with measurability, observability and governance, which I cover in depth in Part 2 of this article. As an introduction to building a proper framework, I'll start by explaining the realistic improvements AI can provide at each stage of the SDLC.

Product prototyping

Developers use prototypes to test idea viability and functionality, and to gather user and investor feedback. Historically, the average prototype required 2 to 6 weeks of teamwork to complete. But by amplifying developers' work with low-code/no-code prototyping, AI-generated code and other AI tools, a clickable prototype can now be completed in days or even hours.

UX/UI design

UX (user experience) and UI (user interface) designers collaborate closely with developers to design website and app interfaces. Using AI tools that can quickly generate multiple design mock-ups and UI components based on foundational style guides and example concepts, designers can visualise ideas and user flows in various contexts to improve design clarity and direction long before designs touch a developer's desktop. Clarity improves the quality of initial designs and reduces designer-developer back-and-forth, meaning larger projects that once took 4 to 6 months now require far less effort and time.

Even UXR (user experience research) is accelerated and refined. User interviews are, by necessity, long and complex, and result in large qualitative datasets.
AI tools can highlight patterns and repetition in datasets and transcripts in seconds, shining a spotlight on insights, false positives or even biased questions that human researchers may have overlooked.

Architecture

Software architects plan higher-level design, bridging technical and business requirements. Their diagrams include the sum of a product's components and their respective interactions; until recently, the initial design phase alone took 1 to 2 weeks. Using AI, architects can quickly draw up diagrams to easily visualise these relationships and standardise dependency versions across services. AI can also be trained to use PR comments to report architectural violations, and libraries can be unified to encourage stability across features. Better consistency and immediate feedback mean architects can work faster and create fewer iterations of a product before diagrams meet stakeholder expectations.

Coding

AI-powered tools for coding have a variety of use cases. My team uses a mix of tools and GenAI to:

  • ensure comprehensive project documentation;
  • automate code documentation and README generation;
  • scan for duplicate code and suggest improvements;
  • improve understanding of complex, inconsistent or unfamiliar codebases;
  • unify code styles and standards across different microservices; and
  • perform code completion and check for bugs and inconsistencies based on defined standards.

Paired with manual oversight to catch any mistakes, we've accelerated writing and testing code by a minimum of 20% across projects. GenAI makes complex codebases easily understandable, meaning team members can flexibly move to work on unfamiliar projects, and time spent on internal comms has fallen by about 25%. One tool we use is SonarQube, which reviews code without executing it.
It runs automatically in the GitLab CI/CD (continuous integration/continuous delivery and deployment) pipeline to find bugs, report security vulnerabilities, and enforce code standards, unifying style and mitigating potential misunderstandings down the line through better code readability.

Testing and QA (Quality Assurance)

As they write code, developers write and run unit tests to detect initial bugs and security issues, which eats up between 10% and 20% of their time. The SDLC is slowed further by code reviews and PRs, i.e. feedback from experienced colleagues. Tests are postponed by days, sometimes weeks, if multiple code reviews are required and dependent on busy colleagues. GenAI can augment developers' efforts by writing unit tests, conducting code reviews and PRs in real time, and automatically generating and solving for edge cases, overcoming bottlenecks like a lack of expertise or teammates' availability. AI-augmented QA can reduce redundancy, unify access to code, and consolidate fragmented knowledge across a project to make a QA team more efficient. And AI-driven tools like Selenium, for example, can automate web app test writing and execution, accelerating product releases and improving product reliability. Automated testing is especially compelling for projects with tight deadlines and few resources. For example, my team's AI toolkit for QA testing includes the Llama 3.3 LLM to generate test cases and analyse code and Excel-based legacy documents, IntelliJ AI Assistant to automatically standardise test case formatting, and GitLab to run test scripts automatically in the CI/CD pipeline. QA is one of the most impactful applications of AI tools in the SDLC and can commonly slash the resources required by up to 60%, while increasing test coverage.
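To make the QA discussion concrete, here is a minimal, hypothetical sketch of the kind of edge-case unit tests a GenAI assistant typically drafts for a simple function. Both the function (a toy IBAN normaliser) and the generated test cases are illustrative assumptions, not taken from any team's actual toolchain:

```python
# Hypothetical example: the style of edge-case tests a GenAI
# assistant might draft. normalise_iban is a toy function, not
# part of any real QA toolkit.

def normalise_iban(raw: str) -> str:
    """Strip all whitespace and uppercase a user-entered IBAN."""
    if not isinstance(raw, str):
        raise TypeError("IBAN must be a string")
    return "".join(raw.split()).upper()

# AI-suggested edge cases: happy path, lowercase input, internal
# spaces, surrounding whitespace, and the empty string.
GENERATED_CASES = [
    ("GB82WEST12345698765432", "GB82WEST12345698765432"),
    ("gb82 west 1234 5698 7654 32", "GB82WEST12345698765432"),
    ("  GB82WEST12345698765432  ", "GB82WEST12345698765432"),
    ("", ""),
]

def run_generated_tests() -> int:
    """Run every generated case; return the number that passed."""
    for raw, expected in GENERATED_CASES:
        assert normalise_iban(raw) == expected, (raw, expected)
    return len(GENERATED_CASES)
```

In practice such generated suites still need the manual oversight described above: an LLM can propose plausible edge cases quickly, but a developer must confirm the expected outputs are actually correct before the tests become the source of truth.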
Deployment

When a product is deployed to end users, AI can be added to the CI/CD pipeline to forecast usage patterns and improve caching strategies, as well as automatically prioritise and schedule tasks for parallel execution. With AI oversight, the number of repetitive tasks is automatically reduced and resource allocation is anticipated, improving latency and product release cycles without added manual effort. And AI-driven caching accelerates and simplifies rollbacks (reverting a newly deployed system to a more stable version of itself) by analysing previous deployments and predicting the necessary steps, further reducing manual effort by DevOps teams. My team uses Dynatrace during deployment, which monitors and analyses system status and sends self-healing recommendations in real time.

Maintenance and Monitoring

At this stage, teams work to fix bugs, keep the system secure and functioning well, and make improvements based on user feedback, performance data and unmet user needs. AI can automatically perform root cause analysis for error monitoring, and suggest solutions for maintenance and debugging. Tools my team uses include AWS CloudWatch and Azure Monitor with AIOps, which automatically collect, analyse, and suggest responses based on monitoring data, accelerating issue response and system updates by 10x.

The big picture

The acceleration of the individual stages of software development is incentive enough for some teams to add tools and GenAI models to their workflows, especially at stages like QA and coding, where use cases are various and results potent. But by taking a step back and considering AI's impact on the SDLC holistically, the argument in favour of AI implementation can be turned into a real business case – one that can be used to accelerate AI transformation across an organisation. Backed by a strong framework, organisations implementing AI across their SDLC see a 30%+ acceleration across projects in the first 6 months.
The keyword being 'strong'. Organisations need a framework that guides leadership in selecting tools and governing their use, measures outcomes to understand how much value different tools offer, and encourages adoption in teams' workflows. Without it, teams are unable to measurably extract the full potential from new tools and efforts, and risk breaching internal and third-party governance in areas such as data privacy. Keeping my word count and your patience in mind, I have split my deep dive into a framework for AI governance, measurement and adoption into a separate article: 'Inside an AI-assisted software development framework: using tools is not enough (Part 2)'.

Protecting Your Digital Property Rights. Brittany Kaiser on the Alpha Liquid Podcast

Business Upturn

21 hours ago



Miami, FL, June 19, 2025 (GLOBE NEWSWIRE) — The latest episode of the Alpha Liquid Podcast, hosted by Matthew Mousa, features Brittany Kaiser, the renowned whistleblower behind the Cambridge Analytica scandal and a leading advocate for digital rights through blockchain technology. In the episode, 'How Blockchain Protects Your Digital Property Rights', Kaiser (@OwnYourDataNow) takes listeners on a thought-provoking journey into the world of personal data, digital sovereignty, and the urgent need for AI regulation. The episode is now available on all major podcast platforms.

Key discussion topics:

  • The story behind Cambridge Analytica and whistleblowing in the age of big data
  • How blockchain can protect your human rights and digital identity
  • The looming challenges of AI agents and the legislation we need now
  • Deep dives into digital self-custody, data monetization, and Bitcoin's true role in global game theory

This episode is a must-listen for digital rights advocates and anyone navigating the rapidly evolving frontier of blockchain and AI. Listen now on YouTube and Spotify, or wherever you get your podcasts.

About Alpha Liquid Podcast

The Alpha Liquid Podcast delivers high-level insights at the intersection of crypto, markets, and innovation. Hosted by Matthew Mousa, each episode features unfiltered conversations with builders, researchers, and investors shaping the future of digital finance. The podcast is sponsored by the Alpha Liquid Terminal, the first and only all-assets investment platform that merges crypto, RWAs, equities, private markets and derivatives, enhanced by AI analysts and agentic execution, utilizing a secure multi-asset, multi-party compliance vault system. Visit to learn more and join the waitlist for a chance to win ALTx tokens in the upcoming airdrop.
Subscribe at: Alpha Liquid Podcast – YouTube
Follow us on: Alpha Transform Holdings: Overview | LinkedIn
Alpha Liquid Terminal (@AlphaSigmaFund) / X
Alpha Liquid Podcast | Podcast on Spotify

Watch – Carole Cadwalladr on how tech giants hijacked truth and power

Daily Maverick

2 days ago



British author and investigative journalist Carole Cadwalladr joins Redi Tlhabi to expose how powerful tech platforms manipulate our data, fuel disinformation, and endanger democracy. From Cambridge Analytica to AI and surveillance capitalism, this urgent conversation asks: Are we sleepwalking into a future where truth itself is algorithmically controlled? And what can we still do to fight back? Support journalism that protects democracy. Become a Maverick Insider. Subscribe to the Daily Maverick YouTube channel: @dailymaverickchannel. DM

Trump tariffs derailed by law firm that received money from his richest backers

Yahoo

01-06-2025



Donald Trump's tariff policy was derailed by a libertarian public interest law firm that has received money from some of his richest backers. The Liberty Justice Center filed a lawsuit against the US president's 'reciprocal' tariffs on behalf of five small businesses, which it said were harmed by the policy. The center, based in Austin, Texas, describes itself as a libertarian non-profit litigation firm 'that seeks to protect economic liberty, private property rights, free speech, and other fundamental rights'.

Previous backers of the firm include the billionaires Robert Mercer and Richard Uihlein, who were also financial backers of Trump's presidential campaigns. Mercer, a hedge fund manager, was a key backer of Breitbart News and Cambridge Analytica, pouring millions into both companies. He personally directed Cambridge Analytica to focus on the Leave campaign during the UK's 2016 Brexit referendum, which led to the UK leaving the European Union.

For its lawsuit against Trump's tariffs, the Liberty Justice Center gathered five small businesses, including a wine company and a fishing gear and apparel retailer, and argued that Trump overreached his executive authority and needed Congress's approval to pass such broad tariffs. The other group that sued the Trump administration over its tariffs was a coalition of 12 Democratic state attorneys general, who argued that Trump improperly used a trade law, the International Emergency Economic Powers Act (IEEPA), when enacting his tariffs.

In such a polarized time in US history, it may feel odd to see a decision celebrated by both liberals and conservatives. But Trump's tariffs have proven controversial with members of both parties, particularly after Wall Street seemed to be put on edge by the president's trade war. The US stock market dipped at least 5% after Trump announced the harshest of his tariff policies.
Recovery was quick after Trump paused many of his harshest tariffs until the end of the summer, and stocks started to rally on Thursday morning after the panel's ruling. The judges said that the law Trump cited when enacting his tariffs, the IEEPA, does not 'delegate an unbounded tariff authority onto the president'. The decision is on a temporary hold after the Trump administration appealed.

While the ruling does not affect specific tariffs on industries such as aluminum and steel, it prevents the White House from carrying out broad retaliatory tariffs and its 10% baseline 'reciprocal' tariff. The White House is appealing the ruling, which means the case could go up to the US supreme court, should the high court decide to take it on.

Members of both groups that sued the Trump administration celebrated the ruling. Jeffrey Schwab, senior counsel for the Liberty Justice Center, said in a statement that it 'affirms that the president must act within the bounds of the law, and it protects American businesses and consumers from the destabilizing effects of volatile, unilaterally imposed tariffs'. Oregon's Democratic attorney general, Dan Rayfield, who helped bring the states' lawsuit, said that it 'reaffirms that our laws matter'. In a statement, Victor Schwartz, the founder of VOS Selections, a wine company represented by the Liberty Justice Center in the suit, said that the ruling is a 'win' for his business. 'This is a win for my small business along with small businesses across America – and the world for that matter,' he said. 'We are aware of the appeal already filed and we firmly believe in our lawsuit and will see it all the way through the United States Supreme Court.'

On Why Leakers Are Essential To The Public Good

Scoop

30-05-2025



For obvious reasons, people in positions of power tend to treat the leaking of unauthorised information as a very, very bad thing, and – to maintain the appearance of control – they will devote a lot of time and energy to tracking down and punishing those responsible. Just as obviously, the history of the last 100 years has been changed – very much for the better – by the leaking of unauthorised information. The obvious examples include:

  • the Pentagon Papers, which revealed (among other things) the secret US saturation bombing of Cambodia;
  • the 'Deep Throat' leaks of criminal presidential actions during the Watergate scandal, which helped bring down US President Richard Nixon;
  • the leaked Panama Papers documents, which revealed the techniques of systematic tax evasion rife in offshore tax havens;
  • the thousands of secret US diplomatic cables leaked by Chelsea Manning, which revealed the covert methods used by the US to influence the foreign policy decisions taken in dozens of countries;
  • the NSA leaks by Edward Snowden, which exposed a number of US and British clandestine and illegal spy operations; and
  • the Cambridge Analytica misuse-of-personal-data scandal, which came to light via leaks by former CA employee Christopher Wylie to journalist Carole Cadwalladr at the Observer.

Closer to home, one need only mention the public good served by the numerous investigations conducted by journalist Nicky Hager. Hager's work has regularly put to good use any number of tip-offs and shared insights from a large number of highly motivated leakers, whistle-blowers and informers who had inside knowledge of matters affecting the public, but without the public's knowledge or approval.
Even the anodyne Operation Burnham inquiry ended up vindicating the Hit & Run book written by Hager and co-author Jon Stephenson. Point being, journalism would not be able to function without a thriving ecosystem of leaking and whistle-blowing, informants and tip-offs. This unofficial and unauthorised sharing of information provides a vital counterbalance to the media's dependence, otherwise, on official sources and PR machines.

Why does it seem necessary to revisit the ancient and honourable history of leaking? Unfortunately, we seem to be in the throes of another witch hunt – led by Public Service Commissioner Sir Brian Roche – to find and punish the public servants responsible for recent leaks of confidential information to the media. One can't be entirely sure of the science, but it seems likely that the leaks of unauthorised information are a direct and proportionate response to the bulldozing of the democratic process by the coalition government. When urgency is being taken to crush pay equity and to ram through regulatory reform that has serious constitutional implications, then it seems inevitable that people with access to sensitive information will do all they can to alert the public, and to block the path of the bulldozer.

Does leaking undermine the public's faith in institutions and the political process? Hardly. Currently, David Seymour and his coalition cronies are doing a pretty good job of that, all by themselves. Does it help to make a distinction between 'leaking' and 'whistle-blowing'? Not really. Call it whistle-blowing and the revelations gain a sense of virtue, in that the information can be argued to be something that the public needs to know, but has no legitimate means of finding out.

This balance between unauthorised revelations and the public good surfaced again just before Budget Day, when – on the grounds of commercial sensitivity – the courts blocked RNZ's publication of a leaked document about education policy.
The court action was controversial, and with good reason. Whenever public money is involved, surely secrecy driven by 'commercial sensitivity' should be the very rare exception and not (as tends to be the case) the default position. Moreover, the government can hardly cry foul. Routinely, successive governments have drip-fed policy revelations to the media before Budget Day in order to achieve the maximum amount of political coverage. Sauce for the goose, etc.

Subsequently, a Public Services Commission memorandum warning of an imminent crackdown on public servants found to be leaking information was itself leaked to the media, by persons unknown. While widely condemned, some of those recent leaks have had a silver lining. The revelation, for example, that the Police would no longer investigate shoplifting offences involving amounts below $500 aroused the fury of some retailers, and quickly led to a Police backdown. In that case, the leaking of Police information led directly to a better policy outcome. More of that, please.

Spot The Difference

One supposed difference between leakers and whistle-blowers is that whistle-blowers are supposed to first raise their concerns with their bosses – such that public disclosure then becomes the last resort, rather than the first step. Hmm. In the real world, telling your superiors that you have deep moral misgivings about a policy they are managing is likely to be a career-damaging step, if not a direct path to dismissal. Contractors who want their contracts renewed would be well advised to keep their mouths shut, and/or to leak information in ways that cover their tracks. For obvious reasons, there seems to be no political appetite for strengthening the protections available to whistle-blowers. Even the Public Service Association has been careful to condemn leaking under any circumstances.
PSA national secretary Fleur Fitzsimons reminded public servants that they are obliged to carry out the policies of the government of the day, even if they personally disagree with them. Really? Being chided by your union to play by the rules is, IMO, symptomatic of a wider problem, which has to do with the erosion of public service neutrality and the related tradition of public servants offering frank and informed advice. No doubt, the ongoing politicisation of the public service is more serious under some Ministers than others. Point being, though: leaking is a symptom of the subversion of public service autonomy, and cracking down on it is likely to cloud our understanding of its causes.

Basically, by limiting the motivation to one of personal objections held by individual public servants, the PSA did not address the more complex cases where a public servant – by helping to enact policies likely to result in harm – may feel morally compelled to disclose the relevant information. In which case, as mentioned, the whistle-blowing procedures offer them little in the way of practical self-protection. Surely, transparency in government should not require martyrs.

The rest seems pretty obvious. Yes, media outlets do need to agree among themselves on a common response to any significant government crackdown. After all, media outlets enjoy 'news break' benefits from the information leaked to them. For that reason alone, there is an obligation to protect sources by withholding any identifying information, however it has been obtained and whatever threats get levelled at the outlets that publish leaked information. Other countries have gone further down that road. Yet the risk is that, in the name of finding and punishing leakers, the ability of the Fourth Estate to carry out its watchdog role will be compromised. If so, public servants and journalists would not be the only casualties of any crackdown conducted by the government.
Henry Thomas, ace whistle-blower

Here we have a bulldozer and a whistle-blower, both at once. The cane reeds (aka 'quills') that ancient bluesman Henry Thomas blew into – on classic tracks like 'Fishin' Blues' and 'Going Up The Country' – belong to an Afro-American tradition dating back to the pre-Civil War era. Here's Henry Thomas doing 'Bull-Doze Blues', a track that later became a hit for 1970s blues revivalists Canned Heat, quills and all.
