
One in Four Organizations Fall Victim to AI Data Poisoning, Exposing Them to Risks of Sabotage and Fraud, According to Research From IO

  • Research also reveals that Shadow AI, the unsanctioned use of AI tools, is fueling data security risks and increasing human fallibility, with more than a third of organizations (37%) admitting that employees use AI tools without permission

  • Rushed AI adoption is also leaving businesses exposed, with 54% admitting they moved too fast and now struggle to scale back or secure responsibly

NEW YORK, NY / ACCESS Newswire / September 17, 2025 / IO (formerly ISMS.online), the global platform for scaling information security and privacy compliance with confidence, today warns that US businesses are already under threat from weaponized artificial intelligence. More than one in four surveyed organizations in the UK and US (26%) have fallen victim to AI data poisoning in the past year, in which hackers corrupt the data that trains AI systems, planting hidden backdoors, sabotaging performance, or manipulating outcomes to their advantage. The consequences are far-reaching: poisoned models can quietly undermine fraud detection, weaken cyber defenses, and open the door to large-scale attacks, putting both businesses and the public at risk.

The IO State of Information Security Report, based on a survey of 3,001 cybersecurity and information security managers in the UK and US, also found that 20% of organizations reported experiencing deepfake or cloning incidents in the last 12 months. In line with this, 28% of respondents highlight deepfake impersonation in virtual meetings as a growing threat for the next 12 months, showing how AI is increasingly being weaponized to target people directly and undermine trust in everyday business interactions.

Beyond deepfakes, AI-generated misinformation and disinformation tops the list of emerging threats for the next 12 months, cited by 42% of security professionals concerned about scams and reputational harm. Generative AI-driven phishing (38%) and shadow AI misuse are also on the rise, with more than a third (37%) of respondents reporting that employees use generative AI tools without permission or guidance, creating risks of data leaks, compliance breaches, and reputational damage.

Shadow IT in general - downloading or accessing unapproved software or services - is already an issue for 40% of organizations, and generative AI is exacerbating the problem, especially when it is used without human oversight. Among respondents currently facing information security challenges, 40% cited tasks being completed by AI without human compliance checks as a key concern. If businesses are too slow to address this problem, employees may well continue to find insecure workarounds and shortcuts, putting sensitive data at risk.

Chris Newton-Smith, CEO of IO, said: "AI has always been a double-edged sword. While it offers enormous promise, the risks are evolving just as fast as the technology itself. Too many organizations rushed in and are now paying the price. Data poisoning attacks, for example, don't just undermine technical systems, but they threaten the integrity of the services we rely on. Add shadow AI to the mix, and it's clear we need stronger governance to protect both businesses and the public."

AI adoption has surged, and more than half of organizations (54%) admit they deployed the technology too quickly and are now struggling to scale it back or implement it more responsibly. In line with this, 39% of all respondents cited securing AI and machine learning technologies as a top challenge they are currently facing, up sharply from 9% last year. Meanwhile, 52% state that AI and machine learning are hindering their security efforts.

Although the statistics show that AI may not yet be on the side of the defender, encouragingly, 79% of UK and US organizations are using AI, machine learning, or blockchain for security, up from just 27% in 2024. In the year ahead, 96% plan to invest in GenAI-powered threat detection and defense, 94% will roll out deepfake detection and validation tools, and 95% are committing to AI governance and policy enforcement.

Newton-Smith added: "The UK's National Cyber Security Centre has already warned that AI will almost certainly make cyberattacks more effective over the next two years, and our research shows businesses need to act now. Many are already strengthening resilience, and by adopting frameworks like ISO 42001, organizations can innovate responsibly, protect customers, recover faster, and clearly communicate their defenses if an attack occurs."

-Ends-

About IO
At IO, we believe compliance should fuel progress, not hold it back.

That's why we've built a modern compliance platform designed to help organizations simplify, strengthen, and scale their information security, privacy, risk and AI governance. Supporting over 100 global standards, including ISO 27001, ISO 27701, ISO 42001, GDPR, and CCPA, IO gives teams everything they need to stay secure, aligned, and audit-ready in one place.

Our approach is built around people, process, and platform, because lasting compliance isn't achieved through automation alone. With structured workflows, guided support, and smart integrations that fit how your business already works, IO makes it easier to embed compliance into everyday operations.

From first-time certifications to mature multi-framework global programs, IO helps reduce duplicated work, surface the right insights, and build confidence across your organization. It's compliance that fits and scales with you.

Trusted by thousands of businesses worldwide, IO is here to turn compliance from a box-ticking chore into a strategic advantage.

Research methodology
The research was conducted by Censuswide among a sample of 3,001 cybersecurity and information security managers and above (aged 18+) in the UK and US. The data was collected between July 23, 2025 and August 7, 2025. A separate study was conducted among a sample of 1,020 respondents who work in information security across the UK and US between March 22, 2024 and April 2, 2024. Censuswide abides by and employs members of the Market Research Society and follows the MRS code of conduct and ESOMAR principles. Censuswide is also a member of the British Polling Council.

CONTACT
Sarah Hawley
sarahhawley@origincomms.com
+1 480.292.4640

SOURCE: IO


