Most Companies Aren’t Ready for AI Security Risks


AI use is growing across all industries, with 78% of companies worldwide using artificial intelligence. Despite companies' quick adoption of AI, recent research from BigID, an AI security and data privacy platform, found that most companies' security measures aren't up to par for the risks AI brings.

In a report published on Wednesday, BigID surveyed 233 compliance, security and data leaders and found that AI adoption is outpacing security readiness, with only 6% of organizations implementing advanced AI security strategies.

The top concerns for companies are AI-powered data leaks, shadow AI and compliance with AI regulations.

69.5% of organizations identify AI-powered data leaks as their primary concern

As the uses of AI expand, so does the potential for cyberattacks. Growing volumes of data, from financial records to customer details, combined with security gaps can make AI systems tempting targets for cybercriminals. The possible consequences of AI-powered data leaks are widespread, from financial loss to breaches of private information, yet according to BigID's report, nearly half of organizations have no AI-specific security controls.

To help prevent data leaks, BigID recommends regularly monitoring AI systems and who has access to them. Systematic checks for unusual activity, along with authentication and access controls, can help keep AI systems running as designed.
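
For teams that want a starting point, here is a minimal sketch of what such monitoring might look like, assuming a hypothetical access log; the allow-list, threshold and log format are illustrative assumptions, not BigID's specific recommendations.

```python
# A minimal sketch of access monitoring for an AI system, assuming a
# hypothetical access log where each entry records a user and timestamp.
# The allow-list, threshold and log format are illustrative only.
from collections import Counter
from datetime import datetime

AUTHORIZED_USERS = {"alice", "bob"}   # hypothetical allow-list
MAX_REQUESTS_PER_USER = 100           # illustrative activity threshold

access_log = [
    {"user": "alice", "time": datetime(2025, 6, 4, 9, 15)},
    {"user": "mallory", "time": datetime(2025, 6, 4, 2, 30)},
]

def flag_unusual_activity(log):
    """Return entries from unauthorized users plus any user over the threshold."""
    flags = [e for e in log if e["user"] not in AUTHORIZED_USERS]
    counts = Counter(e["user"] for e in log)
    heavy_users = {u for u, n in counts.items() if n > MAX_REQUESTS_PER_USER}
    flags += [e for e in log if e["user"] in heavy_users]
    return flags

for entry in flag_unusual_activity(access_log):
    print(f"review: {entry['user']} at {entry['time']}")
```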

For an added layer of security, organizations can consider changing the data itself before it is used in AI. Personal identifiers can be removed or replaced with pseudonyms to keep information private, or synthetic data generation, which creates an artificial data set that mirrors the patterns of the original, can be used to train AI while keeping an organization's real data safe.
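
As one illustration of pseudonymization, the sketch below replaces personal identifiers with keyed hashes before records reach an AI pipeline; the field names and key handling are assumptions made for the example, not a specific product's approach.

```python
# A minimal sketch of pseudonymization: personal identifiers are replaced
# with keyed hashes so records stay linkable without exposing the raw values.
# The PII field list and key storage are illustrative assumptions.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-securely-stored-key"  # assumption: managed secret
PII_FIELDS = {"name", "email"}                       # illustrative PII fields

def pseudonymize(record):
    """Return a copy of the record with PII fields replaced by stable pseudonyms."""
    out = {}
    for field, value in record.items():
        if field in PII_FIELDS:
            digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]  # same input -> same pseudonym
        else:
            out[field] = value
    return out

print(pseudonymize({"name": "Jane Doe", "email": "jane@example.com", "plan": "pro"}))
```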

Nearly half of surveyed organizations worry about shadow AI

Shadow AI is the unmonitored use of AI tools by employees or external vendors. Most often, it appears as employee use of generative AI, including commonly used platforms like ChatGPT or Gemini. As AI tools become more accessible, the risk of shadow AI grows: a 2024 study from LinkedIn and Microsoft found that 75% of knowledge workers use generative AI in their jobs. Unauthorized use of AI tools can lead to data leaks, greater difficulty with regulatory compliance, and bias or ethical issues.

The best defense against shadow AI starts with education. Creating clear policies and procedures for AI usage throughout a company, along with regular employee training, can help to protect against shadow AI. 

80% of organizations are not ready or are unsure how to meet AI regulations

As the uses for AI have grown, so has regulation. Most notably, the EU AI Act and the General Data Protection Regulation (GDPR) are the leading European regulations governing AI tools and data policies.

While the U.S. has no explicit AI regulations at this time, BigID recommends that companies comply with the EU AI Act, build auditability into their AI systems and begin documenting decisions made by AI to prepare for further regulation of AI usage.
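
As a rough illustration of documenting AI decisions, the sketch below appends each decision to a simple audit log; the fields and file format are assumptions for the example, not requirements drawn from the EU AI Act, and real deployments would likely want tamper-evident storage.

```python
# A minimal sketch of an AI decision audit trail using an append-only
# JSONL file. The log path and field names are illustrative assumptions.
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_decisions.jsonl"  # hypothetical log path

def record_decision(model_name, inputs, output, reviewer=None):
    """Append one AI decision to the audit log for later review."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "inputs": inputs,
        "output": output,
        "human_reviewer": reviewer,  # supports human-in-the-loop review
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

record_decision(
    model_name="loan-screening-v2",
    inputs={"applicant_id": "a1b2", "score": 0.82},
    output="approve",
    reviewer="compliance_team",
)
```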

As the potential of AI evolves, more companies are prioritizing digital help over human employees. Before your company jumps on the bandwagon, make sure to take the proper steps to safeguard against the new risks AI brings. 

Photo by DC Studio/Shutterstock
