
AI in the Workplace: What Are the Risks?


In 2024, accounting software provider Sage paused its generative AI assistant after users reported it had leaked sensitive financial data between accounts. Though it was meant to streamline small business operations, the tool instead spotlighted how fragile compliance can become when oversight fails.

Everyone’s racing to adopt AI—but organizations should first stop to ask how to manage it safely. 

Here are the risks companies shouldn't ignore when adopting AI, and what they need to know as they do.

"SUCCESS Starter Kit" bundle in store offer

What happens when AI becomes your next hire?

Microsoft’s Future of Work report highlights a pivotal shift: AI agents are no longer just tools—they’re digital colleagues. The report predicts that AI agents will soon be “managed” like employees, performing tasks autonomously while still requiring human oversight.

That changes everything. If an AI agent makes a mistake, who owns it? Who’s responsible?

“The onus is first on the organization to establish compliance principles and governance principles… not a regulatory framework,” says Reena Richtermeyer, a partner at CM Law who advises clients on AI governance.

She also emphasizes that internal structures—not just external laws—determine how safely AI is deployed. And right now, she adds, those structures are often missing or incomplete.

Governance by guesswork

AI can start out as helpful, but it may quietly become a liability if no one’s paying attention. Because of this, a human still needs to manage AI’s training and implementation.

“Someone is training those models.… There’s some, I think, accountability in the training of the agents,” says Carolyn Troyan, CEO of Leadership 360, who works with companies that are navigating workforce transformation. “And then I think it’s accountability on the leader and how they’re using those agents in the most effective way possible.”

That dynamic—leaders managing software that makes decisions—means supervision starts long before rollout. “Stakeholders need to be at the table with the engineering team or whoever’s implementing the AI solution at the beginning,” Richtermeyer says, “so that the goals, anticipated outcomes and oversight [are] happening at that point.”

But most organizations aren’t there yet. A 2023 survey by KPMG found that just 6% of organizations report “having a dedicated team in place for evaluating risk and implementing risk mitigation strategies as part of their overall generative AI strategy.”

Katrina Young, a digital transformation consultant, says that the ethical layer is especially easy to overlook. “It’s understanding the cultural and ethical oversight without having a human in the loop… making sure that they’re continuously maintained and updated.”

When AI goes wrong in public

Some companies are learning these lessons the hard way. Klarna, a Swedish fintech company, made global headlines in 2024 when it replaced 700 customer service reps with an AI assistant powered by ChatGPT. At the time, this sounded like a smart way to boost productivity, but it didn't take long for customers to voice their frustration. Many said the AI assistant wasn't solving their problems, and Klarna eventually reversed course and began rehiring human agents.

Young isn’t surprised. “There are no rules or guidance about it,” she says. “So the company equally would need to bring stakeholders in to understand, you know, ‘Do we have policies that are now using agents as that next step in supporting HR?’”

She adds that misuse, whether intentional or careless, can even cross legal lines. “I’ve seen it happen with robotic process automation… making logs of staff… activities,” she says. “It depends on the use case.… If it’s a use case for constructive dismissal, that could pose legal challenges.”

The rise of shadow AI

The Microsoft report also points to a trend that's making oversight even harder: Nearly 80% of AI users are bringing their own tools into the workplace, a phenomenon dubbed “Bring Your Own AI.” This means AI adoption isn't always happening at the organizational level; instead, it's happening quietly, department by department, or even individual by individual. This can quickly turn into a nightmare for anyone trying to build consistent policies across teams.
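
To see why, picture the kind of guardrail an IT department might want in place. The sketch below is purely illustrative, with invented tool names and rules (nothing here comes from the Microsoft report or any company quoted in this article): a simple check that blocks unvetted AI tools and holds sensitive data for a compliance review.

```python
# Purely illustrative: a hypothetical allowlist check an IT team might run
# before an employee's request reaches an external AI service. The tool
# names and rules are invented for this example.

APPROVED_AI_TOOLS = {"copilot-enterprise", "internal-llm"}

def is_tool_approved(tool_name: str) -> bool:
    """Return True if the organization has vetted this AI tool."""
    return tool_name.lower() in APPROVED_AI_TOOLS

def check_request(tool_name: str, contains_customer_data: bool) -> str:
    # Unvetted tools are blocked outright; even vetted tools can't
    # receive customer data without a compliance review.
    if not is_tool_approved(tool_name):
        return "blocked: tool is not on the approved list"
    if contains_customer_data:
        return "held: customer data requires compliance review"
    return "allowed"

print(check_request("chatgpt-personal", contains_customer_data=False))
# -> blocked: tool is not on the approved list
```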

“We have to understand how to utilize our tools,” Richtermeyer says. “Just because we have a powerful tool doesn’t mean that we can simply say… ‘We no longer have responsibility.’”

Leadership redefined

As AI reshapes how we work, many leaders must learn to manage technology that they don't fully understand.

“I think as a leader, deciding what work can be best done by an agent versus a person is probably one of the most important skills they can learn,” Troyan says. “Co-piloting [is] needed between a human and the AI to make sure that, ethically, everything’s happening correctly… that there’s oversight.”
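
What might that co-piloting look like in practice? One hypothetical approach, sketched below with invented task fields and an assumed risk threshold, routes routine work to an agent automatically while anything high-stakes waits for a human to sign off.

```python
# Illustrative sketch of "co-piloting," not a real product: routine tasks
# run autonomously, while anything above an assumed risk threshold waits
# for human sign-off. The Task fields and threshold are invented.

from dataclasses import dataclass

@dataclass
class Task:
    description: str
    risk_score: float  # 0.0 = routine, 1.0 = high-stakes

HUMAN_REVIEW_THRESHOLD = 0.5  # assumed cutoff; in practice, set by policy

def route(task: Task) -> str:
    """Decide whether an agent may act alone or a human must approve first."""
    if task.risk_score >= HUMAN_REVIEW_THRESHOLD:
        return f"queued for human review: {task.description}"
    return f"agent may proceed: {task.description}"

print(route(Task("summarize internal meeting notes", risk_score=0.1)))
print(route(Task("send a refund decision to a customer", risk_score=0.8)))
```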

And that’s just one layer. She adds that companies should be thinking now about how automation will affect workforce development. “[What] keeps me up at night is… how are you going to train those entry-level employees… if they have no job experience?” she asks. “If that work goes away based on AI, how then do we train that next generation of workers?”

These blind spots aren’t just theoretical—they can hit operations hard. A 2024 report from Deloitte’s AI Institute stated that “organizations feel far less ready for the challenges Generative AI brings to risk management and governance—only 23% rated their organization as highly prepared.”

That lack of visibility can turn small mistakes into high-stakes legal exposure. Companies that don’t know what their AI is doing—or who’s monitoring it—can’t defend those outcomes after issues arise.
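
Knowing what your AI is doing starts with writing it down. The sketch below is a minimal, hypothetical example of an audit trail (the field names are assumptions, not a standard) that records each agent action and whether a human reviewed it.

```python
# Illustrative sketch of a bare-bones audit trail: each record captures
# what an agent did and who, if anyone, reviewed it. The field names are
# assumptions for this example, not an industry standard.

import json
from datetime import datetime, timezone
from typing import Optional

def log_agent_action(agent_id: str, action: str,
                     reviewer: Optional[str]) -> dict:
    """Append one structured record so outcomes can be explained later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "human_reviewer": reviewer,  # None means the action ran unsupervised
    }
    with open("agent_audit.log", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

log_agent_action("support-bot-7", "closed ticket #1042", reviewer="j.doe")
```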

Responsibility isn’t optional

Ultimately, no matter how advanced the tools get, AI systems still need human oversight.

“[AI] can never, ever take over,” Young says. “We will always need human-led AI.”

You can’t bolt on accountability later. It has to be part of the system from the beginning. Companies that fail to plan for compliance now may find themselves trying to explain mistakes to regulators, customers, investors and their own employees later on—but by then, it may be too late.

Photo by Nuva Frames/Shutterstock.com
