Employers face growing exposure to compliance breaches and reputational harm due to widespread hidden AI use in the workplace, according to a major global study published by KPMG in partnership with the University of Melbourne.
The Trust in AI report, which surveyed over 48,000 workers across 47 countries, found that 57% of employees admit to concealing how they use artificial intelligence at work. Many are passing off AI-generated content as their own, often in violation of internal policies and without appropriate oversight.
While 58% of workers said they now intentionally use AI to support their job tasks, just 47% reported having received any formal AI training. The report highlights a clear disconnect between the growing integration of generative AI tools and the governance mechanisms needed to manage them responsibly.
Nicole Gillespie, co-author of the study and professor of management at the University of Melbourne, said the findings reveal a troubling level of “inappropriate, complex, and non-transparent” AI usage by employees. Much of this covert behaviour is driven by fear of being left behind.
“There’s pressure to use these tools just to keep up,” Gillespie explained. “If employers prohibit Gen AI, people hide what they’re doing. But there’s also a seductive element. Once they see the benefits, it’s tempting to keep going, even when it breaks the rules.”
Lack of AI training linked to data and trust risks
A significant number of employees use AI without checking its outputs or understanding the consequences: two-thirds said they do not evaluate the accuracy of AI responses, nearly half have uploaded company data into public AI tools, and more than half admitted to making mistakes in their work because of AI.
Sam Gloede, global trusted AI transformation leader at KPMG International, warned that such practices expose businesses to “significant risk”, ranging from errors and data leaks to loss of stakeholder trust. “It’s not just technical. Trust is a strategic asset,” she said.
The report links higher AI trust levels to stronger AI literacy and governance. In emerging economies like India, Nigeria and Saudi Arabia, 82% of workers reported trusting AI, compared to 65% in advanced economies. Those same markets also showed greater access to training and education.
Despite increasing reliance on AI, only two in five workers worldwide have received AI-specific education. Gillespie stressed that foundational and role-based training are essential if businesses are to foster responsible, open use of AI tools. “We need more than just instruction,” she said. “We need safe spaces for experimentation, learning, and collaboration.”
Transparent AI usage must be prioritised
With generative AI continuing to reshape the workplace, leaders are being urged to address covert AI use through better literacy programmes and clear governance. Both Gillespie and Gloede believe these measures are crucial to avoiding a crisis of trust.
“Boards now see trust as central to their strategy,” said Gillespie. “To realise the benefits of AI, organisations must invest in transparent, ethical usage, and help employees feel safe using it in the open.”