Major U.S. employers including Starbucks, Walmart, and AstraZeneca are among those using AI to monitor and analyze messages sent between employees.
According to a report from CNBC, the companies are all using an AI data platform to gather intelligence from workplace conversations.
Aware, the start-up behind the platform, collects unstructured data from collaboration tools such as Zoom, Slack, and Qualtrics. Other notable customers include Delta Air Lines, T-Mobile, and Nestlé.
This type of software has a variety of applications in the workplace including real-time monitoring of employee sentiment; spotting cases of harassment, noncompliance, and other inappropriate and illegal actions; and gauging employee responses to new HR policies.
Jeff Schumann, Aware's CEO, says customers like Walmart and Starbucks use the software for governance and compliance, reducing the risk associated with communication between employees.
He estimates that around 80% of Aware's customers use the platform for this purpose, and that a typical customer has around 30,000 employees. Aware's revenue has grown by an average of 150% per year since 2019.
The machine-learning models that power the AI-enabled employee surveillance are trained on over six billion messages and 20 billion interactions, drawn from more than three million employees.
CNBC heard from AstraZeneca that it uses one of Aware’s products to gather and review data that can be used to aid legal requests or investigations. This data includes messages shared between employees on collaboration software. The pharmaceutical multinational doesn’t use the software to track sentiment.
Delta Air Lines also spoke to CNBC, stating that it uses AI-enabled surveillance software to track employee sentiment as a form of feedback and to keep legal records of all exchanges.
Speaking about the capabilities of AI-powered analysis, Schumann emphasizes that the software gives employers real-time feedback, a vast improvement on annual engagement surveys.
“If you were a bank using Aware and the sentiment of the workforce spiked in the last 20 minutes, it's because they're talking about something positively, collectively. The technology would be able to tell them whatever it was,” he says.
He added that the software, which processes over 100 million pieces of content per day, can track toxicity in real time.
Previous studies have found that employees who are electronically monitored by their employers report higher levels of stress (56%) than those who are not monitored in this way (40%).
Other research has found that while 83% of employers cite ethical concerns with employee monitoring, 78% use surveillance software. The figure drops to 30% for employers who gather and store data from messaging logs.
Some of Aware’s tools keep employee data anonymous, addressing a major employee concern about workplace surveillance. For other tools, however, such as the product AstraZeneca uses to aid legal investigations, companies can set up role-based access under which names become visible.
Schumann says this is reserved for extreme cases such as bullying, harassment, or major compliance issues like insider trading. In those cases, the AI models can detect when predetermined, prohibited topics or key phrases appear in messages and identify the employee in question to the relevant parties.
While many forms of employee monitoring are legal in the U.S., and similar software has long been used for workplace email surveillance, CNBC heard from several experts on the potentially damaging effects of AI-powered employee surveillance.
“It results in a chilling effect on what people are saying in the workplace,” says Amba Kak, executive director of the AI Now Institute at New York University. “These are as much worker rights issues as they are privacy issues.”
Kak adds that claims that any surveillance software can protect employee privacy do not hold water. “No company is essentially in a position to make any sweeping assurances about the privacy and security of LLMs and these kinds of systems,” she explains.
Jutta Williams, co-founder of Humane Intelligence, an AI accountability nonprofit, argues that this kind of monitoring becomes a form of thought crime. “This is treating people like inventory in a way I've not seen,” she says.
Both Williams and Kak refer to general AI-enabled monitoring software rather than the Aware platform specifically.