A leaked internal email has revealed Amazon’s guidelines for employees on using third-party GenAI tools at work.
The memo, as reported by Business Insider, instructs employees not to use third-party GenAI tools for confidential Amazon work due to data privacy concerns.
“While we may find ourselves using GenAI tools, especially when it seems to make life easier, we should be sure not to use it for confidential Amazon work,” the email warns. “Don’t share any confidential Amazon, customer, or employee data when you’re using 3rd party GenAI tools. Generally, confidential data would be data that is not publicly available.”
It’s not the first time Amazon has had to issue a reminder to employees. In January 2023, a company lawyer wrote to workers not to share “any Amazon confidential information (including Amazon code you are working on)” with ChatGPT.
This warning was issued over concerns that such third-party tools may claim ownership over the data shared by employees, resulting in future output that includes or resembles confidential information. “I’ve already seen instances where its output closely matches existing material,” said the lawyer at the time.
It’s a timely reminder for employers that firm guardrails are needed so employees understand how they can and cannot use third-party AI tools at work.
Workplace AI policies are not standard practice
Research from Salesforce reveals that seven in ten employees are using AI without training on safe or ethical use, and over half are using GenAI without approval from their company. Only 17% of U.S. industries have even loosely defined policies on AI. The issue is particularly prevalent in industries such as healthcare, where 87% of global workers say their company lacks clear policies on AI use.
Employers and HR teams have to get more visibility on how their employees are using AI and offer support, training, and guidance to make sure they are using it safely.
Company insiders at the likes of Google and Microsoft, industry leaders including Elon Musk, and officials such as President Biden have all raised concerns that those behind AI platforms are prioritizing speed over safety. There have even been calls to pause the development of further AI systems to provide time to establish proper safety protocols. However, AI products have continued to be launched or upgraded to more powerful models.
Yes, many companies are reaping the rewards of AI, including streamlining recruitment practices and improving other HR practices such as performance reviews and employee listening. However, having guardrails in place is particularly important as the faults and flaws of GenAI tools are exposed.
On February 21, Google paused the ability of its tool, Gemini, to generate images of people after attempts to ensure that generated images did not depict harmful racial stereotypes ended up making it difficult for the tool to generate images of white people. It described its own tool as “missing the mark” in a statement released on X.
It’s not the only AI tool to suffer issues in recent weeks. After users reported ChatGPT responding with gibberish, an OpenAI investigation traced the problem to a bug that left the model’s language processing, in the company’s words, “lost in translation.”
Even Amazon’s own AI tool, Q, faced criticism back in December, with an employee claiming it suffered “severe hallucinations” and could hypothetically reveal confidential data or deliver bad legal advice, though these claims were disputed by an Amazon spokesperson.
Any model in use carries limitations and risks, particularly in the workplace, whether that’s hallucinations during content generation or compliance breaches by employees.
HR must set up transparent AI guidelines
Amazon’s memo is a reminder to employers and HR teams to create, communicate, and train employees on clear and transparent AI usage policies.
“Employees use our AI models every day to invent on behalf of our customers – from generating code recommendations with Amazon CodeWhisperer to creating new experiences on Alexa,” an Amazon spokesperson wrote to Benzinga in an email.
“We have safeguards in place for employee use of these technologies, including guidance on accessing third-party generative AI services and protecting confidential information.”
As the leaked Amazon email states, GenAI certainly “seems to make life easier.” But whether employers actively embrace AI or remain in the dark on employee usage, they should exercise caution and create the AI safeguarding policies that many workplaces are still clearly lacking.