Artificial Intelligence is the buzzword of the moment. The emerging technology has swiftly transformed the business landscape, delivering efficiencies and opportunities to even the most technologically resistant businesses.
Yet, alongside its promising implementation, AI brings with it a labyrinth of legal intricacies that businesses must deftly navigate to thrive in this evolving digital era.
In fact, according to AP News reporting, over 400 AI-related bills are due to be debated this year, including nearly 200 targeting deepfakes and chatbots, such as ChatGPT.
“AI does, in fact, affect every part of your life whether you know it or not,” Suresh Venkatasubramanian, a Brown University professor who co-authored the White House’s Blueprint for an AI Bill of Rights, recently told CNN. “Now, you wouldn’t care if they all worked fine. But they don’t.”
The U.S., then, is far from future-proofing its legal approach to the implementation of AI.
From data privacy concerns to discrimination risks and intellectual property complexities, the current landscape surrounding AI implementation demands careful attention and proactive measures from businesses.
Safeguarding sensitive information
One of the foremost legal considerations for businesses venturing into AI adoption is data privacy and security. AI systems rely heavily on vast troves of data, often containing sensitive customer information.
To comply with regulations such as the Health Insurance Portability and Accountability Act (HIPAA) in the U.S., businesses must fortify their data protection measures. Encryption, access controls, and regularly updated privacy policies are paramount to mitigating legal risks and maintaining consumer trust.
Without a clear policy, companies risk data breaches, mishandling of information and ethical violations. An AI use policy sets clear guidelines and boundaries, ensuring that both the organization and its employees use AI responsibly and safely.
Key elements of an AI use policy include:

- guidelines for maintaining the confidentiality of sensitive information and ensuring data privacy;
- standards for ethical AI use, avoiding biases and ensuring fairness in AI-driven decisions;
- clear instructions on security measures to prevent unauthorized access or leaks of sensitive data; and
- clear boundaries on what AI can and cannot be used for within the workplace.
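One practical safeguard such a policy can mandate is scrubbing obvious personal identifiers from text before it is sent to any external AI service. A minimal sketch in Python follows; the patterns and the `redact` helper are illustrative assumptions, not an exhaustive PII filter, and a production system would need far broader coverage.

```python
import re

# Illustrative patterns only -- real PII filtering would also need to
# handle names, addresses, medical record numbers, and much more.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Patient John, SSN 123-45-6789, email john@example.com"
print(redact(note))  # -> Patient John, SSN [SSN], email [EMAIL]
```

A policy backed by even a simple automated gate like this is easier to audit, and to enforce, than one that relies on employees remembering the rules.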
Ensuring fairness and accountability
AI presents a chance to instill largely bias-free processes for talent attraction and retention. As many as 83% of employers already use algorithms to help in hiring, and that figure rises to 99% among Fortune 500 companies, according to the Equal Employment Opportunity Commission.
However, if trained on biased datasets, algorithms can perpetuate discrimination and bias, leading to serious legal liabilities.
For instance, if an AI-driven hiring tool systematically discriminates against a particular group, the business could face discrimination claims and lawsuits.
Issues such as these were prevalent in the first wave of adoption of such technology. Amazon, for example, scrapped its hiring algorithm project nearly a decade ago after the tool was found to favor male applicants.
The AI was trained to assess new resumes by learning from past ones, which came largely from male applicants. Although the algorithm didn’t know applicants’ genders, it still downgraded resumes containing the word “women’s” or listing women’s colleges, in part because such resumes were underrepresented in the historical data it learned from.
To mitigate this risk, businesses must cleanse training data rigorously and employ fairness-aware AI models, likely through the use of strategic partnerships with suppliers.
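One long-standing screening test from the EEOC is the “four-fifths rule”: if the selection rate for any group falls below 80% of the rate for the most-selected group, the tool may be having an adverse impact. A minimal sketch of that check in Python (the function names and sample figures are illustrative, and passing the check is not by itself proof of fairness):

```python
def selection_rates(outcomes):
    """outcomes maps group -> (selected, applied); returns group -> rate."""
    return {g: sel / applied for g, (sel, applied) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the EEOC four-fifths rule of thumb)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top < threshold for g, rate in rates.items()}

# Illustrative numbers: 60 of 120 male applicants advanced vs 20 of 80 female.
flags = four_fifths_check({"men": (60, 120), "women": (20, 80)})
print(flags)  # women's rate (0.25) is half the men's (0.50) -> flagged
```

Running a check like this on a hiring tool’s outputs, before and after deployment, gives a business a documented, repeatable record that it looked for adverse impact.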
It’s also worth noting that transparency in AI decision-making is crucial for ensuring fairness and accountability. By documenting AI processes and establishing clear lines of responsibility, businesses can navigate potential legal challenges and regulatory scrutiny with confidence.
Clarifying ownership and rights
The emerging wave of AI-generated content raises some big intellectual property questions. Currently, the waters around who owns the content produced by AI bots such as ChatGPT are decidedly murky.
When it comes to intellectual property, the model for ChatGPT “is trained on a corpus of created works and it is still unclear what the legal precedent may be for reuse of this content, if it was derived from the intellectual property of others,” according to Bern Elliot, an analyst at Gartner.
For one, these AI tools tend not to include citations or offer attributions to the original sources and IP used or synthesized. However, if ideas are used but not ‘copied’, the use would not implicate copyright or other protected IP, according to Neal Gerber Eisenberg’s IP practice.
From a legal standpoint, under current U.S. law, the work must be the result of original and creative authorship by a human author. Yet does that mean that workers inherently own the materials they create through the use of AI? This, it seems, comes down to the specific wording in their contracts.
Regardless, this type of practice is potentially legally risky and businesses must tread carefully to avoid infringing existing intellectual property rights when utilizing AI to create or modify content. Legal counsel is indispensable in navigating these complexities effectively.
Mitigating risks in AI-integrated products and services
AI systems integrated into products or services can expose businesses to product liability claims: if an AI-driven feature malfunctions and causes a customer harm or financial loss, the question of who bears responsibility quickly becomes contested.
Establishing clear contractual relationships and responsibilities is crucial for allocating liabilities effectively. Comprehensive product liability insurance and consultations with legal experts can help mitigate risks associated with AI-driven products and services.
Meeting regulatory demands
As AI decisions become integral to business operations, accountability and transparency are paramount. Regulatory bodies and stakeholders increasingly demand insight into AI decision-making processes.
By documenting AI processes and prioritizing transparency, businesses can align with emerging regulations and mitigate legal challenges.
In short, be careful
Unfortunately, navigating the complex legal landscape of AI implementation is still a minefield for U.S. businesses.
Yet to be ready for what could be a wave of new legislation in 2024, businesses must prioritize data privacy, address discrimination concerns, clarify intellectual property rights, manage product liability risks and ensure accountability and transparency in AI systems.
By doing so, they can harness the transformative potential of AI, while also safeguarding their legal interests and reputation in an increasingly AI-driven business landscape.