GPT-4o to Google I/O | It's been a gigantic week for AI. But I'm still anxious about its future & your employees are too

For fans of AI, this week has been like the Oscars, the Super Bowl, and Coachella rolled into one.

OpenAI kicked off on Monday by wowing onlookers with the launch of GPT-4o, now capable of reasoning not just over text input, but also audio, images, and video. Its chatbot can respond to conversational voice commands and even provide real-time translation between two people who speak different languages.

Not to be outdone, Google followed up by announcing Project Astra on Tuesday at its annual I/O conference. Astra is described as an “AI agent for everyday life,” which can also interact and respond to live video and audio. 

Then the likes of Mastercard and Microsoft released articles detailing their own work with AI, and how internal AI-powered tools are helping with everything from accelerated interview scheduling to chatbots for employees that save HR consultants hundreds of thousands of wasted hours in answering basic queries.

Enough, then, to have Silicon Valley tech bros salivating and AI evangelists champing at the bit to see what else can be automated.

But while these developments are no doubt exciting innovations, for many the rapid progress in the field of AI is less awe-inspiring and more anxiety-inducing.

AI is exciting. But the risks of mismanagement also bring anxiety

First up, it’s only fair to caveat that I come to this from a position of bias. As an editor, I work in a content industry already being turned on its head by the introduction of AI. At the beginning of 2023, when AI hype was at its peak, I was encouraged by my then-employer to begin experimenting with AI to see how it could automate or accelerate aspects of content production.

I was assured it would never replace my work. I was pushed to test, and increasingly create content, with AI. And then, in November 2023, less than a year later, I was laid off.

Granted, other factors at play meant downsizing was necessary. It’s a tricky time to be a media organization with layoffs rife in the industry over the past twelve months. But it was clear my employer felt that AI, rather than my work, was the future of its content strategy. Ouch.

While I do come at this topic with a jaded view of AI technology and its risks, as well as its rewards, I’m far from alone in having anxieties about the rapid rise of this technology and its responsible use, both in its impact on labor as well as the possibility of bias and error.

There is no doubt that AI technology is powerful. In a December 2023 survey from CNBC and SurveyMonkey Workforce, 72% of respondents who use AI agreed that this type of automation significantly increases productivity. The real-world examples above bear this out, including within HR: AI bots at Microsoft led to a 26% faster response rate to initial HR inquiries, and AI scheduling at Mastercard helps it arrange job interviews 90% faster.

But I’m not the first ‘Ben’ to believe that with great power comes great responsibility. The words of Spider-Man's famous uncle also apply to AI. How we handle the introduction of this technology really, really matters. No, really.

Employers must prioritize people-centric AI communication & adoption

In the same CNBC-SurveyMonkey survey, 60% of employees who use AI regularly reported worrying about its impact on their jobs. They, just like me, are right to be worried.

Earlier this week, International Monetary Fund Managing Director Dr. Kristalina Georgieva warned that an AI "tsunami" is hitting the global labor market, with a staggering 60% of jobs in advanced economies such as the US and UK expected to be impacted.

And you could wallpaper a house with the number of open letters signed by industry experts and leaders cautioning that we do not yet understand the ramifications of AI on our society and that it must be introduced responsibly with the lens of ethics and bias firmly in mind.

Many organizations have historically prioritized speed and ‘winning the race’ over putting people first. It’s why so many employers now have to invest heavily in untangling the systemic bias embedded in their technology and processes, or make monumental job cuts because their gamble on growth didn’t pay off.

Some are getting it right. Mastercard’s EVP for People Operations and Insight and Chief Talent and Organization Effectiveness Officer, for example, detail in their statement:

“AI is an exciting tool, and that’s important to remember — it’s a tool that people use. We view AI as a partner on our journey to improve the employee experience, and we work hard to create opportunities to use it but also to nurture conversations about it.

We host ongoing discussions about the trends, technologies and safeguards we’ve put in place to ensure our employees know our AI strategy and the current use cases for AI that create value for our business… coupled with our commitment to ethical AI and avoiding bias in AI through education of our data privacy and responsibility principles and AI guidelines.”

Unfortunately, not all employers are as clear. One 2023 survey from ISACA (the Information Systems Audit and Control Association) found that only 10% of companies have a formal AI policy. So, for the sake of your people who, just like me, are uncertain and worried about what rapid AI innovation will mean for them and society at large, get clear about your stance on AI. Explain what you plan to test and how you will do it safely, ethically, and without bias. And above all, make sure your AI adoption is human-centric.
