July 3, 2024

Three Things Companies Should Never Delegate to AI

OpenAI CEO Sam Altman asked Scarlett Johansson to be the voice behind GPT-4o, only to fabricate something nearly identical to her voice when she refused. The episode drew media outrage at the brazen pilfering of the actress’s identity, likeness and agency, and stood as a harbinger of a technology whose unfettered growth seems to matter more than growing the right way. But the reason he wanted her voice may be just as troubling as the means he used to get it: to create a product whose voice would be more comforting to its users.

A comforting voice sounds appealing at a time when work is having a dark moment. Employee engagement is at an 11-year low, with the youngest workers hardest hit. American workers at all levels are disconnected and lonely, regardless of whether they are in person or remote. AI holds great promise to ease the daily burdens of work, tapping into a wide range of data with remarkable efficiency. But companies are racing toward adoption, and speed rarely breeds thoughtful decision-making. Lured by AI’s soothing voice and human-like engagement, leaders may be tempted to use it to replace functions in ways that significantly disrupt workplace culture and well-being. The choices they make in this moment will alleviate or exacerbate the growing disaffection of a stressed workforce, and ultimately shape their own future.

How workplaces leverage a tool that sounds and feels human is ultimately an ethical question. Research tells us that a more anthropomorphic AI increases the likelihood of consumer acceptance. And it certainly increases the likelihood of confusion. With its linguistic skill and a reassuring tone, AI may seem an expedient tool for things humans naturally avoid, like interpersonal encounters that provoke conflict or discomfort. When it comes to trust-based human interactions, however, our very future hangs in the balance. There’s a reason it’s called “artificial” intelligence. There are some things you simply can’t fake or replace with humanoid machines. Here are three critical places where employers must retain human control and avoid abdicating responsibility to AI, no matter how tempting it may be.

Employees Want To Be ‘Seen’ By Carbon Colleagues, Not Silicon Ones

Giving meaningful feedback and reviewing employee work may be one of the most stressful parts of a leader’s job. Even in the face of clear development needs, team leaders often hold back for fear of damaging working relationships and losing valuable employees. Many organizations are looking to generative AI to help them develop and deliver employee reviews and feedback, ostensibly to drive objective, clearly communicated, data-driven outcomes. But the data on which it trains often lacks the flexibility to consider special circumstances or honor outsized or innovative contributions, forcibly measuring workers against the status quo. And many employees are leery of being measured by an algorithm, perceiving active human guidance as the critical ingredient of a fair and personalized process.

Outside the performance review cycle, many companies rely on technology to recognize workers, using apps and employee recognition software to boost employee morale. These platforms offer virtual points for a job well done, often tradeable for cash or other benefits. But they come at a cost. Researchers question their value because they can amplify traditional or pre-determined expectations about success, supporting some workers while overlooking others. Ultimately they miss the point. Effectively boosting morale requires human intention and effort, not just a digital box check. For recognition to feel valuable, it must have a face. Employees want to be seen by human eyes. They want their colleagues to take the time to notice and celebrate their contributions.

There’s No Replacement For Human Empathy

It turns out that AI is really good at emotional support. So good, in fact, that recipients of an AI-generated emotional support service rated it superior to human responses. Until they found out it was AI. While AI can generate responses that make people feel heard, people with human problems crave human support, not a soothing machine. The combination of humans and AI, however, proves to be the most powerful tonic. Drawing on a vast trove of data, humans supported by AI in peer-to-peer counseling contexts provide 20% more empathic responses than humans alone. A powerful example of both-and ingenuity.

This human-AI partnership in emotional contexts has fruitful workplace implications. Empathy is a complex, multi-layered behavior. In tough situations, AI may excel at cognitive empathy, but it can’t generate the human response needed for emotional or even visceral empathy. Companies that delegate helplines to AI control risk angering their workers, destroying their trust and damaging the chances of a helpful resolution. But using AI as a coach, helping people listen more attentively and empathically to important cues, has been shown to improve employee support as well as hiring, retention and customer service.

If You Manage People With Machines They Will Act Like Machines

When my children were in elementary school, they earned merits for good deeds, demerits for their transgressions. At first, they were careful with their actions, eager to please and careful to avoid undesired behaviors. But a few demerits in, they stopped caring. Research reveals nearly identical sentiments in companies that use AI to manage their employees. Algorithmic management uses data collection and surveillance to fully or partially automate decisions. Heavily deployed in the retail and service industries, AI management significantly reduces interpersonal support (science calls this prosocial motivation), a critical driver of workplace productivity, collaboration and well-being. This reduction is most pronounced when algorithms monitor and evaluate employee performance.

The cost is high. AI-managed groups gave 20% less advice to their colleagues than human-managed groups, and the advice was poorer in quality. In addition, AI-managed workers felt more objectified because their employers prioritized non-human over human capacities. They, in turn, objectified their coworkers. The resulting damage could be mitigated, but only in work environments that prioritized social interaction, where workers had time to establish mutually supportive relationships. While algorithmic management has a place in certain high-volume business functions, it must be balanced with human involvement and social connection. People need each other to bring their best selves to work. The message is clear: if you manage people with machines, they will act like machines.

Employers are rushing to deploy AI’s power to boost productivity and reduce drudgery. As it gains a more human-like demeanor and voice, leaders must carefully weigh the value of leveraging AI against its human costs. These ethical decisions remain the sole province of humans. Just because AI can do something doesn't mean it should. When it comes to the delicate work of managing and supporting people, AI’s strength is its ability to support human efficacy, to streamline processes, gather data and free humans to treat each other with respect and dignity, without deception. You simply can’t fake humanity. There's no app for that.

First published on Forbes.com.
