Decades before AI accelerated, academics, ethicists, and science-fiction writers explored and debated the ethics of robotics and artificial intelligence. But as a speculative, theoretical idea, the subject seemed better fodder for post-apocalyptic theatre than for practical technological application.
That all changed once AI could be applied to a Google-sized data set, with unprecedented ramifications. It also set the stage for the questions we now face about the role of AI in the world.
Since there is scant regulation, we depend on companies' internal teams to police their own actions. We have to ask ourselves whether it's ethical to build AI for a specific purpose or with particular capabilities. A lot of AI's moral dilemmas tend toward the fantastical (will a driverless car choose its occupants' lives over the lives of pedestrians or other drivers?), but many of the critical questions deal with the limited, insufficiently diverse data sets that train these systems in the first place. In the driverless car example, what if the system doesn't recognise a person in a wheelchair?
It Matters for Recruiting, Too
Of course, ethical AI isn't just for big Silicon Valley tech companies to grapple with. This year, video interviewing and assessment company HireVue discontinued a controversial product that used facial analysis to tell an employer more about candidates.
While HireVue claimed a third-party algorithmic audit showed the product didn't harbour bias, CEO Kevin Parker told Wired that it "wasn't worth the concern." Experts have continually questioned the idea of using AI to determine someone's ability.
In reality, the perception, at least, is that such tools are biased and lead to an awful candidate experience, especially for minority applicants. When I talked to friends outside our industry about what some of these tools claimed to do, they were more frank in their impression: it's creepy AF.
What can the average candidate do about this? Outside of giving feedback to the organisation doing the hiring, not much. Claims of bias, whether widespread or individual, are difficult to pin down. Beyond the outcomes of discrimination themselves, there's virtually no policing of AI practices other than by working groups affiliated with, and often inside, the organisations themselves.
The algorithms are often opaque and deemed proprietary, shielding them from those who question them. Moreover, while the Organisation for Economic Co-operation and Development (OECD) has developed AI guidelines, companies rarely have any incentive to comply with them.
A Moment for Ethical AI
Big Tech backlash, coupled with some high-profile missteps, could mean more regulation is on the way. While regulation isn't always the answer, and is often rejected by tech innovators, some limits likely need to be set.
For example, a plain-language explanation of how your data will be used is the bare minimum, yet it isn't required today. A clear understanding of how the AI works, how a specific supplier continually tests for bias, and whether it remediates its algorithmic approach when problems surface seems entirely reasonable to expect.
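To make "tests for bias" concrete, here's a minimal sketch of one widely used check, the four-fifths (adverse impact) rule from US employment-selection guidelines. The group names and numbers below are illustrative assumptions, not data from any vendor or tool.

```python
# A minimal, hypothetical sketch of one common bias check: the
# "four-fifths rule" (adverse impact ratio) from US employment-selection
# guidelines. Group names and counts are illustrative, not real data.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants a screening tool advanced to the next stage."""
    return selected / applicants

# Illustrative outcomes for two applicant groups.
rate_group_a = selection_rate(selected=48, applicants=100)
rate_group_b = selection_rate(selected=30, applicants=100)

# Adverse impact ratio: the lower selection rate divided by the higher.
# A ratio below 0.8 is a widely used red flag, not a verdict.
impact_ratio = min(rate_group_a, rate_group_b) / max(rate_group_a, rate_group_b)

print(f"Adverse impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Below the four-fifths threshold: ask the vendor to explain.")
```

A single ratio like this is a screening heuristic, not proof of fairness either way, which is exactly why buyers should ask vendors what they measure beyond it.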
But something that can be done today in recruiting is for TA leaders to be more discerning buyers. Be sceptical of marketing claims about bias-free AI tools. Don't purchase AI tools when you don't understand how they work or how they may impact your hiring process.
Those on the receiving end of AI-driven outcomes, whether that's being labelled an unsuitable candidate in a job interview or not being labelled at all in a picture, have minimal power. Whether it plays out across Google's billions of users or in the much smaller context of hiring, the power rests with the people who choose and implement AI tools.