Fides Search recently released their latest research paper titled Bias Uncoded: How to integrate AI and other technologies into your D&I agenda.
The paper investigates how AI and other technologies can be applied to help mitigate unconscious bias within organisational processes. It identifies the functions most likely to benefit from technological innovation, outlines the diversity & inclusion (D&I) technology that already exists and evaluates the potential costs and benefits of investing in these solutions.
“Bias Uncoded…” addresses four key questions:
- What is diversity & inclusion (D&I) technology?
- Why are D&I technologies coming to market right now?
- What types of D&I technologies exist and how are they applied?
- Who are some of the players in the different D&I technology categories and what do they bring to the table?
Fides Search summarises major use cases and risks behind investing in D&I technology.
Use Cases
Data Analytics
Data analytics can give an organisation an in-depth look into the diversity profiles of its workforce. It empowers firms to accurately identify specific areas of the business where unconscious bias is most prolific, benchmark their recruiting processes against industry standards and present insight to drive change.
Data analytics has the most potential for identifying barriers to gender balance in organisations. A good example, highlighted by Fides Search, involves analysing data on women returning from maternity leave to investigate whether return-to-work policies have the desired effect.
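As a purely illustrative sketch of the kind of analysis Fides Search describes (the data, field names and policy labels below are invented, not taken from the report), one could compare twelve-month retention rates for employees who returned from maternity leave before and after a policy change:

```python
# Hypothetical sketch: did a new return-to-work policy improve retention
# after maternity leave? Records and field meanings are made up.
records = [
    # (returned_under_new_policy, still_employed_12_months_later)
    (False, True), (False, False), (False, False), (False, True),
    (True, True), (True, True), (True, False), (True, True),
]

def retention_rate(records, new_policy):
    """Share of a cohort still employed 12 months after returning."""
    cohort = [kept for policy, kept in records if policy == new_policy]
    return sum(cohort) / len(cohort)

before = retention_rate(records, new_policy=False)  # 2 of 4 retained
after = retention_rate(records, new_policy=True)    # 3 of 4 retained
print(f"Retention before policy: {before:.0%}, after: {after:.0%}")
```

A real analysis would of course use HR-system data and control for confounders; the point is only that the question is answerable with a simple cohort comparison.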
Recruiting
AI and machine learning can have a substantial impact when it comes to removing bias in recruitment. D&I technology is capable of removing information that could potentially trigger bias amongst recruiters and hiring managers.
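The removal of bias-triggering information described above is sometimes called blind screening. A minimal sketch of the idea (field names here are hypothetical, not drawn from any specific vendor's product) is simply to strip identifying fields before a profile reaches a reviewer:

```python
# Minimal blind-screening sketch: remove fields that could trigger bias
# before a candidate profile reaches a recruiter. Field names are invented.
SENSITIVE_FIELDS = {"name", "gender", "age", "photo_url", "address"}

def redact(profile: dict) -> dict:
    """Return a copy of the profile with sensitive fields removed."""
    return {k: v for k, v in profile.items() if k not in SENSITIVE_FIELDS}

candidate = {
    "name": "Jane Doe",
    "gender": "F",
    "skills": ["Python", "SQL"],
    "experience_years": 5,
}
print(redact(candidate))
```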
D&I solutions have grown in popularity over the last few years, and as a result, organisations are beginning to see visible uplifts in the number of diverse trainee and junior employees.
Employee development and advancement
Adding automation to employee development and advancement processes can help remove biased judgment from basic business processes. Employees are exposed to more challenging work, which presents an opportunity for career development and contributes to overall business growth.
Risks to be aware of
1) Data
AI won’t be an effective tool unless it has enough accurate, unbiased information to learn from. Firms should choose the ways in which they apply AI wisely, and use multiple different data sources to generate conclusions from.
This technology also requires large amounts of data to be effective, and the amount of data currently gathered on ethnicity, disability and social mobility is minimal.
“Machine learning is amazing at taking a complex set of input parameters and example scenarios and finding correlations between them. In the case of Amazon, these were unfortunately biased and spurious. However, we should not discard this technology, but instead recognise that any new employee requires supervision.
At EVA.ai our machine learning is fed only relevant professional career data, which cannot be used to identify individuals or their personal characteristics. The machine learning is supervised by a large number of agents, and its results are separately checked for bias.
We should treat machine learning as a new junior employee. We should control access to sensitive data, supervise them carefully and measure their performance, fairness and productivity”, says Charlie.
2) Monitoring
There must also be a monitoring process in place to check results regularly. Monitoring the solution you implement is essential to ensuring that unwanted evolution doesn't happen in your systems.
It is imperative that everything fed into the machine, and everything it produces, is checked by an individual or team of people who are skilled in understanding its workings.
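One widely used check of this kind, which the report does not prescribe but which illustrates what regular monitoring can look like, is the "four-fifths rule": compare selection rates across demographic groups and flag any group whose rate falls below 80% of the highest. The numbers below are made up for illustration:

```python
# Illustrative bias monitor based on the four-fifths rule: flag any group
# whose selection rate is below 80% of the best group's rate. Data is invented.
outcomes = {
    # group: (number selected by the tool, number of applicants)
    "group_a": (30, 100),   # 30% selection rate
    "group_b": (18, 100),   # 18% rate, below 0.8 * 30% = 24%, so flagged
}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Map each group to True if its selection rate fails the threshold."""
    rates = {g: selected / total for g, (selected, total) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

print(adverse_impact_flags(outcomes))
```

Running such a check on every batch of results, rather than once at purchase time, is what catches the "unwanted evolution" the report warns about.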
3) Transparency
Most tools that you will consider purchasing or building will run off relatively complex algorithms. Unless you're a developer or data scientist, these are likely to be difficult to comprehend.
If you don’t know how a tool reached a certain recommendation, you can’t trust that it’s free of bias and fit to be embedded in your organisation.
4) Team Diversity
For any D&I technology you’re considering, there should be a diverse team of individuals behind it. There is a greater risk of bias infecting your decision-making processes if the data is curated by a non-diverse team.
5) Maintaining human involvement
With the proliferation of chatbots, automated job applications and video interviewing to remove human biases from the recruitment process, there is a risk that the absence of human interaction could make individuals feel devalued by an organisation.
You want to find the right balance between removing human judgment from the stages of the process where bias is most prolific and involving team members enough for prospective candidates to get a sense of your firm's culture and remain invested in the organisation.
“With EVA.ai we develop partnerships between our users and AI, building a trusting relationship where recruiters comfortably delegate duties to the bot while feeling confident that any issues will be immediately escalated to them. We call this exchange “hand-over / hand-back”. Our measure of good AI-human partnerships is the ease of hand-over and users’ confidence in this process”, says Charlie.
6) Generating buy-in
The successful implementation of D&I technology hinges on the ability to convince law firm leaders to commit to both purchasing and investing the time to utilize such resources.
EVA.ai is featured as a D&I technology vendor that “undergoes a rigorous verification process with an organisation through real user testing, based on over 1 million interactions with its training data.”
Read the full report for a more in-depth look into D&I technology. To discuss this topic further, contact authors Emily Clews (eclews@fidessearch.com) or Gopi Jobanputra (gopi@fidessearch.com).