While artificial intelligence is bound to change the business world, it also creates many situations we are not prepared for. For example, if candidates’ résumés are screened by an algorithm, how can we ensure the process is not discriminatory? An overview.
Debate is raging at the moment about how to deal with the sudden arrival of machine learning applications that make decisions without human intervention. “We have seen the biases observed in Google’s image-association systems, and in older algorithms such as COMPAS, the algorithm deployed in the United States to assess the risk of criminal recidivism among defendants, which is extremely unfavourable to racial minorities,” says UQAM computer science professor Marie-Jean Meurs.
“The legal framework surrounding algorithmic bias is not necessarily inadequate; the problem is how it is used,” says the professor. She deplores the fact that lawyers often fail to apply the regulations already in force, because they do not connect artificial intelligence issues with the legal tools at their disposal.
Ms. Meurs notes, however, that much legislative work is underway, citing as an example the new Directive on Automated Decision-Making, which all federal institutions will have to comply with by April 2020 and which introduces the requirement to carry out an impact assessment before deploying an automated decision system.
Biases hard to avoid
Algorithmic bias can be introduced in the training set (the data from which an algorithm “learns” the relationships between variables) or in the premise (the starting hypothesis), and it can, for example, screen out applications from women or visible minorities.
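A minimal sketch, using entirely hypothetical data, of the first mechanism described above: when the historical records an algorithm learns from already reflect discriminatory decisions, a naive model will reproduce that discrimination, recommending against equally qualified candidates simply because of their group.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (group, qualified, hired).
# Past human decisions favoured group "A" regardless of qualification,
# so the bias is baked into the training set before any model exists.
history = [
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

def fit_hire_rate(records):
    """'Learn' the historical hiring rate for each group."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, _qualified, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

def recommend(rates, group, threshold=0.5):
    """Recommend a candidate if their group's historical rate clears the threshold."""
    return rates.get(group, 0.0) >= threshold

rates = fit_hire_rate(history)
# Two equally qualified candidates receive different recommendations:
print(recommend(rates, "A"))  # True
print(recommend(rates, "B"))  # False
```

The model never sees an explicit instruction to discriminate; it simply mirrors the pattern in its training data, which is why diverse, carefully audited training sets matter.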
“[To counter these slippages], you have to be aware of your prejudices and biases,” says Ms. Meurs. “Large companies have the means to build very diverse training sets and to hire people to think about the composition of those sets. Let’s hope it gets better in the next few years.”
“Designers bear a responsibility here. Institutions rely on pitches from specialists that are far too marketing-driven. The least we can do is tell people: we do things very well, but we sometimes get it wrong, and you have to stay in control,” she concludes.