In late 2018 it became known that Internet giant Amazon had abandoned an AI tool it had been developing for several years. The tool was meant to automate the sifting of the large numbers of job applications the company received, quickly identifying the most suitable candidates for a position. From that point on, human recruiters were meant to take over: processing the shortlisted applications, interviewing applicants and signing contracts with the top new hires.
The idea turned out not to work. When hiring for technical positions, the system was biased and often preferred male candidates over female ones. The system, it seemed, had a built-in gender bias! Despite several attempts to improve the solution, it did not seem possible to create the useful tool the company had hoped for, and the project was eventually abandoned.
There are more examples of discriminatory AI behaviour. But such traits are, of course, not inherent in the technology; they depend on the information fed to the system when it is trained. An AI model is developed using massive amounts of historical data, and this is the information the model uses to make decisions in future situations.
In the case of Amazon’s tool, the data available to train the model was collected from earlier decisions made by humans. That so few women were recommended for technical positions was due to the model learning that this is the way it usually is. The technology has no built-in anti-bias feature, even though it is not, in itself, biased. It simply learns from human decisions, and if biased decisions are fed to the model without being carefully vetted, the machine learns that bias as the right answer.
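The mechanism is easy to demonstrate. The sketch below is a deliberately simplified illustration, not Amazon’s system: it generates hypothetical, synthetic “historical” hiring decisions in which equally skilled men and women were treated differently, then shows that even a trivial model trained only on that history reproduces the gap.

```python
import random

random.seed(0)

# Synthetic past decisions (illustrative assumption, not real data):
# the hiring chance depends on skill, but with a penalty for women
# baked into the historical human decisions.
def past_decision(gender, skill):
    chance = skill * (1.0 if gender == "M" else 0.5)
    return random.random() < chance

history = []
for _ in range(10_000):
    gender = random.choice(["M", "F"])
    skill = random.random()  # skill is distributed identically for both groups
    history.append((gender, skill, past_decision(gender, skill)))

# A naive "model" that simply learns the historical hire rate per gender.
def learned_hire_rate(gender):
    outcomes = [hired for g, _, hired in history if g == gender]
    return sum(outcomes) / len(outcomes)

print(f"Learned hire rate, men:   {learned_hire_rate('M'):.2f}")
print(f"Learned hire rate, women: {learned_hire_rate('F'):.2f}")
```

Even though skill was drawn from the same distribution for both groups, the model’s learned hire rate for women comes out roughly half that for men, because that is what the biased history says.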
Creating a bias-free artificial intelligence demands feeding it bias-free data. When Microsoft’s chatbot Tay was released on Twitter a few years ago, the account had to be shut down after no more than 24 hours: it turned out she had learned a little too much of the bullying language of Twitter and had become something very different from the sociable machine she was meant to be.
AI becomes what we train it to be. Bias-free machines that help us make informed decisions, free from the influence of preconceived ideas, will become real only when we know how to select the proper data for training a model and how to validate it without bias.
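What might such a validation step look like? One common check, sketched below under simplifying assumptions, is to compare a model’s selection rates across groups before trusting it. The `fair_model` here is a hypothetical stand-in that selects on a score alone; any real trained model would be plugged in instead.

```python
# Minimal sketch of a demographic-parity check: the gap between the
# selection rates a model produces for two groups of candidates.

def selection_rate(model, candidates):
    picks = [model(c) for c in candidates]
    return sum(picks) / len(picks)

def parity_gap(model, group_a, group_b):
    # A large gap signals the model treats the groups differently.
    return abs(selection_rate(model, group_a) - selection_rate(model, group_b))

# Hypothetical model that ignores group membership and uses only a score.
fair_model = lambda c: c["score"] > 0.5

# Two groups with identical score distributions (illustrative data).
group_a = [{"score": s / 10} for s in range(10)]
group_b = [{"score": s / 10} for s in range(10)]

gap = parity_gap(fair_model, group_a, group_b)
print(f"Parity gap: {gap:.2f}")  # prints "Parity gap: 0.00"
```

A model trained on the biased history from the earlier example would show a large gap here, which is exactly the kind of signal a validation step should surface before the tool is put in front of recruiters.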
Vast amounts of data are created in today’s connected world, and hopes are high for what this data can help us achieve. Computing power has become so cheap that AI solutions can run even on smaller devices; there is no need for supercomputers to process those large amounts of information.
But it is not as simple as presenting all this data to an AI system and telling it: learn this! Sensible people must still supervise it and make sure the model becomes a well-behaved tool. Our biased minds are reflected in the machines’ behaviour, and it is our job to teach AI right from wrong. The fact that men have been appointed to most of the technical jobs at Amazon in the past does not mean they are more suitable; it means people are biased, and this bias has created the data that taught the AI that this is the way it is supposed to be.