Artificial Intelligence (AI) is moving at quite a pace, and it sparked a really interesting online debate the other day about its applications. Specifically, the question was raised about self-driving cars and how they make decisions: when a self-driving car crashes, who should die?
This is a question explored in depth by researchers from the MIT Media Lab, who analysed more than 40 million responses to an experiment they launched in 2016, revealing very different attitudes across the world.
The basic idea was to take the age-old moral dilemma of who should die in a crash and update it. The dilemma is one that has been tested before: if you're driving a train, you can either continue and kill the five workers on the track ahead, or swerve onto a side track and kill the one person standing there. What choice do you make?
In the new example, you are programming a self-driving car's AI decision-making, and the car can see that it will kill a mother and daughter on a pedestrian crossing ahead. Does it continue, or does it swerve and kill the passenger in the car?
Many more scenarios were presented to participants in the research, with the complete results published in the journal Nature. Generally, people preferred to save humans rather than animals, to spare as many lives as possible, and to save the young over the elderly. There were also smaller trends towards saving women over men, those of higher status over poorer people, and pedestrians over passengers.
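To make that concrete, here is a deliberately crude sketch of how survey preferences like these could, in principle, be encoded as weights in a decision rule. Every weight and field name below is invented for illustration; nothing here comes from the MIT study or from any real vehicle's software.

```python
# Toy sketch only: invented weights loosely following the direction of the
# survey findings (humans over animals, more lives, young over old,
# pedestrians over passengers). Not the study's method, not real AV code.
WEIGHTS = {
    "human": 1.0,
    "animal": 0.2,
    "young_bonus": 0.3,
    "pedestrian_bonus": 0.1,
}

def outcome_score(spared):
    """Score one possible outcome by summing weights for everyone it spares."""
    score = 0.0
    for person in spared:
        score += WEIGHTS["human"] if person["species"] == "human" else WEIGHTS["animal"]
        if person.get("age", 99) < 18:
            score += WEIGHTS["young_bonus"]
        if person.get("role") == "pedestrian":
            score += WEIGHTS["pedestrian_bonus"]
    return score

# The example from above: swerving spares the mother and daughter on the
# crossing, continuing spares the passenger in the car.
spare_if_swerve = [
    {"species": "human", "age": 35, "role": "pedestrian"},
    {"species": "human", "age": 8, "role": "pedestrian"},
]
spare_if_continue = [
    {"species": "human", "age": 40, "role": "passenger"},
]

choice = "swerve" if outcome_score(spare_if_swerve) > outcome_score(spare_if_continue) else "continue"
print(choice)  # prints "swerve" under these made-up weights
```

The point is not the particular numbers, but that someone, somewhere, has to choose them.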
“Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now,” the team said in its analysis.
“Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them.”
You can find out more about their work at the Moral Machine website.
Meantime, if you didn’t see it, Amazon shut down one of their AI systems recently as it wasn’t working properly. It was meant to help with hiring and screening applications for jobs, but turned out to be biased against women. This was caused by the computer models being based on observing patterns in resumes submitted to the company over the previous ten-year period. Most came from men, a reflection of male dominance across the tech industry, and so the system assumed that men should be prioritised for hiring.
Just goes to show: systems are only as good as the people who programme them and the data they learn from. This is why we need new job roles such as trainers, explainers and sustainers.
Chris M Skinner