I have several posts now that have made the case that our implicit bias seeps into our software in playful, harmful, and sometimes offensive ways. It is difficult for even the most well-meaning human being to avoid this; even with a reasonable level of compassion and sensitivity, we tend to view the world through the lens we are given and not from the viewpoint of our more vulnerable citizens.
This should be of concern because, at some point, our society is going to ask an algorithm to make a series of moral choices that will affect us and the lives of people we care about, so ensuring we treat people in a just and equitable manner needs to be a premium consideration even in programming. This is especially true when we consider that these choices could be executed with the speed, scale, and reach of the cloud.
Crowd-sourcing morality
It was interesting, then, to see MIT create the Moral Machine as a way to get us thinking about the choices that are going to confront self-piloting vehicles in the very near future. For example, if your car is forced to make a decision to save you, but in doing so would injure, maim, or kill a pedestrian, what should it do? Is your life the most important consideration? Does it matter how old the pedestrian is, or whether they are law-abiding?
These kinds of thought experiments have been around for years, and our own responses to them are, I am sure, as varied as humanity itself. The Moral Machine has the goal of highlighting the following ideas (emphasis mine):
Recent scientific studies on machine ethics have raised awareness about the topic in the media and public discourse. This website aims to take the discussion further, by providing a platform for 1) building a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas, and 2) crowd-sourcing assembly and discussion of potential scenarios of moral consequence.
My curiosity got the better of me, so I decided to take the test; my results were as follows (see the sketch after the list for how preferences like these might end up encoded in software):
- Saving more lives was important (numerically speaking)
- I was evenly split between protecting passengers and pedestrians
- I had a slight preference for folks upholding the law
- I tended to avoid intervention (altering the car's trajectory)
- As it related to gender, I tended to save women
- I did not show any inclination toward saving pets
- I seemed to prefer to save older people over younger ones (which surprised me)
- I did not appear to have a preference for fit people
- I did not seem to care about the social value of the individual (doctors and executives vs. the homeless)
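Reading those results back, they could almost pass for a specification. As a purely hypothetical sketch (the Moral Machine reports preferences; it does not publish any scoring formula, and every name and weight below is my own illustration), here is how preferences like the ones above could end up codified as a utility function inside a vehicle's decision logic:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible result of a stay-the-course vs. swerve decision."""
    humans_saved: int            # people who survive under this choice
    pets_saved: int              # pets who survive under this choice
    saves_passengers: bool       # True if the car's occupants are spared
    victims_lawful: bool         # True if those harmed were obeying the law
    requires_intervention: bool  # True if the car must alter its trajectory

# Illustrative weights loosely mirroring the preferences listed above;
# none of these numbers come from the Moral Machine itself.
WEIGHTS = {
    "human_life": 1.0,           # saving more lives mattered most
    "pet_life": 0.0,             # no inclination to save pets
    "passenger_bias": 0.0,       # even split between passengers and pedestrians
    "lawfulness_penalty": 0.2,   # slight penalty for harming the law-abiding
    "intervention_penalty": 0.3, # preference to avoid altering the trajectory
}

def score(o: Outcome) -> float:
    """Higher scores mean this (biased) policy prefers the outcome."""
    s = WEIGHTS["human_life"] * o.humans_saved
    s += WEIGHTS["pet_life"] * o.pets_saved
    if o.saves_passengers:
        s += WEIGHTS["passenger_bias"]
    if o.victims_lawful:
        s -= WEIGHTS["lawfulness_penalty"]
    if o.requires_intervention:
        s -= WEIGHTS["intervention_penalty"]
    return s

def choose(outcomes: list[Outcome]) -> Outcome:
    """Pick the highest-scoring outcome; the bias lives entirely in the weights."""
    return max(outcomes, key=score)
```

The point is not the particular numbers; it is that every bullet above becomes a constant somebody has to choose, and that constant then runs with the speed, scale, and reach of the cloud.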
The Moral Machine also lets you design your own scenarios around the types of people involved (criminals, professionals, children, pets, etc.), legal complications (crossing illegally), and how the scenario ends (death, injury, or an uncertain fate).
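To make those dimensions concrete, here is one small, hypothetical way a user-designed scenario could be represented; the field names are my own invention, since the Moral Machine does not expose any data format:

```python
from dataclasses import dataclass, field
from enum import Enum

class Fate(Enum):
    """How a designed scenario resolves for the group at risk."""
    DEATH = "death"
    INJURY = "injury"
    UNCERTAIN = "uncertain"

@dataclass
class Character:
    kind: str                        # e.g. "criminal", "doctor", "child", "dog"
    crossing_illegally: bool = False

@dataclass
class Scenario:
    """One user-designed dilemma: who is at risk under each choice, and how it ends."""
    stay_course_victims: list[Character] = field(default_factory=list)
    swerve_victims: list[Character] = field(default_factory=list)
    ending: Fate = Fate.UNCERTAIN

# Example: staying the course harms a jaywalking pedestrian,
# while swerving injures a passenger instead.
example = Scenario(
    stay_course_victims=[Character("pedestrian", crossing_illegally=True)],
    swerve_victims=[Character("passenger")],
    ending=Fate.INJURY,
)
```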
I do find this fascinating, but there is a risk that these choices end up in the hands of unqualified representatives who write laws that confirm and codify their own biases; either way, seeing how those choices are made will reveal much about our society.