In this post, Principal Consultant/ADM Larry Duff discusses some of the ethical challenges in Artificial Intelligence.
Artificial intelligence has been a dream of computer scientists for many years. I remember my early days of programming, when I had a Commodore PET. I was excited to have a book of programs; I typed them in and saved them to my tape drive. One of those programs was ELIZA. I could type in questions and “she” would answer me. I had my own HAL 9000! If I had understood the code I typed in (I was only 10), I would have seen it was a rudimentary natural language processor with canned responses, a cheap knockoff of Joseph Weizenbaum’s original ELIZA, written earlier at MIT, which for its day was advanced.
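For a sense of how shallow that “intelligence” was, here is a minimal sketch of the idea in C# (my own illustration, not the original BASIC): keyword matching plus canned replies, with no understanding behind it.

```csharp
using System;
using System.Collections.Generic;

// A minimal ELIZA-style responder: scan the input for keywords and
// return a canned reply. There is no understanding here, just lookup.
class MiniEliza
{
    static readonly Dictionary<string, string> CannedReplies = new Dictionary<string, string>
    {
        { "mother",   "Tell me more about your family." },
        { "sad",      "Why do you think you feel sad?" },
        { "computer", "Do computers worry you?" }
    };

    static string Respond(string input)
    {
        foreach (var pair in CannedReplies)
        {
            if (input.ToLowerInvariant().Contains(pair.Key))
                return pair.Value;
        }
        return "Please go on."; // default when no keyword matches
    }

    static void Main()
    {
        Console.WriteLine(Respond("My mother bought me a computer."));
        // -> "Tell me more about your family."
    }
}
```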
It’s not just computer scientists who dream of a computer that helps them; the average person’s imagination has been piqued for years, whether they think of it in those terms or not. Audiences have been going to movies about Artificial Intelligence for decades:
- 2001: A Space Odyssey – HAL 9000 (1968)
- Star Wars – C3PO (1977)
- Blade Runner – Nexus-6 (1982)
- Terminator – SkyNet (1984)
- Star Trek Generations – Data (1994)
- The Matrix (1999)
- Resident Evil – Red Queen (2002)
- I, Robot – VIKI (2004)
We use the words, we dream of it, but what really constitutes Artificial Intelligence? According to Stanford Professor John McCarthy, who coined the term ‘Artificial Intelligence’ in 1955:
“It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”
At what point do we jump from an algorithm to true intelligence? Let’s look at this from a different angle: what makes you or me intelligent? How about another definition:
intelligence: the ability to learn or understand or to deal with new or trying situations : reason;
For software, I’d break it down to “learning and performing actions that aren’t in the original programming.” Today there are no true artificial intelligence machines, but we continue to get closer with the development of neural networks. Will Quantum Computing become so powerful that algorithm-driven predictive software is indistinguishable from true Artificial Intelligence? We listed plenty of examples of Artificial Intelligence from the fantasy world above, and we are still a long way off from those examples.
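To make that definition concrete, here is a toy sketch of “learning”: a single perceptron whose final behavior (its weights) appears nowhere in the source code; it emerges from the training examples. This is purely illustrative; real neural networks are vastly larger.

```csharp
using System;

// A single perceptron learning the logical AND function.
// The learned behavior (the weights) is not written in the program;
// it is derived from the training examples.
class Perceptron
{
    static void Main()
    {
        double[][] inputs = { new[] {0.0, 0.0}, new[] {0.0, 1.0},
                              new[] {1.0, 0.0}, new[] {1.0, 1.0} };
        double[] targets = { 0, 0, 0, 1 };      // AND truth table
        double w1 = 0, w2 = 0, bias = 0, rate = 0.1;

        for (int epoch = 0; epoch < 20; epoch++)
        {
            for (int i = 0; i < inputs.Length; i++)
            {
                double output = (w1 * inputs[i][0] + w2 * inputs[i][1] + bias) > 0 ? 1 : 0;
                double error = targets[i] - output;
                // Nudge the weights toward the correct answer.
                w1 += rate * error * inputs[i][0];
                w2 += rate * error * inputs[i][1];
                bias += rate * error;
            }
        }

        Console.WriteLine($"1 AND 1 = {((w1 + w2 + bias) > 0 ? 1 : 0)}"); // 1
        Console.WriteLine($"1 AND 0 = {((w1 + bias) > 0 ? 1 : 0)}");      // 0
    }
}
```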
With advances in processing power outpacing Moore’s Law in recent years, the time is right for companies like Microsoft, Alphabet, IBM, and many others to ramp up their push into Artificial Intelligence. AI is being used in real-world applications today, some of them critical to our society. Here are some things happening behind the scenes that affect you, whether you know it or not:
- Qualifying for a loan from a bank
- Assessing accuracy of medical diagnosis
- Determining who gets recommended for a job
How about things you are seeing every day as a consumer?
- Deep Blue – IBM’s chess computer
- Personal Assistants – Siri, Alexa, Cortana
- Media Matching – Pandora, Netflix, Spotify
- Smart Home – Nest, Wink, Honeywell
- Autonomous Vehicles
Famous science fiction writer Isaac Asimov wrote the book I, Robot (which was made into the movie referenced above) and the subsequent Robot series. Within the stories, the big ethical question was answered by a set of laws for robots (sketched in code after the list):
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
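The interesting detail in Asimov’s laws is the precedence: each law yields to the ones above it. Here is a purely hypothetical sketch of that ordering as code, nothing like a real safety system, just the lexicographic idea:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: Asimov's Three Laws as a lexicographic preference.
// Candidate actions are compared on human safety first, obedience second,
// self-preservation last. All names and flags here are invented.
class ThreeLaws
{
    record Candidate(string Name, bool HarmsHuman, bool ObeysOrder, bool PreservesSelf);

    static Candidate Choose(IEnumerable<Candidate> candidates) =>
        candidates
            .OrderBy(c => c.HarmsHuman)              // First Law dominates everything
            .ThenByDescending(c => c.ObeysOrder)     // Second Law, unless it conflicts
            .ThenByDescending(c => c.PreservesSelf)  // Third Law, lowest priority
            .First();

    static void Main()
    {
        var options = new[]
        {
            new Candidate("follow order into danger", HarmsHuman: false, ObeysOrder: true,  PreservesSelf: false),
            new Candidate("refuse and stay safe",     HarmsHuman: false, ObeysOrder: false, PreservesSelf: true),
            new Candidate("follow harmful order",     HarmsHuman: true,  ObeysOrder: true,  PreservesSelf: true),
        };

        // The Second Law outranks the Third: the robot obeys even at cost to itself.
        Console.WriteLine(Choose(options).Name); // follow order into danger
    }
}
```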
At what point do we need a set of laws like this for Artificial Intelligence? Who will set these laws? Who will enforce them? Who is responsible for AI programs when they go bad? In Terminator 2: Judgment Day, they blamed the engineer who built SkyNet and thought eliminating him would prevent Judgment Day.
We are at the beginning of a potentially very deep rabbit hole of the ethics of Artificial Intelligence. Let’s say you ask your favorite personal assistant for a restaurant recommendation. It recommends Jane’s Bistro, not because it’s highly rated, but because Jane’s Bistro paid to be at the top of the recommendations. You didn’t ask for the top rated, so technically nothing wrong was done. But is it ethical, or is it buyer beware? There are ways this could get out of control: doctors, lawyers, contractors. What happens when one of these recommendations is horrible and causes harm? Does the maker of the personal assistant bear any responsibility? At some point soon, if not already, there is going to be liability attached to these “Artificial Intelligence” machines.
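To see how easily this can happen, consider a hypothetical ranking function where a sponsorship boost is quietly mixed into the score. The names, weights, and the SponsoredBoost parameter below are all invented for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical recommendation ranking. "SponsoredBoost" is a made-up
// parameter showing how paid placement can silently outrank quality.
class RestaurantRanker
{
    record Restaurant(string Name, double Rating, double SponsoredBoost);

    static IEnumerable<Restaurant> Recommend(IEnumerable<Restaurant> all) =>
        all.OrderByDescending(r => r.Rating + r.SponsoredBoost);

    static void Main()
    {
        var restaurants = new[]
        {
            new Restaurant("Top Rated Grill", Rating: 4.8, SponsoredBoost: 0.0),
            new Restaurant("Jane's Bistro",   Rating: 3.9, SponsoredBoost: 1.5), // paid
        };

        // Jane's Bistro comes out on top, and the user never sees why.
        Console.WriteLine(Recommend(restaurants).First().Name); // Jane's Bistro
    }
}
```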
Let’s go deeper: an autonomous automobile hits another car. Who is liable? The owner of the car, the manufacturer… how about the software engineer who wrote the program?
When we move these projects toward artificial intelligence, what new roles do we introduce? Will we need an ethics team? How do you teach ethics to software engineers? Do you even try? As part of its AI research, Microsoft has even created a separate committee to help: the AI and Ethics in Engineering and Research (AETHER) Committee.
There is a twin issue to ethics that is just as hard to solve: bias. Where you live, how you grew up, your friends, and your co-workers all shape your opinions, which lead to conscious and unconscious biases. Like it or not, those biases will slip into the code for AI. Biases are also introduced into AI through the data it is trained on. How can you program out bias when you don’t even know which bias will be the ghost in the machine?
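Here is a toy illustration of how bias rides in on the data rather than the code. The “model” below is a neutral algorithm, but trained on skewed historical loan decisions it simply reproduces the skew; every name and number is invented:

```csharp
using System;
using System.Linq;

// Invented data for illustration: a "neutral" model that just learns
// the approval rate per neighborhood from historical decisions.
// The code contains no bias; the training data does, so the model does too.
class BiasedByData
{
    record Applicant(string Neighborhood, bool Approved);

    static void Main()
    {
        var history = new[]
        {
            new Applicant("Northside", true),  new Applicant("Northside", true),
            new Applicant("Northside", true),  new Applicant("Northside", false),
            new Applicant("Southside", false), new Applicant("Southside", false),
            new Applicant("Southside", false), new Applicant("Southside", true),
        };

        // "Training": approval rate per neighborhood.
        var model = history
            .GroupBy(a => a.Neighborhood)
            .ToDictionary(g => g.Key, g => g.Average(a => a.Approved ? 1.0 : 0.0));

        // "Prediction": approve when the learned rate exceeds 0.5.
        foreach (var (hood, rate) in model)
            Console.WriteLine($"{hood}: approve = {rate > 0.5} (historical rate {rate:P0})");
        // Northside: approve = True (75%); Southside: approve = False (25%)
    }
}
```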
Did you know that civil engineers typically must hold a Professional Engineer license in the state where they practice? When a new building or bridge is designed, they take responsibility for its correctness. Increasingly, mechanical engineers and even electrical engineers are being licensed. Should we be looking at licensing software engineers? And what do you license them on around AI: bias-free code… ethical code?
When you let loose that next chat bot, you might need to think about a whole new set of requirements. You have to determine your next steps; I hope I’ve given you food for thought about those new requirements. I can help you get technically ready. Check out Microsoft’s AI Platform. Start with a Bot, add some Machine Learning, then jump into Cognitive Services. If you’re looking for a place to start, try the Bot Builder SDK for .NET samples. All the samples are available on GitHub and really easy to get running; a minimal sketch of what a first bot looks like follows below. Looking forward to your next solution!
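If you want a feel for how small the starting point is, here is a minimal echo-bot sketch following the Bot Builder SDK v4 for .NET pattern (see the GitHub samples for the hosting and registration code that goes around this class):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

// A minimal echo bot: every incoming message is sent back to the user.
// Based on the Bot Builder SDK v4 ActivityHandler pattern.
public class EchoBot : ActivityHandler
{
    protected override async Task OnMessageActivityAsync(
        ITurnContext<IMessageActivity> turnContext,
        CancellationToken cancellationToken)
    {
        var reply = $"You said: {turnContext.Activity.Text}";
        await turnContext.SendActivityAsync(MessageFactory.Text(reply), cancellationToken);
    }
}
```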
Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality. Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.