
Artificial Intelligence And Global Ethics



by Shane Cragun, founding partner at SweetmanCragun Group and co-author of “Reinvention: Accelerating Results in the Age of Disruption”

Two years ago, Boston Dynamics released a video showing a 6-foot, 320-pound humanoid robot named Atlas running freely through the woods. Imagine the reaction of picnickers who had gone there to escape their worldly cares. Atlas is one of many artificial intelligence innovations in development around the world that would take your breath away, yet often fly under the radar.

On the eve of AI commercialization, we’d like to put forth a bold proposition: if some of tomorrow’s AI innovations are used unwisely, there will be unintended consequences that could have been avoided had we entered into informed dialogue beforehand.

Stephen Hawking, Elon Musk, and Bill Gates are on the same page when it comes to the pros and cons of AI. In recent discussions, all three have pondered the implications for humanity if some artificial intelligence applications actually become more intelligent than humans.

The Pentagon is currently studying the repercussions of AI missile systems under development that can make life-and-death decisions on their own. Perhaps there’s a chance that, in some cases, we won’t be able to control our own AI creations in the future. What happens if army robots call in drone strikes on civilian areas based on faulty data or flawed algorithms?

But there are other unsettling implications of AI to ponder. One is watching artificial intelligence solutions replace jobs at such a rate that the United States becomes a welfare state.

A recent Los Angeles Times article suggested a strong likelihood that robots – or driverless trucks – will eventually replace 1.7 million truckers over the next decade. This technology is being tested now, as I experienced a few months ago while driving north on Highway 101 from Los Angeles to San Jose.

I approached an 18-wheeler in the fast lane that was going a bit slower than the traffic around it and flashed my lights. The truck swerved right to change lanes immediately, but too quickly; it overshot onto the shoulder, kicking up smoke and dust, before settling into the right lane.

My initial thought was to call the 1-800 number on the back of the trailer and report that the company had a potentially drunk driver on the road. But as I pulled up alongside the big semi, I noticed that the driver’s seat was empty. No human was driving the truck. It felt like a “Twilight Zone” moment and left me feeling queasy.

To those currently working on AI solutions and applications with hopes of commercializing them globally, I’d like you to think deeply about the following question: just because you have the technological prowess to create your awesome AI solution, does that mean, once you take a step back and ponder its implications, that you should?

There are countless examples of the notion that “just because we can doesn’t mean we should”:

  • Nuclear weapons: The U.S. and Russia have a combined 13,800 nuclear bombs, many of them ten times more powerful than those dropped on Japan. Entire countries could be wiped out and whole continents contaminated with radiation.
  • Human cloning: This capability exists, but the ethical question being asked is this: is it morally right to program the type of child we want in absolutely every facet of their DNA and their being?
  • Geo-engineering: These solutions will be able to manipulate large-scale environmental processes that affect the entire earth’s climate.
  • Cyberbug drones: As small as insects, these drones will be able to enter any location unnoticed, record sounds, gather information, and create privacy nightmares.

Let’s assume AI replaces almost 2 million hardworking truck drivers in America. Who wins? Trucking companies and their shareholders win financially. And, we can assume, the Silicon Valley oligarchs responsible for AI creations will applaud themselves for their technological prowess. But millions of Americans will go on the dole.

Elon Musk just announced a $1 billion crusade to influence the right use of AI because of his fears about the path this is all taking. And he has inside knowledge. He suggests an AI apocalypse could happen if new technologies aren’t used wisely.

Robotics and artificial intelligence are wonderful things when paired with humans in an attempt to offer better solutions and address global problems. A team approach, if you will. Think of doctors and AI robots operating on patients in tandem, and the resulting improvement in surgical outcomes.

Let’s remember that humans will always possess “genuine intelligence” that their “artificial intelligence” friends won’t. And humans will always have a few things machines won’t – a heart, a conscience, and a sense of the human condition.

Perhaps there’s an additional question that those involved in AI commercialization ought to ask themselves before committing to launch: to what degree will our new AI product improve the human condition and benefit humanity as a whole?

Stephen Hawking said that, in the end, “artificial intelligences will be either the best or worst thing for humanity.” Let’s begin engaging in honest discussions while we still have choices and can still influence the best outcomes.


Shane Cragun

Shane Cragun is a founding partner at SweetmanCragun Group, a global management consulting, training, and coaching firm, and co-author, with Kate Sweetman, of “Reinvention: Accelerating Results in the Age of Disruption”. Cragun has worked as an internal change agent within a Fortune 500 high-tech firm, a line executive at FranklinCovey, and a global external management consultant. He also co-authored “The Employee Engagement Mindset”, has presented a TEDx talk in Silicon Valley, and has spoken at business conferences worldwide.