In an earlier column in this space, I wrote about the dominant technology trends of 2017 and beyond. Today, I want to address what will be one of the most dominant technology trends in human history.
Artificial intelligence is being mentioned everywhere in the media, as it is being rapidly developed and deployed across the U.S. and many other developed countries. In the last 20 years, AI has gone from science fiction to scientific and economic reality.
In 1997, IBM’s Deep Blue computer beat world chess champion Garry Kasparov. In 2011, IBM’s Watson beat the two top human champions of “Jeopardy!” This was considered especially significant because Watson had to grasp the show’s contextual trick of supplying questions to answers while also commanding a vast store of facts across all aspects of human knowledge. The third and perhaps most impressive AI achievement came a year ago, in March 2016, when Google DeepMind’s AlphaGo defeated Lee Sedol, a top South Korean professional, at the ancient Chinese game of Go.
Prior to that five-game match, Google explained that the number of possible board positions in Go exceeds the number of atoms in the observable universe. That means something like intuition is needed to compete. AlphaGo won the series 4-1. Going into the match, many observers considered an AlphaGo victory practically impossible. The reason it won is that we have moved from computers being explicitly programmed with data to computers being given learning algorithms, so that they now learn on their own.
Up to this point in the column, I have used the term artificial intelligence, as that is the generally used name for what is now more correctly known as machine learning. Machine learning is where the true development is going on. Machines are learning on their own.
Around the time of the Go competition last year, I looked up the word “intelligence” and saw its definition did not include the word “human.”
The definition of “artificial” was “man-made.” So, in its initial stages, humans were, in fact, creating machines that would think. This explained why the conversation about artificial intelligence was such a simple and predictable one and often included a fearful question: Are we creating a technology that will result in us having our lives ruled by computers and “robot overlords?”
This has been the consistent duality in discussions of this subject. It is a worthy conversation to have, but it frames the topic too narrowly and is now out of date. In addition, dualities are arbitrary constructs that humans create to simplify a complex universe. Using the term artificial intelligence puts a false name to the technology, ranking it as lesser than human intelligence because machines are merely machines and therefore artificial. That is a case of old words describing something brand new, much as people some 125 years ago described automobiles as “horseless carriages.” And of course we still measure engines’ output in horsepower.
The definition of machine learning is “the ability of a machine to improve its performance based upon previous results.”
That is now what machines do. As consumers, we were initially introduced to machine learning with Apple’s digital assistant Siri — not a great first impression. The current consumer leader in the category is Amazon’s Echo device with its Alexa digital assistant. Alexa keeps learning. It is real learning, not artificial. So machine learning creates ever more intelligence. Algorithms, not data, are what humans give to machines, enabling them to learn on their own.
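That definition can be made concrete with a toy illustration. The sketch below is a hypothetical example of my own, not how Alexa or any real product works: a program that is never told the rule behind its data, only how wrong each of its previous guesses was, and that improves its performance accordingly.

```python
# A toy "machine" that learns the slope w in the rule y = w * x
# purely from examples, improving after each previous result.

def learn_slope(examples, steps=200, lr=0.01):
    w = 0.0  # initial guess: knows nothing
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y      # how wrong the previous guess was
            w -= lr * error * x    # adjust the guess based on that result
    return w

# The data follows y = 3x, but the machine is never told so.
examples = [(1, 3), (2, 6), (3, 9)]
print(round(learn_slope(examples), 2))  # converges close to 3.0
```

The point of the toy is the loop: no person wrote the answer into the program; the program arrived at it by repeatedly measuring and correcting its own mistakes, which is the essence of the definition above.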
Why is this so important?
There are several deep answers to that question. But, for now, here is the immediate concern for businesses and the economy.
A study from Oxford University forecast that as much as 47 percent of jobs in America are at risk of being replaced by machine learning in the next 10 to 15 years. This should be the topic of discussion about jobs in our country, not the lame 20th-century political dialogue currently taking place. Think about the consequences of half of today's jobs disappearing by 2030. If we don't start to think about that reality and prepare for it, we will face major disruption and conflict.
The other reality is that all jobs are at risk. Those of us in white-collar occupations are perhaps complacently thinking that mainly factory and service jobs are at risk. Yes, robots first appeared in factories and, yes, a burger-flipping robot is being rolled out in short-order restaurants in California.
A recent report by Shelly Palmer, a leading technology analyst, listed the occupations most and least likely to be largely replaced by machine learning. Among the most likely:

- Report writers, journalists, authors and announcers
- Accountants and bookkeepers

Among the least likely:

- Pre-school and elementary school teachers
- Mental health professionals
So, what to do? First, it must be said that technology is neither good nor bad, it just is, and this is the major technology of today and tomorrow. Second, we know this is coming, so we have time to adapt, both individually and collectively.
The deeper issue is how humanity moves toward a future in which the human-machine relationship changes practically everything we now accept as reality in the marketplace and the economy. For the first time in our history, we will have to adapt to the psychological reality of co-existing with an equal or superior intelligence. That intelligence could, yes, replace us, but it could also free us to redefine how humans live, with work an ever smaller part of our lives.
Admittedly, this is a simple, somewhat alarmist view of this emerging technology. In future columns I will look at how machine learning may well reshape society and economics, and even influence human evolution.
More on that in the future. I am now going to go have a talk with Alexa.
[Note: A good portion of this column was originally published in the Sarasota Herald-Tribune.]