Advances in artificial intelligence have spawned a spirited debate about the future of man’s relationship to machines and the potential to change the character of human existence as we know it. We sat down with Joel Mokyr, Robert H. Strotz Professor of Arts and Sciences and professor of economics and history at Northwestern University, to discuss how we should think about artificial intelligence (AI) in the evolution of technology and what it could mean for both the economy and society.
Q: What is artificial intelligence?
A: AI is imbuing computers with something that mimics human intelligence, so they can recognize patterns from data and learn. Four or five years ago, Geoffrey Hinton and his team at the University of Toronto developed what is known as deep learning, which has made AI a reality.
You might think of this as allowing computers to double back on themselves to use solutions they have already developed as a basis for solving new problems and editing and writing their own software. This iterative process was a huge breakthrough (after many years of disappointment with AI) and created a massive potential for new applications.
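The learning-from-data idea described above can be sketched in miniature. The example below is not deep learning itself (which stacks many layers of such units), but a single artificial neuron trained with the classic perceptron rule; it illustrates the same core principle of iteratively adjusting internal parameters to reduce errors on examples. The task (learning the logical AND pattern) and all names are illustrative choices, not anything from the interview.

```python
# A single artificial neuron that learns the logical AND pattern from
# labeled examples, using the classic perceptron update rule.
# Deep learning stacks many such units in layers, but the core idea --
# nudging weights toward fewer errors, pass after pass -- is the same.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights w1, w2 and bias b from labeled (inputs, target) pairs."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = target - pred          # zero when the prediction is right
            w1 += lr * err * x1          # nudge each weight toward the target
            w2 += lr * err * x2
            b  += lr * err
    return w1, w2, b

# Truth table for logical AND: the pattern to be learned from data.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)

for (x1, x2), target in data:
    pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
    print((x1, x2), "->", pred)   # reproduces the AND truth table
```

After a handful of passes over the four examples, the neuron's weights settle into values that classify every input correctly, without anyone having programmed the AND rule explicitly.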
Q: How unusual was it that AI happened to develop this way?
A: This kind of dynamic happens quite a bit with technology. Electricity was already known to have great potential by the early 19th century, and yet widespread applications did not emerge until the era of Thomas Edison in the 1880s.
Another example of a technology that is known to have large potential but cannot be made operational until a number of hard problems have been solved is quantum computing, where computer hardware can be designed to address “big data” problems using a revolutionary methodology. Quantum computing has the potential to lead to very fast simulations, which may help in fields such as computational physics.
Q: In your writing, you make an important distinction between the knowledge (epistemic) base that gives rise to applications of technology versus the applications (or techniques) themselves. Is AI a knowledge base or an application?
A: First, maybe I should explain what a knowledge base is. A knowledge base is a body of related facts, natural laws and regularities, theories and principles. Knowledge bases drive innovation and applications, although historically many techniques were found accidentally.
AI is a knowledge base. It is powerful in part because the knowledge base that underpins it is what I would call “tight.” Knowledge is tight when many people believe in it with a great degree of confidence. Thus, the embedded knowledge can feed technological progress faster than knowledge bases that are more controversial.
The math and physics underlying AI are quite tight. Engineers don’t have to spend a lot of time arguing about how to use it because the knowledge is already broadly agreed upon (as is often the case with mathematics). Applications are “recipes” that are based on this knowledge.
Siri is a great example of an application of very advanced programming. She recognizes my voice — although she still trips over my accent sometimes. The AI in Netflix makes movie recommendations. In the past few years, AI has made a lot of progress, and it seems to me that it is the rapid pace at which AI applications are developed that spurs so much concern, rather than the idea itself.
Q: What has contributed to the remarkable pace of AI development?
A: The evolution of technology doesn’t run in just one direction. Knowledge leads to innovation. But applications also provide their own insight, tools, and inspiration, which lead to more scientific knowledge and eventually innovation. There is, as it were, a positive feedback loop from theory to applications and back. In that way, technology pulls itself up by the bootstraps.
Until 1903, for example, the idea that humans could fly machines heavier than air was controversial, with many top scientists saying only dirigibles would work. But when the Wright Brothers flew, scientists immediately got to work on crafting the theory that explained it.
The result was the field of aerodynamics, and widespread experimentation with flying technology accompanied it. Nothing will make a science tighten faster than seeing a successful application emerge.
Q: Speaking of controversy, we want to ask about the issue of man versus machine that comes up quite often when talking about AI. Will humans be replaced by intelligent machines? Could they control us at some point? As machines become more intelligent, will humans lose some of their idiosyncrasies?
A: People worry about a sort of Frankenstein phenomenon: we create machines that we cannot control and that may destroy us, a common literary theme. This has been a fear for centuries. Nowadays, we need only look to nuclear power or genetically modified organisms to see examples of legitimate concern about new technologies.
Some technologies are scarier than others. When the first vaccinations became available in 1798, people were inoculated with material taken from cowpox sores to prevent smallpox. Some people were afraid that they would grow to look and sound like cows. Napoleon had to make smallpox vaccination mandatory because his citizens were reluctant to be vaccinated.
It is interesting to see AI concerns emerge even among some of the smartest innovators such as Bill Gates and Elon Musk. Personally, I’d say let’s get there first.
Q: You sound pretty optimistic that AI is just another step in the long technological journey.
A: It is true. I am a techno-optimist in a debate with techno-pessimists.
There are two types of the latter. There are those like the philosopher Nick Bostrom at Oxford and the great physicist Stephen Hawking who think there is a danger that robots and AI will take over and humans will be relegated to an inferior species. The other type, such as my colleague Robert Gordon, thinks none of this new technology will amount to anything and the best is behind us.
These two groups can’t both be right. But both can be wrong.
One great concern with AI is that machines will mimic human intelligence — but innovation doesn’t work that way. Human creativity and discovery are driven by intuition, curiosity, emotion, and ambition — not only intelligence. Above all, machines may be better at giving fast and accurate answers, but will they be able to pose the questions?
Machines may have intelligence, but not testosterone. They’ll remain tools of human beings, but they will not behave like us. We cannot create human-like beings, only machines that do what humans do, often better. In that sense, a robot is no different from a steam hammer.
Q: Does anything worry you about AI?
A: The real threat is not the technology but the people who will use the technology and their purpose. AI is a tool in the service of people, no different from firearms or nuclear power. It can be used or abused.
The people owning and manipulating the robots may grab hold of all the world’s wealth, or establish a new type of totalitarianism — more like we see emerging in Turkey and Russia. These autocrats will be very different from historical dictators like Hitler and Stalin — more into controlling social media and repressing critical thought and less into executions. There is a serious concern that this new technology can threaten democracy as we know it.
Q: In your essay “The Past and the Future of Innovation,” you talk about the changing nature of work and the long decline in the factory system. Will AI-enabled technologies reduce the number and types of jobs, or change them? Will “work” become a thing of the past?
A: Economic historians like to point out that the idea of a “job” is a relatively recent invention. People in the 18th century worked, but the vast majority didn’t have a “job.” The notion of a job as we think of it today came into existence in the 19th century with the Industrial Revolution and the factory system. By the 20th century most people had jobs.
People are now turning to the gig economy. In 50 years, we might move to a world where relatively few people have a job as we know it today. People will work where and when they want. In a sense, we would be going back in the direction of the 18th century model, only with much more choice.
If we play it right, individual welfare will be much higher. Gone would be cramped office cubicles, rush hours and the resources we waste on commuting, not to mention the accompanying pollution.
Q: Are there other disruptive technologies that you consider as important or more important than AI?
A: It’s hard to do a ranking, but I’d keep my eye on additive manufacturing (a.k.a. 3D printing) and materials science. Manufacturing in the future may be very different from anything we have seen before.
In addition, what physics was for the 20th century, biology will be for the 21st. Advanced techniques in genomics — such as gene-editing through novel CRISPR-Cas9 techniques — will let us design the plants or animals we want.
We will need that, because of climate change. As I see it, climate change is a much greater existential threat than AI, and one way of coping is to develop plants and animals that can withstand these new weather regimes. We have yet to wean ourselves from hydrocarbon fuels; meanwhile, the sea level is rising and oceans are becoming more acidic.
Incidentally, climate change is an example of knowledge that is still relatively “untight” — there is still controversy about whether the planet is indeed warming due to anthropogenic greenhouse gases. The scientific community seems to be moving toward that consensus, but it is hard for the experts to get many politicians to go along, especially in the United States.
Q: It would seem economic evolution is taking us squarely into the arena of public goods and socialized services (e.g., climate, health), where the market incentive structure may not work. Is greater inequality the result?
A: It is undeniable that inequality is increasing when measured in terms of money income and wealth.
However, the pricing of new technology offers an interesting counterpoint. Many of the products of new technology we have today are free, like Google Maps or Wikipedia, though, of course, there is advertising.
It’s actually unclear that in our age being rich provides you with proportionally better access to better technology. Someone much wealthier than I am gets the same MRI when we go to the hospital. They may fly in a private jet, but the time it takes them to get to London is roughly the same as mine.
It’s always better to be rich than poor, but the gap in the distribution of access to high-tech resources is narrower than the gap in incomes.
Finally, think about the politics of inequality. In an open, democratic society, the rich cannot persist in denying services to the poor that the majority feel everyone should have a right to access. Countries such as China or Russia that cannot adapt to these realities will not remain stable in the long run.