In a recent provocative article in The Atlantic entitled “How the Enlightenment Ends,” Henry Kissinger provides much-needed context and definition to the issues and concerns around the explosive growth of artificial intelligence (AI), including warnings that we ignore at our peril. To cut to the chase, he makes the case that—philosophically, intellectually, in every way—human society is unprepared for the rise of AI.
He applies a very broad range to what fits the AI category, and it includes a lot more than various types of robots. The most transformational of these he calls the “self-learning” variety: machines that acquire knowledge by processes particular to themselves and apply that knowledge to ends for which there may be no category of human understanding. Yes, we already have these, and he raises questions about their implications that we haven’t yet even begun to pose, much less answer, such as: Would these machines learn to communicate with each other? How would choices be made among emerging options? Was it possible that human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them?
He views this period as the equivalent of the invention of the printing press in the 15th century, the technological advance that most altered the course of modern history, which allowed the search for empirical knowledge to supplant liturgical doctrine, and the Age of Reason to gradually supersede the Age of Religion.
But this period of innovation is even more dramatic, for it goes far beyond automation as we have known it. He notes that automation deals with means; it achieves prescribed objectives. By contrast, AI deals with ends; it establishes its own objectives.
Kissinger sees three areas of special concern: (1) that AI may achieve unintended results, mainly by misinterpreting human instructions due to lack of context; (2) that in achieving intended goals, AI may change human thought processes and human values, since it knows only one purpose: to win; and (3) that AI may reach intended goals, but be unable to explain the rationale for its conclusions. The bottom line is his most difficult yet important question about the world into which we are headed: What will become of human consciousness if its own explanatory power is surpassed by AI, and societies are no longer able to interpret the world they inhabit in terms that are meaningful to them?
The most compelling questions for me about the threats he describes are: Who decides? Who is responsible for the actions of AI? How should liability be determined for its mistakes? These questions are about accountability, and for that discussion he suggests a high-level presidential commission, which I would liken to the President’s Council on Bioethics under President George W. Bush, with leadership drawn from the best philosophers and other thinkers in the humanities, along with advisors in religion, the sciences, and the relevant technologies, to begin to formulate a national vision. And as he also suggests, the developers of these AI technologies should begin right away to incorporate these questions and answers into their developmental paradigms and engineering.
I have written several times over the years about the grand themes that will dominate the 21st century. My own view has consistently been that, despite the specter of radical Islam and the usual issues of war and peace, one issue would trump them all: the looming cultural, philosophical, and religious conflict over the meaning of human nature, driven by man’s growing capability to transform his very nature through advances in the biosciences and neurosciences. Well, after reading Kissinger’s very compelling essay, I’m about ready to add the advancement of AI technology as a close second on my list.
Bob Reynolds says
This is a very interesting article and I appreciate your including it in your newsletter. AI is clearly a subject on which most people (that includes me) are very uninformed and have no idea what should or can be done, or who has responsibility for making these decisions. Thanks very much.