Life 3.0: Being Human in the Age of Artificial Intelligence
MIT physics professor Max Tegmark has written an enthralling page-turner about where we stand in the field of AI and where we could end up if we let things go uncontrolled. Tegmark is a co-founder of the Future of Life Institute, a volunteer-run research and outreach organization in the Boston area that works to mitigate existential risks facing humanity, particularly the risk from advanced artificial intelligence (AI).
The story of A.I. is the story of intelligence: of life as it evolves from bacteria (Life 1.0) to humans (Life 2.0), who design their own software, to technology (Life 3.0), which designs both its hardware and its software. A.I. is the future of science, technology, and business, and it is already transforming work, laws, and weapons, as well as the dark side of computing (hacking and viral sabotage). This raises questions that we all need to address: What jobs should be automated? How should our legal systems handle autonomous systems? How likely is the emergence of superhuman intelligence, and is it possible to control it? How do we ensure that the uses of A.I. remain beneficial? What has A.I. brought us, and where will it lead us?
These are the issues at the heart of this book, whose unique perspective seeks ground apart from both techno-skepticism and digital utopianism. After reviewing current issues in AI, Tegmark considers a range of possible futures featuring intelligent machines, humans, or both. The fifth chapter describes a number of potential outcomes, such as altered social structures, the integration of humans and machines, and scenarios both positive and negative, from Friendly AI to an AI apocalypse.
He looks far ahead, exploring the looming prospect of "recursive self-improvement": AI systems that build ever smarter versions of themselves at an accelerating pace until their intellects surpass ours. Tegmark argues that the risks of AI stem not from malevolence or conscious ill will per se, but from misalignment between the goals of AI systems and those of humans.