Life 3.0: Being Human in the Age of Artificial Intelligence


MIT Physics Professor Max Tegmark has written an enthralling page-turner about where we stand in the field of AI and where we could end up if we let its development go uncontrolled. Tegmark is one of the founders of the Future of Life Institute, a volunteer-run research and outreach organization in the Boston area that works to mitigate existential risks facing humanity, particularly the existential risk from advanced artificial intelligence (AI).

A.I. is the future of science, technology, and business. What has A.I. brought us? Where will it lead us? The story of A.I. is the story of intelligence, of life as it evolves from bacteria (Life 1.0) to humans (Life 2.0), who design their own software, to technology (Life 3.0), which designs both its hardware and its software. We know that A.I. is transforming work, laws, and weapons, as well as the dark side of computing (hacking and viral sabotage), raising questions that we all need to address: What jobs should be automated? How should our legal systems handle autonomous systems? How likely is the emergence of superhuman intelligence? Is it possible to control superhuman intelligence? How do we ensure that the uses of A.I. remain beneficial?

These are the issues at the heart of this book and its unique perspective, which seeks a middle ground between techno-skepticism and digital utopianism. After reviewing current issues in AI, Tegmark considers a range of possible futures featuring intelligent machines, humans, or both. The fifth chapter describes a number of potential outcomes, including altered social structures, the integration of humans and machines, and scenarios both positive and negative, from Friendly AI to an AI apocalypse.

He looks far ahead and explores the looming prospect of “recursive self-improvement”: AI systems that build smarter versions of themselves at an accelerating pace until their intellects surpass ours. Tegmark argues that the risks of AI stem not from malevolence or conscious behavior per se, but from the misalignment of AI's goals with our own.


Gerhard Schimpf, the recipient of the ACM Presidential Award 2016, has a degree in Physics from the University of Karlsruhe. As a former IBM development manager and self-employed consultant for international companies, he has been active in ACM for over four decades. He was a leading supporter of ACM Europe, serving on the first ACM Europe Council in 2009. He was also instrumental in coordinating ACM's role as one of the founding organizations of the Heidelberg Laureate Forum, an annual meeting that brings laureates of computer science and mathematics together with students. Gerhard Schimpf is a member of the German Chapter of the ACM (Chair 2008 – 2011) and a member of the Gesellschaft für Informatik.
