Homo Deus: A Brief History of Tomorrow

4.5 out of 5

36,324 global ratings

Official U.S. edition with full color illustrations throughout.

NEW YORK TIMES BESTSELLER

Yuval Noah Harari, author of the critically acclaimed New York Times bestseller and international phenomenon Sapiens, returns with an equally original, compelling, and provocative book, turning his focus toward humanity’s future and our quest to upgrade humans into gods.

Over the past century humankind has managed to do the impossible and rein in famine, plague, and war. This may seem hard to accept, but, as Harari explains in his trademark style—thorough, yet riveting—famine, plague, and war have been transformed from incomprehensible and uncontrollable forces of nature into manageable challenges. For the first time ever, more people die from eating too much than from eating too little; more people die from old age than from infectious diseases; and more people commit suicide than are killed by soldiers, terrorists, and criminals put together. The average American is a thousand times more likely to die from bingeing at McDonald's than from being blown up by Al Qaeda.

What then will replace famine, plague, and war at the top of the human agenda? As the self-made gods of planet earth, what destinies will we set ourselves, and which quests will we undertake? Homo Deus explores the projects, dreams and nightmares that will shape the twenty-first century—from overcoming death to creating artificial life. It asks the fundamental questions: Where do we go from here? And how will we protect this fragile world from our own destructive powers? This is the next stage of evolution. This is Homo Deus.

With the same insight and clarity that made Sapiens an international hit and a New York Times bestseller, Harari maps out our future.

464 pages

Kindle

Audiobook

Hardcover

Paperback

Audio CD

First published September 3, 2018

ISBN 9780062464347


About the author

Yuval Noah Harari


Prof. Yuval Noah Harari (born 1976) is a historian, philosopher and the bestselling author of 'Sapiens: A Brief History of Humankind' (2014); 'Homo Deus: A Brief History of Tomorrow' (2016); '21 Lessons for the 21st Century' (2018); the children's series 'Unstoppable Us' (launched in 2022); and 'Nexus: A Brief History of Information Networks from the Stone Age to AI' (2024). He is also the creator and co-writer of 'Sapiens: A Graphic History': a radical adaptation of 'Sapiens' into a graphic novel series (launched in 2020), which he published together with comics artists David Vandermeulen (co-writer) and Daniel Casanave (illustrator). These books have been translated into 65 languages, with 45 million copies sold, and have been recommended by Barack Obama, Bill Gates, Natalie Portman, Janelle Monáe, Chris Evans and many others. Harari has a PhD in History from the University of Oxford, is a Lecturer at the Hebrew University of Jerusalem's History department, and is a Distinguished Research Fellow at the University of Cambridge’s Centre for the Study of Existential Risk. Together with his husband, Itzik Yahav, Yuval Noah Harari is the co-founder of Sapienship: a social impact company that advocates for global collaboration, with projects in the realm of education and storytelling.



Reviews

David Walter


5 out of 5 stars

Encompassing perspective

Reviewed in the United States on December 5, 2020

Verified Purchase

In Homo Deus, Yuval Noah Harari brings a broad and encompassing historical view to bear on big-picture trends, dynamics and questions that are front and center in our world today, and in doing so shines a multifaceted light on what makes people what they are, how we’ve come to the place we are now and where we’re going. Starting with the omnipresent elements that threatened the survival of our ancestors and seeing how we managed those threats, Harari draws in many different threads: why we created and value religion; why we create the technology we do to address certain problems, and how our technology has evolved; the role of science in our lives; why we seek happiness and what it’s like to achieve happiness; how our relationship with the natural world has changed as we changed from hunter-gatherers to agriculturalists; the nature of the human soul; that humans are at their core emotional organisms; and so much more. The observations Harari makes and the conclusions he draws aren’t necessarily the obvious ones, but his logic and perspective and how he comes to his rationale are clear, and make for compelling reading.

Accurately predicting where the human species is headed, predicting what the evolution of technology will create and predicting the political and social dynamics of future populations is impossible – there are simply too many variables being influenced by too many elements to know what’s going to happen in the future. But the predictions Harari makes in Homo Deus don’t emanate from a belief in what will happen as much as from a historian’s view of what could happen, given our story up to now and the elements we’re composed of. Homo Deus gives us a chance to deeply assess who we are and how we’ve come to where we are. It makes for fascinating, thought-provoking reading.

This is an exceptional book. If you are drawn to big-picture views of humans and our place in the world, this book is valuable. Whether you agree with all that’s written here or not, what Harari has drawn together will give you many threads of valuable input, and offers much to chew on.


P. Schuyler


5 out of 5 stars

Intriguing and profound

Reviewed in the United States on March 31, 2017

Verified Purchase

Sapiens is among my favorite books, and it's full of fascinating ideas. To my eyes, that book firmly established that it's the inter-subjective realm (our ability to create and share fictions) that gives humanity its power in nature, and essentially separates us from other animals. He makes an absolutely compelling and practical case for that. I managed to get through years of college philosophy without realizing anything even remotely similar to that. And how that has changed my worldview! Ever since I read that book I can better understand the madness of the modern world: our religions, nation-states, companies, and media personalities. So much was formerly incomprehensible: people glued to their cell phones and Facebook profiles, throngs chanting medieval rituals on Sunday, wild celebrity worship, bizarre legal and religious doctrines, bureaucracy and its true power. All these mysteries are suddenly clear and connected thanks to Sapiens.

Homo Deus summarizes the fundamentals of Sapiens in the first half of the book. Then it goes to dramatic new places, offering a projection of and warning about modern technologies and trends. To me his writing is always carefully and reasonably articulated, and he states plainly when and where he is speculating. Sure, he draws many extrapolations forward, but that's the point of this book! When he presupposes, he admits it as such, exactly as he did in Sapiens. If the 20th century really was a war among Humanist sects (as he contends)...then the advances of 21st-century science and technology are beginning to chip away at what has been assumed to be our sacred and individual human essence. That's an idea with major implications. Do you agree that Humanism is the modern world's primary underlying religion and that it is now (possibly) in danger? After some consideration, I agree with the notion, and also that it seems to be at risk as new discoveries chip away at the sacred notion of self. Everything that underpins the modern world, from consumerism (the customer is always right) to our political system (democratic voting) to our psychology (do what feels right), is based on the assumption that the 'self' is irreducible. But what if that 'self' isn't so clear or autonomous? It appears less so every day, as computer/person hybrid thinking becomes more common (think GPS navigation), and as new understandings emerge about what makes us, well...US. Meanwhile, computer AI advances accelerate at an insane pace, doing things declared impossible only months earlier. Medicine does the same. New, cheaper DNA sequencing and practical DNA splicing/editing reveal the mechanics that underlie our physiology and psychology. And hey, we're on the verge of 3D printing organs!

Even without something like an AI consciousness emerging, the fact is that most of what we do really can be off-loaded to more efficient computer algorithms. When, today, the most powerful entities in the world are not people but rather inter-subjective entities like Google, will our children's world still be ruled by the 'sacred' self? Can that 'sacred' self be defined clearly, or can it rather be manipulated, ostracized, dissected, distracted, drug-altered, or click-baited one way or another? Or is that not already a pretty darn good description of our modern world? You be the judge, but this book speculates reasonably and gives plenty of reasons to be nervous.


10 people found this helpful

Dennis Littrell


5 out of 5 stars

Humans are toast; the data religion will rule

Reviewed in the United States on July 10, 2017

Verified Purchase

Most of this is not about “tomorrow” but about yesterday and today. The material that pertains most directly to the future begins with Chapter 8, which is two-thirds of the way into the book. But no matter. This is another brilliant book by the very learned and articulate Professor Harari. It should be emphasized that Harari is by profession a historian. It is remarkable that he can also be not only a futurist but a pre-historian as well, as evidenced by his previous book, “Sapiens.”

This quote may serve as a point of departure: “Previously the main sources of wealth were material assets such as gold mines, wheat fields and oil fields. Today the main source of wealth is knowledge.” (p. 15)

In the latter part of the book Harari defines this knowledge more precisely as algorithms. We and all the plants in the ground and all the fish in the sea are biological algorithms. There is no “self,” no free will, no individuals (he says we are “dividuals”), no God in the sky, and, by the way, humans as presently constituted are toast.

The interesting thing about all this from my point of view is that I agree almost completely. I came to pretty much the same conclusions in my book, “The World Is Not as We Think It Is” several years ago.

What I want to do in this review is present a number of quotes from the book and make brief comments on them, or just let them speak for themselves. In this manner I think the reader can see how beautifully Harari writes and how deep and original a thinker he is.

“Islamic fundamentalists could never have toppled Saddam Hussein by themselves. Instead they enraged the USA by the 9/11 attacks, and the USA destroyed the Middle Eastern china shop for them. Now they flourish in the wreckage.” (p. 19) Notice “fundamentalists” instead of “terrorists.” This is correct because ISIS, et al., have been financed by Muslim fundamentalists in places like Saudi Arabia.

“You want to know how super-intelligent cyborgs might treat ordinary flesh-and-blood humans? Better start by investigating how humans treat their less intelligent animal cousins.” (p. 67)

Harari speaks of a “web of meaning” and posits, “To study history means to watch the spinning and unravelling of these webs, and to realise that what seems to people in one age the most important thing in life becomes utterly meaningless to their descendants.” (p. 147)

One of the themes begun in “Sapiens” and continued here is the idea that say 20,000 years ago humans were not only better off than they were in say 1850, but smarter than they are today. (See e.g., page 176 and also page 326 where Harari writes that it would be “immensely difficult to design a robotic hunter-gatherer” because of the great many skills that would have to be learned.) In “The World Is Not as We Think It Is” I express it like this: wild animals are smarter than domesticated animals; humans have domesticated themselves.

For Harari, Nazism, Communism, “liberalism,” humanism, etc. are religions. I put “liberalism” in quotes because Harari uses the term in a historical sense, not as the opposite of conservatism in contemporary parlance.

“For religions, spirituality is a dangerous threat.” (p. 186) I would add that religions are primarily social and political organizations.

“If I invest $100 million searching for oil in Alaska and I find it, then I now have more oil, but my grandchildren will have less of it. In contrast, if I invest $100 million researching solar energy, and I find a new and more efficient way of harnessing it, then both I and my grandchildren will have more energy.” (p. 213)

“The greatest scientific discovery was the discovery of ignorance.” (p. 213)

On global warming: “Even if bad comes to worse and science cannot hold off the deluge, engineers could still build a hi-tech Noah’s Ark for the upper caste, while leaving billions of others to drown….” (p. 217)

“More than a century after Nietzsche pronounced Him dead, God seems to be making a comeback. But this is a mirage. God is dead—it’s just taking a while to get rid of the body.” (p. 270)

“…desires are nothing but a pattern of firing neurons.” (p. 289)

Harari notes that a cyber-attack might shut down the US power grid, cause industrial accidents, etc., but also “wipe out financial records so that trillions of dollars simply vanish without a trace and nobody knows who owns what.” (p. 312) Now THAT ought to scare the bejesus out of certain members of the one percent!

On the nature of unconscious cyber beings, Harari asserts that for armies and corporations “intelligence is mandatory but consciousness is optional.” (p. 314) This seems obvious but I would like to point out that what “consciousness” is is unclear and poorly defined.

While acknowledging that we’re not there yet, Harari thinks it’s possible that future fMRI machines could function as “almost infallible truth machines.” Add this to all the knowledge that Facebook and Google have on each of us and you might get a brainstorm: totalitarianism for humans as presently constituted is inevitable.

One of the conundrums of the not-too-distant future is what we are going to do with all the people who do not have jobs, the unemployable, what Harari believes may be called the “useless class.” Answer found elsewhere: a guaranteed minimum income (GMI). Yes, with cheap robotic labor and AI, welfare is an important meme of the future.

Harari speculates on pages 331 and 332 that artificial intelligence might “exterminate human kind.” Why? For fear humans will pull the plug. Harari mentions “the motivation of a system smarter than” humans. My problem with this is that machines, unless motivation is programmed in, have no motivations. However, it could be argued that they must be programmed in such a way as to maintain themselves; in other words, they do have a motivation. Recently I discussed this with a friend, and we came to the conclusion that yes, the machines will protect themselves and keep on keeping on, but they would not reproduce themselves, because new machines would take resources from them.

Harari believes that we have “narrating selves” that spew out stories about why we do what we do, narratives that direct our behavior. He believes that the mighty algorithms to come (think Google, Microsoft, and Facebook a thousand times more invasive and controlling) will know more about us than we know about ourselves. Understanding this, we will have to realize that we are “integral parts of a huge global network” and not individuals. (See e.g., page 343)

Harari even sees Google voting for us (since it will know our desires and needs better than we do). (p. 344) After the election of Trump in which some poor people voted to help billionaires get richer and themselves poorer, I think perhaps democracy as presently practiced may go the way of the dodo.

An interesting idea, taking this further, is to imagine as Harari does that Google, Facebook, et al., in, say, the personification of Microsoft’s Cortana, become first oracles, then agents for us, and finally sovereigns. God is dead. Long live God. Along the way we may find that the books you read “will read you while you are reading them.” (p. 349)

In other words, what is coming are “techno-religions,” which Harari sees as being of two types: “techno-humanism and data religion.” He writes that “the most interesting place in the world from a religious perspective is…Silicon Valley.” (p. 356)

The last chapter in the book, Chapter 11, is entitled “The Data Religion”; in it the Dataists create the “Internet-of-All-Things.” Harari concludes, “Once this mission is accomplished, Homo sapiens will vanish.” (p. 386)

--Dennis Littrell, author of “Hard Science and the Unknowable”


108 people found this helpful

Nathan


5 out of 5 stars

Thought provoking

Reviewed in the United States on August 22, 2024

Verified Purchase

This was a really interesting read. I read his other book, Sapiens, and it was great too. I am about to start reading his other book. He is very thorough in his presentations.

William Brennan


5 out of 5 stars

Plenty to ponder

Reviewed in the United States on February 25, 2018

Verified Purchase

This is the most challenging book I’ve read since – well, perhaps ever, not because the author’s style is in any way difficult but because it challenged my basic beliefs about everything from the first page. The first two thirds of the book bring us up to date from the time of the primordial ooze to the present; then the author takes all of the knowledge we have acquired in all the sciences and arts and basically debunks much of it, and quite effectively.

Spoiler alert: this book is not for the seriously religious among us. He makes no bones about it: God is dead and has been for most moderns for several centuries. He assumes his readers are liberal humanists and wastes little time defending his position; that given, he allows us to keep our religion as the useful fount of morality, since we’re not very good creatures without it. But make no mistake, there’s no real God in this religion. Modern liberal humanism allowed us – all of us moderns – to throw off religion, although it gave thousands of years of satisfaction to our forebears by promising them meaningful survival after death if not in this life. Humanism gave us liberty and freedom and provided meaning for us while we strode the planet through love, art, scientific achievement, business acumen, governance, or anything else that we might favor. But that left us relatively empty, and that’s when we brought back the trappings of religion to provide us with a little comfort.

After establishing us as the dominant species and killing off all that had made life unbearable for our ancestors, he states that we really have no free will, which is the basis of liberal humanism. Between biology, computer science, and neurobiology, science has concluded our minds are simply mathematical algorithms. These algorithms have been honed by eons of evolution to provide us with the best possible outcomes from fight or flight and for passing along our best genetic outcomes. But along come supercomputing algorithms which, even now in the hands of Google, Microsoft, Facebook, and others, are beginning to know us better than we can possibly know ourselves. These algorithms make human decision-making obsolete, since we just can’t know ourselves as well as the algorithms and thus cannot compete with them in knowing ourselves and making our best personal decisions.

The author discusses many of the long-held positions of members of this group and more often than not blows holes in each of our favorite positions. Walt’s fear of cyber warfare (no debunking here); Walt and Frank’s faith in job retraining; my concern that AI will displace almost all workers; the fear – held by none of us – that computers will do away with humans; and many others are trotted out and handled with great skill…if not entirely believably. While I’d give it less than five stars on Amazon’s scale, it held my attention for the days it took me to read it. The first two thirds are absolutely devastating to my long-held views on many subjects, particularly free will. As a liberal humanist from my youth, free will has always been front and center in my thinking. Harari makes a strong case against it, and while I reject the point, I can marshal few answers against it. The final third contains Harari’s views on the most likely futures for the world and its human inhabitants.

While I’ve argued that computers and artificial intelligence will probably wipe out more jobs than they create, he takes it – in the very out years – that these forces may wind up with no use for us and could let us die or even exterminate us. That was far too dystopian for me, and I rejected most of what he says the further he gets from our time. The guy has a very fluid style and I recommend the read highly. Beyond the content, the book, while only about 450 pages including notes and index, is printed on high-quality heavy paper, and it actually hurt my hands to read for long periods.


16 people found this helpful

George


4 out of 5 stars

Interesting but overthought and underthought in many parts

Reviewed in the United States on May 2, 2024

Verified Purchase

Interesting and enlightening in parts but other parts, such as his discussion of consciousness, are overthought and ultimately gibberish.

The first part does a good job of describing how humans before the Enlightenment spent most of their lives dealing with disease, famine, and conflict, whereas today those are relatively controlled. More people die now from obesity than from hunger. The rest of the book discusses where the technologies and knowledge making that possible are heading. He essentially describes a journey from hunter-gatherers who were just part of an ecosystem to masters of the planet on track to become gods, but at the risk of being conquered by our own technologies.

He builds this theme around algorithms. Algorithms are the formulas for processes, and that makes them not just computer code but the essence of life itself. People are ultimately a collection of algorithms. Build better algorithms and people become superfluous, unless the algorithms enhance people. That is where our technologies are heading. The book does a good job of making that point.

The book bogs down in discussing things like the algorithms for consciousness. Science hasn't been able to fully determine how consciousness and many other brain processes work. The middle of the book frequently goes off on tangents that add nothing. One ends by asking if consciousness is even needed. It gets lost in looking for algorithms when analyzing functions is the key point. Those parts are overthought on steroids. But it eventually gets to discussing how feelings govern actions and feelings are not chosen, they are simply felt, and that is why free will doesn't exist. That part is excellent.

I think the book is underthought in important ways.

One is not discussing how controlling disease, famine, and conflict has allowed humans to multiply out of control. There is no technology that will allow that to continue indefinitely. It is virtually certain that disease, famine, and conflict will return as the climate and civilizations collapse. Our inability to stop that is the lethal flaw in the whole journey to becoming gods. Perhaps a few Homo deus supermen will survive, but that is pure speculation. The apocalypse is certain, and the book only mentions it as an issue in passing, in one sentence that I saw. No discussion of the future is complete without at least acknowledging that problem.

Another oversight is the potential chaos resulting from the growth of misinformation. AI is facilitating a growing trend of creating rather than merely measuring reality. One casualty of AI may be truth. If AI takes over and humans can no longer know what is real, algorithms assessing what is important can't work. What will stop AI from destroying reality itself? It would potentially be a good strategy to eliminate humans.

Another big issue not addressed is our inability to control the power of developing technologies in other ways. This theme is arising in many areas. The dismay of the scientists that developed the atomic bomb is a good example. One of the greatest intellectual achievements in history turned out to be a bomb that can destroy everything on earth. Once it was made the scientists turned their attention to controlling its use only to have the political and military leaders take over and start a new arms race. Similarly, gene editing technology will soon enable the creation of life itself. There is no way to ensure that bad people will not use this technology to create very dangerous master races to conquer the world.

I think anyone would be challenged to tie all of this together. This book is a good start, along with Code Breaker and American Prometheus, but there is still a lot missing here to consider.


3 people found this helpful

Matthew Rapaport


4 out of 5 stars

Entertaining if presumptuous take on the future of mankind

Reviewed in the United States on December 6, 2022

Verified Purchase

God-Man is what this title means, but the content isn’t quite so literal. There are no themes in this book that haven’t been dealt with by numerous science fiction novels. But this isn’t supposed to be fiction; rather, it is a sober look at where the history of humans, coupled with the technology of the twenty-first century, is taking us.

So where is that? The author cites three overall goals that have motivated humanity since its inception and that are, according to Harari, now nascent and embedded in modern technology. They are: (1) to be ageless, literally to live forever (beginning with living much longer than we do now), provided that we are not killed in accidents or murdered; (2) to be happy always; and (3) to acquire god-like (small ‘g’) powers of mind and body through mechanics, genetics, and cybernetics.

All of these are, he thinks, possible in the next 50 or so years despite the first’s violating the second law of thermodynamics, the second being a mental state that appears to demand an occasional (at least) lapse into something else to reset itself, leaving the third as the only one understood well enough to be achievable in some measure. Interestingly, achieving the third goal would have the most predictable negative impact on our present value systems and ways of life–illustrated to chilling effect in his last chapter. Putting it bluntly, post-sapiens humans take over the world, enslaving (or just eliminating, there being no further need for human labor) the rest of us. In a further twist, cybernetic intelligence eventually eliminates even those quasi-sapiens for its own sake, there being no further need for humans of any sort.

Concerning these specific prognostications, Harari gives himself an out. This is only speculation. The future is open, and there are many ways our technology might develop, and not everything we want may be possible. He also understands that perhaps time is not on our side. Some near future events (global nuclear war or civilizational collapse due to climate or ecological disaster) might derail our progress. Concerning the foundational assumptions of his projections, what makes them reasonable (and possible), he leaves himself no wiggle room.

Three things he assures us must be true: (1) the universe is entirely physical (no God, no extra-physical mind); as a consequence, (2) free will is an illusion, and (3) so is the self. This leads him down a path of epistemic nihilism. Our brains react to every sensory input and make every decision some seconds (or fractions of seconds) before we are even aware of them. Our experiential arena is subjectively real (how this is, given there is no subject, is unclear) but has no impact whatsoever on what we think, feel, or do, there being no individual “us” anyway. The absurd consequences of these assumptions (he is not alone in believing these and cites long-challenged experiments purporting to prove them), for example, that there is no “he,” no Yuval Harari to whom we might give credit for this book, escape him.

Homo Deus is rich with philosophical implications, but the author is writing from a historical perspective and a forecast of “future history.” He is not trying to do philosophy, so I leave explorations of these implications for a blog essay. The book is well-written and entertaining. His take on human history from the paleolithic to the Enlightenment, the book’s part one, is novel. He credits literal religion (among other things) with pushing mankind forward until our own discoveries dethroned it, installing a new [metaphorical] religion, Humanism, the book’s part two, which brought us to the edge of the present age. Humanism is to be dethroned now, part three, and yet another [metaphorical] religion Harari calls Dataism is emerging. This overall thesis is coherent given his assumptions and gracefully presented with considerable humor, so four stars, even if it is more than a bit presumptuous!


3 people found this helpful

Ashutosh S. Jogalekar


3 out of 5 stars

A mix of deft writing, sweeping ideas and incomplete speculation: 3.5 stars

Reviewed in the United States on January 5, 2017

Verified Purchase

Yuval Noah Harari's "Homo Deus" continues the tradition introduced in his previous book "Sapiens": clever, clear and humorous writing, intelligent analogies and a remarkable sweep through human history, culture, intellect and technology. In general it is as readable as "Sapiens" but suffers from a few limitations.

On the positive side, Mr. Harari brings the same colorful and thought-provoking writing and broad grasp of humanity, both ancient and contemporary, to the table. He starts with exploring the three main causes of human misery through the ages - disease, starvation and war - and talks extensively about how improved technological development, liberal political and cultural institutions and economic freedom have led to very significant declines in each of these maladies. Continuing his theme from "Sapiens", a major part of the discussion is devoted to shared zeitgeists like religion and other forms of belief that, notwithstanding some of their pernicious effects, can unify a remarkably large number of people across the world in striving together for humanity's betterment. As in "Sapiens", Mr. Harari enlivens his discussion with popular analogies from current culture ranging from McDonald's and modern marriage to American politics and pop music. Mr. Harari's basic take is that science and technology combined with a shared sense of morality have created a solid liberal framework around the world that puts individual rights front and center. There are undoubtedly communities that don't respect individual rights as much as others, but these are usually seen as challenging the centuries-long march toward liberal individualism rather than upholding the global trend.

The discussion above covers about two thirds of the book. About half of this material is recycled from "Sapiens" with a few fresh perspectives and analogies. The most important general message that Mr. Harari delivers, especially in the last one third of the book, is that this long and inevitable-sounding imperative of liberal freedom is now ironically threatened by the very forces that enabled it, most notably the forces of technology and globalization. Foremost among these are artificial intelligence (AI) and machine learning. These significant new developments are gradually making human beings cede their authority to machines, in ways small and big, explicitly and quietly. Ranging from dating to medical diagnosis, from the care of the elderly to household work, entire industries now stand to both benefit and be complemented or even superseded by the march of the machines. Mr. Harari speculates about a bold vision in which most manual labor has been taken over by machines and true human input is limited to a very small number of people, many of whom because of their creativity and demand will likely be in the top financial echelons of society. How will the rich and the poor live in these societies? We have already seen how the technological decimation of parts of the working class was a major theme in the 2016 election in the United States and the vote for Brexit in the United Kingdom. It was also a factor that was woefully ignored in the public discussion leading up to these events, probably because it is much easier to provoke human beings against other human beings rather than against cold, impersonal machines. And yet it is the cold, impersonal machines which will increasingly interfere with human lives. How will social harmony be preserved in the face of such interference? If people whose jobs are now being done by machines get bored, what new forms of entertainment and work will we have to invent to keep them occupied? Man, after all, is a thinking creature, and extended boredom can cause all sorts of psychological and social problems. If the division of labor between machines and men becomes extreme, will society fragment into H. G. Wells's vision of two species, one of which literally feeds on the other even as it sustains it?

These are all tantalizing as well as concerning questions, but while Mr. Harari does hold forth on them with some intensity and imagination, this part of the book is where his limitations become clear. Since the argument about ceding human authority to machines is a central one, the omission unfortunately appears to me to be a serious one. The problem is that Mr. Harari is an anthropologist and social scientist, not an engineer, computer scientist or biologist, and many of the questions of AI are firmly grounded in engineering and software algorithms. There are mountains of literature written about machine learning and AI and especially their technical strengths and limitations, but Mr. Harari makes few efforts to follow them or to explicate their central arguments. Unfortunately there is a lot of hype these days about AI, and Mr. Harari dwells on some of the fanciful hype without grounding us in reality. In short, his take on AI is slim on details, and he makes sweeping and often one-sided arguments while largely skirting clear of the raw facts. The same goes for his treatment of biology. He mentions gene editing several times, and there is no doubt that this technology is going to make some significant inroads into our lives, but what is missing is a realistic discussion of what biotechnology can or cannot do. It is one thing to mention, in an offhand manner, brain-machine interfaces that would allow our brains to access supercomputer-like speeds; it's another to actually discuss to what extent this would be feasible and what the best science of our day has to say about it.

In the field of AI, particularly missing is a discussion of neural networks and deep learning which are two of the main tools used in AI research. Also missing is a view of a plurality of AI scenarios in which machines either complement, subjugate or are largely tamed by humans. When it comes to AI and the future, while general trends are going to be important, much of the devil will be in the details - details which decide how the actual applications of AI will be sliced and diced. This is an arena in which even Mr. Harari's capacious intellect falls short. The ensuing discussion thus seems tantalizing but does not give us a clear idea of the actual potential of machine technology to impact human culture and civilization. For reading more about these aspects, I would recommend books like Nick Bostrom's "Superintelligence", Pedro Domingos's "The Master Algorithm" and John Markoff's "Machines of Loving Grace". All these books delve into the actual details that sum up the promise and fear of artificial intelligence.

Notwithstanding these limitations, the book is certainly readable, especially if you haven't read "Sapiens" before. Mr. Harari's writing is often crisp, the play of his words is deftly clever and the reach of his mind and imagination immerses us in a grand landscape of ideas and history. At the very least he gives us a very good idea of how far we as human beings have come and how far we still have to go. As a proficient prognosticator Mr. Harari's crystal ball remains murky, but as a surveyor of past human accomplishments his robust and unique abilities are still impressive and worth admiring.


1 person found this helpful

Poul


3 out of 5 stars

For whom are the bells tolled?

Reviewed in the United States on March 22, 2020

Verified Purchase

Where Sapiens basically described what made any historical period, except maybe hunter-gathering, not worth living in for most people, this volume lays out the various future options for dysfunction and self-obliteration in equally inspiring terms. Everything is presented as if the author is rushing to point out things no one else has seen, which means that we get straw-man and high-contrast, either-or scenarios. In each case one senses a more complicated underlying issue, if surely a relevant one. It is all very well written.

YNH seems to speak from within each scenario, so readers have questions as to where he is coming from and whom he is addressing. Religious practitioners won’t touch this, and the humanists won’t recognize themselves in his caricature. Maybe the average reader won’t think they are the hacked-to-the-core voter-consumer scattered non-selves he describes, or algorithms bundled together, deterministically unfolding. If the author believes we can suspend those dynamics to attend to this book, he is not making it clear how. Of course, determinists seem to consider themselves exempt.

YNH’s Buddhist leanings have helped me understand his thinking process. Disclosure: I am influenced by Buddhist practice, but find certain dogma unconvincing. Humanistic-liberal (they seem thrown together) “ideology” or “religion” is such a bugaboo because it values the individual, and in Buddhism the individual ‘self’ is an illusion and as such the root of all evil. The bias also shows itself when social institutions and instruments are referred to as “imaginations” or “fictions.” That dog-whistle frequency harmonizes with ‘delusions’, which are strong drivers in unenlightened human affairs.

Unfortunately the Buddhist argument is itself somewhat convoluted. It turns out that the real problem is with, well, life itself. All life-forms, human, animal, god-like, fail at true existence, which is having a permanent essence. This is unfortunate and ironic in a universe not composed of such entities. But since an essence we must have, humans somehow manage to “feel” as if they have one at the core of their being. This delusion causes a toxic dynamic of not seeing things as they are (basically impermanent, interconnected, and missing essences) and motivates all kinds of selfish behavior. The ultimate goal, which may take innumerable lifetimes, is to get off the wheel of death and rebirth altogether. This is the negative coin-side of an essence-based ideology. Something is missing that really should have been there. Without that, what is there to care about or hold on to? Things arise and fall away. Of course, Buddhism is known for compassion with suffering, and also promotes the idea of an immanent true nature spontaneously seeing and doing right, but here only the critical aspect is apparent. Science supports some Buddhist findings but explores how things actually interconnect without the notion of something missing.

YNH is really on the attack when it comes to humanism’s valuation of humans as individuals. He makes it the culprit of a go-for-broke pursuit of eternal life, but all you need for that is a few resourced individuals, comfortable enough not to find the idea crazy. And all they need in front of them are the problems the Buddha identified: old age, sickness, and death. Humanist ‘ideology’ is not required. Death with dignity is a good humanist alternative.

Humanists had a hand in launching the scientific method, which proceeds by distrusting individual observations and conclusions. This was instrumental in placing humans in a vast universe, and launched the investigations into our place and nature that YNH so frequently draws upon. The interest in other, expanded mind states grew out of liberal societies not locked onto one faith system.

I don’t see YNH making up his mind whether our delusional experience of self is based on a presumed essence or on identification with the varied narrations that occur in consciousness. Neither seems compelling as a primary source. In any case, the notion that essence-based identity is necessary for humanists is a canard. That would depend on their religious-scientific stance. The experience of identity, or self, is a matter for exploration, but we can’t talk ourselves out of the necessity of the notion. We do represent ourselves as persisting, and changing, over time. To conceive of continuity you need at minimum a functioning organism, stable and sophisticated enough to reference itself intelligently and make coherent distinctions in an environment that itself is stable enough to make these useful. The ‘inner self’ intuition seems due to the fact that we realize our consciousness does not pull itself out of its own hat. We only have theories of how it arises and is structured; it now seems to involve the whole brain and its subsystems, which accounts for the fact that we have rich inner lives instead of a one-note samba.

YNH also argues that the humanist endeavor is lost if we can’t prove free will; however, even deterministic humanists perceive value. Unfortunately, the free-will argument is stuck in essence thinking if it requires us to pull action from an immaculate realm via immaculate conception. I would more modestly argue the advantages of a conditioned will that is alive to the being I actually am and the world I actually live in. Nobody knows how consciousness and organic operations relate. It is safe to say that they co-evolved, are hard to pry apart, and that the organism places a very costly priority on the quality, coherence, and explanatory value of conscious content. In actual practice, the subtle and responsive intelligence of humans can be quite impressive, even when busy mapping out how much it sucks.

When larger social structures and instruments are bundled together as imaginations or ideology-religions, important distinctions disappear. They are all seen as oppressive false views, and if there are any positive effects, for example of capitalism as it is erratically practiced, they are acknowledged, but as if they were unintended consequences. Of course locked-down ideological liberalism exists, but some high-functioning nations utilize both capitalism and large-scale social instruments in managing areas of need not covered. Such societies insist on the input of their citizens not because they are self-enlightened, but because nobody is. What the individual is the authority of may only be how it feels to be that individual, but this is enough reason to give them some little say in matters that affect them. Useful education is seen as crucial, as is having structures that give citizens means for participation.

Seeing nations and social structures as ‘imaginations’, anchors for false beliefs and attachments, is the luxury complaint of someone who doesn’t have to come up with large-scale solutions to large-scale problems. You need frames for regulating complex activity, and without such frames certain things deemed useful just won’t happen. The ways fictions and actual false beliefs have an impact are different.

YNH’s quotable version of a new ruling ideology-religion starts its life as a caricature. Most ideologies promote large-scale structural change as accomplishing some human good, that's their job, and then the hacking and unintended consequences set in. Data-ism is already conceived as rendering humans irrelevant, algorithms in an ocean of algorithms, and the free flow of information, the core purpose, is already described as a one-directional flow from humans to larger entities of suppression or manipulation. At least everyone would know how to make biological weapons. But if we agree that big data is a problem, where is the solution going to come from if you propose that everything really just is algorithms and data? Data-ism has the look of a reductionism, and these always have the same effect: the universe appears as an abstraction if seen through an abstraction. It goes well with the Buddhist view of fundamental insubstantiality, though. Some humans and nations already see the problem and are trying to manage the flow of data and increase transparency. In the midst of a pandemic, the advantages of sharing data intelligently are indisputable.

Humanists insist that we care about humans. Why? Who cares, is the question to ask. My humanist position is that the primary units worth caring about are those that can care - including caring if others care. This is as close to self evident as we are going to get. This expands naturally into wondering about the inner lives of other organisms or other states of mind. If the whales have exquisite artistic communications it is of ‘value’ to them, and could be indirectly to us if we explore further, but true communication comes from within similar minds. This is why some algorithms are more equal than others. To take the example of music, a musical composition - or 'algorithm' - only works as anything at all in the human mind. The anger directed at the competent listener-analyst who designed composing computer programs is not that they diminished great composers, because without them compelling models would not exist, but because humans do not like to be hacked and fooled. It is a basic biological ‘algorithm’, if you will, that has positive and negative expressions, but might be useful here. The point and value of music is human communication and contact, and it does not get better the more complex it gets, but the better it serves as a means of communication of the varieties of human experience. Should we care if objectively speaking, any attribution of value may be a misattribution, adding something that the object does not have 'in itself'? No, as it turns out, in a seemingly uncaring universe the best truths are relative.

YNH ridicules the mental gymnastics of some humanists trying to argue for the value of humans in view of all evidence of our conditioned and fallible nature, but I would like to see the gymnastics behind writing these books if such value is argued away.

As it turns out the professor has all along just wanted to present us with some possible scenarios so he can give us a homework assignment at the end to help us, what? make up our own minds? Make the world better after all by being our own agents of change?? What a humanistic idea! My assignment to the professor is to articulate what is there to care about in the first place.


19 people found this helpful

Dr. Flowers


1 out of 5 stars

Inaccurate and Fallacious

Reviewed in the United States on February 21, 2017

Verified Purchase

I was disturbed to learn that in Harari's new book, Homo Deus, he ponders how future technologies will reshape humanity. This is a topic that Harari, a scholar of 15th-century military history, is not especially qualified to discuss. If you’re into science fiction, read science fiction, not Harari.

To demonstrate how off-base Homo Deus is likely to be, allow me to dwell on the penultimate chapter of Sapiens, as it deals with modern biology and neuroscience, my areas of expertise, to illustrate the superficiality of Harari's arguments. His examples of this brave new world in which we now live are curated from the most sensationalist science reporting. He talks of the art-project rabbit, Alba, designed to express a green fluorescent protein. He also mentions the famous "Vacanti mouse" with what looks like a human ear growing on its back (in fact, this is cartilage that has been grown inside an ear-shaped mold and implanted into a mouse's back--sort of neat, but only slightly more impressive than stapling something shaped like an ear to the back of a mouse). These are two odd examples, as they aren't really reflective of current cutting-edge science but are perfect examples of what might morally outrage someone who does not clearly understand what they really are. What?! An artist can just make a designer rabbit? No, an artist didn't make the rabbit. What?! Is that a human ear growing on a mouse's back? No, it isn't; it's a thing shaped like an ear that someone implanted in a mouse.

Harari’s view of contemporary biology and neuroscience is shaped more by The Matrix and Jurassic Park than by real academic research. Here is an example: "A team of Russian, Japanese, and Korean scientists has recently mapped the genome of ancient mammoths, found frozen in the Siberian ice. They now plan to take a fertilized egg-cell of a present-day elephant, replace the elephantine DNA with a reconstructed mammoth DNA, and implant the egg in the womb of an elephant." He lazily, and incorrectly, cites an article from Time magazine as his source. This, in fact, is not what the article is about, and almost everything in that quote is untrue. This team did not sequence the mammoth genome; they would not use fertilized eggs of recipients, and they would not use reconstructed mammoth DNA (as the technology does not exist to synthesize a mammoth genome from scratch). Instead, they hoped to find living mammoth cells containing an entire intact genome, then inject the nucleus of such a cell into an unfertilized elephant egg, hope it starts dividing, and implant that into an elephant womb. Here are the unknowns with respect to this task—the probability of finding mammoth cells that are thousands of years old but still have fully intact nuclei (impossible to estimate, but improbably small), the success rate of any elephant egg dividing after being injected with mammoth DNA (the rate for division after living mouse-to-mouse egg nuclear transfer is only a few percent), the probability that this nuclear-transfer derived egg would continue to divide after being implanted in an elephant womb (improbably small). In short, the captive population of elephants is too small to provide enough donor eggs and potential surrogate mothers to even consider performing this foolhardy project, yet Harari, with his head full of ideas that he misunderstood from Time, seems to think that it’s just a matter of months before the mammoth will be resurrected. (I should note that, since the publication of Sapiens, several labs have found ways to investigate the function of individual mammoth genes inside modern elephant cells—but to say that this is recreating a mammoth is more extreme than saying that Alba, the fluorescent rabbit, is a perfect reconstruction of a jellyfish).

Similarly, Harari cites George Church's claim that he could make a Neanderthal child for $30 million. For the same reasons, the technology does not exist to synthesize a 3+ billion base genome de novo. At current DNA synthesis costs, it would require $100 million to carry out this synthesis as thousands or millions of fragments, which then couldn't be coherently assembled into something like a genome. No multicellular organism has ever been created with a synthetic genome, and human embryos would be the last place to start testing the possibility. Whether this is something that may eventually be technically possible is not important, as there are so many ethical hurdles to even begin the proof-of-principle research that it’s safe to say this isn’t going to happen unless someone first clones Josef Mengele and installs him as the head of the National Institutes of Health. I assume that Harari believes that since humans are so cruel as to practice industrial farming, it is inevitable that they will permit hundreds or thousands of failed pregnancies to relish the glory of a Neanderthal. Again, Harari ignorantly suggests that the question is not whether this is possible but how many days until there is a new underclass of Neanderthals.

Harari’s inability to discern science fiction from science fact extends into the world of neuroscience. He writes, "Yet of all the projects currently under development, the most revolutionary is the attempt to devise a direct two-way brain computer interface that will allow computers to read the electrical signals of a human brain, simultaneously transmitting signals that the brain can read in turn." He then supposes that in the very near future, people will be able to store their minds on external hard drives, and all human minds can be linked to form a super brain. While the quoted sentence is superficially true, it is not revolutionary. It should come as a surprise to no one that there are techniques like EEG and MRI that can crudely measure brain activity. It is also true that localized magnetic stimulation can affect one’s brain and even make someone perceive flashes of light. If you hook an MRI up to a computer which is, in turn, hooked up to a magnet on someone's head, you have made the technology that can make one person’s brain activity make another person see a flash of light. That is a stupid parlor trick and not something to fear.

Now let’s consider Harari’s extension of this gimmick---the notion that we could record someone’s mind. Current brain recording technologies amount to something like a Fitbit for your brain--an EEG or MRI can crudely determine when and how much your brain is active. Harari seems to believe that technology soon will exist permitting complete brain-state knowledge. Imagine if a device could record the state of every cell of your brain. First, you would never want such a device implanted into your own head, because it would consist of 100 billion pins stabbing into your brain and would kill you. Second, the output of this machine would be completely useless. A recording of all of the activity of every brain cell for the entirety of someone’s life would be a bunch of numbers, would represent a minuscule portion of the information processing that the brain actually does, and is impossible anyway. To speculate about what this means about humanity is to waste one’s time.

I have not yet completed Homo Deus, but I find it to be better cited and more cautious than Sapiens. Nonetheless, I continue to find that his standard argument structure is: "X is not technically impossible; therefore, X is inevitable." When he rightfully notes that experts argue that genetically engineered babies or human-level AI are distant possibilities, he dismisses this as short-sightedness and argues that what scientists really mean by "distant" is "a few years." After all, he argues, when he first encountered the Internet in 1993, he didn't appreciate how great it would be in 2017. Is it just me, or is this comparison a bit arrogant? A modern-day neuroscientist's understanding of what is realistically possible in the field of artificial intelligence is the same as high-school-aged Harari's understanding of what the internet would be like in 25 years? This implies that high-school-aged Harari was a scholar of computer science and its history. He speculates that a la carte selection of the traits of your offspring is the near-certain next step from any DNA-editing-based correction of illness or disease. This seems to assume that the risk-to-reward ratio of correcting Huntington's disease via gene editing is similar to that of eliminating freckles via gene editing. While Harari argues that it is a slippery slope leading from the former to the latter, one could argue that the latter is one million times less practical or morally acceptable. The slope is both long and shallow to the extent that Harari's belief in the inevitability of commonplace cosmetic human genetic engineering arising from efforts to correct disease is like saying that birth control is dangerous because it will inevitably lead to widespread incest.

Harari is somewhat engaging, but I can't help but find his overgeneralizations irksome. When Harari talks about cutting-edge neuroscience, more often than not, he cites a newspaper or a website rather than a peer-reviewed publication. Then, he often paraphrases the part of the article in which a researcher wildly speculates about the future implications of her research as if it is a subject of active investigation. He glides along from topic to topic so that it becomes difficult to discern fact from wild prediction. This is a troubling trend as it makes the pursuit of knowledge about the function of genes or the organization of the brain seem like part of a nefarious, soon-to-be-realized plot to design an immortal class of cyborg elites.


1 person found this helpful