by Yuval Noah Harari, Derek Perkins
Language
English
Publisher
HarperAudio
Hardcover
$21.81
Paperback
$20.53
Audiobook
$23.62
Yuval Noah Harari, author of the critically acclaimed New York Times best seller and international phenomenon Sapiens, returns with an equally original, compelling, and provocative book, turning his focus toward humanity's future and our quest to upgrade humans into gods. Over the past century, humankind has managed to do the impossible and rein in famine, plague, and war. This may seem hard to accept, but as Harari explains in his trademark style - thorough yet riveting - famine, plague, and war have been transformed from incomprehensible and uncontrollable forces of nature into manageable challenges.
For the first time ever, more people die from eating too much than from eating too little; more people die from old age than from infectious diseases; and more people commit suicide than are killed by soldiers, terrorists, and criminals put together. The average American is 1,000 times more likely to die from binging at McDonalds than from being blown up by Al Qaeda. What then will replace famine, plague, and war at the top of the human agenda?
As the self-made gods of planet Earth, what destinies will we set ourselves, and which quests will we undertake? Homo Deus explores the projects, dreams, and nightmares that will shape the 21st century - from overcoming death to creating artificial life. It asks the fundamental questions: Where do we go from here?
And how will we protect this fragile world from our own destructive powers? This is the next stage of evolution. This is Homo Deus.
With the same insight and clarity that made Sapiens an international hit and a New York Times best seller, Harari maps out our future.
In 'Homo Deus: A Brief History of Tomorrow,' Yuval Noah Harari delves into the fascinating realm of the future, exploring the potential paths humanity might traverse as technological and scientific advancements redefine how we understand ourselves and the world. Harari's journey takes readers through an exploration that melds history, science, and philosophy, asking critical questions about our collective ambitions and the potential hazards that lie ahead. As our capabilities expand beyond biological limitations, Harari poses a challenge to re-evaluate what it means to be human in an era where data, artificial intelligence, and bioengineering may shape the fabric of society.
Humanity is on the brink of unprecedented technological revolutions that could redefine life on Earth. The potential for data-driven societies raises ethical questions about privacy, autonomy, and societal control. Advancements in biotechnology and AI challenge traditional notions of life, consciousness, and morality.
Yuval Noah Harari, in his groundbreaking book Homo Deus, examines the dizzying array of possibilities awaiting humanity in the future. Through a thought-provoking narrative, he addresses how humanity's quest for growth and power has reached a pivotal point. In this engaging exploration, Harari weaves history with visionary insights, questioning whether the quests for immortality and happiness will lead to a redefined species or societal upheaval.
His analysis goes beyond mere speculation, grounding futuristic themes in historical context. Harari challenges the reader to consider the ethical implications of technologies like artificial intelligence and genetic engineering. He scrutinizes how these developments could create a world ruled by algorithms, raising concerns about surveillance and privacy.
With a sweeping vision, Homo Deus projects the impact of potential shifts from Homo sapiens to a new kind of humanity. Harari's examination of society's trajectory merges data-driven insights with philosophical inquiry to unveil a poignant narrative. Through this lens, Harari invites us to engage with questions of free will, the essence of consciousness, and the possibilities that might reshape the very nature of existence.
Rethinking human potential, he positions the reader to evaluate the promises and perils of the future.
'Homo Deus' captivates with its blend of rigorous research and accessible prose, inviting readers into a dialogue about humanity's future trajectories and making complex ideas more relatable. Harari's unique approach, combining historical analysis with speculative foresight, provides an eye-opening perspective, encouraging readers to ponder the ethical implications of breakthroughs in AI and biotechnology. The book's intuitive examination of human ambition celebrates intellectual curiosity while questioning societal values. Harari's narrative compels readers to confront the potential consequences of their aspirations.
Based on 36,204 ratings
Interesting and enlightening in parts, but other parts, such as his discussion of consciousness, are overthought and ultimately gibberish. The first part does a good job of describing how humans before the Enlightenment spent most of their lives dealing with disease, famine and conflict, but today those are relatively controlled. More people die now from obesity than hunger. The rest of the book discusses where the technologies and knowledge making that possible are heading. He essentially describes a journey from hunter-gatherers that were just part of an ecosystem to masters of the planet on track to become gods, but at the risk of being conquered by our own technologies.
He builds this theme around algorithms. Algorithms are the formulas for processes, and that makes them not just computer code but the essence of life itself. People are ultimately a collection of algorithms. Build better algorithms and people become superfluous unless they enhance people. That is where our technologies are heading. The book does a good job of making that point.
The book bogs down in discussing things like the algorithms for consciousness. Science hasn't been able to fully determine how consciousness and many other brain processes work. The middle of the book frequently goes off on tangents that add nothing. One ends by asking if consciousness is even needed. It gets lost in looking for algorithms when analyzing functions is the key point. Those parts are overthought on steroids. But it eventually gets to discussing how feelings govern actions and feelings are not chosen, they are simply felt, and that is why free will doesn't exist. That part is excellent.
I think the book is underthought in important ways. One is not discussing how controlling disease, famine and conflict has allowed humans to multiply out of control. There is no technology that will allow that to continue indefinitely. It is virtually certain that disease, famine and conflict will return as the climate and civilizations collapse. Our inability to stop that is the lethal flaw in the whole journey to becoming gods. Perhaps a few Homo deus supermen will survive, but that is pure speculation. The apocalypse is certain, and the book only mentions it as an issue in passing in one sentence that I saw. No discussion of the future is complete without at least acknowledging that problem.
Another oversight is the potential chaos resulting from the growth of misinformation. AI is facilitating a growing trend of creating rather than merely measuring reality. One casualty of AI may be truth. If AI takes over and humans can no longer know what is real, algorithms assessing what is important can't work. What will stop AI from destroying reality itself? It would potentially be a good strategy to eliminate humans.
Another big issue not addressed is our inability to control the power of developing technologies in other ways. This theme is arising in many areas. The dismay of the scientists that developed the atomic bomb is a good example. One of the greatest intellectual achievements in history turned out to be a bomb that can destroy everything on earth. Once it was made, the scientists turned their attention to controlling its use, only to have the political and military leaders take over and start a new arms race. Similarly, gene editing technology will soon enable the creation of life itself. There is no way to ensure that bad people will not use this technology to create very dangerous master races to conquer the world.
I think anyone would be challenged to tie all of this together. This book is a good start, along with Code Breaker and American Prometheus, but there is still a lot missing here to consider.
This was a really interesting read. I read his other book, Sapiens, and it was great too. I am about to start reading his other book. He is very thorough in his presentations.
Most of this is not about “tomorrow” but about yesterday and today. Most of the material that pertains most directly to the future begins with Chapter 8, which is two-thirds of the way into the book. But no matter. This is another brilliant book by the very learned and articulate Professor Harari. It should be emphasized that Harari is by profession a historian. It is remarkable that he can also be not only a futurist but a pre-historian as well, as evidenced by his previous book, “Sapiens.”
This quote from page 15 may serve as a point of departure: “Previously the main sources of wealth were material assets such as gold mines, wheat fields and oil fields. Today the main source of wealth is knowledge.” (p. 15) In the latter part of the book Harari defines this knowledge more precisely as algorithms. We and all the plants in the ground and all the fish in the sea are biological algorithms. There is no “self,” no free will, no individuals (he says we are “dividuals”), no God in the sky, and, by the way, humans as presently constituted are toast. The interesting thing about all this from my point of view is that I agree almost completely. I came to pretty much the same conclusions in my book, “The World Is Not as We Think It Is,” several years ago.
What I want to do in this review is present a number of quotes from the book and make brief comments on them, or just let them speak for themselves. In this manner I think the reader can see how beautifully Harari writes and how deep and original a thinker he is.
“Islamic fundamentalists could never have toppled Saddam Hussein by themselves. Instead they enraged the USA by the 9/11 attacks, and the USA destroyed the Middle Eastern china shop for them. Now they flourish in the wreckage.” (p. 19) Notice “fundamentalists” instead of “terrorists.” This is correct because ISIS, et al., have been financed by Muslim fundamentalists in places like Saudi Arabia.
“You want to know how super-intelligent cyborgs might treat ordinary flesh-and-blood humans? Better start by investigating how humans treat their less intelligent animal cousins.” (p. 67)
Harari speaks of a “web of meaning” and posits, “To study history means to watch the spinning and unravelling of these webs, and to realise that what seems to people in one age the most important thing in life becomes utterly meaningless to their descendants.” (p. 147)
One of the themes begun in “Sapiens” and continued here is the idea that, say, 20,000 years ago humans were not only better off than they were in, say, 1850, but smarter than they are today. (See, e.g., page 176 and also page 326, where Harari writes that it would be “immensely difficult to design a robotic hunter-gatherer” because of the great many skills that would have to be learned.) In “The World Is Not as We Think It Is” I express it like this: wild animals are smarter than domesticated animals; humans have domesticated themselves.
For Harari, Nazism, Communism, “liberalism,” humanism, etc. are religions. I put “liberalism” in quotes because Harari uses the term in a historical sense, not as the opposite of conservatism in the contemporary parlance. “For religions, spirituality is a dangerous threat.” (p. 186) I would add that religions are primarily social and political organizations.
“If I invest $100 million searching for oil in Alaska and I find it, then I now have more oil, but my grandchildren will have less of it. In contrast, if I invest $100 million researching solar energy, and I find a new and more efficient way of harnessing it, then both I and my grandchildren will have more energy.” (p. 213)
“The greatest scientific discovery was the discovery of ignorance.” (p. 213)
On global warming: “Even if bad comes to worse and science cannot hold off the deluge, engineers could still build a hi-tech Noah’s Ark for the upper caste, while leaving billions of others to drown….” (p. 217)
“More than a century after Nietzsche pronounced Him dead, God seems to be making a comeback. But this is a mirage. God is dead—it’s just taking a while to get rid of the body.” (p. 270)
“…desires are nothing but a pattern of firing neurons.” (p. 289)
Harari notes that a cyber-attack might shut down the US power grid, cause industrial accidents, etc., but also “wipe out financial records so that trillions of dollars simply vanish without a trace and nobody knows who owns what.” (p. 312) Now THAT ought to scare the bejesus out of certain members of the one percent!
On the nature of unconscious cyber beings, Harari asserts that for armies and corporations “intelligence is mandatory but consciousness is optional.” (p. 314) This seems obvious, but I would like to point out that what “consciousness” is remains unclear and poorly defined.
While acknowledging that we’re not there yet, Harari thinks it’s possible that future fMRI machines could function as “almost infallible truth machines.” Add this to all the knowledge that Facebook and Google have on each of us and you might get a brainstorm: totalitarianism for humans as presently constituted is inevitable.
One of the conundrums of the not-too-distant future is what we are going to do with all the people who do not have jobs, the unemployable, what Harari believes may be called the “useless class.” Answer found elsewhere: a guaranteed minimum income (GMI). Yes, with cheap robotic labor and AI, welfare is an important meme of the future.
Harari speculates on pages 331 and 332 that artificial intelligence might “exterminate human kind.” Why? For fear humans will pull the plug. Harari mentions “the motivation of a system smarter than” humans. My problem with this is that machines, unless it is programmed in, have no motivations. However, it could be argued that they must be programmed in such a way as to maintain themselves. In other words, they do have a motivation. Recently I discussed this with a friend and we came to the conclusion that yes, the machines will protect themselves and keep on keeping on, but they would not reproduce themselves because new machines would be taking resources from themselves.
Harari believes that we have “narrating selves” that spew out stories about why we do what we do, narratives that direct our behavior. He believes that with the mighty algorithms to come, Google, Microsoft, and Facebook will be a thousand times more invasive and controlling, knowing more about us than we know about ourselves. Understanding this, we will have to realize that we are “integral parts of a huge global network” and not individuals. (See, e.g., page 343) Harari even sees Google voting for us (since it will know our desires and needs better than we do). (p. 344) After the election of Trump, in which some poor people voted to help billionaires get richer and themselves poorer, I think perhaps democracy as presently practiced may go the way of the dodo.
An interesting idea, taking this further, is to imagine, as Harari does, that Google, Facebook, et al., in, say, the personification of Microsoft’s Cortana, become first oracles, then agents for us, and finally sovereigns. God is dead. Long live God. Along the way we may find that the books you read “will read you while you are reading them.” (p. 349)
In other words, what is coming are “techno-religions,” which Harari sees as being of two types: “techno-humanism and data religion.” He writes that “the most interesting place in the world from a religious perspective is…Silicon Valley.” (p. 356) The last chapter in the book, Chapter 11, is entitled “The Data Religion,” in which the Dataists create the “Internet-of-All-Things.” Harari concludes, “Once this mission is accomplished, Homo sapiens will vanish.” (p. 386)
--Dennis Littrell, author of “Hard Science and the Unknowable”