Yuval Noah Harari: Nexus

In his hefty new essay, the influential Israeli popular historian looks through the lens of history to analyse how information networks have shaped society.

In his global best-seller, Sapiens: A Brief History of Humankind, the Israeli academic Yuval Noah Harari guided his readers through some 75,000 years of human history. He described human development, including the rise of cognition, agriculture and science, as a series of accidental revolutions. In his subsequent book, Homo Deus, Harari looked to the present and potential future, explaining how our ability to control the world around us is turning us into something new.

CONNECTION

Harari’s most recent book, Nexus: A Brief History of Information Networks from the Stone Age to AI, is another ambitious work. In it, he attempts to explain why people can’t seem to communicate with each other, despite our having such sophisticated information technology. The development of AI is fundamental to this story, Harari says, as it is a tool with both enormous positive potential and an equal capacity to do harm. In a meeting with the press, Harari touched on some of the big themes in his book. He began with the good news.

Yuval Noah Harari (Middle Eastern accent): AI can give us the best healthcare in history. The AI doctor can be with us 24 hours a day, monitor our blood pressure, our sugar level, our stress level, and also give us advice which is tailored to our individual biology. This is something that human doctors cannot do, and it will also be much cheaper than a human doctor. It could be that in ten or twenty years, even poor people in some remote village will enjoy better healthcare than the billionaires of today, thanks to AI. 

AGENT AI 

However, we should be cautious of AI, warns Harari, as it is different from every previous technology we’ve ever invented. It’s not a tool, he insists, but an “independent agent” with the capability to make decisions and create new things.

Yuval Noah Harari: Every previous technology, if you think about nuclear weapons, the atom bomb, of course, immense destructive power, but still the power is in human hands. The bomb itself can’t decide anything and can’t invent any new weapon or military strategy. AI is different. An AI can produce new things. It starts small with [by] producing images and texts, and writing computer code, but ultimately AI could create more powerful AI. 

TOO FAST

Silicon Valley is caught up in an arms race mentality, says Harari. Developments are happening too quickly and thoughtlessly, with money-making in mind. 

Yuval Noah Harari: If we go back a few years and we look at social media: AIs were already there, controlling the conversation by deciding what gets attention. So this immense power of the editor is now in the hands of the AI, and now the new generation of AI could create the content. And I know that lots of people say, “Yes, they write texts, but they are not very good.” But understand this is just the very, very first steps of the AI revolution. We haven’t seen anything yet. It took billions of years for amoebas to evolve into dinosaurs and mammals and humans, because organic evolution is slow. AI evolution, digital evolution is millions of times faster.

INFORMATION OVERLOAD

And AIs don’t consider what is true and what is not true: they just churn out information, or disinformation.

Yuval Noah Harari: Most information in the world is junk. To write a truthful report, you need to invest time, money, effort; to write a lie or a fiction, you don’t need to invest anything, you just write the first thing that comes to mind. So the truth is costly, whereas fiction is cheap. The truth is often complicated, because reality is complicated, whereas fiction can be made as simple as you would like it to be. And the truth is often painful, whereas fiction can be made as flattering as you would like it to be. The expectation [is] that if you flood the world with information, the truth will kind of float to the surface… No, it sinks to the bottom.

DEMOCRACY: A HISTORY

And truth is fundamental to fairness, says Harari. Take democracy, for example, which is not an automatic consequence of elections. 

Yuval Noah Harari: They have elections every four years in North Korea, doesn’t make it a democracy. As we’ve seen in Venezuela, you can hold elections and rig them. Democracy is when the people stand, talk together and try to reach a common decision. We don’t have any example of a large-scale democracy from the ancient world. The only examples we know are of small-scale city states like ancient Athens or even smaller tribes, because to have a conversation people need to gather in the main square and talk. A large-scale conversation became possible only with the rise of modern information technology; newspapers begin to arise in the 17th and 18th centuries in places like the Netherlands and England, which is where you also see the rise of the first large-scale democracies in history. And afterwards we had more information technologies like telegraph and radio and television, and this is the foundation of large-scale democracy. Because again, without these technologies, there is no conversation and there is no democracy. Without the truth, without facts, the conversation means nothing. If we just exchange lies and fantasies, this is not really a conversation. 

A RIGHT TO STUPIDITY 

So what if we just censored people who wrote lies and fictions online?  

Yuval Noah Harari: Social media companies should be very careful about censoring or banning human beings. Humans produce so much content every day: some of it is full of hate, but some of it is full of compassion. Some of it is fake news, but some of it is the truth. People have a right to stupidity. In many situations people lie and it’s not good, but it’s still protected by the law and it’s part of freedom of speech. The main issue now with social media and fake news and the conspiracy theories is not the decisions of human users, it is the decisions of corporate algorithms. If the Facebook algorithm takes this conspiracy theory and decides to spread it because it attracts more attention and the company makes more money, this is not freedom of speech. This is a completely different issue.

TOTAL CONTROL

And there is another issue at play, says Harari. One that connects mortal enemies, like Iran and Israel.

Yuval Noah Harari: My home country of Israel is building a total surveillance regime in the occupied Palestinian territories, with cameras and drones and software following everybody all the time. In Iran there are hijab laws which force women to cover their heads anytime they go in public, even in their own car. If you go in your car somewhere, you have to cover your hair if you’re a woman. Now these were old laws from 1979, from the Khomeini Revolution, but the regime had difficulty enforcing them because you can’t put a policeman on every street in Iran, and there was also a problem of friction with the population because people didn’t like it, so they would argue with the police. In recent years, the Iranians switched to an AI system. Iran is now full of these surveillance cameras with facial recognition software which automatically identify women without the head cover and immediately punish them. The authority to punish them is now in the hands of the AI.

SUSPICIOUS MINDS 

And yet, while it seems counterintuitive, trust is the antidote, says Harari.

Yuval Noah Harari: What you hear on both the far right and the far left is suspicion of all the institutions that were established by human society to identify and promote the truth. Democracy is based on trust. You need to trust the newspapers or the main election committee or the parliament; you have to trust somebody for democracy to function. You need independent courts and you need independent media that can expose the mistakes and potentially the lies of the government. And you look at people who are obsessed only with power and don’t care at all about the truth. People like Putin, Maduro, Netanyahu in Israel, they don’t seem particularly happy individuals. I think that part of the struggle today is to remind people to avoid this extremely cynical view of humanity. Not everybody is obsessed with power.

SAFETY FEATURES

So how do we ensure AIs are safe? 

Yuval Noah Harari: If you produce a car, you have to invest some of your research in making sure the car is safe. If you produce a medicine or a vaccine, you have to invest a lot of effort and money and talent in making sure this medicine is safe. Same thing for AI. You develop a powerful algorithm, make sure it doesn’t have harmful side-effects for society, for politics, for culture. If AI companies would invest, say, 20 per cent of their budget, their talent, in safety, I think that would be a very good development. 

ETHICAL FEATURES

And, Harari adds, it becomes the job of scholars, philosophers and historians such as himself to find solutions for the moral conundrums that AI-driven inventions inevitably raise.

Yuval Noah Harari: Questions that bothered philosophers for thousands of years, but they were not very practical, now become practical questions. In the moment of crisis, few people act according to theoretical philosophy. We act from our emotions much more than from our intellectual ideas. Now that we have self-driving vehicles, you need to tell the algorithm what to do in a specific situation; like if the car is about to hit, say, two children, and the only way to avoid it could lead to an accident that will kill the owner of the car who is asleep on the back seat. What should the car do? Now this is, today, a practical question. You need to tell the AI what to do and for that you need philosophers. And not just engineers and mathematicians.  

The Soul of the Nation

Born in Israel in 1976, Yuval Noah Harari received his PhD from the University of Oxford in 2002. Harari originally specialised in world history, medieval history and military history, and became a lecturer at the Department of History at the Hebrew University of Jerusalem in Israel. As a research fellow at the University of Cambridge’s Centre for the Study of Existential Risk, he focuses his research on big macro-historical questions, such as: What is the relationship between history and biology? What is the essential difference between Homo sapiens and other animals? Did people become happier as history unfolded? And what ethical questions do science and technology raise in the 21st century?

Harari is very much involved in current affairs. In 2020, he wrote extensively on the Covid-19 crisis, and in 2022 on Russia’s invasion of Ukraine. He has also written at length on the war in Gaza, which he has called “unacceptable”; he believes that the biggest threat to his country comes not from Hamas, Hezbollah or Iran, but from the battle between Israelis for the “soul of the nation”.

This article is from the December 2024 issue of Speak Up magazine.
