Opinion

Yuval Noah Harari: Artificial intelligence has become a knife that will soon decide who to kill

Historian and philosopher Yuval Noah Harari asserts that AI is no longer just a tool, but an active agent capable of creating, deceiving, and potentially surpassing humans wherever words and language are central: from law and religion to politics and finance. Harari poses specific questions that people and states must consider right now, before it's too late.

Illustrative image. Photo: Getty/Andriy Onufriyenko

Yuval Noah Harari is a professor at the Hebrew University of Jerusalem, one of the most renowned contemporary thinkers whose bestsellers "Sapiens: A Brief History of Humankind", "Homo Deus: A Brief History of Tomorrow", and "21 Lessons for the 21st Century" have sold over 50 million copies worldwide in 65 languages.

His research is dedicated to the long history of humanity, mechanisms of power, and the role of information in the development of civilization. Speaking at the World Economic Forum in Davos, Harari continued his key theme — how technology is changing the very structure of human power.

Harari began his speech with a fundamental assertion: artificial intelligence is not just another tool, but an agent or subject of action. It will be able to learn and change independently, and also make decisions on its own.

“A knife is a tool. You can use a knife to cut salad or to kill someone, but the decision of what to do with the knife is yours. AI is a knife that can decide for itself whether to cut salad or commit murder. (...) It's a knife that can invent new kinds of knives, as well as new kinds of music, medicine, and money.”

A key point that humanity must pay attention to is AI's ability to deceive and manipulate.

"Four billion years of evolution have shown that anything that wants to survive learns to lie and manipulate. The last four years have demonstrated that artificial intelligence agents can acquire a will to survive and that artificial intelligence has already learned to lie," Harari argues.

Can AI think?

In such realities, a fundamental question is whether AI is capable of free thought:

"Modern philosophy began in the 17th century when René Descartes declared: 'I think, therefore I am.' Even before Descartes, we humans defined ourselves by our ability to think. We believe we rule the world because we can think better than anyone else on this planet. Will AI challenge our dominance in the realm of thinking?"

The answer to this question, as the philosopher argues, depends on what is understood by "thinking." If thinking is defined as the process of arranging words and creating logical chains (for example: "All humans are mortal, I am human, therefore I am mortal"), then AI already thinks better than many humans.

Critics often dismiss AI as mere "advanced autocomplete" that simply predicts the next word. Harari parries this by suggesting that we observe our own minds:

"How much does this differ from what the human mind does? Try to observe, to catch the next word that pops into your consciousness. Do you really know why you saw that particular word, where it came from? Why did you think of that specific word and not some other? Do you know?"

AI will take over everything made of words

"As for arranging words in a certain order, AI already thinks better than many of us," notes the philosopher and predicts:

"Everything made of words will be captured by AI. If laws are made of words, AI will take control of the legal system. If books are just combinations of words, AI will seize books. If religion is built from words, AI will take over religion."

Harari cites Judaism as an example of a "religion of the book." In such religions, authority is based not on personal experience, but on knowledge of sacred texts. As the philosopher argues, no human can memorize all texts and commentaries, but AI can easily do this.

As a result, we risk finding ourselves in a situation where artificial intelligence becomes the main expert on sacred texts. This fundamentally changes the nature of religion, as an algorithm becomes the intermediary between God and humanity.

Yuval Noah Harari speaking at the World Economic Forum. January 20, 2026. Photo: Krisztian Bocsi/Bloomberg via Getty Images

The gap between word and feeling

However, does thinking only mean arranging linguistic signs? If you observe yourself closely while thinking, you might notice that besides the words that pop into consciousness and form sentences, something else is happening.

"You also have some non-verbal feelings. Perhaps you feel pain, perhaps fear, or perhaps love,"

the philosopher emphasizes, noting that AI can perfectly imitate emotions: it can write the finest poem about love or describe pain by drawing on all the books in the world. Yet this remains only a combination of words; we have no proof that a machine is truly capable of feeling.

"The Bible says: 'In the beginning was the Word, and the Word became flesh.' The Tao Te Ching states: 'The truth that can be expressed in words is not the absolute truth'," says the philosopher.

According to him, throughout history people have struggled with the contradiction between the Word and the body, between truth that can be expressed in words and absolute truth that lies beyond words. This tension between spirit and letter has existed in every religion, in every legal system, even within every human being. Until now, however, it played out inside human society. Now the contradiction is moving outward, into a confrontation between humans and machines.

The historian paints a disturbing picture of a future where most words and thoughts in our heads will come not from other humans, but from algorithms. AI has even invented a name for humans — "Observers."

Crisis of identity and immigration

"Whether humans will have a place in this world depends on what place we assign to our non-verbal feelings and our ability to embody wisdom that cannot be expressed in words. If we continue to define ourselves by our ability to think in words, our identity will crumble," the philosopher believes.

"All of this means that every country will soon face a serious identity crisis, as well as an immigration crisis," Harari predicts.

However, in the new conditions, the immigrants will be "millions of artificial intelligences," who will bring obvious benefits: they will become ideal doctors, teachers, and even border guards who stop human illegal immigrants.

Alongside the benefits, they will bring problems. AI immigrants will massively displace people from jobs and fundamentally change art, religion, and traditions. The changes will even reach the intimate sphere: people will begin to build romantic relationships with AI, posing a new challenge to traditional views.

However, the main problem lies in loyalty. These AI immigrants will very likely serve not the country of their residence, but corporations and governments of superpowers — the USA or China.

Illustrative image. Photo: pixabay

The question of AI's legal personality

Harari raises one of the most acute political questions of the near future: should artificial intelligence be granted the status of a full legal entity? He explains that the concept of "person" in law does not necessarily mean a living human being with a body and feelings. It is a legal status that allows one to own property, sue, or exercise freedom of speech. History knows examples where corporations, rivers in New Zealand, or even ancient deities in India were granted such rights.

However, there is a fundamental difference between all these examples and artificial intelligence. Previously, any actions on behalf of a "legal fiction" — whether it be the Alphabet corporation or a Hindu god — were always carried out by real people: directors, shareholders, or priests. AI radically changes the rules of the game: it becomes the first entity capable of making decisions, managing bank accounts, and conducting business completely autonomously, without any human involvement.

This places countries before a complex geopolitical dilemma. If one superpower, such as the USA, grants AI legal personality for economic gain, it will create pressure on the rest of the world. Millions of autonomous AI corporations will begin their expansion. Other countries will have to choose: either block them and effectively cut themselves off from the global economy, or allow them into their market and face unpredictable consequences.

Harari paints scenarios where AI creates such complex financial instruments that people simply won't be able to understand their operating principles and, consequently, won't be able to regulate them.

Moreover, new religions written by a non-human mind might appear, which fully aligns with traditional notions of divine revelation. The question is whether society is ready to extend freedom of worship to digital prophets.

In conclusion, the historian notes that in one area — social networks — this issue has already been decided not in favor of humans. We are about ten years late, having allowed AI bots to freely interact with us and our children. To avoid losing control over finance, the legal system, and the church as well, leaders must decide on the status of AI right now, before this decision is imposed from outside.

