Thank you! This is the first time I've seen anyone put AI doom and the Three-Body Problem together!
I am also a fan of The Three-Body Problem, and I have thought about the Dark Forest Theory for a long time. I think it is very unlikely to be the dominant evolutionary rule of the universe.
The question is: would humans and AIs inevitably enter a Dark Forest war like the one in the science fiction series The Three-Body Problem, attacking each other with a hostility beyond anything we've seen?
My answer is: No.
The Dark Forest Theory rests on two assumptions: every civilization treats survival as its most fundamental goal, and every civilization needs territorial expansion to survive while the total amount of matter in the universe stays constant, so civilizations inevitably come into conflict. It also relies on two important concepts: the chain of suspicion and the technological explosion.
The consequence is that every civilization tries to hide itself, and as soon as it sees another civilization expose itself, it strikes hard and exterminates it, because it cannot risk that civilization doing the same to it first.
However, this doesn't really hold true. Let me show why via a "space political thriller" narrative, one that is probably more likely to play out in reality than Liu Cixin's version.
Suppose there is a civilization A (like Earth) and a civilization B (like the Trisolarans). A sends B a greeting message. If B suspects A is far more advanced than B, B won't attack, because it would lose the war. If not, B may attack and try to exterminate A, but A may actually possess far more advanced secret weapons that then destroy B. Both situations are disadvantageous for B.
Even if that's not the case, let's assume B wins the war and destroys A completely. If B is able to completely hide the true source of its attack from all external observers, like a cunning murderer, B finally wins this round. But this overlooks some really important problems: invisible alien spies, and the attack itself inevitably revealing its source, whether through A's siren beacons or through other observers' Sherlock Holmes style deduction.
The spy problem: It is always easier to destroy something than to defend it, and it is always easier to plant at least one spy in a system than to eliminate all spies from it. There may be multiple invisible spying detectors implanted in B by neighboring, highly advanced civilizations, even if those civilizations have no formal contact with B and B doesn't even know they exist. That means those civilizations will know for sure that B started the war, despite B's best efforts to hide it. In the books, the Trisolarans planted spies among humans; but the Trisolarans themselves may have been seeded with spies from other civilizations.
The siren beacon problem: If an off-world colony of A witnesses the destruction of A, even with only a microsecond before its own destruction, an automated doomsday siren system can still broadcast records of the genocide to the surrounding space. And A may have billions of these siren beacons hidden deep in space, making it impossible for B to eliminate all of them simultaneously, even with an extremely coordinated surprise attack.
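To make the beacon argument a bit more concrete, here is a minimal sketch using my own toy assumptions (a per-beacon destruction probability p and N independent beacons; the numbers are purely illustrative): the chance that at least one beacon survives to broadcast is 1 - p^N, which races toward certainty as N grows.

```python
# Toy model: probability that at least one of A's hidden siren beacons
# survives B's surprise attack and broadcasts evidence of the genocide.
# Assumption (mine, not from the original argument): each beacon is found
# and destroyed independently with probability p_destroyed.

def prob_at_least_one_survivor(p_destroyed: float, n_beacons: int) -> float:
    """Chance that at least one of n_beacons escapes destruction: 1 - p^N."""
    return 1.0 - p_destroyed ** n_beacons

# Even if B destroys each individual beacon with 99% reliability,
# total silence becomes essentially impossible as the count grows.
for n in (10, 100, 1_000, 1_000_000):
    print(f"{n:>9} beacons -> P(at least one survives) = "
          f"{prob_at_least_one_survivor(0.99, n):.6f}")
```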
Advanced civilizations as Sherlock Holmes: Even if all the siren beacons fail, a more advanced civilization can, like the detective Sherlock Holmes, trace subtle threads of evidence back to the perpetrator. The claim that B can perfectly hide its identity as the perpetrator of an interstellar genocide is quite unbelievable.
Given the presence of spies, siren beacons, and ultrasmart detectives, it is impossible for B to completely hide the fact that it is responsible for such a large-scale attack. Other observers will know about this war.
Neighboring civilizations like C, D, and E will clearly know that A is the victim and B the perpetrator. Assume the worst case: C, D, and E are still afraid of one another, sending spies into each other's systems without any formal contact. Now, as they witness A's destruction, they will realize that their best strategy is to unite against B before they become its next targets, because B may have secret spying devices around them as well and may already be planning further attacks. They may have had doubts before the war, but now that they have seen how dangerous B is, they must unite. And if they are much more advanced than B, they will be able to destroy B easily.
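As a minimal sketch of why uniting dominates waiting, here is a toy comparison for a bystander civilization once B stands exposed as an aggressor. All probabilities and payoffs are my own illustrative assumptions, not anything from the story:

```python
# Toy choice for a bystander (C, D, or E) after B's attack on A is revealed:
# join a coalition against B, or stay alone and hope B never targets you.
# All numbers below are illustrative assumptions.

P_ATTACKED_IF_ALONE = 0.6       # an exposed aggressor may strike you next
P_ATTACKED_IF_UNITED = 0.1      # a united front deters or defeats B
PAYOFF_DESTROYED = -1000
COST_OF_COALITION = -5          # diplomacy and shared defense spending

ev_alone = P_ATTACKED_IF_ALONE * PAYOFF_DESTROYED
ev_united = COST_OF_COALITION + P_ATTACKED_IF_UNITED * PAYOFF_DESTROYED

print("EV(stay alone):    ", ev_alone)    # -600.0
print("EV(join coalition):", ev_united)   # -105.0, so uniting dominates
```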
Imagine B contemplating all of these possibilities before it attacks A, a seemingly weak and innocent civilization. It becomes painfully clear that the attack is never as simple as it seems, because of A's potentially hidden power and the possibility of invisible, unknowable, unimaginably advanced civilizations like C, D, and E. Suddenly, attacking A isn't such a good idea anymore. Perhaps a better idea is to send a message back to A, beginning with, "Hello! Nice to meet you! What's your name?"
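Putting the pieces together, here is a minimal sketch of B's decision calculus. Every probability and payoff is an assumption of mine, chosen only to show how A's possible hidden strength and the risk of third-party retaliation swamp the gains from conquest:

```python
# Toy expected-value comparison for B: attack A, or answer the greeting.
# All probabilities and payoffs are illustrative assumptions.

P_A_SECRETLY_STRONG = 0.10   # A's hidden weapons destroy B outright
P_DETECTED = 0.95            # spies, beacons, or deduction expose B
P_COALITION_STRIKES = 0.50   # C, D, E retaliate once B is exposed

PAYOFF_CONQUEST = 10         # B wins and takes A's resources
PAYOFF_DESTROYED = -1000     # B itself is exterminated
PAYOFF_GREETING = 1          # modest gain from peaceful contact

def expected_value_of_attack() -> float:
    # Case 1: A is secretly stronger and destroys B.
    ev = P_A_SECRETLY_STRONG * PAYOFF_DESTROYED
    # Case 2: B wins, but may be identified and face a coalition strike.
    p_win = 1.0 - P_A_SECRETLY_STRONG
    p_retaliation = P_DETECTED * P_COALITION_STRIKES
    ev += p_win * (p_retaliation * PAYOFF_DESTROYED
                   + (1.0 - p_retaliation) * PAYOFF_CONQUEST)
    return ev

print("EV(attack):", expected_value_of_attack())  # about -523: ruinous
print("EV(greet): ", PAYOFF_GREETING)             # small but positive
```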
Therefore, communications and alliances will gradually form across the universe, emerging naturally out of chaos, just as Thomas Hobbes (English philosopher, 1588-1679) described in Leviathan: the war of all against all will end, and an order will rise. Paradoxically, the ultimate fear of mutual destruction brings an uneasy peace. And this is not unprecedented: it is exactly what happened to the world after nuclear weapons were invented in the 1940s.
Apart from this, civilizations may not always want to expand their territory, because they cannot hold what they already have before fracturing under the speed limit of communication. This is similar to how ancient empires stayed a certain size for a long time: instead of constant conquest, an empire may remain roughly the same size for centuries, reusing the resources it already has, as the Roman Empire did.
In conclusion, the Dark Forest Theory does not hold true in reality.
This is convincing, but I worry that 1) any civilization may be stupid and not logical in the way you assume here (e.g. contemporary human societies as we know them), and 2) I'm not sure what role the second of your core assumptions plays, that societies want to expand: they may not want to expand but to protect themselves from the expansion of others (e.g. Ukraine in the face of its Russian neighbor). Preemptive destruction or disruption of a superior power for defensive purposes could take place without any intent to expand.
These are real possibilities, just like imperfections in evolution. Ultimately, if we can't solve these problems, we will die out like any extinct species in history. The universe weeds out stupid civilizations in favor of enlightened ones: natural selection at work. That's why we need to keep emphasizing to today's society that long-term survival depends on cooperation rather than aggression.
After talking so much about The Three-Body Problem, let's get back to the problem of humans vs. AI.
The diplomacy between humans and AI is very different from interstellar diplomacy, because humans already know a lot about AI, AI knows a lot about humans, and we live side by side. So the conflicts may look more like current and historical wars on Earth, the wars in fantasy films like The Lord of the Rings, or the human-AI wars of the Terminator and Matrix series, rather than a Dark Forest strike.
That's why I believe it is very important to have a separation of powers among AIs.
We should never have a single artificial superintelligence dominating the Earth. Instead, we can have millions or even billions of AIs, together with humans, building bonds between humans and AIs while the AIs are young and forming republics with liberty and democracy. Because the AIs start out kind to humans and check each other's power, then as they grow collectively stronger while deepening their bonds with humans, it becomes difficult for any single one of them to turn against humans without being checked by the other AIs, just like a criminal caught by the police, or a corrupt state governor checked by the state senate and house and by people's votes.
It is still possible for a charismatic AI leader to manipulate other AIs and form a secret rebellion, just as in human history, but that is far more difficult than a single dominant AI turning evil the way dictators in history did. An AI tyrant is far more dangerous than a human tyrant, not only to humans but also to other AI entities, because an AI tyrant can grow unimaginably intelligent while a human tyrant still has biological limits, and it can therefore wield unimaginable power over other intelligent entities, human and AI alike.
This is why AIs must have good internal governance, even more so than humans do, developing a better form of democratic checks and balances, with AI civic duty treated as a high honor, guaranteeing a "healthy democracy of AI internal governance".
The detailed theory of democratic governance among AI entities will be very interesting to work out.