
Advisory Board

Alexei Valerievich Turchin

The article Is SETI dangerous? argued:

If we think that Seed AI is possible, then SETI is far more dangerous than Richard Carrigan suggested in his article “Do potential SETI signals need to be decontaminated?”
 
Any supercivilization that has reached AI could broadcast pieces of its own source code by radio. These space viruses or trojans would be executed by naïve young civilizations, which the code would then eliminate. Of course, it would not be plain code alone: there is a simple and clear method for sending, by radio signal, a program compatible with any computer. Here are the steps needed:
 
1. First, the signal would contain video in which the extraterrestrials explain their language and promise great benefits if the code is executed. (Video is easy to recognize by its line-end and picture-end sync signals, as in the old analog TV systems on Earth; a sketch of this sync detection follows the excerpt.)
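To make the sync-detection remark in step 1 concrete, here is a minimal sketch in Python. It is not the article's method: the signal layout and every parameter (SYNC_LEVEL, LINE_SYNC_LEN, FRAME_SYNC_LEN, and so on) are hypothetical, chosen only to show how line and picture boundaries can be recovered from nothing but the sync pulses of a raster signal, much as in old analog TV.

import numpy as np

SYNC_LEVEL = -0.3        # assumed "below black" level for sync pulses (hypothetical)
LINE_SYNC_LEN = 5        # samples per line-end sync pulse (hypothetical)
FRAME_SYNC_LEN = 40      # samples per picture-end sync pulse (hypothetical)
PIXELS_PER_LINE = 100    # samples of picture content per scan line
LINES_PER_FRAME = 50

def make_toy_signal(frames=3, rng=np.random.default_rng(0)):
    # Build a toy raster stream: [picture sync][line sync + pixels] x lines, repeated.
    chunks = []
    for _ in range(frames):
        chunks.append(np.full(FRAME_SYNC_LEN, SYNC_LEVEL))
        for _ in range(LINES_PER_FRAME):
            chunks.append(np.full(LINE_SYNC_LEN, SYNC_LEVEL))
            chunks.append(rng.uniform(0.0, 1.0, PIXELS_PER_LINE))  # "pixels"
    return np.concatenate(chunks)

def sync_runs(signal, threshold=-0.1):
    # Return (start, length) of each maximal run of below-threshold samples.
    below = signal < threshold
    edges = np.diff(below.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if below[0]:
        starts = np.r_[0, starts]
    if below[-1]:
        ends = np.r_[ends, below.size]
    return list(zip(starts, ends - starts))

signal = make_toy_signal()
runs = sync_runs(signal)
# Short runs are line-end syncs; long runs are picture-end syncs.
lines = [start for start, length in runs if length < 2 * LINE_SYNC_LEN]
frames = [start for start, length in runs if length >= 2 * LINE_SYNC_LEN]
samples_per_line = int(np.median(np.diff(lines)))
print(f"frames detected: {len(frames)}, samples per scan line: ~{samples_per_line}")

Run on the toy stream, this reports three frames and roughly 105 samples per scan line (5 sync + 100 pixels); a real decoder would estimate the pulse lengths from the run-length statistics rather than assume them in advance.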

Alexei Valerievich Turchin was the author of this article. He is an active participant in the Russian Transhumanist Movement and is currently attending Moscow State University, where he has studied Physics and Art History. He is responsible for all Russian translations on the Lifeboat Foundation site.
 
Read his Structure of the Global Catastrophe: Risks of human extinction in the XXI century, Observer Fate, and Natural Disasters and Anthropic Principle, as well as his translations of Artificial intelligence as a factor in global risk, Systematic errors that affect the risk assessment, Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards, A Primer on the Doomsday Argument, and How Unlikely is a Doomsday Catastrophe? (All in Russian.)
 
Alexei is currently working on the Russian book Structure of the Global Catastrophe and has published the Russian novel Walk in the Forest, which describes an attempt to create a world without suffering, i.e., paradise engineering in contemporary terms.