
The first of my major #Libertarian policy articles for my California gubernatorial run, which broadens the foundational “non-aggression principle” to so-called negative natural phenomena. “In my opinion, and to most #transhumanist libertarians, death and aging are enemies of the people and of liberty (perhaps the greatest ones), similar to foreign invaders running up our shores.” A coordinated defense against them is philosophically warranted.


Many societies and social movements operate under a foundational philosophy that can often be summed up in a few words. Most famous, in much of the Western world, is the Golden Rule: Do unto others as you would have them do unto you. In libertarianism, the backbone of the political philosophy is the non-aggression principle (NAP). It holds that it is immoral for anyone to use force against another person or their property except in cases of self-defense.

A challenge has recently been posed to the non-aggression principle. The thorny question libertarian transhumanists are increasingly asking in the 21st century is: Are so-called natural acts or occurrences immoral if they cause people to suffer? After all, taken to a logical philosophical extreme, cancer, aging, and giant asteroids arbitrarily crashing into the planet are all aggressive, forceful acts that harm the lives of humans.

Traditional libertarians set these issues aside, arguing that natural phenomena cannot be morally aggressive. This thinking is supported by most people in Western culture, many of whom are religious and fundamentally believe only God is aware and in total control of the universe. However, transhumanists—many of whom are secular, like myself—don’t care about religious metaphysics and whether the universe is moral. (It might be, with or without an almighty God.) What transhumanists really care about are ways for our parents to age less, for our kids not to die from leukemia, and for the thousands of species that vanish from Earth every year due to rising temperatures and other human-induced forces to be saved.

An impasse has developed among philosophers, and questions once thought absurd now bear the cold weight of reality. For example, automation, robots, and software may challenge, if not obliterate, capitalism as we know it before the 21st century is out. Should libertarians stand against this and develop tenets and safeguards to protect people’s livelihoods? I have argued yes: a universal basic income of some sort, guaranteeing a suitable livelihood, is philosophically in line with the non-aggression principle.

Read more

Is the risk of cultural stagnation a valid objection to rejuvenation therapies? You guessed it—nope.


This objection can be examined from both a moral and a practical point of view. This article discusses the matter from a moral standpoint and concludes that the objection is morally unacceptable. (Bummer, now I’ve spoiled it all for you.)

However, even if the objection can be dismissed on moral grounds, one may still argue that, hey, it may be immoral to let old people die to avoid cultural and social stagnation, but it’s still necessary.

One could argue that. But one would be wrong.

Read more

A few ideas on self-awareness and self-aware AIs.


I’ve always been a fan of androids as envisioned in Star Trek. More generally, I think the idea of an artificial intelligence with whom you can talk and to whom you can teach things is really cool. I admit it is just a little bit weird that I find the idea of teaching things to small children absolutely unattractive while finding the idea of doing the same with a machine thrilling, but that’s just the way it is for me. (I suppose the fact that a machine is unlikely to cry during the night and need its diaper changed every few hours might well be a factor at play here.)

Improvements in the field of AI are pretty much commonplace these days, though we’re not yet at the point where we can talk to a machine in natural language and be unable to tell it apart from a human. I used to take for granted that, one day, we would have androids who are self-aware and have emotions, exactly like people, with all the advantages of being a machine—such as mental multitasking, large computational power, and more efficient memory. While I still like the idea, nowadays I wonder whether it is actually feasible or sensible.

Don’t worry—I’m not going to give you a sermon on the ‘dangers’ of AI or anything like that. That’s the opposite of my stance on the matter. I’m not making a moral argument either: assuming you can build an android that has the entire spectrum of human emotions, doing so is, morally speaking, no different from having a child. You don’t (and can’t) ask the child beforehand if it wants to be born, or if it is ready to go through the emotional rollercoaster that is life; generally, you make a child because you want to, so it is in a way a rather selfish act. (Sorry, I am not of the school of thought according to which you’re ‘giving life to someone else’. Before you make them, there’s no one to give anything to. You’re not doing anyone a favour, certainly not your yet-to-be-conceived potential baby.) Similarly, building a human-like android is something you would do just because you can and because you want to.

Read more

A new well-written but not very favorable write-up on #transhumanism. Despite this, more and more publications are taking on the task of describing the movement and its science. My work is featured a bit.


On the eve of the 20th century, an obscure Russian man who had refused to publish any of his works began to finalize his ideas about resurrecting the dead and living forever. A friend of Leo Tolstoy’s, this enigmatic Russian, whose name was Nikolai Fyodorovich Fyodorov, had grand ideas about not only how to reanimate the dead but about the ethics of doing so, as well as about the moral and religious consequences of living outside of Death’s shadow. He was animated by a utopian desire: to unite all of humanity and to create a biblical paradise on Earth, where we would live on, spurred on by love. He was an immortalist: one who desired to conquer death through scientific means.

Despite the religious zeal of his notions—which a number of later Christian philosophers unsurprisingly deemed blasphemy—Fyodorov’s ideas were underpinned by a faith in something material: the ability of humans to redevelop and redefine themselves through science, eventually becoming so powerfully modified that they would defeat death itself. Unfortunately for him, Fyodorov—who had worked as a librarian, then later in the archives of the Ministry of Foreign Affairs—did not live to see his project enacted, as he died in 1903.

Fyodorov may be classified as an early transhumanist. Transhumanism is, broadly, a set of ideas about how to technologically refine and redesign humans, such that we will eventually be able to escape death itself. This desire to live forever is strongly tied to human history and art; indeed, what may be the earliest of all epics, the Sumerian Epic of Gilgamesh, portrays a character who seeks a sacred plant in the black depths of the sea that will grant him immortality. Today, however, immortality is the stuff of religions and transhumanism, and how these two are different is not always clear to outsiders.

Contemporary schemes to beat death usually entail being able to “upload” our minds into computers, then downloading our minds into new, better bodies: cyborg or robot bodies immune to the weaknesses that so often define us in our current prisons of mere flesh and blood. The transhumanist movement—which is many movements under one umbrella—is understandably controversial; in 2004, in a special issue of Foreign Policy devoted to deadly ideas, Francis Fukuyama famously dubbed transhumanism one of the most dangerous ideas in human history. And many, myself included, have a natural tendency to feel a kind of alienation from, if not repulsion towards, the idea of having our bodies—after our hearts stop—flushed free of blood and filled with liquid nitrogen, suspending us, supposedly, until our minds can be uploaded into a new, likely robotic, body—one harder, better, and faster, as Daft Punk might have put it.

Read more

Super-smart robots with artificial intelligence are pretty much a foregone conclusion: technology is moving in that direction with lightning speed. But what if those robots gain consciousness? Will they deserve the same rights as humans? The ethics of this are tricky. Explore them in the video below.

Read more

“As academics we can sign petitions, but it is not enough.”

As academics we can sign petitions, but it is not enough. Scott Aaronson wrote very eloquently about this issue after the initial ban was announced (see also Terry Tao). My department has seen a dramatic decrease in the number of applicants in general and not just from Iran. We were just informed that we can no longer make Teaching Assistant offers for students who are unlikely to get a visa to come here.

The Department of Homeland Security has demonstrated its blatant disregard for moral norms. Why should we trust it to respect scientific norms? What confidence do we have that funding will not be used in some coercive way? What does it say to our students when we ask them to work for DHS? Yes, the government is big, but at some point the argument that it’s mostly the guy at the top who is bad while the rest of the agency is still committed to good science becomes just too hard to swallow. I decided that I can’t square that circle. Each one of us should think hard about whether we want to.

Read more

Here’s my take on why the overpopulation objection to rejuvenation is morally unacceptable.


In this article, I’ll try to show that the overpopulation objection to rejuvenation is morally deplorable. For this purpose, whether or not the world is overpopulated, or might become so in the future, doesn’t matter. I’ll deal with facts and data in the two other articles dedicated to this objection; for now, all I want is to get to the conclusion that not developing rejuvenation for the sake of avoiding overpopulation is morally unacceptable (especially when considering the obvious and ethically more sound alternative), and thus overpopulation doesn’t constitute a valid objection to rejuvenation.

I’ll start with an example. Imagine a family of two parents and three children. They’re not doing too well financially, and they live packed in a tiny apartment with no chance of moving somewhere larger. Clearly, they cannot afford to have more children, but they would really like to have more anyway. What should they do?

The only reasonable answer is that they should not have any more children until they can afford to have them. Throwing away the old ones for the sake of some other child that hasn’t even been conceived yet would be nothing short of sheer madness.

Read more