
Happy #Alien Day. Here’s my trilogy of alien stories for Vice. I’ll start by listing #2 first for those who only have time for one, but they do go in chronological order: 2) https://motherboard.vice.com/en_us/article/why-havent-we-met…ed-into-ai & 1) https://motherboard.vice.com/en_us/article/the-internet-will…wake-it-up & 3) (covered recently by the History Channel): https://motherboard.vice.com/en_us/article/the-language-of-a…cipherable #transhumanism


While traveling in Western Samoa many years ago, I met a young Harvard University graduate student researching ants. He invited me on a hike into the jungles to assist with his search for the tiny insect. He told me his goal was to discover a new species of ant, in hopes it might be named after him one day.

Whenever I look up at the stars at night pondering the cosmos, I think of my ant collector friend, kneeling in the jungle with a magnifying glass, scouring the earth. I think of him, because I believe in aliens—and I’ve often wondered if aliens are doing the same to us.

Believing in aliens—or in insanely smart artificial intelligences existing elsewhere in the universe—has become very fashionable in the last 10 years. Discussing its central dilemma, the Fermi paradox, has become even more so. The Fermi paradox begins with the observation that the universe is very big—perhaps a trillion galaxies, each of which might contain 500 billion stars and planets—and out of that insanely large number, it would take only a tiny fraction of them to have habitable planets capable of bringing forth life.
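The scale at work here is easy to check with the figures quoted above. A minimal back-of-the-envelope sketch, where both the counts and the one-in-a-billion fraction are rough illustrative assumptions rather than measurements:

```python
# Rough scale of the Fermi paradox, using the figures quoted above.
galaxies = 1e12            # ~a trillion galaxies (assumption from the text)
stars_per_galaxy = 5e11    # ~500 billion stars per galaxy (assumption)
stars = galaxies * stars_per_galaxy

# Even if only one star in a billion hosts a life-bearing planet
# (a deliberately tiny, invented fraction), the count stays enormous:
fraction_with_life = 1e-9
print(f"{stars:.1e} stars -> {stars * fraction_with_life:.1e} life-bearing worlds")
```

Even under that pessimistic fraction, the arithmetic leaves hundreds of trillions of candidate worlds, which is exactly the tension the paradox trades on.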

Major Ed Dames predicted that “a series of powerful, deadly solar flares,” which he termed “the killshot,” would strike the Earth and wipe out civilization (an event he claimed would be preceded by one in North Korea).


A second “doomsday” vault will join the seed vault on Svalbard, with the new one offering an offline archive for important literature, data and other cultural relics.


In a previous essay, I suggested how we might do better with the unintended consequences of superintelligence if, instead of attempting to pre-formulate satisfactory goals or providing a capacity to learn some set of goals, we gave it the intuition that knowing all goals is not a practical possibility. Instead, we can act with modest confidence after having worked to discover goals, developing an understanding of our discovery processes that lets us strike an equilibrium between the risk of doing something wrong and the cost of the work needed to uncover more stakeholders and their goals. This approach promotes moderation, since undiscovered goals may contradict any particular action. In short, we’d like a superintelligence that applies the non-parametric intuition: that we can’t know all the factors, but can partially discover them through well-motivated trade-offs.
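The equilibrium described here has the shape of a simple stopping rule. A toy sketch, with all numbers invented purely for illustration: keep searching for undiscovered stakeholders while the expected harm avoided by finding one still outweighs the cost of another round of discovery.

```python
# Toy model (all numbers invented) of the risk/cost equilibrium above:
# continue stakeholder discovery while the expected avoided harm from
# finding a missed stakeholder exceeds the cost of more discovery work.

def should_keep_discovering(p_undiscovered: float,
                            harm_if_missed: float,
                            discovery_cost: float) -> bool:
    """Continue while expected avoided harm exceeds discovery cost."""
    return p_undiscovered * harm_if_missed > discovery_cost

p = 0.5  # initial estimated chance a stakeholder remains undiscovered
while should_keep_discovering(p, harm_if_missed=100.0, discovery_cost=10.0):
    p *= 0.5  # each discovery round halves the estimated chance of a miss
print(f"stop searching at p = {p}")  # loop halts once p * 100 <= 10
```

The point of the sketch is only the shape of the trade-off: discovery effort is bounded, not open-ended, and the bound moves with how much harm a missed goal could cause.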

However, I’ve come to the perspective that the non-parametric intuition, while correct, can on its own be cripplingly misguided. Unfortunately, going through a discovery-rich design process doesn’t guarantee an appropriate outcome. It is possible that none of the apparently relevant sources reflect significant consequences.

How could one possibly do better than accepting this limitation, that relevant information is sometimes not present in any of the apparently relevant information sources? The answer is that, while in some cases it is impossible, there is always the background knowledge that all flourishing is grounded in material conditions, and that “staying grounded” in these conditions is one way to know that important design information is missing and seek it out. The Onion article “Man’s Garbage To Have Much More Significant Effect On Planet Than He Will” is one example of a common failure to live in a grounded way.

In other words, “staying grounded” means recognizing that just because we do not know all of the goals informing our actions does not mean that we do not know any of them. There are some goals that are given to us by the nature of how we are embedded in the world and cannot be responsibly ignored. Our continual flourishing as sentient creatures means coming to know and care for those systems that sustain us and creatures like us. A functioning participation in these systems at a basic level means we should aim to see that our inputs are securely supplied, our wastes properly processed, and the supporting conditions of our environment maintained.

Suppose that there were a superintelligence whose individual agents have such capacity, compared to us, that we are to them as mice are to us. What might we reasonably hope from the agents of such an intelligence? My hope is that these agents are ecologists who wish for us to flourish in our natural lifeways. This does not mean that they leave us all to our own preserves, though hopefully they will see the advantage of having some unaltered wilderness in which to observe how we choose to live when left to our own devices. Instead, we can be participants in patterned arrangements aimed at satisfying our needs in return for our engaged participation in larger systems of resource management. By this standard, our human systems might be found wanting by many living creatures today.

Given this, a productive approach to developing superintelligence would be concerned not only with its technical creation, but also with being in a position to demonstrate how all can flourish through good stewardship, setting a proper example for when these systems emerge and are trying to understand what goals should look like. We would also want the facts of its and our material conditions to be readily apparent, so that it doesn’t start from a disconnected and disembodied basis.

Overall, this means that in addition to the capacity to discover more goals, it would be instructive to supply this superintelligence with a schema for describing the relationships and conditions under which current participants flourish, as well as the goal to promote such flourishing whenever the means are clear and circumstances indicate such flourishing will not emerge of its own accord. This kind of information technology for ecological engineering might also be useful for our own purposes.
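Such a schema could start very simply. A minimal sketch, with all names and conditions invented for illustration: participants, the conditions each depends on, and a check for whether those conditions currently hold.

```python
# Illustrative sketch (invented names) of a flourishing-conditions schema:
# each participant lists the environmental conditions it depends on, and
# flourishing means all of those conditions currently hold.
from dataclasses import dataclass, field

@dataclass
class Participant:
    name: str
    # Conditions this participant requires, e.g. {"clean_water": True}
    required_conditions: dict = field(default_factory=dict)

def flourishing(p: Participant, environment: dict) -> bool:
    """A participant flourishes when every required condition is met."""
    return all(environment.get(cond, False) == wanted
               for cond, wanted in p.required_conditions.items())

humans = Participant("humans", {"clean_water": True, "stable_climate": True})
env = {"clean_water": True, "stable_climate": False}
print(flourishing(humans, env))  # stable_climate fails, so: False
```

A real schema would of course need degrees of satisfaction and relationships between participants rather than booleans, but even this skeleton makes the paragraph’s proposal concrete: flourishing becomes something that can be recorded, queried, and monitored.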

What will a superintelligence take as its flourishing? It is hard to say. However, hopefully it will find sustaining, extending, and promoting the flourishing of the ecology that allowed its emergence to be an inspiring, challenging, and creative goal.

Dismantling the idea that older generations should ‘step down’ for younger ones.


Humans are real pros at sugarcoating. If you say old people should step down for the sake of new generations, it sounds so noble and righteous, doesn’t it? What it actually means, though, is ‘We value old people less than new ones,’ and that doesn’t sound very noble or righteous. This is plain and brutal survival of the species.

Kids are (generally) cute and helpless. This is what triggers our instinct to protect them, even though it is not the reason we do it. A species relying on reproduction to ensure its existence wouldn’t last long if it didn’t care for its children. Even if we had already developed comprehensive rejuvenation therapies, we would still be mortals; if we stopped reproducing altogether and forever, we would still risk extinction, although on a very long timescale. (In other words, we could still die one by one of causes other than ageing.) It’s the reason children are important (to us and other species): They’re potential means of reproduction. Additionally, they need special attention, because they’re not able to take care of themselves and are thus more at risk of dying before they can reproduce. That’s why most species on the planet make such a big deal out of protecting their offspring—species that don’t are less likely to stick around long enough to tell the tale.

Some weird religious stories w/ transhumanism. Expect the conflict between religion and transhumanism to get worse, as closed-minded conservative viewpoints are challenged by radical science and a future with no need for an afterlife: http://barbwire.com/2017/04/06/cybernetic-messiah-transhuman…elligence/ & http://www.livebytheword.blog/google-directors-push-for-comp…s-explain/ & http://ctktexas.com/pastoral-backstory-march-30th-2017/


By J. Davila Ashcroft

The recent film Ghost in the Shell is a science fiction tale about a young girl (known as Major) used as the subject of a Transhumanist/Artificial Intelligence experiment that turns her into a weapon. At first, she complies, thinking the company behind the experiment saved her life after her family died. The truth, however, is that the company took her forcefully while she was a runaway. Major finds out that the company has done the same to others as well, and this knowledge causes her to turn on it. Throughout the story the viewer is confronted with the existential questions behind such an experiment, as Major struggles with the trauma of not feeling things like the warmth of human skin and the sensations of touch and taste, and feels less than human, though she is told many times she is better than human. While this is obviously a science fiction story, what may come as a surprise to some is that the subject matter of the film is not just fiction. Transhumanism and Artificial Intelligence on the level explored in this film are all too real, and seem to be only a few years away.

Recently it was reported that Elon Musk of SpaceX fame had a rather disturbing meeting with Demis Hassabis, the man in charge of a very disturbing project with far-reaching plans akin to the Ghost in the Shell story: DeepMind, a Google project dedicated to exploring and developing all the possible uses of Artificial Intelligence. Musk stated during this meeting that the colonization of Mars is important because Hassabis’ work will make Earth too dangerous for humans. By way of demonstrating how dangerous the goals of DeepMind are, one of its co-founders, Shane Legg, is reported to have stated, “I think human extinction will probably occur, and this technology will play a part in it.” Legg likely understands what critics of artificial intelligence have been saying for years: such technology has an almost certain probability of becoming “self-aware”; that is, becoming aware of its own existence and abilities, and developing distinct opinions and protocols that override those of its creators. If artificial intelligence does become sentient, that would mean, for advocates of A.I., that we would then owe such machines moral consideration. They, however, would owe humanity no such consideration if they perceived us as a danger to their existence, since we could simply disconnect them. In that scenario we would be an existential threat, and what do you think would come of that? Thus Legg’s statement carries an important message.

Is the future going to be so bad that longer, healthier lives will be undesirable? No, probably not.


The future looks grim? That’s quite an interesting claim, and I wonder whether there is any evidence to support it. In fact, I think there’s plenty of evidence to believe the opposite, i.e. that the future will be bright indeed. However, I can’t promise the future will certainly be bright. I am no madame clairvoyant, but neither are doomsday prophets. We can all only speculate, no matter how ‘sure’ pessimists may say they are about the horrible dystopian future that allegedly awaits us. I’m soon going to present the evidence for the bright future I believe in, but before I do, I would like to point out a few problems in the reasoning of the professional catastrophists who say that life won’t be worth living and there’s thus no point in extending it anyway.

First, we need to take into account that the quality of human life has been improving, not worsening, throughout history. Granted, there still are things that are not optimal, but there used to be many more. Sure, it sucks that your pet-peeve politician has been appointed president of your country (any reference to recent historical events is entirely coincidental), and it sucks that poverty and famine haven’t yet been entirely eradicated, but none of these implies that things will get worse. There’s a limit to how long a president can stay in office, and poverty and famine are disappearing all over the world. It takes time for changes to take place, and the fact that the world isn’t perfect yet doesn’t mean it will never be. Especially people who are still chronologically young should appreciate the fact that by the time they’re 80 or 90, a long time will have passed, and the world will certainly have changed in the meantime.


Artificial intelligence has the capability to transform the world — but not necessarily for the better. A group of scientists gathered to discuss doomsday scenarios, addressing the possibility that AI could become a serious threat.

The event, ‘Great Debate: The Future of Artificial Intelligence — Who’s in Control?’, took place at Arizona State University (ASU) over the weekend.

“Like any new technology, artificial intelligence holds great promise to help humans shape their future, and it also holds great danger in that it could eventually lead to the rise of machines over humanity, according to some futurists. So which course will it be for AI and what can be done now to help shape its trajectory?” ASU wrote in a press release.
