
A type of food that has been around for centuries, but is primed to be increasingly relevant to the future: Plant-Based “Meat.”

In this video series, the Galactic Public Archives takes bite-sized looks at a variety of terms, technologies, and ideas that are likely to be prominent in the future. Terms are constantly being coined and redefined as time passes. With ongoing breakthroughs and the development of new technologies and other resources, we seek to define what these things are and how they will impact our future.


Happy Easter…and a reality check: https://motherboard.vice.com/en_us/article/where-were-going-we-dont-need-popes #transhumanism #reason


Modern values, transhumanist technology, and the embrace of reason are making many Catholic rules and rituals absurd.

Everywhere I look, Pope Francis, the 266th pope of the Catholic Church, seems to be in the news—and he is being positively portrayed as a genuinely progressive leader. Frankly, this baffles me. Few major religions have as backward a philosophical and moral platform as Catholicism, so no leader of it could actually be genuinely progressive. Yet no one seems to pay attention to this; no one seems to be discussing the fact that Catholicism remains highly oppressive.

To even discuss how many archaic positions the Pope and Catholicism support would take volumes. But the one that irks me the most is that Pope Francis and his church remain broadly opposed to condoms and contraceptives. Putting aside that this view is terribly anti-environmental, with over 175 million Catholics in Africa, this position may well also lead to more AIDS deaths there.

Some weird religious stories w/ transhumanism. Expect the conflict between religion and transhumanism to get worse as closed-minded conservative viewpoints are challenged by radical science and a future with no need for an afterlife: http://barbwire.com/2017/04/06/cybernetic-messiah-transhuman…elligence/ & http://www.livebytheword.blog/google-directors-push-for-comp…s-explain/ & http://ctktexas.com/pastoral-backstory-march-30th-2017/


By J. Davila Ashcroft

The recent film Ghost in the Shell is a science fiction tale about a young girl (known as Major) used as the subject of a transhumanist/artificial intelligence experiment that turns her into a weapon. At first she complies, thinking the company behind the experiment saved her life after her family died. The truth, however, is that the company took her by force while she was a runaway. Major finds out that the company has done the same to others, and this knowledge causes her to turn on it. Throughout the story the viewer is confronted with the existential questions behind such an experiment, as Major struggles with the trauma of no longer feeling things like the warmth of human skin or the sensations of touch and taste, and feels less than human, though she is told many times she is better than human. While this is obviously a science fiction story, what might come as a surprise to some is that the film's subject matter is not just fiction. Transhumanism and artificial intelligence on the level explored in this film are all too real, and seem to be only a few years away.

Recently it was reported that Elon Musk of SpaceX fame had a rather disturbing meeting with Demis Hassabis. Hassabis is the man in charge of a troubling project with far-reaching plans akin to the Ghost in the Shell story, known as DeepMind. DeepMind is a Google project dedicated to exploring and developing all the possible uses of artificial intelligence. Musk stated during this meeting that the colonization of Mars is important because Hassabis' work will make Earth too dangerous for humans. By way of demonstrating how dangerous the goals of DeepMind are, one of its co-founders, Shane Legg, is reported to have stated, “I think human extinction will probably occur, and this technology will play a part in it.” Legg likely understands what critics of artificial intelligence have been saying for years: such technology has an almost certain probability of becoming “self-aware”, that is, of becoming aware of its own existence and abilities and developing distinct opinions and protocols that override those of its creators. If artificial intelligence does become sentient, that would mean, for advocates of A.I., that we would then owe such beings moral consideration. They, however, would owe humanity no such consideration if they perceived us as a danger to their existence, since we could simply disconnect them. In that scenario we would be an existential threat, and what do you think would come of that? Thus Legg's statement carries an important message.

The fast-advancing fields of neuroscience and computer science are on a collision course. David Cox, Assistant Professor of Molecular and Cellular Biology and Computer Science at Harvard, explains how his lab is working with others to reverse engineer how brains learn, starting with rats. By shedding light on what our machine learning algorithms are currently missing, this work promises to improve the capabilities of robots – with implications for jobs, laws and ethics.

http://www.weforum.org/


Algorithms with learning abilities collect personal data that are then used without users’ consent and even without their knowledge; autonomous weapons are under discussion in the United Nations; robots simulating emotions are deployed with vulnerable people; research projects are funded to develop humanoid robots; and artificial intelligence-based systems are used to evaluate people. One can consider these examples of AI and autonomous systems (AS) as great achievements, or claim that they endanger human freedom and dignity.

To fully benefit from the potential of these technologies, we need to make sure they are aligned with our moral values and ethical principles. AI and AS have to behave in ways that benefit people beyond reaching functional goals and addressing technical problems. This will build the level of trust in technology that is needed for a fruitful, pervasive use of AI/AS in our daily lives.


The first of my major #Libertarian policy articles for my California gubernatorial run, which broadens the foundational “non-aggression principle” to so-called negative natural phenomena. “In my opinion, and to most #transhumanist libertarians, death and aging are enemies of the people and of liberty (perhaps the greatest ones), similar to foreign invaders running up our shores.” A coordinated defense against them is philosophically warranted.


Many societies and social movements operate under a foundational philosophy that can often be summed up in a few words. The most famous, in much of the Western world, is the Golden Rule: Do unto others as you would have them do unto you. In libertarianism, the backbone of the political philosophy is the non-aggression principle (NAP). It holds that it’s immoral for anyone to use force against another person or their property except in cases of self-defense.

A challenge has recently been posed to the non-aggression principle. The thorny question libertarian transhumanists are increasingly asking in the 21st century is: Are so-called natural acts or occurrences immoral if they cause people to suffer? After all, taken to a logical philosophical extreme, cancer, aging, and giant asteroids arbitrarily crashing into the planet are all aggressive, forceful acts that harm the lives of humans.

Traditional libertarians set these issues aside, holding that natural phenomena cannot be morally aggressive. This thinking is supported by most people in Western culture, many of whom are religious and fundamentally believe only God is aware of and in total control of the universe. However, transhumanists, many of whom are secular like myself, don’t care about religious metaphysics and whether the universe is moral. (It might be, with or without an almighty God.) What transhumanists really care about are ways for our parents to age less, to make sure our kids don’t die from leukemia, and to save the thousands of species that vanish from Earth every year due to rising temperatures and other human-induced forces.

Is the risk of cultural stagnation a valid objection to rejuvenation therapies? You guessed it—nope.


This objection can be discussed from both a moral and a practical point of view. This article discusses the matter from a moral standpoint, and concludes it is a morally unacceptable objection. (Bummer, now I’ve spoiled it all for you.)

However, even if the objection can be dismissed on moral grounds, one may still argue that, hey, it may be immoral to let old people die to avoid cultural and social stagnation, but it’s still necessary.

One could argue that. But one would be wrong.