Search results for 'Humanity': Page 39

Jan 25, 2014

“TRANSHUMAN VISIONS 2.0 — East Bay Conference” — Humanity+ speakers and co-sponsorship

Posted by in category: futurism

By: Hank Pellesier, Brighter Brains - H+

Humanity+ members will be speaking at an exciting San Francisco East Bay transhumanist conference on March 1 that fans of Humanity+ should consider attending. Chairman Natasha Vita-More is a keynote speaker (along with her husband, Max More), and recent Humanity+ board member Linda M. Glenn is also in the illustrious lineup. Humanity+ is also co-sponsoring the event.

Read more

Apr 11, 2013

Faith in the Fat of Fate may be Fatal for Humanity

Posted by in categories: existential risks, futurism, human trajectories, philosophy

This essay was originally published at Transhumanity.

They don’t call it fatal for nothing. Infatuation with the fat of fate, duty to destiny, and belief in any sort of preordainity whatsoever – omnipotent deities notwithstanding – constitutes an increase in existential risk, albeit indirectly. If we think that events have been predetermined, it follows that we would think that our actions make no difference in the long run and that we have no control over the shape of those futures still fetal. This scales to the perceived ineffectiveness of combating or seeking to mitigate existential risk for those who believe so fatalistically. Thus to combat belief in fate, and the resultant disillusionment in our ability to wreak roiling revisement upon the whorl of the world, is to combat existential risk as well.

It also works to undermine the perceived effectiveness of humanity’s ability to mitigate existential risk along another avenue. Belief in fate usually correlates with the notion that the nature of events is ordered with a reason or purpose in mind, as opposed to being haphazard and lacking a specific projected end. Thus believers-in-fate are not only more likely to doubt the credibility of claims that existential risk could even occur (reasoning that if events have purpose and utility, and conform to a mindfully-created order, then they would be good things more often than bad things) but also to feel that, if they were to occur, it would be for a greater underlying reason or purpose.

Thus, belief in fate indirectly increases existential risk both (a) by undermining the perceived effectiveness of attempts to mitigate existential risk, deriving from the perceived ineffectiveness of humanity’s ability to shape the course and nature of events and to effect change in the world in general, and (b) by undermining the perceived likelihood of any existential risk culminating in humanity’s extinction, stemming from the connotations of order and purpose associated with fate.

Continue reading “Faith in the Fat of Fate may be Fatal for Humanity” »

Sep 26, 2012

What are End Of Humanity (EOH) events?

Posted by in categories: defense, ethics, existential risks, lifeboat, philosophy, physics, space, sustainability, transparency, treaties

EOH events are events that cause the irreversible termination of humanity. They are not the events that begin the physical destruction of humanity (by then it would be too late), but fundamental, non-threatening and inconspicuous events that eventually lead to the irreversible physical destruction of humanity. Using nations and civilizations as examples, I explain how.

(1) Fundamental: These events have to be fundamental to the survival of the human species or else they cannot negatively impact the foundation of humanity’s existence.

On a much smaller scale, drought and war can destroy, and have destroyed, nations and civilizations. However, that is not always the case. For example, it is still not known what caused the demise of the Mayan civilization.

The act of war can lead to the irreversible destruction of a nation or civilization, but the equivalent EOH event lies further back in history and can only be identified by asking who and why.

Continue reading “What are End Of Humanity (EOH) events?” »

Aug 22, 2012

Humanity’s Invention in the Cosmos is Kindness: I request Permission to Save your Lives

Posted by in categories: existential risks, particle physics

I know I am not authorized to do that, since you do not know me. But one third of all fundamental scientists in the world (those who deal with chaos and nonlinearity) are on my side. Two thirds (those who deal with quanta and gravitation) do not believe that a chaos theorist has the right to teach them anything. Much as in economics, where nonlinearity was a taboo for many decades, in fundamental physics it still is.

So I beg the planet’s general population for mercy: please forgive the linear community in physics for not allowing the proof of danger that has lain on the table for four years to be discussed. Such a thing is not occurring for the first time in history.

Also, everyone understands that CERN “cannot” update its safety report if doing so would involve discussing a danger that would not permit its experiment to be continued before a counterproof has been found.

All I ever requested is such a counterproof (http://www.wissensnavigator.com/documents/PetitiontoCERN.pdf; http://www.aljazeera.com/programmes/insidestory/2012/07/2012759585764599.html). “Might is might.” Politicians have to rely on might, that is, majority opinion, and Western opinion at that. Scientists’ opinions unfortunately change all the time, since new discoveries arise in a point-like fashion and spread slowly.

Continue reading “Humanity’s Invention in the Cosmos is Kindness: I request Permission to Save your Lives” »

Jan 27, 2012

Did Nature Put a Chain Trap to Humanity? (and other Writings)

Posted by in categories: existential risks, particle physics

[Disclaimer: This contribution reflects individual sentiment, not the views of the Lifeboat Foundation or of the scientific community in general — Web Admin]

If one of the following three elements can be defused, the black-hole danger is over:

#1: Black holes possess radically new properties in general relativity that make them both much more likely to arise at CERN and undetectable there.

#2: A new chaotic attractor (the rotation-symmetric Shil’nikov-Kleiner attractor) exists in real space, which implies exponential growth of black holes inside matter.

Continue reading “Did Nature Put a Chain Trap to Humanity? (and other Writings)” »

Jun 12, 2010

My presentation at the Humanity+ summit

Posted by in categories: futurism, robotics/AI

During the lunch break I am present virtually in the hall of the summit, as a face on a Skype account — I didn’t get a visa and am staying in Moscow. But ironically, my situation resembles what I am speaking about: the risk of a remote AI that was created by aliens millions of light years from Earth and sent via radio signals. The main difference is that they communicate one way, while I have duplex mode.

This is my video presentation on YouTube:
Risks of SETI, for Humanity+ 2010 summit

Sep 2, 2008

Threats to humanity – the old and the resurgent

Posted by in categories: biological, biotech/medical, geopolitics

Following is a discussion of two potential threats to humanity – one that has existed for eons, and a second that has recently resurfaced after we thought it had been laid to rest.

First, a recent story on PhysOrg describes the work researchers at Vanderbilt University have performed in isolating antibodies from elderly people who had survived the 1918 flu pandemic. This comes three years after researchers at Mount Sinai and the Armed Forces Institute of Pathology in Washington, D.C., isolated the virus that caused the outbreak from the frozen bodies of people in Alaska who had died in the pandemic.

In addition to being an impressive achievement of biomedical science, which involved isolating antibody-secreting B cells from donors and generating “immortalized” cell lines to produce large amounts of antibodies, this research also demonstrates the amazing memory the immune system has (90 years!), as well as the ability scientists have to use tissue samples from people born nearly a century ago and fashion them into a potential weapon against future similar outbreaks. Indeed, these manufactured antibodies proved effective against the 1918 flu virus when tested in mice.

Continue reading “Threats to humanity – the old and the resurgent” »

Feb 16, 2008

Safeguarding Humanity

Posted by in categories: existential risks, futurism

I was born into a world in which no individual or group claimed to own the mission embodied in the Lifeboat Foundation’s two-word motto. Government agencies, charitable organizations, universities, hospitals, religious institutions — all might have laid claim to some piece of the puzzle. But safeguarding humanity? That was out of everyone’s scope. It would have been a plausible motto only for comic-book organizations such as the Justice League or the Guardians of the Universe.

Take the United Nations, conceived in the midst of the Second World War and brought into its own after the war’s conclusion. The UN Charter states that the United Nations exists:

  • to save succeeding generations from the scourge of war, which twice in our lifetime has brought untold sorrow to mankind, and
  • to reaffirm faith in fundamental human rights, in the dignity and worth of the human person, in the equal rights of men and women and of nations large and small, and
  • to establish conditions under which justice and respect for the obligations arising from treaties and other sources of international law can be maintained, and
  • to promote social progress and better standards of life in larger freedom

All of these are noble, and incredibly important, aims. But even the United Nations manages to name only one existential risk, warfare, which it is pledged to help prevent. Anyone reading this can probably cite a half dozen more.

It is both exciting and daunting to live in an age in which a group like the Lifeboat Foundation can exist outside of the realm of fantasy. It’s exciting because our awareness of possibility is so much greater than it was even a generation or two ago. And it is daunting for exactly the same reason. We can envision plausible triumphs for humanity that really do transcend our wildest dreams, or at least our most glorious fantasies as articulated a few decades ago. Likewise, that worst of all possible outcomes — the sudden and utter disappearance of our civilization, or of our species, or of life itself — now presents itself as the end result of not just one possible calamity, but of many.

Continue reading “Safeguarding Humanity” »

Jan 13, 2008

Lifeboat Foundation SAB member asks “Is saving humanity worth the cost?”

Posted by in categories: defense, futurism, geopolitics, lifeboat

In his most recent paper “Reducing the Risk of Human Extinction,” SAB member Jason G. Matheny approached the topic of human extinction from what is unfortunately a somewhat unusual angle. Jason examined the cost effectiveness of preventing humanity’s extinction due to a catastrophic asteroid impact.

Even with some rather pessimistic assumptions, his calculations showed a pretty convincing return on investment: Matheny predicts that for only about US$2.50 per life-year saved, we could mitigate the risk of humanity being killed off by a large asteroid. Maybe it’s just me, but that sounds pretty compelling.
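To make the shape of that estimate concrete, here is a minimal back-of-the-envelope sketch of the arithmetic: program cost divided by expected life-years saved, where expected life-years saved is the reduction in extinction probability multiplied by the life-years at stake. Every parameter value below is an illustrative assumption chosen for this example, not an input taken from Matheny’s paper.

```python
# Back-of-the-envelope cost-effectiveness of asteroid-risk mitigation.
# All parameter values are illustrative assumptions, NOT figures from
# Matheny's "Reducing the Risk of Human Extinction".

program_cost = 20e9               # assumed cost of a detection/deflection program (US$)
impact_prob_per_century = 1e-4    # assumed chance of a catastrophic impact this century
risk_reduction = 0.5              # assumed fraction of that risk the program removes
population = 6e9                  # assumed number of people protected
avg_remaining_years = 40          # assumed average remaining life expectancy (years)

# Expected life-years saved = P(extinction averted) * life-years at stake.
life_years_at_stake = population * avg_remaining_years
expected_life_years_saved = (impact_prob_per_century * risk_reduction
                             * life_years_at_stake)

cost_per_life_year = program_cost / expected_life_years_saved
print(f"Expected life-years saved: {expected_life_years_saved:,.0f}")
print(f"Cost per life-year saved: US${cost_per_life_year:,.2f}")
```

With these made-up inputs the sketch yields roughly US$1,667 per life-year; what drives Matheny’s own estimate down to the US$2.50 quoted above is, among other things, accounting for the far larger number of life-years at stake once future generations are considered.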

Matheny also made a very good point that we all should ponder when we consider how our charitable giving and taxes get spent: “We take extraordinary measures to protect some endangered species from extinction. It might be reasonable to take extraordinary measures to protect humanity from the same.”

For more coverage on this important paper please see the October 2007 issue of Risk Analysis and a recent edition of Nature News.

Oct 3, 2024

AI will save us all, but only if it’s decentralized — SingularityNET CEO

Posted by in categories: mobile phones, robotics/AI, singularity

With the recent release of the iPhone 16, which Apple has promised is optimized for artificial intelligence, it’s clear that AI is officially front of mind, once again, for the average consumer. Yet the technology remains rather limited compared with the vast abilities that the most forward-thinking AI technologists anticipate will be achievable in the near future.

For all the excitement that remains around the technology, many still fear the potentially negative consequences of integrating it so deeply into society. One common concern is that a sufficiently advanced AI could determine humanity to be a threat and turn against us all, a scenario imagined in many science fiction stories. However, according to a leading AI researcher, most people’s concerns can be alleviated by decentralizing and democratizing AI’s development.

On Episode 46 of The Agenda podcast, hosts Jonathan DeYoung and Ray Salmond separate fact from fiction by speaking with Ben Goertzel, the computer scientist and researcher who first popularized the term “artificial general intelligence,” or AGI. Goertzel currently serves as the CEO of SingularityNET and the ASI Alliance, where he leads the projects’ efforts to develop the world’s first AGI.

Page 39 of 224