Blog

Archive for the ‘defense’ category

Oct 22, 2014

Pentagon preparing for mass civil breakdown

Posted by in categories: defense, government, security

— The Guardian

Pentagon Building in Washington

A US Department of Defense (DoD) research programme is funding universities to model the dynamics, risks and tipping points for large-scale civil unrest across the world, under the supervision of various US military agencies. The multi-million dollar programme is designed to develop immediate and long-term “warfighter-relevant insights” for senior officials and decision makers in “the defense policy community,” and to inform policy implemented by “combatant commands.”

Launched in 2008 – the year of the global banking crisis – the DoD ‘Minerva Research Initiative’ partners with universities “to improve DoD’s basic understanding of the social, cultural, behavioral, and political forces that shape regions of the world of strategic importance to the US.”

Read more

Sep 25, 2014

Question: A Counterpoint to the Technological Singularity?

Posted by in categories: defense, disruptive technology, economics, education, environmental, ethics, existential risks, finance, futurism, lifeboat, policy, posthumanism, science, scientific freedom

Question: A Counterpoint to the Technological Singularity?


Douglas Hofstadter, a professor of cognitive science at Indiana University, said of the book The Singularity Is Near (ISBN: 978-0143037880):

“… A very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad …”

Continue reading “Question: A Counterpoint to the Technological Singularity?” »


Sep 18, 2014

Why Superintelligence May Not Help Us Think about Existential Risks — or Transhumanism

Posted by in categories: alien life, biological, cyborg, defense, disruptive technology, ethics, existential risks, futurism, homo sapiens, human trajectories, internet, military, philosophy, policy, posthumanism, science, singularity, transhumanism

Among transhumanists, Nick Bostrom is well known for promoting the idea of ‘existential risks’: potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.

Contrary to expectations, Bostrom is much less concerned with ecological suicide from humanity’s excessive carbon emissions than with the emergence of a superior brand of artificial intelligence – a ‘superintelligence’. This creature would be a human artefact, or at least descended from one. However, its self-programming capacity would have run amok in positive feedback, resulting in a maniacal, even self-destructive mission to rearrange the world in the image of its objectives. Such a superintelligence may appear to be quite ruthless in its dealings with humans, but that would only reflect the obstacles that we place, perhaps unwittingly, in the way of the realization of its objectives. Thus, this being would not conform to the science fiction stereotype of robots deliberately revolting against creators who are now seen as their inferiors.

I must confess that I find this conceptualisation of ‘existential risk’ rather un-transhumanist in spirit. Bostrom treats risk as a threat rather than as an opportunity. His risk horizon is precautionary rather than proactionary: He focuses on preventing the worst consequences rather than considering the prospects that are opened up by whatever radical changes might be inflicted by the superintelligence. This may be because in Bostrom’s key thought experiment, the superintelligence turns out to be the ultimate paper-clip collecting machine that ends up subsuming the entire planet to its task, destroying humanity along the way, almost as an afterthought.

But is this really a good starting point for thinking about existential risk? Much more likely than total human annihilation is that a substantial portion of humanity – but not everyone – is eliminated. (Certainly this captures the worst case scenarios surrounding climate change.) The Cold War remains the gold standard for this line of thought. In the US, the RAND Corporation’s chief analyst, Herman Kahn — the model for Stanley Kubrick’s Dr Strangelove – routinely, if not casually, tossed off scenarios of how, say, a US-USSR nuclear confrontation would serve to increase the tolerance for human biological diversity, due to the resulting proliferation of genetic mutations. Put in more general terms, a severe social disruption provides a unique opportunity for pursuing ideals that might otherwise be thwarted by a ‘business as usual’ policy orientation.

Continue reading “Why Superintelligence May Not Help Us Think about Existential Risks — or Transhumanism” »


Sep 13, 2014

Neuromodulation 2.0: New Developments in Brain Implants, Super Soldiers and the Treatment of Chronic Disease

Posted by in categories: biotech/medical, defense, transhumanism

Written By: — Singularity Hub


Brain implants here we come.

DARPA just announced the ElectRX program, a $78.9 million attempt to develop miniscule electronic devices that interface directly with the nervous system in the hopes of curing a bunch of chronic conditions, ranging from the psychological (depression, PTSD) to the physical (Crohn’s, arthritis). Of course, the big goal here is to usher in a revolution in neuromodulation—that is, the science of modulating the nervous system to fix an underlying problem.

Continue reading “Neuromodulation 2.0: New Developments in Brain Implants, Super Soldiers and the Treatment of Chronic Disease” »


Sep 4, 2014

Navy’s Next Fighter Likely to Feature Artificial Intelligence

Posted by in categories: defense, robotics/AI

By: — USNI News

Boeing concept for F/A-XX. Boeing Image

Artificial intelligence will likely feature prominently onboard the Pentagon’s next-generation successors to the Boeing F/A-18E/F Super Hornet and the Lockheed Martin F-22 Raptor.

“AI is going to be huge,” said one U.S. Navy official familiar with the service’s F/A-XX effort to replace the Super Hornet starting around 2030.

Further, while there are significant differences between the U.S. Air Force’s vision for its F-X air superiority fighter and the Navy’s F/A-XX, the two services agree on some fundamental characteristics the jet will need.

Continue reading “Navy’s Next Fighter Likely to Feature Artificial Intelligence” »


Aug 28, 2014

Funding Request

Posted by in categories: astronomy, business, cosmology, defense, disruptive technology, general relativity, physics, quantum physics, science, space, space travel

Astrophysicists like Robert Nemiroff have shown, using Hubble photographs, that quantum foam does not exist. Further, the famous string theorist Michio Kaku stated in his April 2008 Space Show interview that string theories will require hundreds of years before gravity modification is feasible.

Hence the need to fund research into alternative propulsion technologies that can get us into space more cheaply and quickly. We can be assured that such space technologies will filter down into terrestrial technologies.

This video explains how this can be achieved and the benefits of doing so. The two organizations actively engaged in this endeavor are Propulsion Physics, Inc. and the Xodus One Foundation.

Continue reading “Funding Request” »


Jul 1, 2014

E.Q.-Focused Nations (suboptimal) Versus I.Q.-Centric Countries (optimal)

Posted by in categories: business, defense, economics, education, ethics, existential risks, science, scientific freedom, security

E.Q.-Focused Nations (suboptimal) Versus I.Q.-Centric Countries (optimal)


1.- E.Q.-Focused Nations argue that millennia-old applied terms such as Prudence, Tact, Sincerity, Kindness and Unambiguous Language DO NOT SUFFICE, and hence they invent a marketeer’s stunt: Emotional Intelligence. I.Q.-Centric Countries argue that those millennia-old terms remain both useful and desirable, and that such stunts exist to social-engineer and brainwash the weak. Ergo, all of these are optimal: Prudence, Tact, Sincerity, Kindness and Unambiguous Language, as well as plain-vanilla Psychology 101.

2.- E.Q.-Focused Nations are mired in universal corruption, both in private and public office. I.Q.-Centric Countries are marked by transparency, accountability and reliability, as well as collective integrity and ethics.

Continue reading “E.Q.-Focused Nations (suboptimal) Versus I.Q.-Centric Countries (optimal)” »


May 26, 2014

Oil Refineries that have continuously benefited from Mr. Andres Agostini’s White Swan Transformative and Integrative Risk Management. The White Swan Idea is at http://lifeboat.com/blog/2014/04/white-swan

Posted by in categories: automation, big data, business, chemistry, complex systems, computing, defense, disruptive technology, economics, education, energy, engineering, existential risks, finance, futurism, information science, innovation, physics, robotics/AI, science, scientific freedom, security, supercomputing, surveillance

Oil Refineries that have continuously benefited from Mr. Andres Agostini’s White Swan Transformative and Integrative Risk Management. The White Swan Idea is at http://lifeboat.com/blog/2014/04/white-swan

Over five and a half years, the White Swan Book’s author Andres Agostini concurrently managed the risks of the world’s number 1 and number 3 oil refineries. Below is a sample of installations at these two refineries.


Continue reading “Oil Refineries that have continuously benefited from Mr. Andres Agostini’s White Swan Transformative and Integrative Risk Management. The White Swan Idea is at http://lifeboat.com/blog/2014/04/white-swan” »


May 25, 2014

The Lifeboat Foundation Worldwide Ambassador Mr. Andres Agostini’s own White Swan Dictionary, Countermeasuring Every Unthinkable Black Swan, at http://lifeboat.com/blog/2014/04/white-swan

Posted by in categories: big data, biological, business, complex systems, computing, defense, disruptive technology, economics, education, engineering, existential risks, finance, genetics, information science, innovation, internet, law, law enforcement, lifeboat, physics, robotics/AI, science, scientific freedom, security, singularity, supercomputing, sustainability

The Lifeboat Foundation Worldwide Ambassador Mr. Andres Agostini’s own White Swan Dictionary, Countermeasuring Every Unthinkable Black Swan, at http://lifeboat.com/blog/2014/04/white-swan


WHITE SWAN — UNABRIDGED DICTIONARY

Altogetherness. — The quality of investigating with all or everything included, leaving nothing out.

Continue reading “The Lifeboat Foundation Worldwide Ambassador Mr. Andres Agostini’s own White Swan Dictionary, Countermeasuring Every Unthinkable Black Swan, at http://lifeboat.com/blog/2014/04/white-swan” »


May 23, 2014

The Navy’s Rail Gun Hides a Secret

Posted by in categories: business, counterterrorism, defense, disruptive technology, engineering, innovation, physics, science, space

The Navy’s rail gun technology hides a secret: the Navy’s projectile accuracy has been increased roughly 45-fold. But first, some history.

The US government brought in Prof. Eric Laithwaite to help build a rocket launcher based on linear-motor principles; today we call this the rail gun. In terms of its original objectives it was not a success, because astronauts could not survive the accelerations required to launch from a rail gun, and cargo required a much longer rail gun than was feasible with the technology of the time.

Continue reading “The Navy’s Rail Gun Hides a Secret” »

