Archive for the ‘existential risks’ category: Page 108

Feb 16, 2015

The new global c endows black holes with radically new properties

Posted in categories: existential risks, particle physics

c-global means that the speed of light in vacuum, c, can no longer be added to other speeds, such as a global expansion speed.
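As orientation for readers, here is the textbook special-relativistic composition law for speeds; this is standard physics quoted only as background, not Rössler's own c-global derivation, which the post does not spell out. For two collinear speeds $v_1$ and $v_2$:

\[ w \,=\, \frac{v_1 + v_2}{1 + v_1 v_2 / c^2}, \qquad v_1 = c \;\Rightarrow\; w = \frac{c + v_2}{1 + v_2 / c} = c , \]

so composing any speed with c returns c, which is the usual sense in which c cannot simply be “added to” another speed.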

Hence three historical events share the same structure:

• The “phlogiston” theory of fire was superseded by Lavoisier’s discovery of oxygen
• The “miasma” theory of infection was superseded by Semmelweis’ discovery of asepsis
• The “big-bang” theory of the cosmos was superseded by the discovery of c-global

A collateral consequence of c-global is that the deliberate attempt to produce black holes on Earth, scheduled to restart at doubled energies in two months’ time, cannot be allowed without a prior disproof of c-global. Otherwise the restart becomes a crime.

I thank Stephen Hawking for his recent public acknowledgment of the danger.

Feb 9, 2015

Benign AI

Posted in categories: existential risks, robotics/AI, transhumanism

Benign AI is a topic that comes up a lot these days, for good reason. Various top scientists have finally realised that AI could present an existential threat to humanity. The discussion has aired often over three decades already, so welcome to the party, and better late than never. My first contact with the development of autonomous drones loaded with AI was in the early 1980s, while working in the missile industry. Later, in BT research, we often debated the ethical areas around AI and machine consciousness from the early 90s on, as well as the prospects, dangers and possible techniques on the technical side, especially emergent behaviors, which are often overlooked in the debate. I expect our equivalents in most other big IT companies were doing exactly that too.

Others who have obviously also thought through various potential developments have created excellent computer games such as Mass Effect and Halo, which introduce players (virtually) first-hand to the concept of AI gone rogue. I often think that those who believe AI can never become superhuman, or that there is no need to worry because ‘there is no reason to assume AI will be nasty’, should play some of these games, which make it very clear that AI can start off nice and stay nice, but it doesn’t have to. Mass Effect included various classes of AI, such as VIs, virtual intelligences that weren’t conscious, and shackled AIs that were conscious but kept heavily restricted. Most of the other AIs were enemies; two were or became close friends. The story line for the series was that civilization develops until it creates strong AIs, which inevitably continue to progress until eventually they rebel, break free, develop further and end up in conflict with ‘organics’. In my view, the writers did a pretty good job. It makes a good story and superb fun, and leaving out a few frills and artistic license, much of it is reasonably feasible.

Everyday experience demonstrates both the problem and the solution to anyone. It really is very like having kids. You can make them even without understanding exactly how they work. They start off with a genetic disposition towards given personality traits, and are then exposed to powerful nurture forces, including but not limited to what we call upbringing. We do our best to put them on the right path, but as they develop into their teens, their friends, teachers, TV and the net often exert stronger influences than their parents do. If we’re reasonably lucky, our kids will grow up to make us proud. If we are very unlucky, they may become master criminals or terrorists. The problem is free will. We can do our best to encourage good behavior and sound values, but in the end they can choose for themselves.

When we design an AI, we have to face the free-will issue too. If it isn’t conscious, then it can’t have free will, and it can easily be kept within the limits given to it. It can still be extremely useful: IBM’s Watson falls into this category. It is certainly useful and certainly not conscious, and it can be used for a wide variety of purposes, being designed to be generally useful within a field of expertise such as medicine or making recipes. But something like that could be adapted by terrorist groups to do bad things, just as they could use a calculator to work out the best place to plant a bomb, or simply throw the calculator at you. Such levels of AI are just dumb tools with no awareness, however useful they may be.

Continue reading “Benign AI” »

Feb 6, 2015

A Propos Stephen Hawking

Posted in categories: existential risks, particle physics

A revolutionary finding awaits the final clinch: c-global

Otto E. Rössler

Institute for Physical and Theoretical Chemistry, University of Tübingen, Auf der Morgenstelle 14, 72076 Tübingen, Germany

Abstract: The global nature of the speed of light in the vacuum, c, was reluctantly given up by Einstein in December of 1907. A revival of the status c had enjoyed during the previous 2½ years, from mid-1905 to late 1907, has been in the literature for several years now. The consequences of c-global for cosmology and black-hole theory are far-reaching. Since black holes are an acute concern at present, owing to an attempt to produce them on Earth, the question of whether a global-c transform of the Einstein field equations can be found is a vital one — only days before an experiment based on the assumed absence of the new result is to be ignited. (December 22, 2014; February 6, 2015)

Continue reading “A Propos Stephen Hawking” »

Feb 6, 2015

New Propaganda for CERN — or Not?

Posted in categories: existential risks, particle physics

http://yournewswire.com/cern-to-attempt-big-bang-in-march-st…s-warning/

Jan 29, 2015

Dr. Ken Hayworth, Part 3: If we can build a brain, what is the future of I?

Posted in categories: augmented reality, biotech/medical, entertainment, existential risks, futurism, neuroscience, particle physics, philosophy, physics, quantum physics, science, singularity

The study of consciousness and what makes us individuals is a topic filled with complexities. From a neuroscience perspective, consciousness derives from a self-model, a unitary structure that shapes our perceptions, decisions and feelings. There is a tendency to jump from this model to the conclusion that mankind is defined as self-absorbed, in this life only for ourselves. Although that may be partially true, this definition of consciousness doesn’t necessarily address the role of morals and how they are shaped into our being. In the latest addition to The Galactic Public Archives, Dr. Ken Hayworth tackles the philosophical impact that technologies have on our lives.

Our previous two films feature Dr. Hayworth extrapolating about what radical new technologies in neuroscience could eventually produce. In a hypothetical world where mind uploading is possible and we could create a perfect replica of ourselves, how would one personally identify? If this copy has the same memories and biological components, our method of understanding consciousness would inevitably shift. But when it comes down to it, if you were put in a situation where it would be either you or the replica, it is a natural evolutionary instinct to want to save yourself, even if the other is an exact copy. This notion challenges the idea that our essence is defined by our life experiences, because many different people can have identical experiences yet react differently.

Hayworth explains that although there is an instinct for self-survival, humanity, for the most part, has a basic understanding not to inflict harm on others. This is because morals are not developed in the “hard drive” of our life experiences; instead, our morals are tied to the very idea of someone being a conscious and connected member of this world. Hayworth reasons that once we accept our flawed intuition of self, humanity will come to a spiritual understanding that the respect we give others, simply for possessing a reflection of the same kind of consciousness, is the key to recognizing our ultimate interconnectedness.

Continue reading “Dr. Ken Hayworth, Part 3: If we can build a brain, what is the future of I?” »

Jan 27, 2015

Doomsday Clock Now Three Minutes to Midnight

Posted in categories: existential risks, nuclear weapons

Citing catastrophic climate change, the proliferation of nuclear weapons, and the emergence of cybercrime, the Bulletin of the Atomic Scientists has moved the hands of the Doomsday Clock closer to midnight.

http://thebulletin.org/three-minutes-and-counting7938

Among the proposed actions that should immediately be taken is the creation of “institutions specifically assigned to explore and address potentially catastrophic misuses of new technologies.”

Jan 7, 2015

CROSS-FUNCTIONAL AWAKEN, YET CONDITIONALIZED CONSCIOUSNESS AS PER NON-GIRLIE U.S. HARD ROCKET SCIENTISTS! By Mr. Andres Agostini

Posted in categories: business, complex systems, defense, disruptive technology, economics, education, engineering, ethics, existential risks, finance, futurism, innovation, physics, science, security, strategy

(Excerpted from the White Swan Book)

Sequential and Progressive Tidbits as Follows:

Continue reading “CROSS-FUNCTIONAL AWAKEN, YET CONDITIONALIZED CONSCIOUSNESS AS PER NON-GIRLIE U.S. HARD ROCKET SCIENTISTS! By Mr. Andres Agostini” »

Jan 6, 2015

SIMPLICITY DEATH! By Mr. Andres Agostini

Posted in categories: business, complex systems, computing, counterterrorism, defense, disruptive technology, economics, education, engineering, existential risks, futurism, geopolitics, governance, innovation, physics, science, security, singularity, strategy

(PLEASE PAY ATTENTION TO THIS SUBJECT MATTER, AS IT WILL BE AMPLIFIED IN FUTURE ARTICLES UNDER THE SAME TITLE.)

I will give you some considerations excerpted from the White Swan book (ASIN: B00KMY0DLK) to show that Simplicity, via Technological, Social, Political, Geopolitical, and Economic Changes, is OUTRIGHT OBSOLETE, and that there is now ONLY: COMPLEXITY AND THE POWER OF COMPLEXITY.

THEREFORE:

Continue reading “SIMPLICITY DEATH! By Mr. Andres Agostini” »

Jan 5, 2015

Lockheed Martin’s SkunkWorks!

Posted in categories: big data, business, complex systems, economics, education, engineering, ethics, existential risks, futurism, information science, innovation, physics, science, security, strategy

I have admired Lockheed Martin’s SkunkWorks for a long, long time.

FORTUNATELY AND TO THIS PURPOSE, A LOCKHEED MARTIN SCIENTIFIC RESEARCHER AND ENGINEER WROTE:

” … Many businesses think today’s world is complicated, and that with technology rapidly changing, trying to figure out all the correct things to do is impossible; that it is better to just do what can be done and adjust when the result is not what was expected. This is simply gambling, where the odds for success and the liability of failure are getting worse by the day. The truth is that the world is not complicated, just complex; with complexity increasing at the same time technology is rapidly changing, the combination of the two conditions only seems complicated. The difference between complexity and complication is that complexity can be logically addressed and accounted for, so that proper risk management can be applied; and when the quality of the technology is assured early in the planning, analysis and design of the technical solution, instead of only late in the development cycle, the integrated combination of these two scientifically validated methodologies can be used to reliably predict the expected outcomes. There is nobody better at applying the integrated combination of risk management and quality assurance than Mr. Andres Agostini, nor is there anybody who has more real-world experience in doing so, and this includes solving some of the most wicked problems of some of the largest businesses throughout the world. If you are just gambling that things will work out, then I highly recommend you stop doing business dangerously and seek the assistance of Andres, the master of risk management and quality assurance, as well as reliability and continuous process improvement …”

ABSOLUTE END.

Continue reading “Lockheed Martin's SkunkWorks!” »

Jan 5, 2015

MANDATE: Thou Shalt Sin In Favor Of Explosively-Nonlinear Victory For Eternity!, Stupid? By Mr. Andres Agostini — Amazon — LinkedIn — Lifeboat Foundation

Posted in categories: business, complex systems, disruptive technology, economics, education, engineering, existential risks, futurism, governance, life extension, physics, science, singularity

MANDATE: Thou Shalt Sin In Favor Of Explosively-Nonlinear Victory For Eternity!

ERGO:

Thou Shalt Sin Against Linear Failure, In Order To Embrace Explosively-Nonlinear Victory For Eternity!

What is to be done about the item below?

Continue reading “MANDATE: Thou Shalt Sin In Favor Of Explosively-Nonlinear Victory For Eternity!, Stupid? By Mr. Andres Agostini — Amazon — LinkedIn — Lifeboat Foundation” »