Archive for the ‘security’ category: Page 126

May 16, 2015

So, the NSA Has an Actual Skynet Program — Kim Zetter, Wired

Posted by in categories: privacy, robotics/AI, security, Skynet, supercomputing, surveillance

We’ve suspected it all along—that Skynet, the massive program that brings about world destruction in the Terminator movies, was just a fictionalization of a real program in the hands of the US government. And now it’s confirmed—at least in name.

As The Intercept reports today, the NSA does have a program called Skynet. But unlike the autonomous, self-aware computerized defense system in Terminator that goes rogue and launches a nuclear attack that destroys most of humanity, this one is a surveillance program that uses phone metadata to track the location and call activities of suspected terrorists. A journalist for Al Jazeera reportedly became one of its targets after he was placed on a terrorist watch list. Read more

Apr 24, 2015

Article: Harnessing “Black Holes”: The Large Hadron Collider – Ultimate Weapon of Mass Destruction

Posted by in categories: astronomy, big data, computing, cosmology, energy, engineering, environmental, ethics, existential risks, futurism, general relativity, governance, government, gravity, information science, innovation, internet, journalism, law, life extension, media & arts, military, nuclear energy, nuclear weapons, open source, particle physics, philosophy, physics, policy, posthumanism, quantum physics, science, security, singularity, space, space travel, supercomputing, sustainability, time travel, transhumanism, transparency, treaties


Why the LHC must be shut down

Apr 24, 2015

CERN-Critics: LHC restart is a sad day for science and humanity!

Posted by in categories: astronomy, big data, complex systems, computing, cosmology, energy, engineering, ethics, existential risks, futurism, general relativity, governance, government, gravity, hardware, information science, innovation, internet, journalism, law, life extension, media & arts, military, nuclear energy, nuclear weapons, particle physics, philosophy, physics, policy, quantum physics, science, security, singularity, space, space travel, supercomputing, sustainability, time travel, transhumanism, transparency, treaties
PRESS RELEASE “LHC-KRITIK”/”LHC-CRITIQUE” www.lhc-concern.info

Continue reading “CERN-Critics: LHC restart is a sad day for science and humanity!” »

Apr 12, 2015

Human Laws Can’t Control Killer Robots, New Report Says

Posted by in categories: ethics, law, robotics/AI, security

Kari Paul | Motherboard


“When a human being is killed by an autonomous machine, who takes the blame? Human rights non-governmental organization Human Rights Watch says it is virtually impossible to tell, and that presents unprecedented danger in the future of warfare. The group released a report today showing how difficult it will be to hold commanders, operators, programmers or manufacturers legally responsible for crimes committed by autonomous machines under current legislature.” Read more

Mar 29, 2015

Intelligent robots must uphold human rights

Posted by in categories: human trajectories, law, robotics/AI, security

Hutan Ashrafian — nature.com
http://images.sequart.org/images/i-robot-510ea6801c50a.jpg

There is a strong possibility that in the not-too-distant future, artificial intelligences (AIs), perhaps in the form of robots, will become capable of sentient thought. Whatever form it takes, this dawning of machine consciousness is likely to have a substantial impact on human society.

Microsoft co-founder Bill Gates and physicist Stephen Hawking have in recent months warned of the dangers of intelligent robots becoming too powerful for humans to control. The ethical conundrum of intelligent machines and how they relate to humans has long been a theme of science fiction, and has been vividly portrayed in films such as 1982’s Blade Runner and this year’s Ex Machina. Read more

Mar 29, 2015

It’s Time For Robot Pilots

Posted by in categories: automation, human trajectories, robotics/AI, security, transportation

Jason Koebler — Motherboard

http://motherboard-images.vice.com/content-images/article/20326/1427390573566811.png?crop=1xw:0.8160465116279069xh;*,*&resize=2300:*&output-format=jpeg&output-quality=90
It’s increasingly looking like the plane that crashed Monday in France, killing 150 people, went down because one of the pilots turned off the autopilot and intentionally crashed it into the ground. Why are we still letting humans fly passenger planes?

The short answer is, we’re not really. It’s no secret that planes are already highly automated, and, with technology that’s available today (but that isn’t installed on the Airbus A320 operated by Germanwings that crashed), it would have been possible for someone in a ground station somewhere to have wrested control of the plane from those on board and reestablished autopilot (or to have piloted the plane from the ground). Read more

Mar 19, 2015

Intel Wants You to Forget Your Passwords (You Won’t Need Them)

Posted by in category: security

Intel — Wired
https://lifeboat.com/blog.images/intel-wants-you-to-forget-your-passwords-you-wont-need-them.jpg
Passwords, as they exist now, don’t work. They are the keys with which we lock up everything from our gaming profiles to our personal documents and financial access, and the truth is they just aren’t that secure. For starters, humans are terrible at choosing passwords. “Password” and “123456” were still the two most common passwords used in 2014—despite years of warnings against precisely that.

To force us to use more unique, less obvious keys, many of the sites we frequent make us choose passwords that combine letters and numbers, and sometimes even special characters (such as ! or @). But that raises another issue—complexity. With dozens of online accounts per person, it’s hard to keep track of all the different variations of passwords needed to access them. No wonder too many people (55 percent of adults, according to a study from the UK’s Ofcom) still reuse the same password between most, if not all, of the sites they visit.
Read more

Jan 7, 2015

CROSS-FUNCTIONAL AWAKEN, YET CONDITIONALIZED CONSCIOUSNESS AS PER NON-GIRLIE U.S. HARD ROCKET SCIENTISTS! By Mr. Andres Agostini

Posted by in categories: business, complex systems, defense, disruptive technology, economics, education, engineering, ethics, existential risks, finance, futurism, innovation, physics, science, security, strategy

CROSS-FUNCTIONAL AWAKEN, YET CONDITIONALIZED CONSCIOUSNESS AS PER NON-GIRLIE U.S. HARD ROCKET SCIENTISTS!


(Excerpted from the White Swan Book)

Sequential and Progressive Tidbits as Follows:

Continue reading “CROSS-FUNCTIONAL AWAKEN, YET CONDITIONALIZED CONSCIOUSNESS AS PER NON-GIRLIE U.S. HARD ROCKET SCIENTISTS! By Mr. Andres Agostini” »

Jan 6, 2015

SIMPLICITY DEATH! By Mr. Andres Agostini

Posted by in categories: business, complex systems, computing, counterterrorism, defense, disruptive technology, economics, education, engineering, existential risks, futurism, geopolitics, governance, innovation, physics, science, security, singularity, strategy

SIMPLICITY DEATH! By Mr. Andres Agostini


(PLEASE PAY ATTENTION TO THIS SUBJECT MATTER, AS IT WILL BE AMPLIFIED IN FUTURE ARTICLES UNDER THE SAME TITLE.)

I will give you some considerations excerpted from the White Swan book ( ASIN: B00KMY0DLK ) to show that Simplicity, via Technological, Social, Political, Geopolitical, and Economic Changes, is OUTRIGHT OBSOLETE and there is now ONLY: COMPLEXITY AND THE POWER OF COMPLEXITY.

THEREFORE:

Continue reading “SIMPLICITY DEATH! By Mr. Andres Agostini” »

Jan 5, 2015

Lockheed Martin’s SkunkWorks!

Posted by in categories: big data, business, complex systems, economics, education, engineering, ethics, existential risks, futurism, information science, innovation, physics, science, security, strategy

I have admired Lockheed Martin’s SkunkWorks for a long, long time.


FORTUNATELY AND TO THIS PURPOSE, A LOCKHEED MARTIN SCIENTIFIC RESEARCHER AND ENGINEER WROTE:

” … Many businesses think today’s world is complicated and with technology rapidly changing, trying to figure out all the correct things to do is impossible, that it is better to just do what can be done, and adjust things when the result happens to be what is not expected. This is simply gambling where the odds for success and the liability of failure are getting worse by the day.

The truth is the world is not complicated, just complex, and with complexity increasing at the same time technology is rapidly changing, the combination of the two conditions only seems complicated. The difference between complexity and complication is that complexity can be logically addressed and accounted for such that proper risk management can then be applied, and when the quality of the technology is assured early in the planning, analysis and design of the technical solution instead of only assuring it late in the development cycle, the integrated combination of these two scientifically validated methodologies can be used to reliably predict the expected outcomes.

There is nobody better at applying the integrated combination of risk management and quality assurance than Mr. Andres Agostini, nor is there anybody that has more real-world experience in doing so, and this includes solving some of the most wicked problems of some of the largest businesses throughout the world. If you are just gambling that things work out, then I highly recommend you stop doing business dangerously and seek the assistance of Andres, the master of risk management and quality assurance, as well as reliability and continuous process improvement …”

ABSOLUTE END.

Continue reading “Lockheed Martin's SkunkWorks!” »