Archive for the ‘robotics/AI’ category: Page 2408

Feb 25, 2011

Security and Complexity Issues Implicated in Strong Artificial Intelligence, an Introduction

Posted by in categories: complex systems, existential risks, information science, robotics/AI

Strong AI, or Artificial General Intelligence (AGI), denotes self-improving intelligent systems with the capacity to engage theoretical and real-world problems with the flexibility of an intelligent living being but the performance and accuracy of a machine. Promising foundations for AGI exist in the current fields of stochastic and cognitive science as well as traditional artificial intelligence. My aim in this post is to give a general readership a basic insight into, and feeling for, the issues involved in dealing with the complexity and universality of an AGI.

Classical AI, such as machine learning algorithms and expert systems, is already heavily applied to today’s real-world problems: mature machine learning algorithms profitably exploit patterns in customer behaviour, find correlations in scientific data, or even predict negotiation strategies, for example [1] [2], and genetic algorithms are in similarly wide use. With the upcoming technology for organizing knowledge on the net known as the semantic web, which deals with machine-interpretable representations of words in the context of natural language, we may be inventing early pieces of the technology that will play a role in the future development of AGI. Semantic approaches draw on computer science, sociology, and current AI research, and they promise to describe and ‘understand’ real-world concepts, enabling our computers to build interfaces to those concepts and their relationships more autonomously. Actually getting from expert systems to AGI will require approaches for bootstrapping self-improving systems and more research on cognition, but it must also involve crucial security aspects. Institutions associated with this early research include the Singularity Institute [3] and the Lifeboat Foundation [4].
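
To give a feeling for the semantic-web approach, here is a minimal sketch using the Python library rdflib; the URIs and property names are illustrative assumptions, not an established vocabulary. The point is that statements about real-world concepts become machine-interpretable triples that any program can query, rather than prose only a human can read.

    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/")  # hypothetical vocabulary for the example

    g = Graph()
    # Each statement is a (subject, predicate, object) triple.
    g.add((EX.AGI, RDF.type, EX.ResearchField))
    g.add((EX.AGI, EX.label, Literal("Artificial General Intelligence")))
    g.add((EX.AGI, EX.buildsOn, EX.CognitiveScience))

    # A program can now follow these relationships autonomously.
    for subject, predicate, obj in g.triples((EX.AGI, None, None)):
        print(subject, predicate, obj)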

In the recent past we have faced new kinds of security challenges: DoS attacks, e-mail and PDF worms, and a plethora of other malware, which sometimes even made it into military and other sensitive networks and stole credit cards and private data en masse. These were and are among the first serious incidents related to the Internet. Still, all of them followed a narrow and predictable pattern, constrained by our current generation of PCs, (in-)security architecture, network protocols, software applications, and of course human flaws (e.g. the emotional response exploited by the “ILOVEYOU” virus). Understanding the security implications of strong AI first means realizing that, if an AGI takes off hard enough, there will probably no longer be any human-predictable hardware, software, or interfaces around for any extended period of time.

To grasp the new security implications, it is important to understand how insecurity can arise from the complexity of technological systems. The vast potential of complex systems often makes their effects hard to predict for the human mind, which is riddled with biases rooted in its biological evolution. Even the application of the simplest mathematical rules can produce complex results that are hard to understand and predict by common sense. Cellular automata, for example, are simple rules for generating a new row of cells based on which cells, generated by the same rule, were observed in the previous step. Such rules can be encoded in a few bits each (an elementary rule fits in a single byte), yet they generate astounding complexity.
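
As an illustration, here is a minimal sketch in Python of an elementary one-dimensional cellular automaton; the rule number and grid size are arbitrary choices for the example. Each cell’s next state is looked up from the rule’s bit table using its three-cell neighborhood, and rules such as Rule 30 produce patterns that defy common-sense prediction despite this trivial mechanism.

    # Elementary cellular automaton: the whole rule is one small integer.
    RULE = 30  # try 90 or 110 as well; each is just a different 8-bit table

    def step(cells, rule=RULE):
        """Compute the next row: each new cell is a bit of `rule`, indexed
        by its 3-cell neighborhood (left*4 + center*2 + right)."""
        n = len(cells)
        return [
            (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    # Start from a single live cell and print a few generations.
    cells = [0] * 31
    cells[15] = 1
    for _ in range(15):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)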

Continue reading “Security and Complexity Issues Implicated in Strong Artificial Intelligence, an Introduction” »

Nov 9, 2010

The Singularity Hypothesis: A Scientific and Philosophical Assessment

Posted by in categories: cybercrime/malcode, ethics, existential risks, futurism, robotics/AI

Call for Essays:

The Singularity Hypothesis
A Scientific and Philosophical Assessment

Edited volume, to appear in The Frontiers Collection, Springer

Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions ‘straight from Cloud Cuckooland’? Should the notions of superintelligent machines, brain emulations, and transhumans be ridiculed, or is it the skeptics who suffer from short-sightedness and ‘carbon chauvinism’? These questions have remained open because much of what we hear about the singularity originates from popular depictions, fiction, artistic impressions, and apocalyptic propaganda.

To promote this debate, this edited, peer-reviewed volume will be devoted to scientific and philosophical analysis of the conjectures related to a technological singularity. We solicit scholarly essays that analyze this hypothesis, assess its empirical content, examine relevant evidence, or explore its implications. Commentary offering a critical assessment of selected essays may also be solicited.

Continue reading “The Singularity Hypothesis: A Scientific and Philosophical Assessment” »

Nov 6, 2010

Hating Technology is Hating Yourself

Posted by in categories: human trajectories, robotics/AI

Kevin Kelly concluded a chapter in his new book What Technology Wants with the declaration that if you hate technology, you basically hate yourself.

The rationale is twofold:

1. As many have observed before, technology–and Kelly’s superset, the “technium”–is in many ways the natural successor to biological evolution. In other words, human change now proceeds primarily through the various symbiotic, feedback-looped systems that comprise human culture.

Continue reading “Hating Technology is Hating Yourself” »

Oct 25, 2010

Open Letter to Ray Kurzweil

Posted by in categories: biotech/medical, business, economics, engineering, futurism, human trajectories, information science, open source, robotics/AI

Dear Ray;

I’ve written a book about the future of software. While writing it, I came to the conclusion that your dates are way off. I talk mostly about free software and Linux, but the argument has implications for how quickly we can get things like driverless cars and other amazing technology. I believe we could have had all the benefits of the singularity years ago if we had done things like starting Wikipedia in 1991 instead of 2001. There was no technology in 2001 that we didn’t have in 1991; it was simply a matter of starting an effort that allowed people to work together.

Proprietary software and a lack of cooperation among our software scientists have been terrible for the computer industry and the world, and greater use of free software has implications for every aspect of science. Free software is better for the free market than proprietary software, and there are many opportunities for programmers to make money using and writing free software. I often use the analogy that law libraries are filled with millions of freely available documents, and no one claims this has decreased the motivation to become a lawyer. In fact, lawyers would say it would be impossible to do their job without all of these resources.

My book is a full description of the issues but I’ve also written some posts on this blog, and this is probably the one most relevant for you to read: https://lifeboat.com/blog/2010/06/h-conference-and-faster-singularity

Continue reading “Open Letter to Ray Kurzweil” »

Sep 26, 2010

The problems in our world aren’t technical, but social

Posted by in categories: open source, robotics/AI

If the WW II generation was The Greatest Generation, the Baby Boomers were The Worst. My former boss Bill Gates is a Baby Boomer. And while he has the potential to do a lot for the world by giving away his money to other people (for them to do something they wouldn’t otherwise do), after studying Wikipedia and Linux I see that the proprietary development model his generation adopted has stifled the progress of technology it should have delivered to us. The reason we don’t have robot-driven cars and other futuristic technology today is that proprietary software became the dominant model.

I start the AI chapter of my book with the following question: imagine 1,000 people, broken into groups of five, working on two hundred separate encyclopedias, versus the same 1,000 people working on one encyclopedia. Which will be better? This sounds like a silly analogy when described in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today.

Today, the research community has not adopted free software and shared codebases sufficiently. For example, I believe there are more than enough PhDs today working on computer vision, but there are 200+ different codebases plus countless proprietary ones.

Simply put, there is no computer vision codebase with critical mass.

Continue reading “The problems in our world aren't technical, but social” »

Jul 30, 2010

Robots And Privacy

Posted by in categories: cybercrime/malcode, ethics, robotics/AI

Within the next few years, robots will move from the battlefield and the factory into our streets, offices, and homes. What impact will this transformative technology have on personal privacy? I begin to answer this question in a chapter on robots and privacy in the forthcoming book, Robot Ethics: The Ethical and Social Implications of Robotics (Cambridge: MIT Press).

I argue that robots will implicate privacy in at least three ways. First, they will vastly increase our capacity for surveillance. Robots can go places humans cannot go, see things humans cannot see. Recent developments include everything from remote-controlled insects to robots that can soften their bodies to squeeze through small enclosures.

Second, robots may introduce new points of access to historically private spaces such as the home. At least one study has shown that several of today’s commercially available robots can be remotely hacked, granting the attacker access to video and audio of the home. With sufficient process, governments will also be able to access robots connected to the Internet.

There are clearly ways to mitigate these implications. Strict policies could rein in police use of robots for surveillance, for instance; consumer protection laws could require adequate security. But there is a third way robots implicate privacy, related to their social meaning, that is not as readily addressed.

Continue reading “Robots And Privacy” »

Jun 24, 2010

Singularity Summit 2010 in San Francisco to Explore Intelligence Augmentation

Posted by in category: robotics/AI

This year, the Singularity Summit 2010 (SS10) will be held at the Hyatt Regency Hotel in San Francisco, California, in an 1,100-seat ballroom on August 14–15.

Our speakers will include Ray Kurzweil, author of The Singularity is Near; James Randi, magician-skeptic and founder of the James Randi Educational Foundation; Terry Sejnowski, computational neuroscientist; Irene Pepperberg, pioneering researcher in animal intelligence; David Hanson, creator of the world’s most realistic human-like robots; and many more. In all, the conference will include over twenty speakers, among them many scientists presenting their latest research on topics like intelligence enhancement and regenerative medicine.

Continue reading “Singularity Summit 2010 in San Francisco to Explore Intelligence Augmentation” »

Jun 12, 2010

My presentation at the Humanity+ Summit

Posted by in categories: futurism, robotics/AI

During the lunch break I am present virtually in the hall of the summit, as a face on a Skype account, because I didn’t get a visa and remain in Moscow. Ironically, my situation resembles what I am speaking about: the risk of a remote AI created by aliens millions of light years from Earth and sent here via radio signals. The main difference is that they communicate one way, while I have a duplex connection.

This is my video presentation on YouTube:
Risks of SETI, for Humanity+ 2010 summit

Jun 11, 2010

H+ Conference and the Singularity Faster

Posted by in categories: futurism, robotics/AI

We can only see a short distance ahead, but we can see plenty there that needs to be done.
—Alan Turing

As a programmer, I look at events like the H+ Conference this weekend in a particular way. I see all of their problems as software problems: not just the code for AI and friendly AI, but also the code for DNA manipulation. It seems the biggest challenge for the futurist movement is to focus less on writing English and more on getting programmers working together productively.

I start the AI chapter of my book with the following question: imagine 1,000 people, broken into groups of five, working on two hundred separate encyclopedias, versus the same 1,000 people working on one encyclopedia. Which will be better? This sounds like a silly analogy when described in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today.

Today, the research community has not adopted free software and shared codebases sufficiently. For example, I believe there are more than enough PhDs today working on computer vision, but there are 200+ different codebases plus countless proprietary ones. Simply put, there is no computer vision codebase with critical mass.

Continue reading “H+ Conference and the Singularity Faster” »

Jun 5, 2010

Friendly AI: What is it, and how can we foster it?

Posted by in categories: complex systems, ethics, existential risks, futurism, information science, policy, robotics/AI

Friendly AI: What is it, and how can we foster it?
By Frank W. Sudia [1]

Originally written July 20, 2008
Edited and web published June 6, 2009
Copyright © 2008-09, All Rights Reserved.

Keywords: artificial intelligence, artificial intellect, friendly AI, human-robot ethics, science policy.

1. Introduction

Continue reading “Friendly AI: What is it, and how can we foster it?” »