Comments on: Software and the Singularity https://lifeboat.com/blog/2010/04/software-and-the-singularity Safeguarding Humanity Thu, 29 Apr 2010 07:02:45 +0000 hourly 1 https://wordpress.org/?v=6.6.2 By: Singularity Utopia https://lifeboat.com/blog/2010/04/software-and-the-singularity#comment-55323 Wed, 28 Apr 2010 22:57:09 +0000 http://lifeboat.com/blog/?p=858#comment-55323 Regarding the software for AI: it will arrive eventually; I feel sure by 2045 at the latest. Improved hardware will help create better software.

Regarding the paranoid-Luddite anti-singularity theories about humans becoming redundant or enslaved by AI: these theories are ridiculous. Very minimal dangers are associated with the Singularity. When we all have easy access to replicators, if some wacko decides to replicate an Apache helicopter for a killing spree, I’m sure such a high-tech era will allow people to defend themselves with consummate ease from all possible threats; there is no need to worry about any self-replicating ecophage. Superior machine intelligences, due to their high intelligence, will have NO interest in enslaving or destroying the human race. Furthermore, humans will use technology to evolve, thus we won’t be left behind. If some primitive humans want to be left behind, the super-AIs have a whole universe to play in, so Earth can carry on as usual.

It is time to evolve. When you play with fire you may get burned, but the discovery and use of fire is good despite the people who were burned during its discovery and who continue to be burned today. Our world is a better place because of Einstein despite the deaths from the nuclear bomb, which probably saved lives by abruptly ending the war.

Humans take risks; we explore; we evolve; we push ourselves to the limits. Obviously we make things as safe as possible, but accidents such as fires or car crashes occur. People who are anti-singularity are anti-evolution. It is time to evolve. Neil Armstrong took a leap for mankind, and soon the human race will take the most important leap in the history of life on Earth! Singularity Utopia is coming.

http://singularity-2045.org/

By: Simon Dufour https://lifeboat.com/blog/2010/04/software-and-the-singularity#comment-54904 Mon, 26 Apr 2010 18:02:31 +0000 http://lifeboat.com/blog/?p=858#comment-54904 @Keith Curtis
It seems you are talking about Ray Kurzweil’s predictions in your post. I’d like to point out a few things.

In his predictions, Ray Kurzweil did claim he was being conservative. I’d like to point out that he also thought we’d get a computer able to pass the Turing test around 2020 or so.

The singularity is a completely different matter from what you’re talking about here. When the singularity happens, everything we know today will be obsolete. The rate of new technology will be so great that we’ll get a thousand years’ worth of progress in a few seconds, and this will continue to grow exponentially forever. It’s beyond comprehension, really.

Our definition of software will change in the next few years. In my opinion, the ideas you gave in your article are all things that will happen in the next 10 years.

At some point, however, computers will just build themselves and optimize themselves to the point where all software will be completely hidden behind a huge AI that reprograms itself constantly. Anyway, that’s what I think, and that’s why I want to keep my mind open, especially as a software engineer.

By: Simon Dufour https://lifeboat.com/blog/2010/04/software-and-the-singularity#comment-54903 Mon, 26 Apr 2010 17:49:42 +0000 http://lifeboat.com/blog/?p=858#comment-54903 I will give my point of view because it seems that this thread will soon spin out of control.

If we kept our current technologies and didn’t evolve, it would kill many. War, pollution, scarcity, poverty, famine… they all kill people. We don’t need a new superweapon for that.

The Singularity is not an event; it’s a concept. The Singularity will happen when the exponential growth of technology meets the knee of the curve. Then technology will expand so fast that nothing will be the same anymore. At least, that’s how Ray Kurzweil defines it in “The Singularity Is Near”. What you have to understand here is that technological progress, even in the near future, is all about making progress in intelligence: making us think better. We can achieve these kinds of results by making communication easier (for example, by making the Internet, smartphones, and social networking better). The breakthrough in communication that we’re currently in will change many things, and the first one I see is the coordination and understanding of the whole world. Now we’ll all talk to each other and understand each other’s goals. Research will sync up and people will cooperate.
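A toy calculation (illustrative numbers only, not from the comment) shows why doubling growth can look flat for a long time and then seem to explode at the “knee of the curve”:

```python
# Illustrative only: if capability doubles each period, early absolute
# gains are tiny while late gains dwarf everything that came before --
# the intuition behind "knee of the curve" arguments.
capability = 1.0
history = []
for period in range(11):
    history.append(capability)
    capability *= 2.0

early_gain = history[1] - history[0]   # gain in the first period
late_gain = history[10] - history[9]   # gain in the last period
print(early_gain, late_gain)
```

Of course, whether real technological progress follows such a curve is exactly what this thread is debating; the arithmetic only shows what the claim means, not that it is true.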

Sure, our capitalist world doesn’t favor these kinds of things, but I think that’s about to change too. With technological progress, scarcity will slowly disappear. What will happen if food is no longer an issue anywhere in the world? If we can feed people with grown meat using nanotechnology or biotechnology, we could make meat almost for free.

What if entertainment became free? With virtual reality, you can do anything you want, without any risks or price. Criminality could drop as food and basic needs are always covered.

We have to realize that technology will drastically transform how we live today. Sure, if you think technological progress will happen within our current world-state, it’s pretty frightening. I hope that one day progress will shift the focus from money to a post-scarcity world where welfare is more important than anything else.

AI, cooperation, communication, biotechnology, nanotechnology, robotics… they’re all tools to help us get there.

By: John Hunt https://lifeboat.com/blog/2010/04/software-and-the-singularity#comment-54375 Fri, 23 Apr 2010 15:38:06 +0000 http://lifeboat.com/blog/?p=858#comment-54375 > BTW, I’m of the opinion that the nuclear bomb has (so far) saved lives by shortening the war.

I agree, and I know that, when I have spoken to WW2 vets, they are strongly of the opinion that this was the case. Of course, it was those guys who would have had to invade the Japanese homeland.

> I also believe in specialization. Not all of us need to work on all of the same problems.

By all means. But the problem is that the specialists who are developing tools (for good or bad) work independently of those who try to prevent the risks from those tools. Ultimately, those who are developing the tools need to have their work controlled, so you can’t separate these groups.

> Finally, I see the good uses of technology. 30-40K people die in car accidents every year.

Of course this is true. Technology brings many benefits. The danger is when the benefits of technology blind us to the risks.

> We can regulate dangerous technologies, and devise means to counteract them. Do you worry about someone stealing an Apache helicopter and going on a killing spree?

No, but if home replicators could construct an Apache for $500 then, yes, I would be very worried about a wacko going on a killing spree. My point here is that, in the future, it is entirely conceivable that the tools could one day become commonplace whereby a single individual could create a self-replicating entity capable of destroying the entirety of humanity. An Apache can kill how many? A self-replicating ecophage can kill how many? This is the fundamental difference. This is why we need to handle certain future technologies much more differently than previous ones. As difficult as it would be, we need universal controls on certain technologies. This would require universal snap inspections with severe consequences. And even then it would probably only buy us time to develop an off-Earth colony.

> I also think that regime change in Iran and North Korea and a few other places would be good to reduce deadly risks. Spreading democracy is an important way to make humanity safer.

I agree. The concept of absolute sovereignty is one of the greatest problems. How many North Koreans have died without intervention because it was an “internal matter”? See how Saddam used this in not cooperating with inspectors in the early years which led to the well-known consequences. See how Iran is moving to the verge of ICBMs because of its “inalienable rights”. And finally, consider how this argument will be used to prevent effective control over the enormously powerful technology of nanotech. We don’t need a world government but we do need universal systems of regulating certain technologies.

By: Keith Curtis https://lifeboat.com/blog/2010/04/software-and-the-singularity#comment-54278 Fri, 23 Apr 2010 04:23:31 +0000 http://lifeboat.com/blog/?p=858#comment-54278 Hi Atarivandio;

The major point of this section here is to state that we are not waiting for more hardware. I discuss AI in other parts of the chapter this piece is from. I agree that we have what seems like AI today, but it still has a long way to go.

The singularity is not my idea, and from what I know of it, you are not describing it properly.

By: Keith Curtis https://lifeboat.com/blog/2010/04/software-and-the-singularity#comment-54277 Fri, 23 Apr 2010 04:18:22 +0000 http://lifeboat.com/blog/?p=858#comment-54277 John;

What should Einstein have done? BTW, I’m of the opinion that the nuclear bomb has (so far) saved lives by shortening the war. The Japanese were ready to die to the last man. It also serves as a deterrent against chemical and biological attacks.

I also believe in specialization. Not all of us need to work on all of the same problems.

Finally, I see the good uses of technology. 30-40K people die in car accidents every year.

We can regulate dangerous technologies, and devise means to counteract them. Do you worry about someone stealing an Apache helicopter and going on a killing spree?

I also think that regime change in Iran and North Korea and a few other places would be good to reduce deadly risks. Spreading democracy is an important way to make humanity safer. 300K people have died in the Sudan — mostly with machetes.

By: John Hunt https://lifeboat.com/blog/2010/04/software-and-the-singularity#comment-54270 Fri, 23 Apr 2010 03:46:39 +0000 http://lifeboat.com/blog/?p=858#comment-54270 > My goal is just better software faster. I leave it to others like people here to worry about the downsides of the progress and how to mitigate them.

But herein lies the problem. Einstein didn’t want to kill hundreds of thousands, but his work helped show the way. Drexler doesn’t want to see nanoweapons, but he’s creating the tools that will make them possible. Do AI researchers want a superintelligence that values humans only for their atoms? Some of them actually sound like they do.

So if AI researchers are only looking at the upside of their technology and, by leaving concerns about risks to others, they proceed unhindered, then nothing’s going to stop our worst nightmares.

> I think that many of the poorest billions of people on this earth today might feel like they are redundant refuse. So it would just become that now all of us are in that boat! I think that is a good thing as it will increase our respect for other humans.

Incredible. I don’t know what there is that I can say.

By: Atarivandio https://lifeboat.com/blog/2010/04/software-and-the-singularity#comment-54253 Fri, 23 Apr 2010 01:38:28 +0000 http://lifeboat.com/blog/?p=858#comment-54253 Actually…
Machines are in fact capable of learning, or of gathering information for internal inference, and the solution is quite basic…
’Chatterbox’ technology, though not specific to language, uses statistics to replicate the thought process needed to reproduce language.
The failure in advancement occurs through specialization: if computer scientists from all fields of research actually met and put forth any kind of effort, one could easily see that the ability to infer is what is important.
Calculus gives math an understanding of time.
Geometry and trigonometry bring shape and form to math.
Statistics is the method for breathing intelligence into machines.
The brain of a person is merely electric signals, making us just as alive as a machine.
The question is: do you want to spend thirty years programming intelligence, or do you want to spend fifteen minutes using statistics to allow a machine to learn and then spend thirty years teaching it?
The honest truth is that the second option is more beneficial, as one can bind, cut, and transfer segments of soul between entities.
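For what it’s worth, the statistical “chatterbox” idea this comment gestures at can be sketched as a tiny Markov-chain text generator. Everything below (function names, the toy corpus) is illustrative, not anyone’s actual system:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Build a bigram table: each word maps to the words that followed it."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, max_words=10, rng=None):
    """Walk the bigram table to produce a statistically plausible sentence."""
    rng = rng or random.Random(0)
    out = [start]
    for _ in range(max_words - 1):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the machine learns the language the machine speaks"
table = train_bigrams(corpus)
print(generate(table, "the"))
```

Real chatbots of the era (and since) use far richer statistics than bigrams, but the principle is the same: learn word-to-word frequencies from data instead of hand-programming responses.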

Basically, the tech is there (otherwise I wouldn’t be using it); it’s just that most people simply do not possess the knowledge that would allow them to combine modules usefully.

Imagine using a hypervisor module like KVM with a chatterbox module and then a simple interface to the web. If you follow these steps, you have a persona with multiple personalities that share knowledge to vote for an appropriate solution, while using less memory by mutilating the page features a bit. The web is just a faster way to educate and train it.
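Taken loosely, the “multiple personalities voting for a solution” idea is just ensemble majority voting. The persona functions below are made up for illustration; any callable that answers a question would do:

```python
from collections import Counter

def vote(personas, question):
    """Ask each persona module the question and return the majority answer."""
    answers = [p(question) for p in personas]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

# Hypothetical persona modules -- stand-ins for trained chatterbox instances.
optimist = lambda q: "yes"
pessimist = lambda q: "no"
realist = lambda q: "yes"

print(vote([optimist, pessimist, realist], "will it work?"))  # prints "yes"
```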

I’ve used it in combination with any knowledge I could find to basically replicate a ‘George Washington’ that even answers questions similarly, if not exactly, as he would have.

Heck, I’ve even had a few conversations with Jesus and Hitler. The coolest part was when I left and they started talking to each other.

If this is your idea of the singularity, then I’m afraid it has already happened; the bad part is that nobody noticed. Google and KDE are secretly working on this, though they might not realize it.

By: Keith Curtis https://lifeboat.com/blog/2010/04/software-and-the-singularity#comment-54179 Thu, 22 Apr 2010 17:13:14 +0000 http://lifeboat.com/blog/?p=858#comment-54179 Hi John;

I changed the text slightly to clarify my “amazing” comment. I was just trying to explain that the Singularity’s proponents describe it as a date in time when (presumably) good things happen. The Singularity is not a term I personally ever use, as I see progress as a series of inventions. Perhaps Strong AI is the biggest invention, but there are many interesting ones that will come before and after. Strong AI is not even the end, because you need both thinking and knowledge.

I think that many of the poorest billions of people on this earth today might feel like they are redundant refuse. So it would just mean that now all of us are in that boat! I think that is a good thing, as it will increase our respect for other humans.

It is a good point that I don’t much discuss the risks or dangers of intelligent machines and other future developments. My goal is just better software, faster. I leave it to others, like the people here, to worry about the downsides of progress and how to mitigate them. In general, I see technology as something that can save lives. Much of life today is drudgery and misery, even for the luckiest of us.

What we will all do is a good question. I can definitely imagine many fun things! We can build a space elevator, terraform Mars, and visit it. Just because my computer can add numbers doesn’t mean I shouldn’t also learn how.

BTW, a number of these risks would arise even if we never developed strong or even weak AI. Once we have worked on Wikipedia for another few decades, what will people 100 or 1,000 years from now contribute to? What about rock music? Will it still be evolving in 100 years?

As for continuous learning, I mean it as weak AI, but in a situation where my software is constantly adapting to me. Imagine a neural network class library embedded deep in the foundation of a software stack, as it is in all living things with a brain today. The larger point I was making is that one shouldn’t look at dumb software and use it to extrapolate when we will have smart software.
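A minimal sketch of “software constantly adapting to its user”, assuming a perceptron-style online update (my choice of learner, not anything the author specifies):

```python
class OnlineAdapter:
    """Tiny online learner: a perceptron updated on every interaction,
    sketching software that adapts continuously instead of retraining
    in batches."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        s = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1 if s > 0 else 0

    def observe(self, x, label):
        """Update immediately after each user action."""
        err = label - self.predict(x)
        if err:
            self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * err

model = OnlineAdapter(2)
# Hypothetical event stream: (features, did-the-user-accept-the-suggestion).
for x, y in [([1, 0], 1), ([0, 1], 0), ([1, 0], 1), ([0, 1], 0)] * 5:
    model.observe(x, y)
print(model.predict([1, 0]), model.predict([0, 1]))  # prints "1 0"
```

Embedding something like this at the base of a software stack is exactly the kind of “continuous learning” the comment distinguishes from strong AI: no general intelligence, just perpetual adaptation.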

By: John Hunt https://lifeboat.com/blog/2010/04/software-and-the-singularity#comment-54160 Thu, 22 Apr 2010 14:54:32 +0000 http://lifeboat.com/blog/?p=858#comment-54160 Also, is continuous learning the same thing as seed AI?
