
It’s easy to think of people from the underdeveloped world as quite different from ourselves. After all, there’s little to convince us otherwise. National Geographic specials, video clips on the nightly news, photos in every major newspaper – all depicting a culture and lifestyle that’s hard for us to imagine, let alone relate to. Yes, they seem very different; or perhaps not. Consider this story related to me by a friend.

Ray was a pioneer in software. He sold his company some time ago for a considerable amount of money. After this, during his quasi-retirement, he got involved in coordinating medical relief missions to some of the most impoverished places on the planet, places such as Timbuktu in Africa.

The missions were simple: come to a place like Timbuktu, set up medical clinics, provide basic medicines and health care training, and generally try to improve the health prospects of the native people wherever he went.

Upon arriving in Timbuktu, Ray observed that their system of commerce was incredibly simple. Basically, they had two items in commerce: goats and charcoal.

Friendly AI: What is it, and how can we foster it?
By Frank W. Sudia [1]

Originally written July 20, 2008
Edited and web published June 6, 2009
Copyright © 2008-09, All Rights Reserved.

Keywords: artificial intelligence, artificial intellect, friendly AI, human-robot ethics, science policy.

1. Introduction

Originally posted @ Perspective Intelligence

Two events centered on New York City, separated by five days, demonstrated the end of one phase of terrorism and the pending arrival of the next. The failed car bombing in Times Square and the dizzying stock market crash less than a week later mark the bookends of terrorist eras.

The attempt by Faisal Shahzad to detonate a car bomb in Times Square was notable not just for its failure but also for the severely limited systemic impact a car bomb could have, even when exploding in a crowded urban center. Car bombs, or vehicle-borne IEDs (VBIEDs), have a long history (incidentally, one of the first was the 1920 horse-and-cart bomb on Wall Street, which killed 38 people). VBIEDs remain deadly as a tactic within an insurgency or warfare setting, but with regard to modern urban terrorism the world has moved on. We are now living within a highly virtualized system, and the dizzying stock market crash of May 6, 2010 shows how vulnerable this system is to digital failure. While the NYSE building probably remains a symbolic target for some terrorists, a deadly and capable adversary would ignore this physical manifestation of the financial system and disrupt the data centers, software, and routers that make the global financial system tick. Shahzad’s attempted car bomb was from another age and posed no overarching risk to Western societies. The same cannot be said of the vulnerable and highly unstable financial system.

Computer-aided crash (proof of concept for future cyber-attack)

There has yet to be a definitive explanation of how stocks such as Procter & Gamble plunged 47% and the normally solid Accenture fell from roughly $40 to one cent, based on no external input of information into the financial system. The SEC has issued directives in recent years boosting competition and lowering commissions, which has had the effect of fragmenting equity trading around the US and making it highly automated. This has created four leading exchanges – NYSE Euronext, Nasdaq OMX Group, BATS Global Markets, and Direct Edge – while secondary exchanges include the International Securities Exchange, the Chicago Board Options Exchange, the CME Group, and the Intercontinental Exchange. There are also broker-run matching systems like those run by Knight and ITG, and so-called ‘dark pools’ where trades are matched privately, with prices posted publicly only after trades are done. A similar picture has emerged in Europe, where rules allowing competition with established exchanges, known by the acronym MiFID, have led to a similar explosion of trading types and venues.

To navigate this confusing picture, traders have to rely on ‘smart order routers’ – electronic systems that seek the best price across all of the platforms. Trades are therefore done in vast data centers, not in exchange buildings. This total automation of trading allows a variety of ‘trading algorithms’ to be used to manage investment themes. The best known of these is the ‘volume algo’, which ensures throughout the day that a trader maintains his holding in a share at a pre-set percentage of that share’s overall volume, automatically adjusting buy and sell instructions to keep that percentage stable whatever the market conditions. Algorithms such as this have been blamed for exacerbating the rapid price moves on May 6th. High-frequency traders are the biggest proponents of algos, and they account for up to 60% of US equity trading.
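To make the mechanics concrete, here is a minimal sketch (in Python, with made-up function names, not any real exchange or vendor API) of the core bookkeeping a volume-participation algorithm performs: it compares its own fills against a target fraction of market volume and sends the shortfall.

```python
# Minimal sketch of a volume-participation ("volume algo") strategy.
# All names and numbers here are illustrative assumptions, not a real trading API.

def participation_order(market_volume_so_far: int,
                        my_filled_so_far: int,
                        target_pct: float) -> int:
    """Return how many shares to send now so that our fills stay at
    roughly target_pct of the total volume traded so far."""
    desired_total = int(market_volume_so_far * target_pct)
    shortfall = desired_total - my_filled_so_far
    return max(shortfall, 0)  # never send a negative order

# Example: the market has traded 1,000,000 shares, we have filled 45,000,
# and we want to stay at 5% of volume.
print(participation_order(1_000_000, 45_000, 0.05))  # -> 5000
```

The point of the sketch is how mechanical the behavior is: the order size follows the market’s volume automatically, with no judgment about whether the prevailing price makes sense.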

The most likely cause of the collapse on May 6th was a slowdown or near stop on one side of the trading pool. In very basic terms, a large number of sell orders started backing up on one side of the system (at the speed of light) with no counterparties taking the orders on the other side of the trade. The counterparty side of the trade slowed or stopped, causing an almost instant pile-up of orders. The algorithms on the selling side, finding no buyer for their stocks, kept offering lower prices (as per their software) until they attracted a buyer. However, as no buyers appeared on the still slowed or stopped counterparty side, prices tumbled at an alarming rate. Fingers have pointed at the NYSE for causing the slowdown on one side of the trading pool when it instituted some kind of circuit breaker into the system, which caused all the other exchanges to pile up on the other side of the trade. There has also been a focus on one particular trade, which may have been the spark igniting the NYSE ‘circuit breaker’. Whatever the precise cause, once events were set in train, the system had in no way caught up with the new realities of automated trading and diversified exchanges.

More nodes, same assumptions

On one level this seems to defy conventional thinking about security – more diversity, greater strength – since not all nodes in a network can be compromised at the same time. With a greater number of exchanges, surely the US and global financial system is more secure? However, in this case the theory collapses quickly if thinking is switched from examining the physical to the virtual. While all of the exchanges are physically and operationally separate, they seemingly all share the same software and, crucially, trading algorithms that rest on some of the same assumptions. In this case they all assumed that, because they could find no counterparty to the trade, they needed to lower the price (at the speed of light). The system is therefore highly vulnerable because it relies on one set of assumptions that have been programmed into lightning-fast algorithms. If a national circuit breaker could be implemented (which remains doubtful), it could slow a rapid descent, but it doesn’t take away the power of the algorithms, which are always going to act in certain fundamental ways, i.e., continue to lower the offer price if they obtain no buy order. What needs to be understood is the fundamental way in which all the trading algorithms move in concert. All will have variances, but they will all share key similarities; understanding these should lead to the design of logic circuit breakers, of the kind sketched below.
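As an illustration only (a minimal sketch under assumed thresholds, not a proposal for any specific exchange), a ‘logic circuit breaker’ could watch for the tell-tale pattern of the cascade described above: repeated downward re-pricing with no fills, at which point it pauses quoting rather than continuing to chase a buyer.

```python
# Minimal sketch of a "logic circuit breaker": pause automated re-pricing
# when the algorithm keeps lowering its offer without getting any fills.
# The threshold and structure are illustrative assumptions only.

def should_halt(price_history: list[float],
                fills_in_window: int,
                max_drop_pct: float = 0.10) -> bool:
    """Halt if the price has fallen more than max_drop_pct within the window
    while attracting zero buyers - the signature of a no-counterparty cascade."""
    if not price_history or fills_in_window > 0:
        return False
    drop = (price_history[0] - price_history[-1]) / price_history[0]
    return drop > max_drop_pct

# Example: the offer walked down from $40.00 to $35.50 with no fills -> halt.
print(should_halt([40.0, 39.0, 37.5, 35.5], fills_in_window=0))  # True
```

The design choice is the one argued for in the text: the check targets the shared assumption (keep lowering the offer until someone buys) rather than any one venue, so it would have to be applied consistently across venues to be effective.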

New Terrorism

However, for now the system looks desperately vulnerable to both generalized and targeted cyber attack, and this is the opportunity for the next generation of terrorists. There has been little discussion as to whether the events of last Thursday were prompted by malicious means, but it is certainly worth mentioning. At a time when Greece was burning, launching a cyber attack against this part of the US financial system would clearly have been stunningly effective. Combining political instability with a cyber attack against the US financial system would create enough doubt about the cause of a market drop for the collapse to gain rapid traction. Using targeted cyber attacks to stop one side of the trade within these exchanges (which are all highly automated and networked) would, as has now been proven, cause a dramatic collapse. This could also be adapted and targeted at specific companies or asset classes to cause a collapse in price. A scenario whereby one of the exchanges slows down its trades surrounding the stock of a company the bad actor is targeting seems both plausible and effective.

A hybrid cyber and kinetic attack could also cause similar damage. As most trades are now conducted within data centers, it raises the question of why there are armed guards outside the NYSE – of course it retains some symbolic value, but security resources would be better placed outside the data centers where these trades are being conducted. A kinetic attack against the financial data centers responsible for these trades would surely have a devastating effect. Finding the location of these data centers is as simple as conducting a Google search.

In order for terrorism to have impact in the future, it needs to shift its focus from the weapons of the 20th century to those of the present day. Using their current tactics, the Pakistan Taliban and their assorted fellow travelers cannot fundamentally damage Western society. That battle is over. However, the next era of conflict, motivated by a radicalism rooted in as yet unknown grievances, fueled by a globally networked Generation Y, their cyber weapons of choice, and the precise application of ultra-violence and information spin, has dawned. Five days in Manhattan flashed a light on this new era.

Roderick Jones

I am a former Microsoft programmer who wrote a book (for a general audience) about the future of software called After the Software Wars. Eric Klien has invited me to post on this blog (Software and the Singularity, AI and Driverless Cars). Here are the sections on the Space Elevator. I hope you find these pages food for thought, and I appreciate any feedback.


A Space Elevator in 7

Midnight, July 20, 1969; a chiaroscuro of harsh contrasts appears on the television screen. One of the shadows moves. It is the leg of astronaut Edwin Aldrin, photographed by Neil Armstrong. Men are walking on the moon. We watch spellbound. The earth watches. Seven hundred million people are riveted to their radios and television screens on that July night in 1969. What can you do with the moon? No one knew. Still, a feeling in the gut told us that this was the greatest moment in the history of life. We were leaving the planet. Our feet had stirred the dust of an alien world.

—Robert Jastrow, Journey to the Stars

Management is doing things right; leadership is doing the right things.

—Peter Drucker

I am a former Microsoft programmer who wrote a book (for a general audience) about the future of software called After the Software Wars. Eric Klien has invited me to post on this blog. Here are several more sections on AI topics. I hope you find these pages food for thought and I appreciate any feedback.


The future is open source everything.

—Linus Torvalds

That knowledge has become the resource, rather than a resource, is what makes our society post-capitalist.

—Peter Drucker, 1993

This is a crosspost from Nextbigfuture

I looked at nuclear winter and city firestorms a few months ago; I will summarize the case I made then in the next section. There are significant additions based on my further research and on email exchanges I had with Prof. Alan Robock and Prof. Brian Toon, who wrote the nuclear winter research.

The steps needed to prove nuclear winter:
1. Prove that enough cities will have firestorms or big enough fires (the claim here is that this does not happen).
2. Prove that when enough cities over a sufficient area have big fires, enough smoke and soot gets into the stratosphere (there is trouble with this claim because of the Kuwait fires).
3. Prove that the condition persists and affects the climate as per the models (others have questioned this, but that issue is not addressed here).

The nuclear winter case is predicated on getting 150 million tons (the 150-teragram case) of soot and smoke into the stratosphere and having it stay there. The assumption seemed to be that the cities would be targeted and would burn in massive firestorms. Alan Robock indicated that they only included fire based on the radius of ignition from the atmospheric blasts. However, in the Scientific American article and in their 2007 paper, the stated assumptions are:

I am a former Microsoft programmer who wrote a book (for a general audience) about the future of software called After the Software Wars. Eric Klien has invited me to post on this blog. Here is my section entitled “Software and the Singularity”. I hope you find this food for thought and I appreciate any feedback.


Futurists talk about the “Singularity”, the time when computational capacity will surpass the capacity of human intelligence. Ray Kurzweil predicts it will happen in 2045. Therefore, according to its proponents, the world will be amazing then.[3] The flaw with such a date estimate, other than the fact that such estimates are always prone to extreme error, is that continuous learning is not yet a part of the foundation. Any AI code lives in the fringes of the software stack and is either proprietary or written by small teams of programmers.

I believe the benefits inherent in the singularity will happen as soon as our software becomes “smart”, and we don’t need to wait for any further Moore’s law progress for that to happen. Computers today can do billions of operations per second, such as adding 123,456,789 and 987,654,321. If you could do that calculation in your head in one second, it would take you about 30 years to do the billion that your computer can do in that second.
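To check that back-of-the-envelope figure (a quick worked example, nothing more), a billion one-second calculations works out to roughly 31.7 years:

```python
# Rough check of the "30 years" claim: a billion additions at one per second.
seconds = 1_000_000_000
years = seconds / (60 * 60 * 24 * 365)
print(round(years, 1))  # about 31.7 years
```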

Even if you don’t think computers have the necessary hardware horsepower today, understand that in many scenarios the size of the input is the primary driver of the processing power required to do the analysis. In image recognition, for example, the amount of work required to interpret an image is mostly a function of the size of the image. Each step in the image recognition pipeline, like the processes that take place in our brain, dramatically reduces the amount of data from the previous step. At the beginning of the analysis might be a one-million-pixel image, requiring 3 million bytes of memory. At the end of the analysis is the conclusion that you are looking at your house, a concept that requires only tens of bytes to represent. The first step, working on the raw image, requires the most processing power, so it is the image resolution (and frame rate) that sets the requirements, and those values are trivial to change. No one has shown robust vision recognition software running at any speed, on any sized image!
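As a rough illustration of that data reduction (the stage names and byte counts below are assumptions made for the example, not a description of any particular vision system), here is the data shrinking from raw pixels down to a label:

```python
# Illustrative data reduction along an image-recognition pipeline.
# Stage names and byte counts are example assumptions, not a real system.
stages = [
    ("raw 1-megapixel RGB image", 1_000_000 * 3),  # 3 bytes per pixel
    ("edge / feature map",        200_000),
    ("object candidates",         2_000),
    ("label: 'my house'",         20),
]
for name, size in stages:
    print(f"{name:30s} {size:>10,} bytes")
```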

While a brain is different from a computer in that it works in parallel, such parallelization only makes the answer arrive faster; it does not change the result. Anything accomplished by our parallel brain could also be accomplished on computers of today, which can do only one thing at a time but at a rate of billions of operations per second. A 1-gigahertz processor can do 1,000 different operations on a million pieces of data in one second. With such speed, you don’t even need multiple processors! Even so, more parallelism is coming.[4]

With our growing resources, the Lifeboat Foundation has teamed with the Singularity Hub as Media Sponsors for the 2010 Humanity+ Summit. If you have suggestions on future events that we should sponsor, please contact [email protected].

The summer 2010 “Humanity+ @ Harvard — The Rise Of The Citizen Scientist” conference is being held on the East Coast, at Harvard University’s prestigious Science Hall on June 12–13, following the inaugural conference in Los Angeles in December 2009. Futurist, inventor, and author of the NYT bestselling book “The Singularity Is Near”, Ray Kurzweil will be the keynote speaker of the conference.

Also speaking at the H+ Summit @ Harvard is Aubrey de Grey, a biomedical gerontologist based in Cambridge, UK, and Chief Science Officer of SENS Foundation, a California-based charity dedicated to combating the aging process. His talk, “Hype and anti-hype in academic biogerontology research: a call to action”, will analyze the interplay of over-pessimistic and over-optimistic positions with regard to the research and development of cures, and propose solutions to alleviate the negative effects of both.

The theme, “The Rise Of The Citizen Scientist”, is illustrated by Alex Lightman, Executive Director of Humanity+, in his talk:

An obvious next step in the effort to dramatically lower the cost of access to low Earth orbit is to explore non-rocket options. A wide variety of ideas have been proposed, but it’s difficult to meaningfully compare them and to get a sense of what’s actually on the technology horizon. The best way to quantitatively assess these technologies is by using Technology Readiness Levels (TRLs). TRLs are used by NASA, the United States military, and many other agencies and companies worldwide. Typically there are nine levels, ranging from speculations on basic principles to full flight-tested status.

The system NASA uses can be summed up as follows:

TRL 1 Basic principles observed and reported
TRL 2 Technology concept and/or application formulated
TRL 3 Analytical and experimental critical function and/or characteristic proof-of-concept
TRL 4 Component and/or breadboard validation in laboratory environment
TRL 5 Component and/or breadboard validation in relevant environment
TRL 6 System/subsystem model or prototype demonstration in a relevant environment (ground or space)
TRL 7 System prototype demonstration in a space environment
TRL 8 Actual system completed and “flight qualified” through test and demonstration (ground or space)
TRL 9 Actual system “flight proven” through successful mission operations.
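
For readers who want to work with the scale programmatically, it can be captured as a simple lookup table. This is purely an illustrative encoding of the list above (with abbreviated descriptions), not an official NASA artifact:

```python
# The TRL scale from the list above, captured as a simple mapping.
# Descriptions are abbreviated; this is an illustrative encoding only.
TRL = {
    1: "Basic principles observed and reported",
    2: "Technology concept and/or application formulated",
    3: "Analytical and experimental proof-of-concept",
    4: "Component/breadboard validation in laboratory environment",
    5: "Component/breadboard validation in relevant environment",
    6: "System/subsystem prototype demonstration in relevant environment",
    7: "System prototype demonstration in a space environment",
    8: "Actual system flight qualified through test and demonstration",
    9: "Actual system flight proven through successful mission operations",
}

def describe(level: int) -> str:
    """Return a short description for a given TRL, e.g. describe(7)."""
    return f"TRL {level}: {TRL[level]}"

print(describe(7))
```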

Progress towards achieving a non-rocket space launch will be facilitated by popular understanding of each of these proposed technologies and their readiness levels. This can serve to direct more work into the methods that are the most promising. I think it is important to distinguish between options with acceleration levels within the range of human safety and those that would be useful only for cargo. Below I have listed some non-rocket space launch methods and my assessment of their technology readiness levels.