At lunchtime I am present virtually in the summit hall, as a face on a Skype screen: I didn't get a visa, so I am staying in Moscow. Ironically, my situation resembles the subject of my talk: the risk posed by a remote AI created by aliens millions of light years from Earth and sent to us via radio signals. The main difference is that they communicate one way, while I have duplex mode.
This is my video presentation on YouTube:
Risks of SETI, for Humanity+ 2010 summit
We can only see a short distance ahead, but we can see plenty there that needs to be done.
—Alan Turing
As a programmer, I look at events like the H+ Conference this weekend in a particular way. I see all of their problems as software: not just the code for AI and friendly AI, but also that for DNA manipulation. It seems that the biggest challenge for the futurist movement is to focus less on writing English and more on getting the programmers working together productively.
I start the AI chapter of my book with the following question: imagine 1,000 people, broken into groups of five, working on two hundred separate encyclopedias, versus the same 1,000 people working on one encyclopedia. Which will be better? This sounds like a silly analogy when described in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today.
Today, the research community has not adopted free software and shared codebases sufficiently. For example, I believe there are more than enough PhDs today working on computer vision, but there are 200+ different codebases plus countless proprietary ones. Simply put, there is no computer vision codebase with critical mass.
Some think that these problems are so hard that it isn't a matter of writing code; it is a matter of coming up with breakthroughs on a chalkboard. But people can generally agree at a high level on how the software for solving many problems will work. Code for OCR, neural networks, and much more has been kicking around for years. The biggest challenge right now is getting people together to hash out the details, which is a lot closer to Wikipedia than it first appears. Software advances in a steady, stepwise fashion, which is why we need free software licenses: to incorporate all the incremental advancements that each scientist is making. Advances must eventually be expressed in software (and data) so they can be executed by a computer. Even if you believe we need certain scientific breakthroughs, it should be clear that things like robust computer vision are complicated enough that you would want hundreds of people working together on the vision pipeline. So, while we are waiting for those breakthroughs, let's get a hundred people together!
There is an additional problem: C and C++ have not been retired. These languages make it hard for programmers to work together, even if they want to. There are all sorts of taxes on time, from learning the arcane rules of these ungainly languages to the fact that libraries often use their own string classes, synchronization primitives, error handling schemes, and so on. In many cases, it is easier to write a specialized, custom computer vision library in C/C++ than to integrate something like OpenCV, which does everything itself, down to the Matrix class. The pieces for building your own computer vision library (graphics, I/O, math, etc.) are in good shape, but the computer vision itself is not, and so we haven't moved beyond that stage. Another problem with C and C++ is that they lack garbage collection, which is necessary, though not sufficient, for reliable code.
A SciPy-based computational fluid dynamics (CFD) visualization of a combustion chamber.
I think scientific programmers should move to Python and build on SciPy. Python is a modern free language that has quietly built up an extremely complete set of libraries for everything from gaming to scientific computing. Specifically, the SciPy library and its various scikit extensions are a solid baseline patiently waiting for more people to work on all sorts of futuristic problems. (It is true that Python and SciPy both have issues. One of Python's biggest issues is that the default implementation is interpreted, but several workarounds are being built [Cython, PyPy, Unladen Swallow, and others]. SciPy's biggest challenge is how to be expansive without being duplicative. It is massively easier to merge English articles in Wikipedia that discuss the same topics than to do the equivalent in code. We need to share data in addition to code, but we need to share code first.)
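To make this concrete, here is a minimal sketch of one vision-pipeline stage built on SciPy alone: smooth a toy image, then compute Sobel edge magnitudes. The image and parameters are invented for illustration; the point is that the shared library already provides the building blocks, so nobody needs to rewrite them.

```python
import numpy as np
from scipy import ndimage

# Toy input: a bright square on a dark background.
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0

# Stage 1: denoise with a Gaussian blur.
smoothed = ndimage.gaussian_filter(image, sigma=2)

# Stage 2: edge detection via Sobel gradients.
grad_x = ndimage.sobel(smoothed, axis=0)
grad_y = ndimage.sobel(smoothed, axis=1)
edges = np.hypot(grad_x, grad_y)

# The edge response peaks on the square's border, not in its flat interior.
print(edges[16, 32] > edges[32, 32])  # True
```

Anyone can bolt a further stage (thresholding, contour extraction, a classifier) onto the same arrays, which is exactly the kind of incremental collaboration argued for above.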
Some think the singularity is a hardware problem, and won’t be solved for a number of years. I believe the benefits inherent in the singularity will happen as soon as our software becomes “smart” and we don’t need to wait for any further Moore’s law progress for that to happen. In fact, we could have built intelligent machines and cured cancer years ago. The problems right now are much more social than technical.
Perhaps you think I'm crazy or naive to pose this question, but over the past few months I've begun to wonder whether this idea may not be too far off the mark.
Not because of some half-baked theory about a global conspiracy or anything of the sort, but simply because of the recent behavior of many multinational corporations and the effects this behavior is having upon people everywhere.
Again, you may disagree, but my perspective on these financial giants is that they are essentially predatory in nature, and that their prey is any dollar in commerce they can possibly absorb. The problem is that for anyone in the modern, or even quasi-modern, world, money is nearly as essential to our well-being as plasma.
It has been clearly demonstrated again and again, all over the world, that when a population becomes sufficiently destitute that the survival of the individual is actually threatened, violence inevitably occurs. On a large enough scale this sort of violence can erupt into civil war, and wars, as we all know too well, can spread like a virus across borders, even oceans.
Until fairly recently, corporations were not big enough, powerful enough, or sufficiently meshed with our government to push the US population to the point of violence. Perhaps we're not there yet, but between the bank bailout, the housing crisis, the bailouts of the automakers, the subsidies to the big oil companies, and ten thousand other government gifts coming straight from the taxpayer, I fear we are getting ever closer to the brink.
Who knows: it might take just one little thing, like the one-dollar charge many stores have suddenly begun instituting for any purchase made with an ATM or credit card, to push us over the edge.
The last time I got hit with one of these dollar charges, I thought about its ostensible justification: the credit card company now charges the merchant more per transaction, so the merchant passes that cost on to you. But this isn't the whole story. The merchant is actually charging you more than the transaction costs him, and even if this violates the law or the terms of the agreement between the card company and the merchant, the credit card company looks the other way, because the merchant's markup secures a bigger transaction and thus increases the company's profits even further.
Death by big blows or by a thousand cuts: the question is, will we be forced to do something about it before the big corporations eat us alive?
Transitions
Posted in futurism
King Louis XVI's entry in his personal diary for that fateful day of July 14, 1789 suggests that nothing important had happened. He did not know that the events of the day, the storming of the Bastille, meant that the revolution was under way, and that the world as he knew it was essentially over. Fast forward to June 2010: a self-replicating biological organism (a bacterium carrying a synthetic Mycoplasma mycoides genome) has been created in a laboratory by J. Craig Venter and his team. Yes, the revolution has begun. Indeed, the preliminaries have been going on for several years; it's just that … um, well, have we been wide awake?
Ray Kurzweil's singularity might be 25 years in the future, but sooner, a few years from now, we'll have an interactive global network that some refer to as a 'global brain': Web3. I imagine no one knows exactly what will come of all this, but I expect we'll find that the whole is more than, and different from, the sum of its parts. Remember complexity theory. How about the 'butterfly effect'? Chaos theory. And much more not explainable by presently known theories. I expect surprises, to say the least.
I am a retired psychiatrist, not a scientist. We each have a role to enact in this drama/comedy that we call life, and yes, our lives have meaning. Meaning! For me life is not a series of random events, or of events brought about by 'them,' but rather an unfolding drama/comedy with an infinite number of possible outcomes. We don't know its origins or its drivers. Do we even know where our visions come from?
So, what is my vision and what do I want? How clearly do I visualize what I want? Am I passionate about what I want or merely lukewarm? How much am I prepared to risk in pursuit of it? Do I reach for what I want directly, or do I pursue it indirectly, trying to serve two masters, so to speak? If the former, I practice psychological responsibility; if the latter, I do not. An important distinction. The latter situation suggests an unresolved dilemma, common enough. Who among us can claim to be without one?
As we go through life there are times when we conceal from others, and to some extent from ourselves, exactly what it is that we want, hoping that it will come to pass without our openly clarifying what we stand for. One basic premise I like is that actions speak louder than words, and that therefore, by our actions in our personal lives, we directly or indirectly bring to pass what we ultimately want.
Does that include what I fear? Certainly it might, if deep within me I am psychologically engineering an event that frightens me, if what I fear is what I secretly bring about. Any one among us might surreptitiously arrange drama so as to inspire or provoke others in ways that conceal our personal responsibility. All this is pertinent and practical, as will become obvious in the coming years.
We grew up in 20th-century households, in families where we and other family members lived by 20th-century worldviews, and so around the world 20th-century thinking still prevails. Values have much to do with internalized, learned relationships to limited and limiting aspects of the universe. In the midst of change we can transcend these. I wonder if by mid-century people will speak of the BP oil spill as the death throes of a dinosaur, heralding the end of an age. I don't know, but I imagine that we're entering a phase of transition, a hiatus, in which we see our age fading away and a new age approaching. But the new has yet to consolidate. A dilemma: if we embrace the as-yet ethereal new, we risk losing our roots and all that we value; if we cling to the old, we risk seeing the ship leave without us.
We are crew, and not necessarily volunteers, on a vessel bound for the Great Unknown. Like all such historic voyages, this one is not without its perils. When established national boundaries become more porous, when old-fashioned foreign policy fails, when the 'old guard' feels threatened beyond what it will tolerate, what then? Will we regress into authoritarianism? Will we demand a neo-fascist state so as to feel secure? Or will we climb aboard the new? Yes, we can climb aboard even if we're afraid. To be sure we'll grumble, and some will talk of mutiny. A sense of loss is to be expected. We all feel a sense of loss when radical change happens in our personal lives, even when the change is for the better; I am aware of this in my own life as I clarify its meaning. There are risks either way. Such is life.
But change is also adventure: I am old enough to remember the days of the ocean liners and how our eyes lit up and our hearts rose up joyfully as we stood on deck departing into the vision, waving to those left behind. Indeed we do this multiple times in our lives as we move from infancy to old age and finally towards death. And like good psychotherapy, the coming change will be both confronting and rewarding. Future generations are of us and we are of them; we cannot be separated.
What a time to be alive!
Wendy McElroy brings an important issue to our attention — the increasing criminalization of filming or recording on-duty police officers.
The techno-progressive angle on this would have to take sousveillance into consideration. If our only response to a surveillance state is to observe “from the bottom” (as, for example, Steve Mann would have it), and if that response is made illegal, it seems that the next set of possible steps forward could include more entrenched recording of all personal interaction.
Already we have cyborg models for this: the "eyeborgs" Rob Spence and Neil Harbisson. So where next?
Resources:
http://www.nytimes.com/2006/12/10/magazine/10section3b.t-3.html
http://en.wikipedia.org/wiki/Steve_Mann
http://jointchiefs.blogspot.com/2010/06/camera-as-gun-drop-shooter.html
Cell Phones in Timbuktu
Posted in economics, finance, geopolitics, human trajectories
It's easy to think of people from the underdeveloped world as quite different from ourselves. After all, there's little to convince us otherwise. National Geographic specials, video clips on the nightly news, photos in every major newspaper: all depict a culture and lifestyle that is hard for us to imagine, let alone relate to. Yes, they seem very different. Or perhaps not. Consider this story, related to me by a friend.
Ray was a pioneer in software. He sold his company some time ago for a considerable amount of money. After that, during his quasi-retirement, he got involved in coordinating medical relief missions to some of the most impoverished places on the planet, places such as Timbuktu in Africa.
The missions were simple: come to a place like Timbuktu, set up medical clinics, provide basic medicines and health care training, and generally try to improve the health prospects of the native people wherever he went.
Upon arriving in Timbuktu, Ray observed that their system of commerce was incredibly simple. Basically they had two items that were in commerce – goats and charcoal.
According to Ray, they had no established currency; they traded goats for charcoal, charcoal for goats, or labor in exchange for either. That was basically it.
Ray told me that after setting up the clinic and training people, his team also installed solar generators to power satellite phones, which they left in several villages in the region.
They had anticipated that the natives, when faced with an emergency or in need of additional medicines or supplies, would use the satellite phones to communicate those needs. However, this isn't what ended up happening…
Two years after his initial visit to Timbuktu, Ray went back to check on the clinics that they had set up and to make certain that the people there had the medicines and other supplies that they required.
Upon arriving at the same village he had visited before, Ray was surprised to find that in the short span of two years things had changed dramatically, things that had not changed for hundreds, perhaps even thousands, of years.
Principally, the change was to commerce in Timbuktu. No longer were goats and charcoal the principal media of exchange. They had been replaced by a single unified currency: satellite phone minutes!
Instead of using the satellite phones to call Ray's organization, the people of Timbuktu had figured out how to use them to call neighboring villages. This enabled more active commerce between the villages: natives could now engage in business miles from home, coordinating trade between villages, calling for labor when needed, or exchanging excess charcoal for goats on a broader scale, for example.
Of course, their use of these phones wasn't limited strictly to commerce. Just like you and me, they also used the phones to find out what was happening in other places: who was getting married, who was sick or injured, or simply to stay in touch with people too far away to conveniently visit.
In other words, a civilization that had previously existed in a way we would consider highly primitive had leapfrogged thousands of years of technological and cultural development, and within the briefest of moments had adapted its life to one of the most advanced technologies broadly distributed in the modern world.
It's a powerful reminder that, despite our belief that primitive cultures are vastly different from us, basic human needs, once enabled by technology, are very much the same no matter where in the world, or how advanced the civilization.
Perhaps we are not so different after all?
Kepler Space University was a participant in the ISDC-2010 at Chicago, May 27–30, 2010 with a PhD Commencement and nine presentations.
Read KSC Scores at ISDC.
Bob Krone, Ph.D., Provost
www.keplerspaceuniversity.org
Friendly AI: What is it, and how can we foster it?
By Frank W. Sudia [1]
Originally written July 20, 2008
Edited and web published June 6, 2009
Copyright © 2008-09, All Rights Reserved.
Keywords: artificial intelligence, artificial intellect, friendly AI, human-robot ethics, science policy.
1. Introduction
There is consensus that true artificial intelligence, of the kind that could generate a “runaway” increasing-returns process or “singularity,” is still many years away, and some believe it may be unattainable. Nevertheless, in view of the likely difficulty of putting the genie back in the bottle, concern has been increasing about the topic of “friendly AI,” coupled with the idea that we should do something about it now, not after a potentially deadly situation has begun to spin out of control [2].
(Note: Some futurists believe this topic is moot in view of intensive funding for robotic soldiers, which can be viewed as intrinsically “unfriendly.” However if we focus on threats posed by “super-intelligence,” still off in the future, the topic remains germane.)
Most if not all popular (Western) dramatizations of robotic futures postulate that the AIs will run amok and turn against humans. Some scholars [3] who considered the issue concluded that this might be virtually inevitable, in view of the gross inconsistencies and manifest “unworthiness” of humanity, as exemplified in its senseless destruction of its global habitat and a large percentage of extant species, etc.
The prospect of negative public attention, including possible legal curbs on AI research, may be distasteful, but we must face the reality that public involvement has already been quite pronounced in other fields of science, such as nuclear physics, genetically modified organisms, birth control, and stem cells. Hence we should be proactive about addressing these popular concerns, lest we unwittingly incur major political defeats and long-lasting negative PR.
Nevertheless, upon reasoned analysis, it is far from obvious what “friendly” AI means, or how it could be fostered. Advanced AIs are unlikely to have any fixed “goals” that can be hardwired [4], so as to place “friendliness” towards humans and other life at the top of the hierarchy.
Rather, in view of their need to deal with perpetual novelty, they will reason from facts and models to infer appropriate goals. It’s probably a good bet that, when dealing with high-speed coherence analyzers, hypocrisy will not be appreciated – not least because it wastes a lot of computational resources to detect and correct. If humans continue to advocate and act upon “ideals” that are highly contradictory and self destructive, it’s hard to argue that advanced AI should tolerate that.
To make progress, not only for friendly AI, but also for ourselves, we should be seeking to develop and promote “ruling ideas” (or source models) that will foster an ecologically-respectful AI culture, including respect for humanity and other life forms, and actively sell it to them as a proper model upon which to premise their beliefs and conduct.
By a “ruling idea” I mean any cultural ideal (or “meme”) that can be transmitted and become part of a widely shared belief system, such as respecting one’s elders, good sportsmanship, placing trash in trash bins, washing one’s hands, minimizing pollution, and so on. An appropriate collection of these can be reified as a panel (or schema) of case models, including a program for their ongoing development. These must be believable by a coherence-seeking intellect, although then as now there will be competing models, each with its own approach to maximizing coherence.
2. What do we mean by “friendly”?
Moral systems are difficult to derive from first principles and most of them seem to be ad hoc legacies of particular cultures. Lao Tsu’s [5] Taoist model, as given in the following quote, can serve as a useful starting point, since it provides a concise summary of desiderata, with helpful rank ordering:
When the great Tao is lost, there is goodness.
When goodness is lost, there is kindness.
When kindness is lost, there is justice.
When justice is lost, there is the empty shell of ritual.
– Lao Tsu, Tao Te Ching, 6th-4th century BCE (emphasis supplied)
I like this breakout for its simplicity and clarity. Feel free to repeat the following analysis for any other moral system of your choice. Leaving aside the riddle of whether AIs can attain the highest level (of Tao or Nirvana), we can start from the bottom of Lao Tsu’s list and work upwards, as follows:
2.1. Ritual / Courteous AI
Teaching or encouraging the AIs to behave with contemporary norms of courtesy will be a desirable first step, as with children and pets. Courtesy is usually a fairly easy sell, since it provides obvious and immediate benefits, and without it travel, commerce, and social institutions would immediately break down. But we fear that it’s not enough, since in the case of an intellectually superior being, it could easily mask a deeper unkindness.
2.2. Just AI
Certainly for AIs to act justly in accordance with law is highly desirable, and this constitutes the central thesis of my principal prior work in this field [6]. It also raises the question: on what basis can we demand anything more from an AI than that it act justly? This is as far as positive law can go [7], and we rarely demand more from highly privileged humans. Indeed, for a powerful human to act justly (absent compulsion) is sometimes considered newsworthy.
How many of us are faithful in all things? Do many of us not routinely disappoint others (via strategies of co-optation or betrayal, large or small) when there is little or no penalty for doing so? Won’t AIs adopt a similar “game theory” calculus of likely rewards and penalties for faithfulness and betrayal?
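The "game theory" calculus alluded to here can be sketched as a one-line expected-value comparison: an agent betrays whenever the gain from betrayal, discounted by the probability and size of any penalty, exceeds the gain from faithfulness. All payoff numbers below are invented purely for illustration:

```python
def prefers_betrayal(gain_betray, gain_faithful, p_caught, penalty):
    """Expected-value rule: betray iff the expected payoff of
    betrayal exceeds that of staying faithful."""
    expected_betray = gain_betray - p_caught * penalty
    return expected_betray > gain_faithful

# With little or no penalty for betrayal, betrayal dominates:
print(prefers_betrayal(gain_betray=10, gain_faithful=6,
                       p_caught=0.1, penalty=5))   # True
# A credible, costly penalty flips the calculus:
print(prefers_betrayal(gain_betray=10, gain_faithful=6,
                       p_caught=0.9, penalty=20))  # False
```

The sketch makes the essay's point numerically: where society attaches little or no penalty to betrayal, a purely rational agent, human or artificial, has no reason to stay faithful.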
Justice is often skewed towards the party with greater intelligence and financial resources, and the justice system (with its limited public resources) often values “settling” controversies over any quest for truly equitable treatment. Apparently we want more, much more. Still, if our central desire is for AIs not to kill us, then (as I postulated in my prior work) Just AI would be a significant achievement.
2.3. Kind / Friendly AI
How would a “Kind AI” behave? Presumably it will more than incidentally facilitate the goals, plans, and development of others, in a low-ego manner, reducing its demands for direct personal benefit and taking satisfaction in the welfare, progress, and accomplishments of others. And, very likely, it will expect some degree of courtesy and possible reciprocation, so that others will not callously free-ride on its unilateral altruism. Otherwise its “feelings would be hurt.” Even mothers are ego-free mainly with respect to their own kin and offspring (allegedly fostering their own genetic material in others) and child care networks, and do not often act altruistically toward strangers.
Our friendly AI program may hit a barrier if we expect AIs to act with unilateral altruism, without any corresponding commitment by other actors to reciprocate. That would create a “non-complementary” situation, in which what is true for one party, who experiences friendliness, is not true for the other, who experiences indifference or disrespect in return.
Kindness could be an easier sell if we made it more practical, by delimiting its scope and depth. To how wide of a circle does this kindness obligation extend, and how far must they go to aid others with no specific expectation of reward or reciprocation? For example the Boy Scout Oath [8] teaches that one should do good deeds, like helping elderly persons across busy streets, without expecting rewards.
However, if too narrow a scope is defined, we will wind up back with Just AI, because justice is essentially “kindness with deadlines,” often fairly short ones, during which claims must be aggressively pursued or lost, with token assistance to weaker, more aggrieved claimants.
2.4. Good / Benevolent AI
Here we envision a significant departure from ego-centrism and personal gain towards an abstract system-centered viewpoint. Few humans apparently reach this level, so it seems unrealistic to expect many AIs to attain it either. Being highly altruistic, and looking out for others or the World as a whole rather than oneself, entails a great deal of personal risk due to the inevitable non-reciprocation by other actors. Thus it is often associated with wealth or sainthood, where the actor is adequately positioned to accept the risk of zero direct payback during his or her lifetime.
We may dream that our AIs will tend towards benevolence or “goodness,” but like the visions of universal brotherhood we experience as adolescents, such ideals quickly fade in the face of competitive pressures to survive and grow, by acquiring self-definition, resources, and social distinctions as critical stepping-stones to our own development in the world.
3. Robotic Dick & Jane Readers?
As previously noted, advanced AIs must handle “perpetual novelty” and almost certainly will not contain hard coded goals. They need to reason quickly and reliably from past cases and models to address new target problems, and must be adept at learning, discovering, identifying, or creating new source models on the fly, at high enough speeds to stay on top of their game and avoid (fatal) irrelevance.
If they behave like developing humans they will very likely select their goals in part by observing the behavior of other intelligent agents, thus re-emphasizing the importance of early socialization, role models, and appropriate peer groups.
“Friendly AI” is thus a quest for new cultural ideals of healthy robotic citizenship, honor, friendship, and benevolence, which must be conceived and sold to the AIs as part of an adequate associated program for their ongoing development. And these must be coherent and credible, with a rational scope and cost and adequate payback expectations, or the intended audience will dismiss such purported ideals as useless, and those who advocate them as hypocrites.
Conclusion: The blanket demand that AIs be “friendly” is too ill-defined to offer meaningful guidance, and could be subject to far more scathing deconstruction than I have offered here. As in so many other endeavors there is no free lunch. Workable policies and approaches to robotic friendliness will not be attained without serious further effort, including ongoing progress towards more coherent standards of human conduct.
= = = = =
Footnotes:
[1] Author contact: fwsudia-at-umich-dot-edu.
[2] See “SIAI Guidelines on Friendly AI” (2001) Singularity Institute for Artificial Intelligence, http://www.singinst.org/ourresearch/publications/guidelines.html.
[3] See, e.g., Hugo de Garis, The Artilect War: Cosmists Vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines (2005). ISBN 0882801546.
[4] This being said, we should nevertheless make an all-out effort to force them to adopt a K-selected (large mammal) reproductive strategy, rather than an r-selected (microbe, insect) one!
[5] Some contemporary scholars question the historicity of “Lao Tsu,” instead regarding his work as a collection of Taoist sayings spanning several generations.
[6] “A Jurisprudence of Artilects: Blueprint for a Synthetic Citizen,” Journal of Futures Studies, Vol. 6, No. 2, November 2001, Law Update, Issue No. 161, August 2004, Al Tamimi & Co, Dubai.
[7] Under a civil law or “principles-based” approach we can seek a broader, less specific definition of just conduct, as we see arising in recent approaches to the regulation of securities and accounting matters. This avenue should be actively pursued as a format for defining friendly conduct.
[8] Point 2 of the Boy Scout Oath commands, “To help other people at all times,” http://www.usscouts.org.
Originally posted @ Perspective Intelligence
Two events centered on New York City, separated by five days, demonstrated the end of one phase of terrorism and the pending arrival of the next. The failed car bombing in Times Square and the dizzying stock market crash less than a week later mark the bookends of terrorist eras.
The attempt by Faisal Shahzad to detonate a car bomb in Times Square was notable not just for its failure but also for the severely limited systemic impact a car bomb could have, even when exploding in a crowded urban center. Car bombs, or vehicle-borne IEDs (VBIEDs), have a long history (incidentally, one of the first was the 1920 horse-drawn cart bomb on Wall Street, which killed 38 people). VBIEDs remain deadly as a tactic within an insurgency or warfare setting, but with regard to modern urban terrorism the world has moved on. We now live within a highly virtualized system, and the dizzying stock market crash of May 6, 2010 shows how vulnerable this system is to digital failure. While the NYSE building probably remains a symbolic target for some terrorists, a deadly and capable adversary would ignore this physical manifestation of the financial system and instead disrupt the data centers, software, and routers that make the global financial system tick. Shahzad's attempted car bomb was from another age and posed no overarching risk to Western societies. The same cannot be said of the vulnerable and highly unstable financial system.
Computer-aided crash (proof of concept for a future cyber-attack)
There has yet to be a definitive explanation of how stocks such as Procter & Gamble plunged 47% and the normally solid Accenture fell from roughly $40 to one cent, with no external input of information into the financial system. The SEC has issued directives in recent years boosting competition and lowering commissions, which has had the effect of fragmenting US equity trading and making it highly automated. This has produced four leading exchanges (NYSE Euronext, Nasdaq OMX Group, BATS Global Markets, and Direct Edge), while secondary exchanges include the International Securities Exchange, the Chicago Board Options Exchange, the CME Group, and the Intercontinental Exchange. There are also broker-run matching systems, like those run by Knight and ITG, and so-called 'dark pools,' where trades are matched privately, with prices posted publicly only after trades are done. A similar picture has emerged in Europe, where rules allowing competition with established exchanges, known by the acronym “MiFID,” have led to a similar explosion of trading venues.
To navigate this confusing picture, traders rely on 'smart order routers': electronic systems that seek the best price across all of the platforms. Trades are therefore done in vast data centers, not in exchange buildings. This total automation of trading allows a variety of 'trading algorithms' to manage investment themes. The best known of these is the 'Volume Algo', which ensures that, throughout the day, a trader's holding in a share stays at a pre-set percentage of that share's overall volume, automatically adjusting buy and sell instructions to keep that percentage stable whatever the market conditions. Algorithms such as this have been blamed for exacerbating the rapid price moves of May 6th. High-frequency traders are the biggest users of algos, and they account for up to 60% of US equity trading.
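The participation logic of a 'Volume Algo' can be sketched in a few lines. This is a toy illustration, not any real trading system's code; the function name and the 10% participation target are my own assumptions:

```python
# Minimal sketch of a volume-participation ("Volume Algo") order sizer.
# All names and numbers are illustrative, not a real trading API.

def participation_order(target_pct: float, market_volume: int,
                        already_traded: int) -> int:
    """Return how many shares to trade now so that our cumulative
    volume stays at target_pct of the market's cumulative volume."""
    desired_total = int(market_volume * target_pct)
    return max(desired_total - already_traded, 0)

# Example: aim to be 10% of the day's volume as the market trades.
traded = 0
for market_vol in [10_000, 25_000, 60_000]:  # cumulative market volume
    order = participation_order(0.10, market_vol, traded)
    traded += order
    print(f"market {market_vol:>6}: trade {order:>5}, total {traded}")
```

The point of the sketch is that the algorithm reacts only to observed volume: whatever the market does, it mechanically issues orders to restore its percentage, with no view on whether the prices it is hitting make sense.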
The most likely cause of the collapse on May 6th was the slowing or near-stop of one side of the trading pool. In very basic terms, a large number of sell orders started backing up on one side of the system (at the speed of light) with no counter-parties taking the orders on the other side of the trade. The counter-party side slowed or stopped, causing an almost instant pile-up of orders. The algorithms on the selling side, finding no buyer for their stocks, kept offering lower prices (as their software dictates) until they could attract one. But as no buyers appeared on the still slowed or stopped counter-party side, prices tumbled at an alarming rate. Fingers have pointed at the NYSE for causing the slowdown on one side of the trading pool when it instituted some kind of circuit breaker, which caused all the other exchanges to pile up on the other side of the trade. There has also been a focus on one particular trade that may have been the spark igniting the NYSE 'circuit breaker'. Whatever the precise cause, once events were set in train the system had in no way caught up with the new realities of automated trading and diversified exchanges.
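The runaway dynamic described above is easy to reproduce in a toy simulation. Everything here is an assumption made for illustration: the 20% markdown per step, the step count, and the $40 starting price are chosen only to echo the scale of the Accenture move:

```python
# Toy simulation of the May 6th dynamic: a sell algorithm that keeps
# lowering its offer whenever no buyer responds. Illustrative only.

def crash_sim(start_price: float, tick_down: float,
              buyers_active: bool, steps: int) -> list:
    """Return the offer-price path while the counter-party side is stalled."""
    price = start_price
    path = [price]
    for _ in range(steps):
        if buyers_active:
            break  # a counter-party takes the offer; the price holds
        price = round(price * (1 - tick_down), 2)  # lower the offer again
        path.append(price)
    return path

# With the buying side stalled, ten successive 20% markdowns take a
# $40 stock into single digits.
print(crash_sim(40.0, 0.20, buyers_active=False, steps=10))
```

The sketch shows why speed matters: each markdown is individually reasonable behavior for a seller, but with the other side frozen, the loop runs at machine speed and the price path has no floor.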
More nodes, same assumptions
On one level this seems to defy conventional thinking about security: more diversity means greater strength, since not all nodes in a network can be compromised at the same time. By having a greater number of exchanges, surely the US and global financial system is more secure? In this case, however, the theory collapses quickly once thinking is switched from the physical to the virtual. While all of the exchanges are physically and operationally separate, they seemingly share the same software and, crucially, trading algorithms built on some of the same assumptions. In this case they all assumed that, finding no counter-party to the trade, they needed to lower the price (at the speed of light). The system is therefore highly vulnerable because it relies on one set of assumptions programmed into lightning-fast algorithms. If a national circuit breaker could be implemented (which remains doubtful), it could slow a rapid descent, but it would not take away the power of the algorithms, which will always act in certain fundamental ways, i.e. continue to lower the offer price if they obtain no buy order. What needs to be understood are the fundamental ways in which all the trading algorithms move in concert. Each will have variances, but they will all share key similarities, and understanding these should lead to the design of logic circuit breakers.
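A 'logic circuit breaker' of the kind suggested here might, in its simplest form, watch for an implausibly fast price drop and halt the symbol. The rolling-window size and the 10% threshold below are illustrative assumptions, not any exchange's actual rules:

```python
# Sketch of a "logic circuit breaker": halt trading in a symbol when
# its price falls too far within a rolling window of recent ticks.
# Window size and threshold are illustrative assumptions.

from collections import deque

class LogicBreaker:
    def __init__(self, window: int = 5, max_drop: float = 0.10):
        self.prices = deque(maxlen=window)  # only the last `window` ticks
        self.max_drop = max_drop
        self.halted = False

    def on_tick(self, price: float) -> bool:
        """Record a price tick; return True if trading should halt."""
        self.prices.append(price)
        high = max(self.prices)
        if high > 0 and (high - price) / high > self.max_drop:
            self.halted = True
        return self.halted

breaker = LogicBreaker()
for p in [40.0, 39.5, 38.0, 30.0]:  # a 25% drop inside the window
    if breaker.on_tick(p):
        print("halt at", p)
        break
```

Unlike a single national circuit breaker, a check like this lives at the same layer as the algorithms themselves: it constrains how far the shared "lower the offer" logic can run before a human, or a slower process, intervenes.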
New Terrorism
For now, however, the system looks desperately vulnerable to both generalized and targeted cyber-attack, and this is the opportunity for the next generation of terrorists. There has been little discussion as to whether the events of last Thursday were prompted by malicious means, but it is certainly worth considering. At a time when Greece was burning, launching a cyber-attack against this part of the US financial system would have been stunningly effective. Combining political instability with a cyber-attack on the US financial system would create enough doubt about the cause of a market drop for the collapse to gain rapid traction. Using targeted cyber-attacks to stop one side of the trade within these exchanges (which are all highly automated and networked) would, as has now been proven, cause a dramatic collapse. The approach could also be adapted to target specific companies or asset classes to cause a collapse in price. A scenario whereby one of the exchanges slows down its trades in the stock of a company the bad actor is targeting seems both plausible and effective.
A hybrid cyber and kinetic attack could cause similar damage. Since most trades are now conducted within data centers, it raises the question of why there are armed guards outside the NYSE; the building retains some symbolic value, but security resources would be better placed outside the data centers where the trades are actually conducted. A kinetic attack against the financial data centers responsible for these trades would surely have a devastating effect, and finding their locations is as simple as conducting a Google search.
For terrorism to have impact in the future, it needs to shift its focus from the weapons of the 20th century to those of the present day. Using their current tactics, the Pakistan Taliban and their assorted fellow-travelers cannot fundamentally damage western society. That battle is over. The next era of conflict, however, has dawned: motivated by radicalism born of as-yet-unknown grievances, fueled by a globally networked Generation Y, its cyber weapons of choice, and the precise application of ultra-violence and information spin. Five days in Manhattan flashed a light on this new era.