Asia Institute Report

Proposal for a Constitution of Information
March 3, 2013
Emanuel Pastreich

Introduction

When David Petraeus resigned as CIA director after an extramarital affair with his biographer Paula Broadwell was exposed, the problem of information security gained national attention. The public release of personal e-mails in order to impugn someone at the very heart of the American intelligence community raised awareness of e-mail privacy issues and generated a welcome debate on the need for greater safeguards. The problem of e-mail security, however, is only the tip of the iceberg of a far more serious problem involving information with which we have not yet started to grapple. We will face devastating existential questions in the years ahead as human civilization enters a potentially catastrophic transformation—one driven not by the foibles of man, but rather by the exponential increase in our capability to gather, store, share, alter and fabricate information of every form, coupled with a sharp drop in the cost of doing so. Such basic issues as how we determine what is true and what is real, who controls institutions and organizations, and what has significance for us in an intellectual and spiritual sense will become increasingly problematic. The emerging challenge cannot be solved simply by updating the 1986 “Electronic Communications Privacy Act” to meet the demands of the present day;[1] it will require a rethinking of our society and culture, and new, unprecedented institutions to respond to the challenge. International Data Corporation estimated the total amount of digital information in the world at 2.7 zettabytes (2.7 × 10^21 bytes) in 2012, a 48 percent increase from 2011—and we are just getting started.[2]


The explosion in the amount of information circulating in the world, and the increase in the ease with which that information can be obtained or altered, will change every aspect of our lives, from education and governance to friendship and kinship, to the very nature of human experience. We need a comprehensive response to the information revolution that not only proposes innovative ways to employ new technologies in a positive manner but also addresses the serious, unprecedented challenges that they present for us.

The ease with which information of every form can now be reproduced and altered poses an epistemological, an ontological, and a governmental challenge for us. Let us concentrate on the issue of governance here. The manipulability of information is increasing in all aspects of life, but the constitution on which we base our laws and our government has little to say about information, and nothing to say about the transformative wave sweeping through our society today as a result. Moreover, we have trouble grasping the seriousness of the information crisis because it alters the very lens through which we perceive the world. If we rely on the Internet to tell us how the world changes, for example, we are blind as to how the Internet itself is evolving and how that evolution impacts human relations. For that matter, because our very thought patterns are molded over time by the manner in which we receive information, we may come to see information presented in an online format as more reliable than our direct perceptions of the physical world. The information revolution has the potential to dramatically change human awareness of the world and inhibit our ability to make decisions if we are surrounded with convincing data whose reliability we cannot confirm. These challenges call out for a direct and systematic response.

There are a range of piecemeal solutions to the crisis being undertaken around the world. The changes in our world, however, are so fundamental that they call out for a systematic response. We need to hold an international constitutional convention through which we can draft a legally binding global “constitution of information” that will address the fundamental problems created by the information revolution and set down clear guidelines for how we can control the terrible cultural and institutional fluidity created by this information revolution. The process of identifying the problems born of the massive shift in the nature of information, and suggesting workable solutions, will be complex, but the issue calls out for an entirely new universe of administration and jurisprudence regarding the control, use, and abuse of information. As James Baldwin once wrote, “Not everything that is faced can be changed. But nothing can be changed until it is faced.”

The changes are so extensive that they cannot be dealt with through mere extensions of the United States Constitution or the existing legal code, nor can the task be left to intelligence agencies, communications companies, congressional committees or international organizations that were not designed to handle the convergence of issues related to increased computational power, but end up formulating information policy by default. We must bravely set out to build a consensus in the United States, and around the world, about the basic definition of information, how information should be controlled and maintained, and what the long-term implications of the shifting nature of information will be for humanity. We should then launch a constitutional convention and draft a document that sets forth a new set of laws and responsible agencies for assessing the accuracy of information and addressing its misuse.

Those who may object to such a constitution of information as a dangerous form of centralized authority that is likely to encourage further abuse are not fully aware of the difficulty of the problems we face. The abuse of information has already reached epic proportions and we are just at the beginning of an exponential increase. There should be no misunderstanding: We are not suggesting a totalitarian “Ministry of Truth” that undermines a world of free exchange between individuals. Rather, we are proposing a system that will bring accountability, institutional order, and transparency to the institutions and companies that already engage in the control, collection, and alteration of information. Failure to establish a constitution of information will not assure preservation of an Arcadian utopia, but rather encourage the emergence of even greater fields of information collection and manipulation entirely beyond the purview of any institution. The result will be increasing manipulation of human society by dark and invisible forces for which no set of regulations has been established—that is already largely the case. The constitution of information, in whatever form it may take, is the only way to start addressing the hidden forces in our society that tug at our institutional chains.

Drafting a constitution is not merely a matter of putting pen to paper. The process requires the animation of that document in the form of living institutions with budgets and mandates. It is not my intention to spell out the full parameters of such a constitution of information and the institutions that it would support because a constitution of information can only be successful if it engages living institutions and corporations in a complex and painful process of deal making and compromises that, like the American Constitutional Convention of 1787, is guided at a higher level by certain idealistic principles. The ultimate form of such a constitution cannot be predicted in advance, and to present a version in advance here would be counterproductive. We can, however, identify some of the key challenges and the issues that would be involved in drafting such a constitution of information.

The Threats posed by the Information Revolution

The ineluctable increase of computational power in recent years has simplified the transmission, modification, creation, and destruction of massive amounts of information, rendering all information fluid, mutable, and potentially unreliable. The rate at which information can be effectively manipulated is amplified by the exponential rise in computing capacity. Following Moore’s Law, which suggests that the number of transistors that can be placed on a chip will double every 18 months, the capacity of computers continues to increase dramatically, whereas human institutions change only very slowly.[3] That gap between technological change and the evolution of human civilization has reached an extreme, all the more dangerous because so many people have trouble grasping the nature of the challenge and blame the abuse of information they observe on the dishonesty of individuals, or groups, rather than on the technological change itself.
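
To see the scale of that gap, here is the arithmetic the paragraph relies on, sketched in Python: an 18-month doubling period compounds to roughly a hundredfold increase per decade, while institutions change on generational timescales.

```python
# Back-of-the-envelope compounding implied by an 18-month doubling
# period (a sketch of the arithmetic, not a forecast).
for years in (5, 10, 20):
    factor = 2 ** (years * 12 / 18)
    print(f"{years} years -> roughly {factor:,.0f}x the capacity")
# 5 years -> roughly 10x
# 10 years -> roughly 102x
# 20 years -> roughly 10,321x
```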

The cost for surveillance of electronic communications, for keeping track of the whereabouts of people and for documenting every aspect of human and non-human interaction, is dropping so rapidly that what was the exclusive domain of supercomputers at the National Security Agency a decade ago is now entirely possible for developing countries, and will soon be in the hands of individuals. In ten years, when vastly increased computational power will mean that a modified laptop computer can track billions of people with considerable resolution, and that capability is combined with autonomous drones, we will need a new legal framework to respond in a systematic manner to the use and abuse of information at all levels of our society. If we start to plan the institutions that we will need, we can avoid the greatest threat: the invisible manipulation of information without accountability.

Surveillance and gathering of massive amounts of information

As the cost of collecting information falls, it is becoming easier to collect and sort massive amounts of data about individuals and groups and to extract from that information relevant detail about their lives and activities. Seemingly insignificant data taken from garbage, emails, and photographs can now be easily combined and systematically analyzed to give essentially as much information about individuals as a government might obtain from wiretapping—although emerging technology makes the process easier to implement and harder to detect. Increasingly small devices can take photographs of people and places over time with great ease, and that data can be combined and sorted so as to obtain extremely accurate descriptions of the daily lives of individuals, who they are, and what they do. Such information can be combined with other information to provide complete profiles of people that go beyond what the individuals know about themselves. As cameras are combined with mini-drones in the years to come, the range of possible surveillance will increase dramatically. Global regulations will be an absolute must for the simple reason that it will be impossible to stop this means of gathering big data.
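
As a toy illustration of the fusion this paragraph describes (all records invented for the example), two individually innocuous data streams, merged on their timestamps, reconstruct a person's daily routine:

```python
# Toy data-fusion sketch: invented wifi sightings and shop receipts,
# each unrevealing alone, merge into a timeline of one person's day.
sightings = [("08:10", "seen near cafe wifi"),
             ("09:02", "seen near office wifi"),
             ("18:45", "seen near gym wifi")]
receipts = [("08:12", "bought espresso"), ("19:30", "bought protein bar")]

for time, event in sorted(sightings + receipts):
    print(time, event)
# The merged timeline exposes a home-to-work routine, habits, and gaps
# that none of the individual records reveals by itself.
```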

Fabrication of information

In the not-too-distant future, it will be possible to fabricate cheaply not only texts and data, but all forms of photographs, recordings, and videos with such a level of verisimilitude that fictional artifacts indistinguishable from their historically accurate counterparts will compete for our attention. Currently, existing processing power can be combined with intermediate user-level computer skills to effectively alter information, whether still-frame images using programs like Photoshop or videos using Final Cut Pro. Digital information platforms for photographs and videos are extremely susceptible to alteration, and the problem will get far worse. It will be possible for individuals to create convincing documentation, photo or video, in which any event involving any individual is vividly portrayed in an authentic manner. It will be increasingly easy for any number of factions and interest groups to fabricate materials that document their perspectives, creating political and systemic chaos. Rules stipulating what is true, and what is not, will no longer be optional when we reach that point. Of course, authorizing an organization to make a call as to what information is true brings with it an incredible risk of abuse. Nevertheless, although there will be great risk in enabling a group to make binding determinations concerning authenticity (and there will clearly be a political element to truth as long as humans rule society), the danger posed by inaction is far worse.

When fabricated images and movies can no longer be distinguished from reality by the observer, and computers can easily continue to create new content, it will be possible to continue these fabrications over time, thereby creating convincing alternative realities with considerable mimetic depth. At that point, the ability to create convincing images and videos will merge with next-generation virtual reality technologies to further confuse the issue of what is real. We will see the emergence of virtual worlds that appear at least as real as the one that we inhabit. If some event becomes a consistent reality in those virtual worlds, it may be difficult, if not impossible, for people to comprehend that the event never actually “happened,” thereby opening the door for massive manipulation of politics and ultimately of history.

Once we have complex virtual realities that present a physical landscape with almost as much depth as the real world, in which the characters have elaborate histories and memories of events spanning decades and form populations of millions of anatomically distinct virtual people with distinct individualities, the potential for confusion will be tremendous. It will no longer be clear which reality has authority, and many political and legal disputes will be irresolvable.

But that is only half of the problem. Those virtual worlds are already extending into social networks. An increasing number of people on Facebook are not actual people at all, but characters, avatars, created by third parties. As computers grow more powerful, it will be possible to create thousands, then hundreds of thousands, of individuals on social networks who have complex personal histories and personalities. These virtual people will be able to engage human partners in compelling conversations that pass the Turing Test. And, because those virtual people can write messages and Skype 24 hours a day, and customize their messages to what the individual finds interesting, they can be more attractive than human “friends” and have the potential to seriously distort our very concept of society and of reality. There will be a concrete and practical need for a set of codes and laws to regulate such an environment.

The Problem of Perception

Over time, virtual reality may end up seeming much more real and convincing to people who are accustomed to it than actual reality. That issue is particularly relevant when it comes to the next generation, who will be exposed to virtual reality from infancy. Yet virtual reality is fundamentally different from the real world. For example, virtual reality is not subject to the same laws of causality. The relations between events can be altered with ease in virtual reality and epistemological assumptions from the concrete world do not hold. Virtual reality can muddle such basic concepts as responsibility and guilt, or the relationship of self and society. It will be possible in the not-too-distant future to convince people of something using faulty or irrational logic whose only basis is in virtual reality. This fact has profound implications for every aspect of law and institutional functionality.

And if falsehoods are perpetuated in virtual reality—which seems to represent reality accurately—over time in a systematic way, interpretations of even common-sense assumptions about life and society will diverge, bringing everything into question. As virtual reality expands its influence, we will have to make sure that certain principles are upheld even in virtual space so as to assure that it does not create chaos in our very conception of the public sphere. That process, I hold, cannot be governed by the legal system that we have at present. New institutions will have to be developed.

The dangers of the production of increasingly unverifiable information are perhaps a greater threat than even terrorism. While the idea of individuals setting off “dirty bombs” is certainly frightening, imagine a world in which the polity simply can never be sure whether anything they see, read, or hear is true. This threat is at least as significant as surveillance operations, but has received far less attention. The time has come for us to formulate the institutional foundation that will define and maintain firm parameters for the use, alteration and retention of information on a global scale.

Money

We live in a money economy, but the information revolution is altering the nature of money itself right before our eyes. Money has gone from an analog system, in which it was once restricted to the amount of gold an individual possessed, to a digital system in which the only limitations on the amount of money represented in computers are the tolerance for risk on the part of the players involved and the ability of national and international institutions to monitor transactions. In any case, the mechanisms are now in place to alter the amount of currency, or for that matter of many other items such as commodities or stocks, without any effective global oversight. The value of money and the quantity in circulation can be altered with increasing ease, and current safeguards are clearly insufficient. The problem will grow worse as computational power, and the number of players who can engage in complex manipulations of money, increase.

Drones and Robots

Then there is the explosion of the field of drones and robots, devices of increasingly small size that can conduct detailed surveillance and that are increasingly capable of military action and other forms of interference in human society. Whereas the United States had no armed drones and no robots when it entered Afghanistan, it now has more than 8,000 drones in the air and more than 12,000 robots on the ground.[4] The number of drones and robots will continue to increase rapidly, and they are increasingly being used in the United States and around the world without regard for borders.

As the technology becomes cheaper, we will see an increasing number of tiny drones and robots that can operate outside of any legal framework. They will be used to collect information, but they can also be hacked and serve as portals for the distortion and manipulation of information at every level. Moreover, drones and robots have the potential to carry out acts of destruction and other criminal activities whose source can be hidden because of ambiguities as to control and agency. For this reason, the rapidly emerging world of drones and robots deserves to be treated at great length within the constitution of information.

Drafting the Constitution of Information

The constitution of information could become an internationally recognized, legally binding, document that lays down rules for maintaining the accuracy of information and protecting it from abuse. It could also set down the parameters for institutions charged with maintaining long-term records of accurate information against which other data can be checked, thereby serving as the equivalent of an atomic clock for exact reference in an age of considerable confusion. The ability to certify the integrity of information is an issue an order of magnitude more serious than the intellectual property issues on which most international lawyers focus today, and deserves to be identified as an entire field in itself—with a constitution of its own that serves as the basis for all future debate and argument.

This challenge of drafting a constitution of information requires a new approach and a bottom-up design in order to be capable of sufficiently addressing the gamut of complex, interconnected issues found in transnational spaces like the one in which digital information exists. The existing governance systems for information are simply not sufficient, and overhauling them to meet the necessary standards would be much more work, and much less effective, than designing and implementing an entirely new, functional system, which the constitution of information represents. Moreover, the rate of technological change will require a system that can be updated and kept relevant while at the same time being safeguarded against capture by vested interests or drift into irrelevance.

A possible model for the constitution of information can be found in the “Freedom of Information” section of the new Icelandic constitution drafted in 2011. The Constitutional Council engaged in a broad debate with citizens and organizations throughout the country about the content of the new constitution. The constitution described in detail mechanisms required for government transparency and public accessibility that are far more aligned with the demands of today than other similar documents.[5]

It would be meaningless, however, to merely put forth a model international “constitution of information” without the process of drafting it because without the buy-in of institutions and individuals in its formulation, the constitution would not have the authority necessary to function. The process of debating and compromising that determines the contours of that constitution would endow it with social and political significance, and, like the constitution of 1787, it would become the core for governance. For that matter, the degree to which the content of the constitution of information would be legally enforceable would have to be part of the discussion held at the convention.

Process for the Constitutional Convention

To respond to this global challenge, we should call a constitutional convention in which we will put forth a series of basic principles and enforceable regulations that are agreed upon by major institutions responsible for policy—including national governments, supra-national organizations, multi-national corporations, research institutions, intelligence agencies, NGOs, and a variety of representatives from other organizations. Deciding whom to invite, and how, will be difficult, but it should not be a stumbling block. The United States Constitution has proven quite effective over the last few centuries even though it was drafted by a group that was not representative of the population of North America at the time. Although democratic process is essential to good government, there are moments in history in which we confront deeper ontological and epistemological questions that cannot be addressed by elections or referendums and require a select group of individuals like Benjamin Franklin, Thomas Jefferson and Alexander Hamilton. At the same time, the constitutional convention cannot be merely a gathering of wise men, but will have to involve those directly engaged in the information economy and information policy.

That process of drafting a constitution will involve the definition of key concepts, the establishment of the legal and social limits of the constitution’s authority, the formulation of a system for evaluating the use and misuse of information and suggestions as to policies for responding to abuses of information on a global scale. The text of this constitution of information should be carefully drafted with a literary sense of language so that it will outlive the specifics of the moment and with a clear historic vision and unmistakable idealism that will inspire future generations as the United States Constitution inspires us. This constitution cannot be a flat and bureaucratic rehashing of existing policies on privacy and security.

We must be aware of the dangers involved in trying to determine what is and is not reliable information as we draft the constitution of information. It is essential to set up a workable system for assuring the integrity of information, but multiple safeguards, checks, and balances will be necessary. There should be no assumptions as to what the constitution of information would ultimately be, but only the requirement that it should be binding and that the process of drafting it should be cautious but honest.

One essential assumption should be, following David Brin’s argument in his book The Transparent Society,[6] that privacy will be extremely difficult, if not impossible, to protect in the current environment. We must accept, paradoxically, that much information must be made “public” in some sense in order to preserve its integrity and its privacy. That is to say, rigorously protecting privacy is not sufficient, given the overwhelming changes that will take place in the years to come.

Brin draws heavily on Steve Mann’s concept of sousveillance, a process through which ordinary people could observe the actions of the rich and powerful so as to counter the power of the state or the corporation to observe the individual. The basic assumption behind sousveillance is that there is no means of arresting the development of technologies for surveillance, and that those with wealth and power will be able to deploy such technologies more effectively than ordinary citizens. Therefore the only possible response to increased surveillance is to create a system of mutual monitoring to assure symmetry, if not privacy. Although the constitution of information does not presuppose a system that allows the ordinary citizen to monitor the actions of those in power, the creation of information systems that monitor all information in a 360-degree manner should be seriously considered as part of such a constitution. One central motive for a constitution of information is to undo the destructive process of designating information as classified and blocking off reciprocity and accountability on a massive scale. We must assure that multiple parties are involved in the process of controlling information so as to assure its accuracy and limit its abuse.

In order to achieve the goal of assuring accuracy, transparency and accountability on a global scale, but avoid massive institutional abuse of the power over information that is granted, we must create a system for monitoring information with a balance of powers at the center. Brin suggests a rather primitive system in which the ruled balance out the power of rulers through an equivalent system for observing and monitoring that works from below. I am skeptical that such a system will work unless we create large and powerful institutions within government (or the private sector) itself that have a functional need to check the power of other institutions.

Perhaps it is possible to establish a complex balance of powers wherein information is monitored and its abuses can be controlled, or punished, according to a meticulous, painfully negotiated agreement between stakeholders. It could be that ultimately information would be governed by three branches of government, something like the legislative, executive, and judicial systems that have served well for many constitution-based governments. The branches assigned different tasks and authority within this system for monitoring information must have built into their organizations set conflicts of interest and approach, in accord with the theory of the balance of power, to assure that each limits the power of the others.

The need to assure accuracy may ultimately be a more essential task than the need to protect privacy. The general acceptance of inaccurate descriptions of states of affairs, or of individuals, is profoundly damaging and cannot be easily rectified. For this reason, I suggest that, as part of the three-branch system, a “three keys” approach to the management of information be adopted. That is to say, sensitive information will be accessible—otherwise we cannot assure that information will be accurate—but that information can only be accessed when three keys representing the three branches of government are presented. That process would assure that accountability can be maintained, because three institutions whose interests are not necessarily aligned must be present to access that information.
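
One way to read this “three keys” proposal in concrete terms is as a 3-of-3 secret-sharing scheme; the minimal sketch below (the branch names and the XOR construction are illustrative assumptions, not part of the proposal) splits a decryption key so that a record can be opened only when all three branches cooperate:

```python
# Minimal 3-of-3 secret-splitting sketch: XOR shares of a master key.
import os

def split_key(key: bytes):
    """Split key into three shares; all three are needed to rebuild it."""
    s1, s2 = os.urandom(len(key)), os.urandom(len(key))
    s3 = bytes(k ^ a ^ b for k, a, b in zip(key, s1, s2))
    return s1, s2, s3

def recover_key(s1, s2, s3):
    return bytes(a ^ b ^ c for a, b, c in zip(s1, s2, s3))

master = os.urandom(32)                      # key that unlocks a record
legislative, executive, judicial = split_key(master)
assert recover_key(legislative, executive, judicial) == master
# Any one or two shares are statistically independent of the key, so
# no single branch, or pair of branches, can read the record alone.
```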

Systems for the gathering, analysis and control of information on a massive scale have already reached a high level of sophistication. What is sadly lacking is a larger vision of how information should be treated for the sake of our society. Most responses to the information revolution have been extremely myopic, dwelling on the abuse of information by corporations or intelligence agencies without considering the structural and technological background of those abuses. To merely attribute the misuse of information to a lack of human virtue is to miss the profound shifts sweeping through our society today.

The constitution of information will be fundamentally different from most constitutions in that it must combine rigidity, in holding all parties to the same standards, with considerable flexibility, so that it can readily adapt to new situations resulting from rapid technological change. The rate at which information can be stored and manipulated will continue to increase, and new horizons and issues will emerge, perhaps more quickly than expected. For this reason, the constitution of information cannot be overly static and must derive much of its power from its vision.

Structure of an Information Accuracy System

We can imagine a legislative body representing all the elements of the information community engaged in regulating the traffic and quality of information, as well as individuals and NGOs. It would be a mistake to assume that the organizations represented in that “legislature” would necessarily be nation states according to the United Nations formulation of global governance. The limits of the nation-state concept with regard to information policy are increasingly obvious, and this constitutional convention could serve as an opportunity to address the massive institutional changes that have taken place over the past fifty years. It would be more meaningful, in my opinion, to make the members companies, organizations, networks, local governments, and the broad range of organizations that make the actual decisions concerning the creation, distribution and reception of information. That part of the information security system would only be “legislative” in a conceptual sense. It would not necessarily have meetings or be composed of elected or appointed representatives. In fact, considering that the physical meetings of government legislatures around the world have become mere rituals, we can sense that the whole concept of the legislative process requires much modification.

The executive branch of the new information accuracy system would be charged with administering the policies set by the legislative branch. It would implement rules concerning information to preserve its integrity and prevent its misuse. The details of how information policy is carried out would be determined at the constitutional convention.

The executive would be checked not only by the legislative branch but also by a judicial branch. The judicial branch would be responsible for formulating interpretations of the constitution with regard to an ever-changing environment for information, and also for assessing the appropriateness of actions taken by the executive and legislative branches.

The terms “executive,” “legislative” and “judicial” are meant more as placeholders in this initial discussion than as concrete descriptions of the institutions to be established. The functioning of these units would be profoundly different from that of such branches in present local and national governments, or even in international organizations like the United Nations. If anything, the constitution of information, in that information and its changing nature underlie all other institutions, will be a step toward a new approach to governance in general.

Conclusion

It would be irresponsible and rash to draft an “off the shelf” constitution of information that could be readily applied around the world to respond to the complex situation of information today. Although I accept that initial proposals for a constitution of information like this one may be dismissed as irrelevant and wrong-headed, I assert that as we enter an unprecedented age of information revolution, in which most of the assumptions that undergirded our previous governance systems based on physical geography and discrete domestic economies will be overturned, there will be a critical demand for new systems to address this crisis. This initial foray can help formulate in advance the problems to be addressed and the format in which to do so.

In order to effectively govern a new space that exists outside of our current governance systems (or in the interstices between systems), we must make new rules that can effectively govern that space and work to defend transparency and accuracy in the perfect storm born of the circulation and alteration of information. If information exists in a transnational or global space and affects people at that scale, then the governing institutions responsible for its regulation need to be transnational or global in scale. If unprecedented changes are required, then so be it. If all records for hundreds of years exist online, then it will be entirely possible, as suggested in Margaret Atwood’s The Handmaid’s Tale, to alter all information in a single moment if there is no constitution of information. But the solution must involve designing the institutions that will be used to govern information, thus bringing an inspiring vision to what we are doing. We must give a philosophical foundation to the regulation of information and open up new horizons for human society while appealing to our better angels.

Oddly, many assume that the world of policy must consist of drafting turgid and mind-numbing documents in the specialized terminology of economists. But history also has moments, such as the drafting of the United States Constitution, during which a small group of visionary individuals manages to work with government institutions to create an inspiring new vision of what is possible, recorded in terse and inspiring language. That is what we need today with regard to information. To propose such an approach is not a misguided modern version of Neo-Platonism, but a chance to seize the initiative with regard to ineluctable change and put forth a vision, rather than merely responding to change.

[1] As suggested in Tony Romm, “David Petraeus affair scandal highlights email privacy issues,” Politico (http://www.politico.com/news/stories/1112/83984.html#ixzz2CUML3RDy).
[2] http://www.idc.com/getdoc.jsp?containerId=prUS23177411#.UTL3bDD-H54
[3] Human genetic evolution is even slower.
[4] Peter Singer, “The Robotics Revolution,” Canadian International Council, December 11, 2012.
[5] http://fairerglobalization.blogspot.kr/2011/06/iceland-write…n-age.html
[6] Brin, David. The Transparent Society: Will Technology Force Us to Choose between Privacy and Freedom? New York: Basic Books, 1998.

I have seen the future of Bitcoin, and it is bleak.

The Promise of Bitcoin

If you were to peek into my bedroom at night (please don’t), there’s a good chance you would see my wife sleeping soundly while I stare at the ceiling, running thought experiments about where Bitcoin is going. Like many other people, I have come to the conclusion that distributed currencies like Bitcoin are going to eventually be recognized as the most important technological innovation of the decade, if not the century. It seems clear to me that the rise of distributed currencies presents the biggest (and riskiest) investment opportunity I am likely to see in my lifetime; perhaps in a thousand lifetimes. It is critically important to understand where Bitcoin is going, and I am determined to do so.


Greetings to the Lifeboat Foundation community and blog readers! I’m Reno J. Tibke, creator of Anthrobotic.com and new advisory board member. This is my inaugural post, and I’m honored to be here and grateful for the opportunity to contribute a somewhat… different voice to technology coverage and commentary. Thanks for reading.

This Here Battle Droid’s Gone Haywire
There’s a new semi-indy sci-fi web series up: DR0NE. After one episode, it’s looking pretty clear that the series is most likely going to explore shenanigans that invariably crop up when we start using semi-autonomous drones/robots to do some serious destruction & murdering. Episode 1 is pretty and well made, and stars 237, the android pictured above looking a lot like Abe Sapien’s battle exoskeleton. Active duty drones here in realityland are not yet humanoid, but now that militaries, law enforcement, the USDA, private companies, and even citizens are seriously ramping up drone usage by land, air, and sea, the subject is timely and watching this fiction is totally recommended.

(Update: DR0NE, Episode 2 now available)

It would be nice to hope for some originality, and while DR0NE is visually and means-of-productionally and distributionally novel, it’s looking like yet another angle on a psychology & set of issues that fiction has thoroughly drilled — like, for centuries.

Higher-Def Old Hat?
Okay, so the modern versions go like this: one day an android or otherwise humanlike machine is damaged or reprogrammed or traumatized or touched by Jesus or whatever, and it miraculously “wakes up,” or its neural network remembers a previous life, or what have you. Generally the machine becomes severely bi-polar about its place in the universe; while it often struggles with the guilt of all the murderdeathkilling it did at others’ behest, it simultaneously develops some serious self-preservation instinct and has little compunction about laying waste to its pursuers, i.e., former teammates & commanders who’d done the behesting.

Admittedly, DR0NE’s episode 2 has yet to be released, but it’s not too hard to see where this is going; the trailer shows 237 delivering some vegetablizing kung-fu to its human pursuers, and dude, come on — if a human is punched in the head hard enough to throw them across a room and into a wall, or is uppercut into a spasticating backflip, they’re probably just going to embolize and die where they land. Clearly 237 already has the stereotypical post-revelatory, per-the-plot-justifiable body count.

Where have we seen this pattern before? Without Googling, from the top of one robot dork’s head, we’ve got: Archetype, Robocop, iRobot (film), Iron Giant, Short Circuit, Blade Runner, Rossum’s Universal Robots, and going way, way, way back, the golem.

Show Me More Me
Seems we really, really dig on this kind of story.

Whether via spintronics or some quantum breakthrough, artificial intelligence and the bizarre idea of intellects far greater than ours will soon have to be faced.

http://www.sciencedaily.com/releases/2012/08/120819153743.htm

It may be a point of little attention, as the millennium bug came with a lot of hoo-ha and went out with a whimper, but the impact it had on business was small because of all the hoo-ha, not in spite of it. And so it is with some concern that I consider operating system rollover dates as a potential source of software malfunction at major industrial operations such as nuclear power stations and warhead controls, which, in the worst-case scenario, could have disastrous implications due to outdated control systems.

The main dates of interest are 19 January 2038, by which time all 32-bit Unix operating systems need to have been replaced by at least their 64-bit equivalents, and 17 September 2042, when IBM mainframes that use a 64-bit count need to be phased out.

Scare mongering? Perhaps not. While all modern facilities will have the superior time representation, I question whether facilities built in the 70s and 80s, in particular those behind the old Iron Curtain, were or ever will be upgraded. This raises the concern that, for example, the old Soviet nuclear arsenal could become a major global threat within a few decades through malfunction, if not decommissioned or upgraded. It is one thing for a bank statement to print the date wrong on your latest bill due to millennium-bug-type issues, but if automated fault-tolerance procedures contain coding such as ‘if(time1 > time2+N) then initiate counter-measures’, that is quite a different matter entirely.
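
A minimal sketch of where the two dates above come from, and of how a guard of exactly that quoted form misbehaves once a 32-bit counter wraps:

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

# Signed 32-bit Unix time overflows at 2**31 - 1 seconds after 1970
# and wraps around to -2**31, i.e. a timestamp in 1901.
print(EPOCH + timedelta(seconds=2**31 - 1))  # 2038-01-19 03:14:07+00:00
print(EPOCH + timedelta(seconds=-2**31))     # 1901-12-13 20:45:52+00:00

# IBM mainframe TOD clock: a 64-bit register in which bit 51 ticks
# once per microsecond, so it wraps 2**52 microseconds after 1900.
TOD_EPOCH = datetime(1900, 1, 1, tzinfo=timezone.utc)
print(TOD_EPOCH + timedelta(microseconds=2**52))  # 2042-09-17 23:53:47+00:00

# After the 32-bit wrap, a guard like the one quoted above,
#   if (time1 > time2 + N): initiate counter-measures,
# inverts its sense: fresh timestamps compare as *earlier* than old
# ones, so the check either fires spuriously or never fires again.
```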

I believe this is a topic which warrants a higher profile lest it be forgotten. Fortunately the global community has a few decades on its hands to handle this particular issue, though all it takes is just one un-cooperative facility to take such a risk rather than perform the upgrades necessary to ensure no such ‘meltdowns’ occur. Tick-tock, tick-tock, tick-tock…

Call for Essays:

The Singularity Hypothesis
A Scientific and Philosophical Assessment

Edited volume, to appear in The Frontiers Collection, Springer

Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions ‘straight from Cloud Cuckooland’? Should the notions of superintelligent machines, brain emulations and transhumans be ridiculed, or is it that skeptics are the ones who suffer from short-sightedness and ‘carbon chauvinism’? These questions have remained open because much of what we hear about the singularity originates from popular depictions, fiction, artistic impressions, and apocalyptic propaganda.

Seeking to promote this debate, this edited, peer-reviewed volume will be concerned with scientific and philosophical analysis of the conjectures related to a technological singularity. We solicit scholarly essays that offer a scientific and philosophical analysis of this hypothesis, assess its empirical content, examine relevant evidence, or explore its implications. Commentary offering a critical assessment of selected essays may also be solicited.

Important dates:

  • Extended abstracts (500–1,000 words): 15 January 2011
  • Full essays (around 7,000 words): 30 September 2011
  • Notifications: 30 February 2012 (tentative)
  • Proofs: 30 April 2012 (tentative)

We aim to get this volume published by the end of 2012.

Purpose of this volume

Central questions

Extended abstracts are ideally short (3 pages, 500 to 1000 words), focused (!), relating directly to specific central questions and indicating how they will be treated in the full essay.

Full essays are expected to be short (15 pages, around 7,000 words) and focused, relating directly to specific central questions. Essays longer than 15 pages will be proportionally more difficult to fit into the volume; essays three times this size or more are unlikely to fit. Essays should address the scientifically literate non-specialist and be written in language free of speculative and irrational lines of argumentation. In addition, some authors may be asked to make their submission available for commentary (see below).

(More details)

Thank you for reading this call. Please forward it to individuals who may wish to contribute.

Amnon Eden, School of Computer Science and Electronic Engineering, University of Essex
Johnny Søraker, Department of Philosophy, University of Twente
Jim Moor, Department of Philosophy, Dartmouth College
Eric Steinhart, Department of Philosophy, William Paterson University

Within the next few years, robots will move from the battlefield and the factory into our streets, offices, and homes. What impact will this transformative technology have on personal privacy? I begin to answer this question in a chapter on robots and privacy in the forthcoming book, Robot Ethics: The Ethical and Social Implications of Robotics (Cambridge: MIT Press).

I argue that robots will implicate privacy in at least three ways. First, they will vastly increase our capacity for surveillance. Robots can go places humans cannot go, see things humans cannot see. Recent developments include everything from remote-controlled insects to robots that can soften their bodies to squeeze through small enclosures.

Second, robots may introduce new points of access to historically private spaces such as the home. At least one study has shown that several of today’s commercially available robots can be remotely hacked, granting the attacker access to video and audio of the home. With sufficient process, governments will also be able to access robots connected to the Internet.

There are clearly ways to mitigate these implications. Strict policies could rein in police use of robots for surveillance, for instance; consumer protection laws could require adequate security. But there is a third way robots implicate privacy, related to their social meaning, that is not as readily addressed.

Study after study has shown that we are hardwired to react to anthropomorphic technology such as robots as though a person were actually present. Reports have emerged of soldiers risking their lives on the battlefield to save a robot under enemy fire. No less than people, therefore, the presence of a robot can interrupt solitude—a key value privacy protects. Moreover, the way we interact with these machines will matter as never before. No one much cares about the uses to which we put our car or washing machine. But the record of our interactions with a social machine might contain information that would make a psychotherapist jealous.

My chapter discusses each of these dimensions—surveillance, access, and social meaning—in detail. Yet it only begins a conversation. Robots hold enormous promise and we should encourage their development and adoption. Privacy must be on our minds as we do.

Originally posted @ Perspective Intelligence

Two events centered on New York City and separated by five days demonstrated the end of one phase of terrorism and the pending arrival of the next. The failed car-bombing in Times Square and the dizzying stock market crash less than a week later mark the bookends of terrorist eras.

The attempt by Faisal Shahzad to detonate a car bomb in Times Square was notable not just for its failure but also for the severely limited systemic impact a car bomb could have, even when exploding in a crowded urban center. Car bombs, or vehicle-borne IEDs, have a long history (incidentally, one of the first was the 1920 ‘cart and horse bomb’ on Wall Street, which killed 38 people). VBIEDs remain deadly as a tactic within an insurgency or warfare setting, but with regard to modern urban terrorism the world has moved on. We are now living within a highly virtualized system, and the dizzying stock-market crash on 6 May 2010 shows how vulnerable this system is to digital failure. While the NYSE building probably remains a symbolic target for some terrorists, a deadly and capable adversary would ignore this physical manifestation of the financial system and disrupt the data centers, software and routers that make the global financial system tick. Shahzad’s attempted car bomb was from another age and posed no overarching risk to western societies. The same cannot be said of the vulnerable and highly unstable financial system.

Computer-aided crash (proof of concept for a future cyber-attack)

There has yet to be a definitive explanation of how stocks such as Procter & Gamble plunged 47% and the normally solid Accenture plunged from a value of roughly $40 to one cent, based on no external input of information into the financial system. The SEC has issued directives in recent years boosting competition and lowering commissions, which has had the effect of fragmenting equity trading around the US and making it highly automated. This has created four leading exchanges, NYSE Euronext, Nasdaq OMX Group, Bats Global Market and Direct Edge; secondary exchanges include the International Securities Exchange, the Chicago Board Options Exchange, the CME Group and the Intercontinental Exchange. There are also broker-run matching systems like those run by Knight and ITG, and so-called ‘dark pools’ where trades are matched privately, with prices posted publicly only after trades are done. A similar picture has emerged in Europe, where rules allowing competition with established exchanges, known by the acronym “Mifid”, have led to a similar explosion of trading venues and types.

To navigate this confusing picture traders have to rely on ‘smart order routers’ – electronic systems that seek the best price across all of the platforms. Therefore, trades are done in vast data centers – not in exchange buildings. This total automation of trading allows for the use of a variety of ‘trading algorithms’ to manage investment themes. The best known of these is a ‘Volume Algo’, which ensures throughout the day that a trader maintains his holding in a share at a pre-set percentage of that share’s overall volume, automatically adjusting buy and sell instructions to ensure that percentage remains stable whatever the market conditions. Algorithms such as this have been blamed for exacerbating the rapid price moves on May 6th. High-frequency traders are the biggest proponents of algos and they account for up to 60% of US equity trading.
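
The mechanics of such a participation algorithm are simple enough to sketch; the function and parameters below are illustrative assumptions, not any real trading system's logic:

```python
# Sketch of a volume-participation "Volume Algo": each tick, it sizes
# its next order so cumulative fills track a preset share of volume.
def next_order_size(target_pct, my_filled, market_volume, max_clip=500):
    """Shares to send now so fills stay near target_pct of volume."""
    shortfall = target_pct * market_volume - my_filled
    return max(0, min(int(shortfall), max_clip))

# Example: targeting 10% participation; the market has traded 50,000
# shares and we have filled 4,200, so we are 800 shares behind target
# and send the capped clip of 500.
print(next_order_size(0.10, 4_200, 50_000))  # -> 500
```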

The most likely cause of the collapse on May 6th was the slowing down or near stop of one side of the trading pool. In very basic terms, a large number of sell orders started backing up on one side of the system (at the speed of light) with no counter-parties taking the orders on the other side of the trade. The counter-party side of the trade slowed or stopped, causing an almost instant pile-up of orders. The algorithms on the selling side, finding no buyer for their stocks, kept offering lower prices (as per their software) until they attracted a buyer. However, as no buyers appeared on the still slowed or stopped counter-party side, prices tumbled at an alarming rate. Fingers have pointed at the NYSE for causing the slowdown on one side of the trading pool when it instituted some kind of circuit breaker into the system, which caused all the other exchanges to pile up on the other side of the trade. There has also been a focus on one particular trade, which may have been the spark igniting the NYSE ‘circuit breaker’. Whatever the precise cause, once events were set in train, the system had in no way caught up with the new realities of automated trading and diversified exchanges.
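
A toy model of that feedback loop (step size and round count invented for illustration) shows how fast the markdowns compound once one side stalls:

```python
# Toy model, under strong simplifying assumptions, of the pile-up
# described above: with the buy side stalled, selling algorithms keep
# stepping the offer down in search of a counter-party, so price
# decays geometrically at machine speed.
def stalled_cascade(price, step_down=0.02, rounds=40):
    for _ in range(rounds):
        price *= 1 - step_down   # no buyer found: lower the offer
    return price

# Forty 2% markdowns take a $40 stock below $18; in the real event,
# orders that walked the whole book hit "stub quotes" at $0.01.
print(round(stalled_cascade(40.0), 2))  # -> 17.83
```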

More nodes, same assumptions

On one level this seems to defy conventional thinking about security – more diversity, greater strength – since not all nodes in a network can be compromised at the same time. By having a greater number of exchanges, surely the US and global financial system is more secure? However, in this case, the theory collapses quickly if thinking is switched from examining the physical to the virtual. While all of the exchanges are physically and operationally separate, they all seemingly share the same software and, crucially, trading algorithms that share some of the same assumptions. In this case they all assumed that because they could find no counter-party to the trade, they needed to lower the price (at the speed of light). The system is therefore highly vulnerable because it relies on one set of assumptions that have been programmed into lightning-fast algorithms. If a national circuit breaker could be implemented (which remains doubtful), it could slow a rapid descent, but it would not take away the power of the algorithms – which are always going to act in certain fundamental ways, i.e., continue to lower the offer price if they obtain no buy order. What needs to be understood are the fundamental ways in which all the trading algorithms move in concert. All will have variances, but they will all share key similarities; understanding these should lead to the design of logic circuit breakers.
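
One concrete form such a logic circuit breaker could take, sketched here with invented thresholds rather than any exchange's actual rules, is a rolling-window price collar that halts quoting no matter which algorithm drives the move:

```python
# Sketch of a "logic circuit breaker": halt quoting whenever price
# falls more than a threshold within a rolling time window.
from collections import deque

class LogicBreaker:
    def __init__(self, window_sec=5.0, max_drop=0.10):
        self.window_sec, self.max_drop = window_sec, max_drop
        self.ticks = deque()               # (timestamp, price) pairs

    def on_tick(self, t, price):
        """Record a trade; return True if trading should halt."""
        self.ticks.append((t, price))
        while self.ticks and t - self.ticks[0][0] > self.window_sec:
            self.ticks.popleft()           # drop ticks outside window
        high = max(p for _, p in self.ticks)
        return (high - price) / high >= self.max_drop

breaker = LogicBreaker()
print(breaker.on_tick(0.0, 40.0))  # False: no move yet
print(breaker.on_tick(2.0, 34.0))  # True: 15% drop inside 5s, halt
```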

New Terrorism

However, for now the system looks desperately vulnerable to both generalized and targeted cyber attack, and this is the opportunity for the next generation of terrorists. There has been little discussion as to whether the events of last Thursday were prompted by malicious means, but it is certainly worth mentioning. At a time when Greece was burning, launching a cyber attack against this part of the US financial system would clearly have been stunningly effective. Combining political instability with a cyber attack against the US financial system would create enough doubt about the cause of a market drop for the collapse to gain rapid traction. Using targeted cyber attacks to stop one side of the trade within these exchanges (which are all highly automated and networked) would, as has now been proven, cause a dramatic collapse. This could also be adapted and targeted at specific companies or asset classes to cause a collapse in price. A scenario whereby one of the exchanges slows down its trades surrounding the stock of a company the bad actor is targeting seems both plausible and effective.

A hybrid cyber and kinetic attack could also cause similar damage. As most trades are now conducted within data centers, it raises the question of why there are armed guards outside the NYSE – of course it retains some symbolic value, but security resources would be better placed outside the data centers where these trades are being conducted. A kinetic attack against the financial data centers responsible for these trades would surely have a devastating effect. Finding the location of these data centers is as simple as conducting a Google search.

In order for terrorism to have impact in the future, it needs to shift its focus from the weapons of the 20th century to those of the present day. Using their current tactics, the Pakistan Taliban and their assorted fellow-travelers cannot fundamentally damage western society. That battle is over. However, the next era of conflict has dawned: motivated by radicalism from as-yet-unknown grievances, fueled by a globally networked Generation Y, and fought with cyber weapons of choice and the precise application of ultra-violence and information spin. Five days in Manhattan flashed a light on this new era.

Roderick Jones

The link is:
http://www.msnbc.msn.com/id/31511398/ns/us_news-military/

“The low-key launch of the new military unit reflects the Pentagon’s fear that the military might be seen as taking control over the nation’s computer networks.”

“Creation of the command, said Deputy Defense Secretary William Lynn at a recent meeting of cyber experts, ‘will not represent the militarization of cyberspace.’”

And where is our lifeboat?

I have translated “Lifeboat Foundation Nanoshield” (http://www.scribd.com/doc/12113758/Nano-Shield) into Russian, and I have some thoughts about it:

1) An effective means of defense against ecophagy would be to turn all the matter on Earth into nanorobots in advance, just as every human body is composed of living cells (although this does not preclude the emergence of cancer cells). The visible world would not change. All objects would consist of nano-cells, which would have sufficient immune potential to resist almost any foreseeable ecophagy (except purely informational attacks, like computer viruses). Even inside each living cell there would be a small nanobot controlling it. Maybe the world already consists of nanobots.
2) The authors of the project suggest that an ecophagic attack would consist of two phases — reproduction and destruction. However, the creators of ecophagy could use three phases: the first phase would be quiet distribution across the Earth’s surface, below the surface, in the water and in the air. In this phase the nanorobots would multiply at a slow rate and, most importantly, would seek to spread as far from each other as possible, so that their concentration everywhere on Earth would end up at roughly one unit per cubic meter (which makes them unrecognizable). Only then would they start to proliferate intensely, simultaneously creating nanorobot soldiers which do not replicate but attack the defensive system. In doing so, they would first have to suppress the protection systems, as AIDS does, or as a modern computer virus switches off antivirus software. The creators of future ecophagy must understand this. Since the second phase of rapid growth would begin everywhere on the Earth’s surface at once, it would be impossible to apply tools of destruction such as nuclear strikes or aimed rays, as this would mean the death of the planet in any case — and there simply would not be enough bombs in store.
3) The authors overestimate the reliability of protection systems. Any system has a control center, which is a blind spot. The authors implicitly assume that any person may, with some probability, suddenly become a terrorist willing to destroy the world (and although that probability is very small, the large number of people living on Earth makes it meaningful). But since such a system would be managed by people, those same people may also want to destroy the world: Nanoshield could destroy the entire world after one erroneous command. (Even if an AI manages it, we cannot say a priori that the AI cannot go mad.) The authors believe that multiple overlapping layers of protection will make Nanoshield 100% safe from hackers, but no known computer system is 100% safe: all major pieces of software have been broken by hackers, including Windows and the iPod. (The arithmetic of why layered redundancy has a floor is sketched after this list.)
4) Nanoshield could develop something like an autoimmune reaction. The authors' idea that 100% reliability can be achieved by increasing the number of control systems is superficial: the more complex a system is, the harder it is to enumerate all the variants of its behavior, and the more likely it is to fail in the spirit of chaos theory.
5) Each cubic meter of North Atlantic ocean water contains about 77 million living organisms (according to the textbook “Zoology of Invertebrates”). Hostile ecophages could easily camouflage themselves as natural organisms, and vice versa; the ability of natural organisms to reproduce, move, and emit heat would significantly hamper the detection of ecophages by creating a high rate of false alarms. Moreover, ecophages might at some stage of their development be fully biological creatures, with all the blueprints of the nanorobot recorded in DNA, and thus be almost indistinguishable from normal cells. (The false-alarm arithmetic is sketched after this list.)
6) There are significant differences between ecophages and computer viruses. The latter exist in an artificial environment that is relatively easy to control: one can turn off the power, get direct access to memory, boot from other media, and deliver an antivirus instantly to any computer. Nevertheless, a significant portion of computers has been infected at one time or another, and many users are resigned to the presence of some malware on their machines as long as it does not slow their work too much.
7) Compare: Stanislaw Lem wrote a story, “Darkness and Mold,” whose main plot concerns ecophages.
8) The problem of Nanoshield must be analyzed dynamically in time: at any given moment, the technical perfection of Nanoshield must stay ahead of the technical perfection of nanoreplicators. From this perspective the whole concept looks very vulnerable, because creating an effective global Nanoshield requires many years of technical and political development, while creating a primitive ecophage capable, nonetheless, of completely destroying the biosphere requires much less effort. Example: creating a global missile defense system (ABM, which still does not exist) is far more complex, technologically and politically, than creating intercontinental nuclear missiles.
9) We should be aware that in the future there will be no principled difference between computer viruses, biological viruses, and nanorobots: all of them are information, given the availability of “fabs” that can transfer information from one carrier to another. Living cells could construct nanorobots, and vice versa; spreading over computer networks, computer viruses could capture bioprinters or nanofabs and force them to produce dangerous organisms or nanorobots (or malware could even be integrated into existing computer programs, nanorobots, or the DNA of artificial organisms). These nanorobots could then connect to computer networks (including the network that controls Nanoshield) and send their code onward in electronic form. Besides these three forms of virus (nanotechnological, biotechnological, and computer), other forms are possible, for example cognitive: a virus transformed into a set of ideas in the human brain that pushes a person to write computer viruses and nanobots. The idea of “hacking” is already such a meme.
10) It must be noted that in the future artificial intelligence will be much more accessible, and thus viruses will be much more intelligent than today's computer viruses. The same applies to nanorobots: they will have a certain understanding of reality and the ability to rebuild themselves quickly, even to invent innovative designs and adapt to new environments. An essential question about ecophagy is whether the individual nanorobots are independent of one another, like bacterial cells, or act as a unified army with a single command and communication system. In the latter case it becomes possible to intercept command of the hostile ecophage army.
11) Everything that is suitable for combating ecophagy is also suitable as a defensive (and possibly offensive) weapon in a nanowar.
12) Nanoshield is possible only as a global organization. If any part of the Earth is left uncovered, Nanoshield will be useless, because nanorobots will multiply there in such quantities that it becomes impossible to confront them. It would also be an effective weapon against people and organizations. So it should appear only after the full and final political unification of the globe, which may result either from a world war fought to unify the planet or from the merging of humanity in the face of a terrible catastrophe, such as a flash of ecophagy. In any case, the appearance of Nanoshield would have to be preceded by some such accident, which implies a great chance of losing humanity along the way.
13) The discovery of “cold fusion” or other unconventional energy sources would make a much more rapid spread of ecophagy possible, since the replicators could live in the bowels of the earth and would not require solar energy.
14) It is wrong to consider self-replicating and non-replicating nanoweapons separately. Some kinds of ecophagy can produce nano-soldiers that attack and kill all life (such an ecophagy could become a global tool of blackmail). It has been said that a few kilograms of nano-soldiers could be enough to destroy all the people on Earth (see the arithmetic sketched after this list). Some kinds of ecophagy could, in an early phase, disperse throughout the world, multiplying and moving very slowly and quietly, then produce a contingent of nano-soldiers to attack humans and defensive systems, and only then begin to multiply intensively across the whole globe. But a person saturated with nano-medicine could resist an attack by nano-soldiers, since medical nanorobots would be able to neutralize poisons and repair torn arteries; in that case a small nanorobot would have to attack primarily informationally, rather than through a large expenditure of energy.
15) Does information transparency mean that everyone can access the code of a dangerous computer virus or the description of a nanorobot ecophage? A world where viruses and knowledge of mass destruction can be instantly disseminated through the tools of information transparency can hardly be a secure one. We need to control not only the nanorobots but, first of all, the persons or other entities capable of launching an ecophage. The smaller the number of such people (for example, nanotechnology scientists), the easier they are to control; conversely, the diffusion of this knowledge among billions of people would make the emergence of nano-hackers inevitable.
16) The claim that the creators of defenses against ecophagy will outnumber the creators of ecophages by many orders of magnitude seems doubtful if we consider the example of computer viruses. There we see the converse: virus writers outnumber the firms and projects devoted to anti-virus protection by many orders of magnitude, and moreover most anti-virus systems cannot work together, since they block one another. Terrorists could also masquerade as people opposing ecophagy and try to deploy their own system for combating it, one containing a backdoor that allows it to be suddenly reprogrammed for hostile goals.
17) The text implicitly assumes that Nanoshield will precede the invention of self-improving AI of superhuman level. However, from other forecasts we know that such an invention is very likely, and most likely to occur at about the same time as the flourishing of advanced nanotechnology, so it is not clear in what timeframe the Nanoshield project is supposed to exist. A developed artificial intelligence would be able to create a better Nanoshield and Infoshield, and also the means to overcome any shield made by humans.
18) We should be aware of the equivalence of nanorobots and nanofactories: the first can create the second, and vice versa. This erases the border between replicating and non-replicating nanomachines, because a device not originally intended to replicate itself could nonetheless construct a nanorobot, or reprogram itself into a nanorobot capable of replication.
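As promised in point 2, here is a back-of-the-envelope timeline in Python for the stealth-to-attack transition. The habitable-layer depth, the density needed for destruction, and the doubling time are my own illustrative assumptions, not figures from the Nanoshield report:

import math

surface_area_m2 = 5.1e14   # Earth's surface area
layer_depth_m = 10.0       # assumed habitable surface layer
start_density = 1.0        # one replicator per cubic meter (stealth phase)
lethal_density = 1e9       # assumed density needed for destruction
doubling_time_h = 1.0      # assumed replication doubling time

stealth_population = surface_area_m2 * layer_depth_m * start_density
doublings = math.log2(lethal_density / start_density)
print(f"replicators already in place during stealth: {stealth_population:.1e}")
print(f"doublings from stealth to attack: {doublings:.1f}")
print(f"time from stealth to attack: {doublings * doubling_time_h:.0f} hours")

Under these assumptions, only about thirty doublings, roughly a day, separate an undetectable population from a catastrophic one, which is why the first phase is the one that matters.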
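The redundancy argument in point 3 can also be made concrete; the failure probabilities below are invented for illustration. With n truly independent layers the combined failure probability is p**n, but a single correlated failure path, such as a compromised control center, bypasses every layer at once and sets a floor that no amount of layering can lower:

p_layer = 1e-3    # assumed failure probability of one protection layer
n_layers = 5      # assumed number of overlapping layers
p_common = 1e-6   # assumed probability of a common-mode failure (control center)

p_independent = p_layer ** n_layers
p_total = p_independent + p_common  # the common mode dominates

print(f"independent-layers estimate: {p_independent:.1e}")
print(f"with a common-mode path:     {p_total:.1e}")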
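The false-alarm problem in point 5 is a base-rate problem. The detector figures below are assumptions; the point is that even an extremely accurate detector is swamped when benign organisms outnumber ecophages by seven orders of magnitude or more:

organisms_per_m3 = 77e6      # from point 5
ecophages_per_m3 = 1.0       # stealth-phase density assumed in point 2
false_positive_rate = 1e-6   # assumed: one benign organism in a million is flagged
true_positive_rate = 0.99    # assumed: 99% of real ecophages are caught

false_alarms = organisms_per_m3 * false_positive_rate  # per cubic meter
true_hits = ecophages_per_m3 * true_positive_rate

print(f"false alarms per cubic meter: {false_alarms:.0f}")   # about 77
print(f"true detections per cubic meter: {true_hits:.2f}")   # about 1

Roughly 77 false alarms for every real detection: a defender who responds to every alarm destroys the ocean's biology, and one who ignores the alarms misses the attack.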
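Finally, the “few kilograms” claim in point 14 is easy to check; the payload mass and the per-nanobot mass are illustrative assumptions:

total_mass_kg = 2.0       # assumed total mass of nano-soldiers
population = 7e9          # approximate world population
nanobot_mass_kg = 1e-15   # a bot of about one cubic micron at water density

per_person_kg = total_mass_kg / population
print(f"payload per person: {per_person_kg * 1e9:.2f} micrograms")
print(f"nanobots per person: {per_person_kg / nanobot_mass_kg:.1e}")

About 0.3 micrograms, or some 300,000 micron-scale bots per person, which is why the claim, however alarming, is not dimensionally absurd.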