
On Wednesday, May 9th 2001, over twenty military, intelligence, government, corporate and scientific witnesses came forward at the National Press Club in Washington, DC to establish the reality of UFOs or extraterrestrial vehicles, extraterrestrial life forms, and resulting advanced energy and propulsion technologies.

DEAFENING SILENCE: Media Response to the May 9th Event
and its Implications Regarding the Truth of Disclosure

by Jonathan Kolber

http://www.disclosureproject.org/May9response.htm

My intent is to establish that the media’s curiously limited coverage of the May 9, 2001 National Press Club briefing is highly significant.

At that event, nearly two dozen witnesses stepped forward and offered their testimony as to personal knowledge of ETs and ET-related technologies. These witnesses claimed top secret clearances and military and civilian accomplishments of the highest order. Some brandished uncensored secret documents. The world’s major media were in attendance, yet few reported what they saw, and most neglected even a skeptical mention.

How can this be? Major legal trials are decided based on weaker testimony than was provided that day. Prison sentences are meted out on less. The initial Watergate evidence was thinner, and the implications of this story make Watergate insignificant by comparison. Yet the silence is deafening.

Three Possibilities:

If true, the witness testimony literally ushers in the basis for a whole new world of peace and prosperity for all. Validating the truth of Disclosure is probably the most pressing question of our times. The implications for the human future are so overwhelming that virtually everything else becomes secondary. However, the mass media have not performed validation. No investigative stories seeking to prove or disprove the witness testimony have appeared.

This cannot be due to lack of material; in the remainder of this article I will perform validation based upon material handed to the world’s media on May 9th.

In my view, only three possibilities exist: the witnesses were all lying, they were all delusional, or they were documenting the greatest cover-up in history. The reason is that if any one witness were neither lying nor delusional, then the truth of Disclosure is established. Let’s examine each possibility in turn.

If the witnesses were lying, a reasonable observer would ask, “Where is the payoff?” What is the possible benefit to a liar pleading for the chance to testify before Congress under oath? The most likely payoff would be a trip to jail. These witnesses have not openly requested any financial compensation, speaking engagements or the like, and the Disclosure Project’s operation cannot support a payoff to dozens of persons. A cursory evaluation of its “products” coupled with a visit to its Charlottesville offices will establish this. Further, the parent organization, CSETI, is an IRS 501(c)(3) nonprofit organization, and its lack of financial resources is a matter of public record. So the notion that the witnesses came forward for material benefit is unsupported by the facts at hand.

To my knowledge, large numbers of persons do not collude to lie without some compelling expected benefit. Other than money, the only such reason I can conceive in this case would be ideology. I wonder what radical extremist “ideology” could plausibly unite such a diverse group of senior corporate and military witnesses, nearly all of whom have previously displayed consistent loyalty to the United States in word and deed? I find none, and I therefore dismiss lying as implausible.

Further, the witnesses claimed impressive credentials. Among them were a Brigadier General, an Admiral, men who previously had their finger on the nuclear launch trigger, air traffic controllers, Vice Presidents of major American corporations—persons who either routinely have had our lives in their hands or made decisions affecting everyone. To my knowledge, in the half-year since May 9th, not a single claimed credential has been challenged in a public forum. Were they lying en masse, such an exposure would be a nice feather in the cap of some reporter. However, it hasn’t happened.

If all the witnesses were delusional, then a reasonable observer would presume that such “mass psychosis” did not suddenly manifest. That is, a number of witnesses would have shown psychotic tendencies in the past, in some cases probably including hospitalization. To my knowledge, this has not been alleged.

If they were documenting the greatest cover-up in history, and especially as briefing books that enumerated details of specific cases were handed out on May 9th to the dozens of reporters present, coverage should have dominated the media ever since, with a national outcry for hearings. This did not happen either.

Implications:

What do the above facts and inferences imply about the state of affairs in the media and the credibility of the witness testimony? In my view, they imply a lot.

If the witnesses were neither lying nor delusional, then the deafening media silence following May 9th implies an intentional process of failure to explore and reveal the truth. Said less politely, it implies censorship. (If I am right, this is itself an explosive statement, worthy of significant media attention—which it will not receive.) The only stories comparable in significance to May 9th would be World War III, a plague decimating millions, or the like. Yet between May 9th and September 11th, the news media was saturated with stories that are comparatively trivial.

Briefing documents were provided to reporters present. These books provided much of the due diligence necessary for those reporters to explore the truth. However, neither Watergate-type coverage nor an exposure of witness fraud has followed.

One of the witnesses reported that he had become aware of 43 persons on the payrolls of major media organs who were in fact working for the US government. Their job was to intercept ET-related stories and squelch, spin, or ridicule them. If we accept his testimony as factual, it provides a plausible explanation for the deafening silence following May 9th.

There is a bright spot in this situation. Some of the media did provide coverage, if only for a few days. This suggests that those who control media reporting do not have a monolithic power; they can be circumvented. The event did run on the internet and was seen by 250,000 viewers, despite “sophisticated electronic jamming” during the first hour (words attributed to the broadcast provider, not the Disclosure Project). Indeed, it continues to be fully documented at the Project’s web site.

Conclusions:

Since an exposé of witness deceit or mass psychosis would itself have been a good, career-building story for some reporter, but no such story has appeared, I conclude that these witnesses are who they claim to be.

If these witnesses are who they claim to be, then they presented testimony they believe truthful. Yet no factual detail of any of that testimony has since been disputed in the media. Half a year is enough time to do the research. I believe the testimony is true as presented.

If the data is true as presented and the media are essentially ignoring what is indisputably the greatest story of our era, then the media are not performing the job they claim to do. Either they are being suppressed/censored, or they do not believe the public would find this subject interesting.

The tabloids continuously run stories on ET-related subjects, and polls show high public interest in the subject, so lack of interest value cannot be the explanation. I conclude that there is active suppression. This is corroborated by the witness claim of 43 intelligence operatives on major media payrolls.

Despite active suppression, enough coverage of the May 9th event happened in major publications and broadcast media to prove that the suppression can be thwarted. An event of significant enough impact and orchestration can break through the censorship. Millions of persons previously unaware of or dubious about ET-related technologies and their significance for ending our dependence on Arab oil have since become aware.

We live in a controlled society, one in which the control is secretive yet masquerades as openness. Yet, as proven May 9th, this control can be overcome by the concerted efforts of determined groups of persons. We must seek such opportunities again.

Jacob Haqq-Misra and Seth D. Baum (2009). The Sustainability Solution to the Fermi Paradox. Journal of the British Interplanetary Society 62: 47–51.

Background: The Fermi Paradox
According to a simple but powerful inference introduced by physicist Enrico Fermi in 1950, we should expect to observe numerous extraterrestrial civilizations throughout our galaxy. Given the old age of our galaxy, Fermi postulated that if the evolution of life and subsequent development of intelligence is common, then extraterrestrial intelligence (ETI) could have colonized the Milky Way several times over by now. Thus, the paradox is: if ETI should be so widespread, where are they? Many solutions have been proposed to account for our absence of ETI observation. Perhaps the occurrence of life or intelligence is rare in the galaxy. Perhaps ETI inevitably destroy themselves soon after developing advanced technology. Perhaps ETI are keeping Earth as a zoo!

The ‘Sustainability Solution’
The Haqq-Misra & Baum paper presents a definitive statement on a plausible but often overlooked solution to the Fermi paradox, which the authors name the “Sustainability Solution”. The Sustainability Solution states: the absence of ETI observation can be explained by the possibility that exponential or other faster-growth is not a sustainable development pattern for intelligent civilizations. Exponential growth is implicit in Fermi’s claim that ETI could quickly expand through the galaxy, an assumption based on observations of human expansion on Earth. However, as we are now learning all too well, our exponential expansion frequently proves unsustainable as we reach the limits of available resources. Likewise, because all civilizations throughout the universe may have limited resources, it is possible that all civilizations face similar issues of sustainability. In other words, unsustainably growing civilizations may inevitably collapse. This possibility is the essence of the Sustainability Solution.

Implications for the Search for Extraterrestrial Intelligence (SETI)
If the Sustainability Solution is true, then we may never observe a galactic-scale ETI civilization, for such an empire would have grown and collapsed too quickly for us to notice. SETI efforts should therefore focus on ETI that grow within the limits of their carrying capacity and thereby avoid collapse. These slower-growth ETI may possess the technological capacity for both radio broadcasts and remote interstellar exploration. Thus, SETI may be more successful if it is expanded to include a search of our Solar System for small, unmanned ETI satellites.

Implications for Human Civilization Management
Does the Sustainability Solution mean that humanity must live sustainably in order to avoid collapse? Not necessarily. Humanity could collapse even if it lives sustainably—for example, if it collides with a large asteroid. Alternatively, humanity may be able to grow rapidly for much longer—for example, until we have colonized the entire Solar System. Finally, the Sustainability Solution is only one of several possible solutions to the Fermi paradox, so it is not necessarily the case that all civilizations must grow sustainably or else face collapse. However, the possibility of the Sustainability Solution makes it more likely that humanity must live more sustainably if it is to avoid collapse.

Image from The Road film, based on Cormac McCarthy's book

How About You?
I’ve just finished reading Cormac McCarthy’s The Road at the recommendation of my cousin Marie-Eve. The setting is a post-apocalyptic world and the main protagonists — a father and son — basically spend all their time looking for food and shelter, and try to avoid being robbed or killed by other starving survivors.

It very much makes me not want to live in such a world. Everybody would probably agree. Yet few people actually do much to reduce the chances of such a scenario happening. In fact, it’s worse than that; few people even seriously entertain the possibility that such a scenario could happen.

People don’t think about such things because they are unpleasant and they don’t feel they can do anything about them, but if more people actually did think about them, we could do something. We might never be completely safe, but we could significantly improve our odds over the status quo.

Danger From Two Directions: Ourselves and Nature.

Human technology is becoming more powerful all the time. We already face grave danger from nuclear weapons, and soon molecular manufacturing technologies and artificial general intelligence could pose new existential threats. We are also faced with slower, but serious, threats on the environmental side: Global warming, ocean acidification, deforestation/desertification, ecosystem collapse, etc.


Announcing $35M in new funding last Friday, Twitter was one of the few bright spots in a collapsing economy. The micro-blogging service has been attracting increasing attention within the mainstream as the political classes adopt the service – most notably, Congressman Pete Hoekstra (R-Mich.), who produced a stream of tweets detailing his location as he traveled from Andrews Air Force Base to Baghdad and back. Besides the disbelieving head-shaking this particular series of political tweets attracted, it does highlight the amorphous nature of Twitter — it isn’t clear what it really is.

Certainly, the revenue model remains unclear, as does its true utility or even what the unintended consequences of using the service may be. In a National Security sense, Twitter emerged as a powerful networked communications platform during the Mumbai terrorist attacks, when a stream of tweets marked #Mumbai (# being the global tagging system Twitter employs) gave a seemingly real-time commentary on events as they unfolded in Mumbai. Similarly, Twitter has been used to communicate the message and activity surrounding the riots in Greece using the #Griot tag. These are examples of the network effect working with a rapid communications platform and developing a powerful narrative from many different observation points. The style is anarchic but increasingly compelling.

Therefore, one argument regarding the long-term use of Twitter, in the National Security space at least, is that Twitter, in conjunction with other tools, continues the trend of making ordinary citizens active producers of potentially actionable intelligence. This applies equally to Microsoft Photosynth, and the meshing of user-created digital platforms is a future trend that doesn’t seem too far away. One of Twitter’s more recent high-profile moments was the picture of the US Airways plane in the Hudson taken by an ordinary citizen who happened to be on a ferry that went to the scene. This picture quickly and succinctly explained the situation to any emergency service in the area. The same principle can clearly be extended globally in terms of data and geographic reach. In fact, it is the increasing penetration of mobile devices that would seem to offer a bright future for the Twitter platform.

One area in which the Twitter platform excels is the tools that can be used to manipulate the information within Twitter. This is where the open feel of the service suggests it somehow has more potential than well-designed social networking platforms such as Facebook. Information is messy, and Twitter fits around this principle.

In order to examine Twitter, we established a Twitter feed at www.twitter.com/In_Terrain. The idea behind this was to use TwitterFeed, a tool that pushes RSS content to a Twitter account, to feed in material of interest and then examine ways in which this could be consumed. The results so far have been impressive. Twitterrific, available for Apple products, displays the security information feed in a very useful way; Tweetr does a similar thing for Windows-based systems, and of course TwitterBerry enables access from a BlackBerry. If users join Twitter they can choose to ‘follow’ the In_Terrain feed, receive the same information, and potentially reply to specific tweets they find interesting – thus creating the ‘conversation’ Twitter desires. Similarly, if other security- and intelligence-focused Twitter feeds become apparent, the In_Terrain feed can ‘follow’ those conversations – thus beginning the network effect.
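
For the curious, here is a rough Python sketch of the general RSS-to-Twitter idea described above. It is not TwitterFeed’s actual implementation: the feed URL is a placeholder and post_tweet() stands in for whatever Twitter client you actually use, so only the feed parsing (via the third-party feedparser library) and the 140-character trimming reflect how such a pipeline typically works.

```python
# Rough sketch of an RSS-to-Twitter pipeline in the spirit of TwitterFeed.
# Assumptions: FEED_URL is a placeholder, and post_tweet() stands in for
# whatever Twitter client or API wrapper you actually use.

import feedparser  # third-party library: pip install feedparser

FEED_URL = "http://example.com/security-news/rss"  # placeholder feed
MAX_LEN = 140                                      # Twitter's message limit

def to_tweet(entry):
    """Compose 'headline + link' and trim it to fit in one tweet."""
    text = f"{entry.title} {entry.link}"
    return text if len(text) <= MAX_LEN else text[:MAX_LEN - 1] + "..."

def post_tweet(text):
    """Placeholder: swap in a real Twitter client here."""
    print("WOULD TWEET:", text)

if __name__ == "__main__":
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries[:5]:  # push only the newest few items
        post_tweet(to_tweet(entry))
```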

Clearly, this is still experimental, and there are other avenues to explore with regard to GPS Twitter applications. The aim with the In_Terrain Twitter account is to generate tweets from mainstream information sources as well as the ‘lower frequencies’. Starting a National Security-focused Twitter feed seems like an interesting idea right now – so I welcome blog readers to ‘join the conversation’ – and please make suggestions for improvements or content additions. Maybe it will even become useful.

I have translated “Lifeboat Foundation Nanoshield” (http://www.scribd.com/doc/12113758/Nano-Shield) into Russian, and I have some thoughts about it:

1) An effective means of defense against ecophagy would be to turn all the matter on Earth into nanorobots in advance, just as every human body is composed of living cells (although this does not preclude the emergence of cancer cells). The visible world would not change. All objects would consist of nano-cells with sufficient immune potential to resist almost any foreseeable ecophagy (except purely informational attacks, like computer viruses). Even inside each living cell there would be a small nanobot controlling it. Maybe the world already consists of nanobots.
2) The authors of the project suggest that an ecophagic attack would consist of two phases — reproduction and destruction. However, the creators of ecophagy could use three phases: the first would be a quiet distribution across the Earth’s surface, underground, in the water and in the air. In this phase the nanorobots would multiply slowly and, most importantly, spread as far from each other as possible, so that their concentration everywhere on Earth would end up at about one unit per cubic meter (which makes them effectively undetectable). Only afterwards would they begin to proliferate intensely, simultaneously producing non-replicating nanorobot soldiers that attack the defensive systems. In doing so, they would first have to suppress the protection systems, the way AIDS does, or the way a modern computer virus switches off the antivirus software; the creators of future ecophagy will understand this. Once the second phase of rapid growth begins everywhere on the Earth’s surface, it would be impossible to apply tools of destruction such as nuclear strikes or directed beams, since this would mean the death of the planet in any case — and there simply would not be enough bombs in store.
3) The authors overestimate the reliability of protection systems. Any system has a control center, which is a vulnerable point. The authors implicitly assume that any person has some small probability of suddenly becoming a terrorist willing to destroy the world (and although the probability is very small, the large number of people living on Earth makes it meaningful). But because such a system will be managed by people, those people may also want to destroy the world. Nanoshield could destroy the entire world after one erroneous command. (Even if an AI manages it, we cannot say a priori that the AI cannot go mad.) The authors believe that multiple overlapping layers of protection will make Nanoshield 100% safe from hackers, but no known computer system is 100% safe — essentially all major computer systems have been broken by hackers, including Windows and the iPod.
4) Nanoshield could develop something like an autoimmune reaction. The authors’ idea that it is possible to achieve 100% reliability by increasing the number of control systems is very superficial: the more complex the system, the more difficult it is to calculate all the variants of its behavior, and the more likely it is to fail in the spirit of chaos theory.
5) Each cubic meter of ocean water contains 77 million living beings (in the northern Atlantic, according to the book «Zoology of Invertebrates»). Hostile ecophages can easily camouflage themselves as natural living beings, and vice versa; the ability of natural living beings to reproduce, move and emit heat will significantly hamper the detection of ecophages, creating a high level of false alarms (see the sketch after this list). Moreover, ecophages may at some stage in their development be fully biological creatures, with all the blueprints of the nanorobot recorded in DNA, and thus be almost indistinguishable from a normal cell.
6) There are significant differences between ecophages and computer viruses. The latter exist in an artificial environment that is relatively easy to control — for example, one can turn off the power, get direct access to memory, boot from other media, and an antivirus can be delivered almost instantly to any computer. Nevertheless, a significant portion of computers have been infected with viruses, and many users are resigned to the presence of some malware on their machines as long as it does not slow down their work too much.
7) Compare: Stanislaw Lem wrote a story, “Darkness and Mold”, whose main plot is about ecophages.
8) The problem of Nanoshield must be analyzed dynamically in time — namely, the technical perfection of Nanoshield must stay ahead of the technical perfection of nanoreplicators at any given moment. From this perspective, the whole concept seems very vulnerable, because creating an effective global Nanoshield requires many years of nanotechnology development — both engineering development and political development — while creating primitive ecophages that are nevertheless capable of completely destroying the biosphere requires much less effort. Example: creating a global missile defense system (ABM — which still does not exist) is much more complex technologically and politically than creating intercontinental nuclear missiles.
9) We should be aware that in the future there will be no fundamental difference between computer viruses, biological viruses and nanorobots — all of them are information, given the availability of «fabs» that can transfer information from one carrier to another. Living cells could construct nanorobots, and vice versa; spreading over computer networks, computer viruses could capture bioprinters or nanofabs and force them to produce dangerous bioorganisms or nanorobots (or malware could even be integrated into existing computer programs, nanorobots or the DNA of artificial organisms). These nanorobots could then connect to computer networks (including the network controlling Nanoshield) and send their code in electronic form. In addition to these three forms of virus — nanotechnological, biotechnological and computer — other forms are possible, for example a cognitive one: the virus transformed into a set of ideas in the human brain that push a person to write computer viruses and nanobots. The idea of “hacking” is already such a meme.
10) It must be noted that in the future artificial intelligence will be much more accessible, and thus viruses will be much more intelligent than today’s computer viruses. The same applies to nanorobots: they will have a certain understanding of reality and the ability to quickly rebuild themselves, even to invent innovative designs and adapt to new environments. An essential question about ecophagy is whether the individual nanorobots are independent of each other, like bacterial cells, or whether they act as a unified army with a single command and communication system. In the latter case, it may be possible to intercept the command of the hostile army of ecophages.
11) Everything that is suitable for combating ecophagy is also suitable as a defensive (and possibly offensive) weapon in a nanowar.
12) Nanoshield is possible only as a global organization. If any part of the Earth is not covered by it, Nanoshield will be useless (because nanorobots there will multiply in such quantities that it would be impossible to confront them). It is also an effective weapon against people and organizations, so it should appear only after the full and final political unification of the globe. The latter may result either from a world war for the unification of the planet or from humanity merging in the face of terrible catastrophes, such as an outbreak of ecophagy. In any case, the appearance of Nanoshield would have to be preceded by some catastrophe, which means a great chance of losing humanity.
13) The discovery of «cold fusion» or other unconventional energy sources would make a much more rapid spread of ecophagy possible, as the ecophages would be able to live in the bowels of the Earth and would not require solar energy.
14) It is wrong to consider self-replicating and non-replicating nanoweapons separately. Some kinds of ecophagy can produce nano-soldiers that attack and kill all life. (Such ecophagy could become a global tool of blackmail.) It has been said that a few kilograms of nano-soldiers could be enough to destroy all the people on Earth. Some kinds of ecophagy could, in an early phase, disperse throughout the world, multiplying and moving very slowly and quietly, then produce a number of nano-soldiers to attack humans and defensive systems, and only then begin to multiply intensively in all areas of the globe. But a person filled with nano-medicine could resist the attack of a nano-soldier, since medical nanorobots would be able to neutralize poisons and repair torn arteries; in that case a small nanorobot would have to attack primarily informationally rather than through a large release of energy.
15) Does information transparency mean that everyone can access the code of a dangerous computer virus, or the description of a nanorobot ecophage? A world where viruses and knowledge of mass destruction can be instantly disseminated through the tools of information transparency can hardly be secure. We need to control not only nanorobots but, above all, the persons or other entities that might launch ecophagy. The smaller the number of such people (for example, nanotechnology scientists), the easier it would be to control them. Conversely, the diffusion of this knowledge among billions of people would make the emergence of nano-hackers inevitable.
16) The claim that the number of creators of defenses against ecophagy will exceed the number of creators of ecophagy by many orders of magnitude seems doubtful if we consider the example of computer viruses. There we see that, on the contrary, the number of virus writers exceeds by many orders of magnitude the number of firms and projects working on anti-virus protection, and moreover, most anti-virus systems cannot work together because they interfere with each other. Terrorists could also masquerade as people opposing ecophagy and try to deploy their own system for combating it, one containing a backdoor that allows it to be suddenly reprogrammed for a hostile goal.
17) The text implicitly suggests that Nanoshield precedes the invention of self-improving AI of superhuman level. However, from other forecasts we know that this event is very likely, and most likely to occur simultaneously with the flourishing of advanced nanotechnology. Thus, it is not clear in what timeframe the Nanoshield project is supposed to exist. A developed artificial intelligence would be able to create a better Nanoshield and Infoshield — and also the means to overcome any human-made shields.
18) We should be aware of the equivalence of nanorobots and nanofabs — the first can create the second, and vice versa. This erases the border between replicating and non-replicating nanomachines, because a device not initially intended to replicate itself could somehow construct a nanorobot, or reprogram itself into a nanorobot capable of replication.
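
To make point 5 concrete, here is a back-of-the-envelope sketch of how the biological background could swamp even a very accurate ecophage detector. Only the 77-million-organisms-per-cubic-meter figure comes from the text above; the detector error rates and the ecophage density are invented for illustration.

```python
# Back-of-the-envelope false-alarm estimate for detecting ecophages against
# the biological background mentioned in point 5. Only the 77 million
# organisms-per-cubic-meter figure comes from the text; the other numbers
# are illustrative assumptions.

organisms_per_m3 = 77_000_000   # natural organisms per cubic meter (from the text)
ecophages_per_m3 = 1            # hypothetical early-phase ecophage density
false_positive_rate = 1e-6      # detector flags 1 in a million natural organisms
true_positive_rate = 0.99       # detector catches 99% of real ecophages

false_alarms = organisms_per_m3 * false_positive_rate
true_hits = ecophages_per_m3 * true_positive_rate

print(f"Per cubic meter scanned: {false_alarms:.0f} false alarms vs {true_hits:.2f} true detections")
# Even a one-in-a-million false-positive rate yields ~77 false alarms for every
# real ecophage, so nearly every alert would be spurious, which is the
# "high level of false alarms" the point warns about.
```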

Abstract

What counts as rational development and commercialization of a new technology—especially something as potentially wonderful (and dangerous) as nanotechnology? A recent newsletter of the EU nanomaterials characterization group NanoCharM got me thinking about this question. Several authors in this newsletter advocated, by a variety of expressions, a rational course of action. And I’ve heard similar rhetoric from other camps in the several nanoscience and nanoengineering fields.

We need a sound way of characterizing nanomaterials, and then an account of their fate and transport, and their novel properties. We need to understand the bioactivity of nanoparticles, and their effect in the environments where they may end up. We need to know what kinds of nanoparticles occur naturally, which are incidental to other engineering processes, and which we can engineer de novo to solve the world’s problems—and to fill some portion of the world’s bank accounts. We need life-cycle analyses, and toxicity and exposure studies, and cost-benefit analyses. It’s just the rational way to proceed. Well who could argue with that?

Article


Leaving aside the lunatic fringe—those who would charge ahead guns (or labs) a-blazing—I suspect that there is broad but shallow agreement on and advocacy of the rational development of nanotechnology. That is, what is “rational” to the scientists might not be “rational” to many commercially oriented engineers, but each group would lay claim to the “rational” high ground. Neither conception of rational action is likely to be assimilated easily to the one shared by many philosophers and ethicists who, like me, have become fascinated by ethical issues in nanotechnology. And when it comes to rationality, philosophers do like to take the high ground but don’t always agree where it is to be found—except under one’s own feet. Standing on top of the Karakoram giant K2, one may barely glimpse the top of Everest.

So in the spirit of semantic housekeeping, I’d like to introduce some slightly less abstract categories, to climb down from the heights of rationality and see if we might better agree (and more perspicuously disagree) on what to think and what to do about nanotechnology. At the risk of clumping together some altogether disparate researchers, I will posit that the three fields mentioned above—science, engineering, and philosophy—want different things from their “rational” courses of action.

The scientists, especially the academics, want knowledge of fundamental structures and processes of nanoparticles. They want to fit this knowledge into existing accounts of larger-scale particles in physics, chemistry, and biology. Or they want to understand how engineered and natural nanoparticles challenge those accounts. They want to understand why these particles have the causal properties that they do. Prudent action, from the scientific point of view, requires that we not change the received body of knowledge called science until we know what we’re talking about.

The engineers (with apologies here to academic engineers who are more interested in knowledge-creation than product-creation) want to make things and solve problems. Prudence on their view involves primarily ends-means or instrumental rationality. To pursue the wrong means to an end—for instance, to try to construct a new macro-level material from a supposed stock of a particular engineered nanoparticle, without a characterization or verification of what counts as one of those particles—is just wasted effort. For the engineers, wasted effort is a bad thing, since there are problems that want solutions, and solutions (especially to public health and environmental problems) are time sensitive. Some of these problems have solutions that are non-nanotech, and the market rewards the first through the gate. But the engineers don’t need a complete scientific understanding of nanoparticles to forge ahead with efforts. As Henry Petroski recently said in the Washington Post (1/25/09), “[s]cience seeks to understand the world as it is; only engineering can change it.”

The philosophers are of course a more troublesome lot. Prudence on their view takes on a distinctly moral tinge, but they recognize the other forms too. Philosophers are mostly concerned with the goodness of the ends pursued by the engineers, and the power of the knowledge pursued by the scientists. Ever since von Neumann’s suggestion of the technological inevitability of scientific knowledge, some philosophers have worried that today’s knowledge, set aside perhaps because of excessive risks, can become tomorrow’s disastrous products.

The key disagreement, though, is between the engineers and the philosophers, and the central issues concern the plurality of good ends and the incompatibility of some of them with others. For example, it is certainly a good end to have clean drinking water worldwide today, and we might move towards that end by producing filtration systems with nanoscale silver or some other product. It is also a good end to have healthy aquatic ecosystems today, and to have viable fisheries tomorrow, and future people to benefit from them. These ends may not all be compatible. When we add up the good ends over many scales, the balancing problem becomes almost insurmountable. Just consider a quick accounting: today’s poor, many of whom will die from water-borne disease; cancer patients sickened by the imprecise “cures” given to them; future people whose access to clean water and sustainable forms of energy hangs in the balance. We could go on.

When we think about these three fields and their allegedly separate conceptions of prudent action, it becomes clear that their conceptions of prudence can be held by one and the same person, without fear of multiple personality disorder. Better, then, to consider these scientific, engineering, and philosophical mindsets, which are held in greater or lesser concentrations by many researchers. That they are held in different concentrations by the collective consciousness of the nanotechnology field is manifest, it seems, by the disagreement over the right principle of action to follow.

I don’t want to “psychologize” or explain away the debate over principles here, but isn’t it plausible to think that advocates of the Precautionary Principle have the philosophical mindset to a great degree, and so they believe that catastrophic harm to future generations isn’t worth even a very small risk? That is because they count the good ends to be lost as greater in number (and perhaps in goodness) than the good ends to be gained.

Those of the engineering mindset, on the other hand, want to solve problems for people living now, and they might not worry so much about future problems and future populations. They are apt to prefer a straightforward Cost-Benefit Principle, with serious discounting of future costs. The future, after all, will have their own engineers, and a new set of tools for the problems they face. Of course, those of us alive today will in large part create the problems faced by those future people. But we will also bequeath to them our science and engineering.

I’d like to offer a conjecture at this point about the basic insolubility of tensions between the scientific, engineering, and philosophical mindsets and their conceptions of prudent action. The conjecture is inspired by the Impossibility Theorem of the Nobel Prize winning economist Kenneth Arrow, but only informally resembles his brilliant conclusion. In a nutshell, it is this. If we believe that the nanotechnology field has to aggregate preferences for prudential action over these three mindsets, where there are multiple choices to be made over development and commercialization of nanotechnology’s products, we will not come to agreement on what counts as prudent action. This conjecture owes as much to the incommensurability of various good ends, and the means to achieve them, as it does to the kind of voting paradox of which Arrow’s is just one example.
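
A minimal illustration of the kind of voting paradox the conjecture leans on is a Condorcet cycle (a simpler cousin of Arrow’s result, not the theorem itself). If the three mindsets hold the cyclic preferences invented below, no majority-consistent ranking of the options exists.

```python
# Minimal Condorcet-cycle illustration of the aggregation problem described
# above. The three "mindsets" and their preference orderings are invented
# for illustration; Arrow's theorem itself is far more general.

from itertools import combinations

options = ["develop_now", "develop_with_testing", "moratorium"]

# Each mindset ranks the options from most to least preferred (hypothetical).
preferences = {
    "engineering": ["develop_now", "develop_with_testing", "moratorium"],
    "science":     ["develop_with_testing", "moratorium", "develop_now"],
    "philosophy":  ["moratorium", "develop_now", "develop_with_testing"],
}

def prefers(ranking, a, b):
    """True if option a is ranked above option b."""
    return ranking.index(a) < ranking.index(b)

for a, b in combinations(options, 2):
    votes_for_a = sum(prefers(ranking, a, b) for ranking in preferences.values())
    winner, loser = (a, b) if votes_for_a >= 2 else (b, a)
    print(f"majority prefers {winner} over {loser}")

# The three majority verdicts form a cycle: develop_now beats
# develop_with_testing, develop_with_testing beats moratorium, and moratorium
# beats develop_now, so no single "prudent" ranking satisfies all three
# mindsets at once.
```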

If I am right in this conjecture, we shouldn’t be compelled to try to please all of the people all of the time. Once we give up on this “everyone wins” mentality, perhaps we can get on with the business of making difficult choices that will create different winners and losers, both now and in the future. Perhaps we will also get on with the very difficult task of achieving a comprehensive understanding of the goals of science, engineering, and ethics.

Thomas M. Powers, PhD
Director—Science, Ethics, and Public Policy Program
and
Assistant Professor of Philosophy
University of Delaware

According to the Associated Press, Abdul Qadeer Khan is now free to “move around” and is no longer under house arrest (where he had been confined since 2004).

“In January 2004, Khan confessed to having been involved in a clandestine international network of nuclear weapons technology proliferation from Pakistan to Libya, Iran and North Korea. On February 5, 2004, the President of Pakistan, General Pervez Musharraf, announced that he had pardoned Khan, who is widely seen as a national hero.” (Source)

For more information about nuclear proliferation, see:

See also this recent post by Michael Anissimov, the Fundraising Director of the Lifeboat Foundation.

(This essay has been published by the Innovation Journalism Blog — here — Deutsche Welle Global Media Forum — here — and the EJC Magazine of the European Journalism Centre — here)

Thousands of lives were consumed by the November terror attacks in Mumbai.

“Wait a second”, you might be thinking. “The attacks were truly horrific, but all news reports say around two hundred people were killed by the terrorists, so thousands of lives were definitely not consumed.”

You are right. And you are wrong.

Indeed, around 200 people were murdered by the terrorists in an act of chilling exhibitionism. And still, thousands of lives were consumed. Imagine that a billion people devoted, on average, one hour of their attention to the Mumbai tragedy: following the news, thinking about it, discussing it with other people. The number is a wild guess, but the guess is far from a wild number. There are over a billion people in India alone. Many there spent whole days following the drama. One billion people times one hour is one billion hours, which is more than 100,000 years. The global average life expectancy is today 66 years. So nearly two thousand lives were consumed by news consumption. It’s far more than the number of people murdered, by any standards.
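
The arithmetic behind that estimate is easy to check; the one-billion-viewers and one-hour figures are, as the author says, guesses.

```python
# Checking the back-of-the-envelope figure in the paragraph above.
# The one billion viewers and one hour each are the author's stated guesses.

viewers = 1_000_000_000        # people paying attention (guess from the text)
hours_each = 1                 # average attention per person (guess from the text)
life_expectancy_years = 66     # global average cited in the text

total_hours = viewers * hours_each
total_years = total_hours / (24 * 365)
lives_consumed = total_years / life_expectancy_years

print(f"{total_hours:,} hours = {total_years:,.0f} years = {lives_consumed:,.0f} lifetimes")
# Roughly 114,000 years, i.e. about 1,700 lifetimes: "nearly two thousand lives".
```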

In a sense, the newscasters became unwilling bedfellows of the terrorists. One terrorist survived the attacks, confessing to the police that the original plan had been to top off the massacre by taking hostages and outlining demands in a series of dramatic calls to the media. The terrorists wanted attention. They wanted the newsgatherers to give it to them, and they got it. Their goal was not to kill a few hundred people. It was to scare billions, forcing people to change reasoning and behavior. The terrorists pitched their story by being extra brutal, providing news value. Their targets, among them luxury hotels frequented by the international business community, provided a set of target audiences for the message of their sick reality show. Several people in my professional surroundings canceled business trips to Mumbai after watching the news. The terrorists succeeded. We must count on more terror attacks on luxury hotels in the future.

Can the journalists and news organizations who were in Mumbai be blamed for serving the interests of the terrorists? I think not. They were doing their jobs, reporting on the big scary event. The audience flocked to their stories. Their business model — generating and brokering attention — was exploited by the terrorists. The journalists were working on behalf of the audience, not on behalf of the terrorists. But that did not change the outcome. The victory of the terrorists grew with every eyeball that was attracted by the news. Without doubt, one of the victims was the role of journalism as a non-involved observer. It got zapped by a paradox. It’s not the first time. Journalism always follows “the Copenhagen interpretation” of quantum mechanics: You can’t measure a system without influencing it.

Self-reference is a classic dilemma for journalism. Journalism wants to observe, not be an actor. It wants to cover a story without becoming part of it. At the same time it aspires to empower the audience. But by empowering the audience, it becomes an actor in the story. Non-involvement won’t work; it is a self-referential paradox like the Epimenides paradox (the prophet from Crete who said “All Cretans are liars”). The most basic self-referential paradox is the liar’s paradox (“This sentence is false”). This can be a very constructive paradox, if taken by the horns. It inspired Kurt Gödel to reinvent the foundations of mathematics by addressing self-reference. Perhaps the principles of journalism can be reinvented, too? Perhaps the paradox of non-involvement can be replaced by an ethics of engagement as practiced by, for example, psychologists and lawyers?

While many classic dilemmas provide constant frustration throughout life, this one is about to get increasingly wicked. Here is why. It is only 40 years since the birth of collaboration between people sitting behind computers linked by a network, “the mother of all demos”, when Doug Engelbart and his team at SRI demoed the first computer mouse, interactive text, video conferencing, teleconferencing, e-mail and hypertext.

Only 40 years after their first demo, and only 15 years after the Internet reached beyond the walls of university campuses, Doug’s tools are in almost every home and office. Soon they’ll be built into every cell phone. We are always online. For the first time in human history, the attention of the whole world can soon be summoned simultaneously. If we summon all the attention the human species can supply, we can focus two hundred human years of attention onto a single issue in a single second. This attention comes equipped with growing computing power that can process information in a big way.
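
The “two hundred human years in a single second” figure also checks out, assuming a late-2000s world population of roughly 6.7 billion (my assumption, not a number given in the text).

```python
# Checking the "two hundred human years of attention in a single second" claim.
# The world population of roughly 6.7 billion is an assumption for the late 2000s.

population = 6_700_000_000
seconds_per_year = 60 * 60 * 24 * 365

person_years_per_second = population / seconds_per_year
print(f"{person_years_per_second:.0f} person-years of attention per second")
# About 212 person-years, on the order of the two hundred years quoted above.
```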

Every human on the Net is using a computer device able to do millions or billions of operations per second. And more is to come. New computers are always more powerful than their predecessors. The power has doubled every two years since the birth of computers. This is known as Moore’s Law.

If the trend continues for another 40 years, people will be using computers one million times more powerful than today. Try imagining what you can do with that in your phone or hand-held gaming device! Internet bandwidth is also booming. Everybody on Earth will have at least one gadget. We will all be well connected. We will all be able to focus our attention, our ideas and our computational powers on the same thing at the same go. That’s pretty powerful. This is actually what Doug was facilitating when he dreamed up the Demo. The mouse — what Doug is famous for today — is only a detail. Doug says we can only solve the complex problems of today by summoning collective intelligence. Nuclear war, pandemics, global warming. These are all problems requiring collective intelligence. The key to collective intelligence is collective attention. The flow of attention controls how much of our collective intelligence gets allocated to different things.
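
The computing-power projection at the start of the previous paragraph follows directly from the doubling rule, assuming with the author that the two-year doubling simply continues for another 40 years.

```python
# The "one million times more powerful" projection, assuming the two-year
# doubling described above simply continues for another 40 years.

years = 40
doubling_period = 2                  # years per doubling (Moore's Law, as stated)
doublings = years // doubling_period

factor = 2 ** doublings
print(f"{doublings} doublings -> {factor:,}x more powerful")  # 20 doublings -> 1,048,576x
```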

When Doug Engelbart keynoted the Fourth Conference on Innovation Journalism, he pointed out that journalism is the perception system of collective intelligence. He hit the nail on the head. When people share news, they have a story in common. This shapes a common picture of the world and a common set of narratives for discussing it. It is agenda setting (there is an established “agenda-setting theory” about this). Journalism is the leading mechanism for generating collective attention. Collective attention is needed for shaping a collective opinion. Collective intelligence might require a collective opinion in order to address collective issues.

Here is where innovation journalism can help. In order for collective intelligence to transform ideas into novelties, we need to be able to generate common sets of narratives around how innovation happens. How do people and organizations doing different things come together in the innovation ecosystem? Narratives addressing this question make it possible for each one of us to relate to the story of innovation. Innovation journalism turns collective attention toward new things in society that will increase the value of our lives. This collective attention in turn facilitates the formulation of a collective opinion. Innovation journalism thus connects the innovation economy and democracy (or any other system of governance).

There is an upside and a downside to everything. We can now summon collective attention to track the spread of diseases. But we are also more susceptible to fads, hypes and hysterias. Will our ability to focus collective attention improve our lives or will we become victims of collective neurosis?

We are moving into the attention economy. Information is no longer a scarce commodity. But attention is. Some business strategists think ‘attention transactions’ can replace financial transactions as the focus of our economy. In this sense, the effects of collective attention on society are the macroeconomics of the attention economy. Collective attention is key for exercising collective intelligence. Journalism — the professional generator and broker of collective attention — is a key factor.

This brings us back to Mumbai. How collectively intelligent was it to spend thousands of human lifetimes of attention following the slaughter of hundreds? The jury is out on that one — it depends on the outcome of our attention. Did the collective attention benefit the terrorists? Yes, at least in the short term. Perhaps even in the long term. Did it help solve the situation in Mumbai? Unclear. Could the collective attention have been aimed in other ways at the time of the attacks, which would have had a better outcome for people and society? Yes, probably.

The more wired the world gets, the more terrorism can thrive. When our collective attention grows, the risk of collective fear and obsession follows. It is a threat to our collective mental health, one that will only increase unless we introduce some smart self-regulating mechanisms. These could direct our collective attention to the places where it would benefit society instead of harming it.

The dynamics between terrorism and journalism is a market failure of the attention economy.

No, I am not supporting government control over the news. A planned economy has proven not to be a solution to market failures. The problem needs to be solved by a smart feedback system. Solutions may lie in new business models that give journalism incentives to generate constructive and proportional attention around issues, empowering people and bringing value to society. Just selling raw eyeballs or Internet traffic by the pound to advertisers is a recipe for market failure in the attention economy. So perhaps it is not all bad that the traditional raw-eyeball business models are being re-examined. It is a good time for researchers to look at how different journalism business models generate different sorts of collective attention, and how that drives our collective intelligence. Really good business models for journalism bring prosperity to the journalism industry, its audience, and the society it works in.

For sound new business models to arise, journalism needs to come to grips with its inevitable role as an actor. Instead of discussing why journalists should not get involved with sources or become parts of the stories they tell, perhaps the solution is for journalists to discuss why they should get involved. Journalists must find a way to do so without losing the essence of journalism.

Ulrik Haagerup is the leader of the Danish National Public News Service, DR News. He is tired of seeing ‘bad news makes good news and good news makes bad news’. Haagerup is promoting the concept of “constructive journalism”, which focuses on enabling people to improve their lives and societies. Journalism can still be critical, independent and kick butt.

The key issue Haagerup pushes is that it is not enough to show the problem and the awfulness of horrible situations. That only feeds collective obsession, neurosis and, ultimately, depression. Journalism must cover problems from the perspective of how they can be solved. Then our collective attention can be very constructive. Constructive journalism will look for all kinds of possible solutions, comparing and scrutinizing them, finding relevant examples and involving the stakeholders in the process of finding solutions.

I will be working with Haagerup this summer; together with Willi Rütten of the European Journalism Centre, we will be presenting a workshop on ‘constructive innovation journalism’ at the Deutsche Welle Global Media Forum, 3–5 June 2009.