
I have translated the Lifeboat Foundation "Nanoshield" paper (http://www.scribd.com/doc/12113758/Nano-Shield) into Russian, and I have some thoughts about it:

1) The effective means of defense against ecophagy would be to turn all the matter on Earth into nanorobots in advance, just as every human body is composed of living cells (although this does not preclude the emergence of cancer cells). The visible world would not change. All objects would consist of nano-cells with sufficient immune potential to resist almost any foreseeable ecophagy (except purely informational attacks, like computer viruses). Even each living cell would contain a small nanobot to control it. Maybe the world already consists of nanobots.
2) The authors of the project suggest that an ecophagic attack would consist of two phases: reproduction and destruction. However, the creators of ecophagy could use three phases. The first phase would be a quiet distribution across the Earth's surface, underground, in the water, and in the air. In this phase the nanorobots would multiply at a slow rate and, most importantly, would try to move as far away from one another as possible, so that their concentration everywhere on Earth would end up at roughly one unit per cubic meter (which makes them practically undetectable). Only after that would they start to proliferate intensely, simultaneously creating nanorobot soldiers that do not replicate but attack the defensive system. In doing so, they would first have to suppress the protection systems, much as HIV suppresses the immune system, or as modern computer viruses switch off antivirus software. The creators of a future ecophagy will understand this. Since the second phase of rapid growth would begin everywhere on the Earth's surface at once, it would be impossible to apply tools of destruction such as nuclear strikes or directed beams, as this would mean the death of the planet in any case; and there simply would not be enough bombs in store.
3) The authors overestimate the reliability of protection systems. Any system has a control center, which is a weak spot. The authors implicitly assume that any person can, with a certain probability, suddenly become a terrorist willing to destroy the world (and although the probability is very small, the large number of people living on Earth makes it meaningful). But because such a system would be managed by people, those people may also want to destroy the world. Nanoshield could destroy the entire world after one erroneous command. (Even if an AI manages it, we cannot say a priori that the AI cannot go mad.) The authors believe that multiple overlapping layers of protection will make Nanoshield 100% safe from hackers, but no known computer system is 100% safe: all major computer platforms have been broken by hackers, including Windows and the iPod.
4) Nanoshield could develop something like an autoimmune reaction. The authors' idea that it is possible to achieve 100% reliability by increasing the number of control systems is superficial: the more complex a system is, the harder it is to calculate all the variants of its behavior, and the more likely it is to fail in the spirit of chaos theory.
5) Each cubic meter of ocean water contains about 77 million living beings (in the North Atlantic, according to the textbook Zoology of Invertebrates). Hostile ecophages could easily camouflage themselves as natural living beings, and vice versa; the ability of natural living beings to reproduce, move, and emit heat will significantly hamper the detection of ecophages, creating a high level of false alarms. Moreover, ecophages may at some stage of their development be fully biological creatures, with all the blueprints of the nanorobot recorded in DNA, and thus be almost indistinguishable from normal cells.
6) There are significant differences between ecophages and computer viruses. The latter exist in an artificial environment that is relatively easy to control: the power can be turned off, memory can be accessed directly, the machine can be booted from other media, and an antivirus update can be delivered to any computer almost instantly. Nevertheless, a significant portion of computers have been infected by viruses, and many users are resigned to the presence of some malware on their machines as long as it does not slow down their work too much.
7) Compare: Stanislaw Lem wrote a story, "Darkness and Mold," whose main plot is about ecophages.
8) The problem of Nanoshield must be analyzed dynamically in time: at any given moment, the technical sophistication of Nanoshield must stay ahead of the technical sophistication of nanoreplicators. From this perspective the whole concept seems very vulnerable, because creating an effective global Nanoshield requires many years of nanotechnological development, both constructive and political, while creating primitive ecophages capable, nonetheless, of completely destroying the biosphere requires much less effort. Example: creating a global missile defense system (ABM, which still does not exist) is much more complex technologically and politically than creating intercontinental nuclear missiles.
9) We should be aware that in the future there will be no principal difference between computer viruses, biological viruses, and nanorobots: all of them are information, given the availability of "fabs" that can transfer information from one carrier to another. Living cells could construct nanorobots, and vice versa; spreading over computer networks, computer viruses could capture bioprinters or nanofabs and force them to produce dangerous bio-organisms or nanorobots (or malware could even be integrated into existing computer programs, nanorobots, or the DNA of artificial organisms). These nanorobots could then connect to computer networks (including the network that controls Nanoshield) and send their code in electronic form. In addition to these three forms of virus (nanotechnological, biotechnological, and computer), other forms are possible, for example a cognitive one: a virus transformed into a set of ideas in the human brain that pushes a person to write computer viruses and nanobots. The idea of "hacking" is now such a meme.
10) It must be noted that in the future artificial intelligence will be much more accessible, so viruses will be much more intelligent than today's computer viruses; the same applies to nanorobots. They will have a certain understanding of reality and the ability to rebuild themselves quickly, even to invent innovative designs of their own and adapt to new environments. An essential question about ecophagy is whether individual nanorobots are independent of each other, like bacterial cells, or whether they act as a unified army with a single command and communication system. In the latter case it may be possible to intercept command of the hostile ecophage army.
11) Anything suitable for combating ecophagy is also suitable as a defensive (and possibly offensive) weapon in a nanowar.
12) Nanoshield is possible only as a global organization. If any part of the Earth is not covered by it, Nanoshield will be useless, because nanorobots will multiply there in such quantities that it would be impossible to confront them. It would also be an effective weapon against people and organizations. So it can appear only after the full and final political unification of the globe. The latter may result either from a world war for the unification of the planet, or from the merging of humanity in the face of a terrible catastrophe, such as an outbreak of ecophagy. In any case, the appearance of Nanoshield must be preceded by some accident, which means a great chance of losing humanity.
13) The discovery of "cold fusion" or other unconventional energy sources would make a much more rapid spread of ecophagy possible, as ecophages would be able to live in the bowels of the earth and would not require solar energy.
14) It is wrong to consider self-replicating and non-replicating nanoweapons separately. Some kinds of ecophagy can produce nano-soldiers that attack and kill all life. (Such ecophagy could become a global tool of blackmail.) It has been said that a few kilograms of nano-soldiers could be enough to destroy all people on Earth. Some kinds of ecophagy could, in an early phase, disperse throughout the world, multiplying and moving very slowly and quietly, then produce a number of nano-soldiers that attack humans and defensive systems, and only then begin to multiply intensively in all areas of the globe. But a human protected by nano-medicine could resist an attack by nano-soldiers, since medical nanorobots would be able to neutralize poisons and repair torn arteries. In that case a small nanorobot would have to attack primarily informationally, rather than through a large release of energy.
15) Does information transparency mean that everyone can access the code of a dangerous computer virus, or the description of a nanorobot ecophage? A world where viruses and knowledge of mass destruction can be instantly disseminated through the tools of information transparency can hardly be secure. We need to control not only nanorobots, but above all the persons or other entities that could launch ecophagy. The smaller the number of such people (for example, nanotechnology scientists), the easier it would be to control them. Conversely, the diffusion of this knowledge among billions of people would make the emergence of nano-hackers inevitable.
16) The claim that the number of people creating defenses against ecophagy will exceed the number of people creating ecophagy by many orders of magnitude seems doubtful if we consider the example of computer viruses. There we see, conversely, that the number of virus writers exceeds the number of antivirus firms and projects by many orders of magnitude; moreover, most antivirus systems cannot work together, as they interfere with each other. Terrorists could also masquerade as people opposing ecophagy and try to deploy their own system for combating it, one containing a backdoor that allows it to be suddenly reprogrammed for hostile goals.
17) The text implicitly suggests that Nanoshield will precede the invention of self-improving AI of superhuman level. However, from other forecasts we know that such an invention is very likely, and most likely to occur simultaneously with the flourishing of advanced nanotechnology. Thus it is not clear in what timeframe the Nanoshield project would exist. A developed artificial intelligence would be able to create a better Nanoshield and Infoshield, as well as the means to overcome any human-made shields.
18) We should be aware of the equivalence of nanorobots and nanofactories: the first can create the second, and vice versa. This erases the border between replicating and non-replicating nanomachines, because a device not initially intended to replicate itself could somehow construct a nanorobot, or be reprogrammed into a nanorobot capable of replication.

Abstract

What counts as rational development and commercialization of a new technology—especially something as potentially wonderful (and dangerous) as nanotechnology? A recent newsletter of the EU nanomaterials characterization group NanoCharM got me thinking about this question. Several authors in this newsletter advocated, by a variety of expressions, a rational course of action. And I’ve heard similar rhetoric from other camps in the several nanoscience and nanoengineering fields.

We need a sound way of characterizing nanomaterials, and then an account of their fate and transport, and their novel properties. We need to understand the bioactivity of nanoparticles, and their effect in the environments where they may end up. We need to know what kinds of nanoparticles occur naturally, which are incidental to other engineering processes, and which we can engineer de novo to solve the world’s problems—and to fill some portion of the world’s bank accounts. We need life-cycle analyses, and toxicity and exposure studies, and cost-benefit analyses. It’s just the rational way to proceed. Well who could argue with that?

Article

What counts as rational development and commercialization of a new technology—especially something as potentially wonderful (and dangerous) as nanotechnology? A recent newsletter of the EU nanomaterials characterization group NanoCharM got me thinking about this question. Several authors in this newsletter advocated, by a variety of expressions, a rational course of action. And I’ve heard similar rhetoric from other camps in the several nanoscience and nanoengineering fields.

We need a sound way of characterizing nanomaterials, and then an account of their fate and transport, and their novel properties. We need to understand the bioactivity of nanoparticles, and their effect in the environments where they may end up. We need to know what kinds of nanoparticles occur naturally, which are incidental to other engineering processes, and which we can engineer de novo to solve the world’s problems—and to fill some portion of the world’s bank accounts. We need life-cycle analyses, and toxicity and exposure studies, and cost-benefit analyses. It’s just the rational way to proceed. Well who could argue with that?

Leaving aside the lunatic fringe—those who would charge ahead guns (or labs) a-blazing—I suspect that there is broad but shallow agreement on and advocacy of the rational development of nanotechnology. That is, what is “rational” to the scientists might not be “rational” to many commercially oriented engineers, but each group would lay claim to the “rational” high ground. Neither conception of rational action is likely to be assimilated easily to the one shared by many philosophers and ethicists who, like me, have become fascinated by ethical issues in nanotechnology. And when it comes to rationality, philosophers do like to take the high ground but don’t always agree where it is to be found—except under one’s own feet. Standing on the top of the Himalayan giant K2, one may barely glimpse the top of Everest.

So in the spirit of semantic housekeeping, I’d like to introduce some slightly less abstract categories, to climb down from the heights of rationality and see if we might better agree (and more perspicuously disagree) on what to think and what to do about nanotechnology. At the risk of clumping together some altogether disparate researchers, I will posit that the three fields mentioned above—science, engineering, and philosophy—want different things from their “rational” courses of action.

The scientists, especially the academics, want knowledge of fundamental structures and processes of nanoparticles. They want to fit this knowledge into existing accounts of larger-scale particles in physics, chemistry, and biology. Or they want to understand how engineered and natural nanoparticles challenge those accounts. They want to understand why these particles have the causal properties that they do. Prudent action, from the scientific point of view, requires that we not change the received body of knowledge called science until we know what we’re talking about.

The engineers (with apologies here to academic engineers who are more interested in knowledge-creation than product-creation) want to make things and solve problems. Prudence on their view involves primarily ends-means or instrumental rationality. To pursue the wrong means to an end—for instance, to try to construct a new macro-level material from a supposed stock of a particular engineered nanoparticle, without a characterization or verification of what counts as one of those particles—is just wasted effort. For the engineers, wasted effort is a bad thing, since there are problems that want solutions, and solutions (especially to public health and environmental problems) are time sensitive. Some of these problems have solutions that are non-nanotech, and the market rewards the first through the gate. But the engineers don’t need a complete scientific understanding of nanoparticles to forge ahead with efforts. As Henry Petroski recently said in the Washington Post (1/25/09), “[s]cience seeks to understand the world as it is; only engineering can change it.”

The philosophers are of course a more troublesome lot. Prudence on their view takes on a distinctly moral tinge, but they recognize the other forms too. Philosophers are mostly concerned with the goodness of the ends pursued by the engineers, and the power of the knowledge pursued by the scientists. Ever since von Neumann’s suggestion of the technological inevitability of scientific knowledge, some philosophers have worried that today’s knowledge, set aside perhaps because of excessive risks, can become tomorrow’s disastrous products.

The key disagreement, though, is between the engineers and the philosophers, and the central issues concern the plurality of good ends, and the incompatibility of some of them with others. For example, it is certainly a good end to have clean drinking water worldwide today, and we might move towards that end by producing filtration systems with nanoscale silver or some other product. It is also a good end to have healthy aquatic ecosystems today, and to have viable fisheries tomorrow, and future people to benefit from them. These ends may not all be compatible. When we add up the good ends over many scales, the balancing problem becomes almost insurmountable. Just consider a quick accounting: today’s poor, many of whom will die from water-borne disease; cancer patients sickened by the imprecise “cures” given to them; future people whose access to clean water and sustainable forms of energy hangs in the balance. We could go on.

When we think about these three fields and their allegedly separate conceptions of prudent action, it becomes clear that their conceptions of prudence can be held by one and the same person, without fear of multiple personality disorder. Better, then, to consider these scientific, engineering, and philosophical mindsets, which are held in greater or lesser concentrations by many researchers. That they are held in different concentrations by the collective consciousness of the nanotechnology field is manifest, it seems, by the disagreement over the right principle of action to follow.

I don’t want to “psychologize” or explain away the debate over principles here, but isn’t it plausible to think that advocates of the Precautionary Principle have the philosophical mindset to a great degree, and so they believe that catastrophic harm to future generations isn’t worth even a very small risk? That is because they count the good ends to be lost as greater in number (and perhaps in goodness) than the good ends to be gained.

Those of the engineering mindset, on the other hand, want to solve problems for people living now, and they might not worry so much about future problems and future populations. They are apt to prefer a straightforward Cost-Benefit Principle, with serious discounting of future costs. The future, after all, will have its own engineers, and a new set of tools for the problems it faces. Of course, those of us alive today will in large part create the problems faced by those future people. But we will also bequeath to them our science and engineering.

I’d like to offer a conjecture at this point about the basic insolubility of tensions between the scientific, engineering, and philosophical mindsets and their conceptions of prudent action. The conjecture is inspired by the Impossibility Theorem of the Nobel Prize winning economist Kenneth Arrow, but only informally resembles his brilliant conclusion. In a nutshell, it is this. If we believe that the nanotechnology field has to aggregate preferences for prudential action over these three mindsets, where there are multiple choices to be made over development and commercialization of nanotechnology’s products, we will not come to agreement on what counts as prudent action. This conjecture owes as much to the incommensurability of various good ends, and the means to achieve them, as it does to the kind of voting paradox of which Arrow’s is just one example.
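To give the flavor of that conjecture in concrete terms, here is a minimal sketch of the voting-paradox side of the argument. The three mindsets, the three courses of action, and the preference orderings below are invented for illustration, not data about the field; they are only a toy showing how pairwise majority preferences among the mindsets can cycle, leaving no stable "most prudent" choice.

```python
# Toy illustration: a Condorcet-style cycle among three hypothetical "mindsets".
# Each mindset ranks three courses of action for a nanomaterial, best first.
preferences = {
    "scientific":    ["study further", "restrict use", "commercialize now"],
    "engineering":   ["commercialize now", "study further", "restrict use"],
    "philosophical": ["restrict use", "commercialize now", "study further"],
}

def majority_prefers(a, b):
    """True if a majority of mindsets rank option a above option b."""
    wins = sum(1 for ranking in preferences.values()
               if ranking.index(a) < ranking.index(b))
    return wins > len(preferences) / 2

options = ["commercialize now", "study further", "restrict use"]
for i, a in enumerate(options):
    for b in options[i + 1:]:
        winner, loser = (a, b) if majority_prefers(a, b) else (b, a)
        print(f"a majority prefers '{winner}' over '{loser}'")

# Output: commercialize beats study, study beats restrict, restrict beats commercialize.
# The pairwise majorities form a cycle, so aggregation yields no single prudent course.
```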

If I am right in this conjecture, we shouldn’t be compelled to try to please all of the people all of the time. Once we give up on this “everyone wins” mentality, perhaps we can get on with the business of making difficult choices that will create different winners and losers, both now and in the future. Perhaps we will also get on with the very difficult task of achieving a comprehensive understanding of the goals of science, engineering, and ethics.

Thomas M. Powers, PhD
Director—Science, Ethics, and Public Policy Program
and
Assistant Professor of Philosophy
University of Delaware

According to the Associated Press, Abdul Qadeer Khan is now free to “move around” and is no longer under house arrest (where he was confined since 2004).

“In January 2004, Khan confessed to having been involved in a clandestine international network of nuclear weapons technology proliferation from Pakistan to Libya, Iran and North Korea. On February 5, 2004, the President of Pakistan, General Pervez Musharraf, announced that he had pardoned Khan, who is widely seen as a national hero.” (Source)

For more information about nuclear proliferation, see:

See also this recent post by Michael Anissimov, the Fundraising Director of the Lifeboat Foundation.

(This essay has been published by the Innovation Journalism Blog — here — Deutsche Welle Global Media Forum — here — and the EJC Magazine of the European Journalism Centre — here)

Thousands of lives were consumed by the November terror attacks in Mumbai.

“Wait a second”, you might be thinking. “The attacks were truly horrific, but all news reports say around two hundred people were killed by the terrorists, so thousands of lives were definitely not consumed.”

You are right. And you are wrong.

Indeed, around 200 people were murdered by the terrorists in an act of chilling exhibitionism. And still, thousands of lives were consumed. Imagine that a billion people devoted, on average, one hour of their attention to the Mumbai tragedy: following the news, thinking about it, discussing it with other people. The number is a wild guess, but the guess is far from a wild number. There are over a billion people in India alone. Many there spent whole days following the drama. One billion people times one hour is one billion hours, which is more than 100,000 years. The global average life expectancy is today 66 years. So nearly two thousand lives were consumed by news consumption. It’s far more than the number of people murdered, by any standards.
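The back-of-the-envelope arithmetic behind that estimate can be written out explicitly. The one-billion-hour figure is, as noted, a guess, and the values below (hours per year, the 66-year life expectancy) are only the assumptions needed to reproduce the numbers in the paragraph above.

```python
# Sanity check of the "thousands of lives consumed" estimate.
viewers = 1_000_000_000          # assumption: one billion people paid attention
hours_each = 1                   # assumption: one hour of attention per person
hours_per_year = 24 * 365.25     # about 8,766 hours in a year
life_expectancy_years = 66       # global average cited in the text

total_hours = viewers * hours_each
total_years = total_hours / hours_per_year        # ~114,000 years: "more than 100,000"
lifetimes = total_years / life_expectancy_years   # ~1,700 lifetimes: "nearly two thousand"

print(f"{total_years:,.0f} person-years of attention, about {lifetimes:,.0f} lifetimes")
```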

In a sense, the newscasters became unwilling bedfellows of the terrorists. One terrorist survived the attacks, confessing to the police that the original plan had been to top off the massacre by taking hostages and outlining demands in a series of dramatic calls to the media. The terrorists wanted attention. They wanted the newsgatherers to give it to them, and they got it. Their goal was not to kill a few hundred people. It was to scare billions, forcing people to change reasoning and behavior. The terrorists pitched their story by being extra brutal, providing news value. Their targets, among them luxury hotels frequented by the international business community, provided a set of target audiences for the message of their sick reality show. Several people in my professional surroundings canceled business trips to Mumbai after watching the news. The terrorists succeeded. We must count on more terror attacks on luxury hotels in the future.

Can the journalists and news organizations who were in Mumbai be blamed for serving the interests of the terrorists? I think not. They were doing their jobs, reporting on the big scary event. The audience flocked to their stories. Their business model — generating and brokering attention — was exploited by the terrorists. The journalists were working on behalf of the audience, not on behalf of the terrorists. But that did not change the outcome. The victory of the terrorists grew with every eyeball that was attracted by the news. Without doubt, one of the victims was the role of journalism as a non-involved observer. It got zapped by a paradox. It’s not the first time. Journalism always follows “the Copenhagen interpretation” of quantum mechanics: You can’t measure a system without influencing it.

Self-reference is a classic dilemma for journalism. Journalism wants to observe, not be an actor. It wants to cover a story without becoming part of it. At the same time it aspires to empower the audience. But by empowering the audience, it becomes an actor in the story. Non-involvement won’t work; it is a self-referential paradox like the Epimenides paradox (the prophet from Crete who said “All Cretans are liars”). The basic self-referential paradox is the liar’s paradox (“This sentence is false”). This can be a very constructive paradox, if taken by the horns. It inspired Kurt Gödel to reinvent the foundations of mathematics by addressing self-reference. Perhaps the principles of journalism can be reinvented, too? Perhaps the paradox of non-involvement can be replaced by an ethics of engagement as practiced by, for example, psychologists and lawyers?

While many classic dilemmas provide constant frustration throughout life, this one is about to get increasingly wicked. Here is why. It is only 40 years since the birth of collaboration between people sitting behind computers linked by a network, “the mother of all demos”, when Doug Engelbart and his team at SRI demoed the first computer mouse, interactive text, video conferencing, teleconferencing, e-mail and hypertext.

Only 40 years after their first demo, and only 15 years after the Internet reached beyond the walls of university campuses, Doug’s tools are in almost every home and office. Soon they’ll be built into every cell phone. We are always online. For the first time in human history, the attention of the whole world can soon be summoned simultaneously. If we summon all the attention the human species can supply, we can focus two hundred human years of attention onto a single issue in a single second. This attention comes equipped with growing computing power that can process information in a big way.

Every human on the Net is using a computer device able to do millions or billions of operations per second. And more is to come. New computers are always more powerful than their predecessors. The power has doubled every two years since the birth of computers. This is known as Moore’s Law.

If the trend continues for another 40 years, people will be using computers one million times more powerful than today. Try imagining what you can do with that in your phone or hand-held gaming device! Internet bandwidth is also booming. Everybody on Earth will have at least one gadget. We will all be well connected. We will all be able to focus our attention, our ideas and our computational powers on the same thing at the same go. That’s pretty powerful. This is actually what Doug was facilitating when he dreamed up the Demo. The mouse — what Doug is famous for today — is only a detail. Doug says we can only solve the complex problems of today by summoning collective intelligence. Nuclear war, pandemics, global warming. These are all problems requiring collective intelligence. The key to collective intelligence is collective attention. The flow of attention controls how much of our collective intelligence gets allocated to different things.
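The "one million times more powerful" figure follows directly from the doubling assumption; a quick check, taking the idealized two-year doubling period at face value:

```python
# 40 years of doubling every two years gives 20 doublings.
years = 40
doubling_period = 2                 # assumption: a clean two-year doubling time
doublings = years // doubling_period
growth_factor = 2 ** doublings      # 2**20 = 1,048,576, roughly one million
print(f"{doublings} doublings -> {growth_factor:,}x more powerful")
```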

When Doug Engelbart keynoted the Fourth Conference on Innovation Journalism, he pointed out that journalism is the perception system of collective intelligence. He hit the nail on the head. When people share news, they have a story in common. This shapes a common picture of the world and a common set of narratives for discussing it. It is agenda setting (there is an established “agenda-setting theory” about this). Journalism is the leading mechanism for generating collective attention. Collective attention is needed for shaping a collective opinion. Collective intelligence might require a collective opinion in order to address collective issues.

Here is where innovation journalism can help. In order for collective intelligence to transform ideas into novelties, we need to be able to generate common sets of narratives around how innovation happens. How do people and organizations doing different things come together in the innovation ecosystem? Narratives addressing this question make it possible for each one of us to relate to the story of innovation. Innovation journalism turns collective attention on new things in society that will increase the value of our lives. This collective attention in turn facilitates the formulation of a collective opinion. Innovation journalism thus connects the innovation economy and democracy (or any other system of governance).

There is an upside and a downside to everything. We can now summon collective attention to track the spread of diseases. But we are also more susceptible to fads, hypes and hysterias. Will our ability to focus collective attention improve our lives or will we become victims of collective neurosis?

We are moving into the attention economy. Information is no longer a scarce commodity. But attention is. Some business strategists think ‘attention transactions’ can replace financial transactions as the focus of our economy. In this sense, the effects of collective attention on society are the macroeconomics of the attention economy. Collective attention is key to exercising collective intelligence. Journalism — the professional generator and broker of collective attention — is a key factor.

This brings us back to Mumbai. How collectively intelligent was it to spend thousands of human lifetimes of attention following the slaughter of hundreds? The jury is out on that one — it depends on the outcome of our attention. Did the collective attention benefit the terrorists? Yes, at least in the short term. Perhaps even in the long term. Did it help solve the situation in Mumbai? Unclear. Could the collective attention have been aimed in other ways at the time of the attacks, which would have had a better outcome for people and society? Yes, probably.

The more wired the world gets, the more terrorism can thrive. When our collective attention grows, the risk of collective fear and obsession grows with it. It is a threat to our collective mental health, one that will only increase unless we introduce some smart self-regulating mechanisms. These could direct our collective attention to the places where it would benefit society instead of harming it.

The dynamics between terrorism and journalism is a market failure of the attention economy.

No, I am not supporting government control over the news. A planned economy has proven not to be a solution to market failures. The problem needs to be solved by a smart feedback system. Solutions may lie in new business models for journalism that give journalism incentives to generate constructive and proportional attention around issues, empowering people and bringing value to society. Just selling raw eyeballs or Internet traffic by the pound to advertisers is a recipe for market failure in the attention economy. So perhaps it is not all bad that the traditional raw-eyeball business models are being re-examined. It is a good time for researchers to look at how different journalism business models generate different sorts of collective attention, and how that drives our collective intelligence. Really good business models for journalism bring prosperity to the journalism industry, its audience, and the society it works in.

For sound new business models to arise, journalism needs to come to grips with its inevitable role as an actor. Instead of discussing why journalists should not get involved with sources or become parts of the stories they tell, perhaps the solution is for journalists to discuss why they should get involved. Journalists must find a way to do so without losing the essence of journalism.

Ulrik Haagerup is the leader of the Danish National Public News Service, DR News. He is tired of seeing ‘bad news makes good news and good news makes bad news’. Haagerup is promoting the concept of “constructive journalism”, which focuses on enabling people to improve their lives and societies. Journalism can still be critical, independent and kick butt.

The key issue Haagerup pushes is that it is not enough to show the problem and the awfulness of horrible situations. That only feeds collective obsession, neurosis and, ultimately, depression. Journalism must cover problems from the perspective of how they can be solved. Then our collective attention can be very constructive. Constructive journalism will look for all kinds of possible solutions, comparing and scrutinizing them, finding relevant examples and involving the stakeholders in the process of finding solutions.

I will be working with Haagerup this summer; together with Willi Rütten of the European Journalism Centre, we will be presenting a workshop on ‘constructive innovation journalism’ at the Deutsche Welle Global Media Summit, 3–5 June 2009.

Sometimes what may save your life can come from the most unexpected places. Then sometimes, what can save your life in one circumstance may be highly risky, or at least technologically premature, in another. The Lifeboat Foundation is about making those distinctions regarding emerging technologies and knowing the difference.

MIT scientists from the Institute for Soldier Nanotechnologies announced in January 2007 that they had reached an elusive engineering milestone: they had successfully created a synthetic material with the same properties as spider silk.1 The combination of elasticity and strength in spider silk has long been a target for synthetic manufacturing, with the potential to improve materials as diverse as packaging, clothing, and medical devices. Using tiny clay disks approximately one billionth of a meter in size, the researchers combined these nanocrystals with a rubbery polymer to create a stretchy but strong polymer nanocomposite.

The use of nanocomposites for the production of packaging materials or clothing seems relatively safe and non-controversial because the materials remain outside the body. The United States military has already indicated, according to one source, its desire to use the material for military uniforms and to improve packaging for those lovely-tasting MREs.2 In fact, this is why the Army-funded Institute for Soldier Nanotechnologies is supporting the research—to develop pliable but tough body armor for soldiers in combat. Moreover, imagine, for example, a garbage bag that could hold an anvil without breaking. The commercial applications may be endless—but there should be real concern regarding the ways in which these materials might be introduced into human bodies.

Although this synthetic spider silk may conjure up images of one day being able to have the capabilities of Peter Parker or unbreakable, super-strength bones, there are some real concerns regarding the potential applications of this technology, particularly for medical purposes. Some have argued that polymer nanocomposite materials could be used as the mother of all Band-Aids or nearly indestructible stents. For hundreds of years, spider silks have been thought to have great potential for wound covering. In general, nanocomposite materials have been heralded for medical applications as diverse as bone grafts to antimicrobial surfaces for medical instruments.

While it would be ideal to have a nanocomposite that is both flexible and tough for use in bone replacements and grafts, the concern is that in vivo use might affect the integrity and properties of the material. Moreover, what happens when a nano-stent begins to break down? Would we be able to detect nano-sized clay particles breaking away from a wound cover and rushing under the skin, or racing through our bloodstream from a nano-stent? Without the ability to monitor the integrity of such a device, and given that the composite materials involved are less than a thousandth the size of a human hair, should we really be moving toward introducing such materials into human bodies? The obvious answer is that without years of clinical trials in humans, such clinical applications cannot, and will not, happen.

Although the spider-silk synthetic would be ideal for certain applications, medical products would ideally be made of biodegradable materials, and this clay-based polymer nanocomposite is not. Thus, although the MIT scientists have proved the concept of polymer nanocomposites that possess the properties of spider silk, they have not conclusively shown that these would be useful for particular biomedical interventions; that would require human clinical trials, which could be 5–10 years in the future.

In the meantime, however, such scientific advances should be applied to materials-science problems just like the ones being addressed at the MIT Institute for Soldier Nanotechnologies. Nanomaterials used outside the human body or for improving consumer products are an important development in applied nanotechnology. They can, and will, improve the lives of servicemen and women once their safety and efficacy in real-world environments are tested, and eventually improve consumer products as well.

So the next time you see a spider in the corner rather than smashing it into oblivion, you may just want to look at it for a moment and say “Thank you”. (And then run, if you wish.) But stay tuned…medical applications will some day come as well. Some day a spider may just save your life.

Summer Johnson, PhD
Member, Lifeboat Foundation and Nanoethics Columnist for Nanotech-Now.com and Lifeboat Foundation

Executive Managing Editor, The American Journal of Bioethics

1. MIT News, January 17, 2007. “Nanocomposite Research Yields Strong But Stretchy Fibers.”

2. NanoScienceWorks. “MIT Nanocomposite Research Yields Lycra-like Fibers — Strong and Stretchy Material Inspired by Spider Silk.”

The projected size of Barack Obama’s “stimulus package” is heading north, from hundreds of billions of dollars into the trillions. And the Obama program comes, of course, on top of the various Bush administration bailouts and commitments, estimated to run as high as $8.5 trillion.

Will this money be put to good use? That’s an important question for the new President, and an even more important question for America. The metric for all government spending ultimately comes down to a single query: What did you get for it?

If such spending was worth it, that’s great. If the country gets victory in war, or victory over economic catastrophe, well, obviously, it was worthwhile. The national interest should never be sacrificed on the altar of a balanced budget.

So let’s hope we get the most value possible for all that money–and all that red ink. Let’s hope we get a more prosperous nation and a cleaner earth. Let’s also hope we get a more secure population and a clear, strategic margin of safety for the United States. Yet how do we do all that?

There’s only one best way: Put space exploration at the center of the new stimulus package. That is, make space the spearhead rationale for the myriad technologies that will provide us with jobs, wealth, and vital knowhow in the future. By boldly going where no (hu)man has gone before, we will change life here on earth for the better.

To put it mildly, space was not high on the national agenda during 2008. But space and rocketry, broadly defined, are as important as ever. As Cold War arms-control theology fades, the practical value of missile defense–against superpowers, also against rogue states, such as Iran, and high-tech terrorist groups, such as Hezbollah and Hamas–becomes increasingly obvious. Clearly Obama agrees; it’s the new President, after all, who will be keeping pro-missile defense Robert Gates on the job at the Pentagon.

The bipartisan reality is that if missile offense is on the rise, then missile defense is surely a good idea. That’s why increasing funding for missile defense engages the attention of leading military powers around the world. And more signs appear, too, that the new administration is in that same strategic defense groove. A January 2 story from Bloomberg News, headlined “Obama Moves to Counter China With Pentagon-NASA Link,” points the way. As reported by Demian McLean, the incoming Obama administration is looking to better coordinate DOD and NASA; that only makes sense: After all, the Pentagon’s space expenditures, $22 billion in fiscal year 2008, are almost a third more than NASA’s. So it’s logical, as well as economical, to streamline the national space effort.

That’s good news, but Obama has the opportunity to do more. Much more.

Throughout history, exploration has been a powerful strategic tool. Both Spain and Portugal turned themselves into superpowers in the 15th and 16th centuries through overseas expansion. By contrast, China, which at the time had a technological edge over the Iberian states, chose not to explore and was put on the defensive. Ultimately, as we all know, China’s retrograde policies pushed the Middle Kingdom into a half-millennium-long tailspin.

Further, we might consider the enormous advantages that England reaped by colonizing a large portion of the world. Not only did Britain’s empire generate wealth for the homeland, albeit often cruelly, but it also inspired technological development at home. And in the world wars of the 20th century, Britain’s colonies, past and present, gave the mother country the “strategic depth” it needed for victory.

For their part, the Chinese seem to have absorbed these geostrategic lessons. They are determined now to be big players in space, as a matter of national grand strategy, independent of economic cycles. In 2003, the People’s Republic of China powered its first man into space, becoming only the third country to do so. And then, more ominously, in 2007, China shot down one of its own weather satellites, just to prove that it had a robust satellite-killing capacity.

Thus the US and all the other space powers are on notice: In any possible war, the Chinese have the capacity to “blind” our satellites. And now they plan to put a man on the moon in the next decade. “The moon landing is an extremely challenging and sophisticated task,” declared Wang Zhaoyao, a spokesman for China’s space program, in September, “and it is also a strategically important technological field.”

India, the other emerging Asian superpower, is paying close attention to its rival across the Himalayas. Back in June, The Washington Times ran this thought-provoking headline: “China, India hasten arms race in space/U.S. dominance challenged.” According to the Times report, India, possessor of an extensive civilian satellite program, means to keep up with emerging space threats from China, by any means necessary. Army Chief of Staff Gen. Deepak Kapoor said that his country must “optimize space applications for military purposes,” adding, “the Chinese space program is expanding at an exponentially rapid pace in both offensive and defensive content.” In other words, India, like every other country, must compete–because the dangerous competition is there, like it or not.

India and China have fought wars in the past; they obviously see “milspace” as another potential theater of operations. And of course, Japan, Russia, Brazil, and the European Union all have their own space programs.

Space exploration, despite all the bonhomie about scientific and economic benefit for the common good, has always been driven by strategic competition. Beyond mere macho “bragging rights” about being first, countries have understood that controlling the high ground, or the high frontier, is a vital military imperative. So we, as a nation, might further consider the value of space surveillance and missile defense. It’s hard to imagine any permanent peace deal in the Middle East, for example, that does not include, as an additional safeguard, a significant commitment to missile and rocket defense, overseen by impervious space satellites. So if the U.S. and Israel, for example, aren’t there yet, well, they need to get there.

Americans, who have often hoped that space would be a demilitarized preserve for peaceful cooperation, need to understand that space, populated by humans and their machines, will be no different from earth, populated by humans and their machines. That is, every virtue, and every evil, that is evident down here will also be evident up there. If there have been, and will continue to be, arms races on earth, then there will be arms races in space. As we have seen, other countries are moving into space in a big way–and they will continue to do so, whether or not the U.S. participates.

Meanwhile, in the nearer term, if the Bush administration’s “forward strategy of freedom”–the neoconservative idea that we would make America safe by transforming the rest of the world–is no longer an operative policy, then we will, inevitably, fall back on “defense” as the key idea for making America safe.

But in the short run, of course, the dominant issue is the economy. Aside from the sometimes inconvenient reality that national defense must always come first, the historical record shows that high-tech space work is good for the economy; the list of spinoffs from NASA, spanning the last half-century, is long and lucrative.

Moreover, a great way to guarantee that the bailout/stimulus money is well spent is to link it to a specific goal—a goal which will in turn impose discipline on the spenders. During the New Deal, for example, there were many accusations of malfeasance against FDR’s “alphabet soup” of agencies, and yet the tangible reality in the ’30s was that things were actually getting done. Jobs were created and, just as important, enduring projects were being built; from post offices to Hoover Dam to the Tennessee Valley Authority, America was transformed.

Even into the 50s and 60s, the federal government was spending money on ambitious and successful projects. The space program was one, but so was the interstate highway program, as well as that new government startup, ARPANET.

Indeed, it could be argued that one reason the federal government has grown less competent and more flabby over the last 30 years is the relative lack of “hard” Hamiltonian programs–that is, nuts and bolts, cement and circuitry–to provide a sense of bottom-line rigor to the spending process.

And so, for example, if America were to succeed in building a space elevator–in its essence a 22,000-mile cable, operating like a pulley, dangling down from a stationary satellite, a concept first put forth in the late 19th century–that would be a major driver for economic growth. Japan has plans for just such a space elevator; aren’t we getting a little tired of losing high-tech economic competitions to the Japanese?

So a robust space program would not only help protect America; it would also strengthen our technological economy.

But there’s more. In the long run, space spending would be good for the environment. Here’s why:

History, as well as common sense, tells us that the overall environmental footprint of the human race rises alongside wealth. That’s why, for example, the average American produces five times as much carbon dioxide per year as the average person dwelling anywhere else on earth. Even homeless Americans, according to an MIT study–and even the most scrupulously green Americans–produce twice as much CO2, per person, as the rest of the world. Around the planet, per capita carbon dioxide emissions closely track per capita income.

A holistic understanding of homo sapiens in his environment will acknowledge the stubbornly acquisitive and accretive reality of human nature. And so a truly enlightened environmental policy will acknowledge another blunt reality: that if the carrying capacity of the earth is finite, then it makes sense, ultimately, to move some of the population of the earth elsewhere–into the infinity of space.

Advocates of ZPG and NPG (zero and negative population growth) have their own ideas, of course, but those ideas don’t seem to be popular in America, let alone the world. In the no-limits infinity of space, by contrast, there is plenty of room for diversity and political experimentation, just as there were multiple opportunities in centuries past in the New World. The main variable is developing the space-traveling capacity to get up there–to the moon, Mars, and beyond–to see what’s possible.

Instead, the ultimately workable environmental plan–the ultimate vision for preserving the flora, the fauna, and the ice caps–is to move people, and their pollution, off this earth.

Indeed, space travel is surely the ultimate plan for the survival of our species, too. Eventually, through runaway WMD, or runaway pollution, or a stray asteroid, or some Murphy-esque piece of bad luck, we will learn that our dominion over this planet is fleeting. That’s when we will discover the grim true meaning of Fermi’s Paradox.

In various ways, humankind has always anticipated apocalypse. And so from Noah’s Ark to “Silent Running” to “Wall*E,” we have envisioned ways for us and all other creatures, great and small, to survive. The space program, stutteringly nascent as it might be, can be seen as a slow-groping understanding that lifeboat-style compartmentalization, on earth and in the heavens, is the key to species survival. It’s a Darwinian fitness test that we ought not to flunk.

Barack Obama, who has blazed so many trails in his life, can blaze still more, including a track to space, over the far horizon of the future. In so doing, he would be keeping faith with a figure that he in many ways resembles, John F. Kennedy. It was the 35th President who declared that not only would America go to the moon, but that we would lead the world into space.

As JFK put it so ringingly back in 1962:

The vows of this Nation can only be fulfilled if we in this Nation are first, and, therefore, we intend to be first. In short, our leadership in science and in industry, our hopes for peace and security, our obligations to ourselves as well as others, all require us to make this effort, to solve these mysteries, to solve them for the good of all men, and to become the world’s leading space-faring nation.

Today the 44th President must spend a lot of money to restore our prosperity, but he must spend it wisely. He must also keep America secure against encroaching threats, even as he must improve the environment in the face of a burgeoning global economy.

Accomplishing all these tasks is possible, but not easy. Yes, of course he will need new ideas, but he will also need familiar and proven ideas. One of the best is fostering and deploying profound new technology in pursuit of expansion and exploration.

The stars, one might hope, are aligning for just such a rendezvous with destiny.

Tracking your health is a growing phenomenon. People have historically measured and recorded their health using simple tools: a pencil, paper, a watch and a scale. But with custom spreadsheets, streaming wifi gadgets, and a new generation of people open to sharing information, this tracking is moving online. Pew Internet reports that 70–80% of Internet users go online for health reasons, and Health 2.0 websites are popping up to meet the demand.

David Shatto, an online health enthusiast, wrote in to CureTogether, a health-tracking website, with a common question: “I’m ‘healthy’ but would be interested in tracking my health online. Not sure what this means, or what a ‘healthy’ person should track. What do you recommend?”

There are probably as many answers to this question as there are people who track themselves. The basic measures that apply to most people are (a minimal example log is sketched below):
- sleep
- weight
- calories
- exercise
People who have an illness or condition will also measure things like pain levels, pain frequency, temperature, blood pressure, day of cycle (for women), and results of blood and other biometric tests. Athletes track heart rate, distance, time, speed, location, reps, and other workout-related measures.
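For anyone who wants to start with nothing fancier than a spreadsheet or a short script, here is a minimal sketch of a daily log covering the basic measures listed above. The field names and sample values are purely illustrative assumptions, not a recommendation from CureTogether or anyone quoted here.

```python
import csv
from datetime import date

# A hypothetical daily record covering the basic measures discussed above.
# Extend it with pain levels, blood pressure, workout details, etc. as needed.
today = {
    "date": date.today().isoformat(),
    "sleep_hours": 7.5,      # sample value
    "weight_kg": 70.2,       # sample value
    "calories": 2100,        # sample value
    "exercise_minutes": 30,  # sample value
}

# Append the record to a CSV file that any spreadsheet program can open.
with open("health_log.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(today.keys()))
    if f.tell() == 0:        # write the header only if the file is empty
        writer.writeheader()
    writer.writerow(today)
```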

Another answer to this question comes from Karina, who writes on Facebook: “It’s just something I do, and need to do, and it’s part of my life. So, in a nutshell, on most days I write down what I ate and drank, how many steps I walked, when I went to bed and when I woke up, my workouts and my pain/medication/treatments. I also write down various comments about meditative activities and, if it’s extreme, my mood.”

David’s question is being asked by the media too. Thomas Goetz, deputy editor of Wired Magazine, writes about it in his blog The Decision Tree. Jamin Brophy-Warren recently wrote about the phenomenon of personal data collection in the Wall Street Journal, calling it the “New Examined Life”. Writers and visionaries Kevin Kelly and Gary Wolf have started a growing movement called The Quantified Self, which holds monthly meetings about self-tracking activities and devices. And self-experimenters like David Ewing Duncan (aka “Experimental Man”) and Seth Roberts (of the “Shangri-La Diet”) are writing books about their experiences.

In the end, what to track really depends on what each person wants to get out of it:
- Greater self-awareness and a way to stick to New Year’s resolutions?
- Comparing data to other self-trackers to see where you fit on the health curve?
- Contributing health data to research into finding cures for chronic conditions?

Based on answers to these questions, you can come up with your own list of things to track, or take some of the ideas listed above. Whatever the reason, tracking is the new thing to do online and can be a great way to optimize and improve your health.

Alexandra Carmichael is co-founder of CureTogether, a Mountain View, CA startup that launched in 2008 to help people optimize their health by anonymously comparing symptoms, treatments, and health data. Its members track their health online and share their experience with 186 different health conditions. She is also the author of The Collective Well and Ecnalab blogs, and a guest blogger at the Quantified Self.

The year 2008 saw the hype fall away from virtual worlds, but in contrast social networks are going from strength to strength and are being increasingly used as protest vehicles around the world. While the utility of Facebook and Twitter (using the #griot descriptor to report on the riots in Greece) has been widely reported upon, some of the more interesting and interactive information can still be found in Second Life, which bodes well for the future of virtual worlds. A full report and links relating to this phenomenon are over at the MetaSecurity blog. Whether it be web forums, Facebook, or Second Life, virtual communities will continue to be an increasingly important part of the national security picture in 2009.

In the volume Global Catastrophic Risks you can find an excellent article by Milan Circovic, “Observation Selection Effects and Global Catastrophic Risks,” where he shows that we cannot use information from past records to estimate the future rate of global catastrophes.
This has one more consequence, which I investigate in my article “Why antropic principle stops to defend us. Observation selection, future rate of natural disasters and fragility of our environment”: we could be at the end of a long period of stability, some catastrophes may be long overdue, and, most importantly, we could be underestimating the fragility of our environment, which could be on the verge of bifurcation. This is because the origination of intelligent life on the Earth is a very rare event, which means that some critical parameters may lie near their bounds of stability, and small anthropogenic influences could start a catastrophic process in this century.

http://www.scribd.com/doc/8729933/Why-antropic-principle-stops-to-defend-us-Observation-selection-and-fragility-of-our-environment–

Why antropic principle stops to defend us
Observation selection, future rate of natural disasters and fragility of our environment.

Alexei Turchin,
Russian Transhumanist movement

The previous version of this article was published in Russian in «Problems of management of risks and safety», Works of the Institute of System Analysis of the Russian Academy of Sciences, v. 31, 2007, pp. 306–332.

Abstract:

The main idea of this article is not only that observation selection leads to an underestimation of the future rate of natural disasters, but also that our environment is much more fragile to anthropogenic influences (like an over-inflated toy balloon), again because of observation selection; so we should think much more carefully about global warming and deep-earth drilling.
The main idea of the anthropic principle (AP) is that our Universe has qualities that allow the existence of observers. In particular, this means that global natural disasters that could have prevented the development of intelligent life on the Earth have never happened here. This is true only for the past, not for the future. So we cannot use information about the frequency of global natural disasters in the past to extrapolate it into the future, except in some special cases where we have additional information, as Circovic shows in his paper. Therefore, an observer could find that all the parameters important for his or her survival (the sun, temperature, asteroid risk, etc.) start, all together, to deteriorate inexplicably and quickly; and possibly we can already find signs of this process. In a few words: the anthropic principle has stopped ‘defending’ humanity, and we should take responsibility for our own survival. Moreover, since the origination of intelligent life on the Earth is a very rare event, some critical parameters may lie near their bounds of stability, and small anthropogenic influences could start a catastrophic process in this century.
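A small simulation makes the observation-selection argument concrete: observers can only exist on worlds whose past record happens to be free of sterilizing catastrophes, so that record tells them little about the true underlying rate. The numbers below (the true per-century probability, the length of the record, the number of simulated worlds) are arbitrary assumptions chosen only to illustrate the bias, not estimates of any real risk.

```python
import random

# Illustrative Monte Carlo: observers arise only on worlds with no past catastrophe,
# so every observer's "historical record" shows zero events regardless of the true rate.
random.seed(0)
true_p = 0.03        # assumed true probability of a sterilizing catastrophe per century
centuries = 40       # assumed length of the past record an observer can inspect
worlds = 100_000     # number of simulated worlds

survivors = sum(
    1 for _ in range(worlds)
    if all(random.random() > true_p for _ in range(centuries))
)

# Every surviving observer sees 0 catastrophes in 40 centuries and would naively
# estimate the risk as ~0, while the true risk going forward is still 3% per century.
print(f"worlds with observers: {survivors} of {worlds}")
print(f"naive estimate from any survivor's record: 0/{centuries} catastrophes")
print(f"true future risk per century: {true_p}")
```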

Nuclear warheads

Martin Hellman is a professor at Stanford, one of the co-inventors of public-key cryptography, and the creator of NuclearRisks.org. He has recently published an excellent essay about the risks of a failure of nuclear deterrence: Soaring, Cryptography and Nuclear Weapons (also available as a PDF).

I highly recommend that you read it, along with the other resources on NuclearRisks.org, and also subscribe to their newsletter (on the left of the front page).

There are also chapters on Nuclear War and Nuclear Terrorism in Global Catastrophic Risks (intro freely available as PDF here).

Update: Here’s a Martin Hellman quote from a piece he wrote called Work on Technology, War & Peace:

You have a right to know the risk of locating a nuclear power plant near your home and to object if you feel that risk is too high. Similarly, you should have a right to know the risk of relying on nuclear weapons for our national security and to object if you feel that risk is too high. But almost no effort has gone into estimating that risk. To remedy that lack of information, this effort urgently calls for in-depth studies of the risk associated with nuclear deterrence.

While this new project may seem to have a much more modest goal than Beyond War, there is tremendous hidden potential: My preliminary analysis indicates that the risk from relying on nuclear weapons is thousands of times greater than is prudent. If the results of the proposed studies are anywhere near my preliminary estimate, those studies then become merely the first step in a long-term process of risk reduction. Because many later steps in that process seem impossible from our current vantage point, it is better to leave them to be discovered as the process unfolds, thereby removing objections that the effort is not rooted in reality.