Feb 9, 2009

Nanotech Development: You Can’t Please All of the People, All of the Time

Posted by Thomas M. Powers in categories: ethics, nanotechnology, policy

Abstract

What counts as rational development and commercialization of a new technology—especially something as potentially wonderful (and dangerous) as nanotechnology? A recent newsletter of the EU nanomaterials characterization group NanoCharM got me thinking about this question. Several authors in this newsletter advocated, by a variety of expressions, a rational course of action. And I’ve heard similar rhetoric from other camps in the several nanoscience and nanoengineering fields.

We need a sound way of characterizing nanomaterials, and then an account of their fate and transport, and their novel properties. We need to understand the bioactivity of nanoparticles, and their effect in the environments where they may end up. We need to know what kinds of nanoparticles occur naturally, which are incidental to other engineering processes, and which we can engineer de novo to solve the world’s problems—and to fill some portion of the world’s bank accounts. We need life-cycle analyses, and toxicity and exposure studies, and cost-benefit analyses. It’s just the rational way to proceed. Well, who could argue with that?

Article

Leaving aside the lunatic fringe—those who would charge ahead guns (or labs) a-blazing—I suspect that there is broad but shallow agreement on and advocacy of the rational development of nanotechnology. That is, what is “rational” to the scientists might not be “rational” to many commercially oriented engineers, but each group would lay claim to the “rational” high ground. Neither conception of rational action is likely to be assimilated easily to the one shared by many philosophers and ethicists who, like me, have become fascinated by ethical issues in nanotechnology. And when it comes to rationality, philosophers do like to take the high ground but don’t always agree where it is to be found—except under one’s own feet. Standing on the top of the Karakoram giant K2, one may barely glimpse the top of Everest.

So in the spirit of semantic housekeeping, I’d like to introduce some slightly less abstract categories, to climb down from the heights of rationality and see if we might better agree (and more perspicuously disagree) on what to think and what to do about nanotechnology. At the risk of clumping together some altogether disparate researchers, I will posit that the three fields mentioned above—science, engineering, and philosophy—want different things from their “rational” courses of action.

The scientists, especially the academics, want knowledge of fundamental structures and processes of nanoparticles. They want to fit this knowledge into existing accounts of larger-scale particles in physics, chemistry, and biology. Or they want to understand how engineered and natural nanoparticles challenge those accounts. They want to understand why these particles have the causal properties that they do. Prudent action, from the scientific point of view, requires that we not change the received body of knowledge called science until we know what we’re talking about.

The engineers (with apologies here to academic engineers who are more interested in knowledge-creation than product-creation) want to make things and solve problems. Prudence on their view involves primarily means-ends or instrumental rationality. To pursue the wrong means to an end—for instance, to try to construct a new macro-level material from a supposed stock of a particular engineered nanoparticle, without a characterization or verification of what counts as one of those particles—is just wasted effort. For the engineers, wasted effort is a bad thing, since there are problems that want solutions, and solutions (especially to public health and environmental problems) are time-sensitive. Some of these problems have solutions that are non-nanotech, and the market rewards the first through the gate. But the engineers don’t need a complete scientific understanding of nanoparticles to forge ahead. As Henry Petroski recently said in the Washington Post (1/25/09), “[s]cience seeks to understand the world as it is; only engineering can change it.”

The philosophers are of course a more troublesome lot. Prudence on their view takes on a distinctly moral tinge, but they recognize the other forms too. Philosophers are mostly concerned with the goodness of the ends pursued by the engineers, and the power of the knowledge pursued by the scientists. Ever since von Neumann’s suggestion of the technological inevitability of scientific knowledge, some philosophers have worried that today’s knowledge, set aside perhaps because of excessive risks, can become tomorrow’s disastrous products.

The key disagreement, though, is between the engineers and the philosophers, and the central issues concern the plurality of good ends, and the incompatibility of some of them with others. For example, it is certainly a good end to have clean drinking water worldwide today, and we might move towards that end by producing filtration systems with nanoscale silver or some other product. It is also a good end to have healthy aquatic ecosystems today, and to have viable fisheries tomorrow, and future people to benefit from them. These ends may not all be compatible. When we add up the good ends over many scales, the balancing problem becomes almost insurmountable. Just consider a quick accounting: today’s poor, many of whom will die from water-borne disease; cancer patients sickened by the imprecise “cures” given to them; future people whose access to clean water and sustainable forms of energy hangs in the balance. We could go on.

When we think about these three fields and their allegedly separate conceptions of prudent action, it becomes clear that all three conceptions of prudence can be held by one and the same person, without fear of multiple personality disorder. Better, then, to speak of scientific, engineering, and philosophical mindsets, which are held in greater or lesser concentrations by many researchers. That they are held in different concentrations in the collective consciousness of the nanotechnology field is manifest, it seems, in the disagreement over the right principle of action to follow.

I don’t want to “psychologize” or explain away the debate over principles here, but isn’t it plausible to think that advocates of the Precautionary Principle have the philosophical mindset to a great degree, and so believe that even a very small risk of catastrophic harm to future generations is not worth taking? That is because they count the good ends to be lost as greater in number (and perhaps in goodness) than the good ends to be gained.

Those of the engineering mindset, on the other hand, want to solve problems for people living now, and they might not worry so much about future problems and future populations. They are apt to prefer a straightforward Cost-Benefit Principle, with serious discounting of future costs. The future, after all, will have its own engineers, and a new set of tools for the problems they face. Of course, those of us alive today will in large part create the problems faced by those future people. But we will also bequeath to them our science and engineering.
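
To see how much work the discount rate does in such a calculation, here is a minimal sketch with invented numbers: a modest benefit realized today is set against a much larger cost borne a century from now, and the comparison is repeated at a few discount rates. None of the figures come from the nanotechnology literature; they are placeholders chosen only to illustrate the arithmetic.

```python
def present_value(amount, years, rate):
    """Discount a future amount back to today at a constant annual rate."""
    return amount / (1 + rate) ** years

benefit_today = 1_000_000   # hypothetical benefit of deploying a nano-filtration product now
future_cost = 50_000_000    # hypothetical environmental/health cost borne 100 years from now

for rate in (0.00, 0.03, 0.07):
    discounted_cost = present_value(future_cost, years=100, rate=rate)
    net = benefit_today - discounted_cost
    print(f"rate {rate:.0%}: future cost in today's terms ~ {discounted_cost:,.0f}, net ~ {net:,.0f}")

# At 0% the 50,000,000 future cost swamps the 1,000,000 benefit (net about -49,000,000).
# At 3% the cost discounts to roughly 2,600,000, and the net is still negative.
# At 7% the cost discounts to under 60,000, and the project looks comfortably worthwhile.
```

The same physical outcome looks imprudent or prudent depending on the discount rate chosen, which is why the rate itself carries so much of the ethical weight.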

I’d like to offer a conjecture at this point about the basic insolubility of tensions among the scientific, engineering, and philosophical mindsets and their conceptions of prudent action. The conjecture is inspired by the Impossibility Theorem of the Nobel Prize-winning economist Kenneth Arrow, but only informally resembles his brilliant conclusion. In a nutshell, it is this. If we believe that the nanotechnology field has to aggregate preferences for prudential action over these three mindsets, where there are multiple choices to be made over development and commercialization of nanotechnology’s products, we will not come to agreement on what counts as prudent action. This conjecture owes as much to the incommensurability of various good ends, and the means to achieve them, as it does to the kind of voting paradox of which Arrow’s is just one example.
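
The voting-paradox side of the conjecture can be made concrete with a toy example. Below is a minimal sketch that assumes purely hypothetical rankings for the three mindsets over three invented courses of action; the options and preferences are illustrative, not drawn from any survey. When the rankings are aggregated by pairwise majority vote, the group preference cycles, so no single option stands out as the prudent one.

```python
from itertools import combinations

# Hypothetical options for a nanotech development decision (labels are invented):
#   A = commercialize promising nano-products now
#   B = finish characterization and toxicity studies before commercializing
#   C = hold back pending a full precautionary risk assessment
rankings = {
    "science":     ["B", "C", "A"],   # knowledge first
    "engineering": ["A", "B", "C"],   # solve today's problems
    "philosophy":  ["C", "A", "B"],   # weigh future harms heavily
}

def majority_prefers(x, y):
    """Return whichever of x and y a majority of mindsets ranks higher."""
    votes_for_x = sum(r.index(x) < r.index(y) for r in rankings.values())
    return x if votes_for_x > len(rankings) / 2 else y

for x, y in combinations("ABC", 2):
    print(f"{x} vs {y}: majority prefers {majority_prefers(x, y)}")

# A vs B: majority prefers A
# A vs C: majority prefers C
# B vs C: majority prefers B
# The majority preference cycles (A beats B, B beats C, C beats A), so no option
# is preferred by a majority to every alternative.
```

This is the familiar Condorcet cycle; Arrow’s theorem generalizes the difficulty to any method of aggregating such rankings that satisfies a few seemingly mild conditions, which is why the conjecture above only informally resembles it.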

If I am right in this conjecture, we shouldn’t be compelled to try to please all of the people all of the time. Once we give up on this “everyone wins” mentality, perhaps we can get on with the business of making difficult choices that will create different winners and losers, both now and in the future. Perhaps we will also get on with the very difficult task of achieving a comprehensive understanding of the goals of science, engineering, and ethics.

Thomas M. Powers, PhD
Director—Science, Ethics, and Public Policy Program
and
Assistant Professor of Philosophy
University of Delaware
