Here is a piece of news from early last month, via CNN:

WASHINGTON (AP) — Hackers briefly overwhelmed at least three of the 13 computers that help manage global computer traffic Tuesday in one of the most significant attacks against the Internet since 2002.

Experts said the unusually powerful attacks lasted for hours but passed largely unnoticed by most computer users, a testament to the resiliency of the Internet.

Behind the scenes, computer scientists worldwide raced to cope with enormous volumes of data that threatened to saturate some of the Internet’s most vital pipelines.

Experts said the hackers appeared to disguise their origin, but vast amounts of rogue data in the attacks were traced to South Korea.

The attacks appeared to target UltraDNS, the company that operates servers managing traffic for Web sites ending in “org” and some other suffixes, experts said. Company officials did not immediately return telephone calls from The Associated Press.

Among the targeted “root” servers that manage global Internet traffic were ones operated by the Defense Department and the Internet’s primary oversight body.

It is not likely that the South Korean government or a large company had anything to do with the attack. The crime was probably perpetrated by a relatively small hacker group, which underscores the potential for asymmetric cyberwarfare. I’m glad the federal government employs many people whose full-time job is defending our cyber infrastructure.

From Physorg.com:

With a typical launch cost for a spaceship around $20 million, it’s difficult to practically conceive of a space industry beyond federally funded agencies. Nevertheless, many people believe that expanding space travel—whether for research purposes, entertainment, or even colonization—is not impractical. Bridging the economic hurdle may be technologies such as the maglev launch assist. According to an analysis by John Olds and Peter Bellini, the cost of launching payloads into low Earth orbit with maglev could fall to only hundreds of dollars per pound.

Most recently, researchers in a group including Wenjiang Yang and his colleagues from the Beijing University of Aeronautics and Astronautics and the Chinese Academy of Sciences have investigated the possibility of the “Maglifter,” a maglev launch assist vehicle originally proposed in the 1980s. In this system, a spaceship would be magnetically levitated over a track and accelerated up an incline, lifting off when it reaches a velocity of 1,000 km/hr (620 miles/hr). The main cost-saving areas would come from reduced fuel consumption and the reduced mass of the spaceship.
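As a sanity check on where the savings come from, compare the sled's contribution to the kinetic energy needed for orbit. The 1,000 km/h figure is from the article; the ~7.8 km/s orbital speed is a standard low-Earth-orbit value, not stated in the text:

```python
# Back-of-envelope: how much of orbital kinetic energy does the maglev
# sled supply per kilogram of spaceship?
v_maglev = 1000 / 3.6   # 1,000 km/h converted to m/s (~277.8 m/s)
v_orbit = 7800.0        # typical low-Earth-orbit speed in m/s (assumed)

# Kinetic energy scales with v^2, so the fraction is a ratio of squares.
ke_fraction = v_maglev**2 / v_orbit**2
print(f"Maglev supplies {ke_fraction:.2%} of orbital kinetic energy")
```

The sled supplies well under 1% of the orbital kinetic energy, which shows the real leverage is elsewhere: by the rocket equation, every meter per second delivered on the ground is propellant mass the vehicle never has to carry and lift, which is where the fuel and structural savings the researchers cite come from.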

“Magnetic levitation is a promising technology for future space transportation,” Yang told PhysOrg.com. “The most expensive part of space missions to low-Earth orbit is the first few seconds—getting off the ground.”

Obviously, cost-to-orbit is highly relevant to Lifeboat’s push to build a space ark. Some might find it hard to imagine how a non-governmental organization has even a chance of building a space station in the foreseeable future, but that’s because cost-to-orbit has historically been over $10,000 per pound. With new launch technologies like maglev assist, the cost could come down to hundreds of dollars per pound or below. We can expect the drop in launch costs to accelerate once it gets started — especially with the growing interest in private space travel.
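A rough comparison using these numbers makes the point. The 100-ton station mass and the $500/lb maglev-assist figure are illustrative assumptions, not from the article:

```python
LB_PER_KG = 2.20462
station_mass_kg = 100_000   # hypothetical 100-ton space station (assumed)

# Historical cost-to-orbit (~$10,000/lb) vs. a maglev-assist figure
# in the "hundreds of dollars per pound" range (say $500/lb).
historical_cost = station_mass_kg * LB_PER_KG * 10_000
maglev_cost = station_mass_kg * LB_PER_KG * 500

print(f"Historical launch cost: ${historical_cost / 1e9:.1f} billion")
print(f"Maglev-assist cost:     ${maglev_cost / 1e6:.0f} million")
```

The same station drops from a multi-billion-dollar national project to roughly a hundred million dollars in launch costs, which is within reach of well-funded private organizations.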

Take a look at the Lifeboat Foundation EM Launch Competition!

“The importance of the space sector can be emphasized by the number of spacecraft launched. In the period from 1957 till 2005, 6,376 spacecraft were launched, an average of 133 per year. There has been a decrease in the number of spacecraft launched in recent years, with 78 launched in 2005. Of the 6,376 launches, 56.8% were military spacecraft and 43.2% were civilian. 245 manned missions were launched in this period. 1,674 communication or weather satellites were also launched. The remaining spacecraft launches have been exploration missions.”

Read the entire report here (requires free registration)

Graduate student (University of Alabama Huntsville) Blake Anderton wrote his master’s thesis on “Application of Mode-locked lasers to asteroid characterization and mitigation.” Undergraduate Gordon Aiken won a prize at a recent student conference for his poster and presentation “Space positioned LIDAR system for characterization and mitigation of Near Earth Objects.” And members of the group are building a laser system “that is the grandfather of the laser that will push the asteroids,” Fork said.

Anderton’s mode-locked lasers could characterize asteroids up to 1 AU away (1.5 × 10^11 meters). Arecibo and other radar observatories can only detect objects up to 0.1 AU away, so in theory a laser would represent a vast improvement over radar.
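The improvement is even larger than the raw range numbers suggest, because searchable volume grows with the cube of range. A quick check:

```python
# Detection range vs. surveyed volume: a 10x gain in range means a
# 1,000x gain in volume, since volume scales with the cube of radius.
laser_range_au = 1.0   # mode-locked laser characterization range (article)
radar_range_au = 0.1   # Arecibo-class radar detection range (article)

volume_ratio = (laser_range_au / radar_range_au) ** 3
print(f"Surveyed volume ratio: {volume_ratio:.0f}x")
```

So a tenfold range improvement translates into a thousandfold increase in the volume of space that can be monitored.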

A one-page PowerPoint describes their asteroid detection and deflection approach. About 12 of the 1 AU detection volumes (around the Sun in the asteroid belt) would be needed to cover the main areas for near-Earth asteroids.

40 kW femtosecond lasers could deflect an asteroid the size of Apophis (320 meters across, which would hit with 880 megatons of force) given one year of illumination and an early start in its trajectory.
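It may seem surprising that a 40 kW laser could matter against an 880-megaton impactor. Converting both to joules shows the laser never comes close to matching the impact energy, which is exactly why the early start matters: deflection relies on a small, sustained nudge to the trajectory, not on destroying or stopping the asteroid. The joules-per-megaton conversion factor is a standard value, not stated in the text:

```python
MEGATON_J = 4.184e15               # joules per megaton of TNT (standard value)
SECONDS_PER_YEAR = 365.25 * 24 * 3600

impact_energy = 880 * MEGATON_J    # Apophis impact energy (article figure)
laser_energy = 40e3 * SECONDS_PER_YEAR  # 40 kW over one year of illumination

print(f"Impact energy:         {impact_energy:.2e} J")
print(f"Laser energy per year: {laser_energy:.2e} J")
print(f"Ratio: about {impact_energy / laser_energy:,.0f} to 1")
```

The impact carries roughly three million times more energy than the laser delivers in a year; the laser works by ablating surface material to produce a tiny continuous thrust, which compounds into a large position change over decades of orbital motion.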

Asteroid shields are a project of the Lifeboat Foundation.

Solid-state lasers have already reached 67 kilowatts, and modular laser systems with mirrors for combining beams can achieve more laser power from smaller modules.


The Ballistic Missile Early Warning Radar System (BMEWS) at Fylingdales, U.K.

The ongoing debate on the proposed missile defense shield in Europe is heating up. Poland and the Czech Republic are among the possible sites, and the UK is now showing interest in supporting the missile shield. Fears over the destabilising effects of such a shield were confirmed by a Russian general who said that they would target the system.

Vladimir Putin, Russia’s president, said America would trigger an “inevitable arms race” if it deployed interceptors in Europe to knock ballistic missiles out of the sky. A senior Russian general rumbled that Russian missiles would target any interceptors in eastern Europe. Poland’s prime minister told his people that Russia was trying to “scare” them. The Czech foreign minister (a prince with a splendid moustache) complained of Russian “blackmail”.

“The aim is to break ground on a European site in 2008, and for its interceptors to become operational in 2012. This week the Polish and Czech prime ministers said they were keen on hosting the missile-defence sites. That is a change: talks with the Poles have dragged on for years, thanks to elaborate Polish demands for things such as extra missile defences for their own country. Yet both Mr Blair and his Polish rivals face objections from three sources: from Russia, from many of their own voters and from fellow European leaders.”

Source: “Missile-defence systems: Expect Fireworks”, Economist.

“In 2003, the U.K. agreed to allow the U.S. to upgrade radar stations at the Fylingdales Royal Air Force Base in northern England, one of the steps to allowing the missile shield. At the time, then-Defense Secretary Geoff Hoon said the U.K. would keep its options open about Britain taking the U.S. missile shield.”

Source: “Blair Wants Part of U.S. Missile Shield Based in U.K.”, Bloomberg.

Read more about the RAF Fylingdales base from Wikipedia

Like the Lifeboat Foundation, the Bulletin of the Atomic Scientists is an organization formed to address catastrophic technological risks. In catastrophic risk management, vision and foresight are essential. You look at the technological, social, and political trends happening today — for example, steps toward mechanical chemistry, increasing transparency, or civil atomic programs — and brainstorm with as many experts as possible about what these trends indicate about what is coming 5, 10, or 20 years down the road. Because catastrophic risk management is a long-term enterprise, one where countermeasures are ideally deployed before a threat has even materialized, the further and more clearly you try to see into the future, the better.

Traditionally, The Bulletin has focused on the risk from nuclear warfare. Lately, they have expanded their attention to all large-scale technological risks, including global warming and future risks from emerging technologies. However, the language and claims used on their website show that the organization’s members are only just beginning to get informed about the emerging technologies, and the core of their awareness still lies with the nuclear issue.

From The Bulletin’s statement regarding their decision to move the clock 5 minutes to midnight, from the “emerging technologies” section specifically:

The emergence of nanotechnology — manufacturing at the molecular or atomic level — presents similar concerns, especially if coupled with chemical and biological weapons, explosives, or missiles. Such combinations could result in highly destructive missiles the size of an insect and microscopic delivery systems for dangerous pathogens.

“Highly destructive missiles the size of an insect”? Depressingly, statements like this are a red flag that the authors and fact-checkers at The Bulletin are poorly informed about nanotechnology and molecular manufacturing. To my knowledge, no one in the entire defense research industry has ever proposed creating highly destructive missiles the size of an insect. Highly destructive missiles the size of an insect are impossible for the same reason that meals in a pill are impossible — chemical bonds only let you pack so much energy into a given space. We cannot improve the energy density of explosives like we can improve the speed of computers or the resolution of satellite imagery. There can be incremental improvements, yes, but suggesting that nanotechnology has something to do with highly destructive missiles the size of insects is not just dubious from the point of view of physics, but particularly embarrassing because it seems to have been made up from scratch, and was missed by everyone in the organization who reviewed the statement.
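The physics argument can be made concrete with rough numbers. The energy density of TNT and the factor for top chemical explosives are standard reference values, and the insect-payload and grenade masses are illustrative assumptions; none of these figures come from the article:

```python
# Standard reference values (assumptions, not from the article):
TNT_J_PER_KG = 4.6e6        # energy density of TNT, joules per kilogram
BEST_HE_FACTOR = 2.0        # top chemical explosives manage roughly ~2x TNT
GRENADE_CHARGE_KG = 0.06    # ~60 g TNT-equivalent, a typical grenade charge

insect_payload_kg = 0.001   # a generous 1-gram "insect-sized missile" charge

payload_energy = insect_payload_kg * TNT_J_PER_KG * BEST_HE_FACTOR
grenade_energy = GRENADE_CHARGE_KG * TNT_J_PER_KG
print(f"Insect-sized payload energy: {payload_energy:,.0f} J")
print(f"Fraction of one hand grenade: {payload_energy / grenade_energy:.1%}")
```

Even granting the most energetic practical explosive, a one-gram payload carries only a few kilojoules — a few percent of a single hand grenade. No plausible chemistry makes that “highly destructive.”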

The general phrasing of the statement makes it seem like the scientists who wrote it are still stuck in the way of thinking that says “molecular manufacturing has to do with molecules, and molecules are small, so the products of molecular manufacturing will be small”. This is also the bias frequently displayed by the general media, although early products based on nanotechnology (not molecular manufacturing), such as stain-resistant pants and sunscreen, also subtly direct the popular perception of nanotech. It’s natural to think that nanotechnology, and therefore molecular manufacturing, means small. However, this natural tendency is flawed. We should recall that the world’s largest organisms, up to 6,600 tons in weight, were manufactured by the molecular machines called ribosomes.

Molecular manufacturing (MM) would greatly boost manufacturing throughput and lower the cost of large products. While some associate MM with smallness, it is better thought of in connection with size and grandeur. Although microscopic killing machines built by MM will definitely become a risk by 2015–2020, the greatest risk will come from the size, performance, and sheer quantity of products. Because a nanofactory would need to be able to output its own weight in product in 12 or so hours or it wouldn’t have been developed in the first place (scaling up from a single molecular manipulator to many trillions requires 33 or so doublings — which could take a long time if the product cycle is not measured in hours), these factories, given raw materials and energy, could produce new factories at an exponential rate. Assuming a doubling time of 12 hours, a 100 kg tabletop nanofactory could be used to produce 819,200 kg worth of nanofactory in only a week. As long as the nanofactories can support their own weight and be supplied with adequate matter and energy, they can be made almost arbitrarily large. Minimal labor would be necessary because the manufacturing components are so small that they must be automated to work at all. Regulations and structural challenges from excess height can be circumvented by fabricating nanofactories that are long and wide rather than tall and fragile. Once created, these factories could be programmed to produce whatever products are technologically possible with the tools at hand — at the very least, products at least as sophisticated as the nanofactories themselves. Unscrupulous governments could use the technology to mass produce missiles, helicopters, tanks, and entirely new weapons, as long as their engineers are capable of designing diamondoid versions of these products. Their rate of production, and quality of hardware, would outclass that of non-nano-equipped nations by many orders of magnitude.
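The replication arithmetic above can be checked directly. The 100 kg factory mass and the 12-hour doubling time are the figures used in the text:

```python
initial_kg = 100.0      # tabletop nanofactory mass (figure from the text)
doubling_hours = 12.0   # assumed replication doubling time (from the text)

doublings = 13          # 13 doublings x 12 h = 6.5 days, just under a week
mass_kg = initial_kg * 2 ** doublings
days = doublings * doubling_hours / 24
print(f"After {days} days: {mass_kg:,.0f} kg of nanofactory")
```

Thirteen doublings in six and a half days turns 100 kg into 819,200 kg, and one more doubling puts it over 1.6 million kg — the defining hazard of exponential growth is that each step is as large as all the previous steps combined.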

Because unregulated, exponentially replicating molecular manufacturing units would create a severe threat to global security, it seems prudent to regulate them with care. Restrictions should be placed on what products can be manufactured and in what quantity and quality. Just as permits and inspections are required to operate industrial machinery, restrictions should be placed on industrial-scale molecular manufacturing. In some cases, preexisting regulatory infrastructure will be sufficient. In others, we’ll need to augment or expand the purview of historical regulations and customize them to address the specific challenges that MM represents.

Further Reading:

30 Essential Nanotechnology Studies
Lifeboat Foundation NanoShield
Nanotechnology Category on Accelerating Future

From Yahoo News:

RIYADH (Reuters) — Saudi Arabia, the world’s biggest oil exporter and a key U.S. ally, said on Wednesday that the kingdom does not see any obstacle to cooperating with Russia on developing a nuclear energy program.

“There is no obstacle to cooperate with Russia on… nuclear energy,” Foreign Minister Prince Saud al-Faisal told a news conference.

Analysts said the plan by Sunni bastion Saudi Arabia is a warning shot to Shi’ite Iran that it could enter the regional arms race and start developing nuclear capability.

Russian President Vladimir Putin said on Monday during a visit to Saudi Arabia that his country would consider helping the kingdom with a possible atomic energy program.

“On nuclear energy, there was a (Russian) contact with the kingdom and the Gulf Cooperation Council,” he said when asked if Saudi Arabia and Russia had made any agreements.

Saudi Arabia and fellow GCC members Qatar, Bahrain, Oman, Kuwait and the United Arab Emirates, said in December they would study embarking on a joint civil atomic program.

The announcement by the GCC, a loose economic and political alliance, raised concern of a regional arms race with analysts saying the Arab bloc wanted to match Iran’s nuclear program.

The question of whether GCC members can develop civil nuclear power without spinning off a nuclear weapons program is a controversial one. Most analysts see nuclear programs as a threat to world peace, because the temptation of developing weapons is so great. As we saw in the recent deal with North Korea, nuclear programs can be used as leverage to extract concessions, such as free energy, from other countries. This may actually increase the incentive to start nuclear programs. Russia blames the United States for kicking off a global arms race, but seems to be participating in that arms race by offering nuclear support to GCC nations. A solution that would make everyone happy would be the development of thorium nuclear reactors, which can produce electricity without making the sort of enriched uranium that can be used in a bomb. Thorium reactors are a 50-year-old technology, well within the reach of these countries given Russian assistance.

A giant asteroid named Apophis has a one in 45,000 chance of hitting the Earth in 2036. If it did hit the Earth, it could destroy a city or a region. A slate of new proposals for addressing the asteroid menace was presented at a recent meeting of the American Association for the Advancement of Science in San Francisco.

One of the Lifeboat Foundation’s projects is an Asteroid Shield, and the issues and points discussed at the meeting align directly with it. The specific detection and deflection proposals are covered in the Lifeboat Asteroid Shield project.

Edward Lu of NASA has proposed a “gravitational tractor”: a spacecraft of up to 20 tons (18 metric tons) that could divert an asteroid’s path just by thrusting its engines in a specific direction while in the asteroid’s vicinity.

Scientists also described two massive new survey-telescope projects to detect would-be killer asteroids.

One, dubbed Pan-STARRS, is slated to begin operation later this year. The project will use an array of four 6-foot-wide (1.8-meter-wide) telescopes in Hawaii to scan the skies.

The other program, the Large Synoptic Survey Telescope in Chile, will use a giant 27.5-foot-wide (8.4-meter-wide) telescope to search for killer asteroids. This telescope is scheduled for completion sometime between 2010 and 2015.
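For scale, a telescope’s light-gathering power grows with the square of its aperture, so the two surveys can be compared directly. The diameters are from the text; treating the Pan-STARRS array as four equal, combinable telescopes is a simplifying assumption:

```python
# Collecting area scales with the square of aperture diameter.
lsst_d = 8.4        # LSST aperture in meters (article figure)
panstarrs_d = 1.8   # each Pan-STARRS aperture in meters (article figure)
panstarrs_n = 4     # number of telescopes in the Pan-STARRS array (article)

area_ratio = lsst_d**2 / (panstarrs_n * panstarrs_d**2)
print(f"LSST gathers ~{area_ratio:.1f}x the light of the Pan-STARRS array")
```

A single 8.4-meter mirror collects roughly five times as much light as all four 1.8-meter telescopes combined, which is why LSST is expected to reach much fainter (and therefore smaller or more distant) asteroids.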


David Morrison, an astronomer at NASA’s Ames Research Center, said that “the rate of discoveries is going to ramp up. We’re going to see discoveries being made at 50 to 100 times the current rate.”

“You can expect asteroids like Apophis [to be found] every month.”

Schweickart, the former astronaut, thinks the United Nations needs to draft a treaty detailing standardized international measures that will be carried out in response to any asteroid threat.

His group, the Association of Space Explorers, has started building a team of scientists, risk specialists, and policymakers to draft such a treaty, which will be submitted to the UN for consideration in 2009.

Whether we like it or not, geoengineering — a process I’ve taken to calling “(re)terraforming the Earth” — is now on the table as a strategy for dealing with onrushing climate disaster. This isn’t because it’s a particularly good idea; as far as we presently know, the potential negative impacts of geoengineering projects seem to significantly outweigh any benefits. Nonetheless, (re)terraforming has drawn an increasing amount of attention over the past few months. One key reason is that, if it could be made to work, it wouldn’t just moderate climate change — i.e., slow it or stop it — it would actually serve as a climate change remediation method, reversing global warming.

The cynical and the insipid apparently believe that pursuing the geoengineering option would allow us to avoid making any changes in technology or behavior intended to reduce greenhouse gas output. This sort of logic is wrong, utterly wrong. For any plausible geoengineering project to succeed, we’d have to have already stabilized the climate. As it turns out, the brilliant and clearly-needed advances in technology and changes in behavior supported by those of us who proudly wear the label “bright green” will do exactly this, reducing, even eventually eliminating, anthropogenic emissions of greenhouse gases. We need to do this as quickly as possible. As the saying goes, if you want to get out of the hole you’re in, the first thing to do is stop digging.

But none of the bright green solutions — ultra-efficient buildings and vehicles, top-to-bottom urban redesigns, local foods, renewable energy systems, and the like — will do anything to reduce the anthropogenic greenhouse gases that have already been emitted. The best result we get is stabilizing at an already high greenhouse gas level. And because of ocean thermal inertia and other big, slow climate effects, the Earth will continue to warm for a couple of decades even after we stop all greenhouse gas emissions. Transforming our civilization into a bright green wonderland won’t be easy, and under even the most optimistic estimates will take at least a decade; by the time we finally stop putting out additional greenhouse gases, we could well have gone past a point where globally disastrous results are inevitable. In fact, given the complexity of climate feedback systems, we may already have passed such a tipping point, even if we stopped all emissions today.

In other words, while stopping digging is absolutely necessary, it won’t actually refill the hole.

I’m hopeful that eliminating anthropogenic greenhouse gas emissions will be enough; if more optimistic scenarios are correct, ceasing to emit additional greenhouse gases in the next decade or two will be sufficient to avoid real disaster. This would be a wonderful outcome, and not just because we would have dodged the global warming bullet. Many of the best steps we can take along these lines are distributed, incremental, collaborative, and quite often make use of open systems and standards: all very good things, with larger social implications than just for climate moderation, and the heart of what my blog Open the Future is all about.

But if we learn that we’ve already passed the climate disaster tipping point, if we want to avoid a civilization-threatening outcome, we’ll have to figure out how to refill the hole — to reduce overall temperature increases, or to remove methane, CO2 or other greenhouse gases from the atmosphere. And that means that we’d have to look at geoengineering.

Or, to be more accurate, we’ll have to keep looking at geoengineering. As it happens, the “(re)terraforming to fix global warming” genie is already out of the bottle. It happened just last week.

On February 9, 2007, Virgin Corporation honcho Richard Branson announced that he would give $25 million to the winner of the “Virgin Earth Challenge:”

The Virgin Earth Challenge will award $25 million to the individual or group who are able to demonstrate a commercially viable design which will result in the net removal of anthropogenic, atmospheric greenhouse gases each year for at least ten years without countervailing harmful effects. This removal must have long term effects and contribute materially to the stability of the Earth’s climate.

Reaction in the green blogosphere has been cautiously optimistic, with most responses noting a comparison to the “X-Prize” for private space flight, and some observing that air travel, such as that provided by Virgin Airways, remains a big source of greenhouse gases. Much to my surprise, however, none of the major green blogs noted the most significant aspect of this competition:

This is explicitly a call for geoengineering projects.

The Virgin Earth Challenge isn’t simply looking for better ways to reduce or eliminate new greenhouse gas emissions, it’s looking for ways to remove existing CO2 and other greenhouse gases from the atmosphere — that’s what “net removal” means. This competition seeks ways to make an active, substantial change to the Earth’s geophysical systems. Richard Branson is underwriting terraforming, and given that the consensus new mainstream environmentalist position is to be solidly anti-geoengineering, the lack of reaction to what is essentially the “Terraforming Challenge” is a bit surprising.

But if we’re already looking at geoengineering, and may potentially need to consider it as a necessary path to survival, how can we do it in a way that has the best chance to avoid making matters worse?

I’ve already given away the answer in the title: open up the process.

I’ve long argued that openness is the best way to ensure the safe development and deployment of transformative technologies like molecular nanotechnology, general machine intelligence, and radical human bioenhancements. Geoengineering technologies should be added to this list. The reasons are clear: the more people who can examine and evaluate the geotechnological proposals, the greater the likelihood of finding subtle flaws or dangers, and the greater the pool of knowledge that can offer solutions.

As I put it in my 2003 essay for the final Whole Earth magazine (and the source of my blog’s name), “Open the Future,”

Opening the books on emerging technologies, making the information about how they work widely available and easily accessible, in turn creates the possibility of a global defense against accidents or the inevitable depredations of a few. Openness speaks to our long traditions of democracy, free expression, and the scientific method, even as it harnesses one of the newest and best forces in our culture: the power of networks and the distributed-collaboration tools they evolve.

Broad access to… [transformative] tools and knowledge would help millions of people examine and analyze emerging information, nano- and biotechnologies [and geotechnologies], looking for errors and flaws that could lead to dangerous or unintended results. This concept has precedent: it already works in the world of software, with the “free software” or “open source” movement. A multitude of developers, each interested in making sure the software is as reliable and secure as possible, do a demonstrably better job at making hard-to-attack software than an office park’s worth of programmers whose main concerns are market share, liability, and maintaining trade secrets.

[…] The more people participate, even in small ways, the better we get at building up our knowledge and defenses. And this openness has another, not insubstantial, benefit: transparency. It is far more difficult to obscure the implications of new technologies (or, conversely, to oversell their possibilities) when people around the world can read the plans.

The idea of opening transformative technologies is controversial. One argument often leveled against it is that it puts dangerous “knowledge-enabled” technologies into the hands of people who would abuse them. Fortunately, such a charge isn’t likely to apply in any significant way to discussions of geotechnology, largely because the industrial capacity required to take advantage of these technologies is well beyond most countries, let alone super-empowered individuals and small groups. Another criticism of the open approach attacks it for undermining the market. But concerns about proprietary information and profit potential are hard to fathom with terraforming — there would be no plausible way to limit access to climate change remediation only to those who pay for it. Ultimately, the downsides of making potential geoengineering methods open are tiny, while the benefits are massive.

It’s not entirely clear if an open source approach for terraforming technology would be allowed within the Virgin Earth Challenge rules. The “terms and conditions” appear to require secrecy during the development process, but leave open the possibility of a variety of licensing conditions afterwards. Presumably, this would include open source/free access licenses. This is better than nothing, but the secrecy-during-development requirements should have an exception for open source competitors. The value of the “many eyes” approach is enhanced if it isn’t limited to after-the-fact analysis. Discovery of a flaw requiring a redesign is less costly — and less likely to be ignored — if it happens early in the development process.

Let me be clear: I am not calling for geoengineering as the solution to global warming. We know nowhere near enough to make (re)terraforming a plausible or safe option. Our best pathway to avoiding climate disaster remains the rapid reduction and elimination of anthropogenic greenhouse gases. But I am calling for us to learn more about geotechnologies. Like it or not, we’ve entered the era of intentional geoengineering. The people who believe that (re)terraforming is a bad idea need to be part of the discussion about specific proposals, not simply sources of blanket condemnations. We need their insights and intelligence. The best way to make that happen, the best way to make sure that any terraforming effort leads to a global benefit, not harm, is to open the process of studying and developing geotechnological tools.

It may well be the best example yet seen of the importance of opening the future.

From Physorg.com:

Humanity has long since established a foothold in the Arctic and Antarctic, but extensive colonization of these regions may soon become economically viable. If we can learn to build self-sufficient habitats in these extreme environments, similar technology could be used to live on the Moon or Mars.

The average temperature of the Antarctic coast in winter is about −20 °C. As if this weren’t enough, the region suffers from heavy snowfall, strong winds, and six-month nights. How can humanity possibly survive in such a hostile environment?

So far we seem to have managed well; Antarctica has almost forty permanently staffed research stations (with several more scheduled to open by 2008). These installations are far from self-sufficient, however; the USA alone spent 125 million dollars in 1995 on maintenance and operations.[1] All vital resources must be imported—construction materials, food, and especially fuel for generating electricity and heat.

Modern technology and construction techniques may soon permit the long-term, self-sufficient colonization of such extreme environments.

Why would anyone want to live there? Exceptional scientific research aside, the Arctic is thought to be rich in mineral resources (oil in particular). The Antarctic is covered by an ice sheet over a mile thick, making any mineral resources it may have difficult to access. Its biological resources, however, have great potential. Many organisms adapted to extreme cold have evolved unusual biochemical processes, which can be leveraged into valuable industrial or medical techniques.[2] Alexander Bolonkin and Richard Cathcart are firm believers in the value of this chilling territory. “Many people worldwide, especially in the Temperate Zones, muse on the possibility of humans someday inhabiting orbiting Space Settlements and Moon Bases, or a terraformed Mars,” Bolonkin points out, “but few seem to contemplate an increased use of ~25% of Earth’s surface—the Polar Regions.”

Indeed, the question of space exploration is intriguing. We would all like to know whether there is life on Mars, but robot probes can only perform the experiments they take along with them. Only humans are flexible enough to explore a new territory in detail and determine whether there are enough resources to sustain a long-term presence. Does modern technology really permit the design of lightweight, energy-efficient habitats suitable for other worlds?

That would be cool if it did! Although a few domed cities in the polar regions couldn’t hurt mankind’s overall survivability, space — and developing effective countermeasures there — has a lot more security to offer.