
The link is:
http://www.msnbc.msn.com/id/31511398/ns/us_news-military/

“The low-key launch of the new military unit reflects the Pentagon’s fear that the military might be seen as taking control over the nation’s computer networks.”

“Creation of the command, said Deputy Defense Secretary William Lynn at a recent meeting of cyber experts, ‘will not represent the militarization of cyberspace.’”

And where is our lifeboat?

Asteroid hazard in the context of technological development

It is easy to notice that the direct risk of asteroid collisions decreases with technological development. First, the risk (or, more precisely, our estimate of it) shrinks as measurements improve: with more accurate detection of dangerous asteroids and better determination of their orbits, we may eventually find that the real chance of an impact in the next 100 years is zero. (If, however, the hypothesis that we are living through an episode of comet bombardment is confirmed, the risk estimate would rise to 100 times the background level.) Second, the risk decreases as our ability to deflect asteroids grows.
On the other hand, the consequences of asteroid impacts grow larger with time, not only because population density increases but also because the world system is becoming more interconnected, so that damage in one place can spread across the globe. In other words, although the probability of a collision is decreasing, the indirect risks associated with the asteroid danger are increasing.
The main indirect risks are:
A) The destruction of hazardous facilities at the impact site, for example a nuclear power plant. In such a case the entire mass of the plant would be vaporized, and the release of radiation would exceed that of Chernobyl. In addition, the sudden compression of the plant when it is struck by an asteroid could trigger further nuclear reactions. The chance of a direct asteroid hit on a nuclear plant is small, but it grows as the number of plants increases.
B) There is a risk that even a small group of meteors, entering at a specific angle over a certain part of the Earth's surface, could trigger a missile-attack early-warning system and lead to an accidental nuclear war. A small airburst of an asteroid (a few meters in size) could have similar consequences. The first scenario is more likely for developed superpowers with warning systems (systems which nevertheless have flaws or unprotected sectors, as in the Russian Federation), while the second is more likely for regional nuclear powers (such as India, Pakistan, or North Korea) that cannot track missiles by radar but might react to a single explosion.
C) Asteroid-deflection technology will, in the future, create a hypothetical possibility of directing asteroids not only away from Earth but also toward it. And even an accidental asteroid impact will prompt claims that it was sent deliberately. Still, hardly anyone would actually send an asteroid toward Earth: such an action is easy to detect, its accuracy is low, and it would have to be prepared decades before the event.
D) Deflecting hazardous asteroids will require the creation of space weapons, which could be nuclear, laser, or kinetic. Such weapons could also be used against the Earth or against an opponent's spacecraft. Although the risk of their use against the Earth is small, they create more potential damage than falling asteroids do.
E) Destroying an asteroid with a nuclear explosion would increase its destructive power through its fragments: a greater number of blasts over a larger area, plus radioactive contamination of the debris.
Modern technological means make it possible to move only relatively small asteroids, which are not a global threat. The real danger comes from dark comets several kilometers in size, moving on elongated elliptical orbits at high speeds. In the future, however, space could be explored quickly and cheaply by self-replicating robots based on nanotechnology. These would make it possible to build huge radio telescopes in space to detect dangerous bodies in the solar system. Moreover, it would be enough to land a single self-replicating microrobot on an asteroid: it could multiply and then break the asteroid apart or build engines to change its orbit. Nanotechnology will also help us create self-sustaining human settlements on the Moon and other celestial bodies. This suggests that the asteroid hazard problem will become obsolete within a few decades.
Thus, in the coming decades the problem of preventing asteroid collisions with the Earth may serve mainly as a diversion of resources from the real global risks:
First, because we are still unable to change the orbits of the objects that could actually cause the complete extinction of humanity.
Second, by the time a nuclear-missile system for destroying asteroids is built (or shortly thereafter), it will already be obsolete, because nanotechnology could cheaply and quickly harness the solar system by the middle of the 21st century, and perhaps earlier.
Third, because as long as the Earth is divided into warring states, such a system would itself become a weapon in the event of war.
Fourth, because the probability that humanity goes extinct from an asteroid impact during the narrow window in which an asteroid-deflection system is deployed but powerful nanotechnology does not yet exist is very small. This window may be about 20 years long, say from 2030 to 2050, and the chance of a 10 km body falling during that time, even if we assume we live in a period of comet bombardment with an intensity 100 times the background, is about 1 in 15,000 (based on an average fall frequency of one such body every 30 million years; see the back-of-envelope sketch below). Moreover, given the dynamics, we will be able to deflect the truly dangerous objects only at the end of this period, and perhaps even later, since the larger the asteroid, the more extensive and long-term a deflection project it requires. Although 1 in 15,000 is still an unacceptably high risk, it is commensurate with the risk that space weapons would be used against the Earth.
Fifth, anti-asteroid protection diverts limited human attention and financial resources from other global issues. This happens because the asteroid danger is very easy to understand: it is easy to imagine, the probabilities are easy to calculate, it is clear to the public, its reality is not in doubt, and there are obvious ways to protect against it. (For example, the probability of a volcanic disaster comparable to an asteroid impact at the same energy level is, by various estimates, 5 to 20 times higher, but we have no idea how to prevent it.) In this it differs from risks that are hard to imagine and impossible to quantify, yet may carry a probability of complete extinction measured in tens of percent: the risks of AI, biotech, nanotech, and nuclear weapons.
Sixth, for relatively small bodies like Apophis, it may be cheaper to evacuate the impact area than to deflect the asteroid, and the impact area will most likely be ocean in any case.
This is not a call to abandon anti-asteroid protection, because we first need to find out whether we live in a period of comet bombardment. If we do, the probability of a 1 km body falling in the next 100 years is about 6%. (This is based on data on hypothetical impacts in the last 10,000 years, such as the Clovis comet http://en.wikipedia.org/wiki/Younger_Dryas_impact_event , traces of which may include the roughly 500,000 crater-like features known as Carolina Bays http://en.wikipedia.org/wiki/Carolina_bays , the crater near New Zealand dated to about 1443 http://en.wikipedia.org/wiki/Mahuika_crater , and two other impacts in the last 5,000 years; see the work of the http://en.wikipedia.org/wiki/Holocene_Impact_Working_Group .) The first priority should be to devote resources to monitoring dark comets and analyzing fresh craters.
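To make the two probability figures above concrete (the 1 in 15,000 chance of a 10 km impact during a 20-year window, and the 6% chance of a 1 km impact in the next 100 years), here is a minimal back-of-envelope sketch in Python. The mean-interval figure and the 100x bombardment factor come from the text; treating impacts as a Poisson process, and inverting that formula to check the 6% figure, are my own assumptions.

```python
import math

def p_at_least_one(mean_interval_years, window_years):
    """Probability of at least one impact in the window,
    assuming impacts follow a Poisson process (my assumption)."""
    rate = 1.0 / mean_interval_years            # expected impacts per year
    return 1.0 - math.exp(-rate * window_years)

# 10 km bodies: one per 30 million years on average (figure from the text),
# with the rate multiplied by 100 for a hypothetical comet-bombardment episode.
interval_10km = 30_000_000 / 100                # = 300,000 years
p_10km = p_at_least_one(interval_10km, 20)      # 20-year deployment window
print(f"10 km impact in 20 yr: {p_10km:.6f}  (about 1 in {1/p_10km:,.0f})")
# prints roughly 1 in 15,000, matching the figure quoted above

# Consistency check on the 6% figure for 1 km bodies in 100 years:
# invert the same formula to see what mean interval it implies.
implied_interval_1km = -100 / math.log(1 - 0.06)
print(f"Implied mean interval for 1 km bodies: {implied_interval_1km:,.0f} years")
# roughly 1,600 years between such impacts during a bombardment episode
```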

Here’s a story that should concern anyone wanting to believe that the military has a complete and accurate inventory of chemical and biological warfare materials.

“An inventory of deadly germs and toxins at an Army biodefense lab in Frederick found more than 9,200 vials of material that was unaccounted for in laboratory records, Fort Detrick officials said Wednesday. The 13 percent overage mainly reflects stocks left behind in freezers by researchers who retired or left Fort Detrick since the biological warfare defense program was established there in 1943, said Col. Mark Kortepeter, deputy commander of the U.S. Army Medical Research Institute of Infectious Diseases.”

The rest of the story appears here:
http://abcnews.go.com/Health/wireStory?id=7863828

Given that “The material was in tiny, 1mm vials that could easily be overlooked,” and included serum from Korean hemorrhagic fever patients, the lack of adequate inventory controls to this point creates the impression that any number of these vials could be outside their lab. Of course, they assure us they have it all under control. Which will be cold comfort if we don’t have a lifeboat.

Many years ago, in December 1993 to be approximate, I noticed a space-related poster on the wall of Eric Klien’s office in the headquarters of the Atlantis Project. We chatted for a bit about the possibilities for colonies in space. Later, Eric mentioned that this conversation was one of the formative moments in his conception of the Lifeboat Foundation.

Another friend, filmmaker Meg McLain, has noticed that orbital hotels and space cruise liners are all vaporware. Indeed, we've had few better depictions of a realistic "how it would feel" space resort since Kubrick's 1968 classic "2001: A Space Odyssey." Remember the Pan Am flight to orbit, the huge hotel and mall complex, and the transfer to a lunar shuttle? To this day I know people who bought reservation certificates for whenever Pan Am would begin to fly to the Moon.

In 2004, after the X Prize victory, Richard Branson announced that Virgin Galactic would be flying tourists by 2007. So far, none.

A little later, Bigelow announced a fifty million dollar prize if only tourists could be launched to orbit by January 2010. I expect the prize money won’t be claimed in time.

Why? Could it be that the government is standing in the way? And if tourism in space can’t be “permitted” what of a lifeboat colony?

Meg has set out to make a documentary film about how, four decades after the Moon landing, the human race still has no tourist spaceflight. Two decades after Kitty Hawk, a person could fly across the country; three decades after, across any ocean.

Where are the missing resorts?

Here is the link to her film project:
http://www.freewebs.com/11at40/

Jim Davies of Strike the Root writes about Galt’s Gulch and some gulch-like projects. These appeal to him because of the exponential trends in government power and abuse of power. He writes, in part,

“We have the serious opportunity in our hands right now of terminating the era of government absolutely, and so of removing from the human race the threat of ever more brutal tyranny ending only with WMD annihilation–while opening up vistas of peaceful prosperity and technological progress which even a realist like myself cannot find words to describe. ”

http://www.strike-the-root.com/91/davies/davies11.html

Avoiding those terrible events is what building our Lifeboat is all about. Got Lifeboat?

It sounds like cryonics is working, at least for microbes. But could any humans now alive have resistance to ancient organisms?

Rational Review carried a link to this story:

http://www.foxnews.com/story/0,2933,526460,00.html

“After more than 120,000 years trapped beneath a block of ice in Greenland, a tiny microbe has awoken. … The new bacteria species was found nearly 2 miles (3 km) beneath a Greenland glacier, where temperatures can dip well below freezing, pressure soars, and food and oxygen are scarce. ‘We don’t know what state they were in,’ said study team member Jean Brenchley of Pennsylvania State University. ‘They could’ve been dormant, or they could’ve been slowly metabolizing, but we don’t know for sure.’”

It is yet another interesting possibility against which humanity should prepare to protect itself. Where is our Lifeboat?

Hack-Jet

When there is a catastrophic loss of an aircraft in any circumstances, a host of questions is inevitably raised about the safety and security of the aviation operation. The loss of Air France flight 447 off the coast of Brazil, with little evidence upon which to work, inevitably raises the level of speculation surrounding the fate of the flight. Large-scale incidents such as this create an enormous cloud of data, which has to be investigated in order to discover the pattern of events that led to the loss (not helped when some of it may be two miles under the ocean surface). So far French authorities have been quick to rule out terrorism. It has, however, emerged that a bomb hoax against an Air France flight had been made the previous week on a different route, from Argentina. This does not currently seem to be linked, and no terrorist group has claimed responsibility. Much of the speculation regarding the fate of the aircraft has focused on the effects of bad weather, or on a glitch in the fly-by-wire system that could have caused the plane to dive uncontrollably. There is, however, another theory which, while currently unlikely, would if true change the global aviation security situation overnight: a hacked jet.

Given the plethora of software that modern jets rely on, it seems reasonable to assume that these systems could be compromised by code designed to trigger catastrophic systemic events within the aircraft's navigation or other critical electronic systems. Just as aircraft have a physical presence, they increasingly have a virtual footprint, and this changes their vulnerability. A systemic software corruption might even account for the mysterious absence of a Mayday call: the communications system may have been offline. Designing airport and aviation security to keep lethal code off civilian aircraft would, in the short term, be beyond any government civil security regime. A malicious code attack of this kind against any civilian airliner would therefore be catastrophic not only for the airline industry but also for the wider global economy until security caught up with this new threat. The technical ability to conduct such an attack remains highly specialized (for now), but the knowledge to conduct attacks in this mold would be as deadly as WMD and easier to spread through our networked world. Electronic systems on aircraft are designed for safety, not security, so they do not account for malicious internal actions.
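To illustrate the safety-versus-security distinction in this context, here is a minimal sketch; the message format, parameter names, and limits are hypothetical inventions of mine, not any real avionics interface. A range check (safety thinking) rejects implausible values caused by faults, but it accepts any in-range value, including a maliciously injected one; catching the latter requires the message itself to be authenticated (security thinking), which data buses designed purely for safety generally do not do.

```python
import hmac, hashlib

MAX_PLAUSIBLE_AIRSPEED_KT = 700.0   # invented limit for illustration

def safety_check(airspeed_kt: float) -> bool:
    """Safety-style plausibility check: catches sensor faults and garbage,
    but accepts ANY in-range value, including a maliciously injected one."""
    return 0.0 <= airspeed_kt <= MAX_PLAUSIBLE_AIRSPEED_KT

def security_check(payload: bytes, tag: bytes, key: bytes) -> bool:
    """Security-style check: verifies the message really came from a holder
    of the shared key, regardless of whether its value looks plausible."""
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = b"shared-secret-provisioned-at-build-time"
genuine = b"airspeed_kt=431.0"
genuine_tag = hmac.new(key, genuine, hashlib.sha256).digest()

forged = b"airspeed_kt=120.0"          # plausible value injected by an attacker
forged_tag = b"\x00" * 32              # attacker has no key, so no valid tag

print(safety_check(431.0))                         # True  (and genuinely OK)
print(safety_check(-431.0))                        # False (fault-style garbage caught)
print(safety_check(120.0))                         # True  (plausible, but could be forged)
print(security_check(genuine, genuine_tag, key))   # True
print(security_check(forged, forged_tag, key))     # False (forgery rejected)
```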

While this may seem the stuff of fiction, in January 2008 this broad topic was discussed because of the planned arrival of the Boeing 787, which is designed to be more 'wired', offering greater passenger connectivity. Air safety regulations were not designed to accommodate the idea of an attack against on-board electronic systems, and the FAA proposed special conditions, which were subsequently commented upon by the Air Line Pilots Association and Airbus. There is some interesting back and forth in the proposed special conditions, which are after all only to apply to the Boeing 787. In one section, Airbus rightly pointed out that making it a safety condition that the internal design of civilian aircraft should 'prevent all inadvertent or malicious changes to [the electronic system]' would be impossible over the life cycle of the aircraft because 'security threats evolve very rapidly'. Boeing responded to these reports in an AP article, stating that there were sufficient safeguards to shut out the Internet from internal aircraft systems, a conclusion the FAA broadly agreed with; Wired Magazine covered much of the ground. During the press surrounding this, the security writer Bruce Schneier commented that, "The odds of this being perfect are zero. It's possible Boeing can make their connection to the Internet secure. If they do, it will be the first time in the history of mankind anyone's done that." Of course, securing the airborne aircraft isn't the only concern when maintenance and diagnostic systems constantly refresh while the aircraft is on the ground; malicious action could infect any part of this process. While a combination of factors probably led to the tragic loss of flight AF447, the current uncertainty serves to highlight a potential game-changing aviation security scenario that no airline or government is equipped to face.

Comments on Hack-Jet:

(Note — these are thoughts on the idea of using software hacks to down commercial airliners and are not specifically directed at events surrounding the loss of AF447).


From Daniel Suarez, author of Daemon:

It would seem like the height of folly not to have physical overrides in place for the pilot — although, I realize that modern aircraft (especially designs like the B-2 bomber) require so many minute flight surface corrections every second to stay aloft, that no human could manage it. Perhaps that’s what’s going on with upcoming models like the 787. And I don’t know about the Airbus A330.

I did think it was highly suspicious that the plane seems to have been lost above St. Peter & Paul’s Rocks. By the strangest of coincidences, I had been examining that rock closely in Google Earth a few weeks ago for a scene in the sequel (which was later cut). It’s basically a few huge rocks with a series of antennas and a control hut — with nothing around it for nearly 400 miles.

Assuming the theoretical attacker didn’t make the exploit time-based or GPS-coordinate-based, they might want to issue a radio ‘kill’ command in a locale where there would be little opportunity to retrieve the black box (concealing all trace of the attack). I wonder: do the radios on an A330 have any software signal processing capability? As for the attackers: they wouldn’t need to physically go to the rocks–just compromise the scientific station’s network via email or other intrusion, etc. and issue the ‘kill’ command from a hacked communication system. If I were an investigator, I’d be physically securing and scouring everything that had radio capabilities on those rocks. And looking closely at any record of radio signals in the area (testing suspicious patterns against a virtual A330’s operating system). Buffer overrun (causing the whole system to crash?). Injecting an invalid (negative) speed value? Who knows… Perhaps the NSA’s big ear has a record of any radio traffic issued around that time.

The big concern, of course, is that this is a proof-of-concept attack — thus, the reason for concealing all traces of the compromise.


From John Robb - Global Guerillas:

The really dangerous hacking, in most situations, is done by disgruntled/postal/financially motivated employees. With all glass cockpits, fly by wire, etc. (the Airbus is top of its class in this) it would be easy for anybody on the ground crew to crash it. No tricky mechanical sabotage.


External hacks? That is of course, trickier. One way would be to get into the diagnostic/mx computers the ground crew uses. Probably by adding a hack to a standard patch/update. Not sure if any of the updates to these computers are delivered “online.”

Flight planning is likely the most “connected” system. Easier to access externally. Pilots get their plans for each flight and load them into the plane. If the route has them flying into the ground mid flight, it’s possible they won’t notice.

In flight hacks? Not sure that anything beyond outbound comms from the system is wireless. If so, that would be one method.

Another would be a multidirectional microwave/herf burst that fries controls. Might be possible, in a closed environment/fly by wire system to do this with relatively little power.

—-

There has been continuous discussion of the dangers involved with fly-by-wire systems in Peter Neumann’s Risk Digest since the systems were introduced in the late 1980s. The latest posting on the subject is here.

Investigator: Computer likely caused Qantas plunge


People have been worried about nanotechnology for quite some time now; nano-asbestos, advanced nano-enabled weapons, and self-replicating “gray goo” nanobots that accidentally go out of control. But what if everything goes right? What if nanotubes and nanoparticles are functionalized to stay out of the ecosystem? What if there are no major wars? What if nanoreplicators are never built, or if they are, they use modern error correction software to never mutate? What happens if nanotechnology fulfills humanity’s desires perfectly?

In the next decade or so, a new type of desktop appliance will be developed—a nanofactory that consists of very many productive nanosystems—atomically precise nanoscale machines that work together to build bulk amounts of atomically precise products.

The Foresight Technology Roadmap for Productive Nanosystems has identified a number of different approaches for building these atomically precise systems of machines that can produce other nanosystems http://www.foresight.org/roadmaps/. These approaches include Paul Rothemund’s DNA Origami, Christopher Schafmeister’s Bis-proteins, Joe Lynden’s Patterned Atomic Layer Epitaxy, and Robert Freitas and Ralph Merkle’s Diamondoid Mechanosynthesis http://www.rfreitas.com/Nano/JNNDimerTool.pdf, http://e-drexler.com/d/05/00/DC10C-mechanosynthesis.pdf, and http://www.molecularassembler.com/Papers/JCTNPengFeb06.pdf. Each of these approaches has the potential of building the numerous nanoscale electronic, mechanical, and structural components that comprise productive nanosystems.

The ultimate result will be a nanofactory that can build virtually anything—limited only by the laws of physics, the properties of the input feedstock, and the software that controls the device.

The concern is that this relatively primitive application, if successfully deployed as expected, will pose significant challenges even if nobody makes a mistake or puts it to evil ends. Consider the simple, safe, and optimistic possibilities offered by a nanofactory that can build a wide variety of atomically precise, large-scale products out of a few different input elements (say carbon, hydrogen, oxygen, iron, silicon, germanium, boron, phosphorus, and titanium) http://www.MolecularAssembler.com/Nanofactory. The factory itself would not be nano-sized; it would be an appliance approximately the same size as a desktop printer. However, its multi-material 3D output products would be atomically precise at the nanoscale.

The first and most valuable product of a nanofactory will be another nanofactory. The second most valuable product will be a system that refills the nanofactory's "inkjet cartridge" using inexpensive feedstock, and the third will be a machine that turns sand into photovoltaic solar cells (with which to power the nanofactory). It is not clear what one would print next. Programmable material for a holodeck? Wearable supercomputers? A few pounds of medical nanobots?

In any case, a few months to a few years after the first commercial release of a nanofactory, almost everyone will have one. It is not clear what the price might be—perhaps $1000. The price could not drop to zero, though it might approach the cost of dirt, sunshine, and the time required to print a nanofactory.

Diamond and its carbon-based relatives are an engineer's best friend: roughly 50 times stronger than steel, they are distinguished from coal only by their atomic structure. Once people have a printer that can inexpensively make arbitrary, atomically perfect diamondoid nanostructures out of carbon, they are going to make everything out of it, from wearable supercomputers and skyscrapers that reach Low Earth Orbit to medical nanobots and flying cars: anything that doesn't violate the laws of physics and has a CAD file description available on the web. Therefore, any cheap sources of carbon will be snatched up quickly.

Because human desire is essentially infinite, the demand for carbon will reach very high levels fairly quickly.

Air is free, and so is the carbon dioxide in it.

If taking carbon dioxide out of the air became economically favorable (and with inexpensive solar power it probably will be), then the result will be a 'tragedy of the commons'. This would solve CO2-caused global warming with a vengeance, but would result in global freezing, and worse: if enough carbon dioxide were removed from the air, plant life would start to die. A toy sketch of this dynamic follows.
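As a rough illustration of that commons dynamic (individually profitable extraction continuing past the point of collective harm), here is a minimal sketch. Every number in it, from the size of the carbon pool to the plant-survival floor, is invented for illustration and is not calibrated to any real atmospheric or economic data.

```python
# Toy tragedy-of-the-commons sketch; all quantities are invented.

co2_stock = 1000.0            # shared pool of extractable atmospheric carbon (arbitrary units)
plant_survival_floor = 400.0  # assumed level below which plant life starts to fail
extraction_cost = 1.0         # private cost of pulling one unit out of the air
product_value = 5.0           # private value of the goods printed from that unit

num_extractors = 50
units_per_extractor = 0.5     # units each extractor pulls per time step

step = 0
while co2_stock > 0:
    # Each extractor reasons locally: the private payoff (value minus cost) is
    # positive and does not depend on the shared stock, so keep extracting.
    if product_value <= extraction_cost:
        break                 # never triggers: extraction stays individually rational
    co2_stock -= num_extractors * units_per_extractor
    step += 1
    if co2_stock <= plant_survival_floor:
        print(f"step {step}: shared stock at {co2_stock:.0f} units,")
        print("below the plant-survival floor, yet extraction is still profitable,")
        print("so no individual extractor has a reason to stop.")
        break
```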

Futurist Keith Henson has predicted that to counteract this outcome, the Sierra Club will frantically strip-mine all the coal under Wyoming and burn it in as dirty a manner as possible to save the rain forests. If Henson is correct, then Congress might pass laws that make it illegal to take CO2 from the air. But how will the government enforce a ban against unauthorized CO2 extraction?

Nanotechnology, of course.

Unfortunately, a government with unfettered nanotechnology-enhanced enforcement powers would likely be a dictatorship that makes the totalitarian regime of Orwell’s 1984 look like a kindergarten playground.

An alternative to a dictatorship would involve ownership of air. This sounds strange and preposterous until we remember that the American Indians thought that land ownership was strange and preposterous.

A more jarring alternative might involve re-engineering plants so that they can live without carbon dioxide, perhaps by using silica as a structural material (as diatoms do). Do we really trust ourselves to recreate Earth's biosphere in such a drastic manner? Some optimists will tell us not to worry about such drastic genetic modification of the ecosystem; we will back the whole thing up on the web somewhere and use modern revision-tracking software to keep it safe http://tortoisesvn.net/.

Admittedly, these scenarios seem rather far-fetched. However, as Foresight Institute co-founder Christine Peterson put it, “If you look out into the long-term future and what you see looks like science fiction, it might be wrong. But if it doesn’t look like science fiction, it’s definitely wrong” http://www.washingtonpost.com/wp-dyn/content/article/2008/04…28_pf.html.

We are not yet at the level of technological maturity at which we can confidently assert that widescale nanofactory development and distribution is inevitable. Of the four main approaches to Productive Nanosystems, only the most rudimentary lab demos have proven the concepts. Therefore, the suggestion that nanofactories will alter the conditions of anthropogenic global warming may be met with skepticism, as it should be. However, in light of the exponential progress in nanotechnology in the past few years, it is likely that some version of the carbon dioxide tragedy of the commons will happen. Researchers, policy makers, and the public at large must become aware of these possibilities and analyze them thoughtfully. Otherwise disruptive events may cause panic, since most scenarios predict a quick transition from initial invention to wide distribution of these technologies.

Ultimately, this prediction means two things. First, that wasting precious time, money, and effort on stopping global warming will increase the risk of other, more serious catastrophes. Second, we will need to set aside any conservative values regarding the preservation of the Earth’s ecosystem as it currently exists. Change will happen. The good news is that a Space Pier http://autogeny.org/tower/tower.html and other low-cost methods to orbit will be available for conservatives who are intent on preserving the status quo biosphere elsewhere in the solar system. Of course, these are the same people who are probably the most emotionally resistant to leaving, which might lead to conflicts.

Howard Bloom gently points out that “Nature is not a motherly protector”. Putting it more bluntly and extending the anthropomorphism, Mother Nature is a brutal psychopath who uncaringly tortures and slaughters her children. She does not build nice little eco-friendly Gardens of Eden. In fact, there have been 148 major die-offs, and six much bigger mass extinctions (in which over 90% of species on this planet were wiped out—each and every time). Those die-offs resulted from natural physical disturbances in a universe that is fine-tuned to allow carbon-based life to emerge. It’s a mixed message, but the message is simple: Adapt or die. Nanotechnology will not change that message. However, it will provide us with better biotech tools that will enable us to (for better or worse) manipulate our bodies and brains.

As the nanotechnology revolution begins, we will need to think hard about its secondary effects and ethical implications. The sheer magnitude of changes will cause us to consider carefully our ultimate role in the universe, our essential nature as human persons, the meaning of our lives, and what we really, really desire.

Tihamer Toth-Fejel, MS
General Dynamics Advanced Information Systems
Michigan Research and Development Center

An unmanned beast that cruises over any terrain at speeds that leave an M1A Abrams in the dust

Mean Machine: Troops could use the Ripsaw as an advance scout, sending it a mile or two ahead of a convoy, and use its cameras and new sensor technology to sniff out roadside bombs or ambushes John B. Carnett

Today’s featured Invention Award winner really requires no justification–it’s an unmanned, armed tank faster than anything the US Army has. Behold, the Ripsaw.

Cue up the Ripsaw’s greatest hits on YouTube, and you can watch the unmanned tank tear across muddy fields at 60 mph, jump 50 feet, and crush birch trees. But right now, as its remote driver inches it back and forth for a photo shoot, it’s like watching Babe Ruth forced to bunt with the bases loaded. The Ripsaw, lurching and belching black puffs of smoke, somehow seems restless.

Like their creation, identical twins Geoff and Mike Howe, 34, don’t like to sit still for long. At age seven, they built a log cabin. Ten years later, they converted a school bus into a drivable, transforming stage for their heavy-metal band, Two Much Trouble. In 2000 they couldn’t agree on their next project: Geoff favored a jet-turbine-powered off-road truck; Mike, the world’s fastest tracked vehicle. “That weekend, Mike calls me down to his garage,” Geoff says. “He’s already got the suspension built for the Ripsaw. So we went with that.”

Every engineer they consulted said they couldn’t best the 42mph top speed of an M1A Abrams, the most powerful tank in the world. Other tanks are built to protect the people inside, with frames made of heavy armored-steel plates. Designed for rugged unmanned missions, the Ripsaw just needed to go fast, so the brothers started trimming weight. First they built a frame of welded steel tubes, like the ones used by Nascar, that provides 50 percent more strength at half the weight.

Ripsaw: How It Works: To glide over rough terrain at top speed, the Ripsaw has shock absorbers that provide 14 inches of travel. But when the suspension compresses, it creates slack that could cause a track to come off, potentially flipping the vehicle. So the inventors devised a spring-loaded wheel at the front that extends to keep the tracks taut. The Ripsaw has never thrown a track Bland Designs

Behind the Wheel: The Ripsaw’s six cameras send live, 360-degree video to a control room, where program manager Will McMaster steers the tank John B. Carnett

When you reinvent the tank, finding ready-made parts is no easy task, and a tread light enough to spin at 60 mph and strong enough to hold together at that speed didn’t exist. So the Howes hand-shaped steel cleats and redesigned the mechanism for connecting them in a track. (Because the patent for the mechanism, one of eight on Ripsaw components, is still pending, they will reveal only that they didn’t use the typical pin-and-bushing system of connecting treads.) The two-pound cleats weigh about 90 percent less than similarly scaled tank cleats. With the combined weight savings, the Ripsaw’s 650-horsepower V8 engine cranks out nine times as much horsepower per pound as an M1A Abrams.

While working their day jobs — Mike as a financial adviser, Geoff as a foreman at a utilities plant — the self-taught engineers hauled the Ripsaw prototype from their workshop in Maine to the 2005 Washington Auto Show, where they showed it to army officials interested in developing weaponized unmanned ground vehicles (UGVs). That led to a demonstration for Maine Senator Susan Collins, who helped the Howes secure $1.25 million from the Department of Defense. The brothers founded Howe and Howe Technologies in 2006 and set to work upgrading various Ripsaw systems, including a differential drive train that automatically doles out the right amount of power to each track for turns. The following year they handed it over to the Army's Armament Research Development and Engineering Center (ARDEC), which paired it with a remote-control M240 machine gun and put the entire system through months of strenuous tests. "What really set it apart from other UGVs was its speed," says Bhavanjot Singh, the ARDEC project manager overseeing the Ripsaw's development. Other UGVs top out at around 20 mph, but the Ripsaw can keep up with a pack of Humvees.

Over the Hill: Despite the best efforts of inventors Mike [left] and Geoff Howe, the Ripsaw has proven unbreakable. It did once break a suspension mount — and drove on for hours without trouble John B. Carnett

Back on the field, the tank has been readied for the photo. The program manager for Howe and Howe Technologies, Will McMaster, who is sitting at the Ripsaw's controls around the corner and roughly a football field away, drives it straight over a three-foot-tall concrete wall. The brothers think that when the $760,000 Ripsaw is ready for mass production this summer, feats like this will give them a lead over other companies vying for a military UGV contract. "Every other UGV is small and uses [artificial intelligence] to avoid obstacles," Mike says. "The Ripsaw doesn't have to avoid obstacles; it drives over them."

Singularity Hub

Create an AI on Your Computer

Written on May 28, 2009 – 11:48 am | by Aaron Saenz |

If many hands make light work, then maybe many computers can make an artificial brain. That’s the basic reasoning behind Intelligence Realm’s Artificial Intelligence project. By reverse engineering the brain through a simulation spread out over many different personal computers, Intelligence Realm hopes to create an AI from the ground-up, one neuron at a time. The first waves of simulation are already proving successful, with over 14,000 computers used and 740 billion neurons modeled. Singularity Hub managed to snag the project’s leader, Ovidiu Anghelidi, for an interview: see the full text at the end of this article.

The ultimate goal of Intelligence Realm is to create an AI or multiple AIs, and use these intelligences in scientific endeavors. By focusing on the human brain as a prototype, they can create an intelligence that solves problems and “thinks” like a human. This is akin to the work done at FACETS that Singularity Hub highlighted some weeks ago. The largest difference between Intelligence Realm and FACETS is that Intelligence Realm is relying on a purely simulated/software approach.

Which sort of makes Intelligence Realm similar to the Blue Brain Project that Singularity Hub also discussed. Both are computer simulations of neurons in the brain, but Blue Brain’s ultimate goal is to better understand neurological functions, while Intelligence Realm is seeking to eventually create an AI. In either case, to successfully simulate the brain in software alone, you need a lot of computing power. Blue Brain runs off a high-tech supercomputer, a resource that’s pretty much exclusive to that project. Even with that impressive commodity, Blue Brain is hitting the limit of what it can simulate. There’s too much to model for just one computer alone, no matter how powerful. Intelligence Realm is using a distributed computing solution. Where one computer cluster alone may fail, many working together may succeed. Which is why Intelligence Realm is looking for help.
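As a rough picture of what "many computers working together" means here, the sketch below splits a large neuron population into independent work units and merges the partial results, the way a volunteer-computing project hands out slices of a simulation. It is not Intelligence Realm's or BOINC's actual code; the chunking scheme, the function names, and the throwaway firing rule are all assumptions made for illustration.

```python
# Illustrative only: carve a big neuron population into work units, process
# each slice independently (as a volunteer machine would), then merge results.
from concurrent.futures import ProcessPoolExecutor

NUM_NEURONS = 1_000_000
CHUNK_SIZE = 100_000  # neurons per work unit

def simulate_chunk(start: int, end: int) -> int:
    """Stand-in for one volunteer's work unit: update neurons [start, end)
    with a throwaway rule and report how many 'fired'."""
    fired = 0
    for neuron_id in range(start, end):
        fake_potential = (neuron_id * 37) % 100   # placeholder, not a real neuron model
        if fake_potential > 90:
            fired += 1
    return fired

if __name__ == "__main__":
    starts = list(range(0, NUM_NEURONS, CHUNK_SIZE))
    ends = [min(s + CHUNK_SIZE, NUM_NEURONS) for s in starts]
    # The process pool stands in for thousands of volunteer machines; the
    # server-side merge is just the sum of the returned partial results.
    with ProcessPoolExecutor() as pool:
        partial_results = pool.map(simulate_chunk, starts, ends)
    print("spikes this step across all work units:", sum(partial_results))
```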

The AI system project is actively recruiting, with more than 6700 volunteers answering the call. Each volunteer runs a small portion of the larger simulation on their computer(s) and then ships the results back to the main server. BOINC, the Berkeley built distributed computing software that makes it all possible, manages the flow of data back and forth. It’s the same software used for SETI’s distributed computing processing. Joining the project is pretty simple: you just download BOINC, some other data files, and you’re good to go. You can run the simulation as an application, or as part of your screen saver.

Baby Steps

So, 6700 volunteers, 14,000 or so platforms, 740 billion neurons, but what is the simulated brain actually thinking? Not a lot at the moment. The same is true with the Blue Brain Project, or FACETS. Simulating a complex organ like the brain is a slow process, and the first steps are focused on understanding how the thing actually works. Inputs (Intelligence Realm is using text strings) are converted into neuronal signals, those signals are allowed to interact in the simulation and the end state is converted back to an output. It’s a time and labor (computation) intensive process. Right now, Intelligence Realm is just building towards simple arithmetic.
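The input-to-output path described above can be pictured with a toy encoder, neuron layer, and decoder like the following. The character-code encoding, the one-line threshold "neurons", and the spike-count readout are simplifications of my own, not the project's actual model.

```python
# Toy picture of the pipeline: text string -> input signals -> neuron layer -> output.

def encode(text: str) -> list[int]:
    """Turn a text string into one input current per character."""
    return [ord(ch) for ch in text]

def simulate(currents: list[int], threshold: int = 45) -> list[int]:
    """A single layer of toy threshold neurons: each emits 1 if its input
    current crosses the (arbitrary) threshold, else 0."""
    return [1 if current >= threshold else 0 for current in currents]

def decode(spikes: list[int]) -> int:
    """Read the layer's state back out as a number (here, the spike count)."""
    return sum(spikes)

spikes = simulate(encode("1+1"))
print(spikes, "->", decode(spikes))
# prints [1, 0, 1] -> 2: the digit characters cross the toy threshold, '+' does not.
# The '2' is a coincidence of this toy encoding, not arithmetic being performed.
```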

Which is definitely a baby step, but there are more steps ahead. Intelligence Realm plans on learning how to map numbers to neurons, understanding the kind of patterns of neurons in your brain that represent numbers, and figuring out basic mathematical operators (addition, subtraction, etc). From these humble beginnings, more complex reasoning will emerge. At least, that’s the plan.

Intelligence Realm isn’t just building some sort of biophysical calculator. Their brain is being designed so that it can change and grow, just like a human brain. They’ve focused on simulating all parts of the brain (including the lower reasoning sections) and increasing the plasticity of their model. Right now it’s stumbling towards knowing 1+1 = 2. Even with linear growth they hope that this same stumbling intelligence will evolve into a mental giant. It’s a monumental task, though, and there’s no guarantee it will work. Building artificial intelligence is probably one of the most difficult tasks to undertake, and this early in the game, it’s hard to see if the baby steps will develop into adult strides. The simulation process may not even be the right approach. It’s a valuable experiment for what it can teach us about the brain, but it may never create an AI. A larger question may be, do we want it to?

Knock, Knock…It’s Inevitability

With the newest Terminator movie out, it’s only natural to start worrying about the dangers of artificial intelligence again. Why build these things if they’re just going to hunt down Christian Bale? For many, the threats of artificial intelligence make it seem like an effort of self-destructive curiosity. After all, from Shelley’s Frankenstein Monster to Adam and Eve, Western civilization seems to believe that creations always end up turning on their creators.

AI, however, promises rewards as well as threats. Problems in chemistry, biology, physics, economics, engineering, and astronomy, even questions of philosophy could all be helped by the application of an advanced AI. What’s more, as we seek to upgrade ourselves through cybernetics and genetic engineering, we will become more artificial. In the end, the line between artificial and natural intelligence may be blurred to a point that AIs will seem like our equals, not our eventual oppressors. However, that’s not a path that everyone will necessarily want to walk down.

Will AI and Humans learn to co-exist?

The nature of distributed computing and BOINC allows you, in effect, to vote on whether or not this project will succeed. Intelligence Realm will eventually need hundreds of thousands, if not millions, of computing platforms to run its simulations. If you believe that AI deserves a chance to exist, give them a hand and recruit others. If you think we're building our own destroyers, then don't run the program. In the end, the success or failure of this project may very well depend on how many volunteers are willing to serve as midwives to a new form of intelligence.

Before you make your decision, though, make sure to read the following interview. As project leader, Ovidiu Anghelidi is one of the driving minds behind reverse engineering the brain and developing the eventual AI that Intelligence Realm hopes to build. He didn't mean for this to be a recruiting speech, but he makes some good points:

SH: Hello. Could you please start by giving yourself and your project a brief introduction?

OA: Hi. My name is Ovidiu Anghelidi and I am working on a distributed computing project involving thousands of computers in the field of artificial intelligence. Our goal is to develop a system that can perform automated research.

What drew you to this project?

During my adolescence I tried to understand the nature of questions. I used questions extensively as a learning tool. That drove me to search for better methods of understanding. After looking at all kinds of methods, I kinda felt that understanding creativity is a worthier pursuit. Applying various methods of learning and understanding is a fine job, but finding outstanding solutions requires much more than that. For a short while I tried to understand how creativity works and what exactly it is. I found out that there is not much work done on this subject, mainly because it is an overlapping concept. The search for creativity led me to the field of AI. Because one of the past presidents of the American Association of Artificial Intelligence dedicated an entire issue to this subject, I started pursuing that direction. I looked into the field of artificial intelligence for a couple of years, and at some point I was reading more and more papers that touched on the subject of cognition and the brain, so I looked briefly into neuroscience. After I read an introductory book about neuroscience, I realized that understanding brain mechanisms is what I should have been doing all along, for the past 20 years. To this day I am pursuing this direction.

What’s your time table for success? How long till we have a distributed AI running around using your system?

I have been working on this project for about 3 years now, and I estimate that we will need another 7–8 years to finalize it. Nonetheless, we do not need that much time to be able to use some of its features. I expect to have some basic features working within a couple of months. Take, for example, the multiple-simulations feature. If we want to pursue various directions in different fields (i.e. mathematics, biology, physics) we will need to set up a simulation for each field. But we do not need to get to the end of the project to be able to run single simulations.

Do you think that Artificial Intelligence is a necessary step in the evolution of intelligence? If not, why pursue it? If so, does it have to happen at a given time?

I wouldn’t say necessary, because we don’t know what we are evolving towards. As long as we do not have the full picture from beginning to end, or cases from other species to compare our history to, we shouldn’t just assume that it is necessary.

We should pursue it with all our strength and understanding because soon enough it can give us a lot of answers about ourselves and this Universe. By soon I mean two or three decades. A very short time span, indeed. Artificial Intelligence will amplify our research efforts across all disciplines by a couple of orders of magnitude.

In our case it is a natural extension. Any species that reaches a certain level of intelligence would, at some point in time, start replicating and extending its natural capacities in order to control its environment. The human race has done that for the last couple of thousand years: we tried to replicate and extend our capacity to run, see, smell, and touch. Now it has reached thinking. We invented vehicles, television sets, and other devices, and we are now close to having artificial intelligence.

What do you think are important short term and long term consequences of this project?

We hope that in the short term we will create some awareness of the benefits of artificial intelligence technology. The longer term is hard to foresee.

How do you see Intelligence Realm interacting with more traditional research institutions? (Universities, peer reviewed Journals, etc)

Well… we will not be able to provide full details about the entire project because we are pursuing a business model, so that we can support the project in the future, so there is little chance of a collaboration with a university or other research institution. Down the road, as we reach a more advanced stage of development, we will probably forge some collaborations. For the time being this doesn't appear feasible. I am open to collaborations, but I can't see how that would happen.

I submitted some papers to a couple of journals in the past, but I usually receive suggestions that I should look at other journals, from other fields. Most of the work in artificial intelligence doesn’t have neuroscience elements and the work in neuroscience contains little or no artificial intelligence elements. Anyway, I need no recognition.

Why should someone join your project? Why is this work important?

If someone is interested in artificial intelligence, it might help them to have a different view of the subject and to see what components are being developed over time. I cannot tell how important this is for someone else. On a personal level, I can say that my work is important to me, and because an AI system will let me get answers to many questions, I am working on that. Artificial Intelligence will provide exceptional benefits to the entire society.

What should someone do who is interested in joining the simulation? What can someone do if they can’t participate directly? (Is there a “write-your-congressman” sort of task they could help you with?)

If someone is interested in joining the project they need to download the Boinc client from the http://boinc.berkeley.edu site and then attach to the project using the master Url for this project, http://www.intelligencerealm.com/aisystem. We appreciate the support received from thousands of volunteers from all over the world.

If someone can’t participate directly I suggest to him/her to keep an open mind about what AI is and how it can benefit them. He or she should also try to understand its pitfalls.

There is no write-your-congressman type of task. Mass education is key for AI success. This project doesn’t need to be in the spotlight.

What is the latest news?

We reached 14,000 computers and we simulated over 740 billion neurons. We are working on implementing a basic hippocampal model for learning and memory.

Anything else you want to tell us?

If someone considers the development of artificial intelligence impossible or too far into the future to care about, I can only tell him or her, “Embrace the inevitable”. The advances in the field of neuroscience are increasing rapidly. Scientists are thorough.

Understanding its benefits and pitfalls is all that is needed.

Thank you for your time and we look forward to covering Intelligence Realm as it develops further.

Thank you for having me.