
The Greenland Ice Sheet Melt: Irreversible Implications

It is widely accepted that the Greenland ice sheet is melting at an alarming rate, that the melt is accelerating, and that the process is effectively irreversible; when the sheet finally does melt, it will raise sea levels globally by some 7 meters. That figure discounts any contribution from the West Antarctic ice sheet, which could add a further 5 meters, and the longer-term risk from the East Antarctic ice sheet, which is losing mass at a rate of 57 billion tonnes per year and, if melted in its entirety, would raise sea levels by a further 60 meters.

In this light it is rather ‘cute’ that a site dedicated to existential risks to society is called the Lifeboat Foundation, when one of our less discussed risks is massive worldwide flooding of major coastal cities, ports and industries.

Why do we continue to grow our cities below a safe limit of, say, 10 meters above sea level, when cities are built to last thousands of years but could now be flooded within hundreds? How many times do we have to witness disaster scenarios such as the Oklahoma City floods before we contemplate this occurring irreversibly to hundreds of cities across the world? Is it feasible to take the approach of building large dams to preserve these cities, or is it a case of eventually evacuating and starting all over again? In the latter case, how do we safely contain the chemical and nuclear plants that would need to be abandoned, in a responsible and environmentally sound procedure?

Let’s be optimistic here: the Antarctic ice sheets are unlikely to disappear on time scales we need to worry about today, but the Greenland ice sheet is topical. Can it be considered an existential risk if the process takes hundreds of years and we can slowly step out of the way, even though so much of the infrastructure we rely on would be relinquished? Will we just gradually abandon our cities for higher ground as insurance companies refuse to cover properties in coastal flooding areas? Or will we rise to the challenge and take the first steps to create eco-bubbles and ever larger dams to protect cities?

I would like to hear others’ thoughts on this topic, particularly if anyone feels that the Greenland ice sheet situation is reversible…

Electro-magnetic Vortex phenomena & Industrial Implications…

I wouldn’t have paid much attention to the following topic except for the article appearing in an otherwise credible international news agency (MINA).

http://macedoniaonline.eu/content/view/17115/56/
http://wiki.answers.com/Q/What_is_the_gulf_of_aden_vortex

Whilst electro-magnetic disturbances occur naturally all the time, the suggestion that this one in particular allegedly arose through industrial practices (ionospheric research, wormhole research(??)) invites curiosity. If anyone on one of the advisory boards for the various science disciplines has a strong knowledge of electro-magnetic vortex-type features that can occur in nature, please explain the phenomena, whether there are any implications, and whether industry of any sort (in particular directed ionospheric heating) can cause such anomalies to appear from time to time.

I understand that there can be certain fluctuations and weakening in the build-up to magnetic pole reversals, for example (though please correct me if I’m wrong here). That aside, one may enjoy the alleged reaction of certain defense forces (surely a spoof), which is at least good satire on how leaders of men can often fear the unknown.

Verne, Wells, and the Obvious Future Part 2

I am taking the advice of a reader of this blog and devoting part 2 to examples of old school and modern movies and the visionary science they portray.

Things to Come 1936 — Event Horizon 1997
Things to Come was a disappointment to Wells, and Event Horizon was no less a disappointment to audiences. I found them both very interesting as showcases for some of the technology and social challenges to come, if a little off the mark in regards to the exact technology and explicit social issues. In the final scene of Things to Come, Raymond Massey asks if mankind will choose the stars. What will we choose? I find this moment very powerful; it is perhaps the most eloquent expression of the whole genre of science fiction. Event Horizon was a complete counterpoint: a horror movie set in space with a starship modeled after a gothic cathedral. Event Horizon had a rescue crew put in stasis for a high-G, several-month journey to Neptune on a fusion-powered spaceship. High acceleration and fusion bring H-bombs to mind, and though not portrayed, this propulsion system is in fact a most probable future. Fusion “engines” are old hat in sci-fi despite the near certainty that the only places fusion will ever work as advertised are in a bomb or a star. The Event Horizon, haunted and consigned to hell, used a “gravity drive” to achieve star travel by “folding space.” Interestingly, a recent concept for a black-hole-powered starship is probably the most accurate forecast of the technology that will be used for interstellar travel in the next century. While ripping a hole in the fabric of spacetime may be strictly science fantasy, for the next thousand years at least, small-singularity propulsion using Hawking radiation to achieve a high fraction of the speed of light is mathematically sound and the most obvious future.

https://lifeboat.com/blog/2012/09/only-one-star-drive-can-work-so-far

That is, if humanity avoids an outbreak of engineered pathogens or any one of several other threats to our existence in that time frame.

Hand in hand with any practical method of journeying to other star systems is the concept of the “sleeper ship.” As inevitable as the submarine or powered flight was in the past, the idea of putting human beings in cold storage would bring tremendous changes to society. Suspended animation using a cryopreservation procedure is by far the most radical and important global event possible, and perhaps probable, in the near future. The ramifications of a revivable whole-body cryopreservation procedure are truly incredible. Cryopreservation would be the most important event in the history of mankind; future generations would certainly mark it as the beginning of “modern” civilization. Though not taken seriously, any more than the possibility of personal computers once was, advances in medical technology make any movie depicting suspended animation quite prophetic.

The Thing 1951/Them 1954 — Deep Impact 1998/Armageddon 1998
These four movies were essentially about the same.…thing. Whether a space vampire not from Earth in the Arctic, mutated super-organisms underneath the earth, or a big whatever in outer space on a collision course with Earth, the subject was a monstrous threat to our world, the end of humankind on Earth being the common theme. The Lifeboat blog is about such threats, and The Thing and Them would also appeal to any fan of Barbara Ehrenreich’s book, Blood Rites. It is interesting that while we appreciate in a personal way what it means to face monsters or the supernatural, we just do not “get” the much greater threats only recently revealed by impact craters like Chicxulub. In this way these movies, dealing with instinctively and non-instinctively realized threats, have an important relationship to each other. And this connection extends to the more modern sci-fi creature features of past decades. Just how much The Thing and Them contributed to the greatest military sci-fi movie of the 20th century (Aliens, of course) will probably never be known. Director James Cameron once paid several million dollars out of court to sci-fi writer Harlan Ellison after admitting during an interview to using Ellison’s work, so he will not be making that mistake again. The second- and third-place honors, Starship Troopers and Predator, were the work of Dutch filmmaker Paul Verhoeven and director John McTiernan respectively.

While The Thing and Them still play well, and Deep Impact (directed by Mimi Leder) is a good flick with uncanny predictive elements such as a black president and a tidal wave, Armageddon is worthless. I mention this trash cinema only because it is necessary for comparison, and to applaud the 3 minutes when the cryogenic fuel transfer procedure is shown to be the farce that it is in actuality. Only one of the worst movie directors ever, or the space tourism industry, would parade such a bad idea before the public.
Ice Station Zebra 1968 — The Road 2009
Ice Station Zebra was supposedly based on a true incident. This cold war thriller featured Rock Hudson as the quintessential submarine commander and was a favorite of Howard Hughes. By this time a recluse, Hughes purchased a Las Vegas TV station so he could watch the movie over and over. For those who have not seen it, I will not spoil the sabotage sequence, which has never been equaled. I pair Ice Station Zebra and The Road because they make a fine quartet, or rather sextet, with The Thing/Them and Deep Impact/Armageddon.

The setting for many of the scenes in these movies is a wasteland of ice, desert, cometoid, or dead forest. While Armageddon is one of the worst movies ever made on a big budget, The Road must be one of the best on a small budget, if accuracy is a measure of best. The Road was a problem for the studio that produced it, and its release was delayed due to the reaction of the test audiences: all viewers left the theatre profoundly depressed. It is a shockingly realistic movie, and it disturbed me to the point where I started writing about impact deflection. The connection between Armageddon and The Road, two movies so different, is the threat and aftermath of an asteroid or comet impact. While The Road never specifies an impact as the disaster that ravaged the planet, it fits the story perfectly. Armageddon has a few accurate statements about impacts mixed in with ludicrous plot devices that make the story a bad experience for anyone concerned with planetary protection. It seems almost blasphemous and positively criminal to make such a juvenile for-profit enterprise out of an inevitable event that is as serious as serious gets. Do not watch it. Ice Station Zebra, on the other hand, is a must-see and is in essence a showcase of the only tools available to prevent The Road from becoming reality. Nuclear weapons and spacecraft, the very technologies that so many feared would destroy mankind, are the only hope of saving the human race in the event of an impending impact.

Part 3:
Gog 1954 — Stealth 2005
Fantastic Voyage 1966 — The Abyss 1989
And notable moments in miscellaneous movies.

Verne, Wells, and the Obvious Future Part 1

Steamships, locomotives, electricity; these marvels of the industrial age sparked the imagination of futurists such as Jules Verne. Perhaps no other writer or work inspired so many to reach the stars as did this Frenchman’s famous tale of space travel. Later developments in microbiology, chemistry, and astronomy would inspire H.G. Wells and the notable science fiction authors of the early 20th century.

The submarine, the aircraft, the spaceship, time travel, nuclear weapons, and even stealth technology were all predicted in some form by science fiction writers many decades before they were realized. The writers were not simply conjuring such wonders from fanciful thought or children’s rhymes. As science advanced in the mid 19th and early 20th centuries, the probable future developments this new knowledge would bring about were in some cases quite obvious. Though powered flight seems a recent miracle, it was long expected: hydrogen balloons and parachutes had been around for over a century, and steam propulsion went through a long gestation before ships and trains were driven by the new engines. Solid rockets were ancient, and even multiple stages to increase altitude had been in use by fireworks makers for a very long time before the space age.

Some predictions were seen to come about in ways far removed from, yet still connected to, their fictional counterparts. The U.S. Navy’s Nautilus, steam-driven under nuclear power, swam the ocean blue not long before rockets took men to the Moon. While Verne predicted an electric submarine, his notional Florida space gun never did take three men into space; however, there was a Canadian weapons designer named Gerald Bull who met his end while trying to build such a gun for Saddam Hussein. The insane Invisible Man of Wells took the form of invisible aircraft playing a less than human role in the insane game of mutually assured destruction. And a true time machine was found easily enough in the mathematics of Einstein: simply going fast enough through space will take a human being millions of years into the future. Traveling back in time, however, is still as much an impossibility as the anti-gravity Cavorite of The First Men in the Moon. Wells missed on occasion but was not far off with his story of alien invaders defeated by germs, except that we are the aliens, invading the natural world’s ecosystem with our genetically modified creations, and we could very well soon meet our end as a result.

While Verne’s Captain Nemo made war on the death merchants of his world with a submarine ram, our own more modern anti-war device was found in the hydrogen bomb, so destructive an agent that no new world war has been possible since nuclear weapons were stockpiled in the second half of the last century. Neither Verne nor Wells imagined the destructive power of a single missile submarine able to incinerate all the major cities of Earth. The dozens of such superdreadnoughts even now cruising in the icy darkness of the deep ocean prove that truth is often stranger than fiction. It may seem the golden age of predictive fiction has passed, as exceptions to the laws of physics prove impossible despite advertisements to the contrary. Science fiction has given way to science fantasy, and the suspension of disbelief possible in the last century has turned to disappointment and the distractions of whimsical technological fairy tales. “Beam me up” was simply a way to cut production costs for special effects, and warp drive the only trick that would make a one-hour episode work. Unobtainium and wishalloy, handwavium and technobabble: it has all watered down what our future could be into childish wish fulfillment and escapism.

The triumvirate of the original visionary authors of the last two centuries is completed with E.E. “Doc” Smith. With this less famous author the line between predictive fiction and science fantasy was first truly crossed and the new genre of “Space Opera” most fully realized. The film industry has taken Space Opera and run with it in the Star Wars franchise and the works of Canadian filmmaker James Cameron. Though of course quite entertaining, these movies showcase all that is magical and fantastical, and wrong, concerning science fiction as a predictor of the future. The collective imagination of the public has now been conditioned to violate the reality of what is possible through the violent maiming of basic scientific tenets. This artistic license was something Verne at least tried not to resort to, Wells trespassed upon more frequently, and Smith indulged in without reservation. Just as Madonna found the secret to millions by shocking a jaded audience into pouring money into her bloomers, the formula for ripping off the future has been discovered in the lowest kind of sensationalism. One need only attend a viewing of the latest Transformers movie or download Battlestar Galactica to appreciate that the entertainment industry has cashed in on the ignorance of a poorly educated society by selling intellect-decaying brain candy. It is cowboys vs. aliens and has nothing of value to contribute to our culture…well, on second thought, I did get watery-eyed when the young man died in Harrison Ford’s arms. I am in no way criticizing the profession of acting, and I value the talent of these artists; it is rather the greed that corrupts the ancient art of storytelling that I am unhappy with. Directors are not directors unless they make money, and I feel sorry that these incredibly creative people find themselves less than free to pursue their craft.

The archetype of the modern science fiction movie was 2001, and like many legendary screen epics, A Space Odyssey was not as original as the marketing made it out to be. In an act of cinema cold war, many elements were lifted from a Soviet movie. Even though the fantasy element was restricted to a single device in the form of an alien monolith, every artifice of this film has so far proven non-predictive. Interestingly, the propulsion system of the spaceship in 2001 was originally going to use atomic bombs, which are still, a half century later, the only practical means of interplanetary travel. Stanley Kubrick, fresh from Dr. Strangelove, was tired of nukes and passed on portraying this obvious future.

As with the submarine, airplane, and nuclear energy, the technology to come may be predicted with some accuracy if the laws of physics are not insulted but rather just rudely addressed. Though in some cases the line is crossed and what is rude turns disgusting. A recent proposal for a “NautilusX” spacecraft is one example of a completely vulgar denial of reality. Chemically propelled, with little radiation shielding, and exhibiting a ridiculous doughnut centrifuge, such advertising vehicles are far more dishonest than cinematic fabrications in that they deceive the public without the excuse of entertaining them. In the same vein, space tourism is presented as space exploration when in fact the obscene spending habits of the ultra-wealthy have nothing to do with exploration and everything to do with the attendant taxpayer-subsidized business plan. There is nothing to explore in Low Earth Orbit except the joys of zero-G bordellos. Rudely undressing by way of the profit motive is followed by a rude address to physics when the key private space scheme for “exploration” is exposed. This supposed key is a false promise of things to come.

While very large and very expensive Heavy Lift Rockets have been proven successful in escaping Earth’s gravitational field with human passengers, the inferior lift vehicles being marketed as “cheap access to space” are in truth cheap and nasty taxis to space stations going in endless circles. The flim-flam investors are basing their hopes of big profit on cryogenic fuel depots and transfer in space. Like the filling station every red-blooded American stops at to fill his personal spaceship with fossil fuel, depots are the solution to all the holes in the private space plan for “commercial space.” Unfortunately, storing and transferring hydrogen as a liquefied gas a few degrees above absolute zero in a zero-G environment has nothing in common with filling a car with gasoline. It will never work as advertised. It is a trick; a way to get those bordellos in orbit courtesy of taxpayer dollars. What a deal.

So what is the obvious future that our present level of knowledge presents to us when entertaining the possible and the impossible? More to come.

8D Problem Solving for Transhumanists

Transhumanists are into improvements, and many talk about specific problems; Nick Bostrom, for instance. However, Bostrom’s problem statements have been criticized for not necessarily being problems, and I think this is largely why one must consider the problem definition (see step #2 below).

Sometimes people talk about their “solutions” for problems, for instance this one in H+ Magazine. But in many cases they are actually talking about their ideas of how to solve a problem, or making science-fictional predictions. So if you surf the web, you will find a lot of good ideas about possibly important problems—but a lot of what you find will be undefined (or not very well defined) problem ideas and solutions.

These proposed solutions often do not attempt to find root causes or assume the wrong root cause. And finding a realistic complete plan for solving a problem is rare.

8D (Eight Disciplines) is a process used in various industries for problem solving and process improvement. The 8D steps described below could be very useful for transhumanists, not just for talking about problems but for actually implementing solutions in real life.

Transhuman concerns are complex not just technologically, but also socioculturally. Some problems are more than just “a” problem; they are a dynamic system of problems, and a process for problem solving by itself is not enough. There has to be management, goals, etc., most of which is outside the scope of this article. But first one should know how to deal with a single problem before scaling up, and 8D is a process that can be used on a huge variety of complex problems.

Here are the eight steps of 8D:

  1. Assemble the team
  2. Define the problem
  3. Contain the problem
  4. Root cause analysis
  5. Choose the permanent solution
  6. Implement the solution and verify it
  7. Prevent recurrence
  8. Congratulate the team

More detailed descriptions:

1. Assemble the Team

Are we prepared for this?

With an initial, rough concept of the problem, a team should be assembled to continue the 8D steps. The team will make an initial problem statement without presupposing a solution. They should attempt to define the “gap” (or error)—the big difference between the current problematic situation and the potential fixed situation. The team members should all be interested in closing this gap.

The team must have a leader; this leader makes agendas, synchronizes actions and communications, resolves conflicts, etc. In a company, the team should also have a “sponsor”, who is like a coach from upper management. The rest of the team is assembled as appropriate; this will vary depending on the problem, but some general rules for a candidate can be:

  • Has a unique point of view.
  • Logistically able to coordinate with the rest of the team.
  • Is not committed to preconceived notions of “the answer.”
  • Can actually accomplish change that they might be responsible for.

The size of an 8D team (at least in companies) is typically 5 to 7 people.

The team should be justified. This matters most within an organization that is paying for the team; however, even a group of transhumanists out in the wilds of cyberspace will have to defend themselves when people ask, “Why should we care?”

2. Define the Problem

What is the problem here?

Let’s say somebody throws my robot out of an airplane, and it immediately falls to the ground and breaks into several pieces. This customer then informs me that this robot has a major problem when flying after being dropped from a plane and that I should improve the flying software to fix it.

Here is the mistake: The problem has not been properly defined. The robot is a ground robot and was not intended to fly or be dropped out of a plane. The real problem is that a customer has been misinformed as to the purpose and use of the product.

When thinking about how to improve humanity, or even how to merely improve a gadget, you should consider: Have you made an assumption about the issue that might be obscuring the true problem? Did the problem emerge from a process that was working fine before? What processes will be impacted? If this is an improvement, can it be measured, and what is the expected goal?

The team should attempt to grok the issues and their magnitude. Ideally, they will be informed with data, not just opinions.

Just as with medical diagnosis, the symptoms alone are probably not enough input. There are various ways to collect more data, and which methods you use depends on the nature of the problem. For example, one method is the 5 W’s and 2 H’s:

  • Who is affected?
  • What is happening?
  • When does it occur?
  • Where does it happen?
  • Why is it happening (initial understanding)?
  • How is it happening?
  • How many are affected?

For humanity-affecting problems, I think it’s very important to define what the context of the problem is.

3. Contain the Problem

Containment

Some problems are urgent, and a stopgap must be put in place while the problem is being analyzed. This is particularly relevant for problems such as product defects which affect customers.

Some brainstorming questions are:

  • Can anything be done to mitigate the negative impact (if any) that is happening?
  • Who would have to be involved with that mitigation?
  • How will the team know that the containment action worked?

Before deploying an interim expedient, the team should have asked and answered these questions (they essentially define the containment action):

  • Who will do it?
  • What is the task?
  • When will it be accomplished?

A canonical example: You have a leaky roof (the problem). The containment action is to put a pail underneath the hole to capture the leaking water. This is a temporary fix until the roof is properly repaired, and mitigates damage to the floor.

Don’t let the bucket of water example fool you—containment can be massive, e.g. corporate bailouts. Of course, the team must choose carefully: Is the cost of containment worth it?

4. Root Cause Analysis

There can be many layers of causation

Whenever you think you have an answer to a problem, ask yourself: Have you gone deep enough? Or is there another layer below? If you implement a fix, will the problem grow back?

Generally in the real world events are causal. The point of root cause analysis is to trace the causes all the way back for your problem. If you don’t find the origin of the causes, then the problem will probably rear its ugly head again.

Root cause analysis is one of the most overlooked, yet most important, steps of problem solving. Even engineers often lose their way when solving a problem and jump right into a fix that later turns out to be a red herring.

Typically, driving to root cause follows one of these two routes:

  1. Start with data; develop theories from that data.
  2. Start with a theory; search for data to support or refute it.

Either way, team members must always keep in mind that correlation is not necessarily causation.

One tool to use is the 5 Why’s, in which you move down the “ladder of abstraction” by continually asking: “why?” Start with a cause and ask why this cause is responsible for the gap (or error). Then ask again until you’ve bottomed out with something that may be a true root cause.
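As a minimal sketch of how a 5 Why's chain might be recorded, here is a short Python example; the problem statement and answers reuse the hypothetical ground-robot scenario from step 2 and are purely illustrative, not a prescribed part of 8D.

def five_whys(problem, answers):
    """Walk down the ladder of abstraction; each answer explains the one above it."""
    print("Problem:", problem)
    for depth, answer in enumerate(answers, start=1):
        print(f"  Why #{depth}: {answer}")
    return answers[-1]  # candidate root cause; still needs verification with data

candidate_root = five_whys(
    "Customer's ground robot broke after delivery",
    [
        "It was dropped from an airplane",
        "The customer believed it could fly",
        "The customer was misinformed about the product's purpose",
        "Nothing in the sales process checks the customer's intended use",
        "No procedure exists for reviewing customer expectations against the spec",
    ],
)
print("Candidate root cause:", candidate_root)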

There are many other general purpose methods and tools to assist in this stage; I will list some of them here, but please look them up for detailed explanations:

  • Brainstorming: Generate as many ideas as possible, and elaborate on the best ideas.
  • Process flow analysis: Flowchart a process; attempt to narrow down what element in the flow chart is causing the problem.
  • Ishikawa: Use an Ishikawa (aka fishbone or cause-and-effect) diagram to try narrowing down the cause(s).
  • Pareto analysis: Generate a Pareto chart, which may indicate which cause (of many) should be fixed first.
  • Data analysis: Use trend charts, scatter plots, etc. to assist in finding correlations and trends.

And that is just the beginning—a problem may need a specific new experiment or data collection method devised.
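To give one concrete flavor of the tools above, here is a small Pareto-analysis sketch in Python using only the standard library; the defect categories and counts are invented for illustration, not taken from any real data set.

from collections import Counter

# Hypothetical tally of problem reports by suspected cause.
reports = Counter({
    "customer misuse": 42,
    "cracked chassis": 17,
    "loose connector": 9,
    "firmware crash": 6,
    "cosmetic damage": 2,
})

total = sum(reports.values())
cumulative = 0
print(f"{'cause':<20}{'count':>7}{'cum %':>8}")
for cause, count in reports.most_common():
    cumulative += count
    print(f"{cause:<20}{count:>7}{100 * cumulative / total:>7.1f}%")
# The few causes that account for most of the cumulative percentage
# are usually the ones worth driving to root cause first.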

Ideally you would have a single root cause, but that is not always the case.

The team should also come up with various corrective actions that address the root cause, to be selected and refined in the next step.

5. Choose the Permanent Solution

The solution must be one or more corrective actions that solve the cause(s) of the problem. Corrective action selection is additionally guided by criteria such as time constraints, money constraints, efficiency, etc.

This is a great time to simulate/test the solution, if possible. There might be unaccounted for side effects either in the system you fixed or in related systems. This is especially true for some of the major issues that transhumanists wish to tackle.

You must verify that the corrective action(s) will in fact fix the root cause and not cause bad side effects.

6. Implement the Solution and Verify It

This is the stage when the team actually sets the corrective action(s) into motion. But doing it isn’t enough; the team also has to check whether the solution is really working.

For some issues the verification is clear-cut. Other corrective actions have to be evaluated for effectiveness, for instance against a benchmark. Depending on the time scale of the corrective action, the team might need to add various monitors and/or controls to continually make sure the root cause stays squashed.

7. Prevent Recurrence

It’s possible that a process will revert back to its old ways after the problem has been solved, resulting in the same type of problem happening again. So the team should provide the organization or environment with improvements to processes, procedures, practices, etc. so that this type of problem does not resurface.

8. Congratulate the Team

Party time! The team should share and publicize the knowledge gained from the process as it will help future efforts and teams.

Image credits:
1. Inception (2010), Warner Bros.
2. Peter Galvin
3. Tom Parnell
4. shalawesome

Open Letter to Ray Kurzweil

Dear Ray;

I’ve written a book about the future of software. While writing it, I came to the conclusion that your dates are way off. I talk mostly about free software and Linux, but the argument has implications for things like how we can have driverless cars and other amazing things sooner. I believe that we could have had all the benefits of the singularity years ago if we had done things like start Wikipedia in 1991 instead of 2001. There was no technology in 2001 that we didn’t have in 1991; it was simply a matter of starting an effort that allowed people to work together.

Proprietary software and a lack of cooperation among our software scientists has been terrible for the computer industry and the world, and its greater use has implications for every aspect of science. Free software is better for the free market than proprietary software, and there are many opportunities for programmers to make money using and writing free software. I often use the analogy that law libraries are filled with millions of freely available documents, and no one claims this has decreased the motivation to become a lawyer. In fact, lawyers would say that it would be impossible to do their job without all of these resources.

My book is a full description of the issues but I’ve also written some posts on this blog, and this is probably the one most relevant for you to read: https://lifeboat.com/blog/2010/06/h-conference-and-faster-singularity

Once you understand this, you can apply your fame towards getting more people to use free software and Python. The reason so many know Linus Torvalds’s name is because he released his code as GPL, which is a license whose viral nature encourages people to work together. Proprietary software makes as much sense as a proprietary Wikipedia.

I would be happy to discuss any of this further.

Regards,

-Keith
—————–
Response from Ray Kurzweil 11/3/2010:

I agree with you that open source software is a vital part of our world, allowing everyone to contribute. Ultimately software will provide everything we need when we can turn software entities into physical products with desktop nanofactories (there is already a vibrant 3D printer industry, and the scale of key features is shrinking by a factor of a hundred in 3D volume each decade). It will also provide the keys to health and greatly extended longevity as we reprogram the outdated software of life. I believe we will achieve the original goals of communism (“from each according to their ability, to each according to their need”), which forced collectivism failed so miserably to achieve. We will do this through a combination of the open source movement and the law of accelerating returns (which states that the price-performance and capacity of all information technologies grows exponentially over time). But proprietary software has an important role to play as well. Why do you think it persists? If open source forms of information met all of our needs, why would people still purchase proprietary forms of information? There is open source music but people still download music from iTunes, and so on. Ultimately the economy will be dominated by forms of information that have value, and these two sources of information, open source and proprietary, will coexist.
———
Response back from Keith:
Free versus proprietary isn’t a question about whether only certain things have value. A Linux DVD has 10 billion dollars’ worth of software. Proprietary software exists for a similar reason that ignorance and starvation exist: a lack of better systems. The best thing my former employer Microsoft has going for it is ignorance about the benefits of free software. Free software gets better only as more people use it. Proprietary software is an inferior development model and anathema to science because it hinders people’s ability to work together. It has infected many corporations, and I’ve found that even PhDs who work for public institutions often write proprietary software.

Here is a paragraph from my writings I will copy here:

I start the AI chapter of my book with the following question: Imagine 1,000 people, broken up into groups of five, working on two hundred separate encyclopedias, versus that same number of people working on one encyclopedia. Which will be the best? This sounds like a silly analogy when described in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today.

Today, the research community has not adopted free software and shared codebases sufficiently. For example, I believe there are more than enough PhDs today working on computer vision, but there are 200+ different codebases plus countless proprietary ones. Simply put, there is no computer vision codebase with critical mass.

We’ve known approximately what a neural network should look like for many decades. We need “places” for people to work together to hash out the details. A free software repository provides such a place. We need free software, and for people to work in “official” free software repositories.

“Open source forms of information” I have found is a separate topic from the software issue. Software always reads, modifies, and writes data, state which lives beyond the execution of the software, and there can be an interesting discussion about the licenses of the data. But movies and music aren’t science and so it doesn’t matter for most of them. Someone can only sell or give away a song after the software is written and on their computer in the first place. Some of this content can be free and some can be protected, and this is an interesting question, but mostly this is a separate topic. The important thing to share is scientific knowledge and software.

It is true that software always needs data to be useful: configuration parameters, test files, documentation, etc. A computer vision engine will have lots of data, even though most of it is used only for testing purposes and little used at runtime. (Perhaps it has learned the letters of the alphabet, state which it caches between executions.) Software begets data, and data begets software; people write code to analyze the Wikipedia corpus. But you can’t truly have a discussion of sharing information unless you’ve got a shared codebase in the first place.

I agree that proprietary software is and should be allowed in a free market. If someone wants to sell something useful that another person finds value in and wants to pay for, I have no problem with that. But free software is a better development model and we should be encouraging / demanding it. I’ll end with a quote from Linus Torvalds:

Science may take a few hundred years to figure out how the world works, but it does actually get there, exactly because people can build on each others’ knowledge, and it evolves over time. In contrast, witchcraft/alchemy may be about smart people, but the knowledge body never “accumulates” anywhere. It might be passed down to an apprentice, but the hiding of information basically means that it can never really become any better than what a single person/company can understand.
And that’s exactly the same issue with open source (free) vs proprietary products. The proprietary people can design something that is smart, but it eventually becomes too complicated for a single entity (even a large company) to really understand and drive, and the company politics and the goals of that company will always limit it.

The world is screwed because while we have things like Wikipedia and Linux, we don’t have places for computer vision and lots of other scientific knowledge to accumulate. To get driverless cars, we don’t need any more hardware, we don’t need any more programmers, we just need 100 scientists to work together in SciPy and GPL ASAP!

Regards,

-Keith

Technology Readiness Levels for Non-rocket Space Launch

An obvious next step in the effort to dramatically lower the cost of access to low Earth orbit is to explore non-rocket options. A wide variety of ideas have been proposed, but it’s difficult to meaningfully compare them and to get a sense of what’s actually on the technology horizon. The best way to quantitatively assess these technologies is by using Technology Readiness Levels (TRLs). TRLs are used by NASA, the United States military, and many other agencies and companies worldwide. Typically there are nine levels, ranging from speculations on basic principles to full flight-tested status.

The system NASA uses can be summed up as follows:

TRL 1 Basic principles observed and reported
TRL 2 Technology concept and/or application formulated
TRL 3 Analytical and experimental critical function and/or characteristic proof-of-concept
TRL 4 Component and/or breadboard validation in laboratory environment
TRL 5 Component and/or breadboard validation in relevant environment
TRL 6 System/subsystem model or prototype demonstration in a relevant environment (ground or space)
TRL 7 System prototype demonstration in a space environment
TRL 8 Actual system completed and “flight qualified” through test and demonstration (ground or space)
TRL 9 Actual system “flight proven” through successful mission operations.

Progress towards achieving a non-rocket space launch will be facilitated by popular understanding of each of these proposed technologies and their readiness levels. This can serve to direct more work toward those methods that are the most promising. I think it is important to distinguish between options with acceleration levels within the range of human safety and those that would be useful only for cargo. Below I have listed some non-rocket space launch methods and my assessment of their technology readiness levels.

Spacegun: 6. The US Navy’s HARP Project launched a projectile to 180 km. With some level of rocket-powered assistance in reaching stable orbit, this method may be feasible for shipments of certain forms of freight.

Spaceplane: 6. Though a spaceplane prototype has been flown, this is not equivalent to an orbital flight. A spaceplane will need significantly more delta-v to reach orbit than a suborbital trajectory requires.

Orbital airship: 2. Though many subsystems have been flown, the problem of atmospheric drag on a full scale orbital airship appears to prevent this kind of architecture from reaching space.

Space Elevator: 3. The concept may be possible, albeit with major technological hurdles at the present time. A counterweight, such as an asteroid, needs to be positioned above geostationary orbit. The material of the elevator cable needs to have a very high tensile strength/mass ratio; no satisfactory material currently exists for this application. The problem of orbital collisions with the elevator has also not been resolved.

Electromagnetic catapult: 4. This structure could be built up the slope of a tall mountain to avoid much of the Earth’s atmosphere. Assuming a small amount of rocket power would be used after a vehicle exits the catapult, no insurmountable technological obstacles stand in the way of this method. The sheer scale of the project makes it difficult to develop the technology past level 4.
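As a rough illustration only, the assessments above can be tabulated and sorted in a few lines of Python; the TRL numbers simply restate my estimates from this post, while the cargo-only flags are my own loose guesses about acceleration limits and should not be read as established facts.

# TRL estimates from this post; "cargo_only" flags are rough assumptions
# about whether launch accelerations could be kept within human tolerance.
methods = {
    "Spacegun":                 {"trl": 6, "cargo_only": True},
    "Spaceplane":               {"trl": 6, "cargo_only": False},
    "Orbital airship":          {"trl": 2, "cargo_only": False},
    "Space elevator":           {"trl": 3, "cargo_only": False},
    "Electromagnetic catapult": {"trl": 4, "cargo_only": True},
}

# List the most mature options first.
for name, info in sorted(methods.items(), key=lambda kv: -kv[1]["trl"]):
    use = "cargo only (assumed)" if info["cargo_only"] else "possibly human-rated (assumed)"
    print(f"TRL {info['trl']}: {name} ({use})")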

Are there any ideas we’re missing here?

Critical Request to CERN Council and Member States on LHC Risks

Experts regard safety report on Big Bang Machine as insufficient and one-dimensional

International critics of the high energy experiments planned to start soon at the particle accelerator LHC at CERN in Geneva have submitted a request to the Ministers of Science of the CERN member states and to the delegates to the CERN Council, the supreme controlling body of CERN.

The paper states that several risk scenarios (which have to be described as global or existential risks) cannot currently be excluded. Under present conditions, the critics have to speak out against operation of the LHC.

The submission includes assessments from experts in fields markedly missing from the physicist-only LSAG safety report: risk assessment, law, ethics and statistics. Further weight is added by the fact that these are all university-level experts, from Griffith University, the University of North Dakota and Oxford University respectively. In particular, the critics point out that CERN’s official safety report lacks independence (all of its authors have a prior interest in the LHC running) and that it was written by physicists only, when modern risk-assessment guidelines recommend involving risk experts and ethicists as well.

As a precondition of safety, the request calls for a neutral and multi-disciplinary risk assessment and additional astrophysical experiments – Earth based and in the atmosphere – for a better empirical verification of the alleged comparability of particle collisions under the extreme artificial conditions of the LHC experiment and relatively rare natural high energy particle collisions: “Far from copying nature, the LHC focuses on rare and extreme events in a physical set up which has never occurred before in the history of the planet. Nature does not set up LHC experiments.”

Even under the greatly improved safety circumstances proposed above, big jumps in energy (a factor of three over present records is currently planned) should as a matter of principle be avoided unless previous results are carefully analyzed before each increase of energy.

The concise “Request to CERN Council and Member States on LHC Risks” (PDF with hyperlinks to the described studies) was submitted by several critical groups and is supported by well-known critics of the planned experiments:

http://lhc-concern.info/wp-content/uploads/2010/03/request-t…5;2010.pdf

The answer received so far does not address these arguments and studies but only repeats that, from the side of the operators, everything appears sufficient, as agreed by a Nobel Prize winner in physics. The LHC restart and record collisions at a factor of three higher energy are presently scheduled for March 30, 2010.

Official, detailed and readily understandable paper and communication, with many scientific sources, by ‘ConCERNed International’ and ‘LHC Kritik’:

http://lhc-concern.info/wp-content/uploads/2010/03/critical-…ed-int.pdf

More info:
http://lhc-concern.info/

Artificial brain ’10 years away’

By Jonathan Fildes
Technology reporter, BBC News, Oxford

A detailed, functional artificial human brain can be built within the next 10 years, a leading scientist has claimed.

Henry Markram, director of the Blue Brain Project, has already simulated elements of a rat brain.

He told the TED Global conference in Oxford that a synthetic human brain would be of particular use finding treatments for mental illnesses.

Around two billion people are thought to suffer some kind of brain impairment, he said.

“It is not impossible to build a human brain and we can do it in 10 years,” he said.

“And if we do succeed, we will send a hologram to TED to talk.”

‘Shared fabric’

The Blue Brain project was launched in 2005 and aims to reverse engineer the mammalian brain from laboratory data.

In particular, his team has focused on the neocortical column — repetitive units of the mammalian brain known as the neocortex.

Neurons

The team are trying to reverse engineer the brain

“It’s a new brain,” he explained. “The mammals needed it because they had to cope with parenthood, social interactions, complex cognitive functions.

“It was so successful an evolution from mouse to man it expanded about a thousand fold in terms of the numbers of units to produce this almost frightening organ.”

And that evolution continues, he said. “It is evolving at an enormous speed.”

Over the last 15 years, Professor Markram and his team have picked apart the structure of the neocortical column.

“It’s a bit like going and cataloguing a bit of the rainforest — how many trees does it have, what shape are the trees, how many of each type of tree do we have, what is the position of the trees,” he said.

“But it is a bit more than cataloguing because you have to describe and discover all the rules of communication, the rules of connectivity.”

The project now has a software model of “tens of thousands” of neurons — each one of which is different — which has allowed them to digitally construct an artificial neocortical column.

Although each neuron is unique, the team has found that the circuitry in different brains shares common patterns.

“Even though your brain may be smaller, bigger, may have different morphologies of neurons — we do actually share the same fabric,” he said.

“And we think this is species specific, which could explain why we can’t communicate across species.”

World view

To make the model come alive, the team feeds the models and a few algorithms into a supercomputer.

“You need one laptop to do all the calculations for one neuron,” he said. “So you need ten thousand laptops.”

Computer-generated image of a human brain

The research could give insights into brain disease

Instead, he uses an IBM Blue Gene machine with 10,000 processors.

Simulations have started to give the researchers clues about how the brain works.

For example, they can show the brain a picture — say, of a flower — and follow the electrical activity in the machine.

“You excite the system and it actually creates its own representation,” he said.

Ultimately, the aim would be to extract that representation and project it so that researchers could see directly how a brain perceives the world.

But as well as advancing neuroscience and philosophy, the Blue Brain project has other practical applications.

For example, by pooling all the world’s neuroscience data on animals to create a “Noah’s Ark”, researchers may be able to build animal models.

“We cannot keep on doing animal experiments forever,” said Professor Markram.

It may also give researchers new insights into diseases of the brain.

“There are two billion people on the planet affected by mental disorder,” he told the audience.

The project may give insights into new treatments, he said.

The TED Global conference runs from 21 to 24 July in Oxford, UK.


Electron Beam Free Form Fabrication process — progress toward self-sustaining structures

For any assembly or structure, whether an isolated bunker or a self-sustaining space colony, to function perpetually, the ability to manufacture any of the parts necessary to maintain or expand the structure is an obvious necessity. Conventional metalworking techniques, consisting of forming, cutting, casting or welding, present extreme difficulties in size and complexity that would be hard to integrate into a self-sustaining structure.

Forming requires heavy, high-powered machinery to press metals into their final desired shapes. Cutting procedures, such as milling and lathing, also require large, heavy, complex machinery, and in addition waste tremendous amounts of material as bulk stock is cut away to reveal the final part. Casting metal parts requires complex mold construction and preparation procedures: not only does a negative mold of the final part need to be constructed, but the mold must be prepared, usually by coating it in ceramic slurries, before the molten metal is applied. Unless thousands of parts are required, the molds are a waste of energy, resources, and effort. Joining is a more flexible process, usually achieved by welding or brazing, and works by melting metal between two fixed parts in order to join them, but the fixed parts present the same manufacturing problems.

Ideally then, in any self-sustaining structure, metal parts should be constructed only in the final desired shape, without the need for a mold and with very limited need for cutting or joining. In a salient progressive step toward this necessary goal, NASA has demonstrated the innovative Electron Beam Freeform Fabrication (http://www.aeronautics.nasa.gov/electron_beam.htm) process. A rapid metal fabrication process, it essentially “prints” a complex three-dimensional object by feeding wire through a computer-controlled electron beam gun that melts it, building the part layer by layer and adding metal only where it is desired. It requires no molds and little or no tooling, and material properties are similar to those of other forming techniques. The complexity of the part is limited only by the imagination of the programmer and the dexterity of the wire feed and heating device.
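To make the layer-by-layer idea concrete, here is a small illustrative Python sketch, not NASA's actual EBF³ control software: it slices a simple made-up cone into layers and emits a perimeter deposition path for each one. The layer height, bead width and part geometry are invented values chosen only to show the structure of such a toolpath generator.

import math

LAYER_HEIGHT_MM = 1.5   # assumed height deposited per pass
BEAD_WIDTH_MM = 3.0     # assumed width of the deposited bead

def radius_at(z_mm):
    """Hypothetical part: a cone tapering from 40 mm to 10 mm radius over 60 mm of height."""
    return 40.0 - 30.0 * (z_mm / 60.0)

def layer_paths(part_height_mm):
    """Yield (z, waypoints) for each layer; waypoints trace the layer's perimeter."""
    z = 0.0
    while z <= part_height_mm:
        r = radius_at(z)
        # Approximate the circular perimeter with short straight moves,
        # roughly how a wire-feed head would be driven in practice.
        n = max(12, int(2 * math.pi * r / BEAD_WIDTH_MM))
        waypoints = [(r * math.cos(2 * math.pi * i / n),
                      r * math.sin(2 * math.pi * i / n)) for i in range(n + 1)]
        yield z, waypoints
        z += LAYER_HEIGHT_MM

for z, path in layer_paths(60.0):
    print(f"layer z={z:5.1f} mm: radius {radius_at(z):4.1f} mm, {len(path)} waypoints")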

Electron beam freeform fabrication process in action

According to NASA materials research engineer Karen Taminger, who is involved in developing the EBF³ process, extensive NASA simulations and modeling of long-duration space flights found no discernible pattern to the types of parts that failed, but the mass of the failed parts remained remarkably consistent throughout the studies. This is a favorable finding for in-situ parts manufacturing, and because of it the EBF³ team at NASA has been developing a desktop version. Taminger writes:

“Electron beam freeform fabrication (EBF³) is a cross-cutting technology for producing structural metal parts…The promise of this technology extends far beyond its applicability to low-cost manufacturing and aircraft structural designs. EBF³ could provide a way for astronauts to fabricate structural spare parts and new tools aboard the International Space Station or on the surface of the moon or Mars”

NASA’s Langley group working on the EBF³ process took their prototype desktop model for a ride on NASA’s microgravity-simulating aircraft and found that the process works just fine in microgravity, or even against gravity.

A structural metal part fabricated from EBF³

The advantages this system offers are significant. Near-net-shape parts can be manufactured, significantly reducing scrap. Unitized parts can be made: instead of multiple parts that need riveting or bolting, complex integral structures can be produced in final form; an entire spacecraft frame could be ‘printed’ in one sitting. The process also creates minimal waste and is highly energy- and feedstock-efficient, which is critical for self-sustaining structures. Metal can be placed only where it is desired, and the material and chemical properties can be tailored throughout the structure; the technical seminar features a structure with a smooth transitional gradient from one alloy to another. Structures can also be designed specifically for their intended purposes, without needing to be tailored to the manufacturing process; for example, stiffening ridges can be curvilinear, in response to the applied forces, instead of following the typical grid patterns that facilitate easy conventional manufacturing. Manufacturers, such as Sciaky Inc. (http://www.sciaky.com/64.html), are already jumping on the process.

In combination with similar 3D part ‘printing’ innovations in plastics and other materials, the difficulty of sustaining all the mechanical and structural components of a self-sustaining structure is plummeting drastically. Isolated structures could survive on a feedstock of scrap that is perpetually recycled as worn parts are replaced by free-form manufacturing and the old ones are melted down to make new feedstock. Space colonies could combine such manufacturing technologies and scrap feedstock with resource collection, creating a viable, minimal-volume, low-energy system that could perpetually repair the structure, or even build more. Technologies like these show that the atomic-level control promised by nanotechnology manufacturing proposals is not necessary to create self-sustaining structures, and that with minor developments of modern technology, self-sustaining structures could be built and operated successfully.
