
1) CERN officially attempted to produce ultraslow miniature black holes on earth. It has announced that it will continue doing so after the current upgrade break of more than a year.

2) Miniature black holes possess radically new properties according to published scientific results that have gone unchallenged in the literature for five years: no Hawking evaporation; unchargedness; invisibility to CERN’s detectors; an enhanced chance of being produced.

3) Of the millions of miniature black holes hoped to have been produced, at least one is bound to be slow enough to remain inside the earth and circulate there.

4) This miniature black hole circulates undisturbed – until it captures its first charged quark. From then on it grows exponentially, doubling in size in months at first and later in weeks.

5) As a consequence, after about 100 doublings, earth will start showing manifest signs of “cancer.” And she will – after first losing her atmosphere – die within months, leaving nothing but a 2-cm black hole in her wake that still keeps the moon on its course.

6) CERN’s roundabout safety argument of 2008, invoking the observed longevity of neutron stars as a guarantee for earth, was falsified on the basis of quantum mechanics in a paper published in mid-2008.

7) CERN’s second roundabout safety argument of 2008, invoking the observed longevity of white dwarf stars as a guarantee for earth, was likewise falsified in scientific papers, the first of which was published in mid-2008. CERN overlooked the enlarged-cross-section principle valid for ultraslow artificial, as compared to ultrafast natural, miniature black holes. The same effect is frighteningly familiar from the slow “cold” neutrons used in nuclear fission.

In summary, seven coincidences of “bad luck” were found to cooperate like Macbeth’s three fateful witches. CERN has accepted the blemish of not updating its safety report for five years so far. It also steadfastly refuses to hold the safety conference publicly requested on the web on April 18, 2008 (“Honey, I Shrunk the Earth”). Most significantly, CERN to this day refuses to heed a Cologne court’s advice, handed out to CERN’s representatives standing before it on January 27, 2011, to hold a “safety conference.”

Unless there is a safety guarantee that CERN keeps secret from the whole world, mentioning it only behind closed doors to persuade the World Press Council and the UN Security Council to refrain from doing their otherwise inalienable duty, the above-sketched scenario has no parallel in history.

Not a single scientific publication worldwide claims to falsify any of the above-sketched results (points 2–7). Only a very charismatic scientist might be able to call back the media and the mighty behind closed doors. I have a hunch who this could be. But I challenge him to hide no longer, so the world can see to whom she owes her hopefully beneficial fate.

Has there ever been a more unsettling story kept from the citizens of this planet?

For J.O.R.

This essay was also published by the Institute for Ethics & Emerging Technologies and by Transhumanity under the title “Is Price Performance the Wrong Measure for a Coming Intelligence Explosion?”.

Introduction

Most thinkers speculating on the coming of an intelligence explosion (whether via Artificial General Intelligence or Whole Brain Emulation/uploading), such as Ray Kurzweil [1] and Hans Moravec [2], typically use computational price performance as the best measure of an impending intelligence explosion (e.g. Kurzweil’s measure is the point at which enough processing power to satisfy his estimate of the basic processing power required to simulate the human brain costs $1,000). However, I think a lurking assumption lies here: that it won’t be much of an explosion unless it is available to the average person. I present a scenario below which may indicate that the imminence of a coming intelligence explosion is more impacted by basic processing speed – or instructions per second (IPS), regardless of cost or resource requirements per unit of computation – than it is by computational price performance. This scenario also yields some additional, counter-intuitive conclusions, such as that it may be easier (for a given amount of “effort” or funding) to implement WBE+AGI than to implement AGI alone – or rather, that using WBE as a mediator of an increase in the rate of progress in AGI may yield an AGI faster, or more efficiently per unit of effort or funding, than working on AGI directly.

Loaded Uploads:

Petascale supercomputers in existence today exceed the processing-power requirements estimated by Kurzweil, Moravec, and Storrs Hall [3]. If a wealthy individual were uploaded onto a petascale supercomputer today, they would have the same computational resources that, according to Kurzweil’s figures, the average person will have in 2019, when computational processing power equal to that of the human brain – which he estimates at 20 quadrillion calculations per second – will cost $1,000. While we may not yet have the necessary software to emulate a full human nervous system, the bottleneck is progress in the field of neurobiology rather than software performance in general. What is important is that the raw processing power estimated by some has already been surpassed – and the possibility of creating an upload may not have to wait for drastic increases in computational price performance.

The rate of signal transmission in electronic computers has been estimated to be roughly 1 million times as fast as the signal transmission speed between neurons, which is limited by the rate of passive chemical diffusion. Since the rate of signal transmission equates with the subjective perception of time, an upload would presumably experience the passing of time one million times faster than biological humans. If Yudkowsky’s observation [4] that this would be equivalent to experiencing all of history since Socrates every 18 “real-time” hours is correct, then such an emulation would experience on the order of 250 subjective years for every hour, or about 4 years a minute. A day would then equal roughly 6,000 years, a week roughly 42,000 years, and a month roughly 180,000 years.
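The arithmetic behind these figures is worth making explicit. A minimal sketch follows, assuming the million-fold speed-up cited above (an estimate, not a measured quantity); note that a strict million-fold speed-up yields about 114 subjective years per real hour, so a figure of 250 years per hour would imply a speed-up somewhat above two million.

```python
# Convert real elapsed time into subjective time experienced by an upload,
# given an assumed speed-up factor. The factor is the essay's estimate.
HOURS_PER_YEAR = 24 * 365.25

def subjective_years(real_hours: float, speedup: float = 1_000_000) -> float:
    """Subjective years experienced during `real_hours` of real time."""
    return real_hours * speedup / HOURS_PER_YEAR

print(round(subjective_years(1), 1))    # about 114 subjective years per real hour
print(round(subjective_years(24 * 7)))  # about 19,165 subjective years per real week
```

Because the conversion is linear, any revised speed-up estimate scales all of the derived figures proportionally.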

Moreover, these figures use the signal transmission speed of current, electronic paradigms of computation only, and thus the projected increase in signal-transmission speed brought about through the use of alternative computational paradigms, such as 3-dimensional and/or molecular circuitry or Drexler’s nanoscale rod-logic [5], can only be expected to increase such estimates of “subjective speed-up”.

The claim that the subjective perception of time and the “speed of thought” is a function of the signal-transmission speed of the medium or substrate instantiating such thought or facilitating such perception-of-time follows from the scientific-materialist (a.k.a. metaphysical-naturalist) claim that the mind is instantiated by the physical operations of the brain. Thought and perception of time (or the rate at which anything is perceived really) are experiential modalities that constitute a portion of the brain’s cumulative functional modalities. If the functional modalities of the brain are instantiated by the physical operations of the brain, then it follows that increasing the rate at which such physical operations occur would facilitate a corresponding increase in the rate at which such functional modalities would occur, and thus the rate at which the experiential modalities that form a subset of those functional modalities would likewise occur.

Petascale supercomputers have surpassed the rough estimates made by Kurzweil (20 petaflops, or 20 quadrillion calculations per second), Moravec (100,000 MIPS), and others. Most argue that we still need to wait for software improvements to catch up with hardware improvements. Others argue that even if we don’t understand how the operation of the brain’s individual components (e.g. neurons, neural clusters, etc.) converges to create the emergent phenomenon of mind – or even how such components converge so as to create the basic functional modalities of the brain that have nothing to do with subjective experience – we would still be able to create a viable upload. Nick Bostrom and Anders Sandberg, for instance, in their 2008 Whole Brain Emulation Roadmap [6], have argued that if we understand the operational dynamics of the brain’s low-level components, we can computationally emulate those components, and the emergent functional modalities of the brain, along with the experiential modalities of the mind, will emerge therefrom.

Mind Uploading is (Largely) Independent of Software Performance:

Why is this important? Because if we don’t have to understand how the separate functions and operations of the brain’s low-level components converge so as to instantiate the higher-level functions and faculties of brain and mind, then we don’t need to wait for software improvements (or progress in methodological implementation) to catch up with hardware improvements. Note that for the purposes of this essay “software performance” will denote the efficacy of the “methodological implementation” of an AGI or Upload (i.e. designing the mind-in-question, regardless of hardware or “technological implementation” concerns) rather than how optimally software achieves its effect(s) for a given amount of available computational resources.

This means that if the estimates for sufficient processing power to emulate the human brain noted above are correct then a wealthy individual could hypothetically have himself destructively uploaded and run on contemporary petascale computers today, provided that we can simulate the operation of the brain at a small-enough scale (which is easier than simulating components at higher scales; simulating the accurate operation of a single neuron is less complex than simulating the accurate operation of higher-level neural networks or regions). While we may not be able to do so today due to lack of sufficient understanding of the operational dynamics of the brain’s low-level components (and whether the models we currently have are sufficient is an open question), we need wait only for insights from neurobiology, and not for drastic improvements in hardware (if the above estimates for required processing-power are correct), or in software/methodological-implementation.

If emulating the low-level components of the brain (e.g. neurons) will give rise to the emergent mind instantiated thereby, then we don’t actually need to know “how to build a mind” – whereas we do in the case of an AGI (which for the purposes of this essay shall denote an AGI not based on the human or mammalian nervous system, even though an upload might qualify as an AGI according to many people’s definitions). This follows naturally from the conjunction of two premises: 1. the system we wish to emulate already exists, and 2. we can create (i.e. computationally emulate) the functional modalities of the whole system by understanding only the operation of its low-level components’ functional modalities.

Thus, I argue that a wealthy upload who did this could conceivably accelerate the coming of an intelligence explosion by such a large degree that it could occur before computational price performance drops to a point where the basic processing power required for such an emulation is available for a widely-affordable price, say for $1,000 as in Kurzweil’s figures.

Such a scenario could make basic processing power, or Instructions-Per-Second, more indicative of an imminent intelligence explosion or hard take-off scenario than computational price performance.

If we can achieve human whole-brain emulation even one week before we can achieve AGI (the cognitive architecture of which is not based on the biological human nervous system), and set this upload to work on creating an AGI, then such an upload would have, according to the “subjective-speed-up” factors given above, roughly 42,000 subjective years in which to succeed in designing and implementing an AGI for every real-time week that normatively biological AGI workers have to succeed.

The subjective-perception-of-time speed-up alone would be enough to greatly improve his/her ability to accelerate the coming of an intelligence explosion. Other features, like increased ease-of-self-modification and the ability to make as many copies of himself as he has processing power to allocate to, only increase his potential to accelerate the coming of an intelligence explosion.

This is not to say that we can run an emulation without any software at all. Of course we need software – but we may not need drastic improvements in software, or a reinventing of the wheel in software design.

So why should we be able to simulate the human brain without understanding its operational dynamics in exhaustive detail? Are there any other processes or systems amenable to this circumstance, or is the brain unique in this regard?

There is a simple reason why this claim seems intuitively doubtful. One would expect that we must understand the underlying principles of a given technology’s operation in order to implement and maintain it. This is, after all, the case for every other technology throughout the history of humanity. But the human brain is categorically different in this regard, because it already exists.

If, for instance, we found a technology and wished to recreate it, we could do so by copying the arrangement of its components. But in order to make any changes to it, or any variations on its basic structure or principles of operation, we would need to know how to build it, maintain it, and predictively model it with a fair amount of accuracy. In order to make any new changes, we need to know how such changes will affect the operation of the other components – and this requires being able to predictively model the system. If we don’t understand how changes will impact the rest of the system, then we have no reliable means of implementing any changes.

Thus, if we seek only to copy the brain, and not to modify or augment it in any substantial way, then it is wholly unique in the fact that we don’t need to reverse-engineer its higher-level operations in order to instantiate it.

This approach should be considered a category separate from reverse-engineering. It does involve a form of reverse-engineering at the scale we seek to simulate (e.g. neurons or neural clusters), but it lacks many features of reverse-engineering by virtue of the fact that we don’t need to understand the system’s operation at all scales. For instance, knowing the operational dynamics of the atoms composing a larger system (e.g. any mechanical system) wouldn’t necessarily translate into knowledge of the operational dynamics of its higher-scale components. The approach that mind-uploading falls under, in which reverse-engineering at a small-enough scale is sufficient to recreate a system (provided that we don’t seek to modify its internal operation in any significant way), I will call Blind Replication.

Blind Replication disallows any sort of significant modification, because if one doesn’t understand how processes affect other processes within the system, then one has no way of knowing how modifications will change other processes and thus the emergent function(s) of the system. We wouldn’t have a way to translate functional or optimization objectives into changes made to the system that would facilitate them. There are also liability issues, in that one wouldn’t know how the system would behave in different circumstances, and would have no guarantee of such systems’ safety or their vicarious consequences. So governments couldn’t be sure of the reliability of systems made via Blind Replication, and corporations would have no way of optimizing such systems so as to increase a given performance metric in an effort to increase profits, and indeed would be unable to obtain intellectual-property rights over a technology whose inner workings or “operational dynamics” they cannot describe.

However, government and private industry wouldn’t be motivated by such factors (that is, ability to optimize certain performance measures, or to ascertain liability) in the first place, if they were to attempt something like this – since they wouldn’t be selling it. The only reason I foresee government or industry being interested in attempting this is if a foreign nation or competitor, respectively, initiated such a project, in which case they might attempt it simply to stay competitive in the case of industry and on equal militaristic defensive/offensive footing in the case of government. But the fact that optimization-of-performance-measures and clear liabilities don’t apply to Blind Replication means that a wealthy individual would be more likely to attempt this, because government and industry have much more to lose in terms of liability, were someone to find out.

Could Upload+AGI be easier to implement than AGI alone?

This means that the creation of an intelligence with a subjective perception of time significantly greater than that of unmodified humans (what might be called Ultra-Fast Intelligence) may be more likely to occur via an upload than via an AGI, because the creation of an AGI is largely determined by increases in both computational processing and software performance/capability, whereas the creation of an upload may be determined by and large by processing power, and thus remain largely independent of the need for significant improvements in software performance or “methodological implementation.”

If the premise that such an upload could significantly accelerate a coming intelligence explosion (whether by using his/her comparative advantages to recursively self-modify his/herself, to accelerate innovation and R&D in computational hardware and/or software, or to create a recursively-self-improving AGI) is taken as true, it follows that even the coming of an AGI-mediated intelligence explosion specifically, despite being impacted by software improvements as well as computational processing power, may be more impacted by basic processing power (e.g. IPS) than by computational price performance — and may be more determined by computational processing power than by processing power + software improvements. This is only because uploading is likely to be largely independent of increases in software (i.e. methodological as opposed to technological) performance. Moreover, development in AGI may proceed faster via the vicarious method outlined here – namely having an upload or team of uploads work on the software and/or hardware improvements that AGI relies on – than by directly working on such improvements in “real-time” physicality.

Virtual Advantage:

The increase in subjective perception of time alone (if Yudkowsky’s estimate is correct, a ratio of 250 subjective years for every “real-time” hour) gives the upload a massive advantage. It would also likely allow him/her to counteract and negate any attempts made from “real-time” physicality to stop, slow or otherwise deter him/her.

There is another feature of virtual embodiment that could increase the upload’s ability to accelerate such developments. Neural modification, with which he could optimize his current functional modalities (e.g. what we coarsely call “intelligence”) or increase the metrics underlying them, thus amplifying his existing skills and cognitive faculties (as in Intelligence Amplification or IA), as well as creating categorically new functional modalities, is much easier from within virtual embodiment than it would be in physicality. In virtual embodiment, all such modifications become a methodological, rather than technological, problem. To enact such changes in a physically-embodied nervous system would require designing a system to implement those changes, and actually implementing them according to plan. To enact such changes in a virtually-embodied nervous system requires only a re-organization or re-writing of information. Moreover, in virtual embodiment, any changes could be made, and reversed, whereas in physical embodiment reversing such changes would require, again, designing a method and system of implementing such “reversal-changes” in physicality (thereby necessitating a whole host of other technologies and methodologies) – and if those changes made further unexpected changes, and we can’t easily reverse them, then we may create an infinite regress of changes, wherein changes made to reverse a given modification in turn creates more changes, that in turn need to be reversed, ad infinitum.

Thus self-modification (and especially recursive self-modification) toward the purpose of intelligence amplification into Ultraintelligence [7] is easier (i.e. necessitating a smaller technological and methodological infrastructure – that is, the required host of methods and technologies needed by something – and thus less cost as well) in virtual embodiment than in physical embodiment.

These recursive modifications not only further maximize the upload’s ability to think of ways to accelerate the coming of an intelligence explosion, but also maximize his ability to further self-modify towards that very objective (thus creating the positive feedback loop critical to I. J. Good’s intelligence-explosion hypothesis) – or in other words, maximize his ability to maximize his general ability in anything.

But to what extent is the ability to self-modify hampered by the critical feature of Blind Replication mentioned above – namely, the inability to modify and optimize various performance measures by virtue of the fact that we can’t predictively model the operational dynamics of the system-in-question? Well, an upload could copy himself, enact any modifications, and see the results – or indeed, make a copy to perform this change-and-check procedure. If the inability to predictively model a system made through the “Blind Replication” method does indeed problematize the upload’s ability to self-modify, it would still be much easier to work towards being able to predictively model it, via this iterative change-and-check method, due to both the subjective-perception-of-time speedup and the ability to make copies of himself.
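The iterative change-and-check procedure can be sketched as a simple keep-if-better loop. In this toy illustration the “mind” is just a vector of numbers and the benchmark an arbitrary score; all names (`mutate`, `benchmark`, `change_and_check`) are hypothetical stand-ins for illustration, not a proposal for how real neural modifications would be scored.

```python
# A minimal sketch of "change-and-check": copy the system, apply a candidate
# modification to the copy, score it, and keep the copy only if it improves.
import random

def benchmark(state):
    """Stand-in performance metric: higher is better (peak at [1.0, 1.0, ...])."""
    return -sum((x - 1.0) ** 2 for x in state)

def mutate(state, scale=0.1, rng=random):
    """Return a modified *copy*; the original state is never touched."""
    return [x + rng.uniform(-scale, scale) for x in state]

def change_and_check(state, steps=1000, rng=None):
    rng = rng or random.Random(0)          # fixed seed for reproducibility
    best, best_score = state, benchmark(state)
    for _ in range(steps):
        candidate = mutate(best, rng=rng)  # modify a copy...
        score = benchmark(candidate)       # ...and check the result
        if score > best_score:             # keep only strict improvements
            best, best_score = candidate, score
    return best, best_score

state, score = change_and_check([0.0, 0.0])
```

Because every candidate is evaluated on a copy, a failed modification is simply discarded: the original is never put at risk, which is exactly what the upload’s ability to copy himself buys.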

It is worth noting that it might be possible to predictively model (and thus make reliable or stable changes to) the operation of neurons, without being able to model how this scales up to the operational dynamics of the higher-level neural regions. Thus modifying, increasing or optimizing existing functional modalities (e.g. increasing synaptic density in neurons, or increasing the range of usable neurotransmitters, thus increasing the potential information density in a given signal or synaptic transmission) may be significantly easier than creating categorically new functional modalities.

Increasing the Imminence of an Intelligence Explosion:

So what ways could the upload use his/her new advantages and abilities to actually accelerate the coming of an intelligence explosion? He could apply his abilities to self-modification, or to the creation of a Seed-AI (or more technically a recursively self-modifying AI).

He could also accelerate its imminence vicariously by working on accelerating the foundational technologies and methodologies (or in other words the technological and methodological infrastructure of an intelligence explosion) that largely determine its imminence. He could apply his new abilities and advantages to designing better computational paradigms, new methodologies within existing paradigms (e.g. non-Von-Neumann architectures still within the paradigm of electrical computation), or to differential technological development in “real-time” physicality towards such aims – e.g. finding an innovative means of allocating assets and resources (i.e. capital) to R&D for new computational paradigms, or optimizing current computational paradigms.

Thus there are numerous methods of indirectly increasing the imminence (or the likelihood of imminence within a certain time-range, which is a measure with less ambiguity) of a coming intelligence explosion – and many new ones no doubt that will be realized only once such an upload acquires such advantages and abilities.

Intimations of Implications:

So… Is this good news or bad news? Like much else in this increasingly future-dominated age, the consequences of this scenario remain morally ambiguous. It could be both bad and good news. But the answer to this question is independent of the premises – that is, two can agree on the viability of the premises and reasoning of the scenario, while drawing opposite conclusions in terms of whether it is good or bad news.

People who subscribe to the “Friendly AI” camp of AI-related existential risk will be at once hopeful and dismayed. While it might increase their ability to create their AGI (or more technically their Coherent-Extrapolated-Volition Engine [8]), thus decreasing the chances of an “unfriendly” AI being created in the interim, they will also be dismayed by the fact that it may include (but not necessitate) a recursively-modifying intelligence, in this case an upload, to be created prior to the creation of their own AGI – which is the very problem they are trying to mitigate in the first place.

Those who, like me, see a distributed intelligence explosion (in which all intelligences are allowed to recursively self-modify at the same rate – thus preserving “power” equality, or at least mitigating “power” disparity [where power is defined as the capacity to effect change in the world or society] – and in which any intelligence increasing its capability at a faster rate than all others is disallowed) as a better method of mitigating the existential risk entailed by an intelligence explosion will also be dismayed. This scenario would allow one single person to essentially have the power to determine the fate of humanity – due to his massively increased “capability” or “power” – which is the very feature (capability disparity/inequality) that the “distributed intelligence explosion” camp of AI-related existential risk seeks to minimize.

On the other hand, those who see great potential in an intelligence explosion to help mitigate existing problems afflicting humanity – e.g. death, disease, societal instability, etc. – will be hopeful because the scenario could decrease the time it takes to implement an intelligence explosion.

I for one think it is highly likely that the advantages proffered by accelerating the coming of an intelligence explosion fail to outweigh the disadvantages incurred by the increased existential risk it would entail. That is, I think that the increase in existential risk brought about by putting so much “power” or “capability-to-effect-change” in the (hands?) of one intelligence outweighs the decrease in existential risk brought about by the accelerated creation of an Existential-Risk-Mitigating A(G)I.

Conclusion:

Thus, the scenario presented above yields some interesting and counter-intuitive conclusions:

  1. How imminent an intelligence explosion is, or how likely it is to occur within a given time-frame, may be more determined by basic processing power than by computational price performance, which is a measure of basic processing power per unit of cost. This is because as soon as we have enough processing power to emulate a human nervous system, provided we have sufficient software to emulate the lower level neural components giving rise to the higher-level human mind, then the increase in the rate of thought and subjective perception of time made available to that emulation could very well allow it to design and implement an AGI before computational price performance increases by a large enough factor to make the processing power necessary for that AGI’s implementation available for a widely-affordable cost. This conclusion is independent of any specific estimates of how long the successful computational emulation of a human nervous system will take to achieve. It relies solely on the premise that the successful computational emulation of the human mind can be achieved faster than the successful implementation of an AGI whose design is not based upon the cognitive architecture of the human nervous system. I have outlined various reasons why we might expect this to be the case. This would be true even if uploading could only be achieved faster than AGI (given an equal amount of funding or “effort”) by a seemingly-negligible amount of time, like one week, due to the massive increase in speed of thought and the rate of subjective perception of time that would then be available to such an upload.
  2. The creation of an upload may be relatively independent of software performance/capability (which is not to say that we don’t need any software, because we do, but rather that we don’t need significant increases in software performance or improvements in methodological implementation – i.e. how we actually design a mind, rather than the substrate it is instantiated by – which we do need in order to implement an AGI and which we would need for WBE, were the system we seek to emulate not already in existence) and may in fact be largely determined by processing power or computational performance/capability alone, whereas AGI is dependent on increases in both computational performance and software performance or fundamental progress in methodological implementation.
    • If this second conclusion is true, it means that an upload may be possible quite soon, considering that we’ve passed the basic estimates for processing requirements given by Kurzweil, Moravec and Storrs Hall, provided we can emulate the low-level neural regions of the brain with high predictive accuracy (and provided the claim that instantiating such low-level components will vicariously instantiate the emergent human mind, without needing to really understand how such components functionally converge to do so, proves true), whereas AGI may still have to wait for fundamental improvements to methodological implementation or “software performance.”
    • Thus it may be easier to create an AGI by first creating an upload to accelerate the development of that AGI’s creation, than it would be to work on the development of an AGI directly. Upload+AGI may actually be easier to implement than AGI alone is!


References:

[1] Kurzweil, R. (2005). The Singularity is Near. Penguin Books.

[2] Moravec, H. (1997). When will computer hardware match the human brain? Journal of Evolution and Technology, 1(1). Available at: http://www.jetpress.org/volume1/moravec.htm [Accessed 01 March 2013].

[3] Hall, J. S. (2006). Runaway Artificial Intelligence? Available at: http://www.kurzweilai.net/runaway-artificial-intelligence [Accessed 01 March 2013].

[4] Ford, A. (2011). Yudkowsky vs Hanson on the Intelligence Explosion – Jane Street Debate 2011. [Online video]. August 10, 2011. Available at: https://www.youtube.com/watch?v=m_R5Z4_khNw [Accessed 01 March 2013].

[5] Drexler, K. E. (1989). Molecular Manipulation and Molecular Computation. In NanoCon Northwest Regional Nanotechnology Conference, Seattle, Washington, February 14–17. Available at: http://www.halcyon.com/nanojbl/NanoConProc/nanocon2.html [Accessed 01 March 2013].

[6] Sandberg, A. & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap. Technical Report #2008–3. Available at: http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3…report.pdf [Accessed 01 March 2013].

[7] Good, I. J. (1965). Speculations Concerning the First Ultraintelligent Machine. Advances in Computers, 6.

[8] Yudkowsky, E. (2004). Coherent Extrapolated Volition. The Singularity Institute.

Medical science has changed humanity. It changed what it means to be human, what it means to live a human life. So many of us reading this (and at least one person writing it) owe their lives to medical advances, without which we would have died.

Life expectancy is now well over double what it was for the Medieval Briton, and knocking hard on triple’s door.

What for the future? Extreme life extension is no more inherently ridiculous than human flight or the ability to speak to a person on the other side of the world. Science isn’t magic – and ageing has proven to be a very knotty problem – but science has overcome knotty problems before.

A genuine way to eliminate or severely curtail the influence of ageing on the human body is not in any sense inherently ridiculous. It is, in practice, extremely difficult, but difficult has a tendency to fall before the march of progress. So let us consider what implications a true and seismic advance in this area would have on the nature of human life.


One absolutely critical issue that would surround a breakthrough in this area is the cost. Not so much the cost of research, but the cost of application. Once discovered, is it expensive to do this, or is it cheap? Do you just have to do it once? Is it a cure, or a treatment?

If it can be produced cheaply, and if you only need to do it once, then you could foresee a future where humanity itself moves beyond the ageing process.

The first and most obvious problem that would arise from this is overpopulation. A woman has about 30–35 years of life where she is fertile, and can have children. What if that were extended to 70–100 years? 200 years?
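The overpopulation worry is at bottom an argument about exponential growth, and a toy model makes the scale visible. Every number below is an illustrative assumption of mine (a flat rate of births per fertile year, calibrated so that a 30-year window comes out near replacement-level fertility); this is a sketch of the arithmetic, not a demographic projection:

```python
# Toy model: how widening the fertile window changes population growth.
# births_per_fertile_year = 0.07 is calibrated so a 30-year window gives
# ~2.1 children per woman, i.e. roughly replacement level.

def growth_per_generation(fertile_years, births_per_fertile_year=0.07):
    """Population multiplier per generation.

    children_per_woman / 2 approximates the per-generation multiplier,
    since each child has two parents.
    """
    children_per_woman = fertile_years * births_per_fertile_year
    return children_per_woman / 2.0

for window in (30, 70, 100, 200):
    g = growth_per_generation(window)
    print(f"{window:>3}-year window: x{g:.2f} per generation, "
          f"x{g**5:.0f} after five generations")
```

Under these assumptions, widening the window from 30 to 100 years turns a roughly stable population into one that more than triples each generation; compounded over five generations, that is the difference between near-stasis and a several-hundredfold increase.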

Birth control would take on a vastly more important role than it does today. But then, we’re not just dropping this new discovery into a utopian, liberal future. We’re dropping it into the real world, and in the real world there are numerous places where birth control is culturally condemned. I was born in Ireland, a Catholic nation, where families of 10 siblings or more are not in any sense uncommon.

What of Catholic nations – including some staunchly conservative, and extremely large Catholic societies in Latin America – where birth control is seen as a sin?

Of course, the conservatism of these nations might (might) solve this problem before it arises – the idea of a semi-permanent extension of life might be credibly seen as a deeper and more blasphemous defiance of God than wearing a condom.

But here in the West, the idea that we are allowed to choose how many children we have is a liberty so fundamental that many would baulk to question it.

We may have to.


There is another issue. What about the environmental impact? We’re already having a massive impact on the environment, and it’s not looking pretty. What if there were 10 times more of us? 100 times more? What about the energy consumption needs, in a world running out of petrol? The food needs? The living space? The household waste?

There are already vast flotillas of plastic waste the size of small nations that float across the surface of the Pacific. Carbon dioxide levels in the atmosphere have just topped 400 parts per million. We are pushing hard at the envelope of what the world is capable of sustaining, and a massive boost in population would only add to that ever-increasing pressure.

Of course, science might well sort out the answer to those things – but will it sort it out in time? The urgency of environmental science, and cultural change, suddenly takes on a whole new level of importance in the light of a seismic advance in addressing the problem of human ageing.

These are problems that would arise if the advance produced a cheap treatment that could (and would) be consumed by very large numbers of people.

But what if it wasn’t a cure? What if it wasn’t cheap? What if it was a treatment, and a very expensive one?

All of a sudden, we’re looking at a very different set of problems, and the biggest of them all centres on something Charlie Chaplin said in the speech he gave at the end of his film, The Great Dictator. It is a speech from the heart, and a speech for the ages, given on the eve of mankind’s greatest cataclysm to date, World War 2.

In fact, you’d be doing yourself a favour if you watched the whole thing, it is an astounding speech.


The quote is this:

“To those who can hear me, I say — do not despair.

The misery that is now upon us is but the passing of greed, the bitterness of men who fear the way of human progress. The hate of men will pass, and dictators die, and the power they took from the people will return to the people. And so long as men die, liberty will never perish.”

And so long as men die, liberty will never perish.

What if Stalin were immortal? And not just immortal, but immortally young?

Immortally vigorous, able to amplify the power of his cult of personality with his literal immortality.

This to me seems a threat of a very different kind, but of no less importance, than the dangers of overpopulation. That so long as men die, liberty will never perish. But what if men no longer die?

And of course, you could very easily say that those of us lucky enough to live in reasonably well-functioning democracies wouldn’t have to worry too much about this. It doesn’t matter if you live to be 1000, you’re still not getting more than 8 years of them in the White House.

But there is something in the West that would be radically changed in nature. Commercial empires.

What if Rupert Murdoch were immortal?

It doesn’t matter how expensive that treatment for ageing is. If it exists, he’d be able to afford it, and if he were able to buy it, he’d almost certainly do so.

If Fox News were run by an immortal business magnate, with several lifetimes’ worth of business experience and skill to know how to hold it all together, keep it going, keep it growing? What then?


Not perhaps the sunny utopia of a playground of immortals that we might hope for.

This is a different kind of issue. It’s not an external issue – the external impact of population on the environment, or the external need of a growing population to be fed. These problems might well sink us, but science has shown itself extremely adept at finding solutions to external problems.

What this is, is an internal problem. A problem of humanity. More specifically, the fact that extreme longevity would allow tyranny to achieve a level of entrenchment that it has so far never been capable of.

But then a law might be passed. Something similar to the USA’s 8 year term limit on Presidents. You can’t be a CEO for longer than 30 years, or 40 years, or 50. Something like that might help, might even become urgently necessary over time. Forced retirement for the eternally young.

Not an unproblematic idea, I’m sure you’ll agree. Quite the culture shock for Western societies loath to accept government intervention in private affairs.

But it is a new category of problem. A classic problem of humanity, amplified by immortality. The centralisation of control, power and influence in a world where the people it centres upon cannot naturally die.

This, I would say, is the most obvious knotty problem that would arise, for humanity, in the event of an expensive, but effective, treatment for ageing.

But then, let’s just take a quick look back at the other side of the coin. Is there a problem inherent in humanity that would be amplified were ageing to be overcome, cheaply, worldwide?

Let me ask you a question.

Do people, generally speaking, become more open to new things, or less open to new things, as they age?

Do older people – just in general terms – embrace change or embrace stasis?

Well, it’s very obvious that some older people do remain young at heart. They remain passionate, humble in their beliefs, they are open to new things, and even embrace them. Some throw the influence and resources they have accrued throughout their lifetimes into this, and are instrumental to the march of progress.

More than this, they add a lifetime of skill, experience and finesse to their passion, a melding of realism and hope that is one of the most precious and potent cocktails that humanity is capable of mixing.

But we’re not talking about the few. We’re talking about the many.

Is it fair to say that most older people take this attitude to change? Or is it fairer to say that older people who retain that passion and spark – who have not only retained it, but have spent a lifetime fuelling it into a great blaze of ability and success – are a minority?

I would say yes. They are incredibly precious, but part of that preciousness is the fact that they are not common.

Perhaps one day we will make our bodies forever young. But what of our spirit? What of our creativity?

I’m not talking about age-related illnesses like Parkinson’s, or Alzheimer’s disease. I’m talking about the creativity, passion and fire of youth.

The temptation of the ‘comfort zone’ for all human beings is a palpable one, and one that every person who lives well, who breaks the mold, who changes the future, must personally overcome.

Do the majority of people overcome it? I would argue no. And more than this, I would argue that living inside a static understanding of the world – even working to protect that understanding in the face of naked and extreme challenges from reality itself – is now, and has been throughout human history, the norm.

Those who break the mold, brave the opprobrium of the crowd, and look to the future with wonder and hope, have always been a minority.


Now add in the factor of time. The retreat into the comforting, the static and the known has a very powerful pull on human beings. It is also not a binary process, but an analogue process – it’s not just a case of you do or you don’t. There are degrees of retreat, extremes of intellectual conservatism, just as there are extremes of intellectual curiosity, and progress.

But which extremes are the more common? This matters, because if all people could live to 200 years old or more, what would that mean for a demographic shift in cultural desire away from change and toward stasis?

A worrying thought. And it might seem that in the light of all this, we should not seek to open the Pandora’s box of eternal life, but should instead stand against such progress, because of the dangers it holds.

But, frankly, this is not an option.

The question is not whether or not human beings should seek to conquer death.

The question is whether or not conquering death is possible.

If it is possible, it will be done. If it is not, it will not be.

But the obvious problem of longevity – massive population expansion – is something that is, at least in principle, amenable to other solutions arising from science as it is now practiced. Cultural change is often agonising, but it does happen, and scientific progress may indeed solve the issues of food supply and environmental impact. Perhaps not, but perhaps.

At the very least, these sciences take on a massively greater importance to the cohesion of the human future than they already have, and they are already very important indeed.

But there is another, deeper problem of a very different kind. The issue of the human spirit. If, over time, people (on average) become more calcified in their thinking, more conservative, less likely to take risks, or admit to new possibilities that endanger their understanding, then longevity, distributed across the world, can only lead to a culture where stasis is far more valued than change.

Pandora’s box is already open, and its name is science. Whether it is now, or a hundred years from now, if it is possible for human beings to be rendered immortal through science, someone is going to crack it.

We cannot flinch from the future. It would be churlish and naive to assume that such a seemingly impossible vision will forever remain impossible. Not after the century we just had, in which technological change ushered in a new kind of era, where the impossibilities of the past fell like wheat beneath a scythe.

Scientific progress amplifies the horizon of possible scientific progress. And we stand now at a time when what it means to be a human – something which has already undergone enormous change – may change further still, and in ways more profound than any of us can imagine.

If it can be done, it will be done. And so the only sane approach is to look with clarity at what we can see of what that might mean.

The external problems are known problems, and we may yet overcome them. Maybe. If there’s a lot of work, and a lot of people take a lot of issues a lot more seriously than they are already doing.


But there is a different kind of issue. An issue extending from human nature itself. Can we overcome, as a people, as a species, our fear, and the things that send us scurrying back from curiosity and hope into the comforting arms of wilful ignorance, and static belief?

This, in my opinion, is the deepest problem of longevity. Who wants to live forever in a world where young bodies are filled with withered souls, beaten and embittered with the frustrations of age, but empowered to set the world in stone to justify them?

But perhaps it was always going to come to this. That at some point technological advancement would bring us to a kind of reckoning. A reckoning between the forces of human fear, and the value of human courage.

To solve the external problems of an eternal humanity, science must do what science has done so well for so long – to delve into the external, to open up new possibilities to feed the world, and balance human presence with the needs of the Earth.

But to solve the internal problems of an eternal humanity, science needs to go somewhere else. The stunning advances in the understanding of the external world must begin to be matched with new ways of charting the deeps of human nature. The path of courage, of open-mindedness, of humility, and a willingness to embrace change and leave behind the comforting arms of old static belief systems – this is not a path that many choose.

But many more must choose it in a world of immortal people, to counterbalance the conservatism of those who fail the test, and retreat, and live forever.

Einstein lived to a ripe old age, and never lost his wonder. Never lost his humility, or his courage to brave the opprobrium and ridicule of his peers in the task he set himself: to chart the deep simplicities of the real, and know the mind of God. The failure of the human spirit is not written in the stars, and never will be.


We are none of us doomed to fail in matters of courage, curiosity, wonder or hope. But we are none of us guaranteed to succeed.

And as long as courage, hope and the ability to break new ground remain vague, hidden properties that we squeamishly refuse to interrogate, each new generation will have to start from scratch, and make their own choices.

And in a world of eternal humans, if any individual generation fails, the world will be paying that price for a very long time.

It is a common fear that if we begin to make serious headway into issues normally the domain of the spiritual, we will destroy the mystique of them, and therefore their preciousness.

Similar criticisms were, and sometimes still are, laid at the feet of Darwin’s work, and Galileo’s. But the fact is that an astronomer does not look to the sky with less wonder because of their deeper understanding, but with more.

Reality is both stunningly elegant, and infinitely beautiful, and in these things it is massively more amazing than the little tales of mystery humans have used to make sense of it since we came down from the trees.

In the face of a new future, where the consequences of human courage and human failure are amplified, the scientific conquest of death must be fused with another line of inquiry. The scientific pioneering of the fundamental dynamics of courage in living, and humility to the truth, over what we want to believe.

It will never be a common path, and no matter how clear it is made, or how wide it is opened, there will always be many who will never walk it.

But the wider it can be made, the clearer it can be made, the more credible it can be made as an option.

And we will need that option. We need it now.

And our need will only grow greater with time.

This essay was originally published at Transhumanity.

They don’t call it fatal for nothing. Infatuation with the fat of fate, duty to destiny, and belief in any sort of preordainity whatsoever – omnipotent deities notwithstanding – constitutes an increase in Existential Risk, albeit indirectly. If we think that events have been predetermined, it follows that we would think our actions make no difference in the long run and that we have no control over the shape of those futures still fetal. This scales to the perceived ineffectiveness of combating or seeking to mitigate existential risk for those who believe so fatalistically. Thus to combat belief in fate, and the resultant disillusionment in our ability to wreak roiling revisement upon the whorl of the world, is to combat existential risk as well.

It also works to undermine the perceived effectiveness of humanity’s ability to mitigate existential risk along another avenue. Belief in fate usually correlates with the notion that events are ordered with a reason or purpose in mind, as opposed to being haphazard and lacking a specific projected end. Thus believers-in-fate are not only more likely to doubt the credibility of claims that existential catastrophe could even occur (reasoning that if events have purpose and utility, and conform to a mindfully-created order, then they would be good things more often than bad things) but also to feel that if it were to occur, it would be for a greater underlying reason or purpose.

Thus, belief in fate indirectly increases existential risk both a. by undermining the perceived effectiveness of attempts to mitigate existential risk, deriving from the perceived ineffectiveness of humanity’s ability to shape the course and nature of events and effect change in the world in general, and b. by undermining the perceived likelihood of any existential risks culminating in humanity’s extinction, stemming from connotations of order and purpose associated with fate.

Belief in fate is not only self-curtailing, but also dehumanizing insofar as it stops us from changing, affecting and actualizing the world and causes us to think that we can have no impact on the course of events or control over circumstantial circumstances. This anecdotal point is rather ironic considering that Anti-Transhumanists often launch the charge that trying to take fate into our own hands is itself dehumanizing. They’re heading in an ass-forward direction.

While belief in predetermination-of-events is often associated with religion, most often with those who hold their deity to be omnipotent (as in the Abrahamic religious tradition), it can also be easily engendered by the association of scientific materialism (or metaphysical naturalism) with determinism and its connotations of alienation and seeming dehumanization. Memetic connotations of preordainity or predetermination, whether stemming from either religion or scientific-materialism, serve to undermine our perceived autonomy and our perceived ability to enact changes in the world. We must let neither curtail our perceived potential to change the world, both because the thrust towards self-determination and changing the world for the better is the heart of humanity and because perceived ineffectiveness at facilitating change in the world correlates with an indirect increase in existential risk by promoting the perceived ineffectiveness of humanity to shape events so as to mitigate such risks.

Having presented the reasoning behind the claim that belief in fate constitutes an indirect increase in existential risk, the rest of this essay will be concerned with a.) the extent to which ideas can be considered as real as physical entities, processes or “states-of-affairs”, namely for their ability to effect change and determine the nature and circumstance of such physical entities and processes, b.) a few broader charges against fate in general, and c.) possible ideohistorical sources of contemporary belief in fate.

The Ousting of Ousia:

Giddy Fortune’s furious fickle wheel,
That goddess blind,
That stands upon the rolling restless stone.
(Henry V, 3.3.27), Pistol to Fluellen — Shakespeare

Ethereal beliefs can have material consequences. We intuitively feel that ideas must have less of an impact on the world for their seeming incorporeality. This may be but a specter of Aristotle’s decision to ground essence in Ousia, or Substance, and the resultant emphasis, throughout the subsequent philo-socio-historical development of the Western World, on staticity and unchanging materiality as the platform for Truth and Being (i.e. essence) that it arguably promoted. Indeed, the Scholastic Tradition in medieval Europe tried to reconcile its theological system with Aristotle’s philosophic tradition more than any other. But while Aristotle’s emphasis on grounding being in Ousia is predominant, Aristotle also has his Telos, working through the distance of time to impart shape and meaning upon the action of the present. Indeed, we do things for a purpose; the concerted actions contributing to my writing these words are not determined in stepwise fashion, inherent in the parts, but by the end goal of communicating an idea to people, which shapes and to a large extent determines the parts and present actions that proceed along the way to that projected ideal. Aristotle was presumably no stranger to the limitations of parts, as his metaphysical system can be seen in large part as a reaction against Plato’s.

One would do well to recall that Plato grounded the eternality of being not in sod but in sky. Plato’s Ideal Forms were eternal because they were to be found nowhere in physicality, in contrast to Aristotle’s Ousia, which was eternal and lasting for being material rather than ethereal. Plato’s lofty realm of Ideas was realer than reality for being locatable nowhere therein, save as mere approximation. And while Plato’s conceptual gestalt did indeed gestate throughout certain periods of history, including Neo-Platonism, Idealism, Transcendentalism and Process Philosophy, one can argue that the notion of the reality of ideas failed to impact popular attitudes toward fate, destiny and predeterminism to the extent that Aristotle’s notion of Ousia did.

The Ideal Real or the Real Ideal?


My stars shine darkly over me:
the malignancy of my fate might
perhaps distemper yours.
(Twelfth Night, 2.1.3), Sebastian to Antonio — Shakespeare

I’ve thus far argued that Aristotle’s notion of Ousia as the grounds for Truth and Essence has promoted the infatuation with fate that seems pretty predominant throughout history, and that Plato’s Ideal Forms have deterred such physics-fat infatuation by emphasizing the reality of ideas, and thereby vicariously promoting the notion that ideas can have as large an impact on reality as substance and real action do.

If we act as though God is watching, are not all the emergent effects (on us) of his existence, the effects that would have been caused were he actually there watching, instantiated nonetheless, and with no less vehemence than if he really were watching? If a tribe refrains from entering a local area for fear of legends about a monster situated there, are they not as controlled and affected by that belief as they would be if such a monster actually existed? The idea of the monster is as real as otherwise because the tribesmen avoid it, just as though it were real. These examples serve to illustrate the point that ideas can be as real as real states-of-affairs, because by believing in their reality we can instantiate all the emergent effects that would have been present were such an idea a real “state-of-affairs”.

This notion has the potential to combat the sedentizing effects that belief in fate and destiny can engender, because it allows us to see not only that our ideas, with which we can affect circumstances and effect changes in the world, can have material impact, but also that objectives projected into the future can have a decided impact on circumstances in the present, insofar as we shape the circumstances of the present in response to that anticipated, projected objective. We do things for projected purposes which shall not exist until the actions carried out under the aegis of satisfying those purposes are, indeed, carried out. The objective doesn’t exist until we make it exist, and we must believe that it shall exist in order to facilitate the process of its creation. This reifies the various possible futures still waiting to be actualized, and legitimizes the reality of such possible futures. Thus Plato’s ideo-embryo of Ideal Forms constitutes a liberating potential, not only by making ideas (through which we shape the world) real, but also by reifying Telos and the anticipated ends and fetal futures through which we can enact the translation of such ideas into physical embodiment.

Yet Plato is not completely free of the blame for solidifying lame fate in the eyes of the world. The very eternality of his Forms at once serves to reify fate and promote public acceptance of it as well! Plato’s forms are timeless, changeless. What is time but a loosening of the strings on fortune’s sling? What is eternality but destiny determined and the fine print writ large? While Plato’s firm forms vilify fate they also valorize it by sharing with it memetic connotations of timelessness and changelessness.

So damn you Plato for spaciating time rather than temporalizing space, for making the ideal a form rather than a from and eternal rather than eturnatal, for urning where you should have turned and for grounding the ideal rather that aerating it as you should have. And damn you Aristotle — phlegmy future-forward blowback and simmerred reaction against Ur philosophic father — but a batch of Play-Doh bygone hardy and fissury — for totalizing in ontic aplomb the base base and bawdy body and for siding with ousia rather than insiding within nousia. Your petty chasm has infected the vector of our memetic trajectory with such damnbivalent gravity as to arrest our escapee velocity and hold us fast against the slow down and still to wall the wellness of our will. Your preplundurance with stuff has made your ideational kin seek suchness and understanding in what overlies the series of surfaces all the way down, without any gno of flow or recourse to the coursing middle that shifts its center and lifts its tangentail besides. Aristotle the Ptolemaic totalizer of cosmography by curation rather than creation, each moment a museum to be ripped asunder its supple matrix maternal and molested with scientific rigor-mortis in quiet dustless rooms. Being is but the jail-louse diminutive bastard-kid-brother of Becoming, which Heraclitus in his dark light saw and which Parmenides despite getting more limelight did not. But again, even Aristotle had his retro-causal final causes — the Telos televisualized…

Was Aristotle aristotally wrong, or did he just miss a right turn down the li(n)e?


Our wills and fates do so contrary run
That our devices still are overthrown;
Our thoughts are ours, their ends none of our own.
(Hamlet, 3.2.208), Player King — Shakespeare

Then again (…again?), Aristotle may not be as culpable as he was capable. While I argue that his notion of Ousia had predominantly reifying effects on people’s notions of the existence of fate and the irreality of ideas, thereby undermining our perceived ability to determine the conditions of our selves and the world, this may have been a consequence of promiscuous misinterpretation rather than underlying motivation. Aristotle is often allied with Parmenides for deifying Being over the Becoming of Heraclitus, but Aristotle’s notion of Ousia, when considered in contrast to Plato’s Forms (against which it can be seen as a reaction), may actually bear more similarities with Becoming-as-Being à la Heraclitus than with Being-as-Sole-Being à la Parmenides.

Plato’s Forms may have for Aristotle epitomized resolute eternality and unyielding predetermination. Indeed, essence connotes immateriality, idealism, and possibility; an airyness very easy to conflate with becoming by various semiotic channels. But for Plato essence – which he locates in his Ideal Forms – was almost antithetical to such attributes: a type of being, eternal and changeless. Aristotle’s Being or Ousia, however, grounds Truth and Essence in the parted parts, the particulate particular and the singular segment. His Ousia may have been an attempt, in reaction against the unmoving Forms of Plato, to ground truth in the diverse, the particular and the idiosyncratic rather than the airy eternal and skybound ground unflinching. Aristotle’s Ousia, then, may be more correlative to Becoming-as-Being in the sense in which Heraclitus meant it, and in accordance with the notion’s potential to reify the existence, value/dignity and effectiveness of our autonomy, individuality, and difference. Indeed, the reification of these ideals, threatened by any notion framing essence as changeless, may have been Aristotle’s main gain and underlying motivation.

This brief foray into the semiotic jungles of transhistorical memetics, and the ways in which notions formulated in Ancient Greece may have fermented throughout history to help form and shape our attitudes toward fate, destiny and predeterminism – and thereby our ability to effect changes in the world and to cast away the slings and clutched crutches of fate – serves to illustrate, in a self-reflective fit of meta, how notions wholly immaterial can still matter insofar as they shape our contemporary beliefs, desires, attitudes and ideals. The two notions briefly considered here, Plato’s Ideal Forms and Aristotle’s Ousia, have been considered in regard to the extent to which they shape contemporary belief in fate and predestination.

Conclusion: Inconclusivity is Key

My fate cries out,
And makes each petty artery in this body
As hardy as the Nemean lion’s nerve.
(Hamlet, 1.4.91), Hamlet — Shakespeare

Indeed, infatuation with fate constitutes an increase in Existential Risk by undermining the extent to which we perceive our usefulness and effectiveness in combatting Existential Risks in general, as well as by undermining the perceived likelihood of such risks causing serious harm and death or culminating in the extinction of humanity.

Belief in destiny is also dehumanizing and alienating. The only determinism fit for Man is self-determination, the self not in-and-of itself but within-and-for itself. The deterministic connotations inextricably associated with fate, destiny and preordainity are dehumanizing and epitomize the very antithesis of what constitutes humanity as such.

Combatting the dehumanizing and disenfranchising connotations of determinism is also imperative to increasing the public appeal of Transhumanism. It is easy to associate such connotations with technology, through an association of technology with determinism (in regard to both function and aesthetic), and since technology is very much emphasized in Transhumanism – one could even say is central to it – this should impel us to re-legitimize and to explicate the enchanting, mysterious, and wonder-full aspects of technology inherent in Transhumanist thinking and discourse. Technology is our platform for increased self-determination, self-realization and self-liberation. We can do the yet-to-be-possible with technology, and so technology should rightly be associated with the yet-to-be-determined, with the determinedly indeterminatal, the mysterious, the enchanting, and the awe-some. While its use as a tool of disenfranchisement and control is not impossible or nonexistent, its liberating, empowering and enchantment-instilling potentialities shouldn’t be undermined, or worse wholly ignored, as a result.

Whether in the form of determinism grounded in scientific materialism, or in the lofty lethargy of an omnipotent god doggedly determined to fix destiny with unflinching resolve, belief in fate increases existential risk by decreasing our perceived ability to effect change in the world and reshape our circumstances, as well as by decreasing the perceived likelihood of a source of existential risk culminating in humanity's extinction.

If all is fixedly viced, then where lies room to revise?

FUKUSHIMA.MAKES.JAPAN.DO.MORE.ROBOTS
Fukushima’s Second Anniversary…

Two years ago the international robot dorkosphere was stunned when, in the aftermath of the Tohoku Earthquake and Tsunami Disaster, there were no domestically produced robots in Japan ready to jump into the death-to-all-mammals radiation contamination situation at the down-melting Fukushima Daiichi nuclear power plant.

…and Japan is Hard at Work.
Suffice it to say, when Japan finds out its robots aren't good enough, JAPAN RESPONDS! For more on how Japan has addressed and is addressing the situation, have a jump on over to AkihabaraNews.com.

Oh, and here’s some awesome stuff sourced from TheRobotReport.com:



1. Thou shalt first guard the Earth and preserve humanity.

Impact deflection and survival colonies hold the moral high ground above all other calls on public funds.

2. Thou shalt go into space with heavy lift rockets with hydrogen upper stages and not go extinct.

The human race can only go in one of two directions, space or extinction; right now we are an endangered species.

3. Thou shalt use the power of the atom to live on other worlds.

Nuclear energy is to the space age as steam was to the industrial revolution; chemical propulsion is useless for interplanetary travel and there is no solar energy in the outer solar system.

4. Thou shalt use nuclear weapons to travel through space.

Physical matter can barely contain chemical reactions; the only way to effectively harness nuclear energy to propel spaceships is to avoid containment problems completely: with bombs.

5. Thou shalt gather ice on the Moon as a shield and travel outbound.

The Moon has water for the minimum 14-foot-thick radiation shield and is a safe place to light off a bomb propulsion system; it is the starting gate.

6. Thou shalt spin thy spaceships and rings and hollow spheres to create gravity and thrive.

Humankind requires Earth-normal gravity and Earth-normal radiation levels to travel for years through space; anything less is a guarantee of failure.

7. Thou shalt harvest the Sun on the Moon and use the energy to power the Earth and propel spaceships with mighty beams.

8. Thou shalt freeze without damage the old and sick and revive them when a cure is found; only an indefinite lifespan will allow humankind to combine and survive. Only with this reprieve can we sleep and reach the stars.

9. Thou shalt build solar power stations in space hundreds of miles in diameter and with this power manufacture small black holes for starship engines.

10. Thou shalt build artificial intellects and with these beings escape the death of the universe and resurrect all who have died, joining all minds on a new plane.
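The spin-gravity requirement of commandment 6 can be sketched numerically. The underlying relation is simple centripetal acceleration, a = ω²r; the radii and the roughly 2-rpm comfort threshold below are illustrative assumptions, not figures from this post:

```python
import math

def spin_gravity_rpm(radius_m: float, g_target: float = 9.81) -> float:
    """Rotation rate (rpm) needed for centripetal acceleration g_target
    at a given radius: a = omega^2 * r  =>  omega = sqrt(a / r)."""
    omega = math.sqrt(g_target / radius_m)   # angular speed in rad/s
    return omega * 60.0 / (2.0 * math.pi)    # convert rad/s to rpm

# Illustrative radii in metres (assumed values, not from the post).
for r in (56, 224, 500):
    print(f"radius {r:4d} m -> {spin_gravity_rpm(r):.2f} rpm for 1 g")
```

At about 2 rpm, often cited as a comfortable limit for avoiding motion sickness, the radius for a full 1 g works out to roughly 224 m, which is why spin habitats are drawn as large rings or, as in commandment 6, hollow spheres.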

YANKEE.BRAIN.MAP
The Brain Games Begin
Europe’s billion-Euro science-neuro Human Brain Project, mentioned here amongst machine morality last week, is basically already funded and well underway. Now the colonies over in the new world are getting hip, and they too have in the works a project to map/simulate/make their very own copy of the universe’s greatest known computational artifact: the gelatinous wad of convoluted electrical pudding in your skull.

The (speculated but not yet public) Brain Activity Map of America
About 300 different news sources are reporting that a Brain Activity Map project is outlined in the current administration’s to-be-presented budget, and will be detailed sometime in March. Hordes of journalists are calling it “Obama’s Brain Project,” which is stoopid, and probably only because some guy at the New Yorker did and they all decided that’s what they had to do, too. Or somesuch lameness. Or laziness? Deference? SEO?

For reasons both economic and nationalistic, America could definitely use an inspirational, large-scale scientific project right about now. Because seriously, aside from going full-Pavlov over the next iPhone, what do we really have to look forward to these days? Now, if some technotards or bible pounders monkeywrench the deal, the U.S. is going to continue that slide toward scientific… lesserness. So, hippies, religious nuts, and all you little sociopathic babies in politics: zip it. Perhaps, however, we should gently poke and prod the hard of thinking toward a marginally heightened Europhobia — that way they’ll support the project. And it’s worth it. Just, you know, for science.

Going Big. Not Huge, But Big. But Could be Massive.
Neither the Euro nor the American flavor is a Manhattan Project-scale undertaking, in the sense of urgency and motivational factors; they’re more like the Human Genome Project. Still, with clear directives and similar funding levels (€1 billion and $1–3 billion, respectively), they’re quite ambitious and potentially far more world-changing than a big bomb. Like, seriously, man. Because brains build bombs. But hopefully an artificial brain would not. Spaceships would be nice, though.

Practically, these projects are expected to expand our understanding of the actual physical loci of human behavioral patterns, get to the bottom of various brain pathologies, stimulate the creation of more advanced AI/non-biological intelligence — and, of course, the big enchilada: help us understand more about our own species’ consciousness.

On Consciousness: My Simulated Brain has an Attitude?
Yes, of course it’s wild speculation to guess at the feelings and worries and conundrums of a simulated brain — but dude, what if, what if one or both of these brain simulation map thingys is done well enough that it shows signs of spontaneous, autonomous reaction? What if it tries to like, you know, do something awesome like self-reorganize, or evolve or something?

Maybe it’s too early to talk personality, but you kinda have to wonder… would the Euro-Brain be smug, never stop claiming superior education yet voraciously consume American culture, and perhaps cultivate a mild racism? Would the ‘Merica-Brain have a nation-scale authority complex, unjustifiable confidence & optimism, still believe in childish romantic love, and overuse the words “dude” and “awesome?”

We shall see. We shall see.

Oh yeah, have to ask:
Anyone going to follow Ray Kurzweil’s recipe?

Project info:
[HUMAN BRAIN PROJECT - - MAIN SITE]
[THE BRAIN ACTIVITY MAP - $ - HUFF-PO]

Kinda Pretty Much Related:
[BLUE BRAIN PROJECT]

This piece originally appeared at Anthrobotic.com on February 28, 2013.

I continue to survey the available technology applicable to spaceflight and there is little change.

The remarkable near-impact and NEO flyby on the same day seem to fly in the face of the experts, who quoted the probability of such a coincidence as low on a scale of millennia. A recent exchange on a blog has given me the idea that perhaps crude is better. A much faster approach to a nuclear-propelled spaceship might be more appropriate.

Unknown to the public, there is such a thing as unobtanium. It carries the name of the country of my birth: Americium.

A certain form of Americium is ideal for a type of nuclear solid-fuel rocket. Called a Fission Fragment Rocket, it is straight out of a 1950s movie, with massive thrust at the limit of human G-tolerance. Such a rocket produces large amounts of irradiated material and cannot be fired inside, near, or at the Earth’s magnetic field. The Moon is the place to assemble, test, and launch any nuclear mission.

Such a Fission Fragment propelled spacecraft would resemble the original Tsiolkovsky space train, with a several-hundred-foot-long slender skeleton mounting these one-shot Americium boosters. The turn-of-the-century deaf schoolmaster continues to predict.

Each lamp-shade-spherical thruster has a programmed design balancing the length and thrust of the burn. After being expended, the boosters use a small secondary system to send themselves off in an appropriate direction, probably equipped with small sensor packages using the hot irradiated shell for an RTG. The frame that served as a car of the space train transforms into a pair of satellite panels. Being more an artist than an *engineer, I find the monoplane configuration pleasing to the eye as well as functional. These dozens, and eventually thousands, of dual-purpose boosters would help form a space warning net.

The front of the space train is a large plastic sphere partially filled with water sent up from the surface of a Robotic Lunar Polar Base. The spaceship would split apart on a tether to generate artificial gravity, with the lessening booster mass balanced by varying the lengths of tether with an intermediate reactor mass.

These piloted impact-threat interceptors would be manned by the United Nations Space Defense Force. All the nuclear powers would be represented… well, most of them. They would be capable of “fast missions” lasting only a month, or at the most two. They would be launched from underground silos on the Moon to deliver a nuclear weapon package toward an impact threat at the highest possible velocity, and so the fastest intercept time. These ships would come back on a ballistic course with all their boosters expended, to be rescued by recovery craft from the Moon upon return to the vicinity of Earth.

The key to this scenario is Americium-242. It is extremely expensive stuff. The only alternative is Nuclear Pulse Propulsion (NPP). The problem with bomb propulsion is the need for a humongous mass for the most efficient size of bomb to react with.
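The case for nuclear exhaust over chemical propulsion can be made concrete with the Tsiolkovsky rocket equation. The exhaust velocities and the mission delta-v below are rough order-of-magnitude figures assumed for illustration only, not data from this post:

```python
import math

def mass_ratio(delta_v: float, v_exhaust: float) -> float:
    """Tsiolkovsky rocket equation solved for the mass ratio:
    m_initial / m_final = exp(delta_v / v_exhaust)."""
    return math.exp(delta_v / v_exhaust)

DELTA_V = 30_000.0        # m/s, a fast interplanetary mission (illustrative)
V_CHEMICAL = 4_400.0      # m/s, hydrogen/oxygen engine (approximate)
V_FRAGMENT = 3_000_000.0  # m/s, fission-fragment exhaust (order of magnitude)

print(f"chemical:         mass ratio {mass_ratio(DELTA_V, V_CHEMICAL):.0f}")
print(f"fission fragment: mass ratio {mass_ratio(DELTA_V, V_FRAGMENT):.4f}")
```

A mass ratio in the hundreds means a chemical ship is almost entirely propellant, while the fission-fragment ship barely notices the burn; that is the arithmetic behind the earlier claim that chemical propulsion is useless for interplanetary travel.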

The logic tree then splits again, with two designs of bomb-propelled ship: the “Orion” and the “Medusa.” The Orion is the original design, using a metal plate and shock-absorbing system. The Medusa is essentially a giant woven-alloy parachute-and-tether system that replaces the plate with a much lighter “mega-sail.” In one of the few cases where compromise might bear fruit, the huge spinning UFO-type disc, thousands of feet across, would serve quite well to explore, colonize, and intercept impact threats. Such a ship would require a couple of decades to begin manufacture on the Moon.

Americium boosters could be built on Earth and inserted into lunar orbit with Human Rated Heavy Lift Vehicles (SLS), and a mission launched well within a ten-year Apollo-type plan. But the Americium infrastructure has to be available as a first step.

Would any of my hundreds of faithful followers be willing to assist me in circulating a petition?

*Actually I am neither an artist nor an engineer, just a wannabe pulp writer in the mold of Edgar Rice Burroughs.

Humanity’s wake-up call has been ignored, and we are probably doomed.

The Chelyabinsk event is a warning. Unfortunately, it seems to be a non-event in the great scheme of things, and that means the human race is probably also a non-starter. For years I had been hoping for such an event, seeing it as the start of a new space age. Just as Sputnik indirectly resulted in a man on the Moon, I predicted an event that would launch humankind into deep space.

Now I wait for ISON. Thirteen may be the year of the comet, and if that does not impress upon us the vulnerability of Earth to impacts, then only an impact will. If an impact throws enough particles into the atmosphere, no food will grow and World War C will begin; the C stands for cannibalism. If the impact hits the Ring of Fire, it may generate volcanic effects with the same result. If whatever hits Earth is big enough, it will render all life above the size of microbes extinct. We have spent trillions of dollars on defense, yet we are defenseless.
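For scale, the energy of a Chelyabinsk-class object can be estimated directly from the kinetic energy formula KE = ½mv². The mass and entry speed below are approximate published estimates, assumed here for illustration:

```python
def impact_energy_kilotons(mass_kg: float, speed_m_s: float) -> float:
    """Kinetic energy of an impactor expressed in kilotons of TNT
    (1 kt TNT = 4.184e12 joules)."""
    joules = 0.5 * mass_kg * speed_m_s ** 2
    return joules / 4.184e12

# Rough published estimates for the Chelyabinsk meteor (approximate):
# about 12,000 tonnes entering at about 19 km/s.
kt = impact_energy_kilotons(1.2e7, 1.9e4)
print(f"Chelyabinsk-scale airburst: roughly {kt:.0f} kt TNT equivalent")
```

That is on the order of 500 kt of TNT, dozens of Hiroshimas, from a rock only about 20 m across that nobody saw coming; larger objects scale with the square of their speed and the cube of their diameter.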

Our instinctive optimism bias continues to delude us with the idea that we will survive no matter what happens. Besides the impact threat there is the threat of an engineered pathogen. While naturally evolved epidemics always leave a percentage of survivors, a bug designed to be 100 percent lethal will leave none alive. And then there is the unknown: Earth changes, including volcanic activity, can also wreck our civilization. We go on as a species the same way we go on with our own lives, ignoring death for the most part. And that is our critical error.

The universe does not care if we thrive or go extinct. If we do not care then a quick end is inevitable.

I have given the world my best answer to the question. That is all I can do:

http://voices.yahoo.com/water-bombs-8121778.html?cat=15