
Paradise comes from the Greek paradeisos, “surrounded by walls”. In Madonna Laboris Mary labors in seclusion at the borders of Paradise, providing her scarf for souls to ascend behind its walls. “All day long I watch the gates of Paradise; I do not let anyone in, yet in the morning there are newcomers in Paradise,” Saint Peter complains to the Lord. The Lord and Peter make night rounds and see Mary with her scarf and the Lord bids Peter to “let (Mary) be”.

Attribution: Bonhams
Nikolai Konstantinovich Roerich (Russian, 1874–1947)
Madonna Laboris
signed with monogram and dated ‘1931’ (lower left)
tempera on canvas
84 × 124cm (33 1/16 × 48 13/16in)

That paradise means surrounded by walls rather than walls being something that surround paradise is particular. Paradise as adjective instead of paradise as noun. You can go to a place that is paradise, but you cannot go to paradise.

Many things in today’s world are surrounded by walls and we would not call them paradise. But if we were good students of etymology we would.

What does it mean to be paradise? Each time we enclose space, are we making paradise? The Earth then is paradise contained in a permeable wall of satellites …

The walls of paradise in Madonna Laboris are permeable too, perhaps in only one direction, although according to Christian mythology we know at least two who got out. Well, three, the third making his own paradise elsewhere (seen at the bottom of Madonna Laboris).

The epidermis, enclosing the body — paradise. Each cell with its membrane, paradise. Time with its calendar, also paradise.

Indeed there is a permeable quality to the walls of paradise; certainly paradise contains, but its boundaries also allow a passage through.

It becomes hard not to find that everything everywhere is somehow enclosed and by virtue of its enclosure would therefore be paradise. Is this perhaps how an adjective became a noun? Paradise describes everything and in describing everything it must then be everywhere, and so, logically follows the notion of place.

For what purpose did humans develop language? We say for communication but to communicate about what? In the beginning it must have been to direct the self toward a form of knowing that was not exactly knowledge, but an intention to generate specific referents for what was known.

Remembering, not romantically, that in the past, approaches to language, space and context were in the main more nuanced and reflective of a cosmic appreciation of reality — certainly that is not too far-fetched a generalization. Seeing the enclosures and enfoldings in space, and maybe time, did some philosopher or scientist or religious man (perhaps all embodied in one man) observe his world and the boundaries within it, natural and otherwise, that gave space contour and distinction, and did he then name space by its boundaries, since those boundaries defined and helped to give meaning to space itself? The boundaries acting as a form of knowing.

Is paradise the original meme for all space/time, and is that why today we conflate adjective and noun — paradise as definer of place and also place itself?

When Peter says he guards the boundaries of paradise, and yet nonetheless there is passage across them, could it be because the boundaries are only the markers of space? Maybe that is what the Lord hints at when he tells Peter to let things be.

Linked to paradise is salvation. Borrowing from Christian mythos again, we know in the apocryphal literature Jesus said (to paraphrase) there is no sin, only those who commit acts we call sinful. Like paradise, a boundary potentially pointing to expansive space, salvation references a condition of already being saved. As walls define the details of the space-time paradise, salvation notes the barriers to what is already a sacred state.

Here on Earth surrounded in space by space technology, the medium is the message and the message is the medium (Marshall McLuhan and Yoko Ono). To extend the analogy of media into all information: the materials and spaces through which we exist inform the condition of that existence while simultaneously that which exists and is conveyed becomes the vehicle of transmission itself.

By Avi Roy, University of Buckingham

In his essay “Fifty Years Hence”, Winston Churchill speculated, “We shall escape the absurdity of growing a whole chicken in order to eat the breast or wing, by growing these parts separately under a suitable medium.”

At an event in London today, the first hamburger made entirely from meat grown through cell culture will be cooked and consumed before a live audience. In June at the TED Global conference in Edinburgh, Andras Forgacs took a step even beyond Churchill’s hopes. He unveiled the world’s first leather made from cells grown in the lab.

These are historic events, ones that will shift the discussion of lab-grown meat from blue-skies science to a potential consumer product that may soon be found on supermarket shelves and in retail stores. And while some may perceive this development as a drastic shake-up in the world of agriculture, it really is part of the trajectory that agricultural technology is already following.

Creating abundance

While modern humans have been around for 160,000 years or so, agriculture only developed about 10,000 years ago, probably helping the human population to grow. A stable food source had tremendous impact on the development of our species and culture, as the time and effort once put towards foraging could now be put towards intellectual achievement and the development of our civilisation.

In recent history though, agricultural technology has developed with the goal of securing food supply. We have been using greenhouses to control the environment where crops grow. We use pesticides, fertilisers and genetic techniques to control and optimise output. We have created efficiencies in plant cultivation to produce more plants that yield more food than ever before.

These patterns in horticulture can be seen in animal husbandry too. From hunting to raising animals for slaughter, and from factory farming to the use of antibiotics, hormones and genetic techniques, meat production today is so efficient that we grow more, bigger animals faster than ever before. By 2012, the global herd had reached 60 billion land animals to feed 7 billion people.

The trouble with meat

Now, civilisation has come to a point where we are recognising that there are serious problems with the way we produce food. This mass produced food contributes towards our disease burden, challenges food safety, ravages the environment, and plays a major role in deforestation and loss of biodiversity. For meat production, in particular, manipulating animals has led to an epidemic of viruses, resistant bacteria and food-borne illness, apart from animal welfare issues.

But we may be seeing change brought by consumer demand. The public has started caring about the ethical, environmental and health impacts of food production. And beyond consumer demand for thoughtful products, ecological limits are forcing us to evaluate the way food is produced.

A damning report by the United Nations shows that today livestock raised for meat uses more than 80% of Earth’s agricultural land and 27% of Earth’s potable water supply. It produces 18% of global greenhouse gas emissions, and the massive quantities of manure produced heavily pollute water. Deforestation and degradation of wildlife habitats happen largely to create feed crops, and factory farming conditions are breeding grounds for dangerous disease.

Making everyone on the planet take up vegetarianism is not an option. While there is much merit to reducing (or rejecting) meat consumption, sustainable dietary changes in the Western world will be more than offset by the meat intake of the growing middle class in developing countries like China and India.

The future is cultured

The logical step in the evolution of humanity’s food production capacity is to make meat from cells rather than animals. After all, the meat we consume is simply a collection of tissues. So why grow whole animals when we can grow only the parts we eat?

By doing this we avoid slaughter, animal welfare issues and disease. This method, if commercialised, is also more sustainable: animals do not have to be raised from birth, and no resources are shunted towards non-meat tissues. Compared to conventionally grown meat, cultured meat would require up to 99% less land, 96% less water and 45% less energy, and would produce up to 96% fewer greenhouse gas emissions.

Even without modern scientific tools, we have been using bacteria, yeast and fungi for food purposes for hundreds of years. With recent advances in tissue engineering, culturing mammalian cells for meat production seems like a sensible advancement.

Efficiency has been the primary driver of agricultural developments in the past. Now it should be health, environment and ethics. We need cultured meat to go beyond proof of concept. We need it on supermarket shelves soon.

Avi Roy does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

This article was originally published at The Conversation.

The history of humans, short when considered in the light of all times, if such a consideration can be made, is a web of intricacies and intentions, of acts and non-acts, of silence and sound (internally and externally), of growth and decay. Although we do have records of this history, in the land, air and water, in objects, in ourselves, in text, despite the proliferation of data and information, particularly post-printing press, we still do not know everything and what we do not know outmeasures what we do and will always. So while we have stories and myths, what we have mostly is uncertainty.

Something about the human animal, who in the main is attached to petty things (reputation and praise, punishment and fear, egoic notions and material satisfactions), rejects uncertainty, rebels against a blank state. The process of logically manifesting an order in response to uncertainty is tabulated within the brain. But is the brain the seat of Man, or is it because over several millennia humanity has organized phenomenal existence primarily through brain activity that we now believe it to be the natural leadership in our lives? Is it possible that although it has led, it is not the (natural) leader? Does Man, each a vortex of inter-dimensional energy, operate optimally through one lead anyway?

Does intelligence permeate the entirety of Man’s being and the entirety of the known cosmic habitat? Is that intelligence being? Is that being existence? And is existence what is? And if this is true, and all that is, is, why this idea that the brain is the seat of intelligence? The brain is one known interpreter, receptacle for, perceiver of (and maybe also creator of) intelligence within the human biological cosmology. The brain is also a foe when not well aired, crafting for Man a separateness from phenomena, acting as the chief architect of his differentiation. The brain is more the functionary of the literal and the common, the go-to tool for navigating physical space and for generating concepts, including calculations. But is the brain the lord of Man, even with the pineal gland?

Man in wholeness and complexity should not be particularized piecemeal. One facet of human biology, like the brain, can only be understood through its relationship with the entire biological structure. Hierarchies of organs and functions within the human body can only lead to ultimate ignorance about not only the organ or function in question and the full ecosystem within which it resides, but also that ecosystem’s relationship with the outside, both the known and unknown, seen and unseen.

Today we know the ability of machines to outpace humans in carrying out certain functions that in humans are as far as we know manifested through the brain. There is also an intrigue with human biology’s seeming inability to regenerate and to decay into a state of supposed non-being. And yet we know also that energy cannot be destroyed and that the human body is an energetic species living in an energetic world. And so for the human being does death technically exist? Do we in fact ever have “death” in the universe?

Although a machine’s adroitness at managing many functions of the human brain more efficiently than a human can manage those functions is a stellar feat, it is localized and decontextualized from that function as it occurs in a human being and is therefore not an equivalent comparison to human use of thought. A human to operate in comprehensive intelligence may or may not want or need to perform brain-based functions as a machine performs them. The machine doing what it does does not therefore make it “smarter” or “better” than a human, it is simply able to enact specific functionality in certain instances according to a set standard.

To evoke smarter and better we must introduce measurement and also a set of criteria by which we evaluate. Why do we evaluate? By what measure? By what criteria?

How is it also that we define death? Is our common notion of death not defined from a time when a majority of humanity thought the body inert matter animated by spirit? Does this notion perhaps carry into today’s desire to evade death? Is it a mistaken concept of separateness (maybe spiritual separateness), a concept characterized through the brain’s activity, that is motioning humans toward an exercise spawned for the denial of death? Is this the brain operating in egoic selfishness calling for its own immortality? The immortality of the personality? Is an enduring personality immortality? Is memory immortality? Is accumulated and preserved experience immortality? And is the brain, the generator of Man’s fantasy of independence vis-à-vis the outside world, enhanced in certain functions by rapidly processing machines, the artifact through which humans can become immortal?

A full view of the future has to consider a huge range of time scales. Freeman Dyson pointed this out (as D. Hutchinson alerted me). I borrowed his idea in the following passage from my book, The Human Race to the Future, published by the Lifeboat Foundation.


Our journey into the future begins by asking what the next hundred years will be like. Call that century-long time frame the “first generation” of future history. After a baker’s dozen or so chapters we then move to the second generation — the next order of magnitude after a hundred — the next thousand years. The seventh generation then has a ten million year horizon, the very distant future. Beyond the seventh generation are time horizons above even ten million years. This “powers of ten” scaling of future history was used by well-known physicist Freeman Dyson in chapter 4 of his 1997 book, Imagined Worlds.

Technical update on the ebook edition: Many Kindle devices and reader software systems have a menu item for jumping to the table of contents, and another menu item for jumping to the “beginning” of a book, however that is defined. I found out how to build an ebook that defines these locations so that the menu items work. You can use basic html commands. To define the location of the table of contents, you can insert into the html code of the book, right where the table of contents begins, the following html command:

<a name="toc"></a>

And now the user can click on the device or reader software’s “table of contents” menu item and they go straight to the table of contents!

To define the “beginning” of the book where you, as the author, want users to go when they click the “beginning” menu item (title page? Chapter 1? You decide), just put the following html command at that location in the ebook’s html source:

<a name="start"></a>

…and now that works too!

Of course you can use MS Word, Dreamweaver, etc., instead of editing the raw html, but ultimately those editors do it by inserting the same html commands.
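Put together, a minimal skeleton of an ebook’s html source might look like the following. The headings and chapter structure here are invented for illustration; only the two anchor commands come from the text above. (One note: in HTML5 the name attribute on an anchor is obsolete, and an id attribute on any element serves the same fragment-target purpose, but the name form shown in this post is what older Kindle toolchains expect.)

```html
<html>
  <body>
    <h1>My Book Title</h1>

    <!-- "table of contents" menu item jumps here -->
    <a name="toc"></a>
    <h2>Table of Contents</h2>
    <ul>
      <li><a href="#chapter1">Chapter 1</a></li>
    </ul>

    <!-- "beginning" menu item jumps here; the author decides where -->
    <a name="start"></a>
    <h2><a name="chapter1"></a>Chapter 1</h2>
    <p>Opening text of the book...</p>
  </body>
</html>
```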

The imposition of compositional structure within the craft of writing was recently pointed out to me. As students we are told repeatedly to open, elaborate and conclude a writing work. This carries on into so-called professional life. Indeed, the questions that arise during the course of any given writing work are outside the scope of the work itself; the material of the work deals with facts and recommendations, which are based on our conclusions. To end a piece of professional or student work without conclusions, and with questions, would be seen as a lack of seriousness.

We believe time invested in investigation is only worthy if we emerge with answers. And the answers we are to have begin with our original questions and are influenced by the way we approach those questions. Yet we approach the questions knowing they will need to be answered, and so our opening approach is very limited. We not only formulate opening questions we feel we will have a good chance of answering, but our entire attention while looking at the question is focused on finding an answer. So where, then, is the originality in our thought? And where is the opportunity to explore the limitations of thought itself as it is applied to the complexity and urgency of matters in the world?

If my opening point of inquiry is designed to be something I know I can find an answer for, then certainly I have no opportunity to go beyond what I know to address it, not really, and so there is nothing new. And if I begin a problem knowing I will be judged on finding an answer for it, then I will necessarily limit or eliminate any point of fact or inquiry that takes me from that task. The generally accepted process and presentation of writing today, in an academic and professional context, is linear and monolithic.
We talk about complexity and interrelatedness but we judge, evaluate and reward a written approach to that complexity and interrelatedness according to how well it fits into what we already know and according to the standards we have already found to be acceptable. Because we are bound to our knowledge and our processes of merit through training, repetition, various forms of aggrandizement and institutional awareness, however subtle or overt, we disregard or penalize information and modalities that fall outside our realm of knowing. Therein, the places we go to fulfill our knowing may expand (geographically or otherwise) but the way we approach and arrive at knowing remains the same. Although some may develop original technical innovations, those technical innovations will be used as tools to serve the knowledge system that is already established within any given realm of inquiry.

Our assumptions and biases about knowledge creation are interwoven with our experiences, our interpretations of those experiences, and our identification with the experiences and interpretations. Patterns emerge and we craft a self through the mosaic and soon that mosaic can stand in for our self. When that mosaic of experience and interpretation is cultivated through authority and the authority of our own experience and sense of self, we will extend our sense of authority into the realm of that which we already know. In this we are setting up a subtle preoccupation with what we know and with the familiar way we arrive at knowledge while simultaneously we derive a prejudice against what we do not know and also any unknown means to cultivate the known.

For example, pretend I am a teacher with a PhD: many people have applauded my research, and I publish books, give famous lectures and have tenure at a prestigious school. I feel confident in my work and consider myself an authority in my field. A student comes along who does not know me and takes my class for the first time. She questions my logic and says my class is a bore. She tells me my exams do not test her knowledge of the subject but instead test her ability to repeat my version of the subject. She writes a paper calling into doubt the major premises of my field, to which I have contributed the most popularly followed lines of inquiry; she proposes an entirely new approach to the field and ends her paper with grand questions about the nature of intellectual thought.

How do I approach this? In a typical situation I would question the student’s credibility as a student. I would consider her far-fetched and incapable of understanding the subject matter. I would have trouble finding a way to give her a passing course score. She would be a problem to fix or to solve or to ignore. Never would I consider that perhaps she had a point. Why? I assumed the ascendancy of my own knowledge based on my own sense of authority. Because the student operated outside my realm of knowledge and outside my sense of appropriateness in the acquisition of knowledge, I decided she was wrong. Invisible to me are my own assumptions of authority, including my assumption that authority has validity. Even though I have a wide set of experiences related to a branch of knowledge, I am unable to see that those experiences are necessarily limited because I have only had a certain set of them, no matter how vaunted, and that knowledge itself is limited because it is always about what is already known. So I approach my student as if she is a problem instead of approaching her as a person with insight that may also be valid and should be explored.
If we use something that is already known to approach what is new, how can we really approach it? The new will consistently be framed according to its relationship or lack of relationship with what has been established. And as has already been stated, what has been established is where authority has been placed, including our reverence for all the things we have already authorized.

Many of us operate in this field of inquiry, discovery and selfhood, and it is apparent when we review our written forays into the realms of global problem-solving discourse. So often we conclude. So often we have answers and set approaches to solving problems. So often we solicit recommendations for action. But rarely do we ponder, except over that which we have relegated to philosophy. In the realms of activity (politics, business, economics, education, health, environment, etc.) we theorize action, take action or meet to form a new activity. We say events and circumstances are too urgent to stop for too much thought, but in our haste, our actions themselves lead to further reasons to have to meet again to reorient ourselves. Our writing becomes a part of this process. We write in order to validate our next action, and we guide that writing according to what we think that action should be. We rarely write to discover the appropriate terms upon which our action should be based. We rarely question the terms upon which our previous action has been based. We rarely inquire into our standards; we just try to find novel ways to meet them.

What if our grand questions about the world ended with, “I don’t know”? Would that harm us? Does not knowing have to be accompanied by feelings of panicked desperation? Must we think ourselves inert if we do not have answers? What if we started with “I don’t know”? What will we do with Palestine and Israel? I don’t know. What will we do about hunger and miseducation? I don’t know. How will we live peaceably, without war and conflict? I don’t know. Is not “I don’t know” a better place to start than our usual conclusions, ideals and ideas? How is referencing what has already happened and what has already been thought (and what has not worked) a correct way to address how to move forward from right now? Perhaps in the ground of not knowing we have more possibility to create something new. We can put aside our predispositions and knowledge and simply give matters our attention. Indeed this may take more time. Or it may take no time at all. But the lunches, dinners, breakfasts, meetings, flights and arrangements accompanying our usual fast way of gathering together, sometimes for several days or weeks, to swiftly arrive at answers also take time, and over the course of years have substantially little to show as far as solving our grand world problems. It is obvious we don’t know, by the overall state of global affairs, so saying we don’t know ought not be too challenging.

Let us all stop pretending. The urge to be right and definitive is ingrained in us. We adore confidence and conclusions, especially when they are accompanied by a new technology and somebody says the word science. We write long reports about everything we know and every state of being we think we should have, including bullet points for things we can do to be better. But rarely do we write about the world as it actually exists right now and how we have been utterly incapable of doing anything fundamentally different in it. This is not pessimism. Saying we are optimistic and having positive thoughts is not a substitute for critical inquiry. We are not going to smile ourselves into a better world. And just because we are smiling does not mean we care.

Our writing reflects this: a tome of high-sounding phrases, deferred promises and volumes of technical bureaucratic lexicon. To what end? Perhaps it is our desire to conclude that hampers us. Perhaps it is our desire to know. Perhaps it is desire itself, or desire in operation with other facets of our personality. But certainly we are operating within a certain structure, and we seem thus far unable to work ourselves beyond its limits.

Maybe we can free ourselves with our writing. I am not suggesting this as a method or as an exclusive approach. But so often we come to know through the words that we read, and too often it is only in the realm of fiction that we allow ourselves imagination, curiosity, the unknown. Let us introduce imagination into our non-fictive selves. When we seriously consider policy and law and action, can we remove ourselves from tradition? Can we start only with what is now? How else can we speak to the moment if not from the place it exists? Now is only informed by history if we are living in the past. This is not a call to replace an old ideology with a new one, or to discount traditions which have lasted because they are just. It is simply a question about how we might write differently about the circumstances in the world burning for our attention. It is an offer to approach our writing about the world’s most serious matters from a point of doubt regarding our own understanding. It is a somewhat diffident rejection of conclusions, and of the way conclusions as construct have been organized into our lives as authority: through the way we have been taught to write and to express ourselves through writing, and finally to arrive at knowledge of ourselves and the world through what we write and what we have read. It is a question about how writing affects our thought, how our thought affects our world, and how we remain ignorant of a process we have created ourselves and silently abide by.

(I feel an internal pressure to “wrap up” what I have written. But my intent was not to begin to reach a goal, an end. It was to explore a question. And for now that exploration is complete, although not in conclusion.)

Short Summary of a New Idea: Cryodynamics

Otto E. Rössler, Faculty of Science, University of Tübingen, Germany

Abstract

A brief history and description of cryodynamics is offered. While still in its infancy, it is already strong in basic findings and predictions. It is a classical science the quantum version of which still waits to be formulated. It is highly promising technologically. A new fundamental science is a rare event in history. The basic insight is to picture randomly moving hyperbolic tree trunks in Sinai’s “rolling tennis ball in an orchard game” (Harry Thomas’ term), but flipped upside down so that the trees are hollow funnels pointing downwards.

- — - -

Cryodynamics is a classical field which appears to be new. It is a sister discipline to thermodynamics and automatically has as many implications as the latter despite its belated discovery. So far only a few of its features have been elaborated. For example, its deterministic entropy function is identical to the Sackur-Tetrode equation as given by Diebner, but with inverted sign (“ectropy”). If confirmed, this allows for a combined entropic and ectropic model of the universe; all direction-of-time-bound models of the universe then lose their validity. The problem of black-hole recycling which poses itself in this case is still unsolved in spite of Hawking’s early stab.

Held against this big scenario, what is presently on hand is still limited. It is the discovery that if you subject a fast-moving low-energy classical particle to successive grazing-type encounters with attractive, rich-in-kinetic-energy particles, then the low-energy particle loses kinetic energy on average to the high-energy ones (“energetic capitalism”). This is very unexpected and paradoxical. Nevertheless the idea goes back to Zwicky in 1929 and Chandrasekhar in 1943, although it was not elaborated at the time.

The “miracle” is that if you invert the direction of time, the opposite behavior is implicit. All of the conceptual problems of thermodynamics are re-encountered. The second major feature is that the new phenomenon is numerically elusive for stiffness reasons. While the increasing disorder of entropy increase, valid in the repulsive case, is a numerically stable feature in statistical thermodynamics, the decreasing order of ectropy increase, valid in the attractive case, is not numerically stable: very minor numerical deviations suffice to destroy the ongoing decrease of entropy. This explains why, in the thousands of multi-particle simulations done so far in galactic dynamics, to mention only this subcase, the phenomenon was never encountered numerically.

Another reason for the lack of resonance up until now is the fact that thermodynamics has always been understood as a statistical theory, with probability-theoretic axioms employed to describe it. While this is not false, it eschews the underlying deterministic, chaos-theoretic mechanism. The intrinsic inaccuracy thereby incurred has not caused much damage in thermodynamics so far, but it cannot be carried over to cryodynamics: cryodynamics does not emerge without prior acknowledgement of deterministic chaos as its root. (This new fact strongly constrains the accuracy of quantum mechanics — backwards in time — which is quite unexpected.)

Let me explain the simplest example, which also worked numerically in the first two successful simulations so far. A fast-moving low-mass particle is subjected to encounters with a Newtonian potential trough into which it dips and out of which it climbs again. If the trough is periodically or nonperiodically approaching and receding (modulated in its depth), a net effect results: a loss of energy for the traversing fast particle. If we invert time after a while, the exact opposite occurs back to the initial point, from which the previous behavior resumes, now in the opposite direction of time.
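This simplest example can be sketched numerically. The toy below is not the published Sonnleitner or Movassagh setups: the Gaussian shape of the trough, the sinusoidal depth modulation and every parameter value are assumptions chosen purely for illustration, and since the effect is numerically delicate (as noted above), a single pass like this mainly demonstrates the machinery of measuring the kinetic-energy change across one encounter.

```python
import math

def simulate(v0=2.0, depth=1.0, eps=0.5, omega=3.0, width=1.0,
             x0=-8.0, dt=1e-3):
    """Send a unit-mass particle through a Gaussian potential trough
    V(x, t) = -D(t) * exp(-x^2 / (2 w^2)) whose depth
    D(t) = depth * (1 + eps * sin(omega * t)) is periodically
    modulated (a 'breathing' attractive trough).  Because V depends
    on time, the particle's energy need not be conserved."""
    def force(x, t):
        d = depth * (1.0 + eps * math.sin(omega * t))
        # F = -dV/dx for V = -d * exp(-x^2 / (2 w^2))
        return -d * x / width**2 * math.exp(-x**2 / (2 * width**2))

    x, v, t = x0, v0, 0.0
    ke_in = 0.5 * v0**2
    steps = 0
    # Classic 4th-order Runge-Kutta on (x, v); run until the particle
    # has left the trough region (with a step cap in case of trapping).
    while x < -x0 and steps < 200_000:
        k1x, k1v = v, force(x, t)
        k2x, k2v = v + 0.5*dt*k1v, force(x + 0.5*dt*k1x, t + 0.5*dt)
        k3x, k3v = v + 0.5*dt*k2v, force(x + 0.5*dt*k2x, t + 0.5*dt)
        k4x, k4v = v + dt*k3v,     force(x + dt*k3x,     t + dt)
        x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6.0
        v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
        t += dt
        steps += 1
    ke_out = 0.5 * v**2
    return ke_in, ke_out

ke_in, ke_out = simulate()
print(f"KE in: {ke_in:.6f}  KE out: {ke_out:.6f}  "
      f"change: {ke_out - ke_in:+.6f}")
```

A study in earnest would average the energy change over many encounters with randomized modulation phases and check the sign of the mean drift against the integration error, which is exactly where the stiffness problem described above bites.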

The best way to understand all this is to invert the sign of the potentials. Then the opposite phenomenon, familiar from statistical thermodynamics, occurs: The periodically modulated trough is now replaced by a periodically modulated mound or tree. It is obvious that the recurrent unequal increases and decreases in the height of the hyperbolic mound amount to a qualitatively different effect in their sums.

To see this, think of a ball running frictionlessly through a forest of (at first fixed) trees with softly rising flanks. The ball will from time to time climb up a little and come down again – without any net loss or gain of kinetic energy. Now let the trees move slowly at random (or periodically). Then the two cases – the tree approaching the path of the up-climbing particle, or receding from it – have different strengths (different mean heights). This explains dissipation. On inverting time after a while, the net gain becomes a net loss for the moving particle, until the initial condition is re-arrived at. Then the gaining streak sets in again, now in the new direction of time.

When we leave the repulsive case by inverting the tree stems into mirror-symmetric troughs, then the opposite thing happens to a ball running on the surface of this inverted landscape. This is the new phenomenon of cryodynamics, proved to the mental eye.

After this geometric proof, the numerical challenge clearly is on – especially so after the successful two cases published by Klaus Sonnleitner and Ramis Movassagh, respectively. The new science is waiting to be put on a broader computational basis.

Why is this important? The new cosmology that is implied is clearly not a sufficient motivation, given that almost everyone is happy with the old paradigm. So all that remains as a convincing reason for further research is an economically challenging application.

Such an application could be provided by ITER, a hot-fusion reactor based on the Tokamak design: a torus-shaped plasma, millions of degrees hot, magnetically confined in a metal ring. The plasma must not touch the (necessarily much colder) confining walls. This design is dynamically unstable by nature: the plasma tends to break out of the toroidal magnetic confinement and suddenly touch the wall somewhere, letting the overall temperature collapse. Despite decades of effort, no working prototype exists. The current hope, that after another quarter of a century the machine will work, is upheld with many billions of euros already sunk in. Here, cryodynamics can help in principle. The paradoxical option: apply a heat bath of even hotter attractive particles at the location of the budding instability. These hotter attractive particles – like the inverted tree trunks – will then cool the too-hot nucleons, curbing the budding local protrusion.

“Cooling by hotter attractive particles” is the essence of cryodynamics. The hotter particles could be electrons shot in concentrically toward the budding hot spot. This is no problem in principle, since even very much hotter electrons are easy to generate in small, dirigible-beam accelerators.

The idea was published under the title “Is hot fusion made feasible by the discovery of Cryodynamics?” in Advances in Intelligent Systems and Computing, Volume 192, pp. 1–4, Springer-Verlag 2013. It can still be patented, since no design details were mentioned. This is a very lucrative technological proposal. No country is interested so far, nor are the oil companies.

Acknowledgments

Thank you that I was allowed to tell you the whole story in as brief a form as I could. I thank Dan Stein, Eric Klien, Christophe Letellier, Nico Heller, Heinz Clement and Jozsef Fortagh for discussions. Paper presented at the “CQ Colloquium” of the University of Tübingen on June 28, 2013. For J.O.R. (Submitted to Nature.)

The arXiv blog on MIT Technology Review recently reported a breakthrough, ‘Physicists Discover the Secret of Quantum Remote Control’ [1], which led some to ask whether this could be used as an FTL communication channel. To appreciate the significance of the paper on Quantum Teleportation of Dynamics [2], one should note that it has already been determined that any influence propagating between the members of an entangled pair would have to travel *at least* 10,000 times faster than the speed of light [3]. The next big communications breakthrough?

Quantum Entanglement Visual

In what could turn out to be a major breakthrough for long-distance communications in space exploration, several problems would be resolved: if a civilization were eventually established on a star system many light years away (for example, on one of the recently discovered Goldilocks-zone super-Earths in the Gliese 667C system [4]), then communications back to people on Earth might after all be… instantaneous.

The implications do not stop there, however. As recently reported in The Register [5], researchers at the Hebrew University of Jerusalem in Israel have established that quantum entanglement can be used to send data across both TIME AND SPACE [6]. Their recent paper, ‘Entanglement Between Photons that have Never Coexisted’ [7], describes how photon-to-photon entanglement can be used to connect with photons in their past/future, opening up an understanding of how one might engineer technology to communicate instantaneously not just across space — but across space-time.

Whilst in the past many have questioned what benefits have been gained from quantum physics research, and in particular from large research projects such as the LHC, it would seem that the field of quantum entanglement may be one of the big pay-offs. Whilst it has yet to be categorically proven that quantum entanglement can be used as a communication channel, and the majority opinion dismisses it, one can expect much activity in quantum entanglement over the next decade. It may yet spearhead the next technological revolution.

[1] www.technologyreview.com/view/516636/physicists-discover-the…te-control
[2] Quantum Teleportation of Dynamics http://arxiv.org/abs/1304.0319
[3] Bounding the speed of ‘spooky action at a distance’ http://arxiv.org/abs/1303.0614
[4] http://www.universetoday.com/103131/three-potentially-habita…iese-667c/
[5] The Register — Biting the hand that feeds IT — http://www.theregister.co.uk/
[6] http://www.theregister.co.uk/2013/06/03/quantum_boffins_get_spooky_with_time/
[7] Entanglement Between Photons that have Never Coexisted http://arxiv.org/abs/1209.4191


Originally posted via The Advanced Apes

Through my writings I have tried to communicate ideas related to how unique our intelligence is and how it is continuing to evolve. Intelligence is the most bizarre of biological adaptations. It appears to be an adaptation of infinite reach. Whereas organisms can only be so fast and efficient when it comes to running, swimming, flying, or any other evolved skill, the same finite limits do not appear to apply to intelligence.

What does this mean for our lives in the 21st century?

First, we must be prepared to accept that the 21st century will not be anything like the 20th. All too often I encounter people who extrapolate expected change for the 21st century that mirrors the pace of change humanity experienced in the 20th. This will simply not be the case. Just as cosmologists are well aware of the bizarre accelerating expansion of the universe, so evolutionary theorists are well aware of the accelerating pace of techno-cultural change. This acceleration shows no signs of slowing down, and few models that incorporate technological evolution predict that it will.

The result of this increased pace of change will likely not just be quantitative. The change will be qualitative as well. This means that communication and transportation capabilities will not just become faster. They will become meaningfully different in a way that would be difficult for contemporary humans to understand. And it is in the strange world of qualitative evolutionary change that I will focus on two major processes currently predicted to occur by most futurists.

Qualitative evolutionary change produces interesting differences in experience. Oftentimes this change is referred to as a “metasystem transition”. A metasystem transition occurs when a group of subsystems coordinate their goals and intents in order to solve more problems than the constituent systems could solve alone. There have been a few notable metasystem transitions in the history of biological evolution:

  • Transition from non-life to life
  • Transition from single-celled life to multi-celled life
  • Transition from decentralized nervous system to centralized brains
  • Transition from communication to complex language and self-awareness

All these transitions share the characteristic described above: subsystems coordinating to form a larger system that solves more problems than they could individually. All of these transitions increased the rate of change in the universe (i.e., produced a local reduction of entropy). The qualitative nature of the change is important to understand, and may best be explored through a thought experiment.

Imagine you are a single-celled organism on the early Earth. You exist within a planetary network of single-celled life of considerable variety, all adapted to different primordial chemical niches. This has been the nature of the planet for well over 2 billion years. Then, some single-cells start to accumulate in denser and denser agglomerations. One of the cells comes up to you and says:

I think we are merging together. I think the remainder of our days will be spent in some larger system that we can’t really conceive. We will each become adapted for a different specific purpose to aid the new higher collective.

Surely that cell would be seen as deranged. Yet, as the agglomerations of single cells became denser, formerly autonomous individual cells started to rely more and more on each other to exploit previously unattainable resources. As the process accelerated, this integrated network formed something novel and more complex than anything that had existed before: the first multicellular organisms.

The difference between living as an autonomous single cell and living as part of a multicellular organism is not just quantitative (i.e., being able to exploit more resources) but also qualitative (i.e., a shift from complete autonomy to being one small part of an integrated whole). Such a shift is difficult to conceive of before it actually becomes a new normative layer of complexity within the universe.

Another example of such a transition, one that may require less imagination, is the transition to complex language and self-awareness. Language is certainly the most important phenomenon separating our species from the rest of the biosphere. It allows us to engage in a new evolution, techno-cultural evolution, which is essentially a new normative layer of complexity in the universe as well. For this transition, the qualitative leap is also important to understand. If you were an australopithecine, your mode of communication would not necessarily be much more efficient than that of any modern-day great ape. Like all other organisms, your mind would be essentially isolated. Your deepest thoughts, feelings, and emotions could not be fully expressed to, and understood by, other minds within your species. Furthermore, an entire range of thought would be completely unimaginable to you. Anything abstract would not be communicable. You could communicate that you were hungry, but you could not communicate what you thought of particular foods (for example). Language changed all that; it unleashed a new thought frontier. Not only did it become possible to exchange ideas at a faster rate, but the range of ideas that could be thought of also increased.

And so after that digression we come to the main point: the metasystem transition of the 21st century. What will it be? There are two dominant, non-mutually exclusive, frameworks for imagining this transition: technological singularity and the global brain.

The technological singularity is essentially a point in time when the actual agent of techno-cultural change itself changes. At the moment the modern human mind is the agent of change. But artificial intelligence is likely to emerge this century, and a truly artificial intelligence may be the last machine we (i.e., biological humans) ever invent.

The second framework is the global brain. The global brain is the idea that a collective planetary intelligence is emerging from the Internet, created by increasingly dense information pathways. This would essentially give the Earth an actual sensing centralized nervous system, and its evolution would mirror, in a sense, the evolution of the brain in organisms, and the development of higher-level consciousness in modern humans.

In a sense, both processes could be seen as the phenomena that will continue to enable trends identified by global brain theorist Francis Heylighen:

The flows of matter, energy, and information that circulate across the globe become ever larger, faster and broader in reach, thanks to increasingly powerful technologies for transport and communication, which open up ever-larger markets and forums for the exchange of goods and services.

Some view the technological singularity and the global brain as competing futurist hypotheses. However, I see them as deeply symbiotic phenomena. If the metaphor of a global brain is apt, at the moment the internet forms a type of primitive and passive intelligence. However, as the internet comes to play an ever greater role in human life, and as all human minds gravitate towards communicating and interacting in this medium, the internet should become an intelligent mediator of human interaction. Heylighen explains how this should be achieved:

the intelligent web draws on the experience and knowledge of its users collectively, as externalized in the “trace” of preferences that they leave on the paths they have traveled.

This is essentially how the brain organizes itself, by recognizing the shapes, emotions, and movements of individual neurons, and then connecting them to communicate a “global picture”, or an individual consciousness.
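This trace mechanism can be made concrete with a toy sketch. In the snippet below, the `TraceWeb` class, its methods, and all page names are invented for illustration: every traversal of a link strengthens it, so the network gradually comes to recommend well-trodden routes, a crude stand-in for the externalized preferences Heylighen describes:

```python
from collections import defaultdict

class TraceWeb:
    """Toy model of 'paths leave traces': each traversal of a link
    strengthens it, so the network comes to suggest well-trodden routes."""
    def __init__(self):
        self.weight = defaultdict(float)  # (from_page, to_page) -> strength

    def record_path(self, pages):
        # Every consecutive pair of visited pages reinforces that link.
        for a, b in zip(pages, pages[1:]):
            self.weight[(a, b)] += 1.0

    def suggest_next(self, page):
        # Recommend the strongest outgoing link, if any exists.
        candidates = {b: w for (a, b), w in self.weight.items() if a == page}
        return max(candidates, key=candidates.get) if candidates else None

web = TraceWeb()
web.record_path(["home", "news", "science"])
web.record_path(["home", "news", "sports"])
web.record_path(["home", "news", "science"])
print(web.suggest_next("news"))  # science (2 traversals vs 1 for sports)
```

Real proposals along these lines are far richer (decay of unused links, weighting by user success, and so on), but the core idea is the same: collective structure emerges from individually left traces.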

The technological singularity fits naturally within this evolution. The biological human brain can only connect so deeply with the Internet. We must externalize our experience of the Internet in (increasingly small) devices like laptops, smart phones, etc. However, artificial intelligence, and biological intelligence enhanced with nanotechnology, could form a far deeper connection with the Internet. Such a development could, in theory, create an all-encompassing information processing system. Our minds (largely “artificial”) would form the neurons of the system, but a decentralized order would emerge from their dynamic interactions. This would be quite analogous to the way higher-level complexity has emerged in the past.

So what does this mean for you? Many futurists debate the likely timing of this transition, but the current median prediction converges on 2040–2050. As we approach this era we should expect many fundamental aspects of our current institutions to change profoundly. Several new ethical issues will also arise, including issues of individual privacy and of government and corporate control, all of which deserve a separate post.

Fundamentally, this also means that your consciousness and your nature will change considerably throughout this century. The thought may sound bizarre and even frightening, but only if you believe that human intelligence and nature are static and unchanging. The reality is that human intelligence and nature are an ever-evolving process. The only difference in this transition is that you will actually be conscious of the evolution itself.

Consciousness has never experienced a metasystem transition (since the last metasystem transition was towards higher-level consciousness!). So in a sense, a post-human world can still include your consciousness. It will just be a new and different consciousness. I think it is best to think about it as the emergence of something new and more complex, as opposed to the death or end of something. For the first time, evolution will have woken up.