JUSTIN.SPACE.ROBOT.GUY
A Point too Far to Astronaut

It’s cold out there beyond the blue. Full of radiation. Low on breathable air. Vacuous.
Machines and organic creatures, keeping them functioning and/or alive — it’s hard.
Space to-do lists are full of dangerous, fantastically boring, and super-precise stuff.

We technological mammals assess thusly:
Robots. Robots should be doing this.

Enter Team Space Torso
As covered by IEEE a few days ago, the DLR (das German Aerospace Center) released a new video detailing the ins & outs of Justin, their tele-operated, haptic-feedback-capable space robot. It’s a smooth system, and eventually ground-based or orbiting operators will just strap on what look like two extra arms, maybe some VR goggles, and go to work. Justin’s target missions are the risky, tedious, and very precise tasks best undertaken by something human-shaped, but preferably remote-controlled. He’s not a new robot, but Justin’s skillset is growing (video is down at the bottom there).

Now, Meet the Rest of the Gang: SPACE.TORSO.LINEUPS
NASA’s Robonaut2 (full coverage), the first and only humanoid robot in space, has of late been focusing on the ferociously mundane tasks of button pushing and knob turning, but hey, WHO’S IN SPACE, HUH? Then you’ve got Russia’s elusive SAR-400, which probably exists, but seems to hide behind… an iron curtain? Rounding out the team is another German, AILA. The nobody-knows-why-it’s-feminized AILA is another DLR-funded project from a university robotics and A.I. lab with a 53-syllable name that takes too long to type but there’s a link down below.

Why Humanoid Torso-Bots?
Robotic tools have been up in space for decades, but they’ve basically been iterative improvements on the same multi-joint single-arm grabber/manipulator. NASA’s recent successful Robotic Refueling Mission is an expansion of mission-capable space robots, but as more and more vital satellites age, collect damage, and/or run out of juice, and more and more humans and their stuff blast into orbit, simple arms and auto-refuelers aren’t going to cut it.

Eventually, tele-operable & semi-autonomous humanoids will become indispensable crew members, and the why of it breaks down like this: 1. space stations, spacecraft, internal and extravehicular maintenance terminals, these are all designed for human use and manipulation; 2. what’s the alternative, a creepy human-to-spider telepresence interface? and 3. humanoid space robots are cool and make fantastic marketing platforms.

A space humanoid, whether torso-only or legged (see: Robonaut’s new legs), will keep astronauts safe and focused on tasks machines can’t do, and will spare them the space craziness of, say, trying to hold a tiny pinwheel perfectly still next to an air vent for 2 hours — which, in fact, is slated to become one of Robonaut’s ISS jobs.

Make Sciencey Space Torsos not MurderDeathKillBots
As one is often wont to point out, rather than finding ways to creatively dismember and vaporize each other, it would be nice if we humans could focus on the lovely technologies of space travel, habitation, and exploration. Nations competing over who can make the most useful and sexy space humanoid is an admirable step, so let the Global Robot Space Torso Arms Race begin!

“Torso Arms Race!”
Keepin’ it real, yo.

• • •

DLR’s Justin Tele-Operation Interface:

• • •

[JUSTIN TELE-OPERATION SITUATION — IEEE]

Robot Space Torso Projects:
[JUSTIN — GERMANY/DLR]
[ROBONAUT — U.S.A./NASA]
[SAR-400 — RUSSIA/ROSCOSMOS — PLASTIC PALS]
[AILA — GERMANY/DAS DFKI]

This piece originally appeared at Anthrobotic.com on February 21, 2013.

With the recent meteor explosion over Russia coincident with the safe passage of asteroid 2012 DA14, and a spectacular approach by comet ISON expected toward the end of 2013, one could suggest that the Year of the Snake is one in which we should look to the skies and consider our long-term safeguards against rocks from space.

Indeed, following the near ‘double whammy’ of last week, when a 15 meter meteor caught us by surprise and caused extensive damage and injury in central Russia while the larger, anticipated 50 meter asteroid swept to within just 27,000 km of Earth, media reported an immediate response from astronomers, with plans to create state-of-the-art detection systems to give warning of incoming asteroids and meteoroids. Concerns can be abated.
ATLAS, the Asteroid Terrestrial-impact Last Alert System, is due to begin operations in 2015, and expects to give a one-week warning for a small asteroid – called “a city killer” – and three weeks for a larger “county killer” — providing time for evacuation of risk areas.

Deep Space Industries (a US Company), which is preparing to launch a series of small spacecraft later this decade aimed at surveying nearby asteroids for mining opportunities, could also be used to monitor smaller difficult-to-detect objects that threaten to strike Earth.

However — despite ISON doom-merchants — we are already in relatively safe hands. The Sentry monitoring system maintains a Sentry Risk Table of possible future Earth impact events, typically tracking objects 50 meters or larger — none of which are currently expected to hit Earth. Other sources will tell you that comet ISON is not expected to pass any closer than 0.42 AU (63,000,000 km) from Earth — though it should still provide spectacular viewing in our night skies come December 2013. A recently trending threat, the 140-meter wide asteroid AG5, was given just a 1-in-625 chance of hitting Earth in February 2040, though more recent measurements have reduced this risk to almost nil.

The Torino Scale is currently used to rate the risk category of asteroid and comet impacts on a scale of 0 (no hazard) to 10 (certain globally-impacting collision). At present, almost all known asteroids and comets are categorized as level 0 on this scale. AG5 was temporarily categorized at level 1 until the recent measurements, and 2007 VK184, a 130 meter asteroid due for approach circa 2048–2057, is the only object currently listed at level 1 or higher.

An asteroid striking land will cause a crater far larger than itself. A rough scaling law gives the crater diameter in kilometers as D ≈ E^(1/3.4) / 10^6.77, where E is the impact energy in ergs. As such, if an asteroid the size of AG5 (140 meters wide) were to strike Earth, it would create a crater over twice the diameter of Barringer Meteor Crater in northern Arizona and affect an area far larger — or, on striking water, it would create a tsunami of global reach. Fortunately, the frequency of such an object striking Earth is quite low — perhaps once every 100,000 years. It is the smaller ones, such as the one which exploded over Russia last week, which are the greater concern. These occur perhaps once every 100 years and are not easily detectable by our current methods — justifying the $5m funding NASA contributed to the new ATLAS development in Hawaii.
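The crater-scaling estimate can be sanity-checked numerically. A minimal sketch, reading the scaling law as D [km] ≈ E^(1/3.4) / 10^6.77 with the impact energy E in ergs, and assuming a stony impactor of density ≈ 2600 kg/m³ arriving at ≈ 17 km/s (both values are illustrative assumptions, not from the text):

```python
import math

def impact_energy_joules(diameter_m, density=2600.0, speed=17000.0):
    """Kinetic energy of a spherical impactor (assumed density and speed)."""
    radius = diameter_m / 2.0
    mass = (4.0 / 3.0) * math.pi * radius**3 * density  # kg
    return 0.5 * mass * speed**2

def crater_diameter_km(energy_joules):
    """Crater diameter via the scaling law D [km] = E^(1/3.4) / 10^6.77, E in ergs."""
    energy_ergs = energy_joules * 1e7  # 1 J = 10^7 erg
    return energy_ergs ** (1.0 / 3.4) / 10**6.77

d_ag5 = crater_diameter_km(impact_energy_joules(140.0))  # AG5-sized body
print(f"AG5-sized impact: ~{d_ag5:.1f} km crater, "
      f"{d_ag5 / 1.2:.1f}x the ~1.2 km Barringer Crater")
```

With these assumed inputs the sketch yields a crater a bit over twice Barringer’s diameter, consistent with the claim above.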

We are a long way from deploying a response system to deflect/destroy incoming meteors, though at least with ATLAS we will be more confident of getting out of the way when the sky falls in. More information on ATLAS: http://www.fallingstar.com/index.php

Humanity’s wake-up call has been ignored and we are probably doomed.

The Chelyabinsk event is a warning. Unfortunately, it seems to be a non-event in the great scheme of things, and that means the human race is probably also a non-starter. For years I have been hoping for such an event, seeing it as the start of a new space age. Just as Sputnik indirectly resulted in a man on the Moon, I predicted an event that would launch humankind into deep space.

Now I wait for ISON. 2013 may be the year of the comet, and if that does not impress upon us the vulnerability of Earth to impacts, then only an impact will. If an impact throws enough particles into the atmosphere, no food will grow and World War C will begin. The C stands for cannibalism. If an impact hits the Ring of Fire, it may generate volcanic effects with the same result. If whatever hits Earth is big enough, it will render all life larger than microbes extinct. We have spent trillions of dollars on defense — yet we are defenseless.

Our instinctive optimism bias continues to delude us with the idea that we will survive no matter what happens. Beside the impact threat is the threat of an engineered pathogen. While naturally evolved epidemics always leave a percentage of survivors, a bug designed to be 100 percent lethal will leave none alive. And then there is the unknown- Earth changes, including volcanic activity, can also wreck our civilization. We go on as a species the same way we go on with our own lives- ignoring death for the most part. And that is our critical error.

The universe does not care if we thrive or go extinct. If we do not care then a quick end is inevitable.

I have given the world my best answer to the question. That is all I can do:

http://voices.yahoo.com/water-bombs-8121778.html?cat=15

KILL.THE.ROBOTS
The Golden Rule is Not for Toasters

Simplistically nutshelled, talking about machine morality is picking apart whether or not we’ll someday have to be nice to machines or demand that they be nice to us.

Well, it’s always a good time to address human & machine morality vis-à-vis both the engineering and philosophical issues intrinsic to the qualification and validation of non-biological intelligence and/or consciousness that, if manifested, would wholly justify consideration thereof.

Uhh… yep!

But, whether at run-on sentence dorkville or any other tech forum, right from the jump one should know that a single voice rapping about machine morality is bound to get hung up in and blinded by its own perspective, e.g., splitting hairs to decide who or what deserves moral treatment (if a definition of that can even be nailed down), or perhaps yet another justification for the standard intellectual cul de sac:
“Why bother, it’s never going to happen.”
That’s tired and lame.

One voice, one study, or one robot fetishist with a digital bullhorn — one ain’t enough. So, presented and recommended here is a broad-based overview: a selection of the past year’s standout pieces on machine morality. The first, only a few days old, is actually an announcement of intent that could pave the way to forcing the actual question.
Let’s then have perspective:

Building a Brain — Being Humane — Feeling our Pain — Dude from the NYT
February 3, 2013 — Human Brain Project: Simulate One
Serious Euro-Science to simulate a human brain. Will it behave? Will we?

January 28, 2013 — NPR: No Mercy for Robots
A study of reciprocity and punitive reaction to non-human actors. Bad robot.

April 25, 2012 — IEEE Spectrum: Attributing Moral Accountability to Robots
On the human expectation of machine morality. They should be nice to me.

December 25, 2011 — NYT: The Future of Moral Machines
Engineering (at least functional) machine morality. Broad strokes NYT-style.

Expectations More Human than Human?
Now, of course you’re going to check out those pieces you just skimmed over — after you finish trudging through this anti-brevity technosnark©®™ hybrid, of course. When you do, you might notice the troubling rub of expectation dichotomy. Simply put, these studies and reports point to a potential showdown between how we treat our machines, how we might expect others to treat them, and how we might one day expect to be treated by them. For now, morality is irrelevant: it is of no consideration or consequence in our thoughts or intentions toward machines. But at the same time we hold dear the expectation of reasonable, if not moral, treatment by any intelligent agent — even an only vaguely human robot.

Well what if, for example: 1. AI matures, and 2. machines really start to look like us?
(see: Leaping Across Mori’s Uncanny Valley: Androids Probably Won’t Creep Us Out)

Even now should someone attempt to smash your smartphone or laptop (or just touch it), you of course protect the machine. Extending beyond concerns over the mere destruction of property or loss of labor, could one morally abide harm done to one’s marginally convincing humanlike companion? Even if fully accepting of its artificiality, where would one draw the line between economic and emotional damage? Or, potentially, could the machine itself abide harm done to it? Even if imbued with a perfectly coded algorithmic moral code mandating “do no harm,” could a machine calculate its passive non-response to intentional damage as an immoral act against itself, and then react?

Yeah, these hypotheticals can go on forever, but it’s clear that blithely ignoring machine morality or overzealously attempting to engineer it might result in… immorality.

Probably Only a Temporary Non-Issue. Or Maybe. Maybe Not.
There’s an argument that actually needing to practically implement or codify machine morality is so remote that debate is, now and forever, only that — and oh wow, that opinion is superbly dumb. This author has addressed this staggeringly arrogant species-level macro-narcissism before (and it was awesome). See, outright dismissal isn’t a dumb argument because a self-aware machine, or something close enough for us to regard as such, is without doubt going to happen; it’s dumb because 1. absolutism is fascist, and 2. to the best of our knowledge, excluding the magic touch of Jesus & friends or aliens spiking our genetic punch or whatever, conscious and/or self-aware intelligence (which would require moral consideration) appears to be an emergent trait of massively powerful computation. And we’re getting really good at making machines do that.

Whatever the challenge, humans rarely avoid stabbing toward the supposedly impossible — and a lot of the time, we do land on the moon. The above mentioned Euro-project says it’ll need 10 years to crank out a human brain simulation. Okay, respectable. But, a working draft of the human genome, an initially 15-year international project, was completed 5 years ahead of schedule due largely to advances in brute force computational capability (in the not so digital 1990s). All that computery stuff like, you know, gets better a lot faster these days. Just sayin.

So, you know, might be a good idea to keep hashing out ideas on machine morality.
Because who knows what we might end up with…

Oh sure, I understand, turn me off, erase me — time for a better model, I totally get it.
- or -
Hey, meatsack, don’t touch me or I’ll reformat your squishy face!

Choose your own adventure!

[HUMAN BRAIN PROJECT]
[NO MERCY FOR ROBOTS — NPR]
[ATTRIBUTING MORAL ACCOUNTABILITY TO ROBOTS — IEEE]
[THE FUTURE OF MORAL MACHINES — NYT]

This piece originally appeared at Anthrobotic.com on February 7, 2013.

It appears now that human intelligence is being largely superseded by robots and artificial singularity agents. Education and technology have no chance of making us far more intelligent. The question now is what our place is in this new world where we are no longer the most intelligent kind of species.

Even if we develop new scientific and technological approaches, it is likely that machines will be far more efficient than us if these approaches are based on rationality.

IMO, in the near future we will only be able to compete in irrational domains, but I am not so sure that irrational domains cannot also be handled by machines.

“Olemach-Theorem”: Angular-momentum Conservation implies a gravitational-redshift proportional Change of Length, Mass and Charge

Otto E. Rossler

Faculty of Natural Sciences, University of Tübingen, Auf der Morgenstelle 8, 72076 Tübingen, Germany

Abstract

There is a minor revolution going on in general relativity: a “return to the mothers” – that is, to the “equivalence principle” of Einstein of 1907. Recently the Telemach theorem was described, which says that Einstein’s time change T does not stand alone (T, L, M, Ch all change by the same factor or its reciprocal, respectively). Here, the convergent but trivial-to-derive Olemach theorem is presented. It connects omega (rotation rate), length, mass and charge in a static gravitational field. Angular-momentum conservation alone suffices (plus E = mc²). The list of implications shows that the “hard core” of general relativity acquires new importance. Five surprise implications – starting with the global constancy of c in general relativity – are pointed out. Young and old physicists are called upon to join in the hunt for the “inevitable fault” in Olemach. (January 31, 2013)

Introduction

“Think simple” is a modern watchword (to quote HP). Much as in “ham” radio initiation the “80 meter band playground” is the optimal entry door, even if greeted with derision by old hands, so in physics the trivial domain of special relativity’s equivalence principle provides the royal entry portal.

A New Question

The local slowdown of time “downstairs” in gravity is Einstein’s most astounding discovery. It follows from special relativity in the presence of constant acceleration – provided the acceleration covers a vertically extended domain. Einstein’s famous long rocketship with its continually thrusting boosters presents a perennially fertile playground for the mind. This “equivalence principle” [1] was “the happiest thought of my life” as he always claimed.

To date no one doubts any more [2,3] the surprising finding that time is slowed down downstairs compared to upstairs. The original reason given by Einstein [1] was that all signal sequences sent upwards arrive there with enlarged temporal intervals, since the rocketship’s nose has picked up a constant relative departing speed during the finite travel time of the signal from the bottom up. Famous measurements, starting in 1959 and culminating in the daily operation of the Global Positioning System, abundantly confirm Einstein’s seemingly absurd, purely mentally deduced prediction. From this hard-won 1907 insight he would later derive his “general theory of relativity.” The latter remains an intricate edifice to this day, of which not all corners are yet understood. For example, many mathematically allowed but unphysical transformations got appended over the years. And a well-paved road running to the right and left of the canonical winding thread is still wanting. For example, the attempt begun by Einstein’s assistant Cornelius Lanczos in 1929 to build a bridge toward Clifford’s older differential-geometric approach [4] remains unconsummated.

In an “impasse-type” situation like this it is sometimes a good strategy to go “back to the mothers” in Goethe’s words, that is, to the early days when everything was still simple and fresh in its unfamiliarity. Do there perhaps exist one or two “direct corollaries” to Einstein’s happiest thought that are likewise bound to remain valid in any later more advanced theory?

A starting point for the hunt is angular-momentum conservation. Angular momentum enjoys an undeservedly low status in general relativity, Emmy Noether’s genius notwithstanding. It therefore is a legitimate challenge to check what happens when angular momentum is explicitly assumed to be conserved in Einstein’s long rocketship, where all clocks are known to be “tired” in their ticking rate at more downstairs positions, in a locally imperceptible fashion. This question appears to be new. In the following, an attempt is made to check how the conservation of angular momentum, a well-known fact in special relativity, manifests itself in the special case of Einstein’s equivalence principle.

Olemach Theorem

To find the answer, a simple thought experiment suggests itself. A frictionless, strictly horizontally rotating bicycle wheel (with its mass ideally concentrated in the rim) is assumed to be suspended at its hub from a rope – so it can be lowered reversibly from the tip to the bottom in our constantly accelerating long rocketship (or else in gravity). Imagine the famous experimentalist Walter Lewin would make this wheel the subject of one of his enlightened M.I.T. lectures distributed on the Internet. The precision of the measurements performed would have to be ideal. What is it that can be predicted?

The law of angular-momentum conservation under planar rotation reads (if a sufficiently slow, “nonrelativistic” rotation speed is assumed), according to any textbook like Tipler’s: “angular momentum = rotation rate times mass times radius-squared = constant,” or, written in symbols,

J = ω m r² = const. (1)

From the above-quoted paper by Einstein we learn that omega differs across height levels, in a locally imperceptible fashion, being lower downstairs [1]. This is so because a frictionless wheel in planar rotation represents an admissible realization of a “ticking” clock (you can record ticks from a pointer attached to the rim). The height-dependent factor which reduces the ticking rate downstairs (explicitly written down by Einstein [1]) can then be called K. At the tip, K = 1, but K > 1 and increasing as one slowly (“adiabatically”) lowers the constantly rotating wheel to a deeper level [1]. Note that K can approach infinity in principle (as when the famous “Rindler rocketship,” with its many independently boosting hollow “rocket rings” that stay together without links, approaches a length of about one light year – if this technical aside is allowed).
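The behavior of K can be illustrated with the standard Rindler-rocket formula (an assumption brought in here, not taken from the paper): for a clock a proper distance h below the tip, where the tip’s proper acceleration is g, the slowdown factor is K = 1/(1 − gh/c²), which reduces to Einstein’s weak-field 1 + gh/c² for small depths and indeed diverges as the depth approaches c²/g, roughly one light year for g ≈ 9.8 m/s²:

```python
C = 299_792_458.0        # speed of light, m/s
G = 9.81                 # assumed proper acceleration at the rocket's tip, m/s^2
LIGHT_YEAR = 9.4607e15   # meters

def K(depth_m, g=G, c=C):
    """Clock slowdown factor a proper distance depth_m below the tip of a
    uniformly accelerating (Rindler) rocket; diverges as depth -> c^2/g."""
    horizon = c**2 / g           # Rindler horizon distance below the tip
    if depth_m >= horizon:
        raise ValueError("depth at or beyond the Rindler horizon")
    return 1.0 / (1.0 - depth_m / horizon)

print(f"Rindler horizon depth: {C**2 / G / LIGHT_YEAR:.2f} light years")
print(f"K halfway down: {K(0.5 * C**2 / G):.1f}")   # clocks 2x slower
print(f"K at 99% depth: {K(0.99 * C**2 / G):.0f}")  # clocks ~100x slower
```

The horizon depth of about 0.97 light years matches the paper’s “about one light year” aside.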

The present example is quite refined in its maximum simplicity. What is it that the watching students will learn? If it is true that angular momentum J stays constant despite the fact that the rotation rate ω is reduced downstairs by the Einstein clock slowdown factor K, then necessarily either m or r or both must be altered downstairs besides ω, if J is to stay constant in accordance with Eq.(1).
While infinitely many nonlinear change laws for r and m are envisionable in compensation for the change in ω , the simplest “linear” law keeping angular momentum J unchanged in Eq.(1) reads:

ω’ = ω/K
r’ = r K
m’ = m/K (2)
q’ = q/K .

Here the fourth line was added “for completeness” due to the fact that the local ratio m/q – rest mass-over-charge – is a universal constant in nature in every inertial frame, with a characteristic universal value for every kind of particle. (Note that any particle on the rim can be freshly released into free fall and then retrieved with impunity, so that the universal ratio remains valid.) The unprimed variables on the right refer to the upper-level situation (K = 1) while the primed variables on the left pertain to a given lower floor, with K monotonically increasing toward the bottom as quantitatively indicated by Einstein [1].

How can we understand Eq.(2)? The first line, with ω replaced by the proportional ticking rate t of an ordinary local clock (Einstein’s original result), yields an equivalent law that reads

t’ = t/K , (2a)

with the other three lines of Eq.(2) remaining unchanged. The corresponding 4-liner was described recently under the name “Telemach” (acronym for Time, Length, Mass and Charge). Telemach possessed a fairly complicated derivation [5]. The new law, Eq.(2), has the asset that its validity can be derived directly from Eq.(1).

The prediction made by the conservation law of Eq.(1) is that any change in ω automatically entails a change in r and/or m . There obviously exist infinitely many quantitative ways to ensure the constancy of J in Eq.(1) for our two-dimensionally rotating frictionless wheel. For example, when for the fun of it we keep m constant while letting only r change, the second line of Eq.(2) is bound to read r’ = r K^½ (followed by m’ = m and q’ = q ). Infinitely many other guessed schemes are possible. Eq.(2) has the asset of being “simpler” since all change ratios are linear in K. So the change law does not depend on height; only in this linear way can grotesque consequences like divergent behavior of one variable be avoided.
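Both change schemes can be checked with a few lines of arithmetic. A sketch with arbitrary upstairs values (all numbers purely illustrative), confirming that the linear law of Eq.(2) and the alternative K^(1/2) scheme both leave the J of Eq.(1) unchanged:

```python
import math

K = 2.0                        # illustrative redshift factor (K = 1 upstairs)
omega, m, r = 5.0, 3.0, 1.5    # arbitrary upstairs rotation rate, mass, radius

J_upstairs = omega * m * r**2  # Eq.(1): J = ω m r²

# Linear Olemach scheme, Eq.(2): ω' = ω/K, r' = rK, m' = m/K
J_linear = (omega / K) * (m / K) * (r * K) ** 2
assert math.isclose(J_upstairs, J_linear)

# Alternative scheme from the text: m (and q) unchanged, r' = r·K^(1/2)
J_sqrt = (omega / K) * m * (r * math.sqrt(K)) ** 2
assert math.isclose(J_upstairs, J_sqrt)

print("J conserved under both schemes:", J_upstairs)
```

The K factors cancel in both cases, which is all Eq.(1) demands; the text’s preference for the linear scheme rests on its simplicity, not on this check.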

Now the serious part. We start out with the third line of Eq.(2). We already know from Einstein’s paper [1] that the local photon frequency (and hence the photon mass-energy) scales linearly with 1/K . Photon mass-energy therefore necessarily obeys the third line of Eq.(2). If this is true, we can recall that according to quantum electrodynamics, photons and particles are locally inter-transformable. Einstein would not have disagreed in 1907 already. A famous everyday example known from PET scans is positronium creation and annihilation. In this special case, two 511 kilo-electron-Volt photons turn into – prove equivalent to – one positron plus one electron, in every local frame. Therefore we can be sure that the third line of Eq.(2) indeed represents an indubitable fact in modern physics, a fact which Einstein would have eagerly embraced.

The remaining second line of Eq.(2) could be explained by quantum mechanics as well (as done in ref. [5]). However, this is redundant now, since once the third line of Eq.(2) is accepted, the second line is fixed via Eq.(1). The fourth line follows from the third as already stated. Hence we are finished proving the correctness of the new law of Eq.(2).

How to name it? Olemach is a variant of “Oremaq” (which at first sight is a more natural acronym for the law of Eq.(2) in view of its four left-hand sides). But the closeness in content of Eq.(2) to Telemach [5], in which length was termed L and charge termed Ch, lets the matching abbreviation “Olemach” appear more natural.

Discussion

A new fundamental equation in physics was proposed: Eq.(2). The new equation teaches us a new fact about nature: In the accelerating rocket-ship of the young Einstein as well as in general relativity proper under “ordinary conditions” (yet to be specified in detail), angular momentum conservation plays a previously underestimated – new – role.

The most important implication of the law of Eq.(2) no doubt is the fact that the speed of light, c , has become a “global constant” in the equivalence principle. Note that the first two lines of Eq.(2) can be written

T’ = TK
r’ = rK , (2b)

with T = 1/ω and T’ = 1/ω’. One sees that r’/T’ = r/T. Therefore c-upstairs = c-downstairs = c at all heights (up to the uppermost level of an infinitely long Rindler rocket, with c = c-universal at its tip). Thus

c = globally constant. (3)

This result follows from the “linear” structure of Eq.(2). The global constancy of c had been given up explicitly by Einstein in the quoted 1907 paper [1]. (This maximally painful fact was presumably the reason why Einstein could not touch the topic of gravitation again for 4 years, until his visiting close friend Ehrenfest helped him re-enter the pond by engulfing him in an irresistible discussion about his rotating-disk problem.) In recompense for the new global constancy of c, it is now m and q that inherit the former underprivileged role of c by being “only locally but not globally constant.” It goes without saying that there are far-reaching tertiary implications (cf. [5]).
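The claimed constancy follows from Eq.(2b) by inspection, and can be spot-checked numerically: since T and r pick up the same factor K at every depth, the ratio r/T is height-independent. A sketch with illustrative values:

```python
T, r = 2.0, 6.0      # arbitrary upstairs rotation period and wheel radius
c_up = r / T         # the "speed" formed from the upstairs pair

for K in (1.0, 1.5, 2.0, 10.0, 1e6):
    T_down, r_down = T * K, r * K      # Eq.(2b): T' = TK, r' = rK
    assert abs(r_down / T_down - c_up) < 1e-9

print("r'/T' = r/T at every depth: c comes out globally constant")
```

This is only an arithmetic identity of the linear scheme; whether nature obeys Eq.(2b) is the paper’s open question, not the sketch’s.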

The second-most-important point is the already mentioned fact that charge q is no longer conserved in physics in the wake of the fourth line of Eq.(2), after an uninterrupted reign of almost two centuries. This result is the most unbelievable new fact. A first direct physical implication is that the charge of neutron stars needs to be re-calculated in view of the “order-of-unity” gravitational redshift z = K – 1 valid on their surface. Since K thus is almost equal to 2 on this surface, the charge of neutron stars is reduced by a factor of almost 2. Even more strikingly, the electrical properties of quasars (including mini-quasars) are radically altered so that a renewed modeling attempt is mandatory.

Thirdly, there is a new topological consequence of Eq.(2): “stretching” is now found added to “curvature” as an equally fundamental differential-geometric feature of nature, valid in the equivalence principle and, by implication, in general relativity. Recall that r goes to infinity in parallel with K in the second line of Eq.(2). This new qualitative finding is in accordance with Clifford’s early intuition. While an arbitrarily strong curvature remains valid near the horizon of a black hole where K diverges, the singular curvature is now accompanied by an equally singular (infinite) stretching of r. Thus a novel type of “volume conservation” (more precisely: “conservation of the curvature-over-stretching ratio”) becomes definable in general relativity, in the wake of Eq.(2).

A fourth major consequence is that some traditional historical additions to general relativity cease to hold true if Olemach (or Telemach) is valid. This “tree-trimming” affects previously accepted combinations of general relativity with electrodynamics. In particular, the famous Reissner–Nordström solution loses its physical validity in the wake of Eq.(2). The simple reason: charge is no longer a global invariant. Further surprising implications (like a mandatory unchargedness of black holes) follow. The beautiful mass-ejecting, charge-spitting, electricity- and magnetism-generating features of active quasars acquire a radically new interpretation worth working out.

As a fifth point, the mathematically beautiful “Kerr metric,” when used as a description of a rotating black hole, loses its physical validity by virtue of the second line of Eq.(2). The new infinite distance to the horizon, valid from the outside, is one reason. More importantly, the effectively zero rotation rate at the horizon of a black hole that, seen from the outside, is fast-rotating necessitates the formation of a topological “Reeb foliation in space-time” encircling every rotating black hole, as well as (in unfinished form) any of its never quite finished precursors [6].

There appear to be further first-magnitude consequences of the law of angular-momentum conservation (Eq.1), applied in the equivalence principle and its general-relativistic extensions. Thus the second line of Eq.(2) implies, via the new global constancy of c, that gravitational waves no longer exist [5]. On the other hand, temporal changes of a gravitational potential, for example through the passing-by of a celestial body, do of course remain valid and must somehow be propagated with the speed of light. (This problem is mathematically unsolved in the context of Sudarshan’s “no interaction theorem.”) These two cases can no longer be confused.

At this point cosmology deserves to be mentioned. The new equal rights of curving and stretching (“Yin and Yang”) suggest that only asymptotically flat solutions remain available in cosmology in the very large – a suggestion already due to Clifford as mentioned [4]. If Olemach implies that a “big bang” (based on a non-volume preserving version of general relativity) is ruled out mathematically, this new fact has tangible consequences. Recently, 24 “ad-hoc assumptions” implicit in the standard model of cosmology were collected [7]. Further new developments in the wake of an improved understanding of the role played by angular-momentum conservation in the equivalence principle, general relativity and cosmology are to be expected.

To conclude, a new big vista opens up if the law of angular-momentum conservation is indeed valid in the equivalence principle of 1907. An inconspicuous “linear law” (Eq.2), re-affirming the role of Einstein’s happiest thought, imposes itself as the natural “80-meter band” of physics – or does it not?

Credit Due

The above result goes back to an inconspicuous abstract published in 2003 [8] and a maximally unassuming dissertation written in its wake [9].

Acknowledgment

I thank Ali Sanayei, Frank Kuske and Roland Wais for discussions. For J.O.R.

References

[1] A. Einstein, On the relativity principle and the conclusions drawn from it (in German). Jahrbuch der Radioaktivität 4, 411–462 (1907), p. 458; English translation: http://www.pitt.edu/~jdnorton/teaching/GR&Grav_2007/pdf/Einstein_1907.pdf , p. 306.

[2] M.A. Hohensee, S. Chu, A. Peters and H. Müller, Equivalence principle and gravitational redshift. Phys. Rev. Lett. 106, 151102 (2011). http://prl.aps.org/abstract/PRL/v106/i15/e151102

[3] C. Lämmerzahl, The equivalence principle. MICROSCOPE Colloquium, Paris, September 19, 2011. http://gram.oca.eu/Ressources_doc/EP_Colloquium_2011/2%20C%20Lammerzahl.pdf

[4] C. Lanczos, Space Through the Ages: The Evolution of Geometric Ideas from Pythagoras to Hilbert and Einstein. New York: Academic Press 1970, p. 222. (Abstract on p. 4 of: http://imamat.oxfordjournals.org/content/6/1/local/back-matter.pdf )

[5] O.E. Rossler, Einstein’s equivalence principle has three further implications besides affecting time: T-L-M-Ch theorem (“Telemach”). African Journal of Mathematics and Computer Science Research 5, 44–47 (2012), http://www.academicjournals.org/ajmcsr/PDF/pdf2012/Feb/9%20Feb/Rossler.pdf

[6] O.E. Rossler, Does the Kerr solution support the new “anchored rotating Reeb foliation” of Fröhlich? (25 January 2012). https://lifeboat.com/blog/2012/01/does-the-kerr-solution-sup…f-frohlich

[7] O.E. Rossler, Cosmos-21: Twenty-four violations of Occam’s razor healed by statistical mechanics. (Submitted.)

[8] H. Kuypers, O.E. Rossler and P. Bosetti, Matterwave-Doppler effect, a new implication of Planck’s formula (in German). Wechselwirkung 25 (No. 120), 26–27 (2003).

[9] H. Kuypers, Atoms in the gravitational field according to the de-Broglie-Schrödinger theory: Heuristic hints at a mass and size change (in German). PhD thesis, submitted to the Chemical and Pharmaceutical Faculty of the University of Tübingen, 2005.

———————–

For those in Colorado who are interested in attending a talk by John Troeltzsch, Sentinel Ball Program Manager, Ball Aerospace & Technologies Corp., please R.S.V.P. to Chris Zeller ([email protected]) by Tuesday, 26 February 2013 for badge access. US citizenship required.

6:00 pm Thursday, February 28th 2013
6:00 pm Social, 6:30 pm Program
Ball Aerospace Boulder Campus RA7 Conference Room
1600 Commerce St
Boulder, CO 80301

It will be good to see you there.

About the Talk:
The inner solar system is populated with a half million asteroids larger than the one that struck Tunguska and yet we’ve identified and mapped only about one percent of these asteroids to date.

This month’s program will introduce the B612 Foundation and the first privately funded deep space mission: a space telescope designed to discover and track Near Earth Objects (NEOs). This dynamic map of NEOs will provide the blueprint for future exploration of our Solar System, enabling potential astronaut missions and protection of the future of life on Earth.

The B612 Foundation is a California 501(c)(3) non-profit, private foundation dedicated to protecting the Earth from asteroid strikes. Its founding members Rusty Schweickart, Clark Chapman, Piet Hut, and Ed Lu established the foundation in 2002 with the goal of significantly altering the orbit of an asteroid in a controlled manner.

The B612 Foundation is working with Ball Aerospace, Boulder, CO, which is designing and building the Sentinel Infrared (IR) Space Telescope with the same expert team that developed the Spitzer and Kepler Space Telescopes. It will take approximately five years to complete development and testing to be ready for launch in 2017–2018.

About John Troeltzsch:
John Troeltzsch is the Sentinel mission program manager for Ball Aerospace. Troeltzsch received his Bachelor of Science in Aerospace Engineering from the University of Colorado in 1983 and was immediately hired by Ball Aerospace. While working at Ball, Troeltzsch continued his studies at C.U. and received his Master of Science in Aerospace Engineering in 1989. He has been a member of AIAA for over 30 years. During his 29 years at Ball Aerospace, Troeltzsch has worked on three of Hubble’s science instruments and in program management for the Spitzer Space Telescope. Following Spitzer’s launch in 2003, Troeltzsch joined Ball’s Kepler team and was named program manager in 2007. For the Kepler mission, Troeltzsch has managed the Ball team, including responsibility for cost, schedule, and performance requirements.

Link to pdf copy of invitation, http://www.iseti.us/pdf/AIAA-Sentinel-Feb.pdf


LEFT: Activelink Power Loader Light — RIGHT: The Latest HAL Suit

New Japanese Exoskeleton Pushing into HAL’s (potential) Marketshare
We of the robot/technology nerd demo are well aware of the non-ironically, ironically named HAL (Hybrid Assistive Limb) exoskeletal suit developed by Professor Yoshiyuki Sankai’s also totally not meta-ironically named Cyberdyne, Inc. Since its 2004 founding in Tsukuba City, just north of the Tokyo metro area, Cyberdyne has developed and iteratively refined the force-amplifying exoskeletal suit, and through the HAL FIT venture, they’ve also created a legs-only force resistance rehabilitation & training platform.

Joining HAL and a few similar projects here in Japan (notably Toyota’s & Honda’s) is Kansai-based, Panasonic-owned Activelink’s new Power Loader Light (PLL). Activelink has developed various human force amplification systems since 2003, and this latest version of the Loader looks a lot less like its big brother the walking forklift, and a lot more like the bottom half & power pack of a HAL suit. Activelink intends to connect an upper body unit, and if successful, will become HAL’s only real competition here in Japan.
And for what?

Well, along with general human force amplification and/or rehab, this:


福島第一原子力発電所事故 — Fukushima Daiichi Nuclear Disaster Site

Fukushima Cleanup & Recovery: Heavy with High-Rads
As with Cyberdyne’s latest radiation-shielded, self-cooling HAL suit (the metallic gray model), Activelink’s PLL was ramped up after the 2011 Tohoku earthquake, tsunami, and resulting disaster at the Fukushima Daiichi power plant. Cleanup at the disaster area and response to future incidents will of course require humans in heavy radiation suits wielding heavy tools, possibly among heavy debris. While specific details on both exoskeletons’ recent upgrades, deployment timelines, and capabilities are sparse, the HAL suit and the PLL are clearly conceptually ideal for the job. One assumes both will provide something like 20–30 kg (45–65 lbs.) of force amplification per limb while fully supporting the weight of the suit itself, and like HAL, the PLL will have to incorporate a measure of radiological shielding into its design. So for now, HAL is clearly in the lead here.

Exoskeleton Competition Motivation Situation
Now, the HAL suit is widely known, widely deployed, and far and away the most successful of its kind ever made. No one else in Japan — in the world — is actually manufacturing and distributing powered exoskeletons at comparable scale. And that’s awesome and all due props to Professor Sankai and his team, but in taking stock of the HAL project’s 8 years of ongoing development, objectively one doesn’t see a whole lot of fundamental advancement. Sure, lifting capacity has increased incrementally and the size of the power source & overall bulk have decreased a bit. And yeah, no one else is doing what Cyberdyne’s doing, but that just might be the very reason HAL seems to be treading water: until recently, with Activelink’s PLL, no one had come along to offer up any kind of alternative.

Digressively Analogizing HAL with Japan & Vice-Versa Maybe
What follows is probably anecdotal, but probably right: See, Japanese economic and industrial institutions, while immensely powerful and historically cutting-edge, are also insular, proud, and — weirdly — often glacially slow to innovate or embrace new technologies. With a lot of relatively happy workers doing excellent engineering with unmatched quality control and occasional leaps of innovation, Japan’s had a healthy electronics & general tech advantage for a good long time. Okay, but now thorough and integrated globalization has monkeywrenched the J-system, and while the Japanese might be just as good as ever, the world has caught up. For example, Korea’s big two — Samsung & LG — are now selling more TVs globally than all Japanese makers combined. Okay yeah, TVs ain’t robots, but across-the-board competition has arrived in a big way, and Japan’s tech & electronics industries are faltering and freaking out — it’s illustrative of a wider socioeconomic issue. Cyberdyne, can you dig the parallel here?

Back to the Robot Stuff: Get on it, HAL/Japan — or Someone Else Will
A laundry list of robot/technology outlets, including Anthrobotic & IEEE, puzzled over how the first robots able to investigate at Fukushima were the American iRobot PackBots & Warriors. It really had to sting that in robot-loving, automation-saturated, theretofore 30% nuclear-powered Japan, there was no domestically produced device nimble enough and durable enough to investigate the facility without getting a radiation BBQ (the battle-tested PackBots & Warriors — no problem). So… ouch?

For now, HAL & Japan lead the exoskeletal pack, but with a quick look at Andra Keay’s survey piece over at Robohub it’s clear that HAL and the PLL are in a crowded and rapidly advancing field. So, if the U.S. or France or Germany or Korea or the Kiwis or whomever are first to produce a nimble, sufficiently powered, appropriately equipped, and ready-for-market & deployment human amplification platform, Japanese energy companies and government agencies and disaster response teams just might add those to cart instead. Without rapid and inspired development and improvement, HAL & Activelink, while perhaps remaining viable for Japan’s aging society industry, will be watching emergency response and cleanup teams at home with their handsome friend Asimo and his pet Aibo, wondering whatever happened to all the awesome, innovative, and world-leading Japanese robots.

It’ll all look so real on an 80-inch Samsung flat-panel HDTV.

Activelink Power Loader — Latest Model


Cyberdyne, Inc. HAL Suit — Latest Model
http://youtu.be/xwzYjcNXlFE

SOURCES & INFO & STUFF
[HAL SUIT UPGRADE FOR FUKUSHIMA — MEDGADGET]
[HAL RADIATION CONTAMINATION SUIT DETAILS — GIZMAG]
[ACTIVELINK POWER LOADER UPDATE — DIGINFO.TV]

[TOYOTA PERSONAL MOBILITY PROJECTS & ROBOT STUFF]
[HONDA STRIDE MANAGEMENT & ASSISTIVE DEVICE]

[iROBOT SENDING iROBOTS TO FUKUSHIMA — IEEE]
[MITSUBISHI NUCLEAR INSPECTION BOT]

For Fun:
[SKELETONICS — CRAZY HUMAN-POWERED PROJECT: JAPAN]
[KURATAS — EVEN CRAZIER PROJECT: JAPAN]

Note on Multimedia:
Main images were scraped from the above Diginfo.tv & AFPBBNEWS
YouTube videos, respectively. Because there just aren’t any decent stills
out there — what else is a pseudo-journalist of questionable competency to do?

This piece originally appeared at Anthrobotic.com on January 17, 2013.

A Revolution in Physics and Cosmology

by Otto E. Rossler, Faculty of Natural Sciences, University of Tübingen, Germany

A deterministic 2-particle system interacting with a fixed third particle (the wall of a confining T-tube) shows two kinds of behavior never seen in a deterministic system before: dissipative and antidissipative behavior in both directions of time (dependent on the sign of the force law). Dissipative behavior occurs in both directions of time when the system is started from non-selected far-from-equipartition initial conditions while the potential (giving rise to the force law) is Newtonian-repulsive. Antidissipative behavior occurs (in both directions of time) when the system is started from non-selected far-from-equipartition initial conditions while the potential is Newtonian-attractive.

“Entropic” behavior had not been demonstrated before in a deterministic system. Now, both “entropic” and “ectropic” behavior are described under deterministic-chaos conditions. The numerical simulations are due to Klaus Sonnleitner (2010) and, independently, Ramis Movassagh (2011), who also provided an analytical derivation.
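To make the setup concrete, here is a minimal sketch of the kind of deterministic simulation described above — not the Sonnleitner/Movassagh code, just an illustrative stand-in. Two unit-mass particles move on a line under a softened Newtonian pair potential (the softening length `eps` and the harmonic trap replacing the hard T-tube wall are my assumptions, chosen for numerical stability); flipping the sign of `k` switches between the repulsive and attractive cases. A velocity-Verlet integrator keeps total energy nearly constant, so the dynamics are genuinely Hamiltonian:

```python
import math

def simulate(k, steps=20000, dt=5e-4, eps=0.2, w=1.0):
    """Velocity-Verlet integration of two unit-mass particles on a line.

    Pair potential: V(r) = k / sqrt(r^2 + eps^2) -- a softened Newtonian
    potential (k > 0 repulsive, k < 0 attractive).  A harmonic trap
    0.5*w*x^2 per particle stands in for the confining tube wall.
    Returns (initial total energy, final total energy).
    """
    x = [-0.3, 0.5]
    v = [0.9, -0.4]          # far-from-equipartition initial condition

    def forces(x):
        r = x[0] - x[1]
        fp = k * r / (r * r + eps * eps) ** 1.5   # pair force on particle 0
        return [fp - w * x[0], -fp - w * x[1]]

    def energy(x, v):
        r = x[0] - x[1]
        return (0.5 * (v[0] ** 2 + v[1] ** 2)
                + k / math.sqrt(r * r + eps * eps)
                + 0.5 * w * (x[0] ** 2 + x[1] ** 2))

    e0, f = energy(x, v), forces(x)
    for _ in range(steps):
        v = [v[i] + 0.5 * dt * f[i] for i in range(2)]   # half kick
        x = [x[i] + dt * v[i] for i in range(2)]         # drift
        f = forces(x)
        v = [v[i] + 0.5 * dt * f[i] for i in range(2)]   # half kick
    return e0, energy(x, v)
```

Demonstrating the claimed dissipative/antidissipative asymmetry would require averaging over ensembles of such runs; this sketch only shows the deterministic core that those simulations integrate.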

The “ectropic” behavior valid under attractive conditions gives rise to a new statistical mechanics besides thermodynamics, called cryodynamics. This new discipline governs the cosmos at large but at the same time has down-to-earth applications. It enables dynamically controlled hot fusion.

These two facts are exciting. Whereas thermodynamics, with its characteristic entropy production, has existed for the better part of two centuries, its sister discipline is a recent and surprising discovery. It enables an eternally recycling cosmos in the way anticipated by Heraclitus.

Theoretical and young physicists are invited to participate in the further development of cryodynamics. A book as comprehensive as the detailed texts on thermodynamics can be expected to be written. A second “machine age” is probably preprogrammed.

An empirically confirmable deterministic universe that shows both dissipation and anti-dissipation on two different size scales, the micro and the macro scale, is an exciting prospect. It gives you a whole new feeling of being honored to be a member of the universe.

My fear is that no one will believe that chaos theory is that powerful (Hamiltonian chaos theory was discovered by Poincaré toward the end of the 19th century), and that Newton and Einstein (there is no difference in this context) could win another prize of first magnitude.

I thank Christophe Letellier and Ali Sanayei for discussions. For J.O.R.