
CCC – “Constant c Catastrophe”

Otto E. Rössler

Faculty of Science, University of Tübingen, Auf der Morgenstelle 8, 72076 Tübingen, Germany

Abstract

The historical twist that the universal constancy of the speed of light, c, was abandoned for more than a century is recalled. The new situation, arrived at independently by Richard J. Cook, is outlined along with some of its uplifting consequences. A new metrology and a new cosmology take shape.
(March 15, 2013)

Text

A globally constant c is not a catastrophe in the opinion of the present writer, even though almost everyone feels safe in the fold of the old paradigm. A simplification of physics is almost never a step back. The new situation can be summarized as follows.

In 1907, Einstein realized that relative to the tip of a constantly accelerating long rocketship in outer space, clocks located at the bottom of the rocketship tick more slowly than those at the tip, e.g. half as fast [1]. This was “the happiest thought of my life,” he always stressed. The breakthrough allowed him to understand gravitation in the context of his theory of special relativity, published two years earlier.

To his dismay, however, he was forced to realize that, at the bottom of the long rocketship, the local slowdown of time is accompanied by a numerically equal reduction, visible from above, of the speed of light c, even though the latter had been a universal constant in special relativity. Both observable changes (the slowdown and the crawl) remain masked on the lower level itself. The drawback of the reduced c caused Einstein to drop the subject of gravitation for four years (until his good friend Ehrenfest lured him back with the related paradigm of the rotating disk). Einstein would then carefully “build around” the drawback. And the simplest nontrivial solution of the finished general theory of relativity, the Schwarzschild metric, can indeed be written in an equivalent form in which c is globally constant [2].

But how about the riddle of the “creeping” speed of light downstairs in the rocketship and in gravity? The solution to the conundrum emerges from a second look at the famous “Lorentz contraction” which (as is well known) states that a fast-moving car is shortened while keeping its width: Does this fact mean that the shortened car has become anisotropic in its own frame? The answer is no.

Analogously here: the apparently only vertically enlarged “spaghetti people” downstairs in gravity are not distorted in their own frame. They are objectively enlarged in all directions since time is slowed and c is constant. Their lateral size change is masked when viewed from above. Hence c only seems to be creeping in the lateral directions downstairs without being reduced in reality.

“Who ordered that?,” one feels tempted to say. The new size change which follows from the universal constancy of c has been spotted from time to time in the past, cf. [2]. The most convincing mathematical demonstration based on the theory of general relativity was given by Richard J. Cook in a paper entitled “Gravitational space dilation” [3]. A very simple derivation using the equivalence principle is the “Telemach theorem” [4]. Its cousin, the “Olemach theorem” [5], is even simpler (using only angular-momentum conservation and the Bohr radius formula of quantum mechanics).

The thus successfully recovered “Einstein universality of c” is a bonanza. Global constancy of c implies, for example, that the well-known infinite time delay of light going all the way down to a black hole’s horizon (or up from it) [6] reflects an infinite distance (if in s/t = c = const., t goes to infinity, so does s). Therefore the famous “Flamm’s paraboloid” describing the shape of space around a Schwarzschild black hole is now replaced by (morphed into) a “generic 3-pseudosphere”: space itself is infinitely enlarged towards the horizon in a trumpet-like fashion. While this is hard to visualize, the lower-dimensional analog, a halved 2-pseudosphere (replacing the upper part of the likewise 2-dimensional Flamm paraboloid), looks like a vertical, infinitely long trumpet whose upper rim and its neighborhood coincide with those of the former paraboloid. An ant placed on the locally flat rim of the trumpet’s big mouth can walk around it in a short finite time. But the same ant must cover an infinite distance to reach the middle of the very same plane, namely the mouthpiece of the maximally drawn-out trumpet (which represents the horizon of the black hole). Thus “curvature” and “stretching” both go to infinity near the horizon, like Siamese twins, in the new differential geometry of gravitation.
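The distance argument can be spelled out in one line; this is nothing beyond the constancy claim already stated in the text:

```latex
\frac{s}{t} = c = \mathrm{const.}
\quad\Longrightarrow\quad
s = c\,t \;\longrightarrow\; \infty
\quad \text{as} \quad t \to \infty .
```

That is, once c is held globally constant, a divergent light-travel time to the horizon forces a divergent spatial distance.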

Further new implications are as follows: rest mass and charge both go to zero in inverse proportion to the local redshift [4]. The charge change represents a major surprise in physics after a century-and-a-half-long reign of the law of charge conservation. In consequence, the combined “Einstein–Maxwell equations” cease to be physically valid, as do other compound solutions [4]. The same fatal fate holds for all expanding-universe solutions to the Einstein equation, since they imply global non-constancy of the speed of light c, as is well known. Therefore cosmology suddenly finds itself on the lookout for a replacement for the big bang: a major catastrophe in view of a decades-long previous consensus.

Equally important, numerous changes in metrology follow if distances, masses and charges are no longer the same as before: the Ur-meter, the Ur-charge (of the electron) and the Ur-kilogram all cease to be valid, along with other previous constants of nature, as the price to pay for c’s newly won universality [4]. Hence a new global picture of space-time, including the cosmos, is implicit. This prospect is almost unacceptable at first sight. By coincidence, though, a new “second statistical mechanics” – cryodynamics, a sister discipline to thermodynamics – was recently found to exist [7], which independently calls for a new cosmology and is bound to help in its formulation.

To conclude, space-time theory acquires a new symmetry between curvature and stretching in the wake of the new global constancy of the speed of light. General relativity acquires a new face without losing its beauty. The speed of light c thus proves as fertile as it was a century ago. Is it conceivable that Einstein will dominate the 21st century no less than the 20th?

Acknowledgments

I thank Dieter Fröhlich, Heinrich Kuypers, Frank Kuske and Ali Sanayei for discussions. For J.O.R.

References

[1] A. Einstein, On the relativity principle and the conclusions drawn from it (in German). Jahrbuch der Radioaktivität 4, 411–462 (1907), p. 458; English translation: http://www.pitt.edu/~jdnorton/teaching/GR&Grav_2007/pdf/Einstein_1907.pdf , p. 306.
[2] O.E. Rossler, Abraham-like return to constant c in general relativity: Gothic-R theorem demonstrated in Schwarzschild metric. Fractal Spacetime and Noncommutative Geometry in Quantum and High Energy Physics 2, 1–14 (2012). Preprint (2008): http://lhc-concern.info/wp-content/uploads/2009/01/fullpreprint.pdf ; revised: http://www.wissensnavigator.com/documents/chaos.pdf
[3] R.J. Cook, Gravitational space dilation (2009). http://arxiv.org/pdf/0902.2811.pdf
[4] O.E. Rossler, Einstein’s equivalence principle has three further implications besides affecting time: T-L-M-Ch theorem (“Telemach”). African Journal of Mathematics and Computer Science Research 5, 44–47 (2012). http://www.academicjournals.org/ajmcsr/PDF/pdf2012/Feb/9%20Feb/Rossler.pdf
[5] O.E. Rossler, Olemach theorem: Angular-momentum conservation implies gravitational-redshift proportional change of length, mass and charge. European Scientific Journal 9(2), 38–45 (2013). http://eujournal.org/index.php/esj/article/view/814/876
[6] J.R. Oppenheimer and H. Snyder, On continued gravitational contraction. Phys. Rev. 56, 455–459 (1939). Abstract: http://prola.aps.org/abstract/PR/v56/i5/p455_1
[7] O.E. Rossler, The new science of cryodynamics and its connection to cosmology. Complex Systems 20, 105–113 (2011). http://www.complex-systems.com/pdf/20-2-3.pdf

YANKEE.BRAIN.MAP
The Brain Games Begin
Europe’s billion-Euro science-neuro Human Brain Project, mentioned here amongst machine morality last week, is basically already funded and well underway. Now the colonies over in the new world are getting hip, and they too have in the works a project to map/simulate/make their very own copy of the universe’s greatest known computational artifact: the gelatinous wad of convoluted electrical pudding in your skull.

The (speculated but not yet public) Brain Activity Map of America
About 300 different news sources are reporting that a Brain Activity Map project is outlined in the current administration’s to-be-presented budget, and will be detailed sometime in March. Hordes of journalists are calling it “Obama’s Brain Project,” which is stoopid, and probably only because some guy at the New Yorker did and they all decided that’s what they had to do, too. Or somesuch lameness. Or laziness? Deference? SEO?

For reasons both economic and nationalistic, America could definitely use an inspirational, large-scale scientific project right about now. Because seriously, aside from going full-Pavlov over the next iPhone, what do we really have to look forward to these days? Now, if some technotards or bible pounders monkeywrench the deal, the U.S. is going to continue that slide toward scientific… lesserness. So, hippies, religious nuts, and all you little sociopathic babies in politics: zip it. Perhaps, however, we should gently poke and prod the hard of thinking toward a marginally heightened Europhobia — that way they’ll support the project. And it’s worth it. Just, you know, for science.

Going Big. Not Huge, But Big. But Could be Massive.
Both the Euro and American flavors are no Manhattan Project-scale undertaking, in the sense of urgency and motivational factors, but more like the Human Genome Project. Still, with clear directives and similar funding levels (€1 billion & $1–3 billion US, respectively), they’re quite ambitious and potentially far more world changing than a big bomb. Like, seriously, man. Because brains build bombs. But hopefully an artificial brain would not. Spaceships would be nice, though.

Practically, these projects are expected to expand our understanding of the actual physical loci of human behavioral patterns, get to the bottom of various brain pathologies, stimulate the creation of more advanced AI/non-biological intelligence — and, of course, the big enchilada: help us understand more about our own species’ consciousness.

On Consciousness: My Simulated Brain has an Attitude?
Yes, of course it’s wild speculation to guess at the feelings and worries and conundrums of a simulated brain — but dude, what if, what if one or both of these brain simulation map thingys is done well enough that it shows signs of spontaneous, autonomous reaction? What if it tries to like, you know, do something awesome like self-reorganize, or evolve or something?

Maybe it’s too early to talk personality, but you kinda have to wonder… would the Euro-Brain be smug, never stop claiming superior education yet voraciously consume American culture, and perhaps cultivate a mild racism? Would the ‘Merica-Brain have a nation-scale authority complex, unjustifiable confidence & optimism, still believe in childish romantic love, and overuse the words “dude” and “awesome?”

We shall see. We shall see.

Oh yeah, have to ask:
Anyone going to follow Ray Kurzweil’s recipe?

Project info:
[HUMAN BRAIN PROJECT - - MAIN SITE]
[THE BRAIN ACTIVITY MAP - $ - HUFF-PO]

Kinda Pretty Much Related:
[BLUE BRAIN PROJECT]

This piece originally appeared at Anthrobotic.com on February 28, 2013.

I continue to survey the available technology applicable to spaceflight and there is little change.

The remarkable near-impact and NEO fly-by on the same day seems to fly in the face of the experts, who quoted the probability of such a coincidence as being low on the scale of millennia. A recent exchange on a blog has given me the idea that perhaps crude is better. A much faster approach to a nuclear-propelled spaceship might be more appropriate.

Unknown to the public, there is such a thing as unobtanium. It carries the name of the country of my birth: Americium.

A certain form of Americium is ideal for a type of nuclear solid-fuel rocket. Called a Fission Fragment Rocket, it is straight out of a 1950s movie, with massive thrust at the limit of human G-tolerance. Such a rocket produces large amounts of irradiated material and cannot be fired inside, near, or at the Earth’s magnetic field. The Moon is the place to assemble, test, and launch any nuclear mission.

Such Fission Fragment propelled spacecraft would resemble the original Tsiolkovsky space train, with a several-hundred-foot-long slender skeleton mounting these one-shot Americium boosters. The turn-of-the-century deaf schoolmaster continues to predict.

Each lamp-shade-spherical thruster has a programmed design balancing the length and thrust of the burn. After being expended, each booster uses a small secondary system to send it off in an appropriate direction; probably equipped with a small sensor package, it can use its hot irradiated shell as an RTG. The frame that served as a car of the space train transforms into a pair of satellite panels. Being more an artist than an *engineer, I find the monoplane configuration pleasing to the eye as well as functional. These dozens and eventually thousands of dual-purpose boosters would help form a space warning net.

The front of the space train is a large plastic sphere partially filled with water sent up from the surface of a Robotic Lunar Polar Base. The spaceship would split apart on a tether to generate artificial gravity, with the lessening booster mass balanced by varying lengths of tether with an intermediate reactor mass.

These piloted impact-threat interceptors would be manned by the United Nations Space Defense Force. All the nuclear powers would be represented… well, most of them. They would be capable of “fast missions” lasting only a month or at most two. They would be launched from underground silos on the Moon to deliver a nuclear weapon package towards an impact threat at the highest possible velocity, and so the fastest intercept time. These ships would come back on a ballistic course with all their boosters expended, to be rescued by recovery craft from the Moon upon return to the vicinity of Earth.

The key to this scenario is Americium-242. It is extremely expensive stuff. The only alternative is Nuclear Pulse Propulsion (NPP). The problem with bomb propulsion is the need for a humongous mass for the most efficient size of bomb to react with.

The logic tree then splits again with two designs of bomb-propelled ship: the “Orion” and the “Medusa.” The Orion is the original design, using a metal plate and shock-absorbing system. The Medusa is essentially a giant woven-alloy parachute-and-tether system that replaces the plate with a much lighter “mega-sail.” In one of the few cases where compromise might bear fruit, the huge spinning UFO-type disc, thousands of feet across, would serve quite well to explore, colonize, and intercept impact threats. Such a ship would require a couple of decades to begin manufacture on the Moon.

Americium boosters could be built on Earth and inserted into lunar orbit with Human Rated Heavy Lift Vehicles (SLS), and a mission launched well within a ten-year Apollo-type plan. But the Americium infrastructure has to be available as a first step.

Would any of my hundreds of faithful followers be willing to assist me in circulating a petition?

*Actually I am neither an artist nor an engineer, just a wannabe pulp writer in the mold of Edgar Rice Burroughs.

It is a riddle and almost a scandal: If you let a particle travel fast through a landscape of randomly moving round troughs – like a frictionless ball sent through a set of circling, softly rounded “teacups” inserted into the floor (to be seated in for a ride at a country fair) – you will find that it loses speed on average.

This is perplexing because if you invert time before throwing in the ball, the same thing is bound to happen again – since we did not specify the direction of time beforehand in our frictionless fairground universe. So the effect depends only on the “hypothesis of molecular chaos” being fulfilled – a lack of initial correlations, in Boltzmann’s 19th-century parlance. Boltzmann was the first to wonder about this amazing fact – although he looked only at the opposite case of upwards-inverted cups, that is, repulsive particles.

The simplest example does away with fully 2-dimensional interaction. All you need is a light horizontal particle travelling back and forth in a frictionless 1-dimensional closed transparent tube, plus a single attractive, much heavier particle moving slowly up and down in a frictionless transversal 1-dimensional closed transparent tube of its own – towards and away from the middle of the horizontal tube while exerting a Newtonian attractive force on the light fast particle across the common plane. Then the energy-poor fast particle still gets statistically deprived of energy by the energy-rich heavy slow particle in a sort of “energetic capitalism.”

If now the mass of the heavy particle is allowed to go to infinity while its speed and the force exerted by it remain unchanged, we arrive at a periodically forced single-degree-of-freedom Hamiltonian oscillator in the horizontal tube. What could be simpler? But you again get “antidissipation” – a statistical taking-away of kinetic energy from the light fast particle by the heavy slow one.
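The limiting case just described can be put on a computer in a few lines. What follows is a minimal numerical sketch, not the simulation from the cited papers: the light particle's mass is set to one, the heavy attractor's motion is simply prescribed as a sine (modeling the infinite-mass limit), the force is softened to avoid the singularity at zero distance, and every parameter value (A, omega, G, eps) is invented for illustration. Tracking the kinetic energy over time is how one would look for the statistical energy change discussed above.

```python
import math

def simulate(T=50.0, dt=1e-3, v0=1.0, L=1.0, A=0.5, omega=2.0, G=0.05, eps=0.1):
    """Light unit-mass particle moving in x inside a closed tube [-L, L]
    with reflecting walls; a heavy attractor sits at (0, y(t)) with
    prescribed motion y(t) = A*sin(omega*t) (the infinite-mass limit),
    pulling the particle with a softened inverse-square attraction.
    Returns the particle's kinetic energy at each time step."""
    x, v, t = 0.3, v0, 0.0
    kinetic = []
    while t < T:
        y = A * math.sin(omega * t)
        r2 = x * x + y * y + eps * eps   # softened squared distance
        ax = -G * x / r2 ** 1.5          # x-component of the attraction
        v += ax * dt                     # semi-implicit (symplectic) Euler
        x += v * dt
        if x > L:                        # reflecting walls of the tube
            x, v = 2 * L - x, -v
        elif x < -L:
            x, v = -2 * L - x, -v
        t += dt
        kinetic.append(0.5 * v * v)
    return kinetic

ks = simulate()
early = sum(ks[:1000]) / 1000
late = sum(ks[-1000:]) / 1000
print(f"mean kinetic energy, first 1000 steps: {early:.4f}")
print(f"mean kinetic energy, last 1000 steps:  {late:.4f}")
```

Flipping the sign of G turns the attractive (cryodynamic) case into the repulsive (thermodynamic) one mentioned below; any statistical trend only shows up over long runs and many initial conditions, so a single short trajectory proves nothing by itself.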

A first successful numerical simulation was obtained by Klaus Sonnleitner in 2010 – still with a finite mass ratio and hence with explicit energy conservation. Ramis Movassagh obtained a similar result independently and proved it analytically. Neither publication yet looked at the simpler – purely periodically forced – limiting case just described: a single-degree-of-freedom, periodically forced conservative system. The simplest and oldest paradigm in Poincaréan chaos theory as the source of big news?

If we invert the potential (Newtonian-repulsive rather than Newtonian-attractive), the light particle now gains energy statistically from the heavy guy – in this simplest example of statistical thermodynamics (which the system now turns out to be). Thus, chaos theory becomes the foundation of many-particle physics: both on Earth with its almost everywhere repulsive potentials (thermodynamics) and in the cosmos with its almost everywhere attractive potentials (cryodynamics). The essence of two fundamental disciplines – statistical thermodynamics and statistical cryodynamics – is implicit in our periodically forced single-tube horizontal particle. That tube represents the simplest nontrivial example in Hamiltonian dynamics, including celestial mechanics, anyhow. But it now reveals two miraculous new properties: “deterministic entropy” generation under repulsive conditions, and “deterministic ectropy” generation under attractive conditions.

I would love to elicit the enthusiasm of young and old chaos aficionados across the planet, because this new two-tiered fundamental discipline in physics based on chaos theory is bound to generate many novel implications – from revolutionizing cosmology to taming the fire of the sun down here on Earth. There perhaps never existed a more economically and theoretically promising unified discipline. Simple computers suffice for deriving its most important features, almost all still unharvested.

Another exciting fact: the present proposal will be taken lightly by most everyone in academic physics because Lifeboat is not an anonymously refereed outlet. But many young people on the planet do own computers and will appreciate the liberating truth that “non-anonymous peer review” carries the day – with them at the helm. So, please, join in. I for one was so far unable to extract the really simplest underlying principle: why is it possible to have time-directed behavior in a non-time-directed reversible dynamics if that time-directedness does not come from statistics, as everyone has believed for the better part of two centuries? What is the real secret? And why does it come in two mutually opposed ways? We have only scratched the surface of chaos so far. Boltzmann used that term in a clairvoyant fashion, did he not? (For J.O.R.)

JUSTIN.SPACE.ROBOT.GUY
A Point too Far to Astronaut

It’s cold out there beyond the blue. Full of radiation. Low on breathable air. Vacuous.
Machines and organic creatures, keeping them functioning and/or alive — it’s hard.
Space to-do lists are full of dangerous, fantastically boring, and super-precise stuff.

We technological mammals assess thusly:
Robots. Robots should be doing this.

Enter Team Space Torso
As covered by IEEE a few days ago, the DLR (das German Aerospace Center) released a new video detailing the ins & outs of their tele-operational haptic feedback-capable Justin space robot. It’s a smooth system, and eventually ground-based or orbiting operators will just strap on what look like two extra arms, maybe some VR goggles, and go to work. Justin’s target missions are the risky, tedious, and very precise tasks best undertaken by something human-shaped, but preferably remote-controlled. He’s not a new robot, but Justin’s skillset is growing (video is down at the bottom there).

Now, Meet the Rest of the Gang:
SPACE.TORSO.LINEUPS
NASA’s Robonaut2 (full coverage), the first and only humanoid robot in space, has of late been focusing on the ferociously mundane tasks of button pushing and knob turning, but hey, WHO’S IN SPACE, HUH? Then you’ve got Russia’s elusive SAR-400, which probably exists, but seems to hide behind… an iron curtain? Rounding out the team is another German, AILA. The nobody-knows-why-it’s-feminized AILA is another DLR-funded project from a university robotics and A.I. lab with a 53-syllable name that takes too long to type but there’s a link down below.

Why Humanoid Torso-Bots?
Robotic tools have been up in space for decades, but they’ve basically been iterative improvements on the same multi-joint single-arm grabber/manipulator. NASA’s recent successful Robotic Refueling Mission is an expansion of mission-capable space robots, but as more and more vital satellites age, collect damage, and/or run out of juice, and more and more humans and their stuff blast into orbit, simple arms and auto-refuelers aren’t going to cut it.

Eventually, tele-operable & semi-autonomous humanoids will become indispensable crew members, and the why of it breaks down like this: 1. space stations, spacecraft, internal and extravehicular maintenance terminals, these are all designed for human use and manipulation; 2. what’s the alternative, a creepy human-to-spider telepresence interface? and 3. humanoid space robots are cool and make fantastic marketing platforms.

A space humanoid, whether torso-only or legged (see: Robonaut’s new legs), will keep astronauts safe, focused on tasks machines can’t do, and spared the space craziness of trying to hold a tiny pinwheel perfectly still next to an air vent for 2 hours — which, in fact, is slated to become one of Robonaut’s ISS jobs.

Make Sciencey Space Torsos not MurderDeathKillBots
As one is often wont to point out, rather than finding ways to creatively dismember and vaporize each other, it would be nice if we humans could focus on the lovely technologies of space travel, habitation, and exploration. Nations competing over who can make the most useful and sexy space humanoid is an admirable step, so let the Global Robot Space Torso Arms Race begin!

“Torso Arms Race!”
Keepin’ it real, yo.

• • •

DLR’s Justin Tele-Operation Interface:

• • •

[JUSTIN TELE-OPERATION SITUATION — IEEE]

Robot Space Torso Projects:
[JUSTIN — GERMANY/DLR | FACEBOOK | TWITTER]
[ROBONAUT — U.S.A./NASA | FACEBOOK | TWITTER]
[SAR-400 — RUSSIA/ROSCOSMOS | PLASTIC PALS | ROSCOSMOS FACEBOOK]
[AILA — GERMANY/DAS DFKI]

This piece originally appeared at Anthrobotic.com on February 21, 2013.

With the recent meteor explosion over Russia coincident with the safe passing of asteroid 2012 DA14, and an expected spectacular approach by comet ISON due towards the end of 2013, one could suggest that the Year of the Snake is one where we should look to the skies and consider our long-term safeguard against rocks from space.

Indeed, following the near ‘double whammy’ last week, where a 15-meter meteor caught us by surprise and caused extensive damage and injury in central Russia, while the larger, anticipated 50-meter asteroid swept to within just 27,000 km of Earth, media reported an immediate response from astronomers, with plans to create state-of-the-art detection systems to give warning of incoming asteroids and meteoroids. Concerns can be abated.
ATLAS, the Asteroid Terrestrial-impact Last Alert System, is due to begin operations in 2015, and expects to give a one-week warning for a small asteroid – called “a city killer” – and three weeks for a larger “county killer” — providing time for evacuation of risk areas.

Deep Space Industries (a US Company), which is preparing to launch a series of small spacecraft later this decade aimed at surveying nearby asteroids for mining opportunities, could also be used to monitor smaller difficult-to-detect objects that threaten to strike Earth.

However — despite ISON doom-merchants — we are already in relatively safe hands. The SENTRY MONITORING SYSTEM maintains a Sentry Risk Table of possible future Earth-impact events, typically tracking objects 50 meters or larger — none of which is currently expected to hit Earth. Other sources will tell you that comet ISON is not expected to pass any closer than 0.42 AU (63,000,000 km) from Earth — though it should still provide spectacular viewing in our night skies come December 2013. A recently trending threat, the 140-meter-wide asteroid AG5, was given just a 1-in-625 chance of hitting Earth in February 2040, though more recent measurements have reduced this risk to almost nil. The Torino Scale is currently used to rate the risk category of asteroid and comet impacts on a scale of 0 (no hazard) to 10 (certain global collision). At present, almost all known asteroids and comets are categorized as level 0 on this scale (AG5 was temporarily categorized at level 1 until recent measurements, and 2007 VK184, a 130-meter asteroid due for approach circa 2048–2057, is the only object currently listed at level 1 or higher).

An asteroid striking land will cause a crater far larger than its size; the crater diameter in kilometers is approximately D = E^(1/3.4) / 10^6.77, where E is the impact energy. As such, if an asteroid the size of AG5 (140 meters wide) were to strike Earth, it would create a crater over twice the diameter of Barringer Meteor Crater in northern Arizona and affect a far larger area — or, on striking water, it would create a tsunami of global reach. Fortunately, the frequency of such an object striking Earth is quite low — perhaps once every 100,000 years. It is the smaller ones, such as the one which exploded over Russia last week, which are the greater concern. These occur perhaps once every 100 years and are not easily detectable by our current methods — justifying the $5m funding NASA contributed to the new ATLAS development in Hawaii.
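The crater scaling law quoted above can be turned into a one-line function. The exponent 1/3.4 and the divisor 10^6.77 are read off the expression in the text; the assumption that E is measured in joules is mine, so absolute diameters from this sketch should be treated with caution — the relative scaling is what the law really conveys.

```python
def crater_diameter_km(energy_joules: float) -> float:
    """Crater diameter from impact energy via the scaling law quoted
    in the text: D [km] = E^(1/3.4) / 10^6.77 (E assumed in joules)."""
    return energy_joules ** (1 / 3.4) / 10 ** 6.77

# A thousandfold increase in energy widens the crater by 10^(3/3.4), about 7.6x:
ratio = crater_diameter_km(1e18) / crater_diameter_km(1e15)
print(f"diameter ratio for 1000x the energy: {ratio:.2f}")
```

Note the sub-linear exponent: energy must grow by roughly three orders of magnitude to make the crater an order of magnitude wider, which is why modest-sized impactors still dig disproportionately large craters.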

We are a long way from deploying a response system to deflect/destroy incoming meteors, though at least with ATLAS we will be more confident of getting out of the way when the sky falls in. More information on ATLAS: http://www.fallingstar.com/index.php

Humanity’s wake-up call has been ignored, and we are probably doomed.

The Chelyabinsk event is a warning. Unfortunately, it seems to be a non-event in the great scheme of things, and that means the human race is probably also a non-starter. For years I hoped for such an event and saw it as the start of a new space age. Just as Sputnik indirectly resulted in a man on the Moon, I predicted an event that would launch humankind into deep space.

Now I wait for ISON. Twenty-thirteen may be the year of the comet, and if that does not impress upon us the vulnerability of Earth to impacts, then only an impact will. If the impact throws enough particles into the atmosphere, then no food will grow and World War C will begin. The C stands for cannibalism. If the impact hits the Ring of Fire, it may generate volcanic effects with the same result. If whatever hits Earth is big enough, it will render all life above the size of microbes extinct. We have spent trillions of dollars on defense, yet we are defenseless.

Our instinctive optimism bias continues to delude us with the idea that we will survive no matter what happens. Besides the impact threat, there is the threat of an engineered pathogen. While naturally evolved epidemics always leave a percentage of survivors, a bug designed to be 100 percent lethal will leave none alive. And then there is the unknown: Earth changes, including volcanic activity, can also wreck our civilization. We go on as a species the same way we go on with our own lives, ignoring death for the most part. And that is our critical error.

The universe does not care if we thrive or go extinct. If we do not care then a quick end is inevitable.

I have given the world my best answer to the question. That is all I can do:

http://voices.yahoo.com/water-bombs-8121778.html?cat=15

KILL.THE.ROBOTS
The Golden Rule is Not for Toasters

Simplistically nutshelled, talking about machine morality is picking apart whether or not we’ll someday have to be nice to machines or demand that they be nice to us.

Well, it’s always a good time to address human & machine morality vis-à-vis both the engineering and philosophical issues intrinsic to the qualification and validation of non-biological intelligence and/or consciousness that, if manifested, would wholly justify consideration thereof.

Uhh… yep!

But, whether at run-on sentence dorkville or any other tech forum, right from the jump one should know that a single voice rapping about machine morality is bound to get hung up in and blinded by its own perspective, e.g., splitting hairs to decide who or what deserves moral treatment (if a definition of that can even be nailed down), or perhaps yet another justification for the standard intellectual cul de sac:
“Why bother, it’s never going to happen.“
That’s tired and lame.

One voice, one study, or one robot fetishist with a digital bullhorn — one ain’t enough. So, presented and recommended here is a broad-based overview, a selection of the past year’s standout pieces on machine morality. The first, only a few days old, is actually an announcement of intent that could pave the way to forcing the actual question.
Let’s then have perspective:

Building a Brain — Being Humane — Feeling our Pain — Dude from the NYT
February 3, 2013 — Human Brain Project: Simulate One
Serious Euro-Science to simulate a human brain. Will it behave? Will we?

January 28, 2013 — NPR: No Mercy for Robots
A study of reciprocity and punitive reaction to non-human actors. Bad robot.

April 25, 2012 — IEEE Spectrum: Attributing Moral Accountability to Robots
On the human expectation of machine morality. They should be nice to me.

December 25, 2011 — NYT: The Future of Moral Machines
Engineering (at least functional) machine morality. Broad strokes NYT-style.

Expectations More Human than Human?
Now, of course you’re going to check out those pieces you just skimmed over, after you finish trudging through this anti-brevity technosnark©®™ hybrid, of course. When you do — you might notice the troubling rub of expectation dichotomy. Simply put, these studies and reports point to a potential showdown between how we treat our machines, how we might expect others to treat them, and how we might one day expect to be treated by them. For now, morality is irrelevant: it is of no consideration or consequence in our thoughts or intentions toward machines. But at the same time we hold dear the expectation of reasonable treatment, if not moral, by any intelligent agent — even an only vaguely human robot.

Well what if, for example: 1. AI matures, and 2. machines really start to look like us?
(see: Leaping Across Mori’s Uncanny Valley: Androids Probably Won’t Creep Us Out)

Even now, should someone attempt to smash your smartphone or laptop (or even just touch it), you of course protect the machine. Extending beyond concerns over the mere destruction of property or loss of labor, could one morally abide harm done to one’s marginally convincing humanlike companion? Even if fully accepting of its artificiality, where would one draw the line between economic and emotional damage? Or, potentially, could the machine itself abide harm done to it? Even if imbued with a perfectly coded algorithmic moral code mandating “do no harm,” could a machine calculate its passive non-response to intentional damage as an immoral act against itself, and then react?

Yeah, these hypotheticals can go on forever, but it’s clear that blithely ignoring machine morality or overzealously attempting to engineer it might result in… immorality.

Probably Only a Temporary Non-Issue. Or Maybe. Maybe Not.
There’s an argument that the need to practically implement or codify machine morality is so remote that debate is, now and forever, only that — debate. And oh wow, that opinion is superbly dumb. This author has addressed such staggeringly arrogant species-level macro-narcissism before (and it was awesome). See, outright dismissal isn’t a dumb argument because a self-aware machine, or something close enough for us to regard as such, is without doubt going to happen; it’s dumb because 1. absolutism is fascist, and 2. to the best of our knowledge, excluding the magic touch of Jesus & friends or aliens spiking our genetic punch or whatever, conscious and/or self-aware intelligence (which would require moral consideration) appears to be an emergent trait of massively powerful computation. And we’re getting really good at making machines do that.

Whatever the challenge, humans rarely avoid stabbing toward the supposedly impossible — and a lot of the time, we do land on the moon. The above-mentioned Euro-project says it’ll need 10 years to crank out a human brain simulation. Okay, respectable. But a working draft of the human genome, initially a 15-year international project, was completed 5 years ahead of schedule, due largely to advances in brute-force computational capability (in the not-so-digital 1990s). All that computery stuff, like, you know, gets better a lot faster these days. Just sayin’.

So, you know, might be a good idea to keep hashing out ideas on machine morality.
Because who knows what we might end up with…

Oh sure, I understand, turn me off, erase me — time for a better model, I totally get it.
- or -
Hey, meatsack, don’t touch me or I’ll reformat your squishy face!

Choose your own adventure!

[HUMAN BRAIN PROJECT]
[NO MERCY FOR ROBOTS — NPR]
[ATTRIBUTING MORAL ACCOUNTABILITY TO ROBOTS — IEEE]
[THE FUTURE OF MORAL MACHINES — NYT]

This piece originally appeared at Anthrobotic.com on February 7, 2013.

It now appears that human intelligence is being largely superseded by robots and artificial singularity agents. Education and technology stand little chance of making us far more intelligent. The question now is what our place is in this new world, where we are no longer the most intelligent kind of species.

Even if we develop new scientific and technological approaches, it is likely that machines will be far more efficient than we are, so long as those approaches are based on rationality.

IMO, in the near future we will only be able to compete in irrational domains, though I am not so sure that irrational domains cannot also be handled by machines.