
Dirrogate Singularity — A Transhumanism Journey

A widely accepted definition of Transhumanism is: The ethical use of all kinds of technology for the betterment of the human condition.

This all-encompassing summation makes a good elevator pitch for laypersons, were they to ask for an explanation. Practitioners and contributors to the movement, of course, know how to branch this out into specific streams: science, philosophy, politics and more.

- This article was originally published on ImmortalLife.info

We are in the midst of a technological revolution, and it is cool to proclaim that one is a Transhumanist. Yet many intelligent and focused Transhumanists are asking some all-important questions: What road-map have we drawn out, and what concrete steps are we taking to bring the goals of Transhumanism to fruition?

Transhumanism could be looked at as culminating in a Technological Singularity. People comprehend the meaning of the Singularity differently. One such definition: the Singularity marks the moment when technology trumps the human brain, and the limitations of the mind are surpassed by artificial intelligence. Being an author and not a scientist, I have a definition of the Singularity colored by creative vision. I call it Dirrogate Singularity.

I see us humans, successfully and practically, harnessing the strides we’ve made over the past century in semiconductor technology, neural networks, artificial intelligence, and digital progress in general, to create Digital Surrogates of ourselves — our Dirrogates. In doing so, humans will reach pseudo-God status and will be free to merge with these creatures made in their own likeness… attaining Dirrogate Singularity.

So, how far into the future will this happen? Not very far. In fact, it could commence as soon as today, or at most within a couple of years. The conditions and timing are right for us to “trans-form” into Digital Beings: Dirrogates.

I’ll use excerpts from the story ‘Memories with Maya’ to seed ideas for a possible road-map to Dirrogate Singularity, while keeping the tenets of Transhumanism in focus on the dashboard as we steer ahead. As this text will deconstruct many parts of the novel, major spoilers are unavoidable.

Dirrogate Singularity vs. The Singularity:

The main distinction I make is this: I don’t believe the Singularity is the moment when technology trumps the human brain. I believe the Singularity is when the human mind accepts and does not discriminate between an advanced “Transhuman” (effectively, a mind upload living in a bio-mechanical body) and a “Natural” (an un-amped Homo sapiens).

This could be seen as a different interpretation of the commonly accepted concept of The Singularity. As one of the aims of this essay is to create a possible road-map to seed ideas for the Transhumanism movement, I choose to look at a wholly digital path to Transhumanism, bypassing human augmentation via nanotechnology, prosthetics, or cyborg-ism. As we will see further down, Dirrogate Singularity could slowly evolve into the commonly accepted definitions of Technological Singularity.

What is a Dirrogate:

A portmanteau of Digital + Surrogate. An excerpt from the novel explains in more detail:

“Let’s run the beta of our social interaction module outside.”

Krish asked the prof to follow him to the campus ground in front of the food court. They walked out of the building and approached a shaded area with four benches. As they were about to sit, my voice came through the phone’s speaker. “I’m on your far right.”

Krish and the prof turned, scanning through the live camera view of the phone until they saw me waving. The phone’s compass updated me on their orientation. I asked them to come closer.

“You have my full attention,” the prof said. “Explain…”

“So,” Krish said, in true geek style… “Dan knows where we are, because my phone is logged in and registered into the virtual world we have created. We use a digital globe to fly to any location. We do that by using exact latitude and longitude coordinates.” Krish looked at the prof, who nodded. “So this way we can pick any location on Earth to meet at, provided of course, I’m physically present there.”

“I understand,” said the prof. “Otherwise, it would be just a regular online multi-player game world.”

“Precisely,” Krish said. “What’s unique here is a virtual person interacting with a real human in the real world. We’re now on the campus Wifi.” He circled his hand in front of his face as though pointing out to the invisible radio waves. “But it can also use a high-speed cell data network. The phone’s GPS, gyro, and accelerometer updates as we move.”

Krish explained the different sensor data to Professor Kumar. “We can use the phone as a sophisticated joystick to move our avatar in the virtual world that, for this demo, is a complete and accurate scale model of the real campus.”

The prof was paying rapt attention to everything Krish had to say. “I laser scanned the playground and the food-court. The entire campus is a low rez 3D model,” he said. “Dan can see us move around in the virtual world because my position updates. The front camera’s video stream is also mapped to my avatar’s face, so he can see my expressions.”

“Now all we do is not render the virtual buildings, but instead, keep Daniel’s avatar and replace it with the real-world view coming in through the phone’s camera,” explained Krish.

“Hmm… so you also do away with render overhead and possibly conserve battery life?” the prof asked.

“Correct. Using GPS, camera and marker-less tracking algorithms, we can update our position in the virtual world and sync Dan’s avatar with our world.”

“And we haven’t even talked about how AI can enhance this,” I said.

I walked a few steps away from them, counting as I went.

“We can either follow Dan or a few steps more and contact will be broken. This way in a social scenario, virtual people can interact with humans in the real world,” Krish said. I was nearing the personal space out of range warning.

“Wait up, Dan,” Krish called.

I stopped. He and the prof caught up.

“Here’s how we establish contact,” Krish said. He touched my avatar on the screen. I raised my hand in a high-five gesture.

“So only humans can initiate contact with these virtual people?” asked the prof.

“Humans are always in control,” I said. They laughed.

“Aap Kaise ho?” Krish said.

“Main theek hoo,” I answered a couple of seconds later, much to the surprise of the prof.

“The AI module can analyze voice and cross-reference it with a bank of ten languages,” he said. “Translation is done the moment it detects a pause in a sentence. This way multicultural communication is possible. I’m working on some features for the AI module. It will be based on computer vision libraries to study and recognize eyebrows and facial expressions. This data stream will then be accessible to the avatar’s operator to carry out advanced interaction with people in the real world–”

“So people can have digital versions of themselves and do tasks in locations where they cannot be physically present,” the prof completed Krish’s sentence.

“Cannot or choose not to be present and in several locations if needed,” I said. “There is no reason we can’t own several digital versions of ourselves doing tasks simultaneously.”

“Each one licensed with a unique digital fingerprint registered with the government or institutions offering digital surrogate facilities,” Krish said.

“We call them di-rro-gates,” I said.

One of the characters in the story also says, “Humans are creatures of habit,” and, “We live our lives following the same routine day after day. We do the things we do with one primary motivation: comfort.”

Whether this is entirely true or not, there is something to think about here: what does ‘improving the human condition’ imply? To me, comfort is high on the list and a major motivation. If people can spawn multiple Dirrogates of themselves that interact with real people wearing future iterations of Google Glass (for lack of a more popular term for Augmented Reality visors), then the next leg of the road-map to Dirrogate Singularity is to look at a few examples of Dirrogate interaction.
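The mechanics Krish describes in the excerpt (phone GPS and compass, an avatar pinned to real-world coordinates, a “personal space” range check) reduce to a little geometry. The following Python is purely illustrative; the function names and the contact radius are invented, and the flat-earth approximation only holds at campus scale:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, metres

def gps_to_local_offset(viewer_lat, viewer_lon, avatar_lat, avatar_lon):
    """Approximate east/north offset (metres) of the avatar from the viewer.

    Equirectangular approximation: fine over a few hundred metres."""
    d_lat = math.radians(avatar_lat - viewer_lat)
    d_lon = math.radians(avatar_lon - viewer_lon)
    east = d_lon * math.cos(math.radians(viewer_lat)) * EARTH_RADIUS_M
    north = d_lat * EARTH_RADIUS_M
    return east, north

def avatar_bearing_and_range(viewer_lat, viewer_lon, heading_deg,
                             avatar_lat, avatar_lon):
    """Where to draw the avatar on screen: bearing relative to the phone's
    compass heading, plus distance for the 'personal space' check."""
    east, north = gps_to_local_offset(viewer_lat, viewer_lon,
                                      avatar_lat, avatar_lon)
    distance = math.hypot(east, north)
    bearing = (math.degrees(math.atan2(east, north)) - heading_deg) % 360
    return bearing, distance

CONTACT_RANGE_M = 10.0  # hypothetical 'personal space' radius

# Viewer facing north; avatar roughly 11 m to the east.
bearing, dist = avatar_bearing_and_range(12.9716, 77.5946, 0.0,
                                         12.9716, 77.5947)
print(f"avatar at {bearing:.0f} deg, {dist:.1f} m away; "
      f"in range: {dist <= CONTACT_RANGE_M}")
```

In this toy run the avatar sits at a bearing of about 90 degrees (due right of a north-facing viewer) and just outside the invented contact radius, which is exactly the “out of range” moment in the excerpt.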

Evangelizing Transhumanism:

In writing the novel, I took several risks, story length being one. I’ve attempted to keep the philosophy subtle, almost hidden in the story, and judging by reviews on sites such as GoodReads.com, it is plain that many of today’s science fiction readers are after cliffhanger-style stories and gravitate toward, or possibly expect, a Dystopian future. This root craving must be addressed in laypeople if Transhumanism as a movement is to succeed.

I’ve noticed comments that the sex did not add much to the story. No one (yet) has delved deeper to ask whether there was a reason for the sex scenes and an underlying message. The success of Transhumanism will lie in large-scale understanding and mass adoption of the movement’s values by laypeople. Google Glass will make a good case study in this regard: if they get it wrong, Glass will quickly share the fate and ridicule of wearing Bluetooth headsets.

One of the first steps, in my view, toward improving the human condition is experiencing pleasure… of every kind, especially carnal.

In that sense, we already are Digital Transhumans. Long distance video calls, teledildonics and recent mainstream offerings such as Durex’s “Fundawear” can bring physical, emotional and psychological comfort to humans, without the traditional need for physical proximity or human touch.


(Durex’s Fundawear – Image Courtesy Snapo.com)

These physical-stimulation and pleasure-giving devices add a whole new meaning to ‘wearable computing’. Yet, behind every online Avatar, every Dirrogate, is a human operator. Now consider: what if one of these “Fundawear” sessions were recorded?

The data stream for each actuator in the garment could be stored in a file: a “feel-stream” unique to the person who created it. We could then replay it and re-experience the signature touch of a loved one at any time… even long after they are gone. Would such a situation qualify as a partial or crude “mind upload”?
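A feel-stream of this kind could be as simple as timestamped actuator samples written to a file and replayed with their original timing. Here is a toy Python sketch; the file format, function names, and the `drive_actuator` stand-in are all assumptions for illustration, not any real Fundawear API:

```python
import json
import os
import tempfile
import time

def record_feelstream(samples, path):
    """Store (t_seconds, actuator_id, intensity) samples as a 'feel-stream'."""
    with open(path, "w") as f:
        json.dump({"version": 1, "samples": samples}, f)

def replay_feelstream(path, drive_actuator, speed=1.0):
    """Replay a stored session, preserving the original timing.

    drive_actuator(actuator_id, intensity) stands in for whatever
    would actually drive the garment's hardware."""
    with open(path) as f:
        stream = json.load(f)
    prev_t = None
    for t, actuator_id, intensity in stream["samples"]:
        if prev_t is not None:
            time.sleep((t - prev_t) / speed)  # keep the recorded rhythm
        prev_t = t
        drive_actuator(actuator_id, intensity)

# A tiny fake session: two actuators firing half a second apart.
session = [(0.0, "left_wrist", 0.4), (0.5, "right_wrist", 0.9)]
path = os.path.join(tempfile.gettempdir(), "feelstream_demo.json")
record_feelstream(session, path)

played = []
replay_feelstream(path, lambda a, i: played.append((a, i)), speed=100.0)
print(played)  # the recorded touches, back in order
```

The point of the sketch is only that the “signature touch” is just data: once captured, it can be replayed indefinitely, which is what raises the crude mind-upload question above.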

Mind Uploading – A Practical Approach:

Using Augmented Reality hardware, a person can see and experience interaction with a Dirrogate, irrespective of whether the Dirrogate is remotely operated by a human or driven by prerecorded subroutines under the playback control of an AI. Mind uploading [at this stage of our technological evolution] does not have to be a full-blown simulation of the mind.

Consider the case of a Google Car. Could a human operator feasibly ‘drive’ the car remotely, with visual feedback from the car’s on-board environment-analysis cameras? Any AI in the car could be used on an as-needed basis. Now, this might not be the aim of a driver-less car; and why would your Dirrogate need to physically drive when, in essence, you could tele-travel to any location?

Human Shape Shifters:

Reasons could be as simple as needing to transport physical cargo to places where home delivery is not offered. Your Dirrogate could drive the car. Once at the location [a hardware depot, say], your Dirrogate could merge with the on-board computer of an articulated, motorized shopping cart. The checkout counter staff see your Dirrogate augmented into the real world via their visors. You then steer the cart to the parking lot, load in the cargo [via the cart’s articulated arm or a helper], and drive home. In such a scenario, a mind upload has swapped physical “bodies” as needed to complete a task.

If that use made your eyes roll…here’s a real life example:

Devon Carrow, a 2nd grader, has a life-threatening illness that keeps him away from school. He sends his “avatar,” a robot called Vigo.

In the case of a Dirrogate, if the classroom teacher wore an AR visor, she could “see” Devon’s Dirrogate sitting at his desk. A mechanical robot body would be optional. An overhead camera could project the entire Augmented classroom so all the children could be aware of his presence. As AR eye-wear becomes more affordable, individual students could interact with Dirrogates. Such uses of Dirrogates fit completely with the betterment-of-the-human-condition argument, especially when the Dirrogate operator is a human who could come into harm’s way in the real world.

While we simultaneously work on longevity and eliminating deadly diseases, both noble causes, we have to come to terms with the fact that biology has one up on us in the virus department as of today. Epidemic outbreaks such as SARS can keep schools closed. Would it not make sense to maintain the communal ethos of school attendance and classroom interaction by transhumanizing ourselves…digitally?

Does the above example qualify as Mind Uploading? Not in the traditional definition of the term. But looking at it from a different perspective, the 2nd grader has uploaded his mind to a robot.

Dirrogate Immortality via Quantum Archeology:

Below is a passage from the story whose literal significance casual readers of science fiction miss:

“Look at her,” I said. “I don’t want her to be a just a memory. I want to keep her memory alive. That day, the Wizer was part of the reason for three deaths. Today, it’s keeping me from dying inside.”

“Help me, Krish,” I said. “Help me keep her memory alive.” He was listening. He wiped his eyes with his hands. I took the Wizer off. “Put it back on,” he said.


A closer look at the Wizer – [visor with Augmented Intelligence built in.]

The preceding excerpt from the story hints at resurrecting her: digital cryonics.

So, how would Quantum Archeology techniques be applied to resurrect a dead person? Every day we spend hours uploading our stream-of-consciousness to the “cloud”. Photos, videos, Instagrams, Facebook status updates, tweets. All of this is data that can be and is being mined by Deep Learning systems. There’s no prize for guessing who the biggest investor and investigator of Deep Learning is.

Quantum Archeology gets a helping hand with all the digital breadcrumbs we’re leaving around us in this century. The question is: Is that enough information for us to Create a Mind?

Mind Uploading – Libraries and Subroutines:

A more relevant question to ask is, should we attempt to build a mind from the ground up, or start by collecting subroutines and libraries unique to a particular person? Earlier on in the article, it was suggested that by recording a ‘Fundawear’ session, we could re-experience someone’s signature intimate touch. Using Deep Learning, can personality libraries be built?

A related question to answer is: wouldn’t it make everything ‘artificial’ and a degraded version of the original? To attempt an answer, let’s look around us today. Aren’t we already degrading our sense of hearing, for instance, when we listen to hour after hour of MP3 music encoded at 128 kbps or less? How about every time we’ve come to rely on Google’s “did you mean” or Microsoft’s red squiggly line to correct even our simplest spellings?

Now, it gets interesting… since we have mind upload “libraries”, we are at liberty to borrow subroutines from more accomplished humans, to augment our own intelligence.

Will the near future allow us to choose a Dirrogate partner with the creative thinking of one person’s personality upload, the intimate skill-set of another, and… you get the picture. Most people lead routine 9-to-5 lives. That does not mean they are not missed by loved ones after they have completed their biological life-cycle. Resurrecting or simulating such minds is much easier than, say, re-animating Einstein.
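To make the “libraries and subroutines” idea concrete, here is a purely illustrative Python sketch: a Dirrogate assembled from named personality libraries, each a small subroutine notionally borrowed from a different mind upload. Every name and structure here is invented for the sketch; capturing a real personality would of course be vastly harder:

```python
# Two "borrowed" subroutines, standing in for personality libraries
# extracted (hypothetically) from two different uploads.
def creative_thinker(prompt):
    return f"reframes '{prompt}' as a metaphor"

def patient_listener(prompt):
    return f"acknowledges '{prompt}' before replying"

class Dirrogate:
    """A digital surrogate composed from swappable personality libraries."""

    def __init__(self, name, libraries):
        self.name = name
        self.libraries = libraries  # trait name -> borrowed subroutine

    def respond(self, trait, prompt):
        """Dispatch a stimulus to the borrowed subroutine for that trait."""
        handler = self.libraries.get(trait)
        if handler is None:
            return f"{self.name} has no library for '{trait}'"
        return handler(prompt)

# Compose one Dirrogate from libraries of two different "uploads".
maya = Dirrogate("Maya", {
    "creativity": creative_thinker,  # borrowed from upload A
    "empathy": patient_listener,     # borrowed from upload B
})

print(maya.respond("creativity", "rain"))
print(maya.respond("humor", "rain"))  # a trait nobody contributed
```

The design point is that the libraries are composable and replaceable, which is exactly the borrowing-from-more-accomplished-humans scenario the paragraph above imagines.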

In the story, Krish, on digitally resurrecting his father, recounts:

“After I saw Maya, I had to,” he said. “I’ve used her same frame structure for the newspaper reading. Last night I went through old photos, his things, his books,” his voice was low. “I’m feeding them into the frame. This was his life for the past two years before the cancer claimed him. Every evening he would sit in this chair in the old house and read his paper.”

I listened in silence as he spoke. Tactile receptors weren’t needed to experience pain. Tone of voice transported those spores just as easily.

“It was easy to create a frame for him, Dan,” he said. “In the time that the cancer was eating away at him, the day’s routine became more predictable. At first he would still go to work, then come home and spend time with us. Then he couldn’t go anymore and he was at home all day. I knew his routine so well it took me 15 minutes to feed it in. There was no need for any random branches.”

I turned to look at him. The Wizer hid his eyes well. “Krish,” I said. “You know what the best part about having him back is? It does not have to be the way it was. You can re-define his routine. Ask your mom what made your dad happy and feed that in. Build on old memories, build new ones and feed those in. You’re the AI designer… bend the rules.”

“I dare not show her anything like this,” he said. “She would never understand. There’s something not right about resurrecting the dead. There’s a reason why people say rest in peace.”

Who is the real Transhuman?

Is it a person who has augmented their physical self or augmented one of their five primary senses? Or is it a human who has successfully re-wired their brain and their mind to accept another augmented human and the tenets of Transhumanism?

“He said perception is in the eye of the beholder… or something to that effect.”

“Maybe he said realism?” I offered.

“Yeah. Maybe. Turns out he is a believer and subscribes to the concept of transhumanism,” Krish said, adjusting the Wizer on the bridge of his nose. “He believes the catalyst for widespread acceptance of transhumanism has to be based on visual fidelity or the entire construct will be stymied by the human brain and mind.”

“Hmm… the uncanny valley effect? It has to be love at first sight, if we are to accept an augmented person huh.”

“Didn’t know you followed the movement,” he said.

“Look around us. Am I really here in person?”

“Point taken,” he said.

While taking the noble cause of Transhumanism forward, we have to address one truism that was put forward in the movie The Terminator: “It’s in your nature to destroy yourselves.”

When we eventually reach a full mind-upload stage and have the ability to swap or borrow libraries from other ‘minds’, will personality traits such as greed still be floating around as rogue libraries? Perhaps the common man is right: a Dystopian future is on the cards, and that’s why science fiction writers gravitate toward dystopian worlds.

Could this change as we progress from transhuman to post-human?

In building a road-map for Transhumanism, we need to present and evangelize more to the common man, in language and scenarios they can identify with. That is one of the main reasons Memories with Maya features settings and language that, at times, border on juvenile fiction. Concepts such as life extension, reversal of aging, and immortality resound better with laypeople when presented in the right context. There is a reason that Vampire stories are on the nation’s best-seller lists.

People are intrigued and interested in immortality.

Memories with Maya – The Dirrogate on Amazon: http://www.amazon.com/Memories-With-Maya-Clyde-Dsouza/dp/148…atfound-20

For more on the science used in the book, visit: http://www.dirrogate.com

Why does Science Fiction gravitate towards Dystopia and not the Utopia that Transhumanism promises?

(Front cover designs for Memories with Maya)

Of the two images above, as a typical Science Fiction reader, which would you gravitate towards? In designing the cover for my book I ran about 80 iterations of 14 unique designs through a group of beta readers, and the majority chose the one with the Green tint. (design credit: Dmggzz)

No one could come up with a satisfying reason why they preferred it over the other, except that it “looked more sci-fi.” I settled for the design on the right, though it was a very hard decision to make: I was throwing away one of the biggest draws a book can have — an inviting Dystopian cover.

As an author (and not a scientist), I’ve noticed that sci-fi readers seem to want dystopian fiction almost exclusively. A quick glance at reader preferences on sites such as GoodReads shows this. Yet, seeing Vampire-themed fiction rule the best-seller lists and the box office, we can assume the common man and woman are also intrigued by Longevity and Immortality.

Why is it so hard for sci-fi fans to look to the “brighter side” of science? Look at the latest Star Trek, for instance… Dystopia, not the feel-good, curiosity-nurturing theme of Roddenberry. This is noted in a post by Gray Scott on the website ImmortalLife.

I guess my question is: are there any readers or Futurology enthusiasts who crave a Utopian future in their fiction and real life, or are we descending a spiral staircase (no pun intended) into eventual Dystopia? In ‘The Dirrogate — Memories with Maya’, I’ve tried to (subtly) infuse the philosophy of transhumanism — technology for the betterment of humans.

At Lifeboat, the goal is ‘encouraging scientific advancements while helping humanity survive existential risks and possible misuse of increasingly powerful technologies.’ We need to reach out to the influencers of lay people, the authors, the film-makers… those that have the power to evangelize the ethos of Transhumanism and the Singularity, to paint the truth: Science and Technology advancement is for the betterment of the human race.

It would be naive to think that technology will not be abused, and a Dystopian world is indeed a scary and very real threat, but my belief is: we should guide (influence?) people to harness this “fire” to nurture and defend humanity, via our literature and movies, and cut back on seeding or fueling ideas that might lead to the destruction of our species.

Your thoughts?

Ten Commandments of Space

1. Thou shalt first guard the Earth and preserve humanity.

Impact deflection and survival colonies hold the moral high ground above all other calls on public funds.

2. Thou shalt go into space with heavy lift rockets with hydrogen upper stages and not go extinct.

The human race can only go in one of two directions: space or extinction. Right now we are an endangered species.

3. Thou shalt use the power of the atom to live on other worlds.

Nuclear energy is to the space age as steam was to the industrial revolution; chemical propulsion is useless for interplanetary travel and there is no solar energy in the outer solar system.

4. Thou shalt use nuclear weapons to travel through space.

Physical matter can barely contain chemical reactions; the only way to effectively harness nuclear energy to propel spaceships is to avoid containment problems completely: with bombs.

5. Thou shalt gather ice on the Moon as a shield and travel outbound.

The Moon has water for the minimum 14-foot-thick radiation shield and is a safe place to light off a bomb propulsion system; it is the starting gate.

6. Thou shalt spin thy spaceships and rings and hollow spheres to create gravity and thrive.

Humankind requires Earth-normal gravity and radiation shielding to travel for years through space; anything less is a guarantee of failure.

7. Thou shalt harvest the Sun on the Moon and use the energy to power the Earth and propel spaceships with mighty beams.

8. Thou shalt freeze without damage the old and sick and revive them when a cure is found; only an indefinite lifespan will allow humankind to combine and survive. Only with this reprieve can we sleep and reach the stars.

9. Thou shalt build solar power stations in space hundreds of miles in diameter and with this power manufacture small black holes for starship engines.

10. Thou shalt build artificial intellects and with these beings escape the death of the universe and resurrect all who have died, joining all minds on a new plane.

Human Brain Mapping & Simulation Projects: America Wants Some, Too?


The Brain Games Begin
Europe’s billion-Euro science-neuro Human Brain Project, mentioned here amongst machine morality last week, is basically already funded and well underway. Now the colonies over in the new world are getting hip, and they too have in the works a project to map/simulate/make their very own copy of the universe’s greatest known computational artifact: the gelatinous wad of convoluted electrical pudding in your skull.

The (speculated but not yet public) Brain Activity Map of America
About 300 different news sources are reporting that a Brain Activity Map project is outlined in the current administration’s to-be-presented budget, and will be detailed sometime in March. Hordes of journalists are calling it “Obama’s Brain Project,” which is stoopid, and probably only because some guy at the New Yorker did and they all decided that’s what they had to do, too. Or somesuch lameness. Or laziness? Deference? SEO?

For reasons both economic and nationalistic, America could definitely use an inspirational, large-scale scientific project right about now. Because seriously, aside from going full-Pavlov over the next iPhone, what do we really have to look forward to these days? Now, if some technotards or bible pounders monkeywrench the deal, the U.S. is going to continue that slide toward scientific… lesserness. So, hippies, religious nuts, and all you little sociopathic babies in politics: zip it. Perhaps, however, we should gently poke and prod the hard of thinking toward a marginally heightened Europhobia — that way they’ll support the project. And it’s worth it. Just, you know, for science.

Going Big. Not Huge, But Big. But Could be Massive.
Both the Euro and American flavors are no Manhattan Project-scale undertakings, in the sense of urgency and motivational factors, but more like the Human Genome Project. Still, with clear directives and similar funding levels (€1 billion & $1–3 billion, respectively), they’re quite ambitious and potentially far more world-changing than a big bomb. Like, seriously, man. Because brains build bombs. But hopefully an artificial brain would not. Spaceships would be nice, though.

Practically, these projects are expected to expand our understanding of the actual physical loci of human behavioral patterns, get to the bottom of various brain pathologies, stimulate the creation of more advanced AI/non-biological intelligence — and, of course, the big enchilada: help us understand more about our own species’ consciousness.

On Consciousness: My Simulated Brain has an Attitude?
Yes, of course it’s wild speculation to guess at the feelings and worries and conundrums of a simulated brain — but dude, what if, what if one or both of these brain simulation map thingys is done well enough that it shows signs of spontaneous, autonomous reaction? What if it tries to like, you know, do something awesome like self-reorganize, or evolve or something?

Maybe it’s too early to talk personality, but you kinda have to wonder… would the Euro-Brain be smug, never stop claiming superior education yet voraciously consume American culture, and perhaps cultivate a mild racism? Would the ‘Merica-Brain have a nation-scale authority complex, unjustifiable confidence & optimism, still believe in childish romantic love, and overuse the words “dude” and “awesome?”

We shall see. We shall see.

Oh yeah, have to ask:
Anyone going to follow Ray Kurzweil’s recipe?

Project info:
[HUMAN BRAIN PROJECT - - MAIN SITE]
[THE BRAIN ACTIVITY MAP - $ - HUFF-PO]

Kinda Pretty Much Related:
[BLUE BRAIN PROJECT]

This piece originally appeared at Anthrobotic.com on February 28, 2013.

Machine Morality: a Survey of Thought and a Hint of Harbinger


The Golden Rule is Not for Toasters

Simplistically nutshelled, talking about machine morality is picking apart whether or not we’ll someday have to be nice to machines or demand that they be nice to us.

Well, it’s always a good time to address human & machine morality vis-à-vis both the engineering and philosophical issues intrinsic to the qualification and validation of non-biological intelligence and/or consciousness that, if manifested, would wholly justify consideration thereof.

Uhh… yep!

But, whether at run-on sentence dorkville or any other tech forum, right from the jump one should know that a single voice rapping about machine morality is bound to get hung up in and blinded by its own perspective, e.g., splitting hairs to decide who or what deserves moral treatment (if a definition of that can even be nailed down), or perhaps yet another justification for the standard intellectual cul de sac:
“Why bother, it’s never going to happen.”
That’s tired and lame.

One voice, one study, or one robot fetishist with a digital bullhorn — one ain’t enough. So, presented and recommended here is a broad-based overview, a selection of the past year’s standout pieces on machine morality. The first, only a few days old, is actually an announcement of intent that could pave the way to forcing the actual question.
Let’s then have perspective:

Building a Brain — Being Humane — Feeling our Pain — Dude from the NYT
February 3, 2013 — Human Brain Project: Simulate One
Serious Euro-Science to simulate a human brain. Will it behave? Will we?

January 28, 2013 — NPR: No Mercy for Robots
A study of reciprocity and punitive reaction to non-human actors. Bad robot.

April 25, 2012 — IEEE Spectrum: Attributing Moral Accountability to Robots
On the human expectation of machine morality. They should be nice to me.

December 25, 2011 — NYT: The Future of Moral Machines
Engineering (at least functional) machine morality. Broad strokes NYT-style.

Expectations More Human than Human?
Now, of course you’re going to check out those pieces you just skimmed over, after you finish trudging through this anti-brevity technosnark©®™ hybrid, of course. When you do — you might notice the troubling rub of expectation dichotomy. Simply put, these studies and reports point to a potential showdown between how we treat our machines, how we might expect others to treat them, and how we might one day expect to be treated by them. For now, morality is irrelevant; it is of no consideration or consequence in our thoughts or intentions toward machines. But at the same time we hold dear the expectation of reasonable, if not moral, treatment by any intelligent agent — even an only vaguely human robot.

Well what if, for example: 1. AI matures, and 2. machines really start to look like us?
(see: Leaping Across Mori’s Uncanny Valley: Androids Probably Won’t Creep Us Out)

Even now should someone attempt to smash your smartphone or laptop (or just touch it), you of course protect the machine. Extending beyond concerns over the mere destruction of property or loss of labor, could one morally abide harm done to one’s marginally convincing humanlike companion? Even if fully accepting of its artificiality, where would one draw the line between economic and emotional damage? Or, potentially, could the machine itself abide harm done to it? Even if imbued with a perfectly coded algorithmic moral code mandating “do no harm,” could a machine calculate its passive non-response to intentional damage as an immoral act against itself, and then react?

Yeah, these hypotheticals can go on forever, but it’s clear that blithely ignoring machine morality or overzealously attempting to engineer it might result in… immorality.

Probably Only a Temporary Non-Issue. Or Maybe. Maybe Not.
There’s an argument that actually needing to practically implement or codify machine morality is so remote that debate is, now and forever, only that — and oh wow, that opinion is superbly dumb. This author has addressed this staggeringly arrogant species-level macro-narcissism before (and it was awesome). See, outright dismissal isn’t a dumb argument because a self-aware machine or something close enough for us to regard as such is without doubt going to happen, it’s dumb because 1. absolutism is fascist, and 2. to the best of our knowledge, excluding the magic touch of Jesus & friends or aliens spiking our genetic punch or whatever, conscious and/or self-aware intelligence (which would require moral consideration) appears to be an emergent trait of massively powerful computation. And we’re getting really good at making machines do that.

Whatever the challenge, humans rarely avoid stabbing toward the supposedly impossible — and a lot of the time, we do land on the moon. The above-mentioned Euro-project says it’ll need 10 years to crank out a human brain simulation. Okay, respectable. But a working draft of the human genome, an initially 15-year international project, was completed 5 years ahead of schedule, due largely to advances in brute-force computational capability (in the not-so-digital 1990s). All that computery stuff, like, you know, gets better a lot faster these days. Just sayin’.
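The “computers get better fast” point is easy to put rough numbers on. Assuming, for illustration only, that usable compute doubles roughly every two years (a Moore’s-law-style rule of thumb, not a measured figure):

```python
# Back-of-envelope estimate: if usable compute doubles roughly every
# 2 years, how much more is available by the end of a long project?
def growth_factor(years, doubling_period_years=2.0):
    """Multiplicative growth in compute over `years`."""
    return 2 ** (years / doubling_period_years)

# A 10-year brain-simulation project would finish with ~32x the
# compute it started with:
print(growth_factor(10))  # 32.0
# Over the genome project's planned 15-year span, ~181x:
print(growth_factor(15))  # ~181
```

Which is precisely why deadline estimates for compute-bound projects tend to be conservative: the tools at the finish line are an order of magnitude better than the tools at the start.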

So, you know, might be a good idea to keep hashing out ideas on machine morality.
Because who knows what we might end up with…

Oh sure, I understand, turn me off, erase me — time for a better model, I totally get it.
- or -
Hey, meatsack, don’t touch me or I’ll reformat your squishy face!

Choose your own adventure!

[HUMAN BRAIN PROJECT]
[NO MERCY FOR ROBOTS — NPR]
[ATTRIBUTING MORAL ACCOUNTABILITY TO ROBOTS — IEEE]
[THE FUTURE OF MORAL MACHINES — NYT]

This piece originally appeared at Anthrobotic.com on February 7, 2013.

How can humans compete with singularity agents?

It now appears that human intelligence is being largely superseded by robots and artificial singularity agents. Education and technology have little chance of making us far more intelligent. The question now is: what is our place in this new world, where we are no longer the most intelligent species?

Even if we develop new scientific and technological approaches, it is likely that machines will be far more efficient than us if these approaches are based on rationality.

IMO, in the near future we will only be able to compete in irrational domains, but I am not so sure that irrational domains cannot also be handled by machines.

On Leaving the Earth. Like, Forever. Bye-Bye.


Technology is as Human Does

When one of the U.S. Air Force’s top future-strategy guys starts dorking out on how we’ve gotta at least begin considering what to do when a progressively decaying yet apocalyptically belligerent sun begins BBQing the earth, attention is paid. See, none of the proposed solutions involve marinade or species-level acquiescence; they involve practical discussion of the necessity for super awesome technology on par with a Kardashev Type II civilization (one that’s harnessed the energy of an entire solar system).

Because Not if, but WHEN the Earth Dies, What’s Next for Us?
Head over to Kurzweil AI and have a read of Lt. Col. Peter Garretson’s guest piece. There’s perpetuation of the species stuff, singularity stuff, transhumanism stuff, space stuff, Mind Children stuff, and plenty else to occupy those of us with borderline pathological tech obsessions.

[BILLION YEAR PLAN — KURZWEIL AI]
[U.S. AIR FORCE BLUE HORIZONS FUTURE STUFF PROJECT]

Approaching the Great Rescue

http://www.sciencedaily.com/releases/2012/08/120815131137.htm

One more step has been taken toward making whole body cryopreservation a practical reality. An understanding of the properties of water allows the temperature of the human body to be lowered without damaging cell structures.

Just as the microchip revolution was unforeseen, the societal effects of suspending death have been completely overlooked.

The first successful procedure to freeze a human being and then revive that person without damage at a later date will be the most important single event in human history. When that person is revived he or she will awaken to a completely different world.

It will be a mad rush to build storage facilities for the critically ill so their lives can be saved. The very old and those in the terminal stages of disease will be rescued from imminent death. Vast resources will be turned toward the life sciences as the race to repair the effects of old age and cure disease begins. Hundreds of millions may eventually be awakened once aging is reversed. Life will become far more valuable overnight and activities such as automobile and air travel will be viewed in a new light. War will end because no one will desire to hasten the death of another human being.

It will not be immortality, just parole from the death row we all share. Get ready.

The Electric Septic Spintronic Artilect

AI scientist Hugo de Garis has prophesied that the next great historical conflict will be between those who would build gods and those who would stop them.

It seems to be happening before our eyes as the incredible pace of scientific discovery leaves our imaginations behind.

We need only flush the toilet to power the artificial mega-mind coming into existence within the next few decades. I am actually not trying to write anything bizarre; it is just this strange planet we are living on.

http://www.sciencedaily.com/releases/2012/08/120813155525.htm

http://www.sciencedaily.com/releases/2012/08/120813123034.htm

Response to the Global Futures 2045 Video

I have just watched this video by Global Futures 2045.

This is my list of things I disagree with:

It starts with scary words about how every crisis comes faster and faster. However, this is untrue. Many countries have been running deficits for decades; the financial crisis was no surprise. The reason the US has such high energy costs goes back to government decisions made in the 1970s. And many things that used to be crises, like the Black Plague, no longer happen. We have big problems, but we also have many resources, built up over the centuries, to help. Many of the challenges we face are political and social, not technical.

We will never fall into a new Dark Ages. The biggest problem is that we aren’t advancing as fast as we could, and many are still starving, sick, etc. However, it has always been this way. The 20th century was very brutal! But we are advancing, and it is mostly known threats like WMDs that could cause a disaster. In the main, the world is getting safer every day as we better understand it.

We aren’t going to build a new human. It is more like a Renaissance. Those who lost limbs will get increasingly better robotic ones, but they will still be humans. The best reason to build a robotic arm is to attach it to a human.

The video had a collectivist and authoritarian perspective when it said:

“The world’s community and leaders should encourage mankind instead of wasting resources on solving momentary problems.”

This sentence needs to be deconstructed:

1. Government acts via force. Government’s job is to maintain civil order, so having it also out there “encouraging” everyone to never waste resources is creepy. Do you want your policeman to also be your nanny? Here is a quote from C.S. Lewis:

“Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron’s cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience.”

2. It is wrong to think government is the solution to our problems. Most of the problems that exist today, like the Greek debt crisis and the US housing crisis, were caused by governments trying to do too much.

3. There is no such thing as the world’s leaders. There is the UN, which doesn’t act in a humanitarian crisis until after everyone is dead. In any case, we don’t need the governments to act. We built Wikipedia.

4. “Managing resources” is a codeword for socialism. If their goal is to help with the development of new technologies, then the task of managing existing resources is totally unrelated. If your job is to build robots, then your job is not also to worry about whether the water and air are dirty. Any scientist who talks about managing resources is actually a politician. Here is a quote from Friedrich Hayek:

“The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design. Before the obvious economic failure of Eastern European socialism, it was widely thought that a centrally planned economy would deliver not only “social justice” but also a more efficient use of economic resources. This notion appears eminently sensible at first glance. But it proves to overlook the fact that the totality of resources that one could employ in such a plan is simply not knowable to anybody, and therefore can hardly be centrally controlled.”

5. We should let individuals decide what to spend their resources on. People don’t only invest in momentary things. People build houses. In fact, if you are looking for an excuse to drink, being poor because you live in a country with 70% taxes is a good one.

The idea of tasking government with finding the solutions, doing all the futuristic research, and shoving new products down our throats is wrong and dangerous. We want individuals, and collections of them (corporations), to do it, because they will best put it to use in ways that actually improve our lives. Everything is voluntary, which encourages good customer relationships, and money is directed toward the products people actually care about instead of what some mastermind bureaucrat thinks we should spend it on.

There are many historical examples of how government doesn’t innovate as well as the private sector: the French telephone system, Cuba, expensive corn-based ethanol, the International Space Station, healthcare. The free market is imperfect, but it leads to the fastest technological and social progress, for the reasons Friedrich Hayek explained. A lot of government research today is wasted because it never gets put to use commercially. There are many things that can be done to make the private sector more vibrant, and many ways government can do a better job; all that evidence should be a warning not to use governments to endorse programs with the goal of social justice. NASA has done great things, but that was only possible because it existed in a modern society.

They come up with a nice list of things humanity can do, but they haven’t noted that one of the most important first steps is more Linux. We aren’t going to get cool and smart robots, etc., without a lot of good free software first.

The video says:

“What we need is not just another technological revolution, but a new civilization paradigm, we need philosophy and ideology, new ethics, new culture, new psychology.”

It minimizes the technology aspect, when it is the hard work of disparate scientists that will bring us the most benefits.

It is true that we need to refine our understanding of many things, but we are not starting over, just evolving. Anyone who thinks we need to start over doesn’t realize what we’ve already built and how many smart people have come before. The basis of good morals from thousands of years ago still applies; it will just be extended to deal with new situations, like cloning. The general rules of math, science, and biology will remain. In many cases, we are going back to the past: the Linux and free software movement is simply returning computer software to the centuries-old tradition of science. Sometimes the idea has already been discovered but isn’t widely used yet; that is a social problem, not a technical one.

The repeated use of words like “new” makes this video feel like propaganda. Cults try to get people to reset their perspective into a new world and convince them that only the cult has the answers. This video comes off as a sales pitch, with them as the solution to our problems, ignoring that it will take millions of people. Their lists of technologies are random: some of these problems we could have solved years ago, some we can’t solve for decades, and they mix both kinds of examples. It seems they do not know what is coming next, given how disorganized they are. They also pick multiple related words and so repeat themselves; repetition is used to create an emotional impact, another trait of propaganda.

The thing about innovation and the future is that it is surprising. Many futurists get things wrong. If these guys really had the answers, they’d have invented it and made money on it. And compared to some of the tasks, we are like cavemen.

Technology evolves in a stepwise fashion, and so looking at it as some clear end results on some day in the future is wrong.

For another example: the video makes it sound like going beyond Earth and then beyond the Solar System is a two-step process, when in fact it is many steps, and the journey is the reward. If they were that smart, they’d endorse the space elevator, which is the only cheap way to get out there, and which we could build in 10 years.

The video suggests that humanity doesn’t have a masterplan, when I just explained that you couldn’t make one.

It also suggests that individuals are afraid of change when, in fact, that is characteristic of governments as well. The government class has known for decades that Social Security is going bankrupt, but they’d rather criticize anyone who wants to reform it than fix the underlying problem. This video again urges collectivism with its criticism of the “mistakes” people make. The video is very arrogant in how it looks down at “the masses”; this is another common characteristic of collectivism.

Here is the first description of their contribution:

“We integrate the latest discoveries and developments from the sciences: physics, energetics, aeronautics, bio-engineering, nanotechnology, neurology, cybernetics, cognitive science.”

That sentence is laughable because it is an impossible task. To understand all of the latest advances would involve talking with millions of scientists. If they are doing all this integration work, what have they produced? They want everyone to join up today, work to be specified later.

The challenge for nuclear power is not the science; it is the lawyers, who outlawed new plants in the 1970s and have basically halted all advancement in building safer and better ones. Some of these challenges are mostly political, not scientific. We need engineers in corporations like GE, supervised by governments, building safer and cleaner nuclear power.

If you wanted to create all of what they offer, you’d have to hire a million different people. If you were building the pyramids, you could get by with most of your workers having one skill, the ability to move heavy things around. However, the topics they list are so big and complicated, I don’t think you could build an organization that could understand it all, let alone build it.

They mention freedom and speak in egalitarian terms, but this is contradicted by their earlier words. In their world, we will all be happy worker bees, working “optimally” for their collective. Beware of masterminds offering to efficiently manage your resources.

I support discussion and debate. I am all for think-tanks and other institutions that hire scientists. However, those that lobby government to act on their behalf are scary. I don’t want every scientist lobbying the government to institute their pet plan, no matter how good it sounds; the government will get so overwhelmed that it won’t be able to do its actual job. The powers of the US Federal government are very limited and generally revolve around an army and a currency. Social welfare is supposed to be handled by the states.

Some of their ideas cannot be turned into laws by the US Congress because it doesn’t have this authority — the States do. Obamacare is likely to be ruled unconstitutional, and their ideas are potentially much more intrusive on individual liberty. They would require a Constitutional Amendment, which would never pass and which we don’t need.

They offer a social network where scientists can plug in and figure out what they need to do. This could be considered an actual, concrete example of something they are working on. However, there are already social networks where people are advancing the future: SourceForge.net is the biggest community of programmers, Github.com hosts 1,000,000 projects, and Sage has a community advancing the state of mathematics.

If they want to create their own new community solving some aspect, that is great, especially if they have money. But the idea that they are going to make it all happen is impossible. And it will never replace all the other great communities that already exist. Even science happens on Facebook, when people chat about their work.

If they want to add value, they need to specialize. Perhaps they will come up with millions of dollars and do research in specific areas. However, their fundamental research would very likely get used, by other people, in ways they never imagined. The more fundamental the research, the less any one team can possibly take advantage of all aspects of the discovery.

They say they have a research lab working on cybernetics. However, they don’t demonstrate any results. I can’t imagine they are that far ahead of the rest of the world, which provides them the technology they use to do their work. Imagine a competitor to Henry Ford: could he really have built a much better car given the available technology at the time? My response to anyone who claims some advancement is: turn it into a demo or a useful product and sell it. All this video offers as evidence is CGI, which any artist can make.

I support the idea of flying cars, but first we need driverless cars and cheaper energy. Unless they are a car or airplane company, I don’t see what this organization will have to do with that task. I have nothing against futuristic videos, but this one doesn’t make clear what their involvement is, and such ambiguity should be noted.

They are wrong when they say we won’t understand consciousness until 2030, because we already understand it at some level today. Neural networks have been around for decades. IBM’s Jeopardy-playing Watson was a good recent example; however, it is proprietary, so not much will come of that particular example. Fortunately, Watson was built on lots of free software, and the community will get there. Google is very proprietary with its AI work. Wolfram Alpha is also proprietary. Etc. We’ve got enough technical people for an amazing world if we can just get them to work together in free software and Python.
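To make the “decades-old” point concrete, here is the classic single-neuron perceptron, the 1950s-era building block that neural networks grew from, in pure Python with no libraries. This is a generic textbook sketch, not code from any system mentioned above:

```python
# Minimal single-neuron perceptron: the decades-old building block of
# neural networks, in pure Python. Toy illustration only.
def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule on 2-input binary samples."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out              # -1, 0, or +1
            w[0] += lr * err * x1           # nudge weights toward target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the logical AND function:
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Twenty lines of free software reproduce the core learning rule; the distance from this to Watson is engineering scale and data, not secret theory, which is the article’s point about the community getting there.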

The video’s last sentence suggests that spiritual self-development is the new possibility. But people can work on that today. And again, enlightenment is not a destination but a journey.

We are a generation away from immortality, unless things greatly change. I think about LibreOffice, cars that drive themselves, and the space elevator, but faster progress in biology is also possible if people will follow the free software model. The Microsoft-style proprietary development model has infected many fields.
