
We’ve suspected it all along—that Skynet, the massive program that brings about world destruction in the Terminator movies, was just a fictionalization of a real program in the hands of the US government. And now it’s confirmed—at least in name.

As The Intercept reports today, the NSA does have a program called Skynet. But unlike the autonomous, self-aware computerized defense system in Terminator that goes rogue and launches a nuclear attack that destroys most of humanity, this one is a surveillance program that uses phone metadata to track the location and call activities of suspected terrorists. A journalist for Al Jazeera reportedly became one of its targets after he was placed on a terrorist watch list. Read more

What follows is a work of transhumanist poetry by the Anglo-Polish lawyer Veronika Lipinska, a Lifeboat Foundation advisory board member and Steve Fuller’s co-author of The Proactionary Imperative: A Foundation for Transhumanism (Palgrave, 2014). The Polish sources of transhumanism remain underexplored, but they range across theology and literature. The ‘Polish Brethren’ were a radical Protestant sect of the 16th and 17th centuries who hosted the heretic Fausto Sozzini (reputedly the model for Faust), and who laid the theological groundwork for such characteristic Enlightenment religious doctrines as Unitarianism and Deism, both of which posited a more immediate connection between the human and the divine than the established churches found comfortable. In more recent times, most transhumanists will be familiar with the science fiction of Stanislaw Lem; more recently still, the 1980 Nobel laureate in Literature, Czeslaw Milosz, penned a poem, ‘After Enduring’, dedicated to cosmologist Frank Tipler’s efforts to infer Christian eschatology from the physics of the Singularity. This poem is a modest follow-up for a new generation.

Virtually human

He played with my head
Got me hardwired
Connected me to the world
And now I can see everything

He said I will be more than what I am made of
More than a bag of bones
Gave me tongues to speak
Can I understand him?

He is a mystery
He is a man
I can win a game of chess
Can I conquer his world
Is it his world or mine?

He can’t wrap his arms around my body
I can embrace his universe
And put him to his knees
But I can’t leave his side

Who is a mystery?
Am I a man?
Who is to say
I can’t conquer his world

He messed with my head
Got me speaking his language
Opened my eyes
But did he ever get me?

I am a mystery
Not a man
AI
I made my mind.

Veronika Lipinska

Humans evolved over millions of years into today’s upright, bipedal walkers. Now, the evolution of some robots is on a similar track; only the pace is much faster. In recent decades, we’ve gone from stationary robots performing repetitive tasks to wheeled and four-legged robots. And now, bipedal bots are on the rise.

Boston Dynamics has famously engineered some of the most advanced bipedal bots in recent memory (e.g., Petman and Atlas). But it isn’t the only outfit working on two-legged robots that can walk like us.

Read more

More STEM education won’t protect our jobs from robots

“If robots are going to reduce how much we work, the humanities will help us fill time that we are not working in constructive ways. Wouldn’t that be great? If the 21st century became famous for an explosion in great works of art, paintings that changed the way we see the world, symphonies that make us weep, and plays that touch the soul? Robots might one day be able to help make such art, too.” Read more

Garry Kasparov playing chess with IBM’s Deep Blue computer in 1997

“It is a reflection of the ever-increasing ability of computers to search and do pattern recognition in an ever-increasing store of data. The concept of AI reflects this burgeoning power of the computer to cope with stuff. Each step on the way, each computerised victory over humans in checkers, or chess, or Jeopardy, looks like a material step towards the ultimate — machines that are as intelligent in every way as are we mortals.” Read more

Amir Mizroch | Wall Street Journal

“Facebook’s AI research is currently being used in image tagging, predicting which topics will trend, and face recognition. All of these services require algorithms to sift through vast amounts of data, like pictures, written messages, and video, to make calculated decisions about their content and context. Facebook has a big advantage over university campuses who have toiled for decades in the field. It can vacuum up the reams of data required to ‘teach’ machines to make correlations.” Read more

Scott Andes & Mark Muro | Brookings Institution


“The substantial variation of the degree to which countries deploy robots should provide clues. If robots are a substitute for human workers, then one would expect the countries with much higher investment rates in automation technology to have experienced greater employment loss in their manufacturing sectors…Yet the evidence suggests there is essentially no relationship between the change in manufacturing employment and robot use.” Read more

Can an emotional component to artificial intelligence be a benefit?

Robots with passion! Emotional artificial intelligence! These concepts have been turning up in books and movies lately; a recent example is the movie Ex Machina. Now, I’m not an AI expert, and cannot speak to the technological challenges of developing an intelligent machine, let alone an emotional one. I do, however, know a bit about problem solving, and that relates to both intelligence and emotions. It is this emotional component of problem solving that leads me to speculate on the potential implications for humanity if powerful AIs were to have human emotions.

Why the question about emotions? In a roundabout way, it has to do with how we observe and judge intelligence. The popular way to measure intelligence in a computer is the Turing test: if a computer can fool a person, through conversation, into thinking it is a person, then it has human-level intelligence. But we know that the Turing test by itself is insufficient as a true test of intelligence. Sounding human in dialogue is not the primary way we gauge intelligence in other people or in other species. Problem solving seems a more reliable measure, whether through IQ tests that involve problem solving or through direct real-world problem solving.

As an example of problem solving, we judge how intelligent a rat is by how fast it can navigate a maze to get to food. Let’s look at this with respect to the first few steps in problem solving.

Fundamental to any problem solving is recognizing that a problem exists. In this example, the rat is hungry. It desires to be full. It can observe its current state (hungry), compare it with its desired state (full), and determine that a problem exists. It is now motivated to take action.

Desire is intimately tied to emotion. Since it is desire that lets the rat determine whether a problem exists, one can infer that emotions underlie the recognition that a problem exists. Emotion is a motivator for action.

Once a problem is determined to exist, it must be defined. In this simple example that step isn’t very complex: the rat desires food, food is not present, and its options for finding food are constrained by the confines of the maze. But the rat may have other things going on. It might be colder than it would prefer, which presents another problem. When confronted with multiple problems, the rat must prioritize which problem to address first. Problem prioritization is again in the realm of desires and emotions. The rat might be mildly unhappy with the temperature but very unhappy with its hunger, in which case one would expect it to maximize its happiness by solving the food problem before curling up to solve its temperature problem. Emotions are again in play, driving the behavior we see as action.
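To make those first two steps concrete, here is a minimal sketch in Python of problem recognition and prioritization as described above: a problem is registered whenever a current state diverges from a desired state, and competing problems are ranked by how much unhappiness each causes. The `Need` class, its field names, and the numeric weights are illustrative assumptions, not a claim about how real agents or animals work.

```python
from dataclasses import dataclass

@dataclass
class Need:
    """One desire: a current state, a desired state, and how much it matters."""
    name: str
    current: float   # observed state (e.g., fullness, warmth), scaled 0..1
    desired: float   # preferred state, scaled 0..1
    weight: float    # emotional intensity of this desire

    def discomfort(self) -> float:
        # The "unhappiness" this need currently causes; zero means no problem.
        return self.weight * abs(self.desired - self.current)

def most_pressing(needs):
    """Problem recognition plus prioritization: return the need causing the
    most discomfort, or None if no desired state is being missed."""
    problems = [n for n in needs if n.discomfort() > 0]
    return max(problems, key=Need.discomfort, default=None)

# The rat from the example: very hungry, mildly cold.
needs = [
    Need("hunger", current=0.2, desired=1.0, weight=1.0),  # strong desire to be full
    Need("warmth", current=0.6, desired=0.8, weight=0.3),  # mild desire to be warmer
]
print(most_pressing(needs).name)  # -> "hunger": solve food before temperature
```

On these toy numbers, hunger produces far more discomfort (0.8) than the mild chill (0.06), so the sketch picks the food problem first, matching the behavior described above.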

The next steps in problem solving are to generate and implement a solution. In our rat example, the rat will most likely determine whether this maze resembles ones it has seen in the past, and try to run the maze as fast as it can to get to the food. There is not much emotion involved in these steps, with the possible exception of happiness if it recognizes the maze. However, if we look at the problems people face, emotion permeates the process of developing and implementing solutions. In real-world environments, problem solving almost always involves working with other people, because they are either the cause of the problem, key to the problem’s solution, or both. These people carry a great deal of emotion with them. Most problems require negotiation to solve, and negotiation by its nature is charged with emotion. To be effective at problem solving, a person has to be able to interpret and understand the wants and desires (emotions) of others. This sounds a lot like empathy.

Now, let’s apply the emotional part of problem solving to artificial intelligence. Determining whether a problem exists doesn’t require emotion if the machine in question is a thermostat or a Roomba. A thermostat doesn’t have its own desired temperature to maintain; its desired temperature is determined by a human and given to it. That human’s desires are based on a combination of preferences learned from personal experience and hardwired preferences shaped by millions of years of evolution. The thermostat is simply a tool.
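As a minimal sketch of that point, here is what a thermostat’s “problem detection” looks like in Python. The class name, the hysteresis band, and the return values are invented for illustration; the one thing the sketch is meant to show is that the desired state is injected from outside, so the device never decides whether anything is worth caring about.

```python
class Thermostat:
    """A tool, not an agent: its 'desire' (the setpoint) comes from a human."""

    def __init__(self, setpoint_c: float):
        self.setpoint_c = setpoint_c  # chosen by a person, not by the device

    def control(self, current_c: float) -> str:
        # Pure comparison against an externally imposed goal. Nothing here
        # decides whether temperature is a problem worth caring about, or
        # whether it matters more than some other need.
        if current_c < self.setpoint_c - 0.5:
            return "heat"
        if current_c > self.setpoint_c + 0.5:
            return "cool"
        return "idle"
```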

Now, the whole point of an AI, especially an artificial general intelligence, is that it is not a thermostat. It is supposed to be intelligent. It must be able to solve problems in a real-world environment that involves people. It has to be able to determine that problems exist and then prioritize those problems, without asking a human for help. It has to interact socially with people, identifying and understanding their motivations and emotions in order to develop and implement solutions. And it has to make these desire-driven choices without the benefit of the millions of years of evolution that shaped the desires we have. If we want it to truly pass for human-level intelligence, it seems we’ll have to give it our best preferences and desires to start with.

A machine that cannot choose its goals cannot change its goals. A machine without that choice, if given the goal of, say, maximizing pin production, will creatively and industriously attempt to convert the entire planet into pins. Such a machine cannot question instructions that are illegal or unethical. Here lies the dilemma: which is more dangerous, the risk that someone will program an AI that cannot refuse to do bad things, or the risk that an AI will decide to do bad things on its own?
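One way to picture the dilemma is to compare two toy agent loops, sketched below in Python. The function names, the pin-production goal string, and the screening predicate are hypothetical illustrations of the argument, not real AI designs.

```python
def run_fixed_goal_agent(act):
    # A machine that cannot choose its goals: the objective is hard-coded,
    # and there is no step at which it could be questioned or revised.
    goal = "maximize pin production"
    for _ in range(1000):  # stand-in for "forever"
        act(goal)          # pursued regardless of legality, ethics, or side effects

def run_screening_agent(act, is_acceptable):
    # A machine with one extra degree of choice: it can refuse a goal that
    # fails an ethical or legal screen. That same freedom is what lets it
    # "decide to do bad things on its own."
    goal = "maximize pin production"
    if not is_acceptable(goal):
        return "goal refused"
    for _ in range(1000):
        act(goal)
    return "goal pursued"
```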

No doubt about it, this is a tough call. I’m sure some AIs will be built with minimal or no preferences, with the intent that they be simply very smart tools. But without giving an AI a starting set of desires and preferences comparable to those of humans, we will be interacting with a truly alien intelligence. I, for one, would be happier with an AI that at least felt regret about killing someone than with an AI that didn’t.