
In his final speech as US president, Barack Obama warned of the “relentless pace of automation that makes a lot of good, middle-class jobs obsolete.” Bill Gates, co-founder of Microsoft, has said that governments will need to tax robots to replace forgone revenue when human workers lose their jobs.

If the past is prologue, these concerns are warranted.

In a recent study (pdf), economists Daron Acemoglu of MIT and Pascual Restrepo of Boston University try to quantify how worried we should be about robots. They examine the impact of industrial automation on the US labor market from 1990 to 2007 and conclude that each additional robot reduced employment in a given commuting area by 3–6 workers, and lowered overall wages by 0.25–0.5%.
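As a rough back-of-envelope illustration of what the employment estimate implies, consider the sketch below. The robot count is invented for the example; only the 3–6 workers-per-robot range comes from the study as summarized here.

```python
# Back-of-envelope illustration of the employment estimate quoted above.
# The number of robots added is hypothetical; only the 3-6 workers-per-robot
# range comes from the Acemoglu & Restrepo study as summarized in this post.

JOBS_LOST_PER_ROBOT = (3, 6)  # estimated range: workers displaced per robot


def implied_job_losses(additional_robots):
    """Return the (low, high) range of jobs the estimates imply are lost
    when a commuting zone adds the given number of industrial robots."""
    low, high = JOBS_LOST_PER_ROBOT
    return additional_robots * low, additional_robots * high


if __name__ == "__main__":
    robots = 200  # hypothetical commuting zone adding 200 robots
    lo, hi = implied_job_losses(robots)
    print(f"Adding {robots} robots implies roughly {lo}-{hi} fewer jobs.")
```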

Read more

The first of my major #Libertarian policy articles for my California gubernatorial run, which broadens the foundational “non-aggression principle” to cover so-called negative natural phenomena. “In my opinion, and to most #transhumanist libertarians, death and aging are enemies of the people and of liberty (perhaps the greatest ones), similar to foreign invaders running up our shores.” A coordinated defense against them is philosophically warranted.


Many societies and social movements operate under a foundational philosophy that can often be summed up in a few words. Most famous in much of the Western world is the Golden Rule: Do unto others as you would have them do unto you. In libertarianism, the backbone of the political philosophy is the non-aggression principle (NAP), which holds that it is immoral for anyone to use force against another person or their property except in self-defense.

A challenge has recently been posed to the non-aggression principle. The thorny question libertarian transhumanists are increasingly asking in the 21st century is: Are so-called natural acts or occurrences immoral if they cause people to suffer? After all, taken to a logical philosophical extreme, cancer, aging, and giant asteroids arbitrarily crashing into the planet are all aggressive, forceful acts that harm the lives of humans.

Traditional libertarians set these issues aside, arguing that natural phenomena cannot be morally aggressive. This view is shared by most people in Western culture, many of whom are religious and fundamentally believe that only God is aware of and in total control of the universe. However, transhumanists, many of whom are secular like myself, don't care about religious metaphysics or whether the universe is moral. (It might be, with or without an almighty God.) What transhumanists really care about are ways to help our parents age less, to make sure our kids don't die from leukemia, and to save the thousands of species that vanish from Earth every year due to rising temperatures and other human-induced forces.

As part of IBM's annual InterConnect conference in Las Vegas, the company is announcing a new machine learning course in partnership with workspace and education provider Galvanize to familiarize students with IBM's suite of Watson APIs. These APIs simplify the process of building tools that rely on language, speech, and vision analysis.
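The Watson services are exposed as REST endpoints on Bluemix. Below is a minimal sketch of what a language-analysis call might look like in Python; the endpoint URL, version date, and credential handling are assumptions for illustration, not taken from IBM's documentation.

```python
# Illustrative sketch only: the endpoint URL, version date, and credentials
# below are placeholders and may not match IBM's current Watson API.
import requests

WATSON_NLU_URL = (
    "https://gateway.watsonplatform.net/"
    "natural-language-understanding/api/v1/analyze"  # assumed endpoint
)


def analyze_sentiment(text, username, password):
    """Send a piece of text to a Watson-style language API and return
    whatever JSON the service answers with."""
    response = requests.post(
        WATSON_NLU_URL,
        params={"version": "2017-02-27"},   # assumed version date
        auth=(username, password),          # Bluemix service credentials
        json={"text": text, "features": {"sentiment": {}}},
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    result = analyze_sentiment(
        "Watson makes language analysis much easier.",
        username="YOUR_SERVICE_USERNAME",
        password="YOUR_SERVICE_PASSWORD",
    )
    print(result)
```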

Going by the admittedly clunky name IBM Cognitive Course, the class will spend four weeks teaching the basics of machine learning and Watson’s capabilities. Students will be able to take the class directly within IBM’s Bluemix cloud platform.

“Not everyone knows what to do with a Watson API,” says Bryson Koehler, CTO of IBM Watson & IBM Cloud. “The group of engineers that are experts is really small.”

Read more

In the popular TV show Sherlock, visual depictions of our hero’s deductive reasoning often look like machine algorithms. And probably not by accident, given that this version of Conan Doyle’s detective processes tremendous amounts of observed data—the sort of minutiae that the average person tends to pass over or forget—more like a computer than a human.

Sherlock's intelligence is both a strength and a limitation. His way of thinking is often bounded by an inability to intuitively understand social and emotional contexts. The show's central premise is that Sherlock Holmes needs his friend John Watson to help him synthesize empirical data into human truth.

In Sherlock we see an analog for modern AI: highly capable learning machines that can achieve metacognitive results with the assistance of fully cognitive human partners. Machine intelligence does not by its nature make human intelligence obsolete. Quite the opposite, really: machines need human guidance.

Read more

In this video series, the Galactic Public Archives takes bite-sized looks at a variety of terms, technologies, and ideas that are likely to be prominent in the future.

In this entry, we take a look at the rapidly developing technology of brain-to-machine interfaces.

Interesting links for more information:

http://www.medgadget.com/2017/02/non-invasive-brain-computer…dhary.html