
A team of researchers at UC Berkeley has found a way to get a robot to mimic an activity it sees on a video screen just a single time. In a paper they have uploaded to the arXiv preprint server, the team describes the approach they used and how it works.

Robots that learn to do things simply by watching a human carry out an action a single time would be capable of learning many more new actions much more quickly than is now possible. Scientists have been working hard to figure out how to make it happen.

Historically, though, robots have been programmed to perform actions like picking up an object via code that expressly lays out what needs to be done and how. That is how most robots that do things like assemble cars in a factory work. Such robots must still undergo a training process in which they are led through procedures multiple times until they can perform them without making mistakes. More recently, robots have been programmed to learn purely through observation, much like humans and other animals do. But such imitative learning typically requires thousands of observations. In this new effort, the researchers describe a technique they have developed that allows a robot to perform a desired action after watching a human being do it just a single time.
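The contrast between the two programming styles can be sketched in a few lines. The following is a minimal, illustrative nearest-neighbor behavioral-cloning toy, not the Berkeley team's actual method (their paper builds on meta-learning over many prior tasks); the trajectory, states, and action names are invented for the example.

```python
# Minimal sketch of imitation from a single demonstration (illustrative only;
# the Berkeley work uses meta-learning over prior tasks, which is not shown).
# A "demonstration" is one observed trajectory of (state, action) pairs; the
# cloned policy acts by copying the action of the nearest demonstrated state.

def clone_policy(demonstration):
    """Return a policy that mimics the single demonstrated trajectory."""
    def policy(state):
        # pick the demonstrated state closest to the current one (1-D here)
        nearest_state, action = min(
            demonstration, key=lambda pair: abs(pair[0] - state)
        )
        return action
    return policy

# one watched trajectory: state -> action (hypothetical values)
demo = [(0.0, "reach"), (0.5, "grasp"), (1.0, "lift")]
policy = clone_policy(demo)
print(policy(0.6))  # nearest demonstrated state is 0.5 -> "grasp"
```

The point of the sketch is the data requirement: a hand-coded robot needs the full procedure spelled out, while an imitation learner only needs observed (state, action) pairs, and the research question is how few such pairs can suffice.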


The 90-pound mechanical beast — about the size of a full-grown Labrador — is intentionally designed to do all this without relying on cameras or any external environmental sensors. Instead, it nimbly “feels” its way through its surroundings in a way that engineers describe as “blind locomotion,” much like making one’s way across a pitch-black room.

“There are many unexpected behaviors the robot should be able to handle without relying too much on vision,” says the robot’s designer, Sangbae Kim, associate professor of mechanical engineering at MIT. “Vision can be noisy, slightly inaccurate, and sometimes not available, and if you rely too much on vision, your robot has to be very accurate in position and eventually will be slow. So we want the robot to rely more on tactile information. That way, it can handle unexpected obstacles while moving fast.”
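The "blind locomotion" idea, reacting to touch rather than sight, can be illustrated with a toy contact-detection rule. This is a hedged sketch, not the Cheetah 3 controller: the threshold value and function names are assumptions, and the real system fuses joint torques and leg kinematics rather than a single force number.

```python
# Hedged sketch of "blind" contact handling: instead of cameras, the
# controller infers foot touchdown from proprioception alone. Here a foot
# counts as "in contact" when its estimated ground-reaction force crosses a
# threshold; the threshold below is an assumed value for illustration.

CONTACT_FORCE_THRESHOLD_N = 30.0  # assumed, not a published Cheetah 3 figure

def detect_contact(estimated_force_n):
    """Return True if the leg is judged to be touching the ground."""
    return estimated_force_n >= CONTACT_FORCE_THRESHOLD_N

def adjust_step(estimated_force_n, planned_phase):
    # If the foot hits something early (e.g. an unseen stair edge), abandon
    # the planned swing and switch that leg to stance instead of pushing
    # through the obstacle -- the tactile analogue of seeing it coming.
    return "stance" if detect_contact(estimated_force_n) else planned_phase

print(adjust_step(45.0, "swing"))  # unexpected early contact -> "stance"
print(adjust_step(5.0, "swing"))   # no contact -> keep "swing"
```

The design point is that touch-driven reflexes like this remain available when vision is noisy, delayed, or absent, which is exactly the situation Kim describes.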

Researchers will present the robot’s vision-free capabilities in October at the International Conference on Intelligent Robots and Systems, in Madrid. In addition to blind locomotion, the team will demonstrate the robot’s improved hardware, including an expanded range of motion compared to its predecessor, Cheetah 2, which allows the robot to stretch backwards and forwards and twist from side to side, much like a cat limbering up to pounce.


Marvin Minsky was one of the founding fathers of artificial intelligence and co-founder of the Massachusetts Institute of Technology’s AI laboratory.


Abstract for scientists

The Neuro cluster Brain Model analyses processes in the brain from the point of view of computer science. The brain is a massively parallel computing machine, which means that different areas of the brain process information independently of each other. The Neuro cluster Brain Model shows how independent, massively parallel information processing explains the underlying mechanism of previously unexplained phenomena such as sleepwalking, dissociative identity disorder (a.k.a. multiple personality disorder), hypnosis, etc.

Bottom-of-the-barrel white-collar jobs will probably all be automated by 2025.


When Google introduced Google Duplex, its AI assistant designed to speak like a human, the company showed off how the average person could use the tech to save time making reservations and whatnot. What wasn’t touched on was the possibility that Duplex may have a use on the other side of the line, taking over for call center employees and telemarketers.

A report from The Information suggests Google may be making a play to find other applications for its human-sounding assistant and has already started experimenting with ways to use Duplex to do away with roles currently filled by humans—a move that could have ramifications for millions of people.


German automaker Daimler is the first foreign company licensed to test its autonomous vehicles in Beijing.


July 6 (UPI) — German automaker Daimler is the first foreign company licensed to test its autonomous vehicles in Beijing, the company announced on Friday.

With the certification, the maker of Mercedes-Benz vehicles can begin road tests of self-driving cars in Beijing, “a metropolis with unique and complex urban traffic situations,” a company statement said.

Daimler has similar licenses in Germany and the United States and has had a research facility in China since 2005.

A team of Japanese researchers from Waseda University, Osaka University, and Shizuoka University designed and successfully developed a high-power, silicon-nanowire thermoelectric generator which, at a thermal difference of only 5 degrees C, could drive various IoT devices autonomously in the near future.

Objects in our daily lives, such as speakers, refrigerators, and even cars, are becoming “smarter” day by day as they connect to the internet and exchange data, creating the Internet of Things (IoT), a network among the objects themselves. Toward an IoT-based society, a miniaturized energy harvester is anticipated to charge these objects, especially those that are portable and wearable.

Owing to advantages such as their relatively low thermal conductance but high electrical conductance, silicon nanowires have emerged as a promising thermoelectric material. Silicon-based thermoelectric generators have conventionally employed long silicon nanowires of about 10–100 nanometers, which were suspended over a cavity to cut off the bypass of the heat current and secure the temperature difference across the silicon nanowires. However, the cavity structure weakened the mechanical strength of the devices and increased the fabrication cost.
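To get a feel for what a 5-degree temperature difference can deliver, a back-of-the-envelope thermoelectric estimate helps. The Seebeck coefficient and internal resistance below are assumed round numbers for illustration, not the Waseda/Osaka/Shizuoka device's measured values; only the 5 °C difference comes from the text.

```python
# Back-of-the-envelope thermoelectric estimate (illustrative numbers, not the
# paper's measured values). Open-circuit voltage is V = S * dT, and with a
# matched load the generator delivers P = V**2 / (4 * R_internal).

S_V_PER_K = 1.0e-3     # assumed effective Seebeck coefficient, 1 mV/K
DT_K = 5.0             # the ~5 degree C temperature difference from the text
R_INTERNAL_OHM = 10.0  # assumed internal resistance of the generator

v_open = S_V_PER_K * DT_K                      # 5 mV open-circuit voltage
p_matched = v_open**2 / (4 * R_INTERNAL_OHM)   # power into a matched load
print(f"{v_open * 1e3:.1f} mV, {p_matched * 1e6:.3f} uW")
```

Even under these toy assumptions the output lands in the sub-microwatt to microwatt range, which is why such generators target intermittently active, ultra-low-power IoT sensors rather than continuously powered electronics.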


While facial recognition performs well in controlled environments (like photos taken at borders), such systems struggle to identify faces in the wild. According to data released under the UK’s Freedom of Information laws, the Metropolitan Police’s AFR system has a 98 percent false positive rate — meaning that 98 percent of the “matches” it makes are of innocent people.


The head of London’s Metropolitan Police force has defended the organization’s ongoing trials of automated facial recognition systems, despite legal challenges and criticisms that the technology is “almost entirely inaccurate.”

According to a report from The Register, UK Metropolitan Police commissioner Cressida Dick said on Wednesday that she did not expect the technology to lead to “lots of arrests,” but argued that the public “expect[s]” law enforcement to test such cutting-edge systems.
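The 98 percent figure is less paradoxical than it sounds: when genuine targets are rare in a scanned crowd, even a small per-face false-alarm rate means most matches are false. The crowd size, target count, and rates below are invented for illustration, not the Met's actual operating numbers.

```python
# Why most matches can be innocent people even when the system is "accurate"
# per comparison: a base-rate effect. All numbers here are illustrative
# assumptions, not the Metropolitan Police's real figures.

def share_of_matches_innocent(crowd, targets, tpr, fpr):
    """Fraction of flagged matches that are false, given a true-positive
    rate (tpr) on targets and a false-positive rate (fpr) on everyone else."""
    true_matches = targets * tpr
    false_matches = (crowd - targets) * fpr
    return false_matches / (true_matches + false_matches)

# 100,000 faces scanned, 10 genuine targets, 90% hit rate, 0.5% false alarms
share = share_of_matches_innocent(100_000, 10, 0.90, 0.005)
print(f"{share:.1%}")  # roughly 98% of matches are innocent
```

In other words, a headline figure like this reflects the rarity of targets in the scanned population as much as the classifier itself, which is why per-comparison accuracy and real-world match quality diverge so sharply.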


As we develop robots with increasingly human-like capabilities, we should take a closer look at our own. Only by learning to overcome – or at least evade – our cognitive limitations can we have long and fruitful careers in the new global economy.


___

The Cognitive Limits of Lifelong Learning (Project Syndicate):

“As new technologies continue to upend industries and take over tasks once performed by humans, workers worldwide fear for their futures. But what will really prevent humans from competing effectively in the labor market is not the robots themselves, but rather our own minds, with all their psychological biases and cognitive limitations …