To see and then grasp objects, robots typically use depth-sensing cameras such as the Microsoft Kinect. While such cameras can be thwarted by transparent or shiny objects, scientists at Carnegie Mellon University have developed a workaround.

Depth-sensing cameras function by shining infrared laser beams onto an object, then measuring the time it takes for the light to reflect off the contours of that object and return to sensors on the camera.
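As a rough illustration of the time-of-flight principle (a generic sketch, not CMU's actual pipeline), the distance to a surface follows directly from the round-trip travel time of the light pulse:

```python
# Time-of-flight depth from round-trip travel time: d = c * t / 2.
# A generic illustration of the principle, not Carnegie Mellon's pipeline.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def depth_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface, in meters.

    The pulse travels out to the surface and back, so the one-way
    distance is half the total path.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after ~6.67 nanoseconds implies a surface ~1 m away.
print(depth_from_round_trip(6.67e-9))  # approximately 1.0
```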

While this system works well enough on relatively dull, opaque objects, it has problems with transparent items, which much of the light passes through, and with shiny objects, which scatter the reflected light. That's where the Carnegie Mellon system comes in: it uses a color optical camera that also functions as a depth-sensing camera.

The snake bites its tail

Google AI can independently discover AI methods, then optimize them. It evolves algorithms from scratch, using only basic mathematical operations, rediscovering fundamental ML techniques and showing the potential to discover novel algorithms.

AutoML-Zero: new research that can rediscover fundamental ML techniques by searching a space of different ways of combining basic mathematical operations. arXiv: https://arxiv.org/abs/2003.
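To make the idea concrete, here is a toy sketch in the spirit of AutoML-Zero (a simplified illustration, not the paper's system): candidate programs are short sequences of basic arithmetic instructions over a small register file, and a mutate-and-keep-the-best loop searches for a program that fits a tiny dataset.

```python
# Toy AutoML-Zero-style search: evolve a short program of basic math
# operations that predicts y from x. Illustrative only; the real system
# uses regularized evolution over richer setup/predict/learn programs.
import random

OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}
N_REGS = 4  # r0 holds the input x; the prediction is read from r1

def run(program, x):
    regs = [0.0] * N_REGS
    regs[0] = x
    for op, dst, a, b in program:
        regs[dst] = OPS[op](regs[a], regs[b])
    return regs[1]

def random_instruction():
    # Never overwrite r0, so the input stays available.
    return (random.choice(list(OPS)), random.randrange(1, N_REGS),
            random.randrange(N_REGS), random.randrange(N_REGS))

def mutate(program):
    child = list(program)
    child[random.randrange(len(child))] = random_instruction()
    return child

# Toy task: recover y = x * x + x from a handful of examples.
data = [(x, x * x + x) for x in [-2.0, -1.0, 0.5, 1.0, 2.0, 3.0]]

def loss(program):
    return sum((run(program, x) - y) ** 2 for x, y in data)

best = [random_instruction() for _ in range(4)]
best_loss = loss(best)
for _ in range(20000):
    child = mutate(best)
    child_loss = loss(child)
    if child_loss <= best_loss:  # accept improvements and neutral drift
        best, best_loss = child, child_loss
print(best, best_loss)
```

Even this crude hill-climbing loop often (though not always) stumbles onto a program equivalent to r1 = x*x + x, which is the point: nontrivial algorithms can emerge from random edits to raw math operations.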

Do you agree with these predictions?

The first few months of 2020 have radically reshaped the way we work and how the world gets things done. While robotaxis and self-driving freight trucks are not yet in wide use, the Covid-19 pandemic has accelerated the adoption of artificial intelligence across industries. Whether through outbreak tracing or contactless customer payment interactions, the impact has been immediate, but it also provides a window into what's to come. The second annual Forbes AI 50, which highlights the most promising U.S.-based artificial intelligence companies, features a group of founders who are already pondering what their space will look like in the future, though all agree that Covid-19 has permanently accelerated or altered the spread of AI.

“We have seen two years of digital transformation in the course of the last two months,” Abnormal Security CEO Evan Reiser told Forbes in May. As more parts of a company are forced to move online, Reiser expects to see AI being put to use to help businesses analyze the newly available data or to increase efficiency.

With artificial intelligence becoming ubiquitous in our daily lives, DeepMap CEO James Wu believes people will abandon the common misconception that AI is a threat to humanity. “We will see a shift in public sentiment from ‘AI is dangerous’ to ‘AI makes the world safer,’” he says. “AI will become associated with safety while human contact will become associated with danger.”

Google’s Arts and Culture vertical has been known to release fun apps and tools to help people engage with art and history. In 2018, it launched a feature to let you find your fine art doppelganger by taking a selfie, and more recently it added ways for you to apply filters to your photos to take on the style of masters like Van Gogh or Da Vinci. Now, the company is launching a web-based AI tool to let users interact with ancient Egyptian hieroglyphs and also help researchers decode the symbols with machine learning. It’s called Fabricius, named after the “father of epigraphy, the study of ancient inscriptions,” according to Google, and will let you send roughly translated messages in hieroglyphs to your friends.

Fabricius has three sections: Learn, Play and Work. In the first part, you go through a quick six-stage course that introduces you to the history and study of hieroglyphs. There are activities here that include tracing and drawing a symbol, with machine learning analyzing your drawings to see how accurate you were. For example, my drawing of an Ankh symbol after having seen it for five seconds was determined to be 100 percent correct, while my attempt at a sceptre was deemed 98 percent accurate.
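Google hasn't published how Fabricius scores a drawing, but a simple pixel-overlap comparison against a reference glyph (a hypothetical stand-in for whatever model Google actually uses) gives a feel for how a percentage score could be produced:

```python
# Hypothetical sketch of glyph-drawing similarity scoring; Fabricius's
# actual method is unpublished. This illustrates one simple approach:
# intersection-over-union between the user's stroke bitmap and a
# reference glyph bitmap.
def iou_score(drawing, reference):
    """Percent overlap between two same-sized binary bitmaps (lists of rows)."""
    inter = union = 0
    for d_row, r_row in zip(drawing, reference):
        for d, r in zip(d_row, r_row):
            inter += d and r
            union += d or r
    return 100.0 * inter / union if union else 0.0

# Tiny 3x3 example: a near-complete trace of a reference shape.
ref   = [[1, 1, 1], [0, 1, 0], [0, 1, 0]]
drawn = [[1, 1, 1], [0, 1, 0], [0, 0, 0]]
print(f"{iou_score(drawn, ref):.0f}% accurate")  # 80% accurate
```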

The high energy consumption of artificial neural networks’ learning activities is one of the biggest hurdles for the broad use of Artificial Intelligence (AI), especially in mobile applications. One approach to solving this problem can be gleaned from knowledge about the human brain.

Although the human brain has the computing power of a supercomputer, it needs only 20 watts, a millionth of the energy consumed by a supercomputer.

One of the reasons for this is the efficient transfer of information between neurons in the brain. Neurons send short electrical impulses (spikes) to other neurons, but, to save energy, only as often as absolutely necessary.
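The standard toy model of this behavior is the leaky integrate-and-fire neuron: membrane potential accumulates incoming current, leaks over time, and the neuron emits a spike only when the potential crosses a threshold. A minimal sketch (illustrative only, not the specific model from this research):

```python
# Leaky integrate-and-fire neuron: the classic minimal model of spiking.
# Illustrative sketch, not the model used in the research summarized above.
def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Integrate input current each step; emit a spike (1) on threshold."""
    potential, spikes = 0.0, []
    for current in inputs:
        potential = leak * potential + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# Weak input rarely drives the neuron over threshold, so it stays mostly
# silent, spending energy only when it has something to signal.
print(simulate_lif([0.3, 0.3, 0.3, 0.9, 0.1, 0.6, 0.6]))
# -> [0, 0, 0, 1, 0, 0, 1]
```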

TOKYO (Reuters) — In August, a robot vaguely resembling a kangaroo will begin stacking sandwiches, drinks and ready meals on shelves at a Japanese convenience store in a test its maker, Telexistence, hopes will help trigger a wave of retail automation.

Following that trial, store operator FamilyMart says it plans to use robot workers at 20 stores around Tokyo by 2022. At first, people will operate them remotely — until the machines’ artificial intelligence (AI) can learn to mimic human movements. Rival convenience store chain Lawson is deploying its first robot in September, according to Telexistence.

“It advances the scope and scale of human existence,” the robot maker’s chief executive, Jin Tomioka, said as he explained how its technology lets people sense and experience places other than where they are.