
Processors That Work Like Brains Will Accelerate Artificial Intelligence

Weekend Reads: Even tiny fly brains can do many things computers can’t. This 2014 feature showed why making machines much smarter might require processors that more closely mimic brains.

____________________________________________

This weekend we revisit stories from MIT Technology Review’s archives that weigh the question of how far AI can go—and when.

Read more

Beyond SpaceX: 10 space companies to watch in 2016 & 2017

While development is happening everywhere, these companies are the next big things to shoot past the stratosphere.

While most end-of-the-year, turn-of-the-calendar roundups focus on the year that was or the year ahead, the space industry is different: developments are planned much further in advance, so some of the news that gets companies on this list isn’t scheduled to happen until 2017. The industry is small compared to cloud computing or cybersecurity, for example, but its rate of growth is tremendous. Spacetech also enjoys a cultural solidarity born of its tightly knit history of cooperation and the still-limited number of private companies that can facilitate space flight.

Read more

Nvidia announces a ‘supercomputer’ GPU and deep-learning platform for self-driving cars

Nvidia took pretty much everyone by surprise when it announced it was getting into self-driving cars; it’s just not what you expect from a company that made its name selling graphics cards to gamers.

At this year’s CES, it’s taking the focus on autonomous cars even further.

The company today announced the Nvidia Drive PX2. According to CEO Jen-Hsun Huang, it’s basically a supercomputer for your car. Hardware-wise, it’s made up of 12 CPU cores and four GPUs, all liquid-cooled. That amounts to about 8 teraflops of processing power, which Huang said is as powerful as six Titan X graphics cards, or “about 150 MacBook Pros,” for self-driving workloads.
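To get a feel for what 8 teraflops means in practice, here is a back-of-the-envelope sketch. The 8 TFLOPS figure is from the article; the matrix size and the standard 2n³ FLOP-count formula for a matrix multiply are illustrative assumptions, not Nvidia’s numbers.

```python
# Rough feel for "8 teraflops": how long would one large matrix
# multiply take at theoretical peak throughput?

DRIVE_PX2_FLOPS = 8e12  # ~8 teraflops, per the article


def matmul_flops(n: int) -> float:
    """Approximate floating-point operations in an n x n matrix multiply."""
    return 2.0 * n ** 3  # n^3 multiplies plus n^3 additions


def seconds_at_peak(n: int, flops: float = DRIVE_PX2_FLOPS) -> float:
    """Time for one n x n matmul at peak throughput (no overheads)."""
    return matmul_flops(n) / flops


# A 4096 x 4096 multiply is ~1.4e11 operations -- about 17 ms at peak.
print(f"{seconds_at_peak(4096) * 1000:.1f} ms")  # → 17.2 ms
```

Real workloads never hit theoretical peak, but the exercise shows why this class of hardware makes dozens of large neural-network layers per camera frame plausible.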

Read more

Computer model matches humans at predicting how objects move

We humans take for granted our remarkable ability to predict things that happen around us. For example, consider Rube Goldberg machines: one of the reasons we enjoy them is that we can watch a chain reaction of objects falling, rolling, sliding, and colliding, and anticipate what happens next.

But how do we do it? How do we effortlessly absorb enough information from the world to be able to react to our surroundings in real-time? And, as a computer scientist might then wonder, is this something that we can teach machines?

That last question has recently been partially answered by researchers at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), who have developed a computational model that is just as accurate as humans at predicting how objects move.
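The CSAIL work pairs vision with a physics engine; the toy sketch below only illustrates the core idea of predicting what happens next by simulating forward in time. The gravity constant, timestep, and the bouncing-free projectile scenario are illustrative assumptions, not the researchers’ model.

```python
# Predict where a launched ball lands by stepping a simple physics
# simulation forward (semi-implicit Euler integration).

G = 9.81    # gravitational acceleration, m/s^2
DT = 0.001  # simulation timestep, s


def predict_landing(x0: float, y0: float, vx: float, vy: float) -> float:
    """Simulate a ball under gravity; return the x position where y hits 0."""
    x, y = x0, y0
    while y > 0:
        x += vx * DT   # horizontal velocity is constant
        vy -= G * DT   # gravity accelerates the ball downward
        y += vy * DT
    return x


# Launch from 1 m up at 3 m/s horizontally: lands ~1.35 m away.
print(round(predict_landing(0.0, 1.0, 3.0, 0.0), 2))
```

The interesting part of the research is the inverse problem: inferring the simulation’s parameters (mass, friction, velocity) from raw video, so that this kind of forward rollout becomes possible from perception alone.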

Read more

Deep Learning in Action | How to learn an algorithm

Deep Learning in Action | A talk given by Juergen Schmidhuber, PhD, at the Deep Learning in Action talk series in October 2015. He is a professor of computer science at the Dalle Molle Institute for Artificial Intelligence Research, part of the University of Applied Sciences and Arts of Southern Switzerland.

Juergen Schmidhuber, PhD | I review three decades of our research on both gradient-based and more general problem solvers that search the space of algorithms running on general-purpose computers with internal memory.

Architectures include traditional computers, Turing machines, recurrent neural networks, fast weight networks, stack machines, and others. Some of our algorithm searchers are based on algorithmic information theory and are optimal in asymptotic or other senses.
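To give a toy flavor of what “searching the space of algorithms” means, the sketch below enumerates short programs built from primitive operations and keeps the first one consistent with all input/output examples. The primitives and target function are illustrative; real algorithm search in this line of work (e.g., Levin search, OOPS) is far more sophisticated and comes with optimality guarantees.

```python
# Brute-force program search: find the shortest sequence of primitive
# operations that reproduces the given input/output examples.
from itertools import product

PRIMITIVES = {
    "inc": lambda x: x + 1,   # add one
    "dbl": lambda x: x * 2,   # double
    "sq":  lambda x: x * x,   # square
}


def run(program, x):
    """Apply each operation in the program to x, in order."""
    for op in program:
        x = PRIMITIVES[op](x)
    return x


def search(examples, max_len=3):
    """Return the shortest op sequence consistent with every example."""
    for length in range(1, max_len + 1):
        for program in product(PRIMITIVES, repeat=length):
            if all(run(program, x) == y for x, y in examples):
                return program
    return None


# Recover a program computing f(x) = 2x + 1 from examples alone.
print(search([(1, 3), (2, 5), (10, 21)]))  # → ('dbl', 'inc')
```

Enumeration like this explodes exponentially with program length; the cited research is largely about ordering and pruning that search so it becomes tractable, or even asymptotically optimal.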

Read more

Researchers say retrieving information from a black hole might be possible

Interstellar is one of the best sci-fi movies of the last decade, imagining a post-apocalyptic human population that must be saved from a dying Earth. A nearby black hole holds the answers to humanity’s problems, and the script has its hero enter the black hole and use it to transcend space and time. In the film, the black hole also leaks information that can save us, captured by a complex construct as the hole is entered. That might seem implausible, but since we know so little about how black holes work, we can accept such an outlandish proposition in the context of the movie.

In real life, however, physicists are trying to figure out how to access the secrets of a black hole. And it looks like some researchers have a theory to retrieve information from it, though it’s not quite as exciting as the complex bookcase that Interstellar proposes.


Black holes have an immense gravitational pull that affects everything around them, which makes data collection a major issue. Not even light can escape a black hole, and we’re far from figuring out how to reach one and “see” inside it.

Read more

Artificial Intelligence Finally Entered Our Everyday World

Andrew Ng hands me a tiny device that wraps around my ear and connects to a smartphone via a small cable. It looks like a throwback—a smartphone earpiece without a Bluetooth connection. But it’s really a glimpse of the future. In a way, this tiny device allows the blind to see.

Ng is the chief scientist at Chinese tech giant Baidu, and this is one of the company’s latest prototypes. It’s called DuLight. The device contains a tiny camera that captures whatever is in front of you—a person’s face, a street sign, a package of food—and sends the images to an app on your smartphone. The app analyzes the images, determines what they depict, and generates an audio description that’s heard through your earpiece. If you can’t see, you can at least get an idea of what’s in front of you.
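The capture-analyze-speak loop the article describes can be sketched in a few lines. Every function here is a hypothetical stand-in—Baidu’s actual DuLight models and APIs are not public in this form—but the data flow matches the description above.

```python
# Hypothetical sketch of a DuLight-style pipeline:
# camera frame -> image recognition -> spoken description.

def capture_frame(camera):
    """Stand-in for reading one image from the earpiece camera."""
    return camera["frame"]


def describe_image(image):
    """Stand-in for the recognition model; returns a text description."""
    labels = image.get("labels", [])
    if not labels:
        return "Nothing recognized"
    return "I see " + " and ".join(labels)


def speak(text):
    """Stand-in for text-to-speech routed to the earpiece."""
    print(text)


def dulight_step(camera):
    """One capture-analyze-speak cycle."""
    description = describe_image(capture_frame(camera))
    speak(description)
    return description


# Simulated camera pointed at a street crossing:
dulight_step({"frame": {"labels": ["a street sign", "a crosswalk"]}})
```

The hard engineering lives inside `describe_image`: turning raw pixels into reliable labels is exactly the deep-learning problem the rest of this roundup keeps circling.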

Artificial intelligence is changing not only the way we use our computers and smartphones but the way we interact with the real world.

Read more
