
Google has smartened up several of its products with a type of artificial intelligence called deep learning, which involves training neural networks on lots of data and then having them make predictions about new data. Google Maps, Google Photos, and Gmail, for example, have been enhanced with this type of technology. The next service that could see gains is Google Translate.

Well, let me back up. Part of Google Translate actually already uses deep learning. That would be the instant visual translations you can get on a mobile device when you hold up your smartphone camera to the words you want to translate. But if you use Google Translate to just translate text, you know that the service isn’t always 100 percent accurate.

In an interview at the Structure Data conference in San Francisco today, Jeff Dean, a Google senior fellow who worked on some of Google’s core search and advertising technology and is now the head of the Google Brain team that works on deep learning, said that his team has been working with Google’s translation team to scale out experiments with translation based on deep learning. Specifically, the work is based on the technology depicted in a 2014 paper entitled “Sequence to Sequence Learning with Neural Networks.”
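The core idea in that paper is an encoder-decoder pair: one recurrent network folds the source sentence into a fixed-size vector, and a second network emits the target sentence from that vector. The toy sketch below illustrates only the forward pass of that idea; the vocabulary size, hidden size, and weights are invented and untrained, whereas a real system learns all of these end-to-end on parallel text.

```python
import numpy as np

# Minimal sketch of the sequence-to-sequence idea: an encoder RNN
# compresses a source token sequence into one state vector, and a
# decoder RNN greedily emits target tokens conditioned on it.
# All sizes and weights here are made up for illustration.

rng = np.random.default_rng(0)
VOCAB, HIDDEN = 10, 8  # toy vocabulary and hidden-state sizes

E = rng.normal(size=(VOCAB, HIDDEN))          # token embeddings
W_enc = rng.normal(size=(HIDDEN, HIDDEN)) * 0.1
W_dec = rng.normal(size=(HIDDEN, HIDDEN)) * 0.1
W_out = rng.normal(size=(HIDDEN, VOCAB)) * 0.1

def encode(src_tokens):
    """Fold the whole source sequence into one fixed-size vector."""
    h = np.zeros(HIDDEN)
    for t in src_tokens:
        h = np.tanh(E[t] + W_enc @ h)
    return h

def decode(h, max_len=5):
    """Greedily emit target token ids from the encoder state."""
    out = []
    tok = 0  # assume token 0 is a start-of-sequence marker
    for _ in range(max_len):
        h = np.tanh(E[tok] + W_dec @ h)
        tok = int(np.argmax(h @ W_out))
        out.append(tok)
    return out

state = encode([3, 1, 4, 1, 5])
translation = decode(state)
print(translation)  # five token ids from the toy vocabulary
```

With random weights the output tokens are meaningless; the point is only the shape of the computation, source sequence in, fixed vector, target sequence out.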

Read more

Google, AI, and Quantum: Google believes deep learning is not well suited to quantum computing. I'm not so sure I agree with this position, because deep learning is, in principle, "a series of complex algorithms that attempt to model high-level abstractions in data by using multiple processing layers with complex structures," and the beauty of quantum computing is its performance in processing vast sets of information and complex algorithms. Maybe they meant that, at this point, they have not resolved that piece for AI.


Artificial intelligence is one of the hottest subjects these days, and recent advances in technology are bringing AI closer to reality than most of us imagine.

The subject really gained traction last year, when Stephen Hawking, Elon Musk, and more than 1,000 AI and robotics researchers signed an open letter warning against the use of AI in weapons development. The following month, BAE Systems unveiled Taranis, the most advanced autonomous UAV ever created; there are currently 40 countries working on deploying AI in weapons development.

Those in the defense industry are not the only ones engaging in an arms race to create advanced AI. Tech giants Facebook, Google, Microsoft, and IBM are all pursuing various AI initiatives, as well as competing to develop digital personal assistants like Facebook's M, Microsoft's Cortana, and Apple's Siri.

Read more

As achievements go, learning how to pick up objects doesn’t sound quite as impressive as twice beating the world Go champion – it is, after all, something the average toddler can do. But it’s the fact that the robots themselves figured out the best way to do it using neural networks that makes this notable.

A recent Google report spotted by TNW explains how the company let robot arms pick up a variety of objects, using neural networks to learn by trial and error the best way to handle each. Some 800,000 attempts later, the robots seemed to have figured it out pretty well …
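Google's actual system learned from camera images with deep convolutional networks, but the trial-and-error feedback loop itself can be shown with something much simpler. The sketch below replaces the vision network with an epsilon-greedy choice over a few discrete grasp angles and an invented per-angle success probability; everything here is an assumption for illustration, not Google's method.

```python
import random

# Toy illustration of learning to grasp by trial and error:
# try an action, observe success or failure, and gradually favor
# the action with the best empirical success rate.

random.seed(42)
ANGLES = [0, 45, 90, 135]  # candidate grasp angles (degrees), made up
TRUE_SUCCESS = {0: 0.2, 45: 0.8, 90: 0.5, 135: 0.3}  # hidden, assumed

counts = {a: 0 for a in ANGLES}
successes = {a: 0 for a in ANGLES}

def rate(a):
    """Empirical success rate for angle a (0 if never tried)."""
    return successes[a] / counts[a] if counts[a] else 0.0

def pick_angle(epsilon=0.1):
    """Mostly exploit the best-known angle, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(ANGLES)
    return max(ANGLES, key=rate)

for _ in range(10_000):  # many attempts, echoing the 800,000 grasps
    a = pick_angle()
    counts[a] += 1
    if random.random() < TRUE_SUCCESS[a]:
        successes[a] += 1

best = max(ANGLES, key=rate)
print(best)  # converges on the angle with the highest success rate
```

The robots' version has a vastly larger action space and raw pixels as input, but the loop is the same: act, observe the outcome, and shift future choices toward what worked.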

Read more

The government's other big next-gen program, the Advanced Research Projects Agency-Energy (ARPA-E), is funding personal climate-change solutions with robots, foot coolers, and more. One fact is certain: the US government does love its acronyms.


Why heat or cool a whole building when you could heat or cool individual people instead?

Read more

Allen Institute working with Baylor on reconstructing neuronal connections.


The Intelligence Advanced Research Projects Activity (IARPA) has awarded an $18.7 million contract to the Allen Institute for Brain Science, as part of a larger project with Baylor College of Medicine and Princeton University, to create the largest ever roadmap to understand how the function of networks in the brain’s cortex relates to the underlying connections of its individual neurons.

The project is part of the Machine Intelligence from Cortical Networks (MICrONS) program, which seeks to revolutionize machine learning by reverse-engineering the algorithms of the brain.

“This effort will be the first time that we can physically look at more than a thousand connections between neurons in a single cortical network and understand how those connections might allow the network to perform functions, like process visual information or store memories,” says R. Clay Reid, Ph.D., Senior Investigator at the Allen Institute for Brain Science, Principal Investigator on the project.

Read more

Another data scientist with the pragmatic thinking that is badly needed today. Keeping it real with Una-May O’Reilly.


Mumbai: Una-May O’Reilly, principal research scientist at Anyscale Learning For All (ALFA) group at the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory, has expertise in scalable machine learning, evolutionary algorithms, and frameworks for large-scale, automated knowledge mining, prediction and analytics. O’Reilly is one of the keynote speakers at the two-day EmTech India 2016 event, to be held in New Delhi on 18 March.

In an email interview, she spoke, among other things, about how machine learning underpins data-driven artificial intelligence (AI), giving it the ability to predict complex events from predictive cues within streams of data. Edited excerpts:

When you say that the ALFA group aims at solving the most challenging Big Data problems—questions that go beyond the scope of typical analytics—what do you exactly mean?

Typical analytics visualize and retrieve direct information in the data. This can be very helpful. Visualizations allow one to discern relationships and correlations, for example. Graphs and charts plotting trends and comparing segments are informative. Beyond its value for typical analytics, one should also be aware that the data has latent (that is, hidden) predictive power. By using historical examples, machine learning makes it possible to build predictive models from data. What segments are likely to spend next month? Which students are likely to drop out? Which patient may suffer an acute health episode? Predictive models of this sort rely upon historical data and are vital. Predictive analytics is new, exciting and what my group aims to enable technologically.
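O'Reilly's point, that historical examples carry latent predictive power, is exactly what fitting a predictive model means in practice. The sketch below trains a tiny logistic-regression model on invented "student" records (hours studied per week, attendance rate) labeled with whether they dropped out, then scores a new student; the data, features, and threshold are all made up for illustration.

```python
import numpy as np

# Fit a minimal logistic-regression dropout predictor from scratch
# on a handful of invented historical records, then score a new case.

X = np.array([[2.0, 0.40], [1.0, 0.30], [8.0, 0.90],
              [7.0, 0.80], [3.0, 0.50], [9.0, 0.95]])  # hours, attendance
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = dropped out, 0 = stayed

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on the logistic loss.
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(2000):
    p = sigmoid(X @ w + b)          # current predicted probabilities
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * float(np.mean(p - y))

new_student = np.array([2.5, 0.45])  # resembles the at-risk group
risk = float(sigmoid(new_student @ w + b))
print(round(risk, 2))  # estimated probability of dropping out
```

The same pattern, historical examples in, a scoring function out, is what sits behind the spend, dropout, and health-episode questions in the interview, just at far larger scale and with richer features.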

Read more

I like this article; why? Because if I plan to make any investment in a robot that will be my personal assistant, housekeeper, or caregiver, I want to ensure that it fits my own needs as a person. Many of us have taken some sort of personality profile for work, interviewed for jobs where we were assessed as a cultural “fit,” and met people first before hiring them. So why should it be any different for these so-called “humanoid robots”? And this should be intriguing for those of us in a group where only 6% of your gender thinks and processes information the way you do.


Emotional behaviors can make your drone seem like it’s an adventurer, anti-social, or maybe just exhausted.

Read more

Now we’re saying 50 years instead of 30 years. And three months ago it was 10 years. I guess six months from now it will be 100 years. Folks need to get a little more pragmatic instead of hyping so much, or they will lose credibility with consumers and the markets.


Pew Report: Majority think AI will replace humans, though most still believe their job is secure, by Steven Loeb on March 10, 2016.

Read more