
Despite deep learning's huge contributions to the field of artificial intelligence, there's something very wrong with it: it requires enormous amounts of data. This is one thing that both the pioneers and critics of deep learning agree on. In fact, deep learning did not emerge as the leading AI technique until a few years ago, largely because of the limited availability of useful data and the shortage of computing power to process it.

Reducing the data-dependency of deep learning is currently among the top priorities of AI researchers.

In his keynote speech at the AAAI conference, computer scientist Yann LeCun discussed the limits of current deep learning techniques and presented the blueprint for "self-supervised learning," his roadmap to solving deep learning's data problem. LeCun is one of the godfathers of deep learning and the inventor of convolutional neural networks (CNNs), one of the key elements that have spurred a revolution in artificial intelligence in the past decade.

In the stillness and noise of the M.R.I., I picture what the magnet is doing to my brain. I imagine hydrogen protons aligning along and against the direction of its field. Bursts of radio waves challenge their orientation, generating signals that are rendered into images. Other than the sting of the contrast agent, the momentary changes in nuclear spin feel like nothing. “Twenty-five more minutes,” the radiologist says through the plastic headphones. Usually, I fall asleep.

I’ve had more than 50 scans since 2005, when I received a diagnosis of multiple sclerosis, and I now possess thousands of images of my brain and spine. Sometimes I open the files to count the spinal-cord lesions that are slowly but aggressively taking away my ability to walk. On days my right leg can barely clear the ground, it feels as if a corkscrew is twisting into my femur. I take halting steps, like a hapless robot, until it’s impossible to move forward. “Maybe in 10 years there will be a pill, or a treatment,” a doctor told me.

For now, even a sustained low fever could cause permanent disability, and medications that treat the disease have left me immunosuppressed, making fevers more likely. I quarantined before it was indicated, and what I miss most now, sheltering in place, are walks through my neighborhood park in Los Angeles with my dog, who gleefully chases the latest bouncy ball I’m hurling against the concrete. Her current favorite is the Waboba Moon Ball, which comes in highlighter-fluorescent yellow and Smurf blue, among other colors. Technically, Moon Balls are spherical polyhedrons. They sport radically dimpled surfaces, as if Buckminster Fuller had storyboarded an early pitch for “Space Jam.” Moon Balls are goofy, but they bounce 100 feet.

Some foresee that quantum computers will solve some of the world’s most serious problems. Others, however, believe the advantages will be outweighed by the downsides, such as cost, or hold that quantum computers simply cannot work, unable to handle the complexity we envision demanding of them. The deciding factor will be whether manufacturers can claim “quantum supremacy” by achieving low error rates for their machines and outperforming today’s classical computers.

Hollywood has made numerous predictions about the future of artificial intelligence, some disturbing, others encouraging. One of the fastest-growing research areas looks at using quantum computers to shape artificial intelligence. In fact, some consider machine learning the yardstick by which the field is measured.

The idea of machine learning, in which a system “learns” from new data without explicit instruction or programming, has existed since 1959, although we still haven’t quite arrived at the vision laid out by the likes of Isaac Asimov and Arthur C. Clarke. Even so, the conviction is that quantum computing will help accelerate our progress. What was once a fringe idea shunned by the wider scientific community has grown into a popular and practical field worthy of serious investment.

A Google interview candidate recently asked me: “What are three big science questions that keep you up at night?” This was a great question because one’s answer reveals so much about one’s intellectual interests — here are mine:

Q1: Can we imitate “thinking” from only observing behavior?

Suppose you have a large fleet of autonomous vehicles with human operators driving them around diverse road conditions. We can record the decisions made by those humans and attempt to use imitation-learning algorithms to map the vehicle’s observations to the steering decisions the human would take.
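In its simplest form, this kind of imitation learning is just supervised regression from logged observations to the human's steering commands (often called behavior cloning). Here is a minimal sketch under invented assumptions: the feature layout, the hidden "human policy," and the data are all hypothetical, not anything from a real driving fleet.

```python
# Behavior-cloning sketch with hypothetical data: treat imitation learning
# as supervised regression from observations to the human's steering command.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each logged frame is a 4-dim observation (e.g. lane offset,
# heading error, speed, road curvature) and the label is the steering angle
# the human chose at that frame.
obs = rng.normal(size=(1000, 4))
true_w = np.array([0.8, -1.2, 0.05, 0.4])          # unknown "human policy"
steer = obs @ true_w + rng.normal(scale=0.01, size=1000)  # logged labels

# Fit a linear policy by least squares: the simplest imitation learner.
w_hat, *_ = np.linalg.lstsq(obs, steer, rcond=None)

# At deployment, the learned policy maps a fresh observation to a command.
new_obs = np.array([0.1, -0.2, 0.9, 0.0])
command = new_obs @ w_hat
```

A linear model is only a stand-in here; the same observation-to-action supervision applies unchanged if the regressor is a deep network.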


Xenobots, which were first brought to life back in January, can’t reproduce. Instead, computer scientists program them in a virtual environment and then 3D print their creations out of embryonic cells.

“We are witnessing almost the birth of a new discipline of synthetic organisms,” Columbia University roboticist Hod Lipson, who was not part of the research team, told the NYT. “I don’t know if that’s robotics, or zoology or something else.”

An AI that translates brainwaves into sentences.


Scientists have developed a new artificial-intelligence-based system that converts brain activity into text, which could transform communication for people who can’t speak or type.

Electrodes placed on the brain have previously been used to translate brainwaves into words spoken by a computer, helping people who have lost the ability to speak. When you speak, your brain sends signals from the motor cortex to the muscles in your jaw, lips, and larynx to coordinate their movement and produce sound.
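To make the decoding step concrete, here is a deliberately toy sketch, not the study's actual method: it invents per-word neural "templates" and classifies a recorded activity vector by nearest centroid. The word list, feature dimension, and noise level are all assumptions for illustration.

```python
# Toy brain-to-text decoder (hypothetical data, not the published system):
# classify a neural-activity feature vector by its nearest per-word template.
import numpy as np

rng = np.random.default_rng(1)
words = ["hello", "yes", "no"]

# Pretend each word evokes a characteristic 16-dim activity pattern.
templates = {w: rng.normal(size=16) for w in words}

def decode(activity: np.ndarray) -> str:
    """Return the word whose template is closest to the observed activity."""
    return min(words, key=lambda w: np.linalg.norm(activity - templates[w]))

# Simulate a noisy recording of "yes" and decode it.
recording = templates["yes"] + rng.normal(scale=0.1, size=16)
decoded = decode(recording)
```

Real systems replace the nearest-centroid rule with sequence models trained on many recordings, but the input-output contract sketched here (activity vector in, text out) is the same.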