
Finding Artificial Intelligence Through Storytelling — An Interview with Dr. Roger Schank

The media is all abuzz with tales of Artificial Intelligence (AI). The provocative two-letter symbol conjures up images of invading autonomous robot drones and Terminator-like machines wreaking havoc on mankind. Then there’s the pervasive presence of deep learning and big data, also referred to as artificial intelligence. This might leave some of us wondering: is artificial intelligence one of these things, or all of them?

In that sense, AI leaves a bit of an ambiguous trail – there does not seem to be a clear definition, even among scientists and researchers in the field. There are certainly many different branches of AI. I asked Dr. Roger Schank, Professor Emeritus at Northwestern University, for a clearer definition; he told me that artificial intelligence is not big data and deep learning algorithms, at least not in the pure sense of the term.

Roger emphasizes that intelligence has everything to do with the intersection of learning and interaction and memory. “I will tell you the number one thing people do, it’s pretty obvious – they talk to each other. Guess how hard that is? That is phenomenally hard, that is the subsection of AI called natural language processing, the part that I worked on my whole life, and I understand how far away we are from that.”

Take a “simple” AI concept, such as how to create a computer that plays chess, to better understand the challenge. There are, more or less, two approaches to creating an intelligent machine that can play chess like a champion. The first approach requires programming the computer to calculate thousands of possible moves in advance, while the second involves building a computer system that tries to imitate a grand master. In the historical pursuit of creating an artificially intelligent entity, the vast majority of scientists chose the first option, programming based on prediction.
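To make the first approach concrete, here is a deliberately tiny sketch of the “search ahead” idea in Python. It is not any real chess engine’s code; the hand-written game tree stands in for the enormous tree of positions a real program would generate and score.

```python
# A minimal, illustrative sketch of the first approach: score a position by
# exhaustively searching the tree of possible continuations (minimax).
# The tiny hand-written game tree below is a stand-in for the huge tree a
# real chess program would build; it is not any engine's actual code.

def minimax(node, maximizing=True):
    """Return the value of `node`, assuming both sides play optimally.

    A leaf is just a number (the static evaluation of that position);
    an inner node is a list of child positions reachable in one move.
    """
    if isinstance(node, (int, float)):      # leaf: static evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

if __name__ == "__main__":
    # Three candidate moves, each leading to a small subtree of replies.
    game_tree = [[3, 5], [2, 9], [0, 7]]
    print(minimax(game_tree))               # -> 3: the best guaranteed outcome
```

Real programs apply this same logic, plus pruning and a carefully tuned evaluation function, to millions of positions per second, which is what “predicting thousands of moves” amounts to in practice.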

Predicting thousands of moves sounds like a next-to-impossible task, but scientists have worked for decades to do just that, because the second option — imitation through trial and error — is that much more complex. Still, if we want to create an artificial intelligence that thinks on its own, Schank argues that the second option is the more promising of the two. “Some of us always saw that this could be the field that could tell us more about people by pursuing method number two (i.e. imitating the grand master),” he says.

Learning more about people while simultaneously developing true intelligence is why Schank entered the field in the first place. “When we talk about Facebook, we might think about the work of AI and face recognition; this technology has certainly come a long way, but that’s a different part of AI. The part of AI that people imagine – the talking and teaching and thinking robots – most people that talk about AI are not really talking about these questions.”

The famous Turing test, run every year as a competition among chatbots, is another example of researchers working towards developing an artificial intelligence, and yet every year there is little doubt that the contestant is a machine. “This is not AI, this is something [chatbots] that could ‘fool’ someone,” Roger argues.

In order to make a legitimately useful house robot, for example, scientists would have to solve the natural language problem, the memory problem, and the learning problem. If a future household robot makes a bad meal and overcooks the meat, you want the robot to learn from its mistakes and become smarter through experience. Schank describes this seemingly simple act – learning from mistakes and having a sense of awareness about what to do next – as the hallmark of intelligence.
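A toy sketch, loosely in the spirit of Schank’s case-based reasoning work, can make this concrete. Nothing below is his actual software; the task names and data structure are invented purely to illustrate “remember what went wrong and adjust next time.”

```python
# Toy sketch of learning from experience, loosely in the spirit of
# case-based reasoning.  This is not Schank's actual software; it only
# illustrates recording outcomes and consulting them before acting again.

cases = []  # each case: (task, parameters, outcome)

def remember(task, params, outcome):
    cases.append((task, params, outcome))

def suggest(task, default_params):
    """Reuse the most recent successful parameters for this task,
    falling back to the default if no stored attempt succeeded."""
    for stored_task, params, outcome in reversed(cases):
        if stored_task == task and outcome == "success":
            return params
    return default_params

# First attempt: the roast is overcooked, so the failure is recorded.
remember("cook_roast", {"minutes": 90}, "failure: overcooked")
# A second attempt with a shorter time succeeds and becomes the new habit.
remember("cook_roast", {"minutes": 70}, "success")

print(suggest("cook_roast", {"minutes": 90}))   # -> {'minutes': 70}
```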

Schank is particularly interested in AI that can help humans by providing more than just a great restaurant review or telling a joke on command. Currently, he is building a program called ExTRA (Experts Telling Relevant Advice), made for and through DARPA, with the objective of “getting a machine to say the right thing to the right person at the right moment.” Actually, the emphasis is less on a machine and more on an intelligently organized body of knowledge.

Roger recounts a real-life story that starts with a ship traveling through the Suez Canal when the boiler suddenly catches fire. The captain starts to put out the fire, and his superior, who is also on board the ship, asks him what he is doing. “Why, putting out the fire of course!” replies the captain. The superior orders the captain to continue through the canal without stopping for the fire, explaining that he cannot stop the ship in the Suez Canal, because corrupt Egyptian officials would not hesitate to take over the ship and its cargo. ‘We’re not doing it, keep going,’ the superior orders.

“I thought that was a weird story,” remarks Schank. It was only later, after meeting a real ship captain and giving him the story’s premise, that Roger was surprised to find the captain arriving at the same conclusion: ‘Full speed ahead!’ The anecdote illustrates getting a story from an expert at the moment you need it most – a “just-in-time” story. There is untold value in receiving wisdom in a timely fashion, something many cultures have long expressed through oral or written short stories.

On the road to getting a machine to be intelligent, one would have to conquer the “expert in the machine.” Working on artificial intelligence that imitates a human mind is not a clean and streamlined process. Developing a machine or system that can imitate a story-telling human does not necessarily yield an intelligent entity. Does the computer really understand what it’s saying? As far as we can deduce, the answer is still no. At some point, Schank remarks, scientists will have to understand and incorporate the structure of human memory and learning – the key issue in intelligence – in order to build a truly intelligent machine.

Schank does not believe a true, machine-based AI is going to emerge in his lifetime. There simply has not been enough funding in the appropriate direction of AI research. Yet Roger believes that in the next 10 years we can replicate a version of ‘just-in-time’ teaching: an indexed system that helps people think through situations in life by providing them with an extension of mind, a tool that improves human decision-making through helpful and relevant stories.
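As a rough illustration of what such an indexed, just-in-time system might look like (a hypothetical sketch, not the actual ExTRA design), stories can be tagged with the situations they speak to and retrieved by how well those tags overlap a description of the user’s current predicament.

```python
# Hypothetical sketch of a "just-in-time" story index: expert stories are
# tagged with situation keywords, and the story with the largest keyword
# overlap with the current situation is surfaced.  This only illustrates
# the idea; it is not the ExTRA implementation.

stories = [
    {"tags": {"fire", "ship", "canal", "delay"},
     "text": "A captain kept sailing through the Suez Canal despite a "
             "boiler fire, because stopping would have cost far more."},
    {"tags": {"deadline", "bug", "release"},
     "text": "An engineer shipped on time with a known minor bug and "
             "patched it the next day."},
]

def relevant_story(situation_words):
    """Return the story whose tags best overlap the situation description."""
    return max(stories, key=lambda s: len(s["tags"] & set(situation_words)))

print(relevant_story({"fire", "ship", "hurry"})["text"])
```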

The Holy Grail: Machine Learning + Extreme Robotics

Two experts on robotics and machine learning will reveal breakthrough developments in humanlike robots and machine learning at the annual SXSW conference in Austin next March, in a proposed panel called “The Holy Grail: Machine Learning + Extreme Robotics.”

Attendees will be able to interact with Hanson Robotics’ forthcoming state-of-the-art female robot, Sophia, who will join the panel as a participant, spontaneously tracking human faces, listening to speech, and generating natural-language responses as she takes part in a dialogue about the potential of genius machines.

This conversation on the future of advanced robotics combined with machine learning and cognitive science will feature visionary Hanson Robotics founder/CEO David Hanson and Microsoft executive Jim Kankanias, who heads Program Management for Information Management and Machine Learning in the Cloud + Enterprise Division at Microsoft. The panel will be moderated by Hanson Robotics consultant Eric Shuss.


In No-A, A Robot Risks Everything For Its Creator

No-A is a short student film directed by Liam Murphy. In it, a hulking robot risks everything to save its creator from an army of faceless soldiers. It’s a neat CGI film with some outstanding designs.

Overall, this feels like a sliver of a much larger story, and I hope the creators and production team will continue it in another short film or perhaps a longer feature. It seems like there’s a lot more story to uncover.


Toyota Pledges $50M To Research AI For Autonomous Vehicles, Hires DARPA’s Dr. Gill Pratt

Today, Toyota announced that it has hired Gill Pratt to drive its autonomous car research. Pratt is best known in this field for his work at DARPA and MIT, including launching the DARPA Robotics Challenge. The company is investing $50 million in the research over the next five years and partnering with MIT and Stanford.

Pratt has spent the past five years with DARPA, and laid out what’s important for Toyota at an event in Palo Alto today: “Our long-term goal is to make a car that is never responsible for a crash.”

Pratt will serve as Toyota’s “Executive Technical Advisor” on the research.


Startup claims a breakthrough in brain-like computing on chips

A small, Santa Fe, New Mexico-based company called Knowm claims it will soon begin commercializing a state-of-the-art technique for building computing chips that learn. Other companies, including HP and IBM, have already invested in developing these so-called brain-based chips, but Knowm says it has just achieved a major technological breakthrough that it hopes to push into production within a few years.

The basis for Knowm’s work is a piece of hardware called a memristor, which functions (warning: oversimplification coming) by mimicking synapses in the brain. Rather than committing information to a software program and traditional computer memory, memristors are able to “learn” by adjusting their electrical resistance (the “ristor” part of memristor) according to the charge that has flowed through them, much like synapses strengthen the connections between commonly used neurons in the brain.
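The idea can be caricatured in a few lines of Python: the device’s conductance, its “synaptic weight,” drifts as charge flows through it, so past activity shapes how easily future current passes. The toy model below is only an illustration of that principle, not Knowm’s, HP’s, or IBM’s actual device physics.

```python
# Deliberately crude toy model of the memristor idea described above: the
# device's conductance (its "synaptic weight") changes with the charge that
# has flowed through it, so past activity shapes future behavior.

class ToyMemristor:
    def __init__(self, conductance=0.1, learning_rate=0.01):
        self.conductance = conductance
        self.learning_rate = learning_rate

    def pulse(self, voltage, duration=1.0):
        """Apply a voltage pulse; return the current and update the state."""
        current = voltage * self.conductance           # Ohm's-law-like response
        # The charge that flowed through the device nudges its conductance,
        # loosely analogous to a synapse strengthening with use.
        self.conductance += self.learning_rate * current * duration
        self.conductance = max(0.01, min(self.conductance, 1.0))  # keep in range
        return current

m = ToyMemristor()
for _ in range(5):
    print(round(m.pulse(voltage=1.0), 4))   # current grows as the device "learns"
```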

Done correctly – and this is the result that HP and IBM are after – memristors could make computer chips not only much smarter but also far more energy efficient. That could mean data centers that don’t use as much energy as small towns, as well as more viable robotics, driverless cars, and other autonomous devices. Alex Nugent, Knowm’s founder and CEO, says memristors – especially the ones his company is working on – offer “a massive leap in efficiency” over traditional CPUs, GPUs, and other hardware now used to power artificial intelligence workloads.


Delivering Drugs And Removing Toxins With 3-D Printed Micro-Robots

Nanotechnology and 3-D printing are two fields with huge potential in general, but applying these technologies to biology holds tremendous and exciting prospects of its own. In a promising prototype, scientists have created micro-robots shaped like fish, thinner than a human hair, that can be used to remove toxins, sense their environment, or deliver drugs to specific tissues.

These tiny fish were formed using a high-resolution 3-D printing technology directed with UV light, and are essentially aquatic-themed sensing and delivery packages. Platinum particles that react with hydrogen peroxide push the fish forward, and iron oxide at the head of each fish allows it to be steered by magnets; together these enable control of where the fish ‘swim’ off to. And there you have it – a simple, tiny machine that can be customised for various medical tasks.

In a proof of concept, researchers attached polydiacetylene (PDA) nanoparticles to the fish’s bodies; these bind certain toxins and fluoresce in the red spectrum. When the fish entered an environment containing those toxins, they did indeed fluoresce and neutralised the compounds.

