
Neural networks have become enormously successful – but we often don’t know how or why they work. Now, computer scientists are starting to peer inside their artificial minds.

A PENNY for ’em? Knowing what someone is thinking is crucial for understanding their behaviour. It’s the same with artificial intelligences. A new technique for taking snapshots of neural networks as they crunch through a problem will help us fathom how they work, leading to AIs that work better – and are more trustworthy.

In the last few years, deep-learning algorithms built on neural networks – multiple layers of interconnected artificial neurons – have driven breakthroughs in many areas of artificial intelligence, including natural language processing, image recognition, medical diagnoses and beating a professional human player at the game Go.

The trouble is that we don’t always know how they do it. A deep-learning system is a black box, says Nir Ben Zrihem at the Israel Institute of Technology in Haifa. “If it works, great. If it doesn’t, you’re screwed.”

Neural networks are more than the sum of their parts. They are built from many very simple components – the artificial neurons. “You can’t point to a specific area in the network and say all of the intelligence resides there,” says Zrihem. But the complexity of the connections means that it can be impossible to retrace the steps a deep-learning algorithm took to reach a given result. In such cases, the machine acts as an oracle and its results are taken on trust.

To address this, Zrihem and his colleagues created images of deep learning in action. The technique, they say, is like an fMRI for computers, capturing an algorithm’s activity as it works through a problem. The images allow the researchers to track different stages of the neural network’s progress, including dead ends.
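The paper's exact pipeline isn't spelled out here, but the general recipe (record a network's hidden-layer activations as it processes many inputs, then project those high-dimensional snapshots down to two dimensions so they can be plotted) can be sketched roughly as follows; the toy network and the choice of t-SNE below are illustrative assumptions, not necessarily the team's method:

```python
# Sketch: snapshot a toy network's hidden activations while it processes
# many inputs, then project the snapshots to 2-D so they can be plotted.
# The tiny random-weight network and the use of t-SNE are illustrative
# assumptions, not the researchers' actual setup.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# A toy two-layer network with random weights (stands in for a trained model).
W1 = rng.normal(size=(32, 10))   # input (10) -> hidden (32)
W2 = rng.normal(size=(3, 32))    # hidden (32) -> output (3)

def forward(x):
    """Return the output and the hidden-activation 'snapshot' for one input."""
    hidden = np.tanh(W1 @ x)
    output = W2 @ hidden
    return output, hidden

# Collect one snapshot per input the network processes.
snapshots = []
for _ in range(500):
    x = rng.normal(size=10)
    _, hidden = forward(x)
    snapshots.append(hidden)

# Project the 32-D snapshots to 2-D; clusters in this map hint at distinct
# internal "stages" of processing, which is the spirit of the visualization.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(
    np.array(snapshots)
)
print(embedding.shape)  # (500, 2)
```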

Read more

I must admit that this will be hard to do. Sure, I can code anything to come across as responding and interacting with questions, topics, etc. Granted, logical and pragmatic decision-making is based on the facts and information people have at a given point in time; but being human isn't only a matter of algorithms and pre-scripted data. It also includes being spontaneous and, at times, thinking emotionally. Robots without the ability to be spontaneous and to think emotionally will not be human, and will lack the connection that humans need.


Some people worry that someday a robot – or a collective of robots – will turn on humans and physically hurt or plot against us.

The question, they say, is how can robots be taught morality?

There’s no user manual for good behavior. Or is there?

Read more

This is one that truly depends on the target audience. I still believe that the first solely owned & operated female-robotics company will make billions.


Beyond correct pronunciation, there is the even larger challenge of correctly placing human qualities like inflection and emotion into speech. Linguists call this “prosody,” the ability to add correct stress, intonation or sentiment to spoken language.

Today, even with all the progress, it is not possible to completely represent rich emotions in human speech via artificial intelligence. The first experimental-research results — gained from employing machine-learning algorithms and huge databases of human emotions embedded in speech — are just becoming available to speech scientists.

Synthesised speech is created in a variety of ways. The highest-quality techniques for natural-sounding speech begin with a human voice that is used to generate a database of parts and even subparts of speech spoken in many different ways. A human voice actor may spend from 10 hours to hundreds of hours, if not more, recording for each database.
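As a toy illustration of that database-of-parts approach, the sketch below stitches pre-recorded "units" together with a short crossfade. The unit names and waveforms are placeholders rather than real recordings, and production systems would select among many prosodic variants of each unit:

```python
# Toy sketch of concatenative synthesis: look up pre-recorded units for each
# piece of an utterance and stitch them together with a short crossfade.
# The "database" here is synthetic noise standing in for real recordings of a
# voice actor; the unit names are illustrative placeholders.
import numpy as np

SAMPLE_RATE = 16_000
rng = np.random.default_rng(1)

# In a real system each entry would be a recorded waveform of a diphone or
# syllable, possibly stored in several prosodic variants (stressed, rising...).
unit_database = {
    "hə": rng.normal(scale=0.1, size=SAMPLE_RATE // 8),
    "ləʊ": rng.normal(scale=0.1, size=SAMPLE_RATE // 6),
}

def concatenate(units, fade=200):
    """Join units, crossfading 'fade' samples at each boundary to hide seams."""
    out = unit_database[units[0]].copy()
    ramp = np.linspace(0.0, 1.0, fade)
    for name in units[1:]:
        nxt = unit_database[name].copy()
        out[-fade:] = out[-fade:] * (1 - ramp) + nxt[:fade] * ramp
        out = np.concatenate([out, nxt[fade:]])
    return out

waveform = concatenate(["hə", "ləʊ"])
print(waveform.shape)
```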

Read more

GPS is an utterly pervasive and wonderful technology, but it’s increasingly not accurate enough for modern demands. Now a team of researchers can make it accurate right down to an inch.

Regular GPS registers your location and velocity by measuring the time it takes to receive signals from four or more of the satellites that the military has put into orbit. On its own, it can tell you where you are to within 30 feet. More recently, a technique called Differential GPS (DGPS) improved on that resolution by adding ground-based reference stations, increasing accuracy to within 3 feet.
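To make the basic mechanism concrete, here is a minimal sketch of range-based positioning: each travel-time measurement becomes a distance, and the receiver solves for the point most consistent with all of them. The satellite coordinates and the least-squares solver below are illustrative simplifications (real receivers also solve for their own clock error):

```python
# Sketch of the positioning idea behind GPS: each satellite's signal travel
# time gives a range (distance), and the receiver position is found as the
# point that best agrees with all measured ranges. Satellite coordinates and
# the use of scipy's least_squares are illustrative simplifications; real GPS
# also estimates the receiver's clock bias.
import numpy as np
from scipy.optimize import least_squares

satellites = np.array([          # rough satellite positions in km
    [15_600, 7_540, 20_140],
    [18_760, 2_750, 18_610],
    [17_610, 14_630, 13_480],
    [19_170, 610, 18_390],
])
true_receiver = np.array([1_000.0, 2_000.0, 3_000.0])
ranges = np.linalg.norm(satellites - true_receiver, axis=1)  # "measured" ranges

def residuals(pos):
    # How far each predicted range is from the measured one.
    return np.linalg.norm(satellites - pos, axis=1) - ranges

estimate = least_squares(residuals, x0=np.zeros(3)).x
print(estimate)  # close to the true receiver position
```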

Now, a team from the University of California, Riverside, has developed a technique that augments regular GPS data with on-board inertial measurements from a sensor. That has been tried before, but in the past it required large computers to combine the two data streams, making it impractical for cars or mobile devices. Instead, the University of California team has created a set of new algorithms that, it claims, reduce the complexity of the calculation by several orders of magnitude.
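The article doesn't describe the new algorithms themselves, so the sketch below only illustrates the general idea of inertial-plus-GPS fusion with a toy one-dimensional, constant-gain filter; it is an assumption-laden stand-in, not the Riverside team's method:

```python
# Toy 1-D sketch of GPS/inertial fusion: integrate fast accelerometer
# readings between slow GPS fixes, then use each GPS fix to correct the
# accumulated drift. A simple constant-gain (complementary) filter stands in
# for the paper's algorithms; it is not a reproduction of them.
import numpy as np

dt = 0.01                 # inertial sensor runs at 100 Hz
gps_every = 100           # GPS fix once per second
gain = 0.2                # how strongly a GPS fix pulls the estimate back

rng = np.random.default_rng(2)
position, velocity = 0.0, 0.0          # fused estimate
true_position, true_velocity = 0.0, 0.0

for step in range(1, 1001):
    accel_true = np.sin(step * dt)                    # some true motion
    true_velocity += accel_true * dt
    true_position += true_velocity * dt

    accel_meas = accel_true + rng.normal(scale=0.05)  # noisy accelerometer
    velocity += accel_meas * dt                       # dead reckoning
    position += velocity * dt                         # (drifts over time)

    if step % gps_every == 0:                         # occasional GPS fix
        gps_meas = true_position + rng.normal(scale=1.0)
        position += gain * (gps_meas - position)      # correct the drift

print(f"true: {true_position:.2f} m, fused estimate: {position:.2f} m")
```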

Read more

What you’re looking at is the first direct observation of an atom’s electron orbital: an atom’s actual wave function! To capture the image, researchers utilized a new quantum microscope — an incredible new device that literally allows scientists to gaze into the quantum realm.

An orbital structure is the space in an atom that’s occupied by an electron. But when describing these super-microscopic properties of matter, scientists have had to rely on wave functions — a mathematical way of describing the fuzzy quantum states of particles, namely how they behave in both space and time. Typically, quantum physicists use formulas like the Schrödinger equation to describe these states, often coming up with complex numbers and fancy graphs.
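For reference, here is the time-independent Schrödinger equation for a single electron in a Coulomb potential, together with the Born rule that turns the wave function into a measurable probability density:

```latex
% Time-independent Schrödinger equation for an electron in a Coulomb potential,
% and the Born rule linking the wave function to a measurable probability density.
\[
  -\frac{\hbar^{2}}{2m}\nabla^{2}\psi(\mathbf{r})
  \;-\;\frac{e^{2}}{4\pi\varepsilon_{0}\,r}\,\psi(\mathbf{r})
  \;=\; E\,\psi(\mathbf{r}),
\qquad
  P(\mathbf{r}) \;=\; \lvert\psi(\mathbf{r})\rvert^{2}.
\]
```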

Up until this point, scientists have never been able to actually observe the wave function. Trying to catch a glimpse of an atom’s exact position or the momentum of its lone electron has been like trying to catch a swarm of flies with one hand; direct observations have this nasty way of disrupting quantum coherence. What’s been required to capture a full quantum state is a tool that can statistically average many measurements over time.
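As a toy illustration of that statistical averaging, the sketch below simulates many position measurements of a hydrogen ground-state electron and histograms them; the histogram approaches the radial probability density set by the wave function. The sampling scheme is purely illustrative and is not the quantum microscope's actual procedure:

```python
# Toy illustration of reconstructing |psi|^2 statistically: simulate many
# position "measurements" of a hydrogen 1s electron and histogram the radii.
# The histogram converges to the radial probability density ~ r^2 |psi(r)|^2,
# which is the spirit of averaging many measurements; it is not how the
# quantum microscope actually works.
import numpy as np

a0 = 1.0                      # Bohr radius (atomic units)
rng = np.random.default_rng(3)

def sample_radii(n):
    """Rejection-sample radii from p(r) proportional to r^2 * exp(-2 r / a0)."""
    out = np.empty(0)
    while out.size < n:
        r = rng.uniform(0, 10 * a0, size=n)
        p = (r / a0) ** 2 * np.exp(-2 * r / a0)
        keep = rng.uniform(0, 0.15, size=n) < p   # 0.15 bounds the density (max ~ e^-2)
        out = np.concatenate([out, r[keep]])
    return out[:n]

radii = sample_radii(50_000)
counts, edges = np.histogram(radii, bins=50, range=(0, 10 * a0), density=True)
centers = (edges[:-1] + edges[1:]) / 2
print(f"most probable radius ≈ {centers[np.argmax(counts)]:.2f} a0 (theory: 1 a0)")
```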

Read more

Here is a concept to think about when we’re 20 or 30 years into the future — imagine a world where humans and all living things in it are truly Singular, and the new AI & humanoid robots are alive and well. Will AI (including robots) ever need therapy? Will AI ever get stressed out or have panic attacks? Will any humans know what AI is thinking once we give AI more independence?

I ask these questions because, as we enhance and evolve AI to be like humans, interpreting and processing emotions and feelings and interacting as humans do, will AI fully experience the struggles of everyday life the way some humans do? And when an AI needs counseling or therapy, will it go to another AI or see a human therapist?

As we evolve AI, we must look at the full, longer-term picture, including how human we really wish to make it.


Two actors pose for stock footage that can be used in political ads. Karen O’Connell, left, & Leslie Luxemburg pretend to chat over coffee. In a political ad, this clip could be used to illustrate any number of topics. (Marvin Joseph/The Washington Post)

Two weeks ago, the Internet Archive started its new Political TV Ad Archive, which monitors television stations in 20 markets in eight U.S. states to compile a list of 2016 primary-election advertisements & uses audio-fingerprinting algorithms to automatically flag each airing of those spots.
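The archive's fingerprinting pipeline isn't detailed here, but the core idea (reduce each clip to a compact, noise-tolerant fingerprint, then match fingerprints of new broadcasts against the reference ads) can be sketched roughly as follows; the spectrogram-peak hashing below is a common textbook approach assumed for illustration, not the archive's actual code:

```python
# Rough sketch of audio fingerprinting: reduce a clip to hashes of prominent
# spectrogram peaks, then count hash overlap between a broadcast segment and
# each known ad. The peak-hashing scheme is a common illustrative approach,
# not the Internet Archive's actual pipeline.
import numpy as np
from scipy.signal import spectrogram

def fingerprint(audio, rate=8000, top_peaks=5):
    """Return a set of (frequency-bin, frame) hashes for prominent peaks."""
    freqs, times, spec = spectrogram(audio, fs=rate, nperseg=256)
    hashes = set()
    for frame in range(spec.shape[1]):
        # keep the few strongest frequency bins in each time frame
        for bin_idx in np.argsort(spec[:, frame])[-top_peaks:]:
            hashes.add((int(bin_idx), frame))
    return hashes

def match_score(query, reference):
    """Fraction of the reference fingerprint found in the query."""
    return len(query & reference) / max(len(reference), 1)

rng = np.random.default_rng(4)
ad = rng.normal(size=8000 * 5)                           # 5 s of "ad" audio
broadcast = ad + rng.normal(scale=0.01, size=ad.shape)   # same ad, slight noise

print(match_score(fingerprint(broadcast), fingerprint(ad)))  # close to 1.0
```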

[Who are all those smiling people in campaign advertisements?]

The project will create a public database of political television ads in the 2016 race, showing where they are running & who is paying for them. Since the end of November, the archive has identified 267 distinct ads that, if broadcast end to end, would total 196 minutes. They have aired a collective 72,807 times on the stations it is monitoring.

Read more

What if computers could recognize objects as well as the human brain can? Electrical engineers at the University of California, San Diego have taken an important step toward that goal by developing a pedestrian detection system that performs in near real-time (2-4 frames per second) and with roughly half the error of existing systems. The technology, which incorporates deep learning models, could be used in “smart” vehicles, robotics, and image and video search systems.
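The UCSD system itself isn't reproduced here; as a much simpler stand-in, OpenCV's built-in HOG-plus-SVM people detector shows what the input and output of a pedestrian detector look like (image in, bounding boxes out):

```python
# Minimal pedestrian-detection example using OpenCV's built-in HOG + linear
# SVM people detector. This is an off-the-shelf classical baseline showing the
# shape of the problem (image in, bounding boxes out); it is not the UCSD
# deep-learning system described in the article.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# "street.jpg" is a placeholder path for any photo containing pedestrians.
image = cv2.imread("street.jpg")
boxes, _ = hog.detectMultiScale(image, winStride=(8, 8), scale=1.05)

for (x, y, w, h) in boxes:
    # draw one green rectangle per detected pedestrian
    cv2.rectangle(image, (int(x), int(y)), (int(x + w), int(y + h)), (0, 255, 0), 2)

print(f"found {len(boxes)} pedestrian(s)")
cv2.imwrite("street_detected.jpg", image)
```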

Read more

Around the world, cities are choking on smog. But a new AI system aims to analyze just how bad the situation is by aggregating data from smartphone pictures captured far and wide across cities.

The project, called AirTick, has been developed by researchers from Nanyang Technological University in Singapore, reports New Scientist. The reasoning is pretty simple: Deploying air sensors isn’t cheap and takes a long time, so why not make use of the sensors that everyone has in their pocket?

The result is an app which allows people to report smog levels by uploading an image tagged with time and location. Then, a machine learning algorithm chews through the data and compares it against official air-quality measurements where it can. Over time, the team hopes the software will slowly be able to predict air quality from smartphone images alone.
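AirTick's model isn't published in this piece, so the sketch below only illustrates the general recipe: derive a few haze-related statistics from each photo and regress them against official readings. The features, the regressor, and the synthetic data are all illustrative assumptions:

```python
# Toy sketch of the AirTick idea: turn each smartphone photo into a few
# haze-related features (brightness, contrast, colour spread) and fit a
# regressor against official air-quality readings taken at the same time and
# place. Features, model, and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)

def haze_features(image):
    """image: HxWx3 uint8 array -> a few simple global statistics."""
    gray = image.mean(axis=2)
    return np.array([gray.mean(), gray.std(), image.std(axis=(0, 1)).mean()])

# Synthetic stand-in for (photo, official pollution index) training pairs.
photos = rng.integers(0, 256, size=(200, 64, 64, 3), dtype=np.uint8)
official_index = rng.uniform(0, 300, size=200)     # e.g. a PSI-like reading

X = np.array([haze_features(p) for p in photos])
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, official_index)

new_photo = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(model.predict([haze_features(new_photo)]))   # predicted air-quality index
```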

Read more

Individual brain cells within a neural network are highlighted in this image obtained using a fluorescent imaging technique (credit: Sandra Kuhlman/CMU)

Carnegie Mellon University is embarking on a five-year, $12 million research effort to reverse-engineer the brain and “make computers think more like humans,” funded by the U.S. Intelligence Advanced Research Projects Activity (IARPA). The research is led by Tai Sing Lee, a professor in the Computer Science Department and the Center for the Neural Basis of Cognition (CNBC).

The research effort, through IARPA’s Machine Intelligence from Cortical Networks (MICrONS) research program, is part of the U.S. BRAIN Initiative to revolutionize the understanding of the human brain.

A “Human Genome Project” for the brain’s visual system

“MICrONS is similar in design and scope to the Human Genome Project, which first sequenced and mapped all human genes,” Lee said. “Its impact will likely be long-lasting and promises to be a game changer in neuroscience and artificial intelligence.”

Read more

Researchers at the University of Bristol have created ‘Mogrify’ — an algorithm that can predict how to reprogram virtually any type of cell

One way of creating new cells is with stem cells. The most famous of these are embryonic and induced pluripotent stem cells, the latter made from your own cells. While these cells have immense potential, the process of creating them is complicated and not without error. Coaxing these cells into a new type once you’ve made them is also easier said than done.

Read more