
Google’s DeepMind develops creepy, ultra-realistic human speech synthesis

We all become accustomed to the tone and pattern of human speech at an early age, and any deviations from what we have come to accept as “normal” are immediately recognizable. That’s why it has been so difficult to develop text-to-speech (TTS) that sounds authentically human. Google’s DeepMind AI research arm has turned its machine learning model on the problem, and the resulting “WaveNet” platform has produced some amazing (and slightly creepy) results.

Google and other companies have made huge advances in making human speech understandable by machines, but making the reply sound realistic has proven more challenging. Most TTS systems are based on so-called concatenative synthesis, which relies on a database of speech fragments that are combined to form words. The result tends to sound uneven, with odd inflections. There is also some work being done on parametric TTS, which uses a data model to generate words, but this sounds even less natural.
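To make the concatenative approach concrete, here is a minimal sketch: units of the target utterance are looked up in a database of recorded fragments and spliced end to end. The fragment database and unit names here are entirely hypothetical; real systems select among many candidate units by acoustic cost, which is exactly where the uneven joins and odd inflections come from.

```python
import numpy as np

# Hypothetical database: speech unit -> recorded waveform samples.
# Real systems store thousands of units per voice.
fragment_db = {
    "he": np.array([0.1, 0.3, 0.2]),
    "el": np.array([0.2, 0.4]),
    "lo": np.array([0.3, 0.1, 0.0]),
}

def concatenate(units):
    """Splice stored fragments end to end; unknown units become silence."""
    silence = np.zeros(2)
    parts = [fragment_db.get(u, silence) for u in units]
    return np.concatenate(parts)

waveform = concatenate(["he", "el", "lo"])
print(len(waveform))  # 8 samples: 3 + 2 + 3
```

Because the fragments were recorded in different contexts, nothing guarantees the pitch or energy matches at the splice points, hence the characteristic unevenness.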

DeepMind is changing the way speech synthesis is handled by directly modeling the raw waveform of human speech. The very high-level approach of WaveNet means that it can conceivably generate any kind of speech or even music. Listen above for an example of WaveNet’s voice synthesis. There’s an almost uncanny valley quality to it.
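The "directly modeling the raw waveform" idea can be sketched in a few lines. WaveNet's core building block is a stack of dilated causal convolutions: each output sample may depend only on past samples, and doubling the dilation per layer makes the receptive field grow exponentially. The toy kernel and layer count below are illustrative, not the published architecture (which adds gated activations, residual connections, and a sampling step).

```python
import numpy as np

def causal_dilated_conv(x, w, d):
    """1-D causal convolution: y[t] = sum_j w[j] * x[t - j*d].
    Left-padding with zeros keeps the output the same length as the input
    and guarantees no sample looks into the future."""
    k = len(w)
    pad = np.concatenate([np.zeros((k - 1) * d), x])
    return np.array([
        sum(w[j] * pad[t + (k - 1) * d - j * d] for j in range(k))
        for t in range(len(x))
    ])

# A stack with dilations 1, 2, 4, 8 and kernel size 2 gives each output
# sample a receptive field of 16 past samples.
x = np.random.randn(32)   # stand-in for a raw audio waveform
w = np.array([0.5, 0.5])  # toy kernel weights
y = x
for d in [1, 2, 4, 8]:
    y = causal_dilated_conv(y, w, d)
print(y.shape)  # (32,)
```

The causality constraint is what lets such a model be trained on real recordings and then generate new audio one sample at a time.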

Why the United Nations Must Move Forward With a Killer Robots Ban

The reason I have been motivated to do this is simple. If we don’t get a ban in place, there will be an arms race. And the end point of this race will look much like the dystopian future painted by Hollywood movies like The Terminator.

Even before this end point, such weapons will likely fall into the hands of terrorists and rogue nations. These people will have no qualms about removing any safeguards. Or using them against us.

And it won’t simply be robots fighting robots. Conflicts today are asymmetric. It will mostly be robots against humans. So unlike what some robot experts might claim, many of those humans will be innocent civilians.

This Is the Holographic AI Servant of Your Dreams…or Maybe Your Nightmares

According to her profile, “She is a comforting character that is great to those living alone. She will always do all she can just for the owner.” How thoughtful and sweet. Except she comes with a $2,600 price tag (and her US version will be sold for $3,000). So, caring for her “owner” is the least she can do, right?


The hologram bot is based on a Japanese anime character, but she isn’t going to be the only character for Gatebox. From the looks of the website, the company is going to make other characters available, presumably also from anime.

Azuma’s hologram appears inside the main tube body of Gatebox, projected at a 1280 × 720 resolution. The hardware itself weighs 5 kg and has stereo speakers, a microphone, and a camera mounted on top. Azuma is built with a machine learning algorithm that helps her recognize her “master’s” voice, learn his sleeping habits, and send him messages through Gatebox’s native chat app.

Perhaps Azuma will work for some, but she might not cut it for others; going home to a cartoonish AI hologram could take some getting used to. In any case, Gatebox is certainly trying to disrupt the virtual assistant space. However, we still seem to be far from the holographic projection we’re really looking for.

Prepare for the real political game changer: robots with votes

My new story for New Scientist on robot voting rights. This story came about because of my discussion with Professor Lawrence Lessig on the BBC World Service. To read my article, you may have to sign up for a free New Scientist account (it takes 1 minute). Please do it! It’s an important article. Gerrymandering in the AI and robot age: https://www.newscientist.com/article/mg23231043-300-prep…ith-votes/ #transhumanism


If human-like AI comes to fruition, then we may have to grant voting rights to our silicon equals. Democracy will change forever, says Zoltan Istvan.

Why we are still light years away from full artificial intelligence

The future is here… or is it?

With so many articles flooding the media space on how humans are at the cusp of full AI (artificial intelligence), it’s no wonder that we believe that the future — which is full of robots and drones and self-driven vehicles, as well as diminishing human control over these machines — is right on our doorstep.

But are we really approaching the singularity as fast as we think we are?

How Researchers Tapped into Brain Activity to Boost People’s Confidence

There may be a way to tap into people’s brain activity to boost their confidence, a new study suggests.

In the study, the researchers used a technique called decoded neurofeedback, which involves scanning people’s brains to monitor their brain activity, and using artificial intelligence to detect activity patterns that are linked with feelings of confidence.

Then, whenever these patterns are detected, people are given a reward — in this case, participants were given a small amount of money.
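The detect-and-reward loop described above can be sketched as follows. A simple linear classifier stands in for the study's machine-learning decoder: it labels each incoming activity pattern, and a small payout is issued whenever the "confident" pattern is detected. The features, weights, threshold, and reward size here are all illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_confidence(pattern, weights, threshold=0.0):
    """Toy linear decoder: score the activity pattern and label it
    'confident' if the score clears the threshold."""
    return float(pattern @ weights) > threshold

weights = rng.normal(size=8)        # pretend these were fit offline
total_reward = 0.0
for trial in range(100):
    pattern = rng.normal(size=8)    # stand-in for one scan's features
    if decode_confidence(pattern, weights):
        total_reward += 0.05        # small payout reinforces the pattern
print(f"total reward: ${total_reward:.2f}")
```

The key point is that participants are never told what to think; the reward alone is meant to reinforce whatever mental state produces the target pattern.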

This Non-Invasive Brain Cap Allows People to Control a Robotic Arm with Their Minds

Although wearing this still makes you look like you’re part of a medical experiment, it is in fact a step forward in BMI progress: it is non-invasive and not as bulky as the other BMI technology I have seen. With the insights collected from this model, plus the roughly 80% accuracy demonstrated in the neural communication, the next generations can focus on materials that make the device more and more seamless. So, it is very promising.




A team of researchers at the University of Minnesota has developed a new non-invasive brain-computer interface that allows people to control a robotic arm using only their minds.
