
Voicebots, humanoids and other tools capture memories for future generations.

What happens after we die—digitally, that is? In this documentary, WSJ’s Joanna Stern explores how technology can tell our stories for generations to come.

Old photos, letters and tapes. Tech has long allowed us to preserve memories of people long after they have died. But new tools offer interactive options, including memorialized online accounts, voice bots and even humanoid robots. WSJ’s Joanna Stern journeys across the world to test some of them for a young woman living on borrowed time. Photo illustration: Adele Morgan/The Wall Street Journal.


Is it possible to read a person’s mind by analyzing the electric signals from the brain? The answer may be much more complex than most people think.

Purdue University researchers working at the intersection of artificial intelligence and neuroscience say a prominent dataset used to try to answer this question is confounded, and that many eye-popping findings based on it, including some that received high-profile recognition, therefore do not hold up.

The Purdue team spent more than a year running extensive tests on the dataset, which recorded the brain activity of individuals as they viewed a series of images. Each participant wore a cap fitted with dozens of electrodes during the viewing sessions.
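As a toy illustration of how such a confound can arise (a hypothetical sketch, not the Purdue team's actual analysis): if all images of one class are presented in one contiguous block, slow drift in the recording over the session is enough for a classifier to "decode" the class far above chance, even when the data contain no stimulus information at all. The numbers and the nearest-centroid classifier below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_classes, trials_per_block = 5, 40
# Block design: all trials of class k happen consecutively.
labels = np.repeat(np.arange(n_classes), trials_per_block)

# Simulated "EEG" feature: slow session drift plus noise, and
# deliberately NO stimulus-related signal of any kind.
t = np.arange(n_classes * trials_per_block)
drift = 0.05 * t                          # e.g. electrode impedance drifting over time
x = drift + rng.normal(0, 0.2, t.size)

# Random train/test split *within* the session (the flawed protocol).
idx = rng.permutation(t.size)
train, test = idx[: t.size // 2], idx[t.size // 2 :]

# Nearest-centroid classifier on the single drift-contaminated feature.
centroids = np.array([x[train][labels[train] == k].mean() for k in range(n_classes)])
pred = np.abs(x[test, None] - centroids[None, :]).argmin(axis=1)
acc = (pred == labels[test]).mean()
print(f"accuracy with zero stimulus signal: {acc:.2f}")  # far above the 0.20 chance level
```

Splitting train and test data across different recording blocks, rather than randomly within the session, removes this shortcut and collapses the apparent decoding accuracy.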

Put a robot in a tightly controlled environment and it can quickly surpass human performance at complex tasks, from building cars to playing table tennis. But throw these machines a curveball and they’re in trouble—just check out this compilation of some of the world’s most advanced robots coming unstuck in the face of notoriously challenging obstacles like sand, steps, and doorways.

The reason robots tend to be so fragile is that the algorithms that control them are often manually designed. If they encounter a situation the designer didn’t think of, which is almost inevitable in the chaotic real world, then they simply don’t have the tools to react.

Rapid advances in AI have provided a potential workaround by letting robots learn how to carry out tasks instead of relying on hand-coded instructions. A particularly promising approach is deep reinforcement learning, in which the robot interacts with its environment through trial and error and is rewarded for carrying out the correct actions. Over many repetitions, it can use this feedback to learn how to accomplish the task at hand.
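The trial-and-error loop described above can be sketched with tabular Q-learning, the simplest form of reinforcement learning (a minimal toy example on a five-state corridor, not the deep-network variants used on real robots; all parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny 1-D corridor: states 0..4, reward only for reaching state 4.
n_states, n_actions = 5, 2              # actions: 0 = left, 1 = right
q = np.ones((n_states, n_actions))      # optimistic init encourages exploration
alpha, gamma, eps = 0.5, 0.9, 0.1      # learning rate, discount, exploration rate

def step(s, a):
    """Environment dynamics: move left/right, reward 1 at the goal."""
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r, s2 == n_states - 1

for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(q[s].argmax())
        s2, r, done = step(s, a)
        # Update the estimate toward the reward-weighted feedback.
        target = r if done else r + gamma * q[s2].max()
        q[s, a] += alpha * (target - q[s, a])
        s = s2

policy = q.argmax(axis=1)
print("learned actions per state:", policy)  # action 1 ("right") is optimal in states 0-3
```

Deep reinforcement learning replaces the Q-table with a neural network so the same feedback loop scales to camera images and continuous joint angles, which is what makes it attractive for robots facing messy real-world inputs.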

Elon Musk has been a vocal critic of artificial intelligence, calling it an “existential threat to humanity”. Is he right?


Musk is heavily invested in AI research himself through his OpenAI and Neuralink ventures, and believes that the only safe road to AI involves planning, oversight and regulation. He recently summarized this, saying:

“My recommendation for the longest time has been consistent. I think we ought to have a government committee that starts off with insight, gaining insight… Then, based on that insight, comes up with rules in consultation with industry that give the highest probability for a safe advent of AI.”

Across dozens of media appearances, Musk’s message about AI has indeed been remarkably consistent. He says it’s dangerous, and says it needs regulation, or else “AI could turn humans into an endangered species”.

A team of researchers has developed a flexible, rechargeable silver oxide-zinc battery with a five to 10 times greater areal energy density than the state of the art. The battery is also easier to manufacture; while most flexible batteries need to be manufactured in sterile conditions, under vacuum, this one can be screen printed in normal lab conditions. The device can be used in flexible, stretchable electronics for wearables as well as soft robotics.

The team, made up of researchers at the University of California San Diego and California-based company ZPower, details their findings in the Dec. 7 issue of the journal Joule.

“Our batteries can be designed around electronics, instead of electronics needing to be designed around batteries,” said Lu Yin, one of the paper’s co-first authors and a Ph.D. student in the research group of UC San Diego nanoengineering Professor Joseph Wang.

Being able to see, move, and exercise independently is something most of us take for granted. [Thomas Panek] was an avid runner before losing his sight due to a genetic condition, and had to rely on other humans and guide dogs to run again. After challenging attendees at a Google hackathon, Project Guideline was established to give blind runners (or walkers) independence from a cane, dog, or another human while exercising outdoors. Using a smartphone with line-following AI software and bone conduction headphones, users can be guided along a path with a line painted on it. You need to watch the video below to get a taste of just how incredible it is for the users.

Getting a wheeled robot to follow a line is relatively simple, but a running human is by no means a stable sensor platform. At the previously mentioned hackathon, developers put together a rough proof of concept with a smartphone, using its camera to recognize a painted line on the ground and provide left/right audio cues. As the project developed, the smartphone was attached to a waist belt and bone conduction headphones were used, which don’t affect audio situational awareness as much as normal headphones.
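The core left/right-cue idea can be sketched in a few lines (a much-simplified hypothetical illustration: the real Project Guideline uses an on-device ML segmentation model, not the naive brightness thresholding assumed here, and the function name and parameters are invented for this sketch):

```python
import numpy as np

def guidance_cue(frame, threshold=200, deadband=0.05):
    """Return 'left', 'right', 'straight', or 'stop' from a grayscale frame.

    Assumes the painted guideline shows up as bright pixels (> threshold)
    and looks only at the bottom third of the image, i.e. the ground
    directly ahead of the runner.
    """
    h, w = frame.shape
    strip = frame[2 * h // 3 :]                # bottom third of the frame
    ys, xs = np.nonzero(strip > threshold)     # pixels belonging to the line
    if xs.size == 0:
        return "stop"                          # line lost: tell the runner to stop
    offset = (xs.mean() - w / 2) / (w / 2)     # -1 (far left) .. +1 (far right)
    if abs(offset) < deadband:
        return "straight"
    return "right" if offset > 0 else "left"   # cue the runner back toward the line

# Synthetic 120x160 frame with a bright vertical line left of center.
frame = np.zeros((120, 160), dtype=np.uint8)
frame[:, 40:44] = 255
print(guidance_cue(frame))  # line sits left of center -> "left"
```

Running the cue on every camera frame and rendering it as panned audio in bone conduction headphones closes the loop; the hard engineering is in making the line detection robust to the shaking, lighting, and occlusion problems described below.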

The shaking and side to side movement of running, and varying light conditions and visual obstructions in the outdoors made the problem more difficult to solve, but within a year the developers had completed successful running tests with [Thomas] on a well-lit indoor track and an outdoor pedestrian path with a temporary line. For the first time in 25 years, [Thomas] was able to run independently.