Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Caltech have developed new origami-inspired soft robotic systems that can move and change shape in response to external stimuli. The work brings fully untethered soft robots a step closer: most soft robots today rely on external power and control, so they must be tethered to off-board systems built from hard components.

The research was published in Science Robotics. Jennifer A. Lewis, the Hansjörg Wyss Professor of Biologically Inspired Engineering at SEAS and co-lead author of the study, spoke about the new developments.

“The ability to integrate active materials within 3D-printed objects enables the design and fabrication of entirely new classes of soft robotic matter,” she said.

David Lindell, a graduate student in electrical engineering at Stanford University, and his team have developed a camera system that can watch moving objects around corners. To test the technology, Lindell wore a high-visibility tracksuit and moved around an empty room while a camera was aimed at a blank wall away from him. Using a high-powered laser, the team reconstructed his movements from single particles of light (photons) bounced off the walls around him, pairing highly sensitive detectors with an efficient processing algorithm.
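The article doesn’t spell out the reconstruction, but the core idea behind most around-the-corner imaging is well known: a photon’s round-trip travel time constrains where in the hidden scene it could have scattered. Below is a minimal, illustrative backprojection sketch in Python; the confocal-scanning setup and all names are our assumptions, not the Stanford team’s actual method.

```python
# Minimal non-line-of-sight backprojection sketch (illustrative only).
# Each photon histogram "votes" for every hidden-scene voxel whose
# round-trip distance matches the photon's arrival time.
import numpy as np

C = 3e8  # speed of light, m/s

def backproject(histograms, wall_pts, voxels, bin_width):
    """histograms: (n_wall, n_bins) photon counts per arrival-time bin
    wall_pts:   (n_wall, 3) scanned laser/detector points on the visible wall
    voxels:     (n_vox, 3) candidate 3D points in the hidden volume
    bin_width:  seconds per time bin
    Returns an intensity estimate per voxel."""
    intensity = np.zeros(len(voxels))
    for point, hist in zip(wall_pts, histograms):
        # Confocal assumption: laser and detector share a wall point,
        # so the light path is wall -> hidden voxel -> wall.
        dist = 2.0 * np.linalg.norm(voxels - point, axis=1)
        bins = np.round(dist / C / bin_width).astype(int)
        ok = bins < hist.shape[0]
        intensity[ok] += hist[bins[ok]]
    return intensity
```

Voxels that accumulate counts from many wall points are likely occupied; published systems such as Lindell’s replace this brute-force voting with far more efficient and noise-robust reconstructions.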

Gordon Wetzstein, assistant professor of electrical engineering at Stanford, spoke about the newly developed technology.

“People talk about building a camera that can see as well as humans for applications such as autonomous cars and robots, but we want to build systems that go well beyond that,” he said. “We want to see things in 3D, around corners and beyond the visible light spectrum.”

Humans can communicate a range of nonverbal emotions, from terrified shrieks to exasperated groans. Voice inflections and cues convey subtle feelings, from ecstasy to agony, arousal to disgust. Even in ordinary speech, the human voice is stuffed with meaning, and with a lot of potential value if you’re a company collecting personal data.

Now, researchers at Imperial College London have used AI to mask the emotional cues in users’ voices when they speak to internet-connected voice assistants. The idea is to put a “layer” between the user and the cloud their data is uploaded to, automatically converting emotional speech into “normal” speech. They recently published their paper, “Emotionless: Privacy-Preserving Speech Analysis for Voice Assistants,” on the arXiv preprint server.
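A rough sketch makes the idea of such a privacy “layer” concrete. The following Python snippet is our own simplification, not the authors’ system: it flattens two prosodic cues, pitch variation and loudness, before audio leaves the device.

```python
# Sketch of an on-device "emotion masking" layer (illustrative only):
# shift the speaker's median pitch to a fixed target and normalize
# loudness before the audio is forwarded to a cloud voice assistant.
import numpy as np
import librosa
import soundfile as sf

def mask_prosody(in_path, out_path, target_f0=150.0):
    y, sr = librosa.load(in_path, sr=16000)
    # Estimate pitch with probabilistic YIN; unvoiced frames come back NaN.
    f0, _, _ = librosa.pyin(y, fmin=65.0, fmax=400.0, sr=sr)
    median_f0 = np.nanmedian(f0)
    if np.isfinite(median_f0):
        # Move the median pitch toward a neutral target (in semitones).
        n_steps = 12.0 * np.log2(target_f0 / median_f0)
        y = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)
    # RMS-normalize so intensity no longer signals arousal.
    y = 0.1 * y / (np.sqrt(np.mean(y**2)) + 1e-8)
    sf.write(out_path, y, sr)
```

A uniform pitch shift like this preserves the words while blunting one emotional cue; the paper instead describes converting emotional speech into “normal” speech automatically, which can suppress richer cues than a fixed transform can.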

Our voices can reveal our confidence and stress levels, physical condition, age, gender, and personal traits. This isn’t lost on smart speaker makers, and companies such as Amazon are always working to improve the emotion-detecting abilities of AI.

But in many ways, the field of AI ethics remains limited. Researchers say they are blocked from investigating many systems thanks to trade secrecy protections and laws like the Computer Fraud and Abuse Act (CFAA). As interpreted by the courts, that law criminalizes breaking a website or platform’s terms of service, an often necessary step for researchers trying to audit online AI systems for unfair biases.


Whittaker acknowledges the potential for the AI ethics movement to be co-opted. But as someone who has fought for accountability from within Silicon Valley and outside it, Whittaker says she has seen the tech world begin to undergo a deep transformation in recent years. “You have thousands and thousands of workers across the industry who are recognizing the stakes of their work,” Whittaker explains. “We don’t want to be complicit in building things that do harm. We don’t want to be complicit in building things that benefit only a few and extract more and more from the many.”

It may be too soon to tell whether that new consciousness will precipitate real systemic change. But with the industry facing academic, regulatory, and internal scrutiny, it is at least safe to say that it won’t be going back to the adolescent, devil-may-care days of “move fast and break things” anytime soon.

“There has been a significant shift and it can’t be understated,” says Whittaker. “The cat is out of the box, and it’s not going back in.”

While the Valkyrie program develops one type of wingman drone, the broader Skyborg program is working on the hardware and software for integrating manned and unmanned fighters.

The U.S. Air Force’s future drone fighter is back in the air.

On June 11, 2019, the XQ-58 Valkyrie took off for its second test flight over Yuma, Arizona. The 29-foot-long, jet-powered drone “successfully completed all test objectives during a 71-minute flight,” the Air Force Research Laboratory announced.

New York, NY—August 12, 2019—Columbia engineers have designed a novel neck brace that supports the neck during its natural motion. It is the first device shown to dramatically assist patients suffering from Amyotrophic Lateral Sclerosis (ALS) in holding their heads up and actively supporting the head through its range of motion. The advance could improve patients’ quality of life, not only by improving eye contact during conversation, but also by facilitating use of the eyes as a joystick to control movements on a computer, much as the scientist Stephen Hawking famously did.


A team of engineers and neurologists led by Sunil Agrawal, professor of mechanical engineering and of rehabilitation and regenerative medicine, designed a comfortable, wearable robotic neck brace that incorporates both sensors and actuators to adjust head posture, restoring roughly 70% of the active range of motion of the human head. By simultaneously measuring motion with sensors on the brace and muscle activity with surface electromyography (EMG) of the neck muscles, the device also serves as a new diagnostic tool for impaired head-neck motion. The pilot study was published August 7 in the Annals of Clinical and Translational Neurology.
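As a rough illustration of what pairing brace sensors with EMG enables, one can compare the muscle effort expended against the head motion actually achieved. The sketch below uses hypothetical signal arrays and is not the study’s actual analysis pipeline.

```python
# Illustrative sketch: relate neck-muscle effort (surface EMG) to the
# head rotation measured by the brace's sensors. Arrays are hypothetical.
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(emg, fs, cutoff=5.0):
    """Rectify raw EMG and low-pass filter it into an activation envelope."""
    b, a = butter(4, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, np.abs(emg))

def effort_vs_motion(emg, head_angle_deg, fs):
    """Correlate muscle activation with achieved head rotation (degrees)."""
    env = emg_envelope(emg, fs)
    return np.corrcoef(env, head_angle_deg)[0, 1]
```

High muscle activation paired with little resulting motion is one simple signature of weakness, the kind of pattern such simultaneous measurements could expose.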

The brace also shows promise for clinical use beyond ALS, according to Agrawal, who directs the Robotics and Rehabilitation (ROAR) Laboratory. “The brace would also be useful to modulate rehabilitation for those who have suffered whiplash neck injuries from car accidents or have poor neck control because of neurological diseases such as cerebral palsy,” he said.

Huawei yesterday officially launched the Ascend 910, which it bills as the world’s most powerful artificial intelligence (AI) processor, along with MindSpore, an all-scenario AI computing framework.

“We have been making steady progress since we announced our AI strategy in October last year,” said Eric Xu, Huawei’s Rotating Chairman. “Everything is moving forward according to plan, from R&D to product launch. We promised a full-stack, all-scenario AI portfolio. And today we delivered, with the release of Ascend 910 and MindSpore. This also marks a new stage in Huawei’s AI strategy.”