
Study suggests smart assistant design improvements for deaf users

Despite the inherent challenges that voice interaction may create, researchers at the Penn State College of Information Sciences and Technology recently found that deaf and hard-of-hearing users regularly use smart assistants like Amazon’s Alexa and Apple’s Siri at home, at work and on mobile devices.

The work highlights a clear need for more inclusive design, and presents an opportunity for deaf and hard-of-hearing users to have a more active role in the research and development of new systems, according to Johnna Blair, an IST doctoral student and member of the research team.

“As smart assistants become more common, are preloaded on every smartphone, and continue to provide benefits to the user beyond just the ease of voice activation, it’s important to understand how deaf and hard-of-hearing users have made smart assistants work for them and the realistic challenges they continue to face,” said Blair.

A model that can create unique Chinese calligraphy art

Over the past few years, computer scientists have developed increasingly advanced and sophisticated artificial intelligence (AI) tools that can tackle a wide variety of tasks. These include generative adversarial networks (GANs), machine-learning models that learn to generate new data, including text, audio files or images. Some of these models can also be tailored for creative purposes, for instance, to create unique drawings, songs or poems.
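The adversarial idea behind GANs can be illustrated with a toy example. The sketch below is not the calligraphy model described later, just a minimal one-dimensional GAN in NumPy: an affine generator tries to map standard-normal noise toward some "real" data, while a logistic discriminator tries to tell the two apart, and each is updated by hand-derived gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: samples from N(4, 1). The generator should learn
# to map standard-normal noise toward this distribution.
def real_batch(n):
    return rng.normal(4.0, 1.0, size=n)

# Generator: G(z) = w_g * z + b_g  (a single affine layer)
w_g, b_g = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w_d * x + b_d)
w_d, b_d = 0.1, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, n = 0.05, 64
for step in range(2000):
    # --- Discriminator step: push D(real) -> 1, D(fake) -> 0 ---
    x_real = real_batch(n)
    z = rng.normal(size=n)
    x_fake = w_g * z + b_g
    d_real = sigmoid(w_d * x_real + b_d)
    d_fake = sigmoid(w_d * x_fake + b_d)
    # Gradients of the binary cross-entropy w.r.t. discriminator params
    grad_w_d = np.mean((d_real - 1) * x_real) + np.mean(d_fake * x_fake)
    grad_b_d = np.mean(d_real - 1) + np.mean(d_fake)
    w_d -= lr * grad_w_d
    b_d -= lr * grad_b_d

    # --- Generator step: push D(fake) -> 1 (non-saturating loss) ---
    z = rng.normal(size=n)
    x_fake = w_g * z + b_g
    d_fake = sigmoid(w_d * x_fake + b_d)
    # Chain rule for -log D(G(z)) through the discriminator's input
    upstream = (d_fake - 1) * w_d
    w_g -= lr * np.mean(upstream * z)
    b_g -= lr * np.mean(upstream)

# The generator's offset drifts toward the real data's mean
print(round(float(b_g), 2))
```

Real GANs replace the two affine maps with deep networks and train on images or audio, but the alternating two-player update shown here is the same.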

Researchers at Tongji University in Shanghai, China, and the University of Delaware in the US have recently created a GAN-based model that can generate abstract artworks inspired by Chinese calligraphy. The term Chinese calligraphy refers to the artistic form in which Chinese characters were traditionally written.

“In 2019, we collaborated with a restaurant based in Shanghai to showcase some AI technologies for better customer engagement and experience,” Professor Harry Jiannan Wang, one of the researchers who carried out the study, told TechXplore. “We then had the idea to use AI technologies to generate personalized abstract art based on the dishes customers order and present the artwork to entertain customers while they wait for their meals to be served.”

Watch MIT’s ‘mini cheetah’ robots frolic, fall, flip – and play soccer together

Circa 2019


MIT’s Biomimetics Robotics department took a whole herd of its new ‘mini cheetah’ robots out for a group demonstration on campus recently – and the result is an adorable, impressive display of the current state of robotic technology in action.

The school’s students are seen coordinating nine of the dog-sized robots through a range of activities: moving in formation, doing flips, springing in slow motion from under piles of fall leaves, and even playing soccer.

The mini cheetah weighs just 20 lbs, and its design was revealed for the first time earlier this year by a team of robot developers at MIT’s Department of Mechanical Engineering. The mini cheetah is a shrunk-down version of the Cheetah 3, a much larger robot that is more expensive to produce, far less light on its feet and not nearly as customizable.

Humans could merge with AI through this specialized polymer

Elon Musk’s Neuralink has a straightforward outlook on artificial intelligence: “If you can’t beat ’em, join ’em.” The company means that quite literally — it’s building a device that aims to connect our brains with electronics, which would enable us, in theory, to control computers with our thoughts.

But how? What material would companies like Neuralink use to connect electronics with human tissue?

One potential solution was recently revealed at the American Chemical Society’s Fall 2020 Virtual Meeting & Expo. A team of researchers from the University of Delaware presented a new biocompatible polymer coating that could help devices better fuse with the brain.

Is neuroscience the key to protecting AI from adversarial attacks?

Deep learning has come a long way since the days when it could only recognize handwritten characters on checks and envelopes. Today, deep neural networks have become a key component of many computer vision applications, from photo and video editors to medical software and self-driving cars.

Roughly fashioned after the structure of the brain, neural networks have come closer to seeing the world as humans do. But they still have a long way to go, and they make mistakes in situations where humans would never err.

These situations, generally known as adversarial examples, change the behavior of an AI model in befuddling ways. Adversarial machine learning is one of the greatest challenges facing current artificial intelligence systems: adversarial examples can cause machine learning models to fail in unpredictable ways or become vulnerable to cyberattacks.
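How little it can take to flip a model's decision is easy to show on a toy linear classifier. The sketch below uses the fast gradient sign method (FGSM), a standard attack from the adversarial-ML literature (not one tied to this article): for a linear model the gradient of the score with respect to the input is just the weight vector, so nudging every feature by a small step against that gradient's sign is the worst-case perturbation of a given size.

```python
import numpy as np

# A toy linear classifier: predicts class 1 when w.x + b > 0.
# Weights are fixed by hand here; in practice they'd come from training.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    return int(w @ x + b > 0)

# A "clean" input confidently classified as class 1 (score = 2.2).
x = np.array([1.0, 0.2, 0.3])

# FGSM: step each feature by eps against the sign of the score's
# gradient w.r.t. the input, which for a linear model is simply w.
eps = 0.8
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the small perturbation flips the label
```

Deep networks are attacked the same way, except the input gradient is computed by backpropagation; the perturbations that fool them are often too small for a human to notice.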

‘Augmented creativity’: How AI can accelerate human invention

We’re witnessing the emergence of something called “augmented creativity,” in which humans use AI to help them understand the deluge of data.


Researchers at Carnegie Mellon developed an alternate method: an AI-based approach to mining the patent and research databases for ideas that could be combined to form interesting solutions to specific problems. Their system uses analogies to help connect work from two seemingly distinct areas, which they believe makes innovation faster and a lot cheaper.
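The retrieval idea underneath such a system can be sketched in a few lines. This is not the CMU researchers' method (which uses learned analogy representations over real patent databases), just a minimal bag-of-words vector-space search over a hypothetical mini-corpus, showing how a problem description can surface a relevant document from a different domain.

```python
import numpy as np

# Hypothetical abstracts standing in for patent/paper records.
docs = {
    "robotic gripper": "vacuum suction pump seal attach surface",
    "gecko-inspired adhesive": "microstructure adhesive attach release grip surface",
    "jet engine turbine": "combustion turbine thrust airflow blade",
}

# Vocabulary drawn from the corpus; query words outside it are ignored.
vocab = sorted({w for text in docs.values() for w in text.split()})

def embed(text):
    """Bag-of-words count vector over the corpus vocabulary."""
    words = text.split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# A problem statement, phrased without naming any one technology.
query = embed("a way to attach release and grip a surface")

# Rank documents by similarity to the problem description.
ranked = sorted(docs, key=lambda name: -cosine(embed(docs[name]), query))
print(ranked[0])
```

Production systems swap the count vectors for learned embeddings that capture an abstract's *purpose* rather than its surface wording, which is what lets them bridge two seemingly unrelated fields.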

Augmented creativity

Early prototypes highlight the important role humans can, and should, play in making sense of the suggestions proposed by the AI.

MIT Report: Robots Aren’t the Biggest Threat to the Future of Work—Policy Is

But the MIT report also acknowledges that while fears of an imminent jobs apocalypse have been over-hyped, the way technology has been deployed over recent decades has polarized the economy, with growth in both white-collar work and low-paid service work at the expense of middle-tier occupations like receptionists, clerks, and assembly-line workers.

This is not an inevitable consequence of technological change, though, say the authors. The problem is that the spoils from technology-driven productivity gains have not been shared equally. The report notes that while US productivity has risen 66 percent since 1978, compensation for production workers and those in non-supervisory roles has risen only 10 percent.

“People understand that automation can make the country richer and make them poorer, and that they’re not sharing in those gains,” economist David Autor, a co-chair of the task force, said in a press release. “We need to restore the synergy between rising productivity and improvements in labor market opportunity.”

Brett Vaughan — U.S. Navy Chief AI Officer and AI Portfolio Manager, Office of Naval Research



Brett Vaughan is the U.S. Navy Chief Artificial Intelligence (AI) Officer and AI Portfolio Manager at the Office of Naval Research (ONR).

Mr. Vaughan has 30 years of Defense Intelligence and Technology expertise with strengths in military support, strategic communications, geospatial intelligence (GEOINT), Naval Intelligence and Navy R&D.

He spent two decades in various roles at the National Geospatial-Intelligence Agency (NGA) and an additional 10 years in intelligence roles in the Office of the Chief of Naval Operations, and was appointed to his current role in 2019.

Mr. Vaughan holds master’s degrees in Environmental Science from Johns Hopkins University and in National Security and Strategic Studies from the Naval War College, as well as a bachelor’s degree in Geography and Cartography from the University of Mary Washington.
