
These dated interfaces are not equipped to handle today’s exponential rise in data, which has been ushered in by the rapid dematerialization of many physical products into computers and software.

Breakthroughs in perceptual and cognitive computing, especially machine learning algorithms, are enabling technology to process vast volumes of data and, in doing so, dramatically amplify our brains' abilities. Yet even with these powerful technologies that at times make us feel superhuman, the interfaces are still hampered by poor ergonomics.

Many interfaces are still designed around the concept that human interaction with technology is secondary, not instantaneous. This means that any time someone uses technology, they are inevitably multitasking, because they must simultaneously perform a task and operate the technology.

Read more

Using steam to propel a spacecraft from asteroid to asteroid is now possible, thanks to a collaboration between a private space company and the University of Central Florida.

UCF planetary research scientist Phil Metzger worked with Honeybee Robotics of Pasadena, California, which developed the World Is Not Enough spacecraft prototype that extracts water from asteroids or other planetary bodies to generate steam and propel itself to its next mining target.

UCF provided the simulated asteroid material and Metzger did the computer modeling and simulation necessary before Honeybee created the prototype and tried out the idea in its facility Dec. 31. The team also partnered with Embry-Riddle Aeronautical University in Daytona Beach, Florida, to develop initial prototypes of steam-based rocket thrusters.

Read more

Researchers at the University of Waterloo, Canada, have recently developed a system for generating song lyrics that match the style of particular music artists. Their approach, outlined in a paper pre-published on arXiv, uses a variational autoencoder (VAE) with artist embeddings and a CNN classifier trained to predict artists from mel spectrograms of their song clips.

“The motivation for this project came from my personal interest,” Olga Vechtomova, one of the researchers who carried out the study, told TechXplore. “Music is a passion of mine, and I was curious about whether a machine can generate lines that sound like the lyrics of my favourite music artists. While working on text generative models, my research group found that they can generate some impressive lines of text. The natural next step for us was to explore whether a machine could learn the ‘essence’ of a specific music artist’s lyrical style, including choice of words, themes and sentence structure, to generate novel lyrics lines that sound like the artist in question.”

The system developed by Vechtomova and her colleagues is based on a neural network model called a variational autoencoder (VAE), which can learn by reconstructing original lines of text. In their study, the researchers trained their model to generate any number of new, diverse and coherent lyric lines.
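As a rough illustration of how such a conditioned VAE can be wired up, the sketch below defines a GRU-based encoder and a decoder that receives both the latent code and a learned artist embedding at every step. This is a minimal sketch, assuming PyTorch; the module names, layer sizes, and conditioning scheme are assumptions made for illustration and do not reproduce the paper's actual architecture or training objective.

```python
# Minimal sketch (not the authors' code) of a VAE that reconstructs lyric
# lines and conditions its decoder on a learned artist embedding.
# All module names, sizes, and the conditioning scheme are assumptions.
import torch
import torch.nn as nn

class LyricVAE(nn.Module):
    def __init__(self, vocab_size, n_artists, emb_dim=128, hid_dim=256,
                 z_dim=64, artist_dim=32):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, emb_dim)
        self.artist_emb = nn.Embedding(n_artists, artist_dim)   # one vector per artist
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.to_mu = nn.Linear(hid_dim, z_dim)
        self.to_logvar = nn.Linear(hid_dim, z_dim)
        # Decoder sees the latent code and the artist vector at every step.
        self.decoder = nn.GRU(emb_dim + z_dim + artist_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def encode(self, tokens):
        _, h = self.encoder(self.token_emb(tokens))   # h: (1, B, hid_dim)
        h = h.squeeze(0)
        return self.to_mu(h), self.to_logvar(h)

    def reparameterize(self, mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def decode(self, tokens, z, artist_id):
        B, T = tokens.shape
        cond = torch.cat([z, self.artist_emb(artist_id)], dim=-1)   # (B, z+artist)
        cond = cond.unsqueeze(1).expand(B, T, -1)                   # repeated per step
        dec_in = torch.cat([self.token_emb(tokens), cond], dim=-1)
        out, _ = self.decoder(dec_in)
        return self.out(out)                                        # (B, T, vocab)

    def forward(self, tokens, artist_id):
        mu, logvar = self.encode(tokens)
        z = self.reparameterize(mu, logvar)
        logits = self.decode(tokens, z, artist_id)   # teacher-forced reconstruction
        # KL term keeps the latent space smooth so novel lines can be sampled.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
        return logits, kl
```

At generation time, sampling a latent vector and pairing it with a chosen artist's embedding would yield new lines intended to echo that artist's style.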

Read more

Researchers from Australia’s national science agency, CSIRO, have offered a bold glimpse into what the robots of the future could look like. And it’s nothing like C-3PO or a T-800 Terminator.

In a paper just published in Nature Machine Intelligence, CSIRO’s Active Integrated Matter Future Science Platform (AIM FSP) says robots could soon be taking their engineering cues from evolution, creating truly startling and effective designs.

This concept, known as Multi-Level Evolution (MLE), argues that current robots struggle in unstructured, complex environments because they aren’t specialised enough, and that robot design should instead emulate the incredibly diverse adaptations animals have undergone to survive in their environments.
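As a toy illustration of what "taking engineering cues from evolution" can mean, the sketch below runs a simple selection-and-mutation loop over made-up robot design parameters for a given environment. It is not the MLE framework described in the paper; the fitness function, design parameters, and environment description are assumptions invented purely for the example.

```python
# Toy illustration only: a minimal evolutionary loop that selects robot
# "designs" (here just parameter tuples) for fitness in a specific
# environment. Not the MLE framework from the paper.
import random

def fitness(design, environment):
    # Hypothetical: reward leg length matched to terrain roughness and low mass.
    leg_length, mass = design
    return -abs(leg_length - environment["terrain_roughness"]) - 0.1 * mass

def evolve(environment, generations=50, pop_size=30):
    population = [(random.uniform(0, 1), random.uniform(1, 10)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda d: fitness(d, environment), reverse=True)
        survivors = population[: pop_size // 2]
        # Refill the population with mutated copies of the fittest designs.
        population = survivors + [
            (max(0.0, s[0] + random.gauss(0, 0.05)), max(1.0, s[1] + random.gauss(0, 0.5)))
            for s in survivors
        ]
    return max(population, key=lambda d: fitness(d, environment))

best = evolve({"terrain_roughness": 0.7})
print(best)   # a design specialised for this particular terrain
```

Run against different environments, the same loop converges on different designs, which is the kind of environment-specific specialisation the paper argues robots currently lack.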

Read more

Rapid comprehension of world events is critical to informing national security efforts. Such events, noteworthy changes in the natural world or human society, can create significant impact on their own, or may form part of a causal chain that produces broader impact. Many events are not simple occurrences but complex phenomena composed of a web of numerous subsidiary elements, from actors to timelines. The growing volume of unstructured, multimedia information available, however, hampers efforts to uncover and understand these events and their underlying elements.

“The process of uncovering relevant connections across mountains of information and the static elements that they underlie requires temporal information and event patterns, which can be difficult to capture at scale with currently available tools and systems,” said Dr. Boyan Onyshkevych, a program manager in DARPA’s Information Innovation Office (I2O).

The use of schemas to help draw correlations across information isn’t a new concept. First defined by cognitive scientist Jean Piaget in 1923, schemas are units of knowledge that humans reference to make sense of events by organizing them into commonly occurring narrative structures. For example, a trip to the grocery store typically involves a purchase transaction schema, which is defined by a set of actions (payment), roles (buyer, seller), and temporal constraints (items are scanned and then payment is exchanged).
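One way to picture such a schema in code is the small Python sketch below, which captures the roles, actions, and ordering constraint of the grocery-store example. It is purely illustrative; the class and field names are assumptions and do not reflect any representation actually used in the DARPA program.

```python
# Illustrative sketch only: the purchase-transaction schema from the text
# expressed as a small data structure. Field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class EventSchema:
    name: str
    roles: list[str]                      # who participates (e.g. buyer, seller)
    actions: list[str]                    # what typically happens
    temporal_constraints: list[tuple[str, str]] = field(default_factory=list)  # (earlier, later)

    def satisfies_order(self, observed: list[str]) -> bool:
        """Check whether an observed action sequence respects the schema's ordering."""
        pos = {a: i for i, a in enumerate(observed)}
        return all(
            before in pos and after in pos and pos[before] < pos[after]
            for before, after in self.temporal_constraints
        )

purchase = EventSchema(
    name="purchase_transaction",
    roles=["buyer", "seller"],
    actions=["scan_items", "exchange_payment"],
    temporal_constraints=[("scan_items", "exchange_payment")],
)
print(purchase.satisfies_order(["scan_items", "exchange_payment"]))  # True
```

Matching streams of observed, unstructured events against libraries of such schemas is the kind of task the program aims to automate at scale.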

Read more