Amazon and Johns Hopkins University (JHU) today announced the creation of the JHU + Amazon Initiative for Interactive AI (AI2AI).

The Amazon-JHU collaboration will focus on driving ground-breaking AI advances with an emphasis on machine learning, computer vision, natural language understanding, and speech processing. Sanjeev Khudanpur, an associate professor in the Department of Electrical and Computer Engineering, will serve as the founding director of the initiative.

Amazon’s sponsorship of AI2AI, which will be housed in JHU’s Whiting School of Engineering, underscores its commitment to partnering with academia to address the most complex challenges in AI, democratizing access to the benefits of AI innovations, and broadening research participation among diverse, interdisciplinary scholars and other innovators.

A pair of researchers working in the Personal Robotics Laboratory at Imperial College London has taught a robot to put a surgical gown on a supine mannequin. In their paper published in the journal Science Robotics, Fan Zhang and Yiannis Demiris described the approach they used to teach the robot to partially dress the mannequin. Júlia Borràs, with Institut de Robòtica i Informàtica Industrial, CSIC-UPC, has published a Focus piece in the same journal issue outlining the difficulties in getting robots to handle soft material and the work done by the researchers on this new effort.

As researchers and engineers continue to improve the state of robotics, one area has garnered a lot of attention: using robots to assist with health care. In this instance, the focus was on assisting patients who have lost the use of their limbs. In such cases, dressing and undressing falls to healthcare workers. Teaching a robot to dress patients has proven challenging because of the soft materials used to make clothes: fabric can deform in a near-infinite number of ways, making it difficult to teach a robot how to handle it. To overcome this problem in a clearly defined setting, Zhang and Demiris used a new approach.

The setting was a simulated hospital room with a mannequin lying face up on a bed. Nearby, a hook affixed to the wall held a surgical gown of the kind worn by pushing the arms forward through the sleeves and tying in the back. The robot’s task was to remove the gown from the hook, maneuver it into an optimal position, move to the bedside, identify the “patient” and its orientation, and then place the gown on the patient by lifting each arm in turn and pulling the sleeve over it in a natural way.
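That sequence is essentially a robot program, so a short sketch may help make the steps concrete. The following Python stub is purely illustrative: the class and method names are hypothetical placeholders for the steps described above, not the authors’ actual software.

```python
# A minimal, runnable sketch of the dressing sequence described in the
# article. All class and method names are illustrative placeholders,
# not the authors' real code.

from dataclasses import dataclass

@dataclass
class Pose:
    arms: tuple  # which arms to dress, e.g. ("left", "right")

class DressingRobot:
    def grasp_gown_from_hook(self):
        print("Removing gown from wall hook")

    def reposition_gown(self):
        print("Maneuvering gown into an optimal, open configuration")

    def move_to_bedside(self):
        print("Moving to bedside")

    def estimate_patient_pose(self) -> Pose:
        print("Identifying patient and estimating orientation")
        return Pose(arms=("left", "right"))

    def dress_arm(self, arm: str):
        print(f"Lifting {arm} arm and pulling sleeve over it")

def run_dressing_task():
    robot = DressingRobot()
    robot.grasp_gown_from_hook()
    robot.reposition_gown()
    robot.move_to_bedside()
    pose = robot.estimate_patient_pose()
    for arm in pose.arms:  # one arm at a time, as in the experiment
        robot.dress_arm(arm)

if __name__ == "__main__":
    run_dressing_task()
```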

Cybersecurity researchers have detailed a “simple but efficient” persistence mechanism adopted by a relatively nascent malware loader called Colibri, which has been observed deploying a Windows information stealer known as Vidar as part of a new campaign.

“The attack starts with a malicious Word document deploying a Colibri bot that then delivers the Vidar Stealer,” Malwarebytes Labs said in an analysis. “The document contacts a remote server at (securetunnel[.]co) to load a remote template named ‘trkal0.dot’ that contains a malicious macro,” the researchers added.
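The remote-template trick described in that quote is visible in the document file itself: a .docx is a ZIP archive, and an externally attached template shows up as an external “attachedTemplate” relationship in word/_rels/settings.xml.rels. Below is a minimal sketch, assuming a standard OOXML file, of how a defender might scan a document for such references; the script is an illustration, not a tool from the Malwarebytes report.

```python
# Minimal scan of a .docx for external (remote) template references,
# the OOXML mechanism behind remote-template injection. Illustrative only.

import sys
import zipfile
import xml.etree.ElementTree as ET

RELS_PATH = "word/_rels/settings.xml.rels"
NS = "{http://schemas.openxmlformats.org/package/2006/relationships}"

def remote_templates(docx_path):
    """Yield any external template URLs the document points to."""
    with zipfile.ZipFile(docx_path) as z:
        if RELS_PATH not in z.namelist():
            return  # no settings relationships, so no attached template
        root = ET.fromstring(z.read(RELS_PATH))
        for rel in root.iter(NS + "Relationship"):
            if (rel.get("Type", "").endswith("/attachedTemplate")
                    and rel.get("TargetMode") == "External"):
                yield rel.get("Target")

if __name__ == "__main__":
    for url in remote_templates(sys.argv[1]):
        print("External template reference:", url)
```

A document that pulls its template from an unexpected remote host, as in the campaign above, is a strong signal for closer inspection.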

First documented by FR3D.HK and Indian cybersecurity company CloudSEK earlier this year, Colibri is a malware-as-a-service (MaaS) platform that’s engineered to drop additional payloads onto compromised systems. Early signs of the loader appeared on Russian underground forums in August 2021.

Say cheese! Researchers have developed a tiny camera that takes amazingly clear photos. Just don’t sneeze while it’s in your hand; at the size of a coarse grain of salt, it may never be found again.

Smaller cameras could mean lighter smartphones and new James Bond–style gadgets. But that’s not all. Cameras on this scale could swim through the body, hitch a ride on an insect, scope out your brain or monitor hostile environments. And those are just a few of the possibilities.

How do you pack that much picture-taking power into something the size of a crumb? It takes a “radically different approach” to making a camera lens, says Felix Heide. He’s a computer scientist at Princeton University in New Jersey. His lab developed the camera with colleagues from the University of Washington in Seattle. The team shared its work in Nature Communications in November.

In January 2021, the OpenAI research lab, co-founded by Elon Musk and financially backed by Microsoft, unveiled its most ambitious project to date: the DALL-E machine learning system. This ingenious multimodal AI could generate images (albeit rather cartoonish ones) from attributes described by a user: think “a cat made of sushi” or “an x-ray of a capybara sitting in a forest.” On Wednesday, the lab unveiled DALL-E’s next iteration, which boasts higher resolution and lower latency than the original.

The first DALL-E (a portmanteau of “Dalí,” as in the artist, and “WALL-E,” as in the animated Pixar character) could generate images as well as combine multiple images into a collage, provide varying angles of perspective, and even infer elements of an image, such as shadowing effects, from the written description.

“Unlike a 3D rendering engine, whose inputs must be specified unambiguously and in complete detail, DALL·E is often able to ‘fill in the blanks’ when the caption implies that the image must contain a certain detail that is not explicitly stated,” the OpenAI team wrote in 2021.

The study also developed an automated diagnostic pipeline to streamline the genomic data (including the millions of variants present in each genome) for clinical interpretation. Variants unlikely to contribute to the presenting disease are removed, potentially causative variants are identified, and the most likely candidates are prioritized. For the pipeline, the researchers and clinicians used Exomiser, a software tool that Robinson co-developed in 2014. To assist with the diagnostic process, Exomiser uses a phenotype-matching algorithm to identify and prioritize the gene variants revealed by sequencing. It thus automates the search for rare, segregating, and predicted-pathogenic variants in genes whose known phenotype associations, drawn from human disease or model organism databases, match the patient’s phenotypes. The paper notes that the use of Exomiser greatly increased the number of successful diagnoses.
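To make that filter-then-prioritize flow concrete, here is a toy sketch of the idea: drop variants that are too common or predicted benign, then rank the survivors by overlap between the patient’s HPO terms and the phenotypes annotated to each candidate gene. Everything below (the data structures, thresholds, and the simple Jaccard score) is a hypothetical illustration of the concept, not Exomiser’s actual algorithm, which uses far more sophisticated phenotype matching.

```python
# Toy filter-then-prioritize flow for rare-disease variant analysis.
# Hypothetical illustration only; not Exomiser's algorithm.

from dataclasses import dataclass

@dataclass
class Variant:
    gene: str
    population_freq: float  # allele frequency in reference populations
    pathogenicity: float    # in-silico pathogenicity score, 0..1
    gene_hpo_terms: set     # HPO terms linked to the gene in disease databases

def prioritize(variants, patient_hpo, max_freq=0.001, min_path=0.7):
    """Remove unlikely variants, then rank the rest by phenotype overlap."""
    candidates = [
        v for v in variants
        if v.population_freq <= max_freq and v.pathogenicity >= min_path
    ]
    def phenotype_score(v):
        # Jaccard overlap between patient HPO terms and gene annotations.
        union = v.gene_hpo_terms | patient_hpo
        return len(v.gene_hpo_terms & patient_hpo) / len(union) if union else 0.0
    return sorted(candidates, key=phenotype_score, reverse=True)

# Example: the rare, predicted-pathogenic variant in a gene whose known
# phenotypes match the patient's ranks first.
patient = {"HP:0001250", "HP:0001263"}
ranked = prioritize(
    [Variant("GENE_A", 0.0001, 0.9, {"HP:0001250", "HP:0001263"}),
     Variant("GENE_B", 0.0500, 0.9, {"HP:0001250"}),   # too common: filtered
     Variant("GENE_C", 0.0001, 0.2, {"HP:0001263"})],  # likely benign: filtered
    patient,
)
print([v.gene for v in ranked])  # ['GENE_A']
```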

The genomic future.

Not surprisingly, the paper concludes that the findings from the pilot study support the case for using whole genome sequencing for diagnosing rare disease patients. Indeed, in patients with specific disorders such as intellectual disability, genome sequencing is now the first-line test within the NHS. The paper also emphasizes the importance of using the HPO to establish a standardized, computable clinical vocabulary, which provides a solid foundation for all genomics-based diagnoses, not just those for rare disease. As the 100,000 Genomes Project continues its work, the HPO will continue to be an essential part of improving patient prognoses through genomics.