
In January 2021, OpenAI, the AI research organization co-founded by Elon Musk and financially backed by Microsoft, unveiled its most ambitious project to date: the DALL-E machine learning system. This ingenious multimodal AI could generate images (albeit rather cartoonish ones) based on the attributes described by a user; think “a cat made of sushi” or “an x-ray of a capybara sitting in a forest.” On Wednesday, the organization unveiled DALL-E’s next iteration, which boasts higher resolution and lower latency than the original.

The first DALL-E (a portmanteau of “Dalí,” as in the artist, and “WALL-E,” as in the animated Pixar robot) could generate images, combine multiple images into a collage, render scenes from varying angles of perspective, and even infer elements of an image, such as shadowing effects, from the written description.

“Unlike a 3D rendering engine, whose inputs must be specified unambiguously and in complete detail, DALL·E is often able to ‘fill in the blanks’ when the caption implies that the image must contain a certain detail that is not explicitly stated,” the OpenAI team wrote in 2021.

The study also developed an automated diagnostic pipeline to streamline the genomic data, including the millions of variants present in each genome, for clinical interpretation. Variants unlikely to contribute to the presenting disease are removed, potentially causative variants are identified, and the most likely candidates are prioritized. For this pipeline, the researchers and clinicians used Exomiser, a software tool that Robinson co-developed in 2014. To assist with the diagnostic process, Exomiser uses a phenotype-matching algorithm to identify and prioritize gene variants revealed through sequencing. It thus automates the process of finding rare, segregating, and predicted pathogenic variants in genes in which the patient’s phenotypes match previously referenced knowledge from human disease or model organism databases. The paper notes that the use of Exomiser greatly increased the number of successful diagnoses made.
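To make the filtering-and-ranking idea concrete, here is a minimal sketch of phenotype-driven variant prioritization in the spirit of Exomiser. The function names, the equal weighting, the frequency cutoff, and the simple Jaccard phenotype match are illustrative assumptions, not Exomiser’s actual scoring.

```python
# Illustrative sketch of phenotype-driven variant prioritization, in the
# spirit of (but not identical to) Exomiser. All names, thresholds, and
# the Jaccard phenotype match are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Variant:
    gene: str
    population_frequency: float  # allele frequency in reference cohorts
    pathogenicity: float         # 0..1 score from an in-silico predictor

def phenotype_match(patient_hpo: set, gene_hpo: set) -> float:
    """Jaccard overlap between the patient's HPO terms and terms
    previously associated with the gene (human or model-organism data)."""
    if not patient_hpo or not gene_hpo:
        return 0.0
    return len(patient_hpo & gene_hpo) / len(patient_hpo | gene_hpo)

def prioritize(variants, patient_hpo, gene_annotations, max_frequency=0.001):
    """Drop common variants, then rank the remainder by a combined
    pathogenicity + phenotype-match score (equal weights assumed)."""
    rare = [v for v in variants if v.population_frequency <= max_frequency]
    scored = [(0.5 * v.pathogenicity
               + 0.5 * phenotype_match(patient_hpo,
                                       gene_annotations.get(v.gene, set())), v)
              for v in rare]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

# Toy example: two rare variants survive the frequency filter.
variants = [Variant("GENE_A", 0.0001, 0.9),
            Variant("GENE_B", 0.05, 0.95),   # too common; filtered out
            Variant("GENE_C", 0.0002, 0.6)]
annotations = {"GENE_A": {"HP:0001250", "HP:0004322"},
               "GENE_C": {"HP:0000118"}}
for score, variant in prioritize(variants, {"HP:0001250", "HP:0004322"},
                                 annotations):
    print(f"{variant.gene}: {score:.2f}")
```

The key design point the paper attributes to Exomiser is visible even in this toy: frequency filtering shrinks the candidate set, and phenotype similarity, not pathogenicity alone, decides the final ranking.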

The genomic future.

Not surprisingly, the paper concludes that the findings from the pilot study support the case for using whole genome sequencing for diagnosing rare disease patients. Indeed, in patients with specific disorders such as intellectual disability, genome sequencing is now the first-line test within the NHS. The paper also emphasizes the importance of using the HPO to establish a standardized, computable clinical vocabulary, which provides a solid foundation for all genomics-based diagnoses, not just those for rare disease. As the 100,000 Genomes Project continues its work, the HPO will continue to be an essential part of improving patient prognoses through genomics.

The FAA has granted Iris Automation a second waiver for Beyond Visual Line of Sight (BVLOS) autonomous drone operations on behalf of the City of Reno. But while the previous waiver required the use of Iris Automation’s advanced detect-and-avoid solution Casia X, this one utilizes the company’s ground-based solution, Casia G.

The fresh waiver allows an operator to fly without visual observers and without the Remote Pilot in Command maintaining visual contact with the drone. Casia G uses Iris Automation’s patented detect-and-avoid technology to create a stationary perimeter of sanitized, monitored airspace, enabling drones to complete missions safely. The system also detects intruding piloted aircraft so that drones can be maneuvered to safe zones.
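Conceptually, a ground-based system of this kind can be pictured as a geofence check around the sensor site. The sketch below is a hypothetical illustration, not Iris Automation’s software: the coordinates, radius, distance approximation, and decision logic are all assumptions.

```python
# Hypothetical sketch of a ground-based "monitored perimeter" check,
# loosely illustrating the Casia G concept. Not Iris Automation code.

import math

EARTH_RADIUS_M = 6_371_000

def ground_distance_m(p, q):
    """Approximate distance in metres between two (lat, lon) points,
    using an equirectangular approximation (adequate at short range)."""
    mean_lat = math.radians((p[0] + q[0]) / 2)
    dx = math.radians(q[1] - p[1]) * math.cos(mean_lat) * EARTH_RADIUS_M
    dy = math.radians(q[0] - p[0]) * EARTH_RADIUS_M
    return math.hypot(dx, dy)

def airspace_decision(sensor_site, radius_m, intruders):
    """Return a divert command if any detected aircraft is inside the
    monitored perimeter; otherwise clear the drone to continue."""
    for intruder in intruders:
        if ground_distance_m(sensor_site, intruder) <= radius_m:
            return "divert to safe zone"
    return "continue mission"

site = (39.5296, -119.8138)  # Reno, used here purely as an example
print(airspace_decision(site, radius_m=5_000,
                        intruders=[(39.55, -119.80)]))  # -> divert
```

Because the perimeter is fixed to the ground station rather than carried on the aircraft, the same monitored volume can serve any drone operating inside it, which is what removes the need for onboard visual observers.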

Artificial intelligence is a target for every existing industry. Or is it just another overhyped innovation? It is no surprise that AI has become a catchall term throughout the job market. The US and China are neck and neck in the race for AI supremacy; although China aims to be the technology leader by 2030, its economy is still struggling with a slowdown and a trade war with the US. Emerging trends in artificial intelligence (AI) point toward geopolitical disruption in the foreseeable future. Just as the fourth industrial revolution augmented the rise of advanced economies, so will machine learning and artificial intelligence transform the world.

In machine learning, understanding why a model makes certain decisions is often just as important as whether those decisions are correct. For instance, a machine-learning model might correctly predict that a skin lesion is cancerous, but it could have done so using an unrelated blip on a clinical photo.

While tools exist to help experts make sense of a model’s reasoning, these methods often provide insights on only one decision at a time, and each must be manually evaluated. Models are commonly trained using millions of data inputs, making it almost impossible for a human to evaluate enough decisions to identify patterns.

Now, researchers at MIT and IBM Research have created a method that enables a user to aggregate, sort, and rank these individual explanations to rapidly analyze a model’s behavior. Their technique, called Shared Interest, incorporates quantifiable metrics that compare how well a model’s reasoning matches that of a human.
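As a rough illustration of the kind of quantifiable metric involved, the sketch below scores one decision by the intersection-over-union between the region a saliency method highlights and the region a human annotator marks as relevant. The choice of IoU, the 0.5 threshold, and the toy arrays are assumptions for illustration, not the paper’s exact formulation.

```python
# Minimal sketch of one "shared interest"-style metric: how much a
# model's saliency overlaps a human-annotated region. IoU and the 0.5
# binarization threshold are illustrative assumptions.

import numpy as np

def saliency_human_iou(saliency: np.ndarray,
                       human_mask: np.ndarray,
                       threshold: float = 0.5) -> float:
    """Binarize a saliency map, then compute intersection-over-union
    with a binary human annotation mask of the same shape."""
    model_mask = saliency >= threshold
    human_mask = human_mask.astype(bool)
    union = np.logical_or(model_mask, human_mask).sum()
    if union == 0:
        return 0.0
    return np.logical_and(model_mask, human_mask).sum() / union

# Toy example: the model attends to a 2x2 patch; the human marked 2x3.
saliency = np.zeros((4, 4)); saliency[1:3, 1:3] = 0.9
human = np.zeros((4, 4)); human[1:3, 1:4] = 1
print(f"IoU: {saliency_human_iou(saliency, human):.2f}")  # 4/6 = 0.67
```

Computing such a score for every input is what makes aggregation possible: thousands of explanations can be sorted by alignment, surfacing at a glance the cases where model and human reasoning diverge.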

The dissemination of synthetic biology into materials science is creating an evolving class of functional, engineered living materials that can grow, sense, and adapt much like biological organisms.

Nature has long served as inspiration for the design of materials with improved properties and advanced functionalities. Nonetheless, thus far, no synthetic material has been able to fully recapitulate the complexity of living materials. Living organisms are unique due to their multifunctionality and ability to grow, self-repair, sense and adapt to the environment in an autonomous and sustainable manner. The field of engineered living materials capitalizes on these features to create biological materials with programmable functionalities using engineering tools borrowed from synthetic biology. In this focus issue we feature a Perspective and an Article to highlight how synergies between synthetic biology and biomaterial sciences are providing next-generation engineered living materials with tailored functionalities.

Whether a computer could ever pass for a living thing is one of the key challenges for researchers in the field of Artificial Intelligence. There have been vast advancements in AI since Alan Turing first proposed what is now called the Turing Test, which asks whether a machine can exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. However, machines still struggle with one of the fundamental skills that is second nature for humans and other life forms: lifelong learning. That is, learning and adapting while doing a task without forgetting previous tasks, or intuitively transferring knowledge gleaned from one task to a different area.
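The forgetting problem has a concrete, well-known formulation in the literature. One common remedy, elastic weight consolidation, is a general technique and not the method of the paper discussed below: it penalizes changes to parameters that mattered for earlier tasks. Here is a minimal sketch of that penalty; the names, weights, and toy numbers are illustrative assumptions.

```python
# Schematic of an elastic-weight-consolidation-style penalty, one common
# approach to catastrophic forgetting. A generic illustration, not the
# method proposed in the Nature Machine Intelligence paper.

import numpy as np

def ewc_loss(task_loss: float,
             params: np.ndarray,
             old_params: np.ndarray,
             importance: np.ndarray,
             strength: float = 100.0) -> float:
    """New-task loss plus a quadratic penalty for moving parameters
    that were important (high Fisher information) on previous tasks."""
    penalty = np.sum(importance * (params - old_params) ** 2)
    return task_loss + 0.5 * strength * penalty

# Toy example: parameter 0 was critical for the old task, so moving it
# is expensive; parameter 1 is cheap to adapt to the new task.
old = np.array([1.0, -0.5])
importance = np.array([10.0, 0.01])
new = np.array([1.2, 0.8])
print(ewc_loss(task_loss=0.3, params=new, old_params=old,
               importance=importance))
```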

Now, with the support of the DARPA Lifelong Learning Machines (L2M) program, USC Viterbi researchers have collaborated with colleagues at institutions from around the U.S. and the world on a new resource for the future of AI learning, defining how artificial systems can successfully think, act and adapt in the real world, in the same way that living creatures do.

The paper, published in Nature Machine Intelligence, was co-authored by Alice Parker, Dean’s Professor of Electrical and Computer Engineering, and Francisco Valero-Cuevas, Professor of Biomedical Engineering and of Biokinesiology and Physical Therapy, together with their research teams, in collaboration with Professor Dhireesha Kudithipudi at the University of Texas at San Antonio and colleagues at 22 other universities.