
In January 2021, the OpenAI consortium, co-founded by Elon Musk and financially backed by Microsoft, unveiled its most ambitious project to date: the DALL-E machine learning system. This ingenious multimodal AI was capable of generating images (albeit rather cartoonish ones) based on the attributes described by a user — think “a cat made of sushi” or “an x-ray of a capybara sitting in a forest.” On Wednesday, the consortium unveiled DALL-E’s next iteration, which boasts higher resolution and lower latency than the original.

The first DALL-E (a portmanteau of “Dalí,” as in the artist, and “WALL-E,” as in the animated Pixar character) could generate images as well as combine multiple images into a collage, provide varying angles of perspective, and even infer elements of an image — such as shadowing effects — from the written description.

“Unlike a 3D rendering engine, whose inputs must be specified unambiguously and in complete detail, DALL·E is often able to ‘fill in the blanks’ when the caption implies that the image must contain a certain detail that is not explicitly stated,” the OpenAI team wrote in 2021.

The study also developed an automated diagnostic pipeline to streamline the analysis of genomic data — including the millions of variants present in each genome — for clinical interpretation. Variants unlikely to contribute to the presenting disease are removed, potentially causative variants are identified, and the most likely candidates are prioritized. For this pipeline, the researchers and clinicians used Exomiser, a software tool that Robinson co-developed in 2014. To assist with the diagnostic process, Exomiser uses a phenotype matching algorithm to identify and prioritize gene variants revealed through sequencing. It thus automates the process of finding rare, segregating, and predicted pathogenic variants in genes whose associated patient phenotypes match previously referenced knowledge from human disease or model organism databases. The paper notes that the use of Exomiser greatly increased the number of successful diagnoses made.
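To make the filter-then-prioritize logic above concrete, here is a minimal toy sketch in Python. All variant records, scores, and thresholds are hypothetical illustrations; the real Exomiser is a Java tool backed by large reference databases and far richer statistical models, and this sketch only mirrors the three steps the paragraph describes.

```python
def prioritize(variants, patient_phenotypes):
    """Toy phenotype-driven variant prioritization (illustrative only).

    variants: list of dicts with hypothetical fields:
      gene, population_frequency, pathogenicity (0-1),
      gene_phenotypes (set of HPO term IDs linked to the gene).
    patient_phenotypes: set of HPO term IDs observed in the patient.
    """
    candidates = []
    for v in variants:
        # Step 1: remove variants too common to cause a rare disease.
        if v["population_frequency"] > 0.001:
            continue
        # Step 2: combine predicted pathogenicity with the overlap between
        # the patient's phenotypes and those linked to the variant's gene.
        overlap = len(patient_phenotypes & v["gene_phenotypes"])
        phenotype_score = overlap / max(len(patient_phenotypes), 1)
        candidates.append((v["pathogenicity"] * phenotype_score, v["gene"]))
    # Step 3: rank the surviving candidates, best-supported first.
    return [gene for score, gene in sorted(candidates, reverse=True)]

# Hypothetical example: GENE_A is filtered out as too common; GENE_B
# survives and matches both patient phenotype terms.
variants = [
    {"gene": "GENE_A", "population_frequency": 0.05,
     "pathogenicity": 0.90, "gene_phenotypes": {"HP:0001250"}},
    {"gene": "GENE_B", "population_frequency": 0.0001,
     "pathogenicity": 0.95, "gene_phenotypes": {"HP:0001250", "HP:0001263"}},
]
print(prioritize(variants, {"HP:0001250", "HP:0001263"}))
```

In the real pipeline, the phenotype comparison is an ontology-aware semantic similarity over the HPO graph rather than the simple set overlap used here.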

The genomic future.

Not surprisingly, the paper concludes that the findings from the pilot study support the case for using whole genome sequencing for diagnosing rare disease patients. Indeed, in patients with specific disorders such as intellectual disability, genome sequencing is now the first-line test within the NHS. The paper also emphasizes the importance of using the HPO to establish a standardized, computable clinical vocabulary, which provides a solid foundation for all genomics-based diagnoses, not just those for rare disease. As the 100,000 Genomes Project continues its work, the HPO will continue to be an essential part of improving patient prognoses through genomics.

The FAA has granted Iris Automation a second waiver for Beyond Visual Line of Sight (BVLOS) autonomous drone operations on behalf of the City of Reno. But while the previous waiver required the use of Iris Automation’s advanced detect and avoid solution Casia X, this one utilizes the company’s Casia G ground-based solution (pictured above).

The fresh waiver allows an operator to fly without the need for visual observers or the Remote Pilot in Command to maintain visual contact with the drone. Casia G uses Iris Automation’s patented detect and avoid technology to create a stationary perimeter of sanitized, monitored airspace, enabling drones to complete missions safely. The system also provides awareness of intruder-piloted aircraft to maneuver drones to safe zones.

Artificial intelligence is a target for every existing industry. Or is it just another overhyped innovation? It comes as no surprise that AI has become a catchall term in today’s job market. The US and China are neck and neck in the race for AI supremacy. Although China aims to be the technology leader by 2030, its economy is still struggling with a slowdown and a trade war with the US. Emerging trends in artificial intelligence (AI) point toward geopolitical disruption in the foreseeable future. Just as the fourth industrial revolution augmented the rise of advanced economies, machine learning and artificial intelligence will transform the world.

In machine learning, understanding why a model makes certain decisions is often just as important as whether those decisions are correct. For instance, a machine-learning model might correctly predict that a skin lesion is cancerous, but it could have done so using an unrelated blip on a clinical photo.

While tools exist to help experts make sense of a model’s reasoning, often these methods only provide insights on one decision at a time, and each must be manually evaluated. Models are commonly trained using millions of data inputs, making it almost impossible for a human to evaluate enough decisions to identify patterns.

Now, researchers at MIT and IBM Research have created a method that enables a user to aggregate, sort, and rank these individual explanations to rapidly analyze a machine-learning model’s behavior. Their technique, called Shared Interest, incorporates quantifiable metrics that compare how well a model’s reasoning matches that of a human.
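One simple way to quantify agreement between a model’s evidence and a human’s is to compare a saliency map against a human-annotated ground-truth region. The sketch below, using an intersection-over-union score, is an illustrative assumption about how such a metric could work, not the published Shared Interest implementation; the function name and threshold are hypothetical.

```python
import numpy as np

def explanation_iou(saliency, human_mask, threshold=0.5):
    """Intersection-over-union between the model's salient region
    (saliency values >= threshold) and a human-annotated binary mask.
    Returns 1.0 for perfect agreement, 0.0 for none."""
    model_region = saliency >= threshold
    human_region = human_mask.astype(bool)
    intersection = np.logical_and(model_region, human_region).sum()
    union = np.logical_or(model_region, human_region).sum()
    return float(intersection / union) if union else 0.0

# Aggregating: score every explanation in a dataset, then sort so the
# least-aligned cases surface first for inspection.
def rank_explanations(saliencies, masks):
    scores = [explanation_iou(s, m) for s, m in zip(saliencies, masks)]
    return np.argsort(scores)  # indices, worst agreement first
```

Scoring every example with one number is what makes aggregate analysis possible: instead of eyeballing explanations one at a time, a user can sort thousands of cases and jump straight to those where the model relied on the wrong evidence.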