Avalo, a crop development company based in North Carolina, is using machine learning models to accelerate the creation of new and resilient crop varieties.

The traditional way to select for favorable traits in crops is to identify individual plants that exhibit the trait – such as drought resistance – use those plants to pollinate others, and then plant the resulting seeds in fields to see how they perform. But that process requires growing a plant through its entire life cycle to evaluate the result, which can take many years.

Avalo uses an algorithm to identify the genetic basis of complex traits such as drought resistance or pest resistance in hundreds of crop varieties. Plants are cross-pollinated in the conventional way, but the algorithm can predict the performance of a seed without needing to grow it – speeding up the process by as much as 70%, according to Avalo chief technology officer Mariano Alvarez.
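Avalo hasn't published the details of its model, but predicting a trait from genetics is commonly framed as genomic prediction: a regression from genotype markers to measured trait values. Here is a minimal sketch of that general idea using scikit-learn, with entirely hypothetical data and parameters – not Avalo's actual method:

```python
# Minimal sketch of genomic prediction: regress a measured trait
# (e.g., yield under drought) on genotype markers, then score
# unplanted candidate seeds. Illustrative only -- not Avalo's model.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical training data: 500 plants genotyped at 2,000 markers
# (coded 0/1/2 copies of the alternate allele), each with a trait
# value measured in a field trial.
genotypes = rng.integers(0, 3, size=(500, 2000)).astype(float)
trait = genotypes[:, :50] @ rng.normal(size=50) + rng.normal(size=500)

# Ridge regression is a common baseline for genomic selection,
# since markers vastly outnumber plants.
model = Ridge(alpha=10.0).fit(genotypes, trait)

# Score new crosses from genotype alone -- no field season needed.
candidates = rng.integers(0, 3, size=(10, 2000)).astype(float)
predicted = model.predict(candidates)
print("Candidates ranked by predicted trait value:",
      np.argsort(predicted)[::-1])
```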

In recent years, engineers have been trying to create hardware systems that better support the high computational demands of machine learning algorithms. These include systems that can perform multiple functions, acting as sensors, memories and computer processors all at once.

Researchers at Peking University recently developed a new reconfigurable neuromorphic computing platform that integrates sensing and computing functions in a single device. This system, outlined in a paper published in Nature Electronics, comprises an array that pairs multiple phototransistors with one memristor (MP1R).

“The inspiration for this research stemmed from the limitations of traditional vision computing systems based on the CMOS von Neumann architecture,” Yuchao Yang, senior author of the paper, told Tech Xplore.
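The paper's device physics isn't reproduced here, but the core idea behind such sensing-plus-computing arrays can be sketched numerically: incident light becomes a photocurrent that is weighted by programmable memristor conductances, so a single readout performs a matrix-vector multiply right at the sensor, with no separate memory fetch. A toy NumPy model with entirely hypothetical values:

```python
# Toy model of in-sensor analog computing: each output column's
# current is the light-intensity vector weighted by programmable
# conductances -- a matrix-vector product done "in the pixel".
# Hypothetical numbers; not the MP1R device physics.
import numpy as np

rng = np.random.default_rng(1)

light = rng.uniform(0.0, 1.0, size=64)               # incident intensities (a.u.)
responsivity = 0.8                                   # phototransistor gain (a.u.)
conductance = rng.uniform(0.1, 1.0, size=(64, 10))   # memristor weights (a.u.)

# Summing the weighted photocurrents per column (Kirchhoff's current
# law) yields one multiply-accumulate per readout.
output_currents = (responsivity * light) @ conductance
print(output_currents.round(3))
```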

In 2018, Google DeepMind’s AlphaZero program taught itself the games of chess, shogi, and Go using machine learning and a special algorithm to determine the best moves to win a game within a defined grid. Now, a team of Caltech researchers has developed an analogous algorithm for autonomous robots—a planning and decision-making control system that helps freely moving robots determine the best movements to make as they navigate the real world.

“Our algorithm actually strategizes and then explores all the possible and important motions and chooses the best one through dynamic simulation, like playing many simulated games involving moving robots,” says Soon-Jo Chung, Caltech’s Bren Professor of Control and Dynamical Systems and a senior research scientist at JPL, which Caltech manages for NASA. “The breakthrough innovation here is that we have derived a very efficient way of finding that optimal safe motion that typical optimization-based methods would never find.”

The team describes the technique, which they call Spectral Expansion Tree Search (SETS), in the December cover article of the journal Science Robotics.
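SETS itself is not reproduced here, but the general flavor Chung describes – playing out candidate motions in a dynamics simulation and keeping the best one that stays safe – can be sketched with a toy one-dimensional robot. All dynamics, costs, and constants below are hypothetical:

```python
# Toy sketch of simulation-based motion selection: roll out each
# candidate control sequence through a dynamics model, discard
# unsafe ones, keep the best. Illustrative of the general idea,
# not the SETS algorithm from the Science Robotics paper.
import itertools

DT = 0.1
OBSTACLE = (0.8, 1.2)   # hypothetical unsafe interval on the line

def step(state, u):
    """Double-integrator dynamics: state = (position, velocity)."""
    pos, vel = state
    return (pos + vel * DT, vel + u * DT)

def rollout(state, controls):
    """Simulate a control sequence; return final state and safety."""
    for u in controls:
        state = step(state, u)
        if OBSTACLE[0] <= state[0] <= OBSTACLE[1]:
            return state, False
    return state, True

def best_motion(state, goal=2.0, horizon=8):
    """'Play out' every short control sequence; pick the safe one
    ending closest to the goal."""
    best, best_cost = None, float("inf")
    for controls in itertools.product((-1.0, 0.0, 1.0), repeat=horizon):
        final, safe = rollout(state, controls)
        cost = abs(final[0] - goal)
        if safe and cost < best_cost:
            best, best_cost = controls, cost
    return best

print(best_motion((0.0, 0.0))[:3])  # first few controls of the best plan
```

The exhaustive enumeration above scales exponentially with the horizon; the efficiency Chung highlights is precisely about avoiding that blow-up, which this sketch does not attempt.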

AWS and NVIDIA are teaming up to address one of the biggest challenges in quantum computing: integrating classical computing into the quantum stack, according to an AWS Quantum Technologies blog post. This partnership brings NVIDIA’s open-source CUDA-Q quantum development platform to Amazon Braket, enabling researchers to design, simulate and execute hybrid quantum-classical algorithms more efficiently.

Hybrid computing — where classical and quantum systems work together — is actually a facet of all quantum computing applications. Classical computers handle tasks like algorithm testing and error correction, while quantum computers tackle problems beyond classical reach. As quantum processors improve, the demand for classical computing power grows exponentially, especially for tasks like error mitigation and pre-processing.

The collaboration between AWS and NVIDIA is designed to ease this transition by providing researchers with seamless access to NVIDIA’s CUDA-Q platform directly within Amazon Braket. This integration allows users to test their programs using powerful GPUs, then execute the same programs on quantum hardware without extensive modifications.
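The workflow looks roughly like the following CUDA-Q sketch: define a kernel once, iterate against a GPU-accelerated simulator, then retarget hardware. Target names and availability vary by CUDA-Q version, so treat the Braket line as illustrative:

```python
# Minimal CUDA-Q sketch of the hybrid workflow: one kernel,
# simulated on GPUs first, then pointed at quantum hardware.
import cudaq

@cudaq.kernel
def bell():
    qubits = cudaq.qvector(2)
    h(qubits[0])
    x.ctrl(qubits[0], qubits[1])
    mz(qubits)

# Classical side: debug and test against a GPU-accelerated simulator.
cudaq.set_target("nvidia")
print(cudaq.sample(bell, shots_count=1000))

# Same program on quantum hardware via Amazon Braket (illustrative;
# check the target name for your CUDA-Q version):
# cudaq.set_target("braket")
# print(cudaq.sample(bell, shots_count=1000))
```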

Running massive AI models locally on smartphones or laptops may be possible thanks to a new compression algorithm that trims down their size – meaning your data never leaves your device. The catch is that it might drain your battery in an hour.
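The article doesn't spell out the algorithm, but a common way such compression works is quantization: storing weights at lower numeric precision. A minimal sketch of symmetric int8 quantization, which alone cuts storage fourfold – illustrative only, not the specific method reported:

```python
# Why compression shrinks on-device models: symmetric int8
# quantization stores each float32 weight in one byte (4x smaller)
# plus a single scale per tensor. Illustrative sketch only.
import numpy as np

weights = np.random.default_rng(2).normal(size=4096).astype(np.float32)

scale = np.abs(weights).max() / 127.0           # map max |w| to int8 range
quantized = np.round(weights / scale).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale

print("size reduction: %.0fx" % (weights.nbytes / quantized.nbytes))
print("max abs error:  %.5f" % np.abs(weights - dequantized).max())
```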

Despite technological advances like electronic health records (EHRs) and dictation tools, the administrative load on healthcare providers has only grown, often overshadowing the time and energy dedicated to direct patient care. This escalation in clerical tasks is a major contributor to physician burnout and dissatisfaction, affecting not only the well-being of providers but also the quality of care they deliver.

During consultations, the focus on documentation can detract from meaningful patient interactions, resulting in fragmented, rushed, and sometimes impersonal communication. The need for a solution that both streamlines documentation and restores the patient-centred nature of healthcare has never been more pressing. This is where AI-powered medical scribes come into play, offering a promising path from traditional dictation to fully automated, integrated documentation support.

AI medical scribe software utilises advanced artificial intelligence and machine learning to transcribe, in real time, entire patient-physician consultations without the need for traditional audio recordings. Leveraging sophisticated speech recognition and natural-language processing (NLP) algorithms, AI scribes are capable of interpreting and processing complex medical conversations with impressive accuracy. These systems can intelligently filter out non-essential dialogue, such as greetings and small talk, to create a streamlined and detailed clinical note.
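A toy version of that pipeline can be sketched with an open-source speech model. Commercial scribes use far richer NLP than the keyword filter below, and the audio filename is hypothetical; this only illustrates the transcribe-then-filter shape of the task:

```python
# Toy AI-scribe pipeline: transcribe a consultation with an
# open-source ASR model, then drop obvious small talk before
# drafting the note. The keyword filter is purely illustrative.
import whisper

SMALL_TALK = ("good morning", "how are you", "nice weather",
              "thanks for coming", "see you")

model = whisper.load_model("base")
result = model.transcribe("consultation.wav")  # hypothetical audio file

clinical_lines = [
    seg["text"].strip()
    for seg in result["segments"]
    if not any(p in seg["text"].lower() for p in SMALL_TALK)
]
print("\n".join(clinical_lines))  # raw material for the clinical note
```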

A new study from Washington University School of Medicine in St. Louis describes an innovative method of analyzing mammograms that significantly improves the accuracy of predicting the risk of breast cancer development over the following five years.

Using up to three years of previous mammograms, the new method identified individuals at high risk of developing breast cancer 2.3 times more accurately than the standard method, which is based on questionnaires assessing clinical risk factors alone, such as age, race and family history of breast cancer.

The study is published Dec. 5 in JCO Clinical Cancer Informatics.

New research from the Human Cell Atlas offers insights into cell development, disease mechanisms, and genetic influences, enhancing our understanding of human biology and health.

The Human Cell Atlas (HCA) consortium has made significant progress in its mission to better understand the cells of the human body in health and disease, with the recent publication of a Collection of more than 40 peer-reviewed papers in Nature and other Nature Portfolio journals.

The Collection showcases a range of large-scale datasets, artificial intelligence algorithms, and biomedical discoveries from the HCA that are enhancing our understanding of the human body. The studies reveal insights into how the placenta and skeleton form, changes during brain maturation, new gut and vascular cell states, lung responses to COVID-19, and the effects of genetic variation on disease, among others.

Meta might yet teach its AI to more consistently show the right posts at the right time. Still, there’s a bigger lesson it could learn from Bluesky, though it might be an uncomfortable one for a tech giant to confront. It’s that introducing algorithms into a social feed may cause more problems than it solves—at least if timeliness matters, as it does with any service that aspires to scoop up disaffected Twitter users.

For a modern social network, Bluesky stays out of your way to a shocking degree. (So does Mastodon; I’m a fan, but it seems to be more of an acquired taste.) Bluesky’s primary view is “Following”—the most recent posts from the people you choose to follow, just as in the golden age of Twitter. (Present-day Twitter and Threads have equivalent views, but not as their defaults.) Starter Packs, which might be Bluesky’s defining feature, let anyone curate a shareable list of users. You can follow everyone in one with a single click, or pick and choose, but either way, you decide.

Generative models, artificial neural networks that can generate images or texts, have become increasingly advanced in recent years. These models can also be useful for creating annotated images to train computer vision algorithms, which are designed to classify images or the objects they contain.

While many generative models, particularly generative adversarial networks (GANs), can produce synthetic images that resemble those captured by cameras, reliably controlling the content of the images they produce has proved challenging. In many cases, the images generated by GANs do not meet the exact requirements of users, which limits their use for various applications.

Researchers at Seoul National University of Science and Technology recently introduced a new image generation framework designed to incorporate the content users would like generated images to contain. This framework, introduced in a paper published on the arXiv preprint server, gives users greater control over the image generation process, producing images more closely aligned with what they envisioned.
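The paper's specific framework isn't reproduced here, but the standard way to give users this kind of control is conditioning: feeding the desired content (for instance, a class label) into the generator alongside the noise vector. A minimal PyTorch sketch of a class-conditional generator, as a generic illustration of the pattern:

```python
# Minimal class-conditional generator: the requested content (a
# class label) is embedded and concatenated with the noise, so
# sampling is steered toward what the user asked for. Generic
# conditional-GAN pattern, not the paper's framework.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, n_classes=10, z_dim=64, img_dim=28 * 28):
        super().__init__()
        self.embed = nn.Embedding(n_classes, 16)   # label -> vector
        self.net = nn.Sequential(
            nn.Linear(z_dim + 16, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, z, labels):
        # Condition the noise on the requested class.
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

gen = ConditionalGenerator()
z = torch.randn(4, 64)
labels = torch.tensor([3, 3, 7, 7])    # user-requested content
images = gen(z, labels)                # shape: (4, 784)
print(images.shape)
```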