
Revolutionizing image generation through AI: Turning text into images

Creating images from text in seconds—and doing so with a conventional graphics card and without supercomputers? As fanciful as it may sound, this is made possible by the new Stable Diffusion AI model. The underlying algorithm was developed by the Machine Vision & Learning Group led by Prof. Björn Ommer (LMU Munich).

“Even for laypeople not blessed with artistic talent and without special computing know-how, the new model is an effective tool that enables computers to generate images on command. As such, the model removes a barrier to expressing their creativity,” says Ommer. But there are benefits for seasoned artists as well, who can use Stable Diffusion to quickly convert new ideas into a variety of graphic drafts. The researchers are convinced that such AI-based tools will be able to expand the possibilities of creative image generation with paintbrush and Photoshop as fundamentally as computer-based word processing revolutionized writing with pens and typewriters.

In their project, the LMU scientists had the support of the start-up Stability.ai, on whose servers the AI model was trained. “This additional computing power and the extra training examples turned our AI model into one of the most powerful image synthesis algorithms,” says the computer scientist.
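
For readers who want to try it, here is a minimal sketch of one common way to run Stable Diffusion locally on a single consumer graphics card, using the open-source Hugging Face diffusers library; the model identifier and prompt are illustrative examples, not part of the LMU announcement.

import torch
from diffusers import StableDiffusionPipeline

# Load a publicly released Stable Diffusion checkpoint in half precision
# so it fits comfortably in the memory of a conventional graphics card.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Generate an image from a text prompt and save it to disk.
image = pipe("an astronaut riding a horse, oil painting").images[0]
image.save("astronaut.png")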

Machine learning algorithm predicts how to get the most out of electric vehicle batteries

Researchers have developed a machine learning algorithm that could help reduce charging times and prolong battery life in electric vehicles by predicting how different driving patterns affect battery performance, improving safety and reliability.

The researchers, from the University of Cambridge, say their algorithm could help drivers, manufacturers and businesses get the most out of the batteries that power them by suggesting routes and driving patterns that minimize battery degradation and charging times.

The team developed a non-invasive way to probe batteries and get a holistic view of battery health. These results were then fed into a machine learning algorithm that can predict how different driving patterns will affect the future health of the battery.
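
A minimal sketch of the general approach (not the Cambridge group's actual pipeline) is shown below: a regression model is fitted to map driving-pattern features to a predicted battery state of health, and candidate usage patterns are then ranked by the model. The feature names and data here are hypothetical.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical driving-pattern features: mean discharge rate, depth of discharge,
# fraction of fast charging, mean cell temperature (all scaled to [0, 1]).
X = rng.uniform(size=(1000, 4))
# Synthetic state-of-health labels: deeper discharge, more fast charging and
# higher temperatures degrade the battery more in this toy data set.
soh = 1.0 - (0.2 * X[:, 1] + 0.15 * X[:, 2] + 0.1 * X[:, 3]) + rng.normal(0, 0.01, 1000)

model = GradientBoostingRegressor().fit(X, soh)

# Rank five candidate driving/charging patterns by predicted future battery health.
candidates = rng.uniform(size=(5, 4))
best = candidates[np.argmax(model.predict(candidates))]
print("pattern predicted to preserve the most battery health:", best)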

Topological Neuron Synthesis

In a study published in Cell Reports, we present a novel algorithm for the digital generation of neuronal morphologies, based on the topology of their branching structure. This algorithm generates neurons that are statistically similar to the biological neurons, in terms of morphological properties, electrical responses and the connectivity of the networks they form.
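
As a rough illustration of the idea (a toy sketch, not the published synthesis code), one can read a "barcode" of branch start and end path distances as a blueprint and grow a tree whose branches begin and terminate at those distances:

import random
from dataclasses import dataclass, field

@dataclass
class Branch:
    start: float
    end: float
    children: list = field(default_factory=list)

def synthesize_tree(barcode, seed=0):
    """Toy topology-guided growth: each (start, end) bar becomes a branch that
    bifurcates off a branch still growing at its start distance."""
    random.seed(seed)
    bars = sorted(barcode, key=lambda b: b[1] - b[0], reverse=True)
    root = Branch(*bars[0])                      # longest bar becomes the trunk
    placed = [root]
    for start, end in bars[1:]:
        parents = [b for b in placed if b.start <= start <= b.end] or [root]
        parent = random.choice(parents)
        child = Branch(start, end)
        parent.children.append(child)
        placed.append(child)
    return root

tree = synthesize_tree([(0.0, 120.0), (15.0, 60.0), (30.0, 95.0), (40.0, 55.0)])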

This study represents a major milestone for the Blue Brain Project and for the future of computational neuroscience. The topological neuron synthesis enables the generation of millions of unique neuronal shapes from different cell types. This process will allow us to reconstruct brain regions with detailed and unique neuronal morphologies at each cell position.

The topological representation of neurons also makes it possible to generate morphologies that are structurally altered relative to healthy ones. Such structural alterations disrupt brain systems and are contributing factors in brain diseases. Topological synthesis can therefore be used to study the differences between healthy and diseased states of different brain regions and, specifically, which structural alterations of neurons cause problems for the networks they form.

ROBE Array could let small companies access popular form of AI

A breakthrough low-memory technique by Rice University computer scientists could put one of the most resource-intensive forms of artificial intelligence—deep-learning recommendation models (DLRM)—within reach of small companies.

DLRM recommendation systems are a popular form of AI that learns to make suggestions users will find relevant. But with top-of-the-line training models requiring more than a hundred terabytes of memory and supercomputer-scale processing, they’ve only been available to a short list of technology giants with deep pockets.

Rice’s “random offset block embedding,” or ROBE Array, could change that. It’s an algorithmic approach for slashing the size of DLRM memory structures called embedding tables, and it will be presented this week at the Conference on Machine Learning and Systems (MLSys 2022) in Santa Clara, California, where it earned Outstanding Paper honors.
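
The core idea can be sketched as follows (an illustrative toy, not Rice's released implementation): rather than storing a separate embedding row per feature ID, every embedding is read as a small contiguous block from one shared weight array at a hashed offset, so total memory no longer grows with the number of IDs.

import torch

class HashedBlockEmbedding(torch.nn.Module):
    """Toy ROBE-style embedding: all IDs share one small parameter array, and each
    ID's vector is a block read at a hashed offset into that array."""

    def __init__(self, array_size=100_000, dim=16, a=2_654_435_761, p=2**61 - 1):
        super().__init__()
        self.weights = torch.nn.Parameter(0.01 * torch.randn(array_size))
        self.array_size, self.dim, self.a, self.p = array_size, dim, a, p

    def forward(self, ids):
        # Hash each feature ID to an offset in the shared array, then read a block.
        offsets = (ids * self.a % self.p) % (self.array_size - self.dim)
        idx = offsets.unsqueeze(-1) + torch.arange(self.dim)
        return self.weights[idx]                 # shape: (batch, dim)

emb = HashedBlockEmbedding()
vectors = emb(torch.tensor([3, 17, 123_456_789]))  # three IDs, three 16-d embeddings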

Physicists uncover new dynamical framework for turbulence

Turbulence plays a key role in our daily lives, making for bumpy plane rides, affecting weather and climate, limiting the fuel efficiency of the cars we drive, and impacting clean energy technologies. Yet scientists and engineers have long puzzled over ways to predict and alter turbulent fluid flows, and it has remained one of the most challenging problems in science and engineering.

Now, physicists from the Georgia Institute of Technology have demonstrated—numerically and experimentally—that turbulence can be understood and quantified with the help of a relatively small set of special solutions to the governing equations of fluid dynamics that can be precomputed for a particular geometry, once and for all.
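
For reference, the governing equations referred to here are the incompressible Navier-Stokes equations,

\[
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u},
\qquad
\nabla\cdot\mathbf{u} = 0,
\]

whose special recurring solutions for a given flow geometry make up the small precomputed set the researchers use to describe turbulent trajectories.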

“For nearly a century, turbulence has been described statistically as a random process,” said Roman Grigoriev. “Our results provide the first experimental illustration that, on suitably short time scales, the dynamics of turbulence is deterministic—and connects it to the underlying deterministic governing equations.”

How does classical, Newtonian inertia emerge from quantum mechanics?

From my understanding, inertia is typically taken as an axiom rather than something that can be explained by some deeper phenomenon. However, it’s also my understanding that quantum mechanics must reduce to classical, Newtonian mechanics in the macroscopic limit.

By inertia, I mean the resistance to changes in velocity — the fact that more massive objects (or particles, let’s say) accelerate more slowly under the same force.

What is the quantum mechanical mechanism that, in its limit, leads to Newtonian inertia? Is there some concept of axiomatic inertia that applies to the quantum mechanical equations and explains Newtonian inertia, even if it remains a fundamental assumption of quantum theory?
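
For context, the textbook bridge between the two regimes is Ehrenfest's theorem, in which the mass enters the quantum equations of motion in the same role it plays in Newton's second law:

\[
\frac{d\langle \hat{x}\rangle}{dt} = \frac{\langle \hat{p}\rangle}{m},
\qquad
\frac{d\langle \hat{p}\rangle}{dt} = -\big\langle V'(\hat{x})\big\rangle
\;\;\Longrightarrow\;\;
m\,\frac{d^{2}\langle \hat{x}\rangle}{dt^{2}} \approx F\big(\langle \hat{x}\rangle\big),
\]

where the final step holds for wave packets narrow enough that \(\langle V'(\hat{x})\rangle \approx V'(\langle \hat{x}\rangle)\).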

AI could revolutionize healthcare but can we trust it?

The tool can identify symptoms of dengue, malaria, leptospirosis, and scrub typhus.

The study investigates both statistical and machine learning approaches. WHO has categorized dengue as a “neglected tropical disease.”

A prediction tool based on multinomial regression analysis and a machine learning algorithm was developed.

Accurate diagnosis is essential for the proper treatment and ensuring the well-being of patients. However, some diseases present with similar clinical symptoms and laboratory results, making diagnosing them more challenging.
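
A minimal sketch of what such a tool can look like is shown below, assuming a multinomial logistic regression over routine clinical and laboratory features; the feature names, data, and class labels are hypothetical and are not taken from the study.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical standardized features: platelet count, white-cell count,
# days of fever, liver-enzyme level.
X = rng.normal(size=(500, 4))
# Hypothetical labels: 0 = dengue, 1 = malaria, 2 = leptospirosis, 3 = scrub typhus.
y = rng.integers(0, 4, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# With more than two classes, LogisticRegression fits a multinomial (softmax) model.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Class probabilities for one patient; the most probable class is the suggested diagnosis.
print(clf.predict_proba(X_test[:1]))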


Artificial Intelligence is perhaps the most promising technology for transforming our lives — but it’s also incredibly scary. At CES 2022, a panel of AI experts discussed what role AI might play in the future of healthcare.

In a session titled “Consumer Safety Driven by AI,” Pat Baird, an AI engineer and software developer who works in standards and regulations for Philips, and Joseph Murphy, VP of Marketing at Sensory Inc., an American technology company that develops AI products, discussed what AI could add to our lives. They also discussed the apprehension many people feel about the technology.

Artificial Intelligence Model Can Detect Parkinson’s From Breathing Patterns

Summary: A newly developed artificial intelligence model can detect Parkinson’s disease by reading a person’s breathing patterns. The algorithm can also discern the severity of Parkinson’s disease and track progression over time.

Source: MIT

Parkinson’s disease is notoriously difficult to diagnose, as diagnosis relies primarily on the appearance of motor symptoms such as tremors, stiffness, and slowness, which often appear several years after disease onset.
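
As a toy illustration of the general approach (not MIT's published architecture), a small one-dimensional convolutional network can classify a nocturnal breathing waveform; the layer sizes and input length here are arbitrary.

import torch
import torch.nn as nn

class BreathingClassifier(nn.Module):
    """Toy 1-D CNN over a breathing signal; outputs Parkinson's-vs-control logits."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 2)

    def forward(self, x):                      # x: (batch, 1, samples) breathing waveform
        return self.head(self.features(x).squeeze(-1))

model = BreathingClassifier()
logits = model(torch.randn(4, 1, 4096))        # four synthetic one-night segments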

Protein-Designing AI Opens Door to Medicines Humans Couldn’t Dream Up

A new study in Science threw out that playbook. Led by Dr. David Baker at the University of Washington, a team tapped into an AI’s “imagination” to dream up a myriad of functional sites from scratch. It’s a machine mind’s “creativity” at its best—a deep learning algorithm that predicts the general area of a protein’s functional site and then sculpts the rest of the structure around it.

As a reality check, the team used the new software to generate drugs that battle cancer and to design vaccines against common, if sometimes deadly, viruses. In one case, the digital mind came up with a solution that, when tested in isolated cells, was a perfect match for an existing antibody against a common virus. In other words, the algorithm “imagined” a hotspot on a viral protein that could serve as a vulnerable target for designing new treatments.

The algorithm is deep learning’s first foray into building proteins around their functions, opening a door to treatments that were previously unimaginable. But the software isn’t limited to natural protein hotspots. “The proteins we find in nature are amazing molecules, but designed proteins can do so much more,” said Baker in a press release. The algorithm is “doing things that none of us thought it would be capable of.”
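
To give a sense of how such a design loop can work, here is a heavily simplified sketch; predict_and_score is a hypothetical placeholder standing in for a real structure-prediction network plus a functional-site loss, and nothing here reflects the actual software from the study.

import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def predict_and_score(sequence):
    # Placeholder: a real pipeline would fold `sequence` with a structure-prediction
    # network and score how well the prediction reproduces the target functional site.
    return sum(sequence.count(a) for a in "HE") / len(sequence)

def design_protein(length=60, steps=500, seed=0):
    """Toy hill-climbing search over sequences guided by the placeholder score."""
    random.seed(seed)
    seq = [random.choice(AMINO_ACIDS) for _ in range(length)]
    best = predict_and_score("".join(seq))
    for _ in range(steps):
        i = random.randrange(length)
        old, seq[i] = seq[i], random.choice(AMINO_ACIDS)
        score = predict_and_score("".join(seq))
        if score >= best:
            best = score                       # keep mutations that improve the score
        else:
            seq[i] = old                       # otherwise revert
    return "".join(seq), best

designed_sequence, score = design_protein()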
