
Farmers across the United States will be able to monitor their crops in real time, thanks to a novel algorithm from researchers in South Dakota State University’s Geospatial Sciences Center of Excellence.

Two years ago, Yu Shen, research assistant in the GSCE, and Xiaoyang Zhang, professor in the Department of Geography and Geospatial Sciences and co-director of the GSCE, began investigating whether crop monitoring could be made more efficient.

“Previously, crop progress was monitored by visually looking at the plants,” Shen explained.

For eons, deoxyribonucleic acid (DNA) has served as a sort of instruction manual for life, providing not just templates for a vast array of chemical structures but a means of managing their production.

In recent years, engineers have explored a subtly different role for the molecule’s unique capabilities: as the basis for a biological computer. Yet despite 30 years having passed since the first prototype, most DNA computers have struggled to process more than a few tailored algorithms.

A team of researchers from China has now come up with a DNA integrated circuit (DIC) that’s far more general purpose. Their liquid computer’s gates can be assembled into an astonishing 100 billion distinct circuits, each capable of running its own program, a demonstration of the system’s versatility.
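To get a rough sense of how a library of programmable gates can give rise to a circuit count on that scale, here is a back-of-the-envelope combinatorial sketch in Python. The library size and number of gate slots below are hypothetical illustration values, not figures taken from the team's paper.

```python
from math import perm

# Hypothetical illustration: suppose a liquid DNA computer offers a library of
# programmable gate species, and a circuit is defined by which distinct gates
# are wired into a fixed number of slots. These parameters are made up solely
# to show how quickly the number of possible circuits explodes.
library_size = 500   # assumed number of distinct DNA gate species
circuit_slots = 5    # assumed number of gate positions per circuit

# Ordered selections of distinct gates, one per slot:
n_circuits = perm(library_size, circuit_slots)
print(f"{n_circuits:.3e} possible circuits")  # ~3.1e13 for these made-up numbers
```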

Here’s my new article for Aporia magazine, the final futurist story in my 4-part series for them!

Written by Zoltan Istvan.

I met my wife on Match.com 15 years ago. She didn’t have a picture on her profile, but she had written a strong description of herself. It was enough to warrant a first date, and we got married a year later.

But what if ordinary dating sites allowed users to see their potential date naked using advanced AI that could “virtually undress” that person? Let’s take it a step further. What if they gave users the option to have virtual sex with their potential date using deepfake technology, before they ever met them in person? Some of this technology is already here. And it’s prompting a lot of thorny questions – not just for dating sites but for anyone who uses the web.

Deepfakes are synthetic media created by deep learning algorithms (a form of artificial intelligence). These videos, images and audio recordings depict faces, voices or entire personas that are often indistinguishable from the real thing. They have raised concerns due to their potential to deceive and manipulate. While originally used to create humorous or satirical content, they will increasingly be misused to spread fake news and to commit fraud or blackmail. Of particular concern is how they might affect personal relationships.

Whether you like it or not, people are increasingly seeing art that was generated by computers. Everyone has an opinion about it, but researchers at the University of Vienna recently ran a small study to find out how people actually perceive computer-generated art.

In the study, led by Theresa Demmer, people were shown abstract art of black and white blocks in a grid. The art was either generated by a human artist or by a random number generator.

“For the computer-generated images, we avoided using AI or a self-learning algorithm trained on human-generated images but chose to use a very simple algorithm instead,” Demmer told the University of Vienna. “The goal of this approach was to produce…”
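The excerpt does not spell out the generator the Vienna team used, but a minimal sketch of the kind of “very simple algorithm” described, a random number generator filling a grid with black and white blocks, might look like this in Python. The function name, grid size and fill probability are assumptions made for illustration, not parameters from the study.

```python
import random

def random_block_image(rows=8, cols=8, fill_prob=0.5, seed=None):
    """Return a rows x cols grid of 0s (white) and 1s (black),
    each cell set independently by a random draw.
    Grid size and fill probability are illustrative assumptions,
    not values reported in the Vienna study."""
    rng = random.Random(seed)
    return [[1 if rng.random() < fill_prob else 0 for _ in range(cols)]
            for _ in range(rows)]

# Print one such image as text: '#' for black blocks, '.' for white.
for row in random_block_image(seed=42):
    print("".join("#" if cell else "." for cell in row))
```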



A new set of equations captures the dynamical interplay of electrons and vibrations in crystals and forms a basis for computational studies.

Although a crystal is a highly ordered structure, it is never at rest: its atoms are constantly vibrating about their equilibrium positions—even down to zero temperature. Such vibrations are called phonons, and their interaction with the electrons that hold the crystal together is partly responsible for the crystal’s optical properties, its ability to conduct heat or electricity, and even its vanishing electrical resistance if it is superconducting. Predicting, or at least understanding, such properties requires an accurate description of the interplay of electrons and phonons. This task is formidable given that the electronic problem alone—assuming that the atomic nuclei stand still—is already challenging and lacks an exact solution. Now, based on a long series of earlier milestones, Gianluca Stefanucci of the Tor Vergata University of Rome and colleagues have made an important step toward a complete theory of electrons and phonons [1].

At a low level of theory, the electron–phonon problem is easily formulated. First, one considers an arrangement of massive point charges representing electrons and atomic nuclei. Second, one lets these charges evolve under Coulomb’s law and the Schrödinger equation, possibly introducing some perturbation from time to time. The mathematical representation of the energy of such a system, consisting of kinetic and interaction terms, is the system’s Hamiltonian. However, knowing the exact theory is not enough because the corresponding equations are only formally simple. In practice, they are far too complex—not least owing to the huge number of particles involved—so that approximations are needed. Hence, at a high level, a workable theory should provide the means to make reasonable approximations yielding equations that can be solved on today’s computers.
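For reference, the “low-level” Hamiltonian described here has the standard textbook form for electrons (mass m, positions r_i) and nuclei (masses M_n, charges Z_n e, positions R_n), written below in Gaussian units; this is the generic starting point rather than the specific notation used by Stefanucci and colleagues.

```latex
\hat{H} \;=\; -\sum_i \frac{\hbar^2}{2m}\nabla_{\mathbf{r}_i}^{2}
\;-\;\sum_n \frac{\hbar^2}{2M_n}\nabla_{\mathbf{R}_n}^{2}
\;+\;\frac{1}{2}\sum_{i\neq j}\frac{e^{2}}{|\mathbf{r}_i-\mathbf{r}_j|}
\;-\;\sum_{i,n}\frac{Z_n e^{2}}{|\mathbf{r}_i-\mathbf{R}_n|}
\;+\;\frac{1}{2}\sum_{n\neq n'}\frac{Z_n Z_{n'}\,e^{2}}{|\mathbf{R}_n-\mathbf{R}_{n'}|}
```

The first two sums are the kinetic terms; the remaining three are the Coulomb interaction terms (electron-electron, electron-nucleus and nucleus-nucleus) referred to in the text.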

According to Einstein’s theory of special relativity, first published in 1905, light can be converted into matter when two light particles collide with enough energy. But, try as they might, scientists had never been able to do this. No one could create the conditions needed to transform light into matter, until now.

Physicists claim to have generated matter from pure light for the first time — a spectacular display of Einstein’s most famous equation.
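The “most famous equation” here is E = mc^2; for the two-photon (Breit-Wheeler) process the passage appears to describe, it sets a minimum energy that the colliding light particles must carry to materialize an electron-positron pair. A back-of-the-envelope version of that threshold, for two photons of energies E_1 and E_2 meeting at angle θ:

```latex
E_1\,E_2\,(1-\cos\theta) \;\ge\; 2\,(m_e c^2)^2
\quad\Longrightarrow\quad
E_1 E_2 \;\ge\; (m_e c^2)^2 \approx (0.511\ \text{MeV})^2
\quad\text{for a head-on collision } (\theta = \pi).
```

So, in principle, two gamma-ray photons of roughly 0.511 MeV each meeting head-on carry just enough energy to become an electron and a positron; the practical difficulty has been getting such photons to collide at all.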

This is a significant breakthrough, overcoming a barrier that seemed insurmountable only a few decades ago.

German scientists present a method by which AI could be trained much more efficiently.

In the last couple of years, research institutions have been exploring new concepts for how computers could process data in the future. One of these concepts is known as neuromorphic computing. Neuromorphic computing models may sound similar to artificial neural networks but have little to do with them.

Compared to traditional artificial intelligence algorithms, which require significant amounts of data to be trained on before they can be effective, neuromorphic computing systems can learn and adapt on the fly.
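The article does not describe the German team's method, but as a rough illustration of the “learn and adapt on the fly” idea, here is a minimal sketch of a leaky integrate-and-fire neuron whose input weights are nudged by a simple local, Hebbian-style rule after each input, rather than by offline training on a large dataset. The class name and all parameters are made up for illustration and are not taken from any published neuromorphic system.

```python
import random

class LIFNeuron:
    """Toy leaky integrate-and-fire neuron with a local, online learning rule.
    All parameters are illustrative assumptions."""
    def __init__(self, n_inputs, leak=0.9, threshold=1.0, lr=0.01):
        self.w = [random.uniform(0, 0.5) for _ in range(n_inputs)]
        self.v = 0.0            # membrane potential
        self.leak = leak        # per-step decay of the potential
        self.threshold = threshold
        self.lr = lr            # learning rate of the local plasticity rule

    def step(self, inputs):
        # Integrate weighted inputs on top of the leaky potential.
        self.v = self.leak * self.v + sum(w * x for w, x in zip(self.w, inputs))
        spike = self.v >= self.threshold
        if spike:
            self.v = 0.0        # reset after firing
            # Online Hebbian-style update: strengthen weights of inputs that
            # were active when the neuron fired, one sample at a time.
            self.w = [w + self.lr * x for w, x in zip(self.w, inputs)]
        return spike

# Feed a stream of binary input patterns and let the neuron adapt as it goes.
neuron = LIFNeuron(n_inputs=4)
for _ in range(100):
    pattern = [random.choice([0, 1]) for _ in range(4)]
    neuron.step(pattern)
print("adapted weights:", [round(w, 3) for w in neuron.w])
```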

A machine-learning algorithm demonstrated the capability to process data that exceeds a computer’s available memory by identifying a massive data set’s key features and dividing them into manageable batches that don’t choke computer hardware. Developed at Los Alamos National Laboratory, the algorithm set a world record for factorizing huge data sets during a test run on Oak Ridge National Laboratory’s Summit, the world’s fifth-fastest supercomputer.

Equally efficient on laptops and supercomputers, the highly scalable algorithm solves hardware bottlenecks that prevent the processing of information from data-rich applications in social media networks, national security science and earthquake research, to name just a few.

“We developed an ‘out-of-memory’ implementation of the non-negative matrix factorization method that allows you to factorize larger data sets than previously possible on a given hardware,” said Ismael Boureima, a computational physicist at Los Alamos National Laboratory. Boureima is first author of the paper in The Journal of Supercomputing on the record-breaking algorithm.
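The paper's implementation is not reproduced in the article, so the following is only a conceptual sketch of batched non-negative matrix factorization in Python: the data matrix is streamed in row chunks so that only one manageable batch sits in memory at a time while the factors W and H are updated with standard multiplicative updates. The load_chunk callable is a hypothetical stand-in for whatever out-of-core I/O the real Los Alamos code performs; none of this reflects their actual implementation.

```python
import numpy as np

def out_of_core_nmf(load_chunk, n_chunks, n_cols, rank, n_epochs=50, eps=1e-9):
    """Sketch of batched NMF (X ~ W @ H) with multiplicative updates.

    load_chunk(i) is a hypothetical callable returning the i-th row-chunk of
    the non-negative data matrix X (e.g. read from disk), so the full matrix
    never has to be resident in memory at once.
    Returns the per-chunk row factors W_chunks and the shared factor H.
    """
    rng = np.random.default_rng(0)
    H = rng.random((rank, n_cols)) + eps
    W_chunks = [None] * n_chunks        # one small W block per chunk

    for _ in range(n_epochs):
        num = np.zeros((rank, n_cols))  # accumulates W^T X over chunks
        gram = np.zeros((rank, rank))   # accumulates W^T W over chunks
        for i in range(n_chunks):
            X_i = load_chunk(i)                         # only this batch in memory
            if W_chunks[i] is None:
                W_chunks[i] = rng.random((X_i.shape[0], rank)) + eps
            W_i = W_chunks[i]
            # Multiplicative update for this chunk's rows of W.
            W_i *= (X_i @ H.T) / (W_i @ (H @ H.T) + eps)
            num += W_i.T @ X_i
            gram += W_i.T @ W_i
        # Multiplicative update for the shared H using the accumulated terms.
        H *= num / (gram @ H + eps)
    return W_chunks, H

# Tiny usage example with an in-memory stand-in for the chunk loader.
X = np.abs(np.random.default_rng(1).random((1000, 200)))
chunks = np.array_split(X, 10)          # pretend each piece lives on disk
W_chunks, H = out_of_core_nmf(lambda i: chunks[i], n_chunks=10,
                              n_cols=200, rank=8, n_epochs=20)
print("reconstruction shape:", (np.vstack(W_chunks) @ H).shape)
```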