
A population of photosynthetic algae has been shown to exhibit a highly nonlinear response to light, forming dynamic structures in light-intensity gradients.

Many photosynthetic microbes move in response to light. For example, the single-celled green alga Chlamydomonas reinhardtii swims toward moderate light to photosynthesize and away from intense light to avoid damage. Two longstanding questions about this light response concern how light-seeking cells move in a light-intensity gradient and whether this motion depends on cell concentration. Now, Aina Ramamonjy and colleagues at the French National Center for Scientific Research (CNRS) and the University of Paris have answered these questions [1]. The results could improve our understanding of how groups of photosynthetic organisms arrange themselves into dynamic patterns to control the amount of light that they receive.

In 1911, the botanist Harold Wager reported a seminal study [2] that launched the field of bioconvection, a collective phenomenon that results in self-organized structures and emergent flow patterns in suspensions of swimming microbes. The overall picture is that dense collections of microbes that are heavier than surrounding water but can swim against gravity self-organize into passively descending, cell-packed plumes flanked by actively ascending, cell-sparse populations.

Synopsis: The arrival of Homo sapiens on Earth amounted to a singularity for its ecosystems, a transition that dramatically changed the distribution and interaction of living species within a relatively short amount of time. Such transitions are not unprecedented in the evolution of life, but machine intelligence represents a new phenomenon: for the first time, there are agents on Earth that are not part of the biosphere. Instead of competing for a niche in the ecosystems of living systems, AI might compete with life itself.

How can we understand agency in the context of the cooperation and competition between AI, humans and other organisms?

This talk was part of the ‘Stepping Into the Future’ conference.

Agency in an Age of Machines – Joscha Bach

Bio: Joscha Bach, Ph.D. is an AI researcher who has worked on and published about cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany, and has built computational models of motivated decision making, perception, categorization, and concept formation. He is especially interested in the philosophy of AI and in the augmentation of the human mind.

As artificial intelligence and deep learning techniques become increasingly advanced, engineers will need to create hardware that can run their computations both reliably and efficiently. Neuromorphic computing hardware, which is inspired by the structure and biology of the human brain, could be particularly promising for supporting the operation of sophisticated deep neural networks (DNNs).

Researchers at Graz University of Technology and Intel have recently demonstrated the huge potential of neuromorphic computing hardware for running DNNs in an experimental setting. Their paper, published in Nature Machine Intelligence and funded by the Human Brain Project (HBP), shows that neuromorphic computing hardware could run large DNNs 4 to 16 times more efficiently than conventional (i.e., non-brain inspired) computing hardware.

“We have shown that a large class of DNNs, those that process temporally extended inputs such as for example sentences, can be implemented substantially more energy-efficiently if one solves the same problems on neuromorphic hardware with brain-inspired neurons and neural network architectures,” Wolfgang Maass, one of the researchers who carried out the study, told TechXplore. “Furthermore, the DNNs that we considered are critical for higher level cognitive function, such as finding relations between sentences in a story and answering questions about its content.”

They are part of the brain of almost every animal species, yet they usually remain invisible even under the electron microscope. “Electrical synapses are like the dark matter of the brain,” says Alexander Borst, director at the Max Planck Institute (MPI) for Biological Intelligence, in foundation (i.f.). Now a team from his department has taken a closer look at this rarely explored brain component: in the brain of the fruit fly Drosophila, they were able to show that electrical synapses occur in almost all brain areas and can influence the function and stability of individual nerve cells.

Neurons communicate via synapses, small contact points at which chemical messengers transmit a stimulus from one cell to the next. We may remember this from biology class. However, that is not the whole story. In addition to the commonly known chemical synapses, there is a second, little-known type of synapse: the electrical synapse. “Electrical synapses are much rarer and are hard to detect with current methods. That’s why they have hardly been researched so far,” explains Georg Ammer, who has long been fascinated by these hidden cell connections. “In most animal brains, we therefore don’t know even basic things, such as where exactly electrical synapses occur or how they influence brain activity.”

An electrical synapse connects two neurons directly, allowing the electrical current that neurons use to communicate to flow from one cell to the next without a detour. Except in echinoderms, this particular type of synapse occurs in the brain of every animal species studied so far. “Electrical synapses must therefore have important functions: we just do not know which ones!” says Georg Ammer.
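The direct, detour-free coupling described above can be illustrated with a minimal two-cell model (an assumed textbook-style sketch, not the study's model): a gap-junction current proportional to the voltage difference between the cells, so depolarizing one cell drags its neighbor along with it.

```python
# Minimal sketch of two passive neurons coupled by an electrical synapse
# (gap junction). All parameters are illustrative, not measured values.

def simulate(g_gap, t_end=200.0, dt=0.1):
    """Euler-integrate two leaky cells; cell 1 receives a current step.

    Returns the final membrane voltages (v1, v2).
    """
    C, g_leak, E_leak = 1.0, 0.1, -70.0       # capacitance, leak, rest (a.u.)
    v1 = v2 = E_leak
    for step in range(int(t_end / dt)):
        i_inj = 1.0 if step * dt > 50.0 else 0.0   # inject into cell 1 only
        i_gap = g_gap * (v2 - v1)                  # current flowing 2 -> 1
        v1 += dt / C * (g_leak * (E_leak - v1) + i_gap + i_inj)
        v2 += dt / C * (g_leak * (E_leak - v2) - i_gap)
    return v1, v2

v1_c, v2_coupled = simulate(g_gap=0.05)   # coupled pair
v1_a, v2_alone = simulate(g_gap=0.0)      # no electrical synapse
print(v2_coupled > v2_alone)  # True: cell 2 depolarizes only when coupled
```

With `g_gap = 0`, cell 2 never moves from rest; with a nonzero gap-junction conductance, the current injected into cell 1 spreads directly into cell 2, which is the defining property of an electrical synapse.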

NASA’s Artemis mission has the chief goals of sending astronauts to establish the first long-term presence on the Moon and learning what is necessary to send the first astronauts to Mars. But it’s also planning to do so much more than that.

One of its many scientific missions will see the agency send the Lunar Vulkan Imaging and Spectroscopy Explorer (Lunar-VISE) and the Lunar Explorer Instrument for space biology Applications (LEIA) to the Moon in order to explore the mysterious Gruithuisen Domes, geological features that have puzzled scientists for years.

Plastic bottles, punnets, wrap – such lightweight packaging made of PET plastic becomes a problem if it is not recycled. Scientists at Leipzig University have now discovered a highly efficient enzyme that degrades PET in record time. The enzyme PHL7, which the researchers found in a compost heap in Leipzig, could make biological PET recycling possible much faster than previously thought. The findings have now been published in the scientific journal “ChemSusChem” and selected as the cover topic.

In nature, bacteria use enzymes to decompose plant parts, among other things. It has been known for some time that some enzymes, so-called polyester-cleaving hydrolases, can also degrade PET. For example, the enzyme LCC, which was discovered in Japan in 2012, is considered to be a particularly effective “plastic eater”. The team led by Dr Christian Sonnendecker, an early career researcher at Leipzig University, is searching for previously undiscovered examples of these biological helpers as part of the EU-funded projects MIPLACE and ENZYCLE. They found what they were looking for in the Südfriedhof, a cemetery in Leipzig: in a sample from a compost heap, the researchers came across the blueprint of an enzyme that decomposed PET at record speed in the laboratory.

The researchers from the Institute of Analytical Chemistry found and studied seven different enzymes. The seventh candidate, called PHL7, achieved results in the lab that were significantly above average. In the experiments, the researchers added PET to containers with an aqueous solution containing either PHL7 or LCC, the previous leader in PET decomposition. Then they measured the amount of plastic that was degraded in a given period of time and compared the values with each other.
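The comparison logic described above — measure how much PET each enzyme degrades in a fixed time, then compare rates — can be sketched as follows. The numbers are illustrative placeholders, not measurements from the study.

```python
# Hypothetical sketch of the rate comparison described in the text.
# The (mass degraded, incubation time) values below are made up for
# illustration only; they are NOT the study's data.

def degradation_rate(mass_degraded_mg, hours):
    """Average PET degradation rate in mg per hour."""
    return mass_degraded_mg / hours

# enzyme -> (mg of PET degraded, hours of incubation); placeholder values
results = {"PHL7": (100.0, 18.0), "LCC": (100.0, 40.0)}

rates = {name: degradation_rate(mg, h) for name, (mg, h) in results.items()}
speedup = rates["PHL7"] / rates["LCC"]
print(f"PHL7 is {speedup:.1f}x faster than LCC under these illustrative values")
```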

Spike-based neuromorphic hardware holds the promise of providing more energy-efficient implementations of Deep Neural Networks (DNNs) than standard hardware such as GPUs. But this requires understanding how DNNs can be emulated in an event-based, sparse firing regime, since otherwise the energy advantage is lost. In particular, DNNs that solve sequence-processing tasks typically employ Long Short-Term Memory (LSTM) units, which are hard to emulate with few spikes. We show that a facet of many biological neurons, slow after-hyperpolarizing (AHP) currents after each spike, provides an efficient solution. AHP currents can easily be implemented in neuromorphic hardware that supports multi-compartment neuron models, such as Intel’s Loihi chip. Filter approximation theory explains why AHP neurons can emulate the function of LSTM units.
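The mechanism the abstract relies on can be illustrated with a minimal integrate-and-fire neuron carrying a slow AHP current (an assumed toy model with made-up parameters, not the paper's implementation): each spike strengthens a hyperpolarizing current that decays slowly, suppressing further firing and giving the neuron a lingering memory of its recent activity — the slow state variable that makes sparse, LSTM-like computation possible.

```python
# Toy leaky integrate-and-fire neuron with a slow after-hyperpolarizing
# (AHP) current. All parameters are illustrative, not from the paper.

def simulate(tau_ahp, t_end=500.0, dt=1.0, i_in=1.5):
    """Drive the neuron with constant input; return the spike count.

    tau_ahp: decay time constant of the AHP current (larger = slower decay).
    """
    tau_m, v_th, b = 20.0, 1.0, 0.2   # membrane time constant, threshold, AHP jump
    v, i_ahp, spikes = 0.0, 0.0, 0
    for _ in range(int(t_end / dt)):
        v += dt / tau_m * (-v + i_in - i_ahp)   # leaky integration minus AHP
        i_ahp *= 1.0 - dt / tau_ahp             # AHP current slowly decays
        if v >= v_th:                           # threshold crossing: spike
            spikes += 1
            v = 0.0                             # reset membrane potential
            i_ahp += b                          # each spike strengthens the AHP
    return spikes

# A slower-decaying AHP current suppresses more spikes for the same input,
# i.e. the neuron fires sparsely while retaining a trace of past activity:
print(simulate(tau_ahp=200.0) < simulate(tau_ahp=20.0))  # True
```

The slowly decaying `i_ahp` plays a role loosely analogous to an LSTM cell state: it persists across many time steps and modulates the neuron's response to new input while keeping the spike count low.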