
CU Boulder scientists have found how ions move in tiny pores, potentially improving energy storage in devices like supercapacitors. Their research updates Kirchhoff’s law, with significant implications for energy storage in vehicles and power grids.

Imagine if your dead laptop or phone could be charged in a minute, or if an electric car could be fully powered in just 10 minutes. While this isn’t possible yet, new research by a team of scientists at CU Boulder could potentially make these advances a reality.

In a study published in the Proceedings of the National Academy of Sciences, researchers in Ankur Gupta's lab discovered how tiny charged particles, called ions, move within a complex network of minuscule pores. The breakthrough could lead to more efficient energy storage devices, such as supercapacitors, said Gupta, an assistant professor of chemical and biological engineering.
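The classical circuit law the study generalizes can be illustrated with a toy calculation: at a junction where several pores meet, Kirchhoff's current law requires the ionic currents flowing into the node to sum to zero. A minimal sketch, with purely illustrative conductance and potential values (none taken from the paper):

```python
def junction_potential(conductances, potentials):
    """Solve sum_i g_i * (V_i - V_node) = 0 for the junction potential."""
    total_g = sum(conductances)
    return sum(g * v for g, v in zip(conductances, potentials)) / total_g

# Three pores meeting at one node; example values only (siemens, volts):
g = [1.0, 2.0, 3.0]
v = [0.5, 0.1, -0.2]

v_node = junction_potential(g, v)
currents = [gi * (vi - v_node) for gi, vi in zip(g, v)]
assert abs(sum(currents)) < 1e-12  # net current into the junction is zero
```

The research modifies how such junction conditions behave when ion dynamics inside nanoscale pores are accounted for; the sketch above shows only the classical baseline.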

Artificial neural networks—algorithms inspired by biological brains—are at the center of modern artificial intelligence, behind both chatbots and image generators. But with their many neurons, they can be black boxes, their inner workings uninterpretable to users.

Researchers have now created a fundamentally new way to make neural networks that in some ways surpasses traditional systems. These new networks are more interpretable and also more accurate, proponents say, even when they’re smaller. Their developers say the way they learn to represent physics data concisely could help scientists uncover new laws of nature.
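The paper's architecture is not reproduced here, but the core idea, replacing fixed activations with learnable univariate functions whose shapes can be read off directly, can be sketched as a simple additive model. The target function, grid size, and piecewise-linear basis below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def hat_basis(x, grid):
    """Evaluate piecewise-linear 'hat' basis functions at points x."""
    B = np.zeros((len(x), len(grid)))
    for j in range(len(grid)):
        center = np.zeros(len(grid))
        center[j] = 1.0
        B[:, j] = np.interp(x, grid, center)
    return B

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(np.pi * x[:, 0]) + x[:, 1] ** 2  # sum of univariate terms

# Learn one univariate function per input by least squares; the fitted
# coefficients directly trace each function's shape, aiding interpretability.
grid = np.linspace(-1, 1, 9)
B = np.hstack([hat_basis(x[:, 0], grid), hat_basis(x[:, 1], grid)])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
pred = B @ coef
print("RMS error:", np.sqrt(np.mean((pred - y) ** 2)))
```

Because each learned function is one-dimensional, it can be plotted and inspected, which is the interpretability property the new networks' developers emphasize.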

Could we store samples of Earth’s endangered biodiversity on the Moon for long-term preservation? A recent study published in BioScience hopes to address this question: a team of researchers led by the Smithsonian Institution proposes that the Moon’s permanently shadowed regions (PSRs), located at the lunar north and south poles, could be ideal locations for a lunar biorepository where endangered species can be cryopreserved. The proposal could help safeguard Earth’s biodiversity from extinction while supporting future space exploration and the possible terraforming of other worlds.

“Initially, a lunar biorepository would target the most at-risk species on Earth today, but our ultimate goal would be to cryopreserve most species on Earth,” said Dr. Mary Hagedorn, who is a research cryobiologist at the Smithsonian National Zoo and Conservation Biology Institute and lead author of the study. “We hope that by sharing our vision, our group can find additional partners to expand the conversation, discuss threats and opportunities and conduct the necessary research and testing to make this biorepository a reality.”

Lunar PSRs are of interest for this proposal because several craters there are completely devoid of sunlight, a consequence of the Moon’s small axial tilt (6.7 degrees versus Earth’s 23.5 degrees). The team argues this presents ample opportunity for storing several groups of organisms, including pollinators, threatened and endangered animals, culturally important species, and primary producers, just to name a few.

Neuromorphic computers are devices that try to achieve reasoning capability by emulating the human brain. They use a different type of computer architecture, one that copies the physical characteristics and design principles of biological nervous systems. Although neuromorphic computations can be emulated on classical computers, the simulation is very inefficient; typically, new hardware is required.
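To see why simulating spiking neurons on conventional hardware is costly, consider a minimal leaky integrate-and-fire model: every neuron must be stepped through every timestep, so cost scales with neurons times timesteps, whereas neuromorphic hardware updates in parallel and only on spikes. All parameter values below are illustrative:

```python
def simulate_lif(current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Step a single leaky integrate-and-fire neuron; return spike times."""
    v, spikes = 0.0, []
    for step, i_in in enumerate(current):
        v += dt / tau * (-v + i_in)  # leaky integration toward the input
        if v >= v_thresh:            # threshold crossing emits a spike
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant drive above threshold yields a regular spike train:
spikes = simulate_lif([1.5] * 200)
print(len(spikes), "spikes in 0.2 s")
```

A full-brain simulation repeats this inner loop for on the order of 86 billion neurons, which is the workload DeepSouth's dedicated hardware is designed to handle natively.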

The first neuromorphic computer at the scale of a full human brain is about to come online. It’s called DeepSouth, and will be finished in April 2024 at Western Sydney University. This computer should enable new research into how our brain actually functions, potentially leading to breakthroughs in how AI is created.

One important characteristic of this neuromorphic computer is that it is constructed from commodity hardware, specifically FPGAs. This means it will be much easier for other organizations to copy the design. It also means that once AI starts self-improving, it could probably build new iterations of the hardware quite easily: instead of requiring new factories built from the ground up, leveraging existing digital technology allows existing infrastructure to be reused. This might have implications for how quickly we develop AGI, and how quickly superintelligence arises.

#ai #neuromorphic #computing

“These spots are a big surprise,” said Dr. David Flannery. “On Earth, these types of features in rocks are often associated with the fossilized record of microbes living in the subsurface.”


Did Mars once have life billions of years ago? That is the question NASA’s Perseverance (Percy) rover hopes to answer, and scientists might be one step closer thanks to a recent discovery by the car-sized robotic explorer: a unique rock with “leopard spots” that has led some in the scientific community to claim it indicates past life might once have existed on the now cold and dry Red Planet. Others, however, have just as quickly cautioned that further evidence is required before jumping to conclusions.

Upon analyzing the rock with Percy’s intricate suite of scientific instruments, scientists determined that it contains chemical signatures indicative of life possibly having existed billions of years ago, when liquid water flowed across the surface. However, the science team is also considering other explanations for the rock’s unique appearance and is conducting further research to determine whether the findings are consistent with potential ancient life.

The rock’s unique features include calcium sulfate veins with reddish material between them, indicating the presence of hematite, the mineral responsible for the Red Planet’s rusty color. Upon further inspecting the reddish material, Percy identified dozens of millimeter-scale off-white splotches, each surrounded by black material, hence the name “leopard spots.”

Putting 50 billion transistors into a microchip the size of a fingernail is a feat that requires manufacturing methods of nanometer-level precision—layering thin films, then etching, depositing, or using photolithography to create the patterns of semiconductor, insulator, metal, and other materials that make up the tiny working devices within the chip.

The process relies heavily on solvents that carry and deposit materials in each layer—solvents that can be difficult to handle and toxic to the environment.

Now researchers led by Fiorenzo Omenetto, Frank C. Doble Professor of Engineering at Tufts, have developed a nanomanufacturing approach that uses water as the primary solvent, making it more environmentally compatible and opening the door to the development of devices that combine inorganic and biological materials.

In a study published in Device (“Self-powered electrostatic tweezer for adaptive object manipulation”), a research team led by Dr. DU Xuemin from the Shenzhen Institute of Advanced Technology (SIAT) of the Chinese Academy of Sciences has reported a new self-powered electrostatic tweezer that offers superior accumulation and tunability of triboelectric charges, enabling unprecedented flexibility and adaptability for manipulating objects in various working scenarios.

The ability to manipulate objects without physical contact is essential in fields such as physics, chemistry, and biology. However, conventional electrostatic tweezers often require complex electrode arrays and external power sources, have limited charge-generation capabilities, or produce undesirable temperature rises.

The newly proposed self-powered electrostatic tweezer (SET) features three key components: a poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE))-based self-powered electrode (SE) that generates a large, tunable surface charge density through the triboelectric effect; a dielectric substrate that serves as both a tribo-counter material and a supportive platform; and a slippery surface that reduces resistance and biofouling during object manipulation.

Aging is a universal experience, evident through changes like wrinkles and graying hair. However, aging goes beyond the surface; it begins within our cells. Over time, our cells gradually lose their ability to perform essential functions, leading to a decline that affects every part of our bodies, from our cognitive abilities to our immune health.

To understand how cellular changes lead to age-related disorders, Calico scientists are using advanced RNA sequencing to map molecular changes in individual cells over time in the roundworm, C. elegans. Much like mapping networks of roads and landscapes, we’re charting the complexities of our biology. These atlases uncover cell characteristics, functions, and interactions, providing deeper insights into how our bodies age.

In the early 1990s, Cynthia Kenyon, Vice President of Aging Research at Calico, and her former team at UCSF discovered genes in C. elegans that control lifespan; these genes, which influence IGF1 signaling, function similarly to extend lifespan in many other organisms, including mammals. The genetic similarities between this tiny worm and more complex animals make it a useful model for studying the aging process. In work published in Cell Reports last year, our researchers created a detailed map of gene activity in every cell of the body of C. elegans throughout its development, providing a comprehensive blueprint of its cellular diversity and functions. They found that aging is an organized process, not merely random deterioration. Each cell type follows its own aging path, with many activating cell-specific protective gene expression pathways, and with some cell types aging faster than others. Even within the same cell type, the rate of aging can vary.

The idea of the brain as a computer is everywhere. So much so that we have forgotten it is a model, not the reality. It’s a metaphor that has led some to believe that in the future they’ll be uploaded to the digital ether and thereby achieve immortality. It’s also a metaphor that garners billions of dollars in research funding every year. Yet researchers argue that when we dig down into our grey matter, our biology is anything but algorithmic. And increasingly, critics contend that the model of the brain as a computer is sending scientists (and their resources) nowhere fast. Is our attraction to the idea of the brain as a computer an accident of current human technology? Can we find a better metaphor that might lead to a new paradigm?