
Deep within our brain’s temporal lobes, two almond-shaped cell masses help keep us alive. This tiny region, called the amygdala, assists with a variety of brain activities. It helps us learn and remember. It triggers our fight-or-flight response. It even promotes the release of a feel-good chemical called dopamine.

Scientists have learned all this by studying the amygdala over hundreds of years. But we still haven’t reached a full understanding of how these processes work.

Now, Cold Spring Harbor Laboratory neuroscientist Bo Li has brought us several important steps closer. His lab recently made a series of discoveries showing how somatostatin-expressing (Sst+) neurons in the central amygdala (CeA) help us learn about threats and rewards. He also demonstrated how these neurons relate to dopamine. The discoveries could lead to future treatments for anxiety.

The resulting materials could be used for capturing greenhouse gases.

MIT researchers have used a computational model to identify about 10,000 possible metal-organic framework (MOF) structures that they classify as “ultrastable.” This stability makes them good candidates for applications such as converting methane gas to methanol.
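The screening idea described above can be sketched in a few lines. This is a minimal illustration, not the MIT group's actual pipeline: the field names, thresholds, and candidate data below are invented for the example, standing in for model-predicted stability metrics.

```python
# Illustrative sketch of stability-based screening of hypothetical MOF
# candidates. Field names and threshold values are assumptions made up
# for this example, not the published model's criteria.

def is_ultrastable(mof, thermal_min=400.0, activation_min=0.9):
    """Flag a candidate whose predicted thermal decomposition
    temperature (deg C) and solvent-removal (activation) stability
    score both clear their thresholds."""
    return (mof["thermal_c"] >= thermal_min
            and mof["activation_score"] >= activation_min)

candidates = [
    {"name": "MOF-A", "thermal_c": 450.0, "activation_score": 0.95},
    {"name": "MOF-B", "thermal_c": 320.0, "activation_score": 0.97},
    {"name": "MOF-C", "thermal_c": 510.0, "activation_score": 0.85},
]

ultrastable = [m["name"] for m in candidates if is_ultrastable(m)]
print(ultrastable)  # only MOF-A clears both thresholds
```

In a real workflow the per-candidate scores would come from a trained model rather than a hand-written list, but the final filtering step looks much like this.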

“When people come up with hypothetical MOF materials, they don’t necessarily know beforehand how stable that material is,” said Heather Kulik, an MIT associate professor of chemistry and chemical engineering and the senior author of the study, in a statement published on Tuesday.

Researchers at Texas A&M University have discovered a 1,000% difference in the storage capacity of metal-free, water-based battery electrodes. These batteries differ from lithium-ion batteries, which contain cobalt. The group’s pursuit of metal-free batteries stems from a desire for better control over the domestic supply chain, since cobalt and lithium are sourced from abroad. This safer chemistry […].

The molecules in our bodies are in constant communication. Some of these molecules provide a biochemical fingerprint that could indicate how a wound is healing, whether or not a cancer treatment is working or that a virus has invaded the body. If we could sense these signals in real time with high sensitivity, then we might be able to recognize health problems faster and even monitor disease as it progresses.

Now Northwestern University researchers have developed a new technology that makes it easier to eavesdrop on our body’s inner conversations.

While the body’s chemical signals are incredibly faint—making them difficult to detect and analyze—the researchers have developed a new method that boosts signals by more than 1,000 times. Transistors, the building block of electronics, can boost weak signals to provide an amplified output. The new approach makes signals easier to detect without complex and bulky electronics.
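To put the reported boost in perspective, a 1,000x amplification corresponds to roughly 60 decibels of voltage gain. The quick arithmetic below is a generic linear-amplifier calculation, not a model of the Northwestern transistor itself; the signal magnitudes are assumed for illustration.

```python
import math

def gain_db(vout, vin):
    """Voltage gain in decibels for a simple linear amplifier."""
    return 20 * math.log10(vout / vin)

# Example: a 10-microvolt biochemical signal amplified 1,000x
# becomes a 10-millivolt output, which ordinary electronics can read.
vin = 10e-6
vout = vin * 1000
print(round(gain_db(vout, vin)))  # a 1,000x voltage boost is 60 dB
```

That shift, from microvolts up into the millivolt range, is what makes the signals detectable without the complex, bulky instrumentation mentioned above.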

Max Planck scientists explore the possibilities of artificial intelligence in materials science and publish their review in the journal Nature Computational Science.

Advanced materials are becoming increasingly complex due to the demanding requirements they must fulfil regarding sustainability and applicability. Dierk Raabe and colleagues reviewed the use of artificial intelligence in materials science and the untapped design spaces it could open when combined with physics-based simulations. Compared to traditional simulation methods, AI has several advantages and will play a crucial role in materials science in the future.

Advanced materials are urgently needed for everyday life, be it in high technology, mobility, infrastructure, green energy or medicine. However, traditional ways of discovering and exploring new materials encounter limits due to the complexity of chemical compositions, structures and targeted properties. Moreover, new materials should not only enable novel applications, but also include sustainable ways of producing, using and recycling them.

Summary: A newly designed dry sensor that can measure brain activity may someday enable mind control of robotic systems.

Source: American Chemical Society.

It sounds like something from science fiction: Don a specialized, electronic headband and control a robot using your mind. But now, recent research published in ACS Applied Nano Materials has taken a step toward making this a reality.


Researchers from the Max-Planck-Institut für Eisenforschung (MPIE) review the status of physics-based modelling and discuss how combining these approaches with artificial intelligence can open so far untapped spaces for the design of complex materials.

They published their perspective in the journal Nature Computational Science (“Accelerating the design of compositionally complex materials via physics-informed artificial intelligence”).

This year’s NVIDIA GPU Technology Conference (GTC) could not have come at a more auspicious time for the company. The hottest topic in technology today is the Artificial Intelligence (AI) behind ChatGPT, other related Large Language Models (LLMs), and their uses in generative AI. Underlying all this new AI technology are NVIDIA GPUs. NVIDIA’s CEO Jensen Huang doubled down on support for LLMs and the future of generative AI built on them. He’s calling it “the iPhone moment for AI.” Using LLMs, AI computers can learn the languages of people, programs, images, or chemistry. Drawing on that large knowledge base, they can respond to a query by creating new, unique works: this is generative AI.

Jumbo-sized LLMs are taking this capability to new levels, most notably GPT-4, which was introduced just prior to GTC. Training these complex models takes thousands of GPUs, and applying them to specific problems requires more GPUs for inference. NVIDIA’s latest Hopper GPU, the H100, is known for training, but it can also be divided into multiple instances (up to seven), a feature NVIDIA calls MIG (Multi-Instance GPU), allowing multiple inference models to run on a single GPU. It’s in this inference mode that the GPU transforms queries into new outputs using trained LLMs.

NVIDIA is using its leadership position to build new business opportunities by being a full-stack supplier of AI, including chips, software, accelerator cards, systems, and even services. The company is opening up its services business in areas such as biology. Its pricing might be based on usage time, or it could be based on the value of the end product built with its services.