
NASA’s Atmospheric Waves Experiment (AWE) is a space research mission dedicated to studying atmospheric gravity waves. These waves play a crucial role in the dynamics of Earth’s atmosphere, particularly in upper layers such as the mesosphere, thermosphere, and ionosphere. AWE operates from its unique vantage point aboard the International Space Station (ISS).

One of the primary objectives of AWE is to observe and analyze atmospheric gravity waves (AGWs) in the mesopause region, which is about 54 miles (87 kilometers) above the Earth’s surface. By studying these waves, AWE aims to deepen our understanding of how weather events on Earth’s surface can generate these waves and how they propagate through and affect the atmosphere’s higher regions. This research is vital for comprehending the broader impacts of AGWs on the ionosphere-thermosphere-mesosphere system, particularly in terms of space weather effects, which have implications for satellite operations and communication systems.

AWE is led by Ludger Scherliess at Utah State University in Logan, and it is managed by the Explorers Program Office at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. Utah State University’s Space Dynamics Laboratory built the AWE instrument and provides the mission operations center.

A team of researchers has been studying the Ngogo community of wild chimpanzees in Uganda’s Kibale National Park for over twenty years. Their recent publication in the journal Science reveals that female chimpanzees in this population can experience menopause and have a postreproductive lifespan.

Prior to the study, these traits had been found among mammals only in a few species of toothed whales and, among primates, only in humans. These new demographic and physiological data can help researchers better understand why menopause and post-fertile survival occur in nature, and how they evolved in the human species.

This weekend in Vietnam, VNExpress quotes a government diplomat who gushed that Huang is “skipping luxury dinner parties at hotels and high-end restaurants.” He explained that “Jensen chooses street food with flavors and experiences that are hard to match anywhere else.”

If you want to follow in Huang’s footsteps, the source says that the Nvidia CEO was pictured at a sidewalk restaurant on Luong Ngoc Quyen Street in Hanoi. He also stopped at a restaurant on Hang Non Street to enjoy beef pho and drink coconut water, visited a goat hotpot restaurant on Hang Thiec Street, and drank Giang coffee on Nguyen Huu Huan Street.

Huang didn’t just spend his time eating and drinking in Hanoi this weekend. A Redditor shared some images and information about the Nvidia boss turning up at a “small LAN party.” In the images, you can see Huang on stage at one of the Vikings eSports Arena locations in Hanoi (there seem to be five of these internet cafe-style venues in the city). He posed for photos with various LAN party attendees, and it also looks like he took part in some kind of awards ceremony.

Philosophy of science.


We call it perception. We call it measurement. We call it analysis. But in the end it’s about how we take the world as it is, and derive from it the impression of it that we have in our minds.

We might have thought that we could do science “purely objectively” without any reference to observers or their nature. But what we’ve discovered particularly dramatically in our Physics Project is that the nature of us as observers is critical even in determining the most fundamental laws we attribute to the universe.

But what ultimately does an observer—say like us—do? And how can we make a theoretical framework for it? Much as we have a general model for the process of computation (instantiated by something like a Turing machine), we’d like to have a general model for the process of observation: a general “observer theory”.

At its core, Abundance360 is a year-round program for entrepreneurs, investors, and executives who want to create positive change. It offers them a unique opportunity to unlock their potential and access the latest technologies, tools, and connections needed to succeed in today’s world.

In addition to the Summit, Workshops, and Masterminds, members benefit from the support of a close-knit community of like-minded individuals who share the goal of creating a better future for humanity.

In work that could lead to more robust quantum computing, Princeton researchers have succeeded in forcing molecules into quantum entanglement.

For the first time, a team of Princeton physicists has been able to link together individual molecules into special states that are quantum mechanically “entangled.” In these bizarre states, the molecules remain correlated with each other—and can interact simultaneously—even if they are miles apart, or indeed, even if they occupy opposite ends of the universe. This research was published in the journal Science.

Molecular entanglement: a breakthrough for practical applications.

Summary: Researchers created a revolutionary system that can non-invasively convert silent thoughts into text, offering new communication possibilities for people with speech impairments due to illnesses or injuries.

The technology uses a wearable EEG cap to record brain activity and an AI model named DeWave to decode these signals into language. This portable system surpasses previous methods that required invasive surgery or cumbersome MRI scanning, achieving state-of-the-art EEG translation performance.

It shows promise in enhancing human-machine interactions and in aiding those who cannot speak, with potential applications in controlling devices like bionic arms or robots.
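
The summary does not describe DeWave’s internals, but the pipeline it sketches (EEG signal in, text out) can be pictured with a generic encoder plus sequence model. The toy PyTorch sketch below is purely hypothetical: every module choice, name, and dimension is a placeholder for illustration, not DeWave’s published architecture.

import torch
import torch.nn as nn

class EEGToTextSketch(nn.Module):
    # Hypothetical illustration: compress raw multichannel EEG over time,
    # contextualize with a transformer, then emit per-step token logits.
    def __init__(self, n_channels=64, d_model=256, vocab_size=50000):
        super().__init__()
        # 1-D convolution over the time axis turns raw channels into embeddings.
        self.encoder = nn.Conv1d(n_channels, d_model, kernel_size=9, stride=4)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=4)
        self.to_vocab = nn.Linear(d_model, vocab_size)

    def forward(self, eeg):                      # eeg: (batch, channels, time)
        h = self.encoder(eeg).transpose(1, 2)    # (batch, steps, d_model)
        h = self.transformer(h)
        return self.to_vocab(h)                  # (batch, steps, vocab_size)

logits = EEGToTextSketch()(torch.randn(1, 64, 1024))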

Cerebras introduces gigaGPT: GPT-3 sized models in 565 lines of code.


GigaGPT is Cerebras’ implementation of Andrej Karpathy’s nanoGPT – the simplest and most compact code base to train and fine-tune GPT models. Whereas nanoGPT can train models in the 100M-parameter range, gigaGPT trains models of well over 100B parameters. We do this without introducing additional code or relying on third-party frameworks – the entire repo is just 565 lines of code. Instead, gigaGPT utilizes the large memory and compute capacity of Cerebras hardware to enable large-scale training on vanilla torch.nn code. With no modifications, gigaGPT supports long context lengths and works with a variety of optimizers.
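
To make “vanilla torch.nn code” concrete, here is a minimal sketch of a nanoGPT-style transformer block written only with standard PyTorch modules. It is an illustration of the general coding style, not code from the gigaGPT repository, and the layer sizes are arbitrary placeholders.

import torch
import torch.nn as nn

class Block(nn.Module):
    # One pre-norm transformer block built from plain torch.nn modules.
    def __init__(self, d_model=768, n_heads=12):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x):
        # Boolean causal mask: True entries are blocked, so each position
        # attends only to itself and earlier positions.
        n = x.size(1)
        mask = torch.triu(torch.ones(n, n, dtype=torch.bool, device=x.device), 1)
        h = self.ln1(x)
        a, _ = self.attn(h, h, h, attn_mask=mask, need_weights=False)
        x = x + a
        return x + self.mlp(self.ln2(x))

A GPT model is essentially a token embedding, a stack of such blocks, and a final linear head; the claim above is that this plain style, with no sharding logic added, is what scales on Cerebras hardware.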

Why gigaGPT

While the transformer architecture is simple, training a large transformer on a large number of GPUs is hard. Beyond a few billion parameters, vanilla GPT models run out of memory on even the latest GPUs. Training larger models requires breaking up models into smaller pieces, distributing them to multiple GPUs, coordinating the workload among the workers, and assembling the results. This is typically done via LLM scaling frameworks such as Megatron, DeepSpeed, NeoX, Fairscale, and Mosaic Foundry. Though powerful, these frameworks introduce significant complexity.
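
A rough back-of-the-envelope calculation shows why memory runs out so quickly. A common estimate for mixed-precision Adam training is about 16 bytes of weights, gradients, and optimizer state per parameter, before counting any activations; the sketch below just does that arithmetic, and the numbers are illustrative rather than measured.

# Approximate training-state memory per parameter with mixed-precision Adam:
# fp16 weights (2 B) + fp16 gradients (2 B) + fp32 master weights (4 B)
# + fp32 Adam momentum (4 B) + fp32 Adam variance (4 B) = 16 B.
BYTES_PER_PARAM = 16

for params in (1e9, 6e9, 20e9, 100e9):
    gb = params * BYTES_PER_PARAM / 1e9
    print(f"{params / 1e9:>5.0f}B params -> ~{gb:,.0f} GB of training state")

Even a 6B-parameter model needs roughly 96 GB of state, more than a single 80 GB GPU holds, which is why GPU training at these scales relies on the sharding frameworks listed above.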