
A new study published in the journal Neuroscience has found that long-term exposure to high-altitude conditions can slow down the way people recognize faces and change the way their brains process emotions. The research compared young adults living at high altitudes with those living at lower altitudes and found that the high-altitude group not only took longer to recognize emotional faces but also showed distinct changes in their brain activity.

Long-term residence in high-altitude environments has been linked to a greater occurrence of mental health challenges, such as anxiety and depression. Statistics show that depression is significantly more common in high-altitude regions than in lower areas. Studies focusing on people who migrate to or work in high-altitude places, like those in Tibet or the Himalayas, have consistently shown that the reduced oxygen levels at these elevations can negatively impact emotional well-being. Moving to high-altitude regions has been reported to increase the chances of experiencing depression, anxiety, and even suicidal thoughts.

Depression is known to be closely related to negative patterns in how we think and process information. A strong connection exists between depression and a tendency towards negative thinking, fixating on negative thoughts, and difficulty controlling impulsive behaviors. This relationship has been observed in soldiers stationed in high-altitude areas; those with poorer mental health tend to exhibit stronger negative biases in their thinking. This negative thinking bias can also influence how we perceive facial expressions, which is important for social interactions. For instance, individuals with depression tend to show heightened brain responses to negative facial expressions and take longer to shift their attention away from them. Brain activity patterns, measured through electroencephalography, can even be used to detect depression based on how the brain reacts to emotional stimuli.

Researchers in Germany have developed a technique that allows better control over atomic reflections in quantum sensors. This new approach uses carefully engineered light pulses as atomic mirrors to cut noise and sharpen quantum measurements.

There’s a big difference between regular and quantum sensors. Regular sensors rely on classical physics to measure properties like temperature, pressure, or motion, but their measurements are affected by factors like thermal noise, material quality, and environmental disturbances.

WASHINGTON — Space infrastructure company Redwire has secured a contract to provide an additional satellite platform for a U.S. Space Force orbital refueling experiment.

The satellite order, announced Feb. 11, is for a third Mako satellite bus for the Space Force’s Tetra-6 in-orbit refueling experiment scheduled for 2027. The prime contractor for the experiment, Arcfield, had previously ordered two Mako platforms for the Tetra-5 experiment, scheduled for 2025.

The Tetra-5 and Tetra-6 missions represent key tests of in-space refueling capabilities, a sector of the market being closely watched by military and commercial stakeholders as they seek to extend satellite lifespans.

In today’s AI news, backed by $200 million in funding at a $2 billion valuation, Scott Wu and his team at Cognition are building an AI tool that could potentially disrupt the whole industry. Devin is an autonomous AI agent that, in theory, writes the code itself—no people involved—and can complete entire projects typically assigned to developers.

In other advancements, OpenAI is changing how it trains AI models to explicitly embrace “intellectual freedom … no matter how challenging or controversial a topic may be,” the company says in a new policy. OpenAI is releasing a significantly expanded version of its Model Spec, a document that defines how its AI models should behave — and is making it free for anyone to use or modify.

Then, xAI, the artificial intelligence company founded by Elon Musk, is set to launch Grok 3 on Monday, Feb. 17. According to xAI, this latest version of its chatbot, which Musk describes as “scary smart,” represents a major step forward, improving reasoning, computational power and adaptability. Grok 3’s development was accelerated by its Colossus supercomputer, which was built in just eight months, powered by 100,000 Nvidia H100 GPUs.

And, large language models can learn complex reasoning tasks without relying on large datasets, according to a new study by researchers at Shanghai Jiao Tong University. Their findings show that with just a small batch of well-curated examples, you can train an LLM for tasks that were thought to require tens of thousands of training instances.

Also featured is OpenAI’s new o1 model, which focuses on slower, more deliberate reasoning — much like how humans think — in order to solve complex problems. Then, join Turing Award laureate Yann LeCun, Chief AI Scientist at Meta and Professor at NYU, as he discusses with Link Ventures’ John Werner the future of artificial intelligence and how open-source development is driving innovation. In this wide-ranging conversation, LeCun explains why AI systems won’t “take over” but will instead serve as empowering assistants.

A worldwide mass ban of DeepSeek AI has just begun, and the implications are striking. Governments, corporations, and AI regulators are now cracking down on one of the fastest-growing AI models, sparking intense debates about AI safety, censorship, and control. But why is DeepSeek AI being banned, and what does this mean for the future of artificial intelligence?

In this video, we break down why countries are banning DeepSeek AI, the real reasons behind this massive restriction, and what this means for the AI industry and everyday users. Is this about security risks, misinformation, or something even bigger? And how will OpenAI, Google, and other tech giants respond to this sudden AI crackdown?

With the AI revolution accelerating faster than governments can regulate, this global ban on DeepSeek could signal the beginning of tighter AI control worldwide. But is this about protecting people—or protecting power? Watch till the end to find out!

Why is DeepSeek AI being banned? What does this mean for the future of AI? Is this the start of global AI censorship? This video will answer all these questions and more—so don’t miss it!

2-minute pitch for my startup Cathedral Therapeutics (co-founded with David Curiel)! If you’re interested, please feel free to reach out.


Cathedral’s novel technology protects adeno-associated virus (AAV) gene therapies from the immune system so that all patients can access the life-changing cures they need. We encapsulate AAVs inside hollow organelles found in human cells, called protein vaults, to make vaultAAV complexes. This approach shields the encapsulated AAVs from antibodies so that they can enter cells and deliver beneficial DNA. https://www.cathedraltherapeutics.com/

Uranus and Neptune are the seventh and eighth planets from the sun, respectively, and as such share a lot of the same characteristics. Though they are different colors (Neptune is bluer than Uranus’ cyan hue) and have different numbers of rings and moons, both planets are ice giants of similar size, each just over 30,000 miles (50,000 kilometers) wide. Their masses are also roughly comparable, with Neptune coming in at 1.024 × 10²⁶ kg (about 17 times the mass of Earth) and Uranus weighing 8.682 × 10²⁵ kg (about 14 times the mass of Earth). What’s more, both planets have upper atmospheres made up mostly of hydrogen, helium, and methane. Put simply, the characteristics of Neptune and those of Uranus are very similar despite their different colors.
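As a quick sanity check, dividing each planet’s quoted mass by Earth’s mass (about 5.972 × 10²⁴ kg, a standard value not stated in the article) reproduces the Earth-mass ratios given above:

```python
# Verify the Earth-mass ratios quoted for Neptune and Uranus.
# All masses in kilograms; Earth's mass is an assumed standard constant.
EARTH_MASS = 5.972e24
NEPTUNE_MASS = 1.024e26
URANUS_MASS = 8.682e25

neptune_ratio = NEPTUNE_MASS / EARTH_MASS  # ≈ 17.1, matching "about 17 times"
uranus_ratio = URANUS_MASS / EARTH_MASS    # ≈ 14.5, matching "about 14 times"

print(f"Neptune: {neptune_ratio:.1f} Earth masses")
print(f"Uranus:  {uranus_ratio:.1f} Earth masses")
```
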

Now, it seems the two worlds could have yet another thing in common, and this one is particularly intriguing when compared with Earth. Neptune and Uranus might be home to some incredibly deep oceans that make our own look like puddles.

Earth’s own oceans are already mysterious enough. They cover roughly 70% of the planet’s surface, yet only a small portion of our ocean has been explored, with the Nippon Foundation-GEBCO project stating that as of June 2024, just 26.1% of the entire seafloor had been mapped. The deepest point in the ocean, known as the Challenger Deep, sits beneath the western Pacific Ocean, southwest of the U.S. island territory of Guam, and is roughly 35,876 feet (10,935 meters) deep. Just what life is like at such depths remains something of a mystery, with the deep ocean already proving to be home to prehistoric-looking sea animals that are, frankly, nothing short of nightmare fuel. But the deepest ocean trenches on Earth are absolutely nothing compared to the depths of the oceans that might well exist on Neptune and Uranus.

Buildings cost a lot these days. But when concrete buildings are being constructed, there’s another material that can make them less expensive: mud.

MIT researchers have developed a method to use lightly treated mud, including soil from a building site, as the “formwork” molds into which concrete is poured. The technique deploys 3D printing and can replace the more costly method of building elaborate wood formworks for concrete construction.

“What we’ve demonstrated is that we can essentially take the ground we’re standing on, or waste soil from a construction site, and transform it into accurate, highly complex, and flexible formwork for customized concrete structures,” says Sandy Curth, a PhD student in MIT’s Department of Architecture who has helped spearhead the project.