
A new episode of Robots In Space will premiere on Saturday, January 4th at 1 PM PT.

The topic is the mysterious Noctis Labyrinthus — Labyrinth of Night — on Mars! Mars Express is one of the spacecraft investigating it.


Discover why Noctis Labyrinthus could be humanity’s future home on Mars! Join aerospace engineer Mike DiVerde as he analyzes groundbreaking research from three Mars orbital missions that revealed extensive water systems in this equatorial region. Using data from Mars Express, MRO, and Mars Global Surveyor, learn how ancient groundwater shaped this mysterious labyrinth of canyons near the massive Tharsis volcanoes. This comprehensive analysis explains why Noctis Labyrinthus, with its potential subsurface water, caves, and strategic location, might be the perfect site for future Mars colonization. Whether you’re passionate about Mars exploration or curious about humanity’s next giant leap, this evidence-based examination of Mars geology and potential habitable zones will change how you think about settling the Red Planet.

At the following link you will find the six most outstanding articles/videos published during the month of December on the website “El Radar del Rejuvenecimiento”:

[https://mailsystem.es/campaign-content/50033](https://mailsystem.es/campaign-content/50033)

The news/videos are all from reputable scientific sources and are almost always published in English, but each of them includes a summary in Spanish.


As the calendar flips to the second quarter of the century, conversations about the transformative potential of Artificial Intelligence (AI) are reaching a fever pitch.

Hula hooping is so commonplace that we may overlook some interesting questions it raises: “What keeps a hula hoop up against gravity?” and “Are some body types better for hula hooping than others?” A team of mathematicians explored and answered these questions with findings that also point to new ways to better harness energy and improve robotic positioners.

The results are the first to explain the physics and mathematics of hula hooping.

“We were specifically interested in what kinds of body motions and shapes could successfully hold the hoop up and what physical requirements and restrictions are involved,” explains Leif Ristroph, an associate professor at New York University’s Courant Institute of Mathematical Sciences and the senior author of the paper, which appears in the Proceedings of the National Academy of Sciences.

Detecting infrared light is critical in an enormous range of technologies, from remote controls to autofocus systems to self-driving cars and virtual reality headsets. That means there would be major benefits from improving the efficiency of infrared sensors, such as photodiodes.

Researchers at Aalto University have developed a new type of infrared photodiode that is 35% more responsive than other germanium-based components at 1.55 µm, the key wavelength for telecommunications. Importantly, this new device can be manufactured using current production techniques, making it highly practical for adoption.

“It took us eight years from the idea to proof-of-concept,” says Hele Savin, a professor at Aalto University.

As DARPA forges ahead with this new initiative, it raises important questions about the balance between enhancing national security and safeguarding individual privacy and civil liberties.

The potential repercussions of deploying sophisticated algorithms to interpret human behavior could lead to ethical dilemmas and increased scrutiny from civil rights advocates.

In summary, DARPA’s Theory of Mind program is positioned at the intersection of technology and national security, focusing on leveraging machine learning to improve decision-making in complex scenarios.

This is according to AI ethicists from the University of Cambridge, who say we are at the dawn of a “lucrative yet troubling new marketplace for digital signals of intent”, from buying movie tickets to voting for candidates. They call this the Intention Economy.

Researchers from Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) argue that the explosion in generative AI, and our increasing familiarity with chatbots, opens a new frontier of “persuasive technologies” – one hinted at in recent corporate announcements by tech giants.

“Anthropomorphic” AI agents, from chatbot assistants to digital tutors and girlfriends, will have access to vast quantities of intimate psychological and behavioural data, often gleaned via informal, conversational spoken dialogue.

Heninger and Johnson argue that chaos theory limits the capacity of even superintelligent systems to simulate or predict events within our universe, implying that Yudkowskian warnings may not accurately describe the nature of ASI risk. #aisafety #ai #tech


Completely eliminating the uncertainty would require making measurements with perfect precision, which does not seem to be possible in our universe. We can prove that fundamental sources of uncertainty make it impossible to know important things about the future, even with arbitrarily high intelligence. Atomic scale uncertainty, which is guaranteed to exist by Heisenberg’s Uncertainty Principle, can make macroscopic motion unpredictable in a surprisingly short amount of time. Superintelligence is not omniscience.

Chaos theory thus allows us to rigorously show that there are ceilings on some particular abilities. If we can prove that a system is chaotic, then we can conclude that the system offers diminishing returns to intelligence. Most predictions of the future of a chaotic system are impossible to make reliably. Without the ability to make better predictions, and plan on the basis of these predictions, intelligence becomes much less useful.
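As a loose illustration (not taken from Heninger and Johnson’s post), the short Python sketch below uses the logistic map, a textbook chaotic system, to show how a measurement error far below any realistic physical limit still destroys predictability within a few dozen steps:

```python
# Illustrative sketch (assumed example, not the authors' code): sensitive
# dependence on initial conditions in the logistic map x -> r * x * (1 - x),
# which is chaotic at r = 4. A perturbation of 1e-15, far smaller than any
# realistic measurement error, grows until the two trajectories are unrelated.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x_a = 0.4            # "true" initial condition
x_b = 0.4 + 1e-15    # imperceptibly different measurement of it

for step in range(1, 61):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x_a - x_b):.3e}")

# The gap roughly doubles each step, so after about 50 iterations it spans
# the system's full range: no finite measurement precision buys more than a
# short prediction horizon.
```

More computing power does not help here; only exponentially more precise initial measurements would, which is exactly what fundamental sources of uncertainty rule out.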

This does not mean that intelligence becomes useless, or that there is nothing about chaos which can be reliably predicted.

This is a low-effort post. I mostly want to get other people’s takes and express concern about the lack of detailed and publicly available plans so far. This post reflects my personal opinion and not necessarily that of other members of Apollo Research. I’d like to thank Ryan Greenblatt, Bronson Schoen, Josh Clymer, Buck Shlegeris, Dan Braun, Mikita Balesni, Jérémy Scheurer, and Cody Rushing for comments and discussion.

I think short timelines, e.g. AIs that can replace a top researcher at an AGI lab without losses in capabilities by 2027, are plausible. Some people have posted ideas on what a reasonable plan to reduce AI risk for such timelines might look like (e.g. Sam Bowman’s checklist, or Holden Karnofsky’s list in his 2022 nearcast), but I find them insufficient for the magnitude of the stakes (to be clear, I don’t think these example lists were intended to be an extensive plan).