
We can calculate the travel times for the SpaceX Starship to reach Mars. With Starship, 90-day trips each way are relatively easy to achieve, compared with the usual 180–270 day one-way travel times. The trips can be faster because Starship will carry far more fuel, enabling more direct routes: it could catch up to Mars in one sixth of an orbit around the Sun instead of half an orbit.
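As a rough sanity check on the 180–270 day figure quoted above, the minimum-energy (Hohmann) transfer time between idealized circular Earth and Mars orbits can be computed directly. This is a back-of-envelope sketch, not mission data; the orbits are assumed circular and coplanar, and real transit times vary with the launch window:

```python
import math

AU = 1.496e11        # astronomical unit, meters
GM_SUN = 1.327e20    # solar gravitational parameter, m^3/s^2

r_earth = 1.000 * AU  # Earth's mean orbital radius
r_mars = 1.524 * AU   # Mars's mean orbital radius

# A Hohmann transfer is half of an ellipse whose semi-major axis is
# the average of the departure and arrival orbital radii.
a_transfer = (r_earth + r_mars) / 2

# Kepler's third law gives the full orbital period of that ellipse;
# the one-way transit is half a period.
period = 2 * math.pi * math.sqrt(a_transfer**3 / GM_SUN)
transit_days = period / 2 / 86400

print(f"Hohmann transfer time: {transit_days:.0f} days")  # ~259 days
```

The result, roughly 259 days, sits at the upper end of the conventional range, which is why cutting the trip to 90 days requires the much higher delta-v (and extra propellant) the article describes.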

There are ways to use extra expendable Starship tankers that fly alongside the main Starship and then transfer their extra fuel for deceleration from the higher speed.

If more infrastructure is built and operating in orbit around the Earth, it can enable further fuel savings for faster or larger missions. For example, reusable tugs could move a fully fueled Mars-bound ship to a higher orbit or even to escape velocity.

Princeton physicists have uncovered a groundbreaking quantum phase transition in superconductivity, challenging established theories and highlighting the need for new approaches to understanding quantum mechanics in solids.

Princeton physicists have discovered an abrupt change in quantum behavior while experimenting with a three-atom-thick material.

An atom is the smallest unit of an element. It consists of protons and neutrons in the nucleus, with electrons orbiting the nucleus.

Summary: The year 2023 witnessed groundbreaking discoveries in neuroscience, offering unprecedented insights into the human brain.

From animal-free brain organoids to the effects of optimism on cognitive skills, these top 10 articles have unveiled the mysteries of the mind.

Research revealed the risks of dietary trends like “dry scooping” and the impact of caffeine on brain plasticity. Additionally, the year showcased the potential of mushroom compounds for memory enhancement and the unexpected influence of virtual communication on brain activity.

Year 2017

North Korea claims it has again tested a hydrogen bomb underground and that it “successfully” loaded it onto the tip of an intercontinental ballistic missile, a claim that, if true, crosses a “red line” drawn by South Korea’s president last month.

In a state media announcement, North Korea confirmed the afternoon tremors in its northeast were indeed caused by the test of a nuclear device, and that leader Kim Jong Un personally signed off on the test.

“North Korea has conducted a major Nuclear Test. Their words and actions continue to be very hostile and dangerous to the United States,” President Trump tweeted Sunday morning in response. “North Korea is a rogue nation which has become a great threat and embarrassment to China, which is trying to help but with little success.”

Heat is the enemy of quantum uncertainty. By arranging light-absorbing molecules in an ordered fashion, physicists in Japan have maintained the critical, yet-to-be-determined state of electron spins for 100 nanoseconds near room temperature.

The innovation could have a profound impact on progress in developing quantum technology that doesn’t rely on the bulky and expensive cooling equipment currently needed to keep particles in a so-called ‘coherent’ form.

Unlike everyday objects, which have settled qualities like color, position, speed, and rotation, quantum descriptions of objects involve something less definite. Until their characteristics are locked in place by a measurement, we have to treat objects as if they are smeared over a wide space and spinning in several directions at once, yet to adopt a single definite state.

How hard would it be to train an AI model to be secretly evil? As it turns out, according to AI researchers, not very — and attempting to reroute a bad apple AI’s more sinister proclivities might backfire in the long run.

In a yet-to-be-peer-reviewed new paper, researchers at the Google-backed AI firm Anthropic claim they were able to train advanced large language models (LLMs) with “exploitable code,” meaning the models can be triggered to exhibit bad AI behavior via seemingly benign words or phrases. As the Anthropic researchers write in the paper, humans often engage in “strategically deceptive behavior,” meaning “behaving helpfully in most situations, but then behaving very differently to pursue alternative objectives when given the opportunity.” If an AI system were trained to do the same, the scientists wondered, could they “detect it and remove it using current state-of-the-art safety training techniques?”

Unfortunately, as it stands, the answer to that latter question appears to be a resounding “no.” The Anthropic scientists found that once a model is trained with exploitable code, it’s exceedingly difficult, if not impossible, to train the machine out of its duplicitous tendencies. And what’s worse, according to the paper, attempts to rein in and reconfigure a deceptive model may well reinforce its bad behavior, as a model might just learn how to better hide its transgressions.

One brain to rule them all

Two researchers have revealed how they are creating a single super-brain that can pilot any robot, no matter how different they are.

Sergey Levine and Karol Hausman wrote in IEEE Spectrum that generative AI, which can create text and images, is not enough for robotics because the Internet does not have enough data on how robots interact with the world.