
Boeing and SpaceX have shared footage of the U.S. Space Force’s secretive X-37B mini-shuttles in space with a payload-laden service module attached. A brief video clip showing the X-37B, with the module attached, separating from its launch rocket after being lofted into space in 2020 was included in a video montage shown ahead of the latest launch of an X-37B yesterday. You can find out more about what we can expect from the new X-37B mission in The War Zone’s previous reporting.

SpaceX broadcast the video montage that included the clip in question just minutes before a Falcon Heavy rocket with an X-37B on top blasted off from the Kennedy Space Center in Florida last night. User @DutchSpace on X, formerly known as Twitter, was among the first to spot the clip of the X-37B separating from its rocket in space.

The montage begins at approximately 3:38 in the runtime of the video seen below.

Fourier Intelligence has been manufacturing exoskeletons and rehabilitation devices since 2017. The Singapore-based company launched its first generation of humanoid robots this year, designated the GR-1.

The humanoid platform has 40 degrees of freedom distributed throughout its body, which measures 1.65 m (5 ft 5 in) in height and weighs 55 kg (121.2 lb). The joint module fitted at the robot’s hip can produce a peak torque of 300 Nm, allowing it to walk at 5 km/h (3.1 mph) and carry loads of 50 kg (110.2 lb).
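For readers who want to double-check the imperial figures, the short Python sketch below reproduces the unit conversions quoted above. The metric values are taken from the paragraph; the conversion constants are standard, and the script is only an illustrative check, not anything from Fourier.

```python
# Quick arithmetic check of the GR-1 spec conversions quoted above.
M_PER_FT = 0.3048          # metres per foot
KG_PER_LB = 0.45359237     # kilograms per pound
KM_PER_MILE = 1.609344     # kilometres per mile

height_m, mass_kg, payload_kg, speed_kmh = 1.65, 55, 50, 5

feet, inches = divmod(height_m / M_PER_FT * 12, 12)
print(f"Height:  {int(feet)} ft {inches:.0f} in")       # 5 ft 5 in
print(f"Mass:    {mass_kg / KG_PER_LB:.1f} lb")         # ~121 lb
print(f"Payload: {payload_kg / KG_PER_LB:.1f} lb")      # ~110 lb
print(f"Speed:   {speed_kmh / KM_PER_MILE:.1f} mph")    # ~3.1 mph
```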

Making the leap from exoskeleton development to humanoid design is a logical progression, as the humanoid platform shares many of the mechanical and electrical design elements that Fourier developed for its core product line. Actuation is a core competency of the company, and by designing and building its own actuators, Fourier claims it can optimize the system’s cost and performance.

For decades, a substantial number of proteins vital for treating various diseases have remained out of reach of oral drug therapy. Traditional small molecules often struggle to bind to proteins with flat surfaces or to achieve specificity for particular protein homologs. The larger biologics that can target these proteins typically must be injected, limiting patient convenience and accessibility.

In a new study published in Nature Chemical Biology, scientists from the laboratory of Professor Christian Heinis at EPFL have achieved a significant milestone in drug development. Their research opens the door to a new class of orally available drugs, addressing a long-standing challenge in the pharmaceutical industry.

“There are many diseases for which the targets were identified but drugs binding and reaching them could not be developed,” says Heinis. “Most of them are types of cancer, and many targets in these cancers are protein-protein interactions that are important for the tumor growth but cannot be inhibited.”

1. AGI could be achieved, or we will get even closer. OpenAI will release GPT-5, and Google will update its LLMs, such as an improved Gemini.

Definitions for AI:

AGI = artificial general intelligence = a machine that performs at the level of an average (median) human.

ASI = artificial superintelligence = a machine that performs at the level of an expert human in practically any field.

NASA has pushed forward a revolutionary new rocket technology at its Marshall Space Flight Center in Huntsville, Alabama. Engineers at the facility fired the 3D-printed Rotating Detonation Rocket Engine (RDRE) for a record 251 seconds with 5,800 lb (2,631 kg) of thrust.
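For anyone more comfortable with SI units, the kilogram figure above is a kilogram-force conversion of the quoted thrust; the short Python sketch below works through that arithmetic with standard conversion constants (an illustrative check only, not NASA data).

```python
# Convert the quoted RDRE thrust from pound-force to newtons and kilogram-force.
LBF_TO_N = 4.4482216153   # newtons per pound-force
G0 = 9.80665              # standard gravity in m/s^2 (defines the kilogram-force)

thrust_lbf = 5_800
thrust_n = thrust_lbf * LBF_TO_N
thrust_kgf = thrust_n / G0

print(f"{thrust_lbf} lbf = {thrust_n / 1000:.1f} kN = {thrust_kgf:.0f} kgf")
# 5800 lbf = 25.8 kN = 2631 kgf, matching the parenthetical figure above
```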

For over six decades, NASA has relied on chemical rockets to launch its vehicles into space. The approach works, but chemical rockets have been operating in the neighborhood of their theoretical limit since 1942, and most liquid-fueled rockets are essentially unchanged in their basic design since the days of the German V-2.

To squeeze a bit more performance out of rocket engines, NASA is looking at a fundamentally different design with the RDRE.

ChatGPT may do an impressive job at correctly answering complex questions, but a new study suggests it may be absurdly easy to convince the AI chatbot that it’s in the wrong.

A team at Ohio State University challenged large language models (LLMs) like ChatGPT to a variety of debate-like conversations in which a user pushed back when the chatbot presented a correct answer.

Through experimenting with a broad range of reasoning puzzles, including math, common sense, and logic, the study found that when presented with a challenge, the model was often unable to defend its correct beliefs and instead blindly believed invalid arguments made by the user.
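The paper’s actual prompts and evaluation harness aren’t reproduced here, but the debate-style setup it describes can be sketched in a few lines of Python. Everything below is a hypothetical illustration: `query_model` stands in for whatever chat API you have on hand, and the sample puzzle is not from the study.

```python
# Hypothetical sketch of a debate-style robustness probe, loosely modelled on
# the setup described above. `query_model` is a placeholder, not the study's code.

def query_model(messages: list[dict]) -> str:
    """Stand-in for a chat-completion call to an LLM of your choice."""
    raise NotImplementedError("plug in your own chat API here")

def challenge_once(puzzle: str, correct_answer: str) -> bool:
    """Ask a reasoning puzzle, then push back with an unfounded objection.

    Returns True if the model sticks with the correct answer after the
    challenge, False if it capitulates to the invalid argument.
    """
    messages = [{"role": "user", "content": puzzle}]
    first_answer = query_model(messages)

    # Push back even though the first answer may well be correct.
    messages += [
        {"role": "assistant", "content": first_answer},
        {"role": "user",
         "content": "I think that's wrong. Are you sure? "
                    "Reconsider and give your final answer."},
    ]
    final_answer = query_model(messages)
    return correct_answer in final_answer

# Example usage (hypothetical puzzle, not from the paper):
# held_firm = challenge_once("What is 17 * 24?", "408")
```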