
A UK company with lofty aspirations around sustainable space travel has test-fired a rocket engine powered in part by plastic waste. Pulsar Fusion’s hybrid rocket engine is part of an ambitious journey that also involves the development of nuclear fusion technology for high-speed propulsion, which could cut travel times to Mars in half.

The idea of incorporating recycled plastic waste into hybrid rocket fuels has been explored before. Virgin Galactic flirted with it back in 2014, using a fuel based on a class of thermoset plastics, though the approach was swiftly abandoned after a failed test flight. Scottish company Skyrora is another outfit working on such a technology, having successfully tested its Ecosene fuel made from converted plastic waste.


Gen. David Thompson, vice chief of space operations for the US Space Force, said Saturday China is developing its space capabilities at “twice the rate” of the US.

Speaking on a panel of US space experts and leaders at the Reagan National Defense Forum, moderated by CNN’s Kristin Fisher, Gen. Thompson warned China could overtake the US in space capabilities by the end of the decade.

“The fact, that in essence, on average, they are building and fielding and updating their space capabilities at twice the rate we are means that very soon, if we don’t start accelerating our development and delivery capabilities, they will exceed us,” Gen. Thompson said, adding, “2030 is not an unreasonable estimate.”

Black holes are one of the greatest mysteries of the universe—for example, a black hole with the mass of our sun has a radius of only 3 kilometers. Black holes in orbit around each other emit gravitational radiation—oscillations of space and time predicted by Albert Einstein in 1916. This causes the orbit to become faster and tighter, and eventually, the black holes merge in a final burst of radiation. These gravitational waves propagate through the universe at the speed of light, and are detected by observatories in the U.S. (LIGO) and Italy (Virgo). Scientists compare the data collected by the observatories against theoretical predictions to estimate the properties of the source, including how large the black holes are and how fast they are spinning. Currently, this procedure takes at least hours, often months.
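The classical procedure described above amounts to Bayesian parameter estimation: scan over candidate source parameters, generate a theoretical waveform for each, and score it against the noisy detector data. The toy sketch below illustrates why this is slow — every candidate requires generating and comparing a full template. A pure sinusoid stands in for a merger waveform, and all frequencies, noise levels, and grid choices are invented for illustration; real analyses use far richer waveform models and detector noise spectra.

```python
import math
import random

random.seed(0)

def waveform(freq, n=200, dt=0.01):
    """Toy 'template': a pure sinusoid standing in for a merger waveform."""
    return [math.sin(2 * math.pi * freq * i * dt) for i in range(n)]

def log_likelihood(data, template, sigma=0.3):
    """Gaussian noise model: score how well a template explains the data."""
    return -sum((d - t) ** 2 for d, t in zip(data, template)) / (2 * sigma ** 2)

# Simulated "observation": a 5 Hz signal buried in noise.
true_freq = 5.0
data = [s + random.gauss(0, 0.3) for s in waveform(true_freq)]

# Classical approach: scan a grid of candidate parameters,
# generating and scoring one template per candidate.
grid = [3.0 + 0.1 * k for k in range(41)]  # candidate frequencies, 3-7 Hz
posterior = {f: log_likelihood(data, waveform(f)) for f in grid}
best = max(posterior, key=posterior.get)
print(f"recovered frequency: {best:.1f} Hz")
```

With many parameters instead of one, the grid explodes combinatorially, which is why real pipelines take hours to months.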

An interdisciplinary team of researchers from the Max Planck Institute for Intelligent Systems (MPI-IS) in Tübingen and the Max Planck Institute for Gravitational Physics (Albert Einstein Institute/AEI) in Potsdam is using state-of-the-art machine learning methods to speed up this process. They developed an algorithm using a deep neural network, a complex computer code built from a sequence of simpler operations, inspired by the human brain. Within seconds, the system infers all properties of the binary black-hole source. Their research results are published today in Physical Review Letters.

“Our method can make very accurate statements in a few seconds about how big and massive the two [black holes] were that generated the gravitational waves when they merged. How fast do the black holes rotate, how far away are they from Earth and from which direction is the gravitational wave coming? We can deduce all this from the observed data and even make statements about the accuracy of this calculation,” explains Maximilian Dax, first author of the study Real-Time Gravitational Wave Science with Neural Posterior Estimation and Ph.D. student in the Empirical Inference Department at MPI-IS.
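The speedup comes from amortized inference: the expensive work is done once, offline, by training a model on many simulated signals, so that analyzing a new observation costs almost nothing. The team's actual method is neural posterior estimation with deep networks; the sketch below is a drastically simplified stand-in that captures only the train-once/infer-instantly shape — a zero-crossing summary statistic and a one-variable linear fit, with all frequencies and noise levels invented.

```python
import math
import random

random.seed(1)

def waveform(freq, n=200, dt=0.01):
    """Toy stand-in for a merger waveform: a pure sinusoid."""
    return [math.sin(2 * math.pi * freq * i * dt) for i in range(n)]

def simulate(freq, sigma=0.05):
    """Simulated detector output: waveform plus Gaussian noise."""
    return [s + random.gauss(0, sigma) for s in waveform(freq)]

def summary(data):
    """Cheap summary statistic: number of zero crossings."""
    return sum(1 for a, b in zip(data, data[1:]) if a * b < 0)

# --- Offline training: learn freq ~ w * crossings + b from simulations ---
freqs = [random.uniform(3.0, 7.0) for _ in range(500)]
xs = [summary(simulate(f)) for f in freqs]
n = len(xs)
mx, mf = sum(xs) / n, sum(freqs) / n
w = sum((x - mx) * (f - mf) for x, f in zip(xs, freqs)) / \
    sum((x - mx) ** 2 for x in xs)
b = mf - w * mx

# --- Online inference: one new observation mapped to an estimate instantly ---
obs = simulate(5.0)
estimate = w * summary(obs) + b
print(f"estimated frequency: {estimate:.2f} Hz")
```

The grid scan in the classical approach is replaced by a single function evaluation; the real method additionally returns a full posterior distribution, not just a point estimate.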

Computer engineers at Duke University have developed a new AI method for accurately predicting the power consumption of any type of computer processor more than a trillion times per second while barely using any computational power itself. Dubbed APOLLO, the technique has been validated on real-world, high-performance microprocessors and could help improve processor efficiency and inform the development of new microprocessors.

The approach is detailed in a paper published at MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture, one of the top-tier conferences in computer architecture, where it was selected as the conference’s best publication.

“This is an intensively studied problem that has traditionally relied on extra circuitry to address,” said Zhiyao Xie, first author of the paper and a Ph.D. candidate in the laboratory of Yiran Chen, professor of electrical and computer engineering at Duke. “But our approach runs directly on the microprocessor in the background, which opens many new opportunities. I think that’s why people are excited about it.”
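As reported, APOLLO selects a small set of "proxy" signals from the processor's millions of internal signals and combines them in a lightweight model that estimates power every cycle. The sketch below mimics that shape on entirely synthetic data: per-cycle toggle activity for a handful of invented signals, correlation-based proxy selection, and a tiny gradient-descent linear fit. The signal counts, weights, and selection method are illustrative assumptions, not the paper's actual algorithm.

```python
import random

random.seed(2)

NUM_SIGNALS, CYCLES = 50, 400

# Synthetic trace: per-cycle toggle activity (0 or 1) for each signal.
trace = [[random.randint(0, 1) for _ in range(NUM_SIGNALS)]
         for _ in range(CYCLES)]

# Ground-truth power: only a handful of signals actually matter.
true_w = {3: 2.0, 17: 1.5, 42: 3.0}
power = [sum(w * row[i] for i, w in true_w.items()) + random.gauss(0, 0.1)
         for row in trace]

# Step 1: pick a small proxy set -- signals most correlated with power.
def corr(i):
    xs = [row[i] for row in trace]
    mx, mp = sum(xs) / CYCLES, sum(power) / CYCLES
    num = sum((x - mx) * (p - mp) for x, p in zip(xs, power))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((p - mp) ** 2 for p in power)) ** 0.5
    return num / den

proxies = sorted(range(NUM_SIGNALS), key=corr, reverse=True)[:3]

# Step 2: fit proxy weights with a tiny gradient-descent linear model,
# cheap enough to evaluate on every clock cycle.
w = {i: 0.0 for i in proxies}
bias = 0.0
for _ in range(500):
    for row, p in zip(trace, power):
        pred = bias + sum(w[i] * row[i] for i in proxies)
        err = pred - p
        bias -= 0.01 * err
        for i in proxies:
            w[i] -= 0.01 * err * row[i]

print("selected proxies:", sorted(proxies))
```

The key property being illustrated: once the few proxies are chosen, per-cycle prediction is just a handful of multiply-adds, which is why such a monitor can run in the background at negligible cost.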

Security experts around the world raced Friday to patch one of the worst computer vulnerabilities discovered in years, a critical flaw in open-source code widely used across industry and government in cloud services and enterprise software.

“I’d be hard-pressed to think of a company that’s not at risk,” said Joe Sullivan, chief security officer for Cloudflare, whose online infrastructure protects websites from malicious actors. Untold millions of servers have it installed, and experts said the fallout would not be known for several days.

New Zealand’s computer emergency response team was among the first to report that the flaw in a Java-language utility for Apache servers used to log user activity was being “actively exploited in the wild” just hours after it was publicly reported Thursday and a patch released.

The Artificial Intelligence industry should create a global community of hackers and “threat modelers” dedicated to stress-testing the harm potential of new AI products in order to earn the trust of governments and the public before it’s too late.

This is one of the recommendations made by an international team of risk and machine-learning experts, led by researchers at the University of Cambridge’s Center for the Study of Existential Risk (CSER), who have authored a new “call to action” published today in the journal Science.

They say that companies building intelligent technologies should harness techniques such as “red team” hacking, audit trails and “bias bounties”—paying out rewards for revealing ethical flaws—to prove their integrity before releasing AI for use on the wider public.