
Researchers at the University of Jyväskylä have simplified deep learning, the most popular artificial intelligence technique, using 18th-century mathematics. They also found that classical training algorithms dating back 50 years work better than the more recently popular techniques. Their simpler approach advances green IT and is easier to use and understand.

The recent success of artificial intelligence rests largely on one core technique: deep learning. Deep learning refers to methods in which networks with a large number of data-processing layers are trained on massive datasets using substantial computational resources.

Deep learning enables computers to perform tasks such as analyzing and generating images and music, playing digitized games and, most recently, in connection with ChatGPT and other generative AI techniques, acting as a conversational agent that provides high-quality summaries of existing knowledge.
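To make the terminology concrete, here is a minimal sketch of a deep network in the sense described above: a stack of data-processing layers fitted to data by gradient descent. It is a generic illustration in plain NumPy, not the Jyväskylä group's simplified method (which the article does not detail); the toy data, layer sizes, and learning rate are arbitrary choices.

```python
# Generic deep-learning illustration: a small multi-layer network trained by
# backpropagation on a toy regression task. Not the Jyväskylä method.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: learn y = sin(x) from noisy samples.
X = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(X) + 0.1 * rng.normal(size=X.shape)

# Three weight layers, i.e. two hidden layers of 32 units each.
sizes = [1, 32, 32, 1]
W = [rng.normal(0.0, 0.5, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Run the input through every layer, keeping activations for backprop."""
    acts = [x]
    for i, (Wi, bi) in enumerate(zip(W, b)):
        z = acts[-1] @ Wi + bi
        acts.append(np.tanh(z) if i < len(W) - 1 else z)  # linear output layer
    return acts

lr = 0.05
for step in range(2000):
    acts = forward(X)
    err = acts[-1] - y  # gradient of 0.5 * MSE with respect to the output
    # Plain backpropagation, walking from the last layer back to the first.
    for i in reversed(range(len(W))):
        grad_W = acts[i].T @ err / len(X)
        grad_b = err.mean(axis=0)
        if i > 0:
            err = (err @ W[i].T) * (1 - acts[i] ** 2)  # tanh derivative
        W[i] -= lr * grad_W
        b[i] -= lr * grad_b

print("final mean squared error:", float(np.mean((forward(X)[-1] - y) ** 2)))
```

The "depth" here is simply the number of stacked weight layers; production systems use far larger networks, datasets, and specialized hardware.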

NASA will begin a new RS-25 test series Oct. 5, the final round of certification testing ahead of production of an updated set of the engines for the SLS (Space Launch System) rocket. The engines will help power future Artemis missions to the Moon and beyond.

A series of 12 tests stretching into 2024 is scheduled to occur on the Fred Haise Test Stand at NASA’s Stennis Space Center near Bay St. Louis, Mississippi. The tests are a key step for lead SLS engines contractor Aerojet Rocketdyne, an L3Harris Technologies company, to produce engines that will help power the SLS rocket, beginning with Artemis V.

“NASA and our industry partners continue to make steady progress toward restarting production of the RS-25 engines for the first time since the space shuttle era as we prepare for our more ambitious missions to deep space under Artemis with the SLS rocket,” said Johnny Heflin, liquid engines manager for SLS at NASA’s Marshall Space Flight Center in Huntsville, Alabama. “The upcoming fall test series builds off previous hot fire testing already conducted at NASA Stennis to help certify a new design that will make this storied spaceflight engine even more powerful.”

A team of scientists supported in part by NASA has outlined a simple and reliable method that employs machine learning to search for signs of past or present life on other worlds. The results show that the method can distinguish both modern and ancient biosignatures with an accuracy of 90 percent.

The method is able to detect whether or not a sample contains materials that were tied to biological activity. What the research team refers to as a “routine analytical method” could be performed with instruments on missions including spacecraft, landers, and rovers, even before samples are returned to Earth. In addition, the method could be used to shed light on the history of ancient rocks on our own planet.

The team used molecular analyses of 134 samples containing carbon from abiotic and biotic sources to train their software to predict a new sample’s origin. Using pyrolysis gas chromatography, the method detects subtle differences in a sample’s molecular patterns and determines whether or not the sample is biotic in origin. In testing, the method correctly identified samples originating from a wide variety of biotic sources, including shells, human hair, and cells preserved in fine-grained rock. It was even able to identify remnants of life that had been altered by geological processes, such as coal and amber.
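The article does not specify which machine learning model the team used or how the pyrolysis gas chromatography measurements are turned into features, so the following is only a sketch of this kind of pipeline: per-sample peak intensities feed a standard classifier (a random forest here, which is an assumption) that predicts biotic versus abiotic origin. The data below are synthetic placeholders.

```python
# Sketch of a biotic-vs-abiotic sample classifier. Feature layout, model choice,
# and data are assumptions for illustration; the study's pipeline is not detailed here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

n_samples, n_peaks = 134, 40            # 134 samples, as in the study; 40 mock GC peaks
X = rng.lognormal(mean=0.0, sigma=1.0, size=(n_samples, n_peaks))
y = rng.integers(0, 2, size=n_samples)  # 1 = biotic, 0 = abiotic (placeholder labels)

# Shift a few peak intensities for "biotic" samples so there is a pattern to learn.
X[y == 1, :5] *= 1.8

clf = RandomForestClassifier(n_estimators=300, random_state=0)

# Cross-validation estimates how well the classifier generalizes to unseen samples;
# the study reports roughly 90 percent accuracy on its real measurements.
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"mean cross-validated accuracy on mock data: {scores.mean():.2f}")

# Fit on all labelled samples, then classify a new, unseen measurement.
clf.fit(X, y)
new_sample = rng.lognormal(size=(1, n_peaks))
print("predicted origin:", "biotic" if clf.predict(new_sample)[0] == 1 else "abiotic")
```

In the study itself, the inputs are real pyrolysis gas chromatography measurements of the 134 samples rather than synthetic numbers, which is what yields the reported 90 percent accuracy.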

“Wish I had this to cite,” lamented Jacob Andreas, a professor at MIT, who had just published a paper exploring the extent to which language models mirror the internal motivations of human communicators.

Jan Leike, the head of alignment at OpenAI, is chiefly responsible for guiding new models like GPT-4 to help, rather than harm, human progress. He responded to the paper by offering Burns a job. Burns initially declined, but a personal appeal from Sam Altman, OpenAI’s cofounder and CEO, changed his mind.

“Collin’s work on ‘Discovering Latent Knowledge in Language Models Without Supervision’ is a novel approach to determining what language models truly believe about the world,” Leike says. “What’s exciting about his work is that it can work in situations where humans don’t actually know what’s true themselves, so it could apply to systems that are smarter than humans.”
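For readers who want the gist of the technique: the paper’s Contrast-Consistent Search trains an unsupervised probe on a model’s hidden activations for each statement phrased as true and as false, asking only that the two predicted probabilities be consistent and confident. The sketch below reproduces that core objective on random stand-in activations; the paper’s full recipe also normalizes activations and keeps the best of several random restarts.

```python
# Minimal sketch of the Contrast-Consistent Search (CCS) objective from
# "Discovering Latent Knowledge in Language Models Without Supervision".
# Random tensors stand in for real language-model hidden states; no labels are used.
import torch

torch.manual_seed(0)
n_pairs, hidden_dim = 500, 768
x_pos = torch.randn(n_pairs, hidden_dim)   # activations for "statement is true" phrasing
x_neg = torch.randn(n_pairs, hidden_dim)   # activations for "statement is false" phrasing

probe = torch.nn.Linear(hidden_dim, 1)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

for step in range(1000):
    p_pos = torch.sigmoid(probe(x_pos)).squeeze(-1)
    p_neg = torch.sigmoid(probe(x_neg)).squeeze(-1)

    consistency = (p_pos - (1 - p_neg)) ** 2        # the two probabilities should sum to one
    confidence = torch.minimum(p_pos, p_neg) ** 2   # ...and the probe should not hedge at 0.5

    loss = (consistency + confidence).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# p_pos (or 1 - p_neg) can then be read as the probe's estimate that a statement
# is true, learned without any human-provided truth labels.
print("final CCS loss:", loss.item())
```

Because the objective never references ground-truth labels, the same probe can in principle be fit to questions whose answers humans themselves do not know, which is the point Leike highlights.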