
Astronomers using data from NASA’s Chandra X-ray Observatory and other telescopes have identified a new threat to life on planets like Earth: a phase during which intense X-rays from exploded stars can affect planets over 100 light-years away. This result, as outlined in our latest press release, has implications for the study of exoplanets and their habitability.

This newly found threat comes from a supernova’s blast wave striking dense gas surrounding the exploded star, as depicted in the upper right of our artist’s impression. When this impact occurs, it can produce a large dose of X-rays that reaches an Earth-like planet (shown in the lower left, illuminated by its host star out of view to the right) months to years after the explosion and may last for decades. Such intense exposure could trigger an extinction event on the planet.

The new study reporting this threat is based on X-ray observations of 31 supernovae and their aftermath, mostly from NASA’s Chandra X-ray Observatory, Swift, and NuSTAR missions, and ESA’s XMM-Newton. These observations show that planets can be subjected to lethal doses of radiation from supernovae located as much as about 160 light-years away. Four of the supernovae in the study (SN 1979C, SN 1987A, SN 2010jl, and SN 1994I) are shown in composite images containing Chandra data in the supplemental image.

Synthesis AI, a startup that specializes in synthetic data technologies, announced that it has developed a new technology for digital human creation, which enables users to create highly realistic 3D digital humans from text prompts using generative AI and VFX pipelines.

The technology showcased by Synthesis AI allows users to input text descriptions specifying attributes such as age, gender, ethnicity, hairstyle, and clothing to generate a 3D model that matches the specifications. Users can also edit the 3D model by changing text prompts or using sliders to adjust features like facial expressions and lighting.

The earliest artificial neural network, the Perceptron, was introduced approximately 65 years ago and consisted of just one layer. However, to solve more complex classification tasks, more advanced neural network architectures consisting of numerous consecutive feedforward layers were later introduced. Such deep architectures are the essential component of current implementations of deep learning algorithms, which improve performance on analytical and physical tasks without human intervention and underlie everyday automation products such as emerging self-driving cars and chatbots.
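To make the contrast concrete, here is a minimal, illustrative sketch of the classic single-layer Perceptron learning rule, trained on a toy linearly separable task (the logical AND of two inputs). This is a textbook example for illustration only; it is not the network or task used in the study described below, and all names in it are our own.

```python
# Illustrative single-layer Perceptron (not the architecture from the study).
# It learns a linearly separable task, logical AND, with the perceptron rule:
# on each mistake, nudge the weights toward the correct answer.

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Return (weights, bias) fitted with the perceptron update rule."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # +1, 0, or -1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]  # logical AND
w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])  # -> [0, 0, 0, 1]
```

A single layer like this can only separate classes with a straight line (or hyperplane), which is why deep multilayer networks took over harder tasks; the research below asks how far shallow, brain-inspired alternatives can go instead.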

The key question driving new research published today in Scientific Reports is whether efficient learning of non-trivial classification tasks can be achieved using brain-inspired shallow feedforward networks, while potentially requiring less computational complexity.

“A positive answer questions the need for deep learning architectures, and might direct the development of unique hardware for the efficient and fast implementation of shallow learning,” said Prof. Ido Kanter, of Bar-Ilan’s Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research. “Additionally, it would demonstrate how brain-inspired shallow learning has advanced computational capability with reduced complexity and energy consumption.”