Children who develop rhabdomyosarcoma face a high risk of recurrence, but a recent effort to profile this cancer in depth offers new hope for developing durable treatments.

Using the James Webb Space Telescope (JWST), an international team of astronomers has performed deep and high spectral resolution imaging of a distant protocluster of galaxies, designated A2744-z7p9OD. Results of the new observations, published July 8 on the arXiv preprint server, shed more light on the properties of this protocluster, revealing that it hosts a remarkably evolved core.
Galaxy clusters are collections of hundreds to thousands of galaxies bound together by gravity. They are the most massive gravitationally bound structures in the universe, which makes them excellent laboratories for studying galaxy evolution and cosmology.
Of special interest for astronomers are studies of protoclusters of galaxies—the progenitors of clusters. These objects, found at high redshifts (over 2.0), could provide essential information about the early phases of the universe.
An international research team led by the Photonic Network Laboratory at the National Institute of Information and Communications Technology (NICT, President: TOKUDA Hideyuki, Ph.D.), together with Sumitomo Electric Industries, Ltd. (Sumitomo Electric, President: INOUE Osamu), has set a new world record in optical fiber communications, achieving data transmission at 1.02 petabits per second over a distance of 1,808 kilometers (roughly the distance from Sapporo to Fukuoka, from Missouri to Montana, or from Berlin to Naples).
Researchers at the Korea Advanced Institute of Science and Technology (KAIST) have developed energy-efficient NPU technology that demonstrates substantial performance improvements in laboratory testing.
Their specialised AI chip ran AI models 60% faster while using 44% less electricity than the graphics cards currently powering most AI systems, based on results from controlled experiments.
Abstract: Scaling language models unlocks impressive capabilities, but the accompanying computational and memory demands make both training and deployment expensive. Existing efficiency efforts typically target either parameter sharing or adaptive computation, leaving open the question of how to attain both simultaneously. We introduce Mixture-of-Recursions (MoR), a unified framework that combines the two axes of efficiency inside a single Recursive Transformer. MoR reuses a shared stack of layers across recursion steps to achieve parameter efficiency, while lightweight routers enable adaptive token-level thinking by dynamically assigning different recursion depths to individual tokens.
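The two ideas in the abstract, a shared layer stack reused across recursion steps and a router that assigns each token its own recursion depth, can be illustrated with a toy sketch. This is a minimal NumPy illustration of the general technique, not the paper's implementation: the single weight matrix standing in for the shared layer stack, the one-vector sigmoid router, and the score-to-depth mapping are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, max_depth, n_tokens = 8, 3, 5

# One shared weight matrix stands in for the layer stack that is
# REUSED at every recursion step (the parameter-efficiency axis).
W_shared = rng.standard_normal((d_model, d_model)) * 0.1

# A lightweight router: a single learned vector (an illustrative choice).
w_router = rng.standard_normal(d_model)

def assign_depths(x):
    """Score each token and map the score to a recursion depth in [1, max_depth]."""
    scores = 1.0 / (1.0 + np.exp(-(x @ w_router)))        # sigmoid, in (0, 1)
    return np.minimum((scores * max_depth).astype(int) + 1, max_depth)

def mor_forward(x):
    """Apply the shared block repeatedly; each token exits at its own depth."""
    depths = assign_depths(x)
    h = x.copy()
    for step in range(1, max_depth + 1):
        active = depths >= step                # tokens still recursing
        h[active] = np.tanh(h[active] @ W_shared)  # same weights every step
    return h, depths

x = rng.standard_normal((n_tokens, d_model))
out, depths = mor_forward(x)
print(depths)  # per-token recursion depths chosen by the router
```

The key property the sketch shows is that compute is adaptive per token (tokens with shallow assigned depths drop out of later steps) while the parameters are constant across steps.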