
With sufficiently advanced SETI, we might discover brief broadcasts or occasional episodes of minor galactic engineering occurring in small portions of a very few galaxies. But because of the acceleration of complexification and the vast distances between civilizations, it seems impossible that even an earliest-to-emerge civilization, however oligarchic, could prevent multilocal transcensions in any galaxy. In theory, one can imagine a contrarian civilization releasing interstellar probes, carefully designed not to increase their intelligence (and so never be able to transcend) as they replicate. But what could such probes do besides extinguish primitive life? They certainly couldn’t prevent multilocal transcensions. There seems to be no game-theoretic value to such a strategy in a universe dominated by accelerating transcension.

Finally, if constrained transcension is the overwhelming norm, we should have much greater success searching for the norm, not the rare exception. As Cirkovic (2008) and Shostak (2010) have recently argued, we need SETI strategies that focus on places where advanced postbiological civilizations are likely to live. In the transcension hypothesis, this injunction would include using optical SETI to discover the galactic transcension zone and define its outward-growing edge. We should look for rapid and artificial processes of formation of planet-mass black holes, for leakage signals and early METI emanating from life-supporting planets, and for the regular cessation of these signals as or soon after these civilizations enter their technological singularities.


Researchers at the University of Basel have developed a new method for calculating phase diagrams of physical systems that works similarly to ChatGPT. This artificial intelligence could even automate scientific experiments in the future.

A year and a half ago, ChatGPT was released, and ever since, there has been hardly anything that cannot be created with this new form of artificial intelligence: texts, images, videos, and even music. ChatGPT is based on so-called generative models, which, using a complex algorithm, can create something entirely new from known information.

A research team led by Professor Christoph Bruder at the University of Basel, together with colleagues at the Massachusetts Institute of Technology (MIT) in Boston, has now used a similar method to calculate phase diagrams of physical systems.
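The article does not detail the Basel/MIT method, but the core idea behind mapping a phase diagram from data can be illustrated with a much simpler sketch: fit a probabilistic model to samples from each known phase, then classify samples taken while sweeping a control parameter to locate the phase boundary. Everything below (the toy "order parameter", the reference temperatures, the Gaussian models) is an illustrative assumption, not the authors' actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_order_parameter(t, n=500):
    # Toy stand-in for a physical system: the order parameter decays
    # from ~1 (ordered) toward 0 (disordered) as control parameter t grows.
    return rng.normal(max(0.0, 1.0 - t), 0.1, n)

def fit_gaussian(x):
    # "Generative model" of a phase: a 1-D Gaussian fitted to samples.
    return x.mean(), x.std() + 1e-9

def avg_loglik(x, mu, sd):
    # Average Gaussian log-likelihood (dropping constants).
    return np.mean(-0.5 * ((x - mu) / sd) ** 2 - np.log(sd))

# Fit one model per reference phase, deep inside each regime.
mu_o, sd_o = fit_gaussian(sample_order_parameter(0.1))  # ordered reference
mu_d, sd_d = fit_gaussian(sample_order_parameter(1.9))  # disordered reference

# Sweep the control parameter and label each point by which model
# explains its samples better; the label flip marks the crossover.
ts = np.linspace(0.0, 2.0, 21)
labels = []
for t in ts:
    x = sample_order_parameter(t)
    better_ordered = avg_loglik(x, mu_o, sd_o) > avg_loglik(x, mu_d, sd_d)
    labels.append("ordered" if better_ordered else "disordered")
```

The likelihood comparison plays the role that the learned generative model plays in the real method: no human needs to hand-craft an order-parameter threshold, since the fitted models decide which phase each sample resembles.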

The use of pliable soft materials in robots that collaborate with humans and work in disaster areas has drawn much recent attention. Creating robots that can safely aid disaster victims is one problem; executing flexible robot control that takes advantage of the material’s softness is another. However, controlling soft dynamics for practical applications has remained a significant challenge.

In collaboration with the University of Tokyo and Bridgestone Corporation, Kyoto University has now developed a method to control pneumatic artificial muscles, which are soft robotic actuators. The rich dynamics of these drive components can be exploited as a computational resource.
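Exploiting a body's rich dynamics as a computational resource is the idea behind reservoir computing: the dynamical system itself is never trained, and only a linear readout of its state is fitted. As a loose software sketch (a random echo state network standing in for the pneumatic muscle's physical dynamics; all sizes and parameters are illustrative assumptions), one can solve a short memory task by training nothing but a ridge-regression readout:

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed random recurrent "reservoir" stands in for the muscle's
# physical dynamics: driven by the input, but never trained itself.
N = 100
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9

def run_reservoir(u):
    # Drive the reservoir with the input sequence, record all states.
    x = np.zeros(N)
    states = []
    for ut in u:
        x = np.tanh(W @ x + W_in * ut)
        states.append(x.copy())
    return np.array(states)

# Task: reproduce the input delayed by 3 steps, i.e. the readout must
# recover information held only in the reservoir's transient dynamics.
T, delay = 1000, 3
u = rng.uniform(-0.8, 0.8, T)
X = run_reservoir(u)
y = np.roll(u, delay)          # target: u[t - delay]
X, y = X[delay:], y[delay:]    # drop the wrapped-around start

# Train only the linear readout, by ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)
pred = X @ W_out
nmse = np.mean((pred - y) ** 2) / np.var(y)
```

In a physical reservoir computer the `run_reservoir` step is performed by the soft material itself; only the cheap linear readout runs in software, which is what makes the approach attractive for controlling actuators whose dynamics are too rich to model exactly.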

Artificial muscles control rich soft component dynamics by using them as a computational resource. (Image: MEDICAL FIG.)

Blood test detects stroke quickly

The Testing for Identification of Markers of Stroke trial shows the accuracy of a new blood test for identifying stroke.

A team of scientists has developed a new test by combining blood-based biomarkers with a clinical score. The main goal was to identify patients experiencing large vessel occlusion (LVO) stroke.

LVO strokes, a severe form of stroke, are often characterized by a sudden onset of symptoms and significant neurological damage. They occur when an artery in the brain is blocked, depriving the brain of essential oxygen.
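The article says the test combines blood-based biomarkers with a clinical score but does not specify the model. A generic way to combine such inputs is a logistic classifier; the sketch below is purely illustrative (the synthetic "biomarker" and "score" values, effect sizes, and learning settings are all assumptions, not the trial's actual data or model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic cohort: 1 = LVO stroke, 0 = not. Biomarker and clinical
# score are both (artificially) elevated in the LVO group.
n = 400
lvo = rng.integers(0, 2, n)
biomarker = rng.normal(1.0 + 1.5 * lvo, 1.0)
score = rng.normal(2.0 + 2.0 * lvo, 1.5)

# Logistic regression combining both inputs, fitted by gradient descent.
X = np.column_stack([np.ones(n), biomarker, score])  # intercept + features
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))     # predicted LVO probability
    w -= 0.01 * X.T @ (p - lvo) / n      # gradient of the log-loss

pred = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
accuracy = np.mean(pred == lvo)
```

The point of combining the two inputs is that neither alone separates the groups as well as their weighted sum; the fitted weights make that combination explicit and yield a single probability usable for triage thresholds.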

Our approach to analyzing and mitigating future risks posed by advanced AI models.

Google DeepMind has consistently pushed the boundaries of AI, developing models that have transformed our understanding of what’s possible. We believe that AI technology on the horizon will provide society with invaluable tools to help tackle critical global challenges, such as climate change, drug discovery, and economic productivity. At the same time, we recognize that as we continue to advance the frontier of AI capabilities, these breakthroughs may eventually come with new risks beyond those posed by present-day models.

Today, we are introducing our Frontier Safety Framework — a set of protocols for proactively identifying future AI capabilities that could cause severe harm and putting in place mechanisms to detect and mitigate them. Our Framework focuses on severe risks resulting from powerful capabilities at the model level, such as exceptional agency or sophisticated cyber capabilities. It is designed to complement our alignment research, which trains models to act in accordance with human values and societal goals, and Google’s existing suite of AI responsibility and safety practices.