
We study the question of how to decompose Hilbert space into a preferred tensor-product factorization without any pre-existing structure other than a Hamiltonian operator, in particular the case of a bipartite decomposition into “system” and “environment.” Such a decomposition can be defined by looking for subsystems that exhibit quasi-classical behavior. The correct decomposition is one in which pointer states of the system are relatively robust against environmental monitoring (their entanglement with the environment does not continually and dramatically increase) and remain localized around approximately-classical trajectories. We present an in-principle algorithm for finding such a decomposition by minimizing a combination of entanglement growth and internal spreading of the system. Both of these properties are related to locality in different ways.
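As a rough illustration of the idea (not the paper's actual procedure), the toy Python sketch below takes a random Hamiltonian on a four-dimensional Hilbert space, treats each candidate global unitary as defining a 2×2 "system/environment" split, and scores candidates by how quickly initially unentangled product states become entangled under short-time evolution. The dimensions, scoring choice, and brute-force search are all illustrative assumptions.

```python
# Toy sketch: prefer the tensor factorization with the slowest entanglement growth.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def random_unitary(d):
    """Random unitary via QR decomposition with phase correction."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def entanglement_entropy(psi):
    """Von Neumann entropy of the 2x2 reduced density matrix of a 4-dim pure state."""
    c = psi.reshape(2, 2)
    rho_sys = np.einsum('ij,kj->ik', c, c.conj())
    evals = np.clip(np.linalg.eigvalsh(rho_sys), 1e-12, 1.0)
    return float(-np.sum(evals * np.log(evals)))

# Random Hermitian Hamiltonian on the full 4-dim Hilbert space and a short-time propagator.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2
U_t = expm(-1j * H * 0.1)

def entanglement_growth(V, n_states=20):
    """Average entanglement generated from product states in the factorization defined by V."""
    total = 0.0
    for _ in range(n_states):
        a, b = random_unitary(2)[:, 0], random_unitary(2)[:, 0]   # random product state
        psi0 = V @ np.kron(a, b)                                  # embed in candidate basis
        psi_t = V.conj().T @ (U_t @ psi0)                         # evolve, rotate back
        total += entanglement_entropy(psi_t)
    return total / n_states

# Crude search: keep the candidate factorization with the slowest entanglement growth.
best = min((random_unitary(4) for _ in range(200)), key=entanglement_growth)
print("lowest average entanglement growth:", entanglement_growth(best))
```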

Scientists have created synthetic organisms that can self-replicate. Known as “Xenobots,” these tiny millimeter-wide biological machines now have the ability to reproduce — a striking leap forward in synthetic biology.

In a study published in the Proceedings of the National Academy of Sciences, a joint team from the University of Vermont, Tufts University, and Harvard University used embryonic cells from the frog Xenopus laevis to construct the Xenobots.

Their original work began in 2020, when the Xenobots were first "built." The team designed an algorithm that assembled countless cells into various candidate biological machines, eventually settling on embryonic skin cells from frogs.

A new method of identifying gravitational wave signals using quantum computing could provide a valuable new tool for future astrophysicists.

A team from the University of Glasgow's School of Physics & Astronomy has developed a quantum computing-based method to drastically cut down the time it takes to match gravitational wave signals against a vast databank of templates.

This process, known as matched filtering, is part of the methodology that underpins some of the gravitational wave signal discoveries from detectors like the Laser Interferometer Gravitational-Wave Observatory (LIGO) in the United States and Virgo in Italy.
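As a rough classical illustration of matched filtering (not the Glasgow team's quantum algorithm), the sketch below slides a small bank of invented chirp-like templates over a noisy data stream and reports which template gives the largest normalized correlation peak. The template shapes, bank parameters, and injected signal are all toy assumptions.

```python
# Toy matched filtering: correlate a noisy data stream against a bank of templates.
import numpy as np

rng = np.random.default_rng(1)
fs = 1024                                   # sample rate in Hz
t = np.arange(0, 4.0, 1 / fs)               # 4-second templates

def chirp_template(f0, f1):
    """Toy 'chirp' whose frequency sweeps linearly from f0 to f1 over the template."""
    phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * t[-1]))
    return np.sin(phase) * np.hanning(t.size)

# Toy template bank, and a 16-second noisy data stream with one chirp injected.
bank = {(f0, f1): chirp_template(f0, f1) for f0 in (30, 50, 70) for f1 in (120, 180)}
data = rng.normal(scale=1.0, size=16 * fs)
data[3 * fs:3 * fs + t.size] += 0.5 * bank[(50, 180)]

def matched_filter_peak(data, template):
    """Peak of the sliding cross-correlation, normalized by template energy and noise level."""
    corr = np.correlate(data, template, mode="valid")
    return np.max(np.abs(corr)) / (np.linalg.norm(template) * np.std(data))

scores = {params: matched_filter_peak(data, tpl) for params, tpl in bank.items()}
print("best-matching template (f0, f1):", max(scores, key=scores.get))
```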

AI will completely take over game development by the early 2030s, to the point where there will be almost no human developers: just people telling an AI what they want to play, and it builds it in real time.


Over the past few years we've seen massive improvements in AI technology, from GPT-3 and AI image generation to self-driving cars and drug discovery. But can progress in machine learning change games?

Note: AI has many subsets; in this article, when I say AI, I'm referring to machine learning algorithms.

The first important question to ask is: will AI even change anything? Why use machine learning when you can simply hardcode movement and dialogue? The answer lies in replayability and immersive gameplay.

Local consciousness, or our phenomenal mind, is emergent, whereas non-local consciousness, or universal mind, is immanent. Material worlds come and go, but fundamental consciousness is ever-present, according to the Cybernetic Theory of Mind. Drawing on a new science of consciousness, simulation metaphysics, evolutionary cybernetics, computational physics, the physics of time and information, and quantum cosmology, this novel explanatory theory weaves these threads into one elegant theory of everything, offering a deeper understanding of reality.



The recently released documentary Consciousness: Evolution of the Mind is based on The Cybernetic Theory of Mind eBook series (2022) by Alex M. Vikoulov, as well as on his magnum opus The Syntellect Hypothesis: Five Paradigms of the Mind's Evolution (2020).

The film, hosted by the author of the book from which the narrative is derived, is now available for viewing on demand on Vimeo, Plex, Tubi, Xumo, Social Club TV, and other global networks; its worldwide premiere aired on June 8, 2021. It is an IMDb-accredited film, rated TV-PG. It offers a futurist's take on the nature of consciousness and on reverse-engineering our thinking in order to implement it in cybernetics and advanced AI systems.

What mechanism may link quantum physics to phenomenology? What properties are inherently associated with consciousness? What is Experiential Realism? How can we successfully approach the Hard Problem of Consciousness, or perhaps, circumvent it? What is the Quantum Algorithm of Consciousness? Are free-willing conscious AIs even possible? These are some of the questions addressed in this Part V of the documentary.

Why do industrial robots require teams of engineers and thousands of lines of code to perform even the most basic, repetitive tasks while giraffes, horses, and many other animals can walk within minutes of their birth?

My colleagues and I at the USC Brain-Body Dynamics Lab began to address this question by creating a robotic limb that learned to move, with no prior knowledge of its own structure or environment [1,2]. Within minutes, G2P, our reinforcement learning algorithm implemented in MATLAB®, learned how to move the limb to propel a treadmill (Figure 1).
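The toy sketch below illustrates only the general idea of learning to move with no prior model: random "motor babbling" to gather data, followed by a crude reward-driven selection of a motor command. It is written in Python rather than MATLAB, it is not the published G2P algorithm, and the simulated limb and its hidden parameters are invented for illustration.

```python
# Toy sketch: learn to drive a simulated limb toward a target speed from exploration alone.
import numpy as np

rng = np.random.default_rng(2)
HIDDEN_GAIN = rng.uniform(0.5, 1.5, size=3)   # unknown muscle-to-speed mapping

def limb(motor_cmd):
    """Simulated limb: returns treadmill speed for a 3-muscle activation vector."""
    return float(HIDDEN_GAIN @ motor_cmd + rng.normal(scale=0.05))

# Phase 1: motor babbling -- try random activations and record what happens.
commands = rng.uniform(0, 1, size=(100, 3))
speeds = np.array([limb(c) for c in commands])

# Phase 2: fit a rough forward model (speed ~= commands @ w) from the babbling data.
w, *_ = np.linalg.lstsq(commands, speeds, rcond=None)

# Phase 3: pick the candidate command the model predicts will best hit a target speed,
# i.e. a crude reward-driven refinement step.
target_speed = 1.0
candidates = rng.uniform(0, 1, size=(500, 3))
best_cmd = candidates[np.argmin(np.abs(candidates @ w - target_speed))]
print("achieved speed:", limb(best_cmd), "target:", target_speed)
```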

Reward maximisation is one strategy by which reinforcement learning could achieve general artificial intelligence. However, deep reinforcement learning algorithms shouldn't depend on reward maximisation alone.
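For readers unfamiliar with the term, the minimal sketch below shows what reward maximisation looks like in practice: a tabular Q-learning agent on an invented five-state corridor whose only learning signal is a scalar reward at the goal. The environment and hyperparameters are illustrative assumptions, not from the article.

```python
# Minimal Q-learning: the agent's sole objective is to maximise cumulative reward.
import numpy as np

rng = np.random.default_rng(3)
n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def step(state, action):
    """Move along the corridor; reward 1 only when the right end is reached."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward, next_state == n_states - 1

def greedy(q_row):
    """Argmax with random tie-breaking."""
    return int(rng.choice(np.flatnonzero(q_row == q_row.max())))

for _ in range(500):                  # episodes
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the action with the highest estimated return
        action = rng.integers(n_actions) if rng.random() < epsilon else greedy(Q[state])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward plus discounted future value
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print("greedy policy (0=left, 1=right):", Q.argmax(axis=1))
```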


Identifying dual-purpose therapeutic targets implicated in aging and disease will extend healthspan and delay age-related health issues.

Introducing a novel visual tool for explaining the results of classification algorithms, with examples in R and Python.


Classification algorithms aim to identify to which groups a set of observations belong. A machine learning practitioner typically builds multiple models and selects a final classifier to be one that optimizes a set of accuracy metrics on a held-out test set. Sometimes, practitioners and stakeholders want more from the classification model than just predictions. They may wish to know the reasons behind a classifier’s decisions, especially when it is built for high-stakes applications. For instance, consider a medical setting, where a classifier determines a patient to be at high risk for developing an illness. If medical experts can learn the contributing factors to this prediction, they could use this information to help determine suitable treatments.
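The snippet below is a minimal sketch of that workflow, using scikit-learn on a synthetic dataset: fit a few candidate models, score each on a held-out test set, and keep the most accurate one. The specific models and metric are illustrative choices.

```python
# Sketch of the standard model-selection loop described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

# Fit every candidate, score it on the held-out test set, and keep the best one.
scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, model.predict(X_test))

print(scores, "-> selected:", max(scores, key=scores.get))
```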

Some models, such as single decision trees, are transparent, meaning that they show the mechanism for how they make decisions. More complex models, however, tend to be the opposite — they are often referred to as “black boxes”, as they provide no explanation for how they arrive at their decisions. Unfortunately, opting for transparent models over black boxes does not always solve the explainability problem. The relationship between a set of observations and its labels is often too complex for a simple model to suffice; transparency can come at the cost of accuracy [1].

The increasing use of black-box models in high-stakes applications, combined with the need for explanations, has led to the development of Explainable AI (XAI), a set of methods that help humans understand the outputs of machine learning models. Explainability is a crucial part of the responsible development and use of AI.
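As one concrete example of an XAI technique (and not the visual tool this article introduces), the sketch below uses permutation importance to probe a black-box random forest: each feature is shuffled in turn, and the drop in held-out accuracy indicates how much the model relies on it. The dataset and model are invented for illustration.

```python
# Model-agnostic explanation via permutation importance on a black-box classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Larger mean importance = the model leans more heavily on that feature.
result = permutation_importance(black_box, X_test, y_test, n_repeats=20, random_state=0)
for idx in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"feature {idx}: importance {result.importances_mean[idx]:.3f}")
```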