
Generally, information at lower levels is more fine-grained and can be coarse-grained at higher levels. However, only information processed at specific scales of coarse-graining appears to be available to conscious awareness. We have no direct experience of information at the scale of individual neurons, which is noisy and highly stochastic. Neither do we experience more macro-scale interactions, such as interpersonal communication. Neurophysiological evidence suggests that conscious experiences co-vary with information encoded in coarse-grained neural states, such as the firing pattern of a population of neurons. In this article, we introduce a new informational theory of consciousness: the Information Closure Theory of Consciousness (ICT). We hypothesize that conscious processes are processes that form non-trivial informational closure (NTIC) with respect to the environment at certain coarse-grained scales. This hypothesis implies that conscious experience is confined, by informational closure, from conscious processing at other coarse-grained scales. ICT proposes new quantitative definitions of both conscious content and conscious level. With these parsimonious definitions and a single hypothesis, ICT provides explanations and predictions for various phenomena associated with consciousness. The implications of ICT naturally reconcile issues in many existing theories of consciousness and provide explanations for many of our intuitions about consciousness. Most importantly, ICT demonstrates that information can be the common language between consciousness and physical reality.
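The closure hypothesis can be illustrated with a toy calculation. One common way to quantify informational closure (an assumption here; the article's exact formalism may differ) is the conditional mutual information I(E_t; Y_{t+1} | Y_t) between the environment E and the system's next state, given its current state: zero means the system's own dynamics carry no extra information from the environment. The sketch below estimates this quantity empirically for two made-up binary processes; all names and processes are illustrative, not from the article.

```python
import random
from collections import Counter
from math import log2

def cond_mutual_info(triples):
    """Empirical conditional mutual information I(E; Y_next | Y),
    estimated from a list of (e, y_next, y) observations."""
    n = len(triples)
    c_eyy = Counter(triples)
    c_yy = Counter((yn, y) for _, yn, y in triples)
    c_ey = Counter((e, y) for e, _, y in triples)
    c_y = Counter(y for _, _, y in triples)
    cmi = 0.0
    for (e, yn, y), c in c_eyy.items():
        # p(e,y',y) * log2[ p(e,y',y) p(y) / (p(e,y) p(y',y)) ]
        cmi += (c / n) * log2(c * c_y[y] / (c_ey[(e, y)] * c_yy[(yn, y)]))
    return cmi

random.seed(0)

# System whose next state depends only on its own current state:
# informationally closed with respect to the environment E.
closed, y = [], 0
for _ in range(5000):
    e = random.randint(0, 1)
    y_next = 1 - y
    closed.append((e, y_next, y))
    y = y_next

# System that simply copies the environment: not closed.
leaky, y = [], 0
for _ in range(5000):
    e = random.randint(0, 1)
    y_next = e
    leaky.append((e, y_next, y))
    y = y_next

print(cond_mutual_info(closed))  # 0.0: closed with respect to E
print(cond_mutual_info(leaky))   # ~1 bit: the environment drives the system
```

On this formalization, a conscious process under ICT would keep this quantity at (or near) zero at its own coarse-grained scale while still encoding information about the environment; that "non-trivial" part is not measured by this toy sketch.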

Imagine you are a neuron in Alice’s brain. Your daily work is to collect neurotransmitters through dendrites from other neurons, accumulate membrane potential, and finally send signals to other neurons through action potentials along your axon. However, you have no idea that you are one of the neurons in Alice’s supplementary motor area, involved in many of the motor control processes behind Alice’s actions, such as grabbing a cup. You are ignorant of the intentions, goals, and motor plans that Alice has at any moment, even though you are part of the physiological substrate responsible for all these actions. A similar story plays out in Alice’s conscious mind. To grab a cup, for example, Alice is conscious of her intention and of the visuosensory experience of the action. However, her conscious experience does not reflect the dynamics of your membrane potential or the action potentials you send to other neurons every second.

People like veteran computer scientist Ray Kurzweil have anticipated for yonks that humanity will reach the technological singularity (where an AI agent is just as smart as a human); Kurzweil outlined his thesis in ‘The Singularity is Near’ (2005) – with a projection for 2029.

Disciples like Ben Goertzel have claimed it could come as soon as 2027. Nvidia CEO Jensen Huang says it’s “five years away”, joining the likes of OpenAI CEO Sam Altman and others in predicting an aggressive, exponential escalation. Should these predictions prove true, they will also introduce a whole cluster bomb of ethical, moral, and existential anxieties that we will have to confront. So, as The Matrix turns 25, maybe it wasn’t so far-fetched after all?

Sitting on tattered armchairs in front of an old boxy television in the heart of a wasteland, Morpheus shows Neo the “real world” for the first time. Here, he fills us in on how this dystopian vision of the future came to be. We’re at the summit of a lengthy yet compelling monologue that began many scenes earlier with questions Morpheus poses to Neo, and therefore us, progressing to the choice Neo must make – and crescendoing into the full tale of humanity’s downfall and the rise of the machines.

To engineer proteins with useful functions, researchers usually begin with a natural protein that already has a desirable function, such as emitting fluorescent light, and put it through many rounds of random mutation and screening, a process known as directed evolution that eventually yields an optimized version of the protein.
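The mutate-and-screen loop described above can be sketched in a few lines. Everything below is illustrative: `brightness` is a toy stand-in for a real fluorescence assay, and the starting sequence is an arbitrary peptide, not actual GFP.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def brightness(seq):
    # Toy stand-in for a lab fluorescence assay: here, "fitness" is
    # simply how many tryptophans (W) the sequence contains.
    return seq.count("W")

def mutate(seq, rate=0.05):
    # Each position independently mutates to a random residue.
    return "".join(random.choice(AMINO_ACIDS) if random.random() < rate else aa
                   for aa in seq)

def directed_evolution(parent, rounds=20, library_size=200):
    # Each round: generate a library of random mutants, screen it,
    # and keep the best variant as the next round's parent.
    for _ in range(rounds):
        library = [mutate(parent) for _ in range(library_size)]
        best = max(library, key=brightness)
        if brightness(best) > brightness(parent):
            parent = best
    return parent

random.seed(1)
start = "MSKGEELFTG"  # an arbitrary short peptide, not a real protein
evolved = directed_evolution(start)
print(brightness(start), brightness(evolved))
```

The loop only ever accepts improvements, which is why, as the next paragraph notes, it can stall for proteins whose better variants are many mutations away from the starting point.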

This process has yielded optimized versions of many important proteins, including green fluorescent protein (GFP). However, for other proteins, it has proven difficult to generate an optimized version. MIT researchers have now developed a computational approach that makes it easier to predict mutations that will lead to better proteins, based on a relatively small amount of data.

Using this model, the researchers generated proteins with mutations that were predicted to lead to improved versions of GFP and a protein from adeno-associated virus (AAV), which is used to deliver DNA for gene therapy. They hope it could also be used to develop additional tools for neuroscience research and medical applications.
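The article does not describe the model's internals, but the general idea, fitting a predictive model on a relatively small set of measured variants and then using it to rank candidate mutants, can be sketched with a deliberately simple site-wise additive model. The fitness function, dataset, and all names here are synthetic, not the MIT researchers' actual method.

```python
import random
from collections import defaultdict

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def true_fitness(seq):
    # Hidden ground truth, used only to simulate assay measurements.
    return seq.count("W")

def fit_additive_model(train):
    """Site-wise additive model: predict a sequence's fitness as the
    average, over positions, of the mean training fitness observed
    for that (position, residue) pair."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    grand_mean = sum(f for _, f in train) / len(train)
    for seq, fit in train:
        for i, aa in enumerate(seq):
            sums[(i, aa)] += fit
            counts[(i, aa)] += 1
    def predict(seq):
        terms = [sums[(i, aa)] / counts[(i, aa)]
                 for i, aa in enumerate(seq) if counts[(i, aa)]]
        return sum(terms) / len(terms) if terms else grand_mean
    return predict

random.seed(2)
L = 8
train = []
for _ in range(300):  # a relatively small labeled dataset
    s = "".join(random.choice(AMINO_ACIDS) for _ in range(L))
    train.append((s, true_fitness(s)))

predict = fit_additive_model(train)
# The model, trained only on random variants, ranks W-rich sequences higher.
print(predict("W" * L), predict("A" * L))
```

A model like this can score every candidate single mutant in silico, so only the top-ranked handful need to be synthesized and tested, which is the practical payoff of learning from a small amount of data.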

In recent years, artificial intelligence technologies, especially machine learning algorithms, have made great strides. These technologies have enabled unprecedented performance in tasks such as image recognition, natural language generation and processing, and object detection, but such capabilities demand substantial computational power as their foundation.