
The purpose of the attention schema theory is to explain how an information-processing device, the brain, arrives at the claim that it possesses a non-physical, subjective awareness and assigns a high degree of certainty to that extraordinary claim. The theory does not address how the brain might actually possess a non-physical essence. It is not a theory that deals in the non-physical. It is about the computations that cause a machine to make a claim and to assign a high degree of certainty to the claim. The theory is offered as a possible starting point for building artificial consciousness. Given current technology, it should be possible to build a machine that contains a rich internal model of what consciousness is, attributes that property of consciousness to itself and to the people it interacts with, and uses that attribution to make predictions about human behavior. Such a machine would “believe” it is conscious and act like it is conscious, in the same sense that the human machine believes and acts.

This article is part of a special issue on consciousness in humanoid robots. The purpose of this article is to summarize the attention schema theory (AST) of consciousness for those in the engineering or artificial intelligence community who may not have encountered previous papers on the topic, which tended to be in psychology and neuroscience journals. The central claim of this article is that AST is mechanistic, demystifies consciousness and can potentially provide a foundation on which artificial consciousness could be engineered. The theory has been summarized in detail in other articles (e.g., Graziano and Kastner, 2011; Webb and Graziano, 2015) and has been described in depth in a book (Graziano, 2013). The goal here is to briefly introduce the theory to a potentially new audience and to emphasize its possible use for engineering artificial consciousness.

The AST was developed beginning in 2010, drawing on basic research in neuroscience and psychology, especially on how the brain constructs models of the self (Graziano, 2010, 2013; Graziano and Kastner, 2011; Webb and Graziano, 2015). The main goal of this theory is to explain how the brain, a biological information processor, arrives at the claim that it possesses a non-physical, subjective awareness and assigns a high degree of certainty to that extraordinary claim. The theory does not address how the brain might actually possess a non-physical essence. It is not a theory that deals in the non-physical. It is about the computations that cause a machine to make a claim and to assign a high degree of certainty to the claim. The theory is in the realm of science and engineering.
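The architecture the theory suggests can be caricatured in a few lines of code. The following is a purely illustrative toy sketch, not anything from the AST papers: an agent keeps a simplified internal model (a "schema") of its own attention, reads its self-report off that schema rather than off the underlying process, and attributes the same kind of model to other agents in order to predict their behavior. All class and field names here are hypothetical.

```python
# Toy sketch of an AST-style architecture. The schema is a coarse,
# descriptive model of attention, not the attention process itself.

class Agent:
    def __init__(self, name):
        self.name = name
        self.schema = {"attending_to": None, "possesses_awareness": True}

    def attend(self, target):
        # The agent both attends to the target and models itself as attending.
        self.schema["attending_to"] = target

    def self_report(self):
        # The agent's claim about itself is read off the schema: it reports
        # awareness because the model says so, not because it verified it.
        return f"{self.name} is aware of {self.schema['attending_to']}"

    def predict_other(self, other):
        # Attribute a like schema to another agent to predict its behavior.
        assumed_focus = other.schema["attending_to"]
        return f"{other.name} will likely act on {assumed_focus}"

alice, bob = Agent("alice"), Agent("bob")
alice.attend("red cup")
bob.attend("door")
print(alice.self_report())       # alice is aware of red cup
print(alice.predict_other(bob))  # bob will likely act on door
```

In this caricature the "belief" in awareness is nothing more than the content of the self-model, which is the theory's central point: the claim and the certainty attached to it are products of an internal model.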

Prototypes, slideware, and vaporware are easy. LG showed a cool self-driving concept car prototype at CES 2024, along with new AI marketing buzzwords and AI promises.

The concept car has swiveling seats so that passengers can look in any direction, and large LG screens to immerse passengers in video.

Real AI is emerging this year, but there will also be a lot of AI hype.

The company says it’s focusing on ‘quality and reliability’ while also laying off hundreds.


Several “underutilized” Google Assistant features will soon be joining the infamous Google graveyard — such as the ability to use your voice to send email, video, or audio messages — as the search giant introduces changes it says will make the feature easier to use. The company is also changing how the microphone works in the Google app and Pixel search bar.

Starting January 26th, users who activate any of the 17 Assistant features being removed will be notified that it’s being discontinued, with most features departing for good on February 26th, according to 9to5Google. This news comes less than a day after Google announced it was laying off around a thousand employees, some of whom worked on Google Assistant.

The removals will affect mobile, smartwatch, and smart speaker/display devices, though Google does offer workarounds to replicate some of the lost functionality. However, some features, such as the Calm meditation service integration, are being removed entirely. The alternatives users are directed to are also not direct equivalents of many of the deleted features.

LimX Dynamics claims CL-1 is one of the few humanoid robots around the world that achieves dynamic stair climbing based on real-time terrain perception.


The Chinese company claims that, among humanoid robots worldwide, CL-1 is one of the few capable of dynamic stair climbing through real-time terrain perception. This is achieved through sophisticated motion control and AI algorithms developed by LimX Dynamics, complemented by the company’s proprietary high-performance actuators and hardware systems.
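The general idea behind "dynamic stair climbing based on real-time terrain perception" can be sketched as a perceive-then-adapt loop: read a height map of the terrain ahead and set each footstep target from the perceived heights rather than from a fixed stair model. The sketch below is a generic illustration with made-up numbers; LimX Dynamics' actual controllers are proprietary and far more sophisticated.

```python
# Toy perceive-then-plan loop for stair climbing.

def perceive_terrain():
    # Stand-in for a depth-camera/LiDAR pipeline: perceived heights (in
    # metres) of the terrain at successive footstep locations ahead.
    return [0.00, 0.17, 0.34, 0.51]  # three roughly 17 cm steps

def plan_footsteps(height_map, clearance=0.05):
    # Lift each swing foot `clearance` above the *perceived* step height,
    # so irregular stairs are handled without a prior stair model.
    return [round(h + clearance, 2) for h in height_map]

targets = plan_footsteps(perceive_terrain())
print(targets)  # [0.05, 0.22, 0.39, 0.56]
```

A real controller would close this loop continuously, re-perceiving and re-planning while balancing, but the division of labor (perception supplies the terrain, planning adapts the gait to it) is the same.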

Environment perception

One approach to AI uses a process called machine learning. In machine learning, a computer model is built to predict what may happen in the real world. The model is taught to analyze and recognize patterns in a data set. This training enables the model to then make predictions about new data. Some AI programs can also teach themselves to ask new questions and make novel connections between pieces of information.
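The train-then-predict workflow described above can be shown with a deliberately tiny model, a 1-nearest-neighbour classifier written in plain Python. The data and labels are hypothetical; the point is only the two-phase pattern: learn from a data set, then predict on new data.

```python
# Minimal machine-learning sketch: "training" stores labelled examples,
# prediction labels a new point by its closest stored example.

def train(examples):
    """For 1-nearest-neighbour, training is just storing the examples."""
    return list(examples)

def predict(model, point):
    """Return the label of the stored example closest to `point`."""
    def sq_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(model, key=lambda ex: sq_distance(ex[0], point))
    return nearest[1]

# Hypothetical training set: ((resting heart rate, age), risk label).
training_data = [
    ((60, 30), "low"),
    ((65, 35), "low"),
    ((90, 60), "high"),
    ((95, 65), "high"),
]

model = train(training_data)
print(predict(model, (62, 32)))  # low  (near the "low" examples)
print(predict(model, (92, 63)))  # high (near the "high" examples)
```

Real medical models replace the distance rule with learned statistical structure and use vastly larger data sets, but the training/prediction split is the same.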

“Computer models and humans can really work well together to improve human health,” explains Dr. Grace C.Y. Peng, an NIH expert on AI in medicine. “Computers are very good at doing calculations at a large scale, but they don’t have the intuitive capability that we have. They’re powerful, but how helpful they’re going to be lies in our hands.”

Researchers are exploring ways to harness the power of AI to improve health care. These include assisting with diagnosing and treating medical conditions and delivering care.

As we turn our attention from the front of the eye to the back, we also look to the future. Many studies have combined oculomics with AI tools to predict biological age from retinal biomarkers, such as retinal vasculature [1, 6], and even linked this to chronic disease risk, such as cardiovascular disease and cancer [7]. High resolution imaging tools also enable direct visualisation of the neural layers within the retina, which can show signs of neurodegenerative diseases, such as Alzheimer’s disease [1, 6], Parkinson’s disease [8], multiple sclerosis [6, 9], and even rare conditions, such as Lafora disease [10]. In many cases, the oculomic signs are present before symptoms arise. For example, it has been shown that proteins related to Alzheimer’s disease (such as amyloid-beta) accumulate at least one decade prior to cognitive decline [11] and these proteins also accumulate in the retina [12]. This is particularly pertinent to clinical research and drug development, as it enables identification of those who may benefit from intervention before irreversible damage has taken place.
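The "predict biological age from retinal biomarkers" pipeline mentioned above can be reduced, at its simplest, to a regression that maps quantified retinal features to a predicted age. The features, weights, and measurements below are entirely hypothetical; published studies fit such models (and far more complex ones) on large imaging datasets.

```python
# Hypothetical linear model: biological age from retinal biomarkers.

def predict_biological_age(biomarkers, weights, intercept):
    # age = intercept + sum(weight_i * biomarker_i)
    return intercept + sum(w * x for w, x in zip(weights, biomarkers))

# Made-up features: [vessel tortuosity, artery/vein ratio,
# nerve fibre layer thinning score] with made-up fitted weights.
weights = [12.0, -20.0, 8.0]
intercept = 45.0

patient = [1.5, 0.75, 2.0]  # hypothetical measurements
print(predict_biological_age(patient, weights, intercept))  # 64.0
```

Comparing such a predicted "retinal age" with chronological age is one way studies have derived an age gap that correlates with chronic disease risk.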

Advances in imaging technology mean that we can now detect biomarkers at cellular resolution. We are continually finding new applications for imaging techniques to detect disease before it takes hold, providing the opportunity to intervene and potentially avoid disease altogether. It’s definitely an exciting time for oculomics research!

Crystallomancy has come a long way since Ancient Roman times, and it makes one wonder whether the scryers of the past could have predicted the transformation of orb-gazing from a mystical art to a rigorous science. Not only does oculomics enable us to look into your past and present, but it also has the potential to look into your future, giving you the opportunity to change your “fate”. Although we cannot be sure what form the advancements in imaging and AI tools will take over the coming years, we can be sure of one thing: oculomics has a promising future in the quest for longevity.

The Inevitable Shift towards Machine Labor
Impact Multiplier of Artificial Cognition and Synthetic Minds
Economic Benefits of Cognition and Embodied Services
Addressing Displacement with UBI Funded with Cognitive Services Impact Multipliers

Navigating the Future with AI, Robotics, and UBI
Introduction
In the context of the inevitable shift from human labor to machines, particularly in the realm of cognitive and physical tasks, the introduction of advanced technologies like Tesla’s Optimus robot and the development of artificial cognition and synthetic minds carry profound implications.

The Inevitable Shift towards Machine Labor
The transition from human to machine labor in both cognitive and physical domains is becoming increasingly unavoidable. Technologies like Tesla Optimus represent a significant leap in this direction.

A new artificial intelligence tool interprets medical images with unprecedented clarity, which could allow time-strapped clinicians to dedicate their attention to the critical aspects of disease diagnosis and image interpretation.

The tool, called iStar (Inferring Super-Resolution Tissue Architecture), was developed by researchers at the Perelman School of Medicine at the University of Pennsylvania, who believe it can help clinicians diagnose and better treat cancers that might otherwise go undetected.

The imaging technique provides both highly detailed views of individual cells and a broader look at the full spectrum of how people’s genes operate, which would allow doctors and researchers to see cancer cells that might otherwise have been virtually invisible. The tool can be used to determine whether safe margins were achieved in cancer surgeries and to automatically annotate microscopic images, paving the way for molecular-level disease diagnosis.
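The surgical-margin check mentioned above can be illustrated with a simplified geometric version: given the positions of detected tumour cells in a resected specimen and the tissue boundary, flag any tumour cell closer to the boundary than a safety margin. The coordinates, margin, and function names below are hypothetical; iStar itself works on gene-expression-informed super-resolution images, not this toy geometry.

```python
# Toy surgical-margin check on detected tumour-cell positions.
import math

def margin_violations(tumor_cells, boundary_points, margin_mm=2.0):
    """Return tumour cells lying within `margin_mm` of the tissue boundary."""
    def dist_to_boundary(cell):
        return min(math.dist(cell, b) for b in boundary_points)
    return [c for c in tumor_cells if dist_to_boundary(c) < margin_mm]

boundary = [(0, y) for y in range(0, 11)]   # left edge of the specimen (mm)
tumor_cells = [(1.0, 5.0), (6.0, 5.0)]      # one near the edge, one deep inside

print(margin_violations(tumor_cells, boundary))  # [(1.0, 5.0)]
```

In practice the cell positions would come from the model's annotations and the boundary from segmentation, but the question asked is the same: is any malignant cell too close to the cut edge?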