
In an exciting development for quantum computing, researchers from the University of Chicago’s Department of Computer Science, Pritzker School of Molecular Engineering, and Argonne National Laboratory have introduced a classical algorithm that simulates Gaussian boson sampling (GBS) experiments.

In this sense, the cemi theory incorporates Chalmers’ (Chalmers 1995) ‘double-aspect’ principle that information has both a physical and a phenomenal, or experiential, aspect. At the particulate level, a molecule of the neurotransmitter glutamate encodes bond energies, angles, etc., but nothing extrinsic to itself. Awareness makes no sense for this kind of matter-encoded information: what can glutamate be aware of except itself? Conversely, at the wave level, information encoded in physical fields is physically unified and can encode extrinsic information, as utilized in TV and radio signals. This EM field-based information will, according to the double-aspect principle, be a suitable substrate for experience. As proposed in my earlier paper (McFadden 2002a), ‘awareness will be a property of any system in which information is integrated into an information field that is complex enough to encode representations of real objects in the outside world (such as a face)’. Nevertheless, awareness is meaningless unless it can be communicated, so only fields that have access to a motor system, such as the cemi field, are candidates for any scientific notion of consciousness.

I previously proposed (McFadden 2013b) that complex information acquires its meaning, in the sense of the binding of all the varied aspects of a mental object, in the brain’s EM field. Here, I extend this idea to propose that meaning is an algorithm experienced in its entirety, from problem to solution, as a single percept in the global workspace of the brain’s EM field. This is where distributed information encoded in millions of physically separated neurons comes together. It is where Shakespeare’s words are turned into his poetry. It is also where problems and solutions, such as how to untangle a rope from the wheels of a bicycle, are grasped in their entirety.

There are of course many unanswered questions, such as the degree and extent of synchrony required to encode conscious thoughts, the influence of drugs or anaesthetics on the cemi field, or whether cemi fields are causally active in animal brains. Yet the cemi theory provides a new paradigm in which consciousness is rooted in an entirely physical, measurable, and artificially malleable structure and is amenable to experimental testing. The cemi field theory thereby delivers a kind of dualism, but it is a scientific dualism built on the distinction between matter and energy, rather than matter and spirit. Consciousness is what algorithms that exist simultaneously in the space of the brain’s EM field feel like.

An international research team, led by Professor Gong Xiao from the National University of Singapore, has achieved a groundbreaking advancement in photonic-electronic integration. Their work, published in Light: Science & Applications (“Thin film ferroelectric photonic-electronic memory”), features Postdoc Zhang Gong and PhD student Chen Yue as co-first authors. They developed a non-volatile photonic-electronic memory chip utilizing a micro-ring resonator integrated with thin-film ferroelectric material.

This innovation successfully addresses the challenge of dual-mode operation in non-volatile memory, offering compatibility with silicon-based semiconductor processes for large-scale integration. The chip operates at low voltage and offers a large memory window, high endurance, and multi-level storage capabilities. This breakthrough is poised to accelerate the development of next-generation photonic-electronic systems, with significant applications in optical interconnects, high-speed data communication, and neuromorphic computing.

As big data and AI grow, traditional computers struggle with large-scale tasks. Photonic computing offers potential, but interfacing with electronic chips is challenging. Current storage can’t handle dual-mode operation, and optical-electrical-optical (OEO) conversion adds losses and delays. A non-volatile memory that enables efficient data exchange between photonic and electronic chips is essential.

But what if you’re a manufacturer without the budget, bandwidth or time to invest in advanced digital transformation right now? You can still take practical steps to move forward. Start with fundamental data collection and analytics tools to lay the groundwork. Leveraging visibility solutions like barcode scanning, wearables or other basic Internet of Things (IoT) devices can help monitor machines and surface insights that drive improvements.

Quality is the final piece of the equation. Once you’re further down the path to transformation, implement visibility solutions and augment and upskill workers with technology to optimize quality. To drive quality even further, add advanced automation solutions. You don’t have to boil the ocean on your digital transformation journey—take it one step at a time from wherever you’re starting.

Most manufacturers (87%) in Zebra’s study agree it’s a challenge to pilot new technologies or move beyond the pilot phase, yet they plan to advance digital maturity by 2029. With the right technology tools and solutions in place to advance visibility, augment workers and optimize quality, they will get there.

We revisit the dynamics of interacting vector-like dark energy, a theoretical framework proposed to explain the accelerated expansion of the universe. By investigating the interaction between vector-like dark energy and dark matter, we analyze its effects on the cosmic expansion history and the thermodynamics of the accelerating universe. Our results demonstrate that the presence of interaction significantly influences the evolution of vector-like dark energy, leading to distinct features in its equation of state and energy density. We compare our findings with observational data and highlight the importance of considering interactions in future cosmological studies.
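In interacting dark-energy models of this kind, the coupling is typically introduced as an energy-exchange term in the background continuity equations. A minimal sketch, assuming a coupling function $Q$ and an equation-of-state parameter $w$ for the vector-like dark energy (notation assumed here, not taken from the paper):

```latex
\dot{\rho}_{\mathrm{de}} + 3H\,(1 + w)\,\rho_{\mathrm{de}} = -Q, \qquad
\dot{\rho}_{\mathrm{dm}} + 3H\,\rho_{\mathrm{dm}} = +Q
```

With $Q > 0$, energy flows from dark energy to dark matter while the total density still obeys the usual conservation law; common phenomenological choices take $Q$ proportional to $H$ times one of the energy densities.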

Researchers from the University of Pisa developed a quantum subroutine to improve matrix multiplication for AI and machine learning applications.

Multiplying two large matrices is a common task in fields like machine learning, but it can be time-consuming, even for powerful computers…


In a recent study published in IEEE Access, a team of researchers from the University of Pisa introduced a quantum subroutine designed to streamline matrix multiplication. The subroutine adds a new tool to the matrix-multiplication toolbox, one that could improve computational efficiency, particularly in applications like machine learning and data processing.
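To see why large matrix products are costly, here is the classical schoolbook algorithm, whose triple loop performs n·m·p scalar multiplications. This is only the classical baseline for context, a sketch and not the Pisa team's quantum subroutine:

```python
# Classical schoolbook matrix multiplication: the O(n^3) baseline that
# quantum (and fast classical) approaches aim to improve on.
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "inner dimensions must match"
    C = [[0] * p for _ in range(n)]
    for i in range(n):            # n * m * p scalar multiplications in total
        for k in range(m):
            a = A[i][k]
            for j in range(p):
                C[i][j] += a * B[k][j]
    return C

# Example: multiplying by the 2x2 identity returns the matrix unchanged.
I = [[1, 0], [0, 1]]
M = [[2, 3], [4, 5]]
print(matmul(I, M))  # [[2, 3], [4, 5]]
```

Doubling the matrix size multiplies the work by eight, which is exactly the scaling that makes large products slow and motivates faster subroutines.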

It’s A Matrix World And We’re Just Living In It

Break it down: How AI can learn from the brain.

In a recent paper titled “A sensory-motor theory of the neocortex” published in the journal Nature Neuroscience, Rao posited that the brain uses active predictive coding (APC) to understand the world and break down complicated problems into simpler…


When you reach out to pet a dog, you expect it to feel soft. If it doesn’t feel the way you expect, your brain uses that feedback to inform your next action — maybe you pull your hand away. Previous models of how the brain works have typically separated perception and action. For Allen School professor Rajesh Rao, those two processes are closely intertwined, and their relationship can be mapped using a computational algorithm.
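The dog-petting example can be caricatured in a few lines of code: the agent predicts a sensation, compares it with what it actually feels, and uses the prediction error both to update its internal model (perception) and to pick its next move (action). This is an illustrative toy with invented numbers, not Rao's active predictive coding model:

```python
# Toy predictive-coding step: prediction error drives both belief update
# (perception) and action selection, so the two are intertwined.
def predictive_step(belief, sensation, learning_rate=0.5):
    error = sensation - belief                # prediction error
    belief += learning_rate * error           # perception: revise the model
    action = "withdraw" if abs(error) > 1.0 else "continue"  # act on the error
    return belief, error, action

belief = 5.0  # expected softness of the dog's fur (arbitrary units)
for felt in [5.2, 4.9, 2.0]:  # the last touch feels unexpectedly rough
    belief, err, act = predictive_step(belief, felt)
    print(f"felt={felt:.1f} error={err:+.2f} action={act}")
```

A small error nudges the belief and the hand keeps petting; a large error (the rough patch) both revises the belief sharply and triggers the withdrawal, so perception and action come out of the same loop.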

“This flips the traditional paradigm of perception occurring before action,” said Rao, the Cherng Jia and Elizabeth Yun Hwang Professor in the Allen School and University of Washington Department of Electrical & Computer Engineering and co-director of the Center for Neurotechnology.

You know the classic game Pong with the paddles and ball that moves across the screen? Imagine the ball and paddles synchronized to music. Victor Tao approached the challenge as an optimization problem to figure out where the paddles and ball should go, based on the beats of a song:

Fortunately there is a mature field dedicated to optimizing an objective (screen utilization) with respect to variables (the locations of bounces) in the presence of constraints on those variables (physics and the beats of the song). If we write our requirements as a constrained optimization problem, we can use an off-the-shelf solver to compute optimal paddle positions instead of designing an algorithm ourselves.
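As a toy version of that framing, the sketch below picks one horizontal bounce position per beat so the ball sweeps as much of the screen as possible (the objective) while never having to travel faster than a maximum speed between beats (the constraint). The function name and numbers are invented for illustration; the real project hands a richer formulation to an off-the-shelf constrained-optimization solver:

```python
# Toy Song Pong: choose a bounce x-position for each beat, maximizing the
# horizontal sweep subject to a speed limit between consecutive beats.
def bounce_positions(beat_times, width=100.0, v_max=60.0):
    xs = [0.0]                        # ball starts at the left edge
    target = width                    # head for the far wall first
    for t0, t1 in zip(beat_times, beat_times[1:]):
        reach = v_max * (t1 - t0)     # farthest the ball can travel by t1
        step = max(-reach, min(reach, target - xs[-1]))  # clip to the constraint
        xs.append(xs[-1] + step)
        if xs[-1] == target:          # wall reached: head the other way
            target = width - target
    return xs

beats = [0.0, 0.5, 1.0, 1.5, 2.0]
print(bounce_positions(beats))  # [0.0, 30.0, 60.0, 90.0, 100.0]
```

Greedily heading for the far wall and clipping each step to the reachable range is a bang-bang strategy that maximizes sweep in this toy; a general solver earns its keep once you add the real constraints (paddle physics, multiple balls, note timing).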

The result is Song Pong, and the Python code is on GitHub. [via Waxy].