In today’s episode, William Hahn explores how Wolfram’s universal computation and Leibniz’s layered consciousness might converge in modern AI, potentially yielding a new evolutionary step in machine self-awareness.

Rubin Gruber Sandbox (referenced by Will): https://www.fau.edu/sandbox.

The latent circuit model captures task-related neural activity in the low-dimensional subspace spanned by the columns of Q, with dynamics within this subspace generated by the neural circuit Eq. (2). We infer the latent circuit parameters (Q, w_rec, w_in and w_out) from neural activity y by minimizing the loss function L = ∑_{k,t} (‖y − Qx‖² + ‖z − w_out x‖²), where k and t index trials and time within a trial, respectively (Methods).

In the latent circuit model, the heterogeneity of single-neuron responses has three possible sources: mixing of task inputs to the latent circuit via w_in, recurrent interactions among latent nodes via w_rec and linear mixing of representations in single neurons via the embedding Q. The orthonormality constraint on Q implies that the projection defined by the transpose matrix Qᵀ is a dimensionality reduction in which projection onto the ith column of Q correlates with the activity of the ith node in the latent circuit. Conversely, the image of each latent node i is a high-dimensional activity pattern given by the ith column q_i of the matrix Q. Thus, the latent circuit provides a dimensionality reduction that incorporates an explicit mechanistic hypothesis for how the resulting low-dimensional dynamics are generated.

In general, it is not obvious under what circumstances we can satisfactorily fit a latent circuit model to the responses of a high-dimensional system. If, for example, solutions to cognitive tasks that emerge in large systems are qualitatively different from mechanisms operating in small circuits, then we should not be able to adequately fit task-related dynamics of the large system with a low-dimensional circuit model. However, the existence of a low-dimensional circuit solution that accurately captures dynamics of the large system would suggest that this circuit mechanism may be latent in the high-dimensional system.
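The geometry described above — an orthonormal embedding Q mapping low-dimensional latent dynamics into high-dimensional neural activity, with Qᵀ acting as the dimensionality reduction — can be illustrated with a minimal numpy sketch. This is not the authors' fitting procedure (which optimizes all circuit parameters jointly); it only demonstrates, with simulated data and illustrative dimensions, how the projection and the two terms of the loss function L are computed.

```python
import numpy as np

# Illustrative dimensions: N recorded neurons, n latent nodes, T time steps.
rng = np.random.default_rng(0)
N, n, T = 50, 4, 200

# Orthonormal embedding Q (N x n) via QR decomposition, so that Q.T @ Q = I.
Q, _ = np.linalg.qr(rng.standard_normal((N, n)))

# Simulated latent trajectories x (n x T), readout weights w_out (1 x n),
# high-dimensional activity y = Q x plus noise, and behavioural output z.
x = rng.standard_normal((n, T))
w_out = rng.standard_normal((1, n))
y = Q @ x + 0.001 * rng.standard_normal((N, T))
z = w_out @ x

# Dimensionality reduction: orthonormality of Q means Q.T projects y
# onto the latent subspace, recovering the latent trajectories.
x_hat = Q.T @ y

# The two terms of the loss: neural reconstruction error and output error,
# summed over time (a single simulated "trial", so no trial index k here).
loss = np.sum((y - Q @ x_hat) ** 2) + np.sum((z - w_out @ x_hat) ** 2)
print(f"loss = {loss:.6f}")  # small, since y lies near the column space of Q
```

Because Q is orthonormal, projecting with Qᵀ and re-embedding with Q leaves only the component of y outside the latent subspace, which here is just the injected noise.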

Researchers at Harvard University and the Chinese University of Hong Kong have developed a technique that increases the solubility of drug molecules by up to three orders of magnitude, a potential breakthrough in drug formulation and delivery.

Over 60% of pharmaceutical drug candidates suffer from poor water solubility, which limits their bioavailability and therapeutic viability. Conventional techniques such as particle-size reduction, solid dispersion, lipid-based systems, and mesoporous confinement often have drug-specific limitations, can be costly to implement, and are prone to stability issues.

The newly developed approach addresses these issues by exploiting competitive adsorption between drug molecules and water on engineered silica surfaces. Because it requires neither chemical modification of the drug molecules nor additional solubilizing agents, it could potentially replace multiple drug delivery technologies.

Formation of biomolecular condensates composed of proteins and RNA facilitates the regulation of gene expression by modulating translation or RNA processing. Now, synthetic ribonucleoprotein granules created with engineered intrinsically disordered proteins selectively sequester mRNA and enhance protein translation in cells. These highly liquid-like condensates exchange biomolecules throughout the cell and facilitate partitioning of target mRNA and ribosomes.

Particle detectors play a crucial role in our understanding of the fundamental building blocks of the universe. They allow scientists to study the behavior and properties of the particles produced in high-energy collisions. Such particles are boosted to near the speed of light in large accelerators and then smashed into targets or other particles, and the collision products are analyzed with detectors. Traditional detectors, however, lack the needed sensitivity and precision for certain types of research.

Researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory have made a significant breakthrough in the field of high-energy particle detection in recent experiments conducted at the Test Beam Facility at DOE’s Fermi National Accelerator Laboratory (Fermilab).

They have found a new use for superconducting nanowire single-photon detectors (SNSPDs), which are already employed to detect photons, the fundamental particles of light. These incredibly sensitive and precise detectors work by absorbing individual photons: the absorption generates a small electrical change in the superconducting nanowire, held at very low temperature, allowing individual photons to be detected and measured. Specialized devices able to detect individual photons are crucial for quantum cryptography (the science of keeping information secret and secure), advanced optical sensing (precision measurement using light) and quantum computing.