
All sensations—hunger, feeling pain, seeing red, falling in love—are the result of physiological states that an LLM simply doesn’t have. Consequently, we know that an LLM cannot have subjective experiences of those states. In other words, it cannot be sentient.

An LLM is a mathematical model implemented on silicon chips. It is not an embodied being the way humans are. It does not have a “life” that requires it to eat, drink, reproduce, experience emotion, get sick, and eventually die.

It is important to understand the profound difference between how humans generate sequences of words and how an LLM generates those same sequences. When I say “I am hungry,” I am reporting on my sensed physiological states. When an LLM generates the sequence “I am hungry,” it is simply generating the most probable completion of the sequence of words in its current prompt. It is doing exactly the same thing as when, with a different prompt, it generates “I am not hungry,” or with yet another prompt, “The moon is made of green cheese.” None of these are reports of its (nonexistent) physiological states. They are simply probabilistic completions.
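To see how mechanical this is, here is a deliberately tiny sketch in Python. The probability table, the two-token context, and the greedy decoding rule are all invented for illustration; a real LLM uses a learned neural network over a vocabulary of tens of thousands of tokens, but the loop has the same shape.

```python
# Toy illustration (not a real LLM): "completion" is just repeatedly
# appending the most probable next token given the recent context.
# The probability table below is invented for this example.
probs = {
    ("I", "am"): {"hungry": 0.6, "not": 0.4},
    ("am", "not"): {"hungry": 0.9, "tired": 0.1},
}

def complete(prompt, steps=5):
    tokens = prompt.split()
    for _ in range(steps):
        context = tuple(tokens[-2:])            # last two tokens as context
        dist = probs.get(context)
        if dist is None:                        # unknown context: stop
            break
        tokens.append(max(dist, key=dist.get))  # greedy: most probable token
    return " ".join(tokens)

print(complete("I am"))  # -> "I am hungry"
```

With a different table (or, in a real model, a different prompt), the same loop emits “I am not hungry” just as readily; nothing in the mechanism consults any bodily state.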

Watch Project Astra factorise a maths problem and even correct a graph. All shot on a prototype glasses device, in a single take in real time.

Project Astra is a prototype that explores the future of AI assistants. Building on our Gemini models, we’ve developed AI agents that can quickly process multimodal information, reason about the context you’re in, and respond to questions at a conversational pace, making interactions feel much more natural.

More about Project Astra: deepmind.google/project-astra

Karmela Padavic-Callaghan is a science writer reporting on physics, materials science and quantum technology. Karmela earned a PhD in theoretical condensed matter physics and atomic, molecular and optical physics from the University of Illinois Urbana-Champaign. Their research has been published in peer-reviewed journals, including Physical Review Letters and New Journal of Physics.

They studied ultracold atomic systems in novel geometries in microgravity and the interplay of disorder and quasiperiodicity in one-dimensional systems, including metamaterials. During their doctoral training, they also participated in several art-based projects, including co-developing a course on physics and art and serving as a production manager for a devised theatre piece titled Quantum Voyages.

Before joining New Scientist, Karmela was an assistant professor at Bard High School Early College in New York City, where they taught high school and college courses in physics and mathematics. Karmela’s freelance writing has been featured in Wired, Scientific American, Slate, MIT Technology Review, Quanta Magazine and Physics World.

An international research team has shown that phonons, the quantum particles behind material vibrations, can be classified using topology, much like electronic bands in materials. This breakthrough could lead to the development of new materials with unique thermal, electrical, and mechanical properties, enhancing our understanding and manipulation of solid-state physics.

An international group of researchers has found that the quantum particles behind material vibrations, which affect a material's stability and other characteristics, can be classified through topology. Known as phonons, these particles represent the collective vibrational patterns of atoms within a crystal structure: they create disturbances that spread like waves to nearby atoms. Phonons are crucial for several properties of solids, such as thermal and electrical conductivity, neutron scattering, and quantum states including charge density waves and superconductivity.

The spectrum of phonons (essentially their energy as a function of momentum) and their wave functions, which describe their probability distribution in real space, can be computed using ab initio (first-principles) codes. However, these calculations have so far lacked a unifying principle. For the quantum behavior of electrons, topology, a branch of mathematics, has successfully classified the electronic bands in materials. This classification reveals that seemingly different materials can in fact be topologically very similar.
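As a concrete illustration of what a phonon spectrum is, consider the textbook one-dimensional monatomic chain (a standard example, not a calculation from the study): atoms of mass m connected by springs of stiffness K, spaced a apart. Newton's equations plus a plane-wave ansatz give the dispersion relation:

```latex
% Equation of motion for the displacement u_n of atom n:
%   m \ddot{u}_n = K (u_{n+1} - 2 u_n + u_{n-1})
% The plane-wave ansatz u_n \propto e^{i(kna - \omega t)} yields
\[
  \omega(k) = 2\sqrt{\frac{K}{m}}\,\left|\sin\frac{ka}{2}\right|,
  \qquad -\frac{\pi}{a} < k \le \frac{\pi}{a}.
\]
```

Ab initio codes compute the analogue of this ω(k), band by band, for real three-dimensional crystals; the topological classification then asks which qualitative features of those bands are robust under smooth deformations.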

“Mathematics, rightly viewed, possesses not only truth, but supreme beauty — a beauty cold and austere, like that of sculpture.”

- Bertrand Russell (1872–1970), “The Study of Mathematics”, reprinted in Mysticism and Logic (1918)

https://mathshistory.st-andrews.ac.uk/Biographies/Russell/


A History of Western Philosophy was written during the Second World War, having its origins in a series of lectures on the history of philosophy that Russell gave at the Barnes Foundation in Philadelphia during 1941 and 1942.[2] Much of the historical research was done by Russell’s third wife Patricia. In 1943, Russell received an advance of $3000 from the publishers, and between 1944 and 1945 he wrote the book while living at Bryn Mawr College. The book was published in 1946 in the United Kingdom and a year later in the US. It was re-set as a ‘new edition’ in 1961, but no new material was added. Corrections and minor revisions were made to printings of the British first edition and for 1961’s new edition; no corrections seem to have been transferred to the American edition (even Spinoza’s birth year remains wrong).

Perturbative expansion is a valuable mathematical technique which is widely used to break down descriptions of complex quantum systems into simpler, more manageable parts. Perhaps most importantly, it has enabled the development of quantum field theory (QFT): a theoretical framework that combines principles from classical, quantum, and relativistic physics, and serves as the foundation of the Standard Model of particle physics.
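As a standard illustration of the technique (Rayleigh-Schrödinger perturbation theory from ordinary quantum mechanics, not a QFT calculation), one splits the Hamiltonian into an exactly solvable part and a weak perturbation, then expands the energy order by order in the coupling:

```latex
% H = H_0 + \lambda V, with H_0 exactly solvable and \lambda small.
% The energy of the n-th state expands in powers of \lambda:
\[
  E_n(\lambda) = E_n^{(0)}
    + \lambda \langle n | V | n \rangle
    + \lambda^2 \sum_{m \neq n}
        \frac{\left|\langle m | V | n \rangle\right|^2}{E_n^{(0)} - E_m^{(0)}}
    + \mathcal{O}(\lambda^3).
\]
```

QFT applies the same strategy to interacting fields: each order in the coupling is organized diagrammatically, and low-order truncations already yield remarkably accurate predictions.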

If you use the web for more than just browsing (that’s pretty much everyone), chances are you’ve had your fair share of “CAPTCHA rage,” the frustration stemming from trying to discern a marginally legible string of letters aimed at verifying that you are a human. CAPTCHA, which stands for “Completely Automated Public Turing test to tell Computers and Humans Apart,” was introduced to the Internet a decade ago and has seen widespread adoption in various forms — whether using letters, sounds, math equations, or images — even as complaints about its use continue.

A large-scale Stanford study a few years ago concluded that “CAPTCHAs are often difficult for humans.” It has also been reported that around 1 in 5 visitors will leave a website rather than complete a CAPTCHA.

A longstanding belief is that the inconvenience of CAPTCHAs is the price we all pay for keeping websites secure. But there’s no escaping that CAPTCHAs are becoming harder for humans and easier for artificial intelligence programs to solve.

Artificial neural networks (ANNs) show a remarkable pattern when trained on natural data: irrespective of exact initialization, dataset, or training objective, models trained on the same data domain converge to similar learned features. For example, across different image models, the initial-layer weights tend to converge to Gabor filters and color-contrast detectors. Many of these features are also observed in biological systems, including the visual cortex, suggesting representations that are universal across biological and artificial systems. These findings are well established empirically in the machine-learning interpretability literature, but they lack theoretical explanations.

Localized versions of canonical 2D Fourier basis functions, e.g., Gabor filters or wavelets, are the most commonly observed universal features in image models. When vision models are trained on objectives such as efficient coding, classification, temporal coherence, or next-step prediction, these Fourier features appear in the models’ initial layers. Non-localized Fourier features, by contrast, have been observed in networks trained on tasks where cyclic wraparound is allowed, for example, modular arithmetic, more general group compositions, or invariance to the group of cyclic translations.
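For readers who have not seen one, a Gabor filter is simply a plane wave windowed by a Gaussian envelope, i.e., a localized 2D Fourier basis function. A minimal sketch (parameter values are illustrative, not taken from any particular model):

```python
# A Gabor filter: cosine plane wave multiplied by a Gaussian window.
import numpy as np

def gabor(size=32, wavelength=8.0, theta=0.0, sigma=5.0):
    """Return a size x size Gabor filter (illustrative parameters)."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    # Rotate coordinates so the wave propagates along angle theta.
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # Gaussian window
    carrier = np.cos(2 * np.pi * x_rot / wavelength)    # plane wave
    return envelope * carrier

# A small bank of filters at different orientations, like those found
# in the first layers of many vision models.
filters = [gabor(theta=t) for t in (0.0, np.pi / 4, np.pi / 2)]
print(filters[0].shape)  # (32, 32)
```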

Researchers from KTH, the Redwood Center for Theoretical Neuroscience, and UC Santa Barbara have introduced a mathematical explanation for the emergence of Fourier features in learning systems such as neural networks. They attribute this emergence to downstream invariance: the learner becomes insensitive to certain transformations of the input, e.g., planar translation or rotation. The team derived theoretical guarantees about Fourier features in invariant learners that apply across different machine-learning models. The derivation rests on the idea that invariance is a fundamental bias, injected implicitly (and sometimes explicitly) into learning systems by the symmetries of natural data.
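One concrete instance of this invariance argument is a classical linear-algebra fact (sketched here on its own; the paper's guarantees are more general): a linear map that commutes with cyclic translations must be a circulant matrix, and every circulant matrix is diagonalized by the discrete Fourier basis, so Fourier modes are forced on any such translation-invariant learner.

```python
# Invariance to cyclic translation forces Fourier structure:
# a matrix commuting with cyclic shifts is circulant, and circulant
# matrices are diagonalized by the discrete Fourier transform.
import numpy as np

rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)

# Circulant matrix: row i is the first row cyclically shifted by i.
C = np.stack([np.roll(c, i) for i in range(n)])

# Unitary DFT matrix: its rows/columns are the complex Fourier modes.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)

# Conjugating C by F should give a (numerically) diagonal matrix.
D = F @ C @ np.linalg.inv(F)
off_diagonal = D - np.diag(np.diag(D))
print(np.allclose(off_diagonal, 0.0, atol=1e-10))  # True
```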