By stepping outside the box of our usual way of thinking about numbers, my colleagues and I have recently shown that arithmetic has biological roots and is a natural consequence of how perception of the world around us is organized.

Our results explain why arithmetic is true and suggest that mathematics is a realization in symbols of the fundamental nature and creativity of the mind.

Thus, the miraculous correspondence between mathematics and physical reality that has been a source of wonder from the ancient Greeks to the present—as explored in astrophysicist Mario Livio’s book Is God a Mathematician?—suggests the mind and world are part of a common unity.

Google DeepMind researchers recently developed a technique to improve the math ability of AI language models like ChatGPT by using other AI models to refine prompting—the written instructions that tell the AI model what to do. They found that using human-style encouragement improved math skills dramatically, in line with earlier results.

In a paper called “Large Language Models as Optimizers” listed this month on arXiv, DeepMind scientists introduced Optimization by PROmpting (OPRO), a method to improve the performance of large language models (LLMs) such as OpenAI’s ChatGPT and Google’s PaLM 2. This new approach sidesteps the limitations of traditional math-based optimizers by using natural language to guide LLMs in problem-solving. “Natural language” is a fancy way of saying everyday human speech.
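As a rough illustration of what "improving prompting" means in practice, the sketch below scores a candidate instruction by prepending it to a handful of math questions and counting correct answers; the function names and the query_llm stand-in are placeholders of ours, not code from the paper. OPRO's role is to search for instructions that push this score as high as possible, and encouraging, human-style phrases along the lines of "take a deep breath and work on this problem step by step" reportedly scored among the best.

```python
# Hypothetical sketch of scoring one candidate instruction on math problems.
# None of these names come from the paper; query_llm stands in for any
# chat-completion API call (e.g., to PaLM 2 or ChatGPT).

def query_llm(prompt: str) -> str:
    """Placeholder: replace with a real call to an LLM API."""
    raise NotImplementedError

def score_instruction(instruction: str, problems: list[tuple[str, str]]) -> float:
    """Fraction of problems answered correctly when the candidate
    instruction is prepended to each question."""
    correct = 0
    for question, expected in problems:
        reply = query_llm(f"{instruction}\n\nQ: {question}\nA:")
        if expected in reply:
            correct += 1
    return correct / len(problems)

# Usage idea: compare an empty instruction against an encouraging one on a
# held-out set of word problems, and keep whichever scores higher.
```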

A team of researchers in Japan claims to have figured out a way to translate the clucking of chickens with the use of artificial intelligence.

As detailed in a yet-to-be-peer-reviewed preprint, the team led by University of Tokyo professor Adrian David Cheok — who has previously studied sex robots — came up with a “system capable of interpreting various emotional states in chickens, including hunger, fear, anger, contentment, excitement, and distress” by using a “cutting-edge AI technique we call Deep Emotional Analysis Learning.”

They say the technique is “rooted in complex mathematical algorithms” and can even adapt to the ever-changing vocal patterns of chickens, meaning that it only gets better at deciphering “chicken vocalizations” over time.
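The preprint does not spell out how "Deep Emotional Analysis Learning" actually works, so the following is purely an illustrative sketch of how such a task is usually framed: extract features from each recording and incrementally train a classifier over the emotional states listed above. Every choice here (the label set as class names, MFCC features, a scikit-learn model) is our assumption, not the authors' method.

```python
# Illustrative only; not the preprint's code. Extract simple audio features
# from clucking recordings and incrementally train a classifier over the
# emotional-state labels the article lists.

import numpy as np
import librosa                              # audio loading and MFCC features
from sklearn.linear_model import SGDClassifier

LABELS = ["hunger", "fear", "anger", "contentment", "excitement", "distress"]

def features(wav_path: str) -> np.ndarray:
    """Summarize a recording as its mean MFCC vector (a common, simple baseline)."""
    audio, sample_rate = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=13)
    return mfcc.mean(axis=1)

# partial_fit lets the model keep learning as new labeled clips arrive,
# loosely mirroring the claim that the system adapts to changing vocal patterns.
model = SGDClassifier(loss="log_loss")

def update(clips: list[str], labels: list[str]) -> None:
    X = np.stack([features(path) for path in clips])
    model.partial_fit(X, labels, classes=LABELS)

def predict(clip: str) -> str:
    return model.predict(features(clip).reshape(1, -1))[0]
```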

Scientists from the University of Texas at Dallas have identified a previously unknown “housekeeping” process in kidney cells that ejects unwanted content, resulting in cells that rejuvenate themselves and remain functioning and healthy.

This unique self-renewal method, distinct from known regeneration processes in other body tissues, sheds light on how the kidneys can maintain their health throughout one’s life in the absence of injury or illness. The team detailed their findings in a study recently published in Nature Nanotechnology.

Unlike the liver and skin, where cells divide to create new daughter cells and regenerate the organ, cells in the proximal tubules of the kidney are mitotically quiescent — they do not divide to create new cells. In cases of a mild injury or disease, kidney cells do have limited repair capabilities, and stem cells in the kidney can form new kidney cells, but only up to a point, said Dr. Jie Zheng, professor of chemistry and biochemistry in the School of Natural Sciences and Mathematics and co-corresponding author of the study.

From the oxygen-carrying corpuscles in our blood to the branching neurons that govern our thoughts, our body is built of a dazzling variety of cells.

Researchers from institutions in Germany, Canada, Spain, and the US have published a comprehensive study of how many individual cells of each type there are in typical bodies.

Based on an exhaustive analysis of over 1,500 published sources, the study estimates that a typical adult male contains a total of around 36 trillion cells, while adult females tend to have some 28 trillion. A 10-year-old child, by comparison, would have in the region of 17 trillion.

The universe is bigger than you think.

This means any deep-space future awaiting humanity outside our solar system will remain beyond the span of a single life until we develop a means of propulsion that outclasses conventional rockets. So when three studies rocked the world earlier this year, it felt like a dream come true: warp drive was no longer science fiction, and the work seemed to offer a theoretical basis for faster-than-light engines that could cut a trip to Mars down to minutes.

However, a recent study shared on a preprint server cast doubt on the theory, pointing to a gap in the math that could put the viability of a physical warp drive back into the realm of speculation.

Quantum behavior is a strange, fragile thing that hovers on the edge of reality, between a world of possibility and a Universe of absolutes. In that mathematical haze lies the potential of quantum computing: the promise of devices that could quickly solve problems that would take classical computers too long to process.

For now, quantum computers are confined to cool rooms kept close to absolute zero (−273 degrees Celsius), where particles are less likely to tumble out of their critical quantum states.

Breaking through this temperature barrier to develop materials that still exhibit quantum properties at room temperature has long been a goal of quantum computing research. Though the low temperatures help keep the particles’ properties from collapsing out of their useful fog of possibility, the bulk and expense of the cooling equipment limit the technology’s potential and its ability to be scaled up for general use.

When people program new deep learning AI models — those that can focus on the right features of data by themselves — the vast majority rely on optimization algorithms, or optimizers, to ensure the models reach a high enough rate of accuracy. But one of the most commonly used classes of optimizers — derivative-based optimizers — runs into trouble handling real-world applications.
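For context on what a derivative-based optimizer is, here is a minimal, generic example of gradient descent, the kind of update rule used to train deep learning models; it is our illustration, not DeepMind's code, and it assumes the objective is differentiable, which is precisely the assumption that fails when the thing being optimized is a written prompt.

```python
# A derivative-based optimizer in miniature: plain gradient descent.
# It only works when gradients of the objective are available.

def gradient_descent(grad, params, learning_rate=0.1, steps=100):
    """Repeatedly step against the gradient of a differentiable objective."""
    for _ in range(steps):
        params = [p - learning_rate * g for p, g in zip(params, grad(params))]
    return params

# Example: minimize f(x, y) = x^2 + y^2, whose gradient is (2x, 2y).
print(gradient_descent(lambda p: [2 * p[0], 2 * p[1]], [3.0, -4.0]))
# -> values very close to [0.0, 0.0]
```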

In a new paper, researchers from DeepMind propose a new way: Optimization by PROmpting (OPRO), a method that uses AI large language models (LLMs) as optimizers. The unique aspect of this approach is that the optimization task is defined in natural language rather than through formal mathematical definitions.

The researchers write, “Instead of formally defining the optimization problem and deriving the update step with a programmed solver, we describe the optimization problem in natural language, then instruct the LLM to iteratively generate new solutions based on the problem description and the previously found solutions.”
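Read literally, that description is a simple loop: keep a list of previously tried solutions with their scores, show that list to an "optimizer" LLM along with the task description, ask for a better candidate, score it, and repeat. The sketch below is our hedged reading of that loop, not DeepMind's implementation; optimizer_llm and evaluate are hypothetical stand-ins for an LLM API call and a task-specific scoring step.

```python
# Hedged sketch of the loop the quote describes; names are placeholders.

def optimizer_llm(prompt: str) -> str:
    """Placeholder for a call to the LLM acting as the optimizer."""
    raise NotImplementedError

def evaluate(solution: str) -> float:
    """Placeholder for the scoring step (e.g., accuracy on a task)."""
    raise NotImplementedError

def opro(task_description: str, seed_solutions: list[str], rounds: int = 10) -> str:
    # The "meta-prompt": the task stated in natural language plus previously
    # found solutions and their scores, from which the LLM proposes better ones.
    history = [(s, evaluate(s)) for s in seed_solutions]
    for _ in range(rounds):
        history.sort(key=lambda pair: pair[1])          # worst to best
        scored = "\n".join(f"text: {s}\nscore: {v:.2f}" for s, v in history)
        meta_prompt = (
            f"{task_description}\n\n"
            f"Here are previous solutions and their scores:\n{scored}\n\n"
            "Write a new solution that is different from all of the above "
            "and has a higher score."
        )
        candidate = optimizer_llm(meta_prompt)
        history.append((candidate, evaluate(candidate)))
    return max(history, key=lambda pair: pair[1])[0]
```

In the prompt-optimization setting from the earlier item, each "solution" would be an instruction string and evaluate would be its accuracy on a set of math problems.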