Mar 6, 2023
An Applied Mathematician Strengthens AI With Pure Math
Posted by Kelvin Dafiaghor in categories: mathematics, robotics/AI
Lek-Heng Lim uses tools from algebra, geometry and topology to answer questions in machine learning.
Two mathematicians from Australia and France have come up with a new, faster way to multiply extremely long numbers together.
In doing so, they have cracked an algorithmic puzzle that remained unsolved by some of the world’s best-known math minds for almost fifty years.
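The new algorithm itself reportedly builds on fast Fourier transform techniques and is well beyond a short snippet, but the simpler point that long multiplication can beat the schoolbook O(n²) method is easy to illustrate. The sketch below uses Karatsuba’s classical divide-and-conquer algorithm as an illustrative stand-in; it is not the new method described above, and the function name and structure are purely for demonstration.

```python
# Minimal Karatsuba multiplication sketch (illustrative only).
# This is the classical divide-and-conquer algorithm, roughly O(n^1.585)
# digit operations versus O(n^2) for schoolbook multiplication.
# It is NOT the new FFT-based method referred to in the article above.

def karatsuba(x: int, y: int) -> int:
    """Multiply two non-negative integers with Karatsuba's algorithm."""
    if x < 10 or y < 10:                       # base case: a single-digit operand
        return x * y
    n = max(x.bit_length(), y.bit_length())
    half = n // 2
    # Split each operand into high and low halves around 2**half.
    x_hi, x_lo = divmod(x, 1 << half)
    y_hi, y_lo = divmod(y, 1 << half)
    # Three recursive multiplications instead of four.
    a = karatsuba(x_hi, y_hi)
    b = karatsuba(x_lo, y_lo)
    c = karatsuba(x_hi + x_lo, y_hi + y_lo) - a - b
    return (a << (2 * half)) + (c << half) + b

assert karatsuba(123456789, 987654321) == 123456789 * 987654321
```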
For fans of bioethical nightmares, it’s been a real stonker of a month. First, we had the suggestion that we use comatose women’s wombs to house surrogate pregnancies. Now, it appears we might have a snazzy idea for what to do with their brains, too: to turn them into hyper-efficient biological computers.
Lately, you see, techies have been worrying about the natural, physical limits of conventional, silicon-based computing. Recent developments in ‘machine learning’, in particular, have required exponentially greater amounts of energy – and corporations are concerned that further technological progress will soon become environmentally unsustainable. Thankfully, in a paper published this week, a team of American scientists pointed out something rather nifty: that the walnut-shaped, spongy computer in your skull doesn’t appear to be bound by anything like the same limitations – and that it might, therefore, provide us with something of a solution.
The human brain, the paper explains, is slower than machines at performing basic tasks (like mathematical sums), but much, much better at processing complex problems that involve limited, or ambiguous, data. Humans learn, that is, how to make smart decisions quickly, even when we only have small fragments of information to go on, in a way that computers simply can’t. For anything more sophisticated than arithmetic, sponge beats silicon by a mile.
Year 2022
For more than 250 years, mathematicians have wondered if the Euler equations might sometimes fail to describe a fluid’s flow. A new computer-assisted proof marks a major breakthrough in that quest.
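For reference, and assuming the standard incompressible setting, these are the equations in question, written for a velocity field u(x, t) and pressure p(x, t) with constant density; the long-open question is whether smooth initial data can develop a singularity (blow up) in finite time.

```latex
% Incompressible Euler equations for velocity u(x,t) and pressure p(x,t),
% with constant (unit) density:
\begin{aligned}
  \partial_t u + (u \cdot \nabla)\, u + \nabla p &= 0, \\
  \nabla \cdot u &= 0.
\end{aligned}
% The long-standing question: can smooth initial data produce a solution
% whose derivatives become infinite (blow up) in finite time?
```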
Jewelry designs inspired by mathematical objects called strange attractors bring chaos theory to a new audience.
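As a rough, self-contained illustration of what a strange attractor is (using the textbook Lorenz system, not the particular attractors behind these designs), the short Python sketch below traces the Lorenz “butterfly”; the parameter values and step size are the usual textbook choices, not anything taken from the article.

```python
# Minimal sketch: trace the Lorenz attractor, a classic "strange attractor".
# Parameters are the standard textbook values (sigma=10, rho=28, beta=8/3);
# this is an illustrative example, not the specific attractors used in the
# jewelry designs mentioned above.

def lorenz_trajectory(steps=10_000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz equations with a simple forward-Euler step."""
    x, y, z = 1.0, 1.0, 1.0          # arbitrary starting point
    points = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        points.append((x, y, z))
    return points

trajectory = lorenz_trajectory()
print(trajectory[-1])  # a point near the attractor after 10,000 steps
```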
A new study out of the UK finds that there is a mathematics to crowds: people move in predictable patterns such as straight lanes and parabolas.
A Case Western Reserve University doctoral student uses new mathematical methods to show that Devonian Period animals, including Cleveland’s prehistoric sea monster, were shorter and chunkier than previously thought.
“A machine… of intelligent design.”
Black Holes Are Even Weirder Than You Thought!
Is self-consciousness necessary for consciousness? The answer is yes. So there you have it—the answer is yes. This was my response to a question I was asked to address in a recent AEON piece (https://aeon.co/essays/consciousness-is-not-a-thing-but-a-process-of-inference). What follows is based upon the notes for that essay, with a special focus on self-organization, self-evidencing and self-modeling. I will try to substantiate my (polemic) answer from the perspective of a physicist. In brief, the argument goes as follows: if we want to talk about creatures, like ourselves, then we have to identify the characteristic behaviors they must exhibit. This is fairly easy to do by noting that living systems return to a set of attracting states time and time again. Mathematically, this implies the existence of a Lyapunov function that turns out to be model evidence (i.e., self-evidence) in Bayesian statistics or surprise (i.e., self-information) in information theory. This means that all biological processes can be construed as performing some form of inference, from evolution through to conscious processing. If this is the case, at what point do we invoke consciousness? The proposal on offer here is that the mind comes into being when self-evidencing has a temporal thickness or counterfactual depth, which grounds inferences about the consequences of my action. On this view, consciousness is nothing more than inference about my future; namely, the self-evidencing consequences of what I could do.
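For readers who want that step spelled out: in the standard free-energy-principle notation (o for observations, s for hidden states, m for the model, q for the recognition density; these symbols are conventional choices, not taken from the essay), surprise is negative log model evidence, and variational free energy is a tractable upper bound on it, so minimizing free energy is exactly self-evidencing.

```latex
% Surprise (self-information) for observations o under a model m:
%     -\ln p(o \mid m)
% Variational free energy F, for any recognition density q(s) over
% hidden states s, is an upper bound on surprise:
F[q] = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s \mid m)\big]
     = D_{\mathrm{KL}}\big[q(s) \,\|\, p(s \mid o, m)\big] - \ln p(o \mid m)
     \ge -\ln p(o \mid m).
% Minimizing F (self-evidencing) therefore maximizes model evidence
% p(o | m) and minimizes surprise.
```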
There are many phenomena in the natural sciences that are predicated on the notion of “self”; namely, self-information, self-organization, self-assembly, self-evidencing, self-modeling, self-consciousness and self-awareness. To what extent does one entail the others? This essay tries to unpack the relationship among these phenomena from first (variational) principles. Its conclusion can be summarized as follows: living implies the existence of “lived” states that are frequented in a characteristic way. This mandates the optimization of a mathematical function called “surprise” (or self-information) in information theory and “evidence” in statistics. This means that biological processes can be construed as an inference process, from evolution through to conscious processing. So where does consciousness emerge? The proposal offered here is that conscious processing has a temporal thickness or depth, which underwrites inferences about the consequences of action.