
Amidst rapid technological advancements, Tiny AI is emerging as a silent powerhouse. Imagine algorithms compressed to fit microchips yet capable of recognizing faces, translating languages, and predicting market trends. Tiny AI operates discreetly within our devices, orchestrating smart homes and propelling advancements in personalized medicine.

Tiny AI excels in efficiency, adaptability, and impact by utilizing compact neural networks, streamlined algorithms, and edge computing capabilities. It represents a form of artificial intelligence that is lightweight, efficient, and positioned to revolutionize various aspects of our daily lives.
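One concrete technique behind such compact models is post-training quantization: storing a network's weights as 8-bit integers instead of 32-bit floats. The sketch below is a minimal illustration using PyTorch's dynamic quantization; the toy classifier and its layer sizes are placeholder assumptions, not any shipping model.

```python
# Minimal sketch of one "Tiny AI" technique: post-training dynamic
# quantization, which shrinks weights from 32-bit floats to 8-bit
# integers so a model fits on a constrained edge device. The toy
# classifier below is a placeholder, not any production model.
import io
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 10),  # e.g. 10 output classes
)

# Quantize the Linear layers' weights to int8; activations stay float.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_bytes(m: nn.Module) -> int:
    """Serialized parameter size in bytes (a rough footprint proxy)."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes

x = torch.randn(1, 64)
print(quantized(x).shape)                              # torch.Size([1, 10])
print(size_bytes(model), "->", size_bytes(quantized))  # roughly 4x smaller
```

Dynamic quantization trades a small amount of accuracy for roughly a four-fold reduction in weight storage, which is often the binding constraint on microcontroller-class hardware.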

Looking to the future, quantum computing and neuromorphic chips are emerging technologies taking us into unexplored territory. Quantum computers work differently from conventional machines, allowing for faster problem-solving, realistic simulation of molecular interactions, and quicker decryption of codes. Quantum computing is no longer just a sci-fi idea; it is becoming a real possibility.

In the rapidly evolving landscape of artificial intelligence, the quest for hardware that can keep pace with the burgeoning computational demands is relentless. A significant breakthrough in this quest has been achieved through a collaborative effort spearheaded by Purdue University, alongside the University of California San Diego (UCSD) and École Supérieure de Physique et de Chimie Industrielles (ESPCI) in Paris. This collaboration marks a pivotal advancement in the field of neuromorphic computing, a revolutionary approach that seeks to emulate the human brain’s mechanisms within computing architecture.

The Challenges of Current AI Hardware

The rapid advancements in AI have ushered in complex algorithms and models, demanding an unprecedented level of computational power. Yet, as we delve deeper into the realms of AI, a glaring challenge emerges: the inadequacy of current silicon-based computer architectures in keeping pace with the evolving demands of AI technology.

A study in Scientific Reports examines artificial grammar learning, a crucial aspect of language acquisition. Prior experimental studies have shown that artificial grammars can be learnt by human subjects after little exposure, often without explicit knowledge of the underlying rules. We tested four grammars with different complexity levels both in humans and in feedforward and recurrent networks. Our results show that both architectures can “learn” (via error back-propagation) the grammars after the same number of training sequences as humans do, but recurrent networks perform closer to humans than feedforward ones, irrespective of the grammar's complexity level. Moreover, similar to visual processing, in which feedforward and recurrent architectures have been related to unconscious and conscious processes respectively, the difference in performance between architectures over ten regular grammars shows that simpler and more explicit grammars are better learnt by recurrent architectures, supporting the hypothesis that explicit learning is best modeled by recurrent networks, whereas feedforward networks may capture the dynamics involved in implicit learning.
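The study's own models and stimuli are not reproduced above, but the basic setup is easy to sketch: a recurrent network trained by back-propagation to accept or reject strings from a toy regular grammar. Everything below, the grammar (ab)+, the network sizes, and the training budget, is an illustrative assumption rather than the paper's protocol.

```python
# Illustrative sketch (not the study's code) of artificial grammar
# learning: a recurrent network trained by back-propagation to accept
# or reject strings from a toy regular grammar, here (ab)+ over {a, b}.
import random
import torch
import torch.nn as nn

VOCAB = {"a": 0, "b": 1}

def sample(grammatical: bool, max_len: int = 8) -> str:
    """A string matching (ab)+, optionally corrupted by one flipped char."""
    s = "ab" * random.randint(1, max_len // 2)
    if not grammatical:
        i = random.randrange(len(s))
        s = s[:i] + ("a" if s[i] == "b" else "b") + s[i + 1:]
    return s

def encode(s: str) -> torch.Tensor:
    return torch.tensor([VOCAB[c] for c in s])

class GrammarRNN(nn.Module):
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.embed = nn.Embedding(len(VOCAB), 8)
        self.rnn = nn.RNN(8, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # logit: grammatical or not

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(self.embed(tokens).unsqueeze(0))
        return self.head(h[:, -1])  # read out the final hidden state

model = GrammarRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(2000):
    label = random.random() < 0.5
    loss = loss_fn(model(encode(sample(label))).squeeze(),
                   torch.tensor(float(label)))
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():  # accuracy on fresh strings
    tests = [random.random() < 0.5 for _ in range(200)]
    acc = sum((model(encode(sample(g))).item() > 0) == g for g in tests) / 200
print(f"held-out accuracy: {acc:.2f}")
```

Replacing nn.RNN with a feedforward encoder over a fixed-length input would give the kind of feedforward baseline the study compares against.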

This paper introduces a novel theoretical framework for understanding consciousness, proposing a paradigm shift from traditional biological-centric views to a broader, universal perspective grounded in thermodynamics and systems theory. We posit that consciousness is not an exclusive attribute of biological entities but a fundamental feature of all systems exhibiting a particular form of intelligence. This intelligence is defined as the capacity of a system to efficiently utilize energy to reduce internal entropy, thereby fostering increased order and complexity. Supported by a robust mathematical model, the theory suggests that subjective experience, or what is often referred to as qualia, emerges from the intricate interplay of energy, entropy, and information within a system. This redefinition of consciousness and intelligence challenges existing paradigms and extends the potential for understanding and developing Artificial General Intelligence (AGI). The implications of this theory are vast, bridging gaps between cognitive science, artificial intelligence, philosophy, and physics, and providing a new lens through which to view the nature of consciousness itself.
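The abstract does not spell out the mathematical model it refers to, so the following is only a loose illustration of the kind of quantity it gestures at: an efficiency relating the energy a system consumes to the internal entropy it sheds. The notation is our assumption, not the authors' formalism.

```latex
% Illustrative notation only -- not the paper's actual model.
% "Intelligence" read as internal entropy reduction per unit of energy:
\[
  \eta \;=\; -\,\frac{\Delta S_{\mathrm{internal}}}{\Delta E_{\mathrm{consumed}}},
  \qquad
  \eta > 0 \iff \text{the system converts energy throughput into internal order}
\]
```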

Consciousness, traditionally viewed through the lens of biology and neurology, has long been a subject shrouded in mystery and debate. Philosophers, scientists, and thinkers have pondered over what consciousness is, how it arises, and why it appears to be a unique trait of certain biological organisms. The “hard problem” of consciousness, a term coined by philosopher David Chalmers, encapsulates the difficulty in explaining why and how physical processes in the brain give rise to subjective experiences.

Current research in cognitive science, neuroscience, and artificial intelligence offers various theories of consciousness, ranging from neural correlates of consciousness (NCCs) to quantum theories. However, these theories often face limitations in fully explaining the emergence and universality of consciousness.

On Saturday, Chinese scholars unveiled in Beijing a preliminary draft proposal that could shape the nation’s forthcoming artificial intelligence (AI) law.

The draft proposal focuses on development issues in industrial practice across three areas: data, computing power, and algorithms, Zhao Jingwu, an associate professor at BeiHang University Law School, told the Global Times.

Zhao said the proposal also introduces an AI insurance system that would encourage the insurance market’s involvement through policy incentives, exploring insurance products suited to the AI industry. In addition, it proposes enhancing citizens’ digital literacy, aiming to prevent and control the technology’s security risks from the user end.

A team from the University of Geneva (UNIGE) has succeeded in modelling an artificial neural network capable of this cognitive prowess. After learning and performing a series of basic tasks, this AI was able to provide a linguistic description of them to a “sister” AI, which in turn performed them. These promising results, especially for robotics, are published in Nature Neuroscience.

Performing a new task without prior training, on the sole basis of verbal or written instructions, is a unique human ability. What’s more, once we have learned the task, we are able to describe it so that another person can reproduce it. This dual capacity distinguishes us from other species, which need numerous trials, accompanied by positive or negative reinforcement signals, to learn a new task, and cannot communicate it to their conspecifics.

A sub-field of artificial intelligence (AI), natural language processing, seeks to recreate this human faculty with machines that understand and respond to vocal or textual data. This technique is based on artificial neural networks, inspired by our biological neurons and by the way they transmit electrical signals to one another in the brain. However, the neural computations that would make it possible to achieve the cognitive feat described above are still poorly understood.
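The UNIGE architecture itself is detailed in the Nature Neuroscience paper rather than here; as a rough sketch of the general idea, a recurrent network can condition its behaviour on an embedded instruction, so that the same sensory stream yields different actions for different tasks. The task names and layer sizes below are illustrative assumptions.

```python
# Rough sketch (NOT the UNIGE model) of an instruction-conditioned
# recurrent network: behaviour is driven jointly by a sensory stream
# and an embedded instruction.
import torch
import torch.nn as nn

INSTRUCTIONS = ["go_left", "go_right", "match_colour"]  # hypothetical tasks

class InstructedAgent(nn.Module):
    def __init__(self, n_instructions: int, stim_dim: int = 4,
                 hidden: int = 32, n_actions: int = 3):
        super().__init__()
        self.instr_embed = nn.Embedding(n_instructions, hidden)
        self.rnn = nn.GRU(stim_dim, hidden, batch_first=True)
        self.policy = nn.Linear(hidden, n_actions)

    def forward(self, instruction_id: torch.Tensor,
                stimuli: torch.Tensor) -> torch.Tensor:
        # Initialise the recurrent state from the instruction, so the
        # same stimuli produce different behaviour per instruction.
        h0 = self.instr_embed(instruction_id).unsqueeze(0)  # [1, B, H]
        out, _ = self.rnn(stimuli, h0)
        return self.policy(out[:, -1])  # action logits at the last step

agent = InstructedAgent(len(INSTRUCTIONS))
stimuli = torch.randn(1, 5, 4)   # batch of 1, five time steps
instr = torch.tensor([1])        # "go_right"
print(agent(instr, stimuli))     # logits over 3 candidate actions
```

In the study's setting, a second, “sister” network would receive a description produced by the first and reproduce the task; that communication channel is left out here for brevity.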

1:05 — OpenAI board saga
18:31 — Ilya Sutskever
24:40 — Elon Musk lawsuit
34:32 — Sora
44:23 — GPT-4
55:32 — Memory & privacy
1:02:36 — Q*
1:06:12 — GPT-5
1:09:27 — $7 trillion of compute
1:17:35 — Google and Gemini…


Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and many other state-of-the-art AI technologies. Please support this podcast by checking out our sponsors:
- Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off.
- Shopify: https://shopify.com/lex to get $1 per month trial.
- BetterHelp: https://betterhelp.com/lex to get 10% off.
- ExpressVPN: https://expressvpn.com/lexpod to get 3 months free.


“Our research in the past decade has made analog memristor a viable technology,” said Dr. Qiangfei Xia. “It is time to move such a great technology into the semiconductor industry to benefit the broad AI hardware community.”


Digital computers have become the norm in our everyday lives, but they are reaching the limits of their computing power. Can analog computing step in and outperform them? This is the question a recent study published in Science hopes to address. A team of researchers from the University of Southern California, TetraMem Inc., and the University of Massachusetts Amherst (UMass Amherst) has spent the last decade developing memristors, devices capable of overcoming the computing limits of digital hardware. The study could help researchers develop more efficient methods of storing data, without the bottlenecks that come from holding too much of it.

“In this work, we propose and demonstrate a new circuit architecture and programming protocol that can efficiently represent high-precision numbers using a weighted sum of multiple, relatively low-precision analog devices, such as memristors, with a greatly reduced overhead in circuitry, energy and latency compared with existing quantization approaches,” said Dr. Qiangfei Xia, who is a professor of Electrical & Computer Engineering at UMass Amherst and a co-author on the study.
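The paper's circuit and programming protocol are not reproduced in this excerpt, but the arithmetic behind “a weighted sum of multiple, relatively low-precision analog devices” can be sketched numerically: split a high-precision value into base-8 digits, store one digit per noisy 3-bit device, and reconstruct it with a weighted sum. The device count, bit width, and noise level below are assumptions for illustration.

```python
# Back-of-the-envelope illustration (not the Science paper's circuit or
# protocol): a high-precision number represented as a weighted sum of
# several low-precision, noisy analog "devices".
import numpy as np

BITS_PER_DEVICE = 3            # assumed per-device precision (8 levels)
LEVELS = 2 ** BITS_PER_DEVICE
N_DEVICES = 4                  # ensemble precision ~ 12 bits

def program(value: float) -> np.ndarray:
    """Split a value in [0, 1) into base-8 digits, one per device."""
    digits, rest = [], value
    for _ in range(N_DEVICES):
        rest *= LEVELS
        d = int(rest)
        digits.append(d)
        rest -= d
    return np.array(digits, dtype=float)

def read_out(digits: np.ndarray, noise_sd: float = 0.05) -> float:
    """Weighted sum of noisy device levels."""
    noisy = digits + np.random.normal(0.0, noise_sd, size=digits.shape)
    weights = 1.0 / LEVELS ** np.arange(1, N_DEVICES + 1)  # 1/8, 1/64, ...
    return float(noisy @ weights)

x = 0.637
devices = program(x)      # e.g. [5. 0. 6. 1.]
print(read_out(devices))  # close to 0.637 despite per-device noise
```

Each device only has to resolve eight levels, yet the ensemble encodes roughly twelve bits, and the weighting suppresses the noise contributed by the less significant devices.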