
Recently, in-memory computing has attracted intense interest because of the urgent need for high computing efficiency in artificial intelligence applications. In contrast to the von Neumann architecture, in-memory computing avoids data movement between the CPU/GPU and memory, which can greatly reduce power consumption. The memristor is an ideal device for this: it can not only store multi-bit information but also perform computation using Ohm’s law. To make the best use of memristors in neuromorphic systems, a memristor-friendly architecture and software-hardware co-design methods are essential, and the key problem is how to exploit the memristor’s analog behavior. We have designed a generic memristor-crossbar-based architecture for convolutional neural networks and perceptrons that takes full account of the analog characteristics of memristors. Furthermore, we have proposed an online learning algorithm for memristor-based neuromorphic systems that overcomes the variation of memristor cells and endows the system with the ability to perform reinforcement learning based on the memristor’s analog behavior.
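To illustrate the analog computation the abstract refers to, the Python sketch below (with made-up crossbar sizes and conductance values, not the authors’ actual design) shows how Ohm’s law per cell and Kirchhoff’s current law per column turn a crossbar of memristor conductances into a one-step matrix-vector multiply, and how device-to-device variation appears as noise that an online learning rule would have to tolerate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x3 crossbar: one memristor conductance (in siemens) per crosspoint.
conductance = rng.uniform(1e-6, 1e-4, size=(4, 3))
voltages = rng.uniform(0.0, 0.2, size=4)   # read voltages applied on the rows

# Ohm's law per cell (I = G * V) plus Kirchhoff's current law per column
# yield an analog matrix-vector product in a single read step.
column_currents = voltages @ conductance

# Device-to-device variation can be modelled as multiplicative noise on G,
# the kind of non-ideality an online learning rule has to absorb.
noisy_conductance = conductance * (1 + 0.05 * rng.standard_normal(conductance.shape))
noisy_currents = voltages @ noisy_conductance

print(column_currents)
print(noisy_currents)
```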

Full abstract and speaker details can be found here: https://nus.edu/3cSFD3e.

Register for free for all our upcoming webinars at https://nus.edu/3bcN9pS.

Join Randal Koene, a computational neuroscientist, as he dives into the intricate world of whole brain emulation and mind uploading, while touching on the ethical pillars of AI. In this episode, Koene discusses the importance of equal access to AI, data ownership, and the ethical impact of AI development. He explains the potential future of AGI, how current social and political systems might influence it, and touches on the scientific and philosophical aspects of creating a substrate-independent mind. Koene also elaborates on the differences between human cognition and artificial neural networks, the challenge of translating brain structure to function, and efforts to accelerate neuroscience research through structured challenges.

00:00 Introduction to Randal Koene and Whole Brain Emulation.
00:39 Ethical Considerations in AI Development.
02:20 Challenges of Equal Access and Data Ownership.
03:40 Impact of AGI on Society and Development.
05:58 Understanding Mind Uploading.
06:39 Randal’s Journey into Computational Neuroscience.
08:14 Scientific and Philosophical Aspects of Substrate Independent Minds.
13:07 Brain Function and Memory Processes.
25:34 Whole Brain Emulation: Current Techniques and Challenges.
32:12 The Future of Neuroscience and AI Collaboration.

SingularityNET is a decentralized marketplace for artificial intelligence. We aim to create the world’s global brain with a full-stack AI solution powered by a decentralized protocol.

We gathered the leading minds in machine learning and blockchain to democratize access to AI technology. Now anyone can take advantage of a global network of AI algorithms, services, and agents.

The most recent email you sent was likely encrypted using a tried-and-true method that relies on the idea that even the fastest computer would be unable to efficiently break a gigantic number into factors.

Quantum computers, on the other hand, promise to rapidly crack complex cryptographic systems that a classical computer might never be able to unravel. This promise is based on a quantum factoring algorithm proposed in 1994 by Peter Shor, who is now a professor at MIT.

But while researchers have taken great strides in the last 30 years, scientists have yet to build a quantum computer powerful enough to run Shor’s algorithm.
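For a feel of the asymmetry Shor’s algorithm attacks, here is a toy Python sketch (tiny primes, not real cryptography): multiplying two primes to build a modulus is cheap, while recovering them by classical trial division scales badly as the primes grow.

```python
import math
import time

# Toy illustration only: small, well-known primes stand in for ~1024-bit ones.
p, q = 104_729, 1_299_709
n = p * q                     # building the public modulus is cheap

def trial_division(n):
    """Classical brute-force factoring; cost grows roughly with sqrt(n)."""
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d, n // d
    return None

start = time.perf_counter()
factors = trial_division(n)
print(factors, f"{time.perf_counter() - start:.3f} s")
# Adding digits to p and q barely slows the multiplication above, but it makes
# this search astronomically larger -- the asymmetry Shor's algorithm removes.
```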

The team has released the width-pruned version of the model on Hugging Face under the Nvidia Open Model License, which allows for commercial use. This makes it accessible to a wider range of users and developers who can benefit from its efficiency and performance.

“Pruning and classical knowledge distillation is a highly cost-effective method to progressively obtain LLMs [large language models] of smaller size, achieving superior accuracy compared to training from scratch across all domains,” the researchers wrote. “It serves as a more effective and data-efficient approach compared to either synthetic-data-style fine-tuning or pretraining from scratch.”
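As a rough illustration of the classical knowledge distillation the researchers mention, the PyTorch-style sketch below blends a softened teacher-student KL term with the usual hard-label loss. The temperature, mixing weight, and tensor shapes are placeholders for illustration, not Nvidia’s actual training recipe.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Blend a soft teacher->student KL term with the usual hard-label cross-entropy."""
    # Soften both distributions; scale KL by T^2 to keep gradient magnitudes comparable.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Example with random logits: a batch of 4 examples over 10 classes.
student = torch.randn(4, 10, requires_grad=True)
teacher = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student, teacher, labels)
loss.backward()
```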

This work is a reminder of the value and importance of the open-source community to the progress of AI. Pruning and distillation are part of a wider body of research that is enabling companies to optimize and customize LLMs at a fraction of the normal cost. Other notable works in the field include Sakana AI’s evolutionary model-merging algorithm, which makes it possible to assemble parts of different models to combine their strengths without the need for expensive training resources.

Inside the cult of TESCREALism and the dangerous fantasies of Silicon Valley’s self-appointed demigods, for Document’s Spring/Summer 2024 issue.

As legend has it, Steve Jobs once asked Larry Kenyon, an engineer tasked with developing the Mac computer, to reduce its boot time by 10 seconds. Kenyon said that was impossible. “What if it would save a person’s life?” Jobs asked. Then, he went to a whiteboard and laid out an equation: If 5 million users spent an additional 10 seconds waiting for the computer to start, the total hours wasted would be equivalent to 100 human lifetimes every year. Kenyon shaved 28 seconds off the boot time in a matter of weeks.

Often cited as an example of the late CEO’s “reality distortion field,” this anecdote illustrates the combination of charisma, hyperbole, and marketing with which Jobs convinced his disciples to believe almost anything—elevating himself to divine status and creating “a cult of personality for capitalists,” as Mark Cohen put it in an article about his death for the Australian Broadcasting Corporation. In helping to push the myth of the genius tech founder into the cultural mainstream, Jobs laid the groundwork for future generations of Silicon Valley investors and entrepreneurs who have, amid the global decline of organized religion, become our secular messiahs. They preach from the mounts of Google and Meta, selling the public on digital technology’s saving grace, its righteous ability to reshape the world.

A recent study by UC San Diego researchers brings fresh insight into the ever-evolving capabilities of AI. The authors looked at the degree to which several prominent AI models (GPT-4, GPT-3.5, and the classic ELIZA) could convincingly mimic human conversation, an application of the so-called Turing test for identifying when a computer program has reached human-level intelligence.

The results were telling: In a five-minute text-based conversation, GPT-4 was mistakenly identified as human 54 percent of the time, contrasted with ELIZA’s 22 percent. These findings not only highlight the strides AI has made but also underscore the nuanced challenges of distinguishing human intelligence from algorithmic mimicry.

The important twist in the UC San Diego study is what it identifies as constituting human-level intelligence. It isn’t mastery of advanced calculus or some other challenging technical field. Instead, what stands out about the most advanced models is their social-emotional persuasiveness. For an AI to pass (that is, to fool a human), it has to effectively imitate the subtleties of human conversation. When judging whether their interlocutor was an AI or a human, participants tended to focus on whether responses were overly formal, used excessively correct grammar or repetitive sentence structures, or exhibited an unnatural tone. They flagged stilted or inconsistent personalities or senses of humor as non-human.
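To make the headline numbers concrete, the toy snippet below tallies the metric the study reports: the share of conversations in which judges labelled an AI witness “human.” The verdicts here are invented for illustration, not the study’s data.

```python
# Invented judge verdicts, one entry per five-minute conversation.
verdicts = {
    "GPT-4": ["human", "ai", "human", "ai", "human"],
    "ELIZA": ["ai", "ai", "human", "ai", "ai"],
}

for model, calls in verdicts.items():
    rate = calls.count("human") / len(calls)   # fraction judged human
    print(f"{model}: judged human in {rate:.0%} of conversations")
```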