
For these reasons, and more, it seems unlikely to me that LLM technology alone will provide a route to “true AI.” LLMs are rather strange, disembodied entities. They don’t exist in our world in any real sense and aren’t aware of it. If you leave an LLM mid-conversation and go on holiday for a week, it won’t wonder where you are. It isn’t aware of the passing of time, or indeed aware of anything at all. It’s a computer program that is literally not doing anything until you type a prompt, and then simply computing a response to that prompt, at which point it goes back to not doing anything. Its encyclopedic knowledge of the world, such as it is, is frozen at the point it was trained; it knows nothing of events after that.

And LLMs have never experienced anything. They are just programs that have ingested unimaginable amounts of text. LLMs might do a great job of describing the sensation of being drunk, but this is only because they have read a lot of descriptions of being drunk. They have not experienced it themselves, and they cannot. They have no purpose other than to produce the best response to the prompt you give them.

This doesn’t mean they aren’t impressive (they are) or that they can’t be useful (they are). And I truly believe we are at a watershed moment in technology. But let’s not confuse these genuine achievements with “true AI.” LLMs might be one ingredient in the recipe for true AI, but they are surely not the whole recipe — and I suspect we don’t yet know what some of the other ingredients are.


With the rise of artificial intelligence, many experts have raised concerns about the emissions from warehouses full of computers needed to power AI systems. IBM’s new “brain-like” chip prototype could make artificial intelligence more energy efficient; according to the company, its efficiency comes from components that work in a way similar to the connections in human brains.

Thanos Vasilopoulos, a scientist at IBM’s research lab, told BBC News that, compared to traditional computers, “the human brain is able to achieve remarkable performance while consuming little power.” This superior energy efficiency would mean larger and more complex workloads could be executed in low-power or battery-constrained environments like cars, mobile phones, and cameras. “Additionally, cloud providers will be able to use these chips to reduce energy costs and their carbon footprint,” he added.

Autistic individuals will soon be able to practice and improve their social skills through realistic conversations with avatars powered by generative AI technology.

Autism is a developmental disorder that affects how people interact and communicate, often hindering diagnosed individuals in social situations. In many cases, autistic individuals struggle to initiate conversations, respond to the initiations and non-verbal cues of others, maintain eye contact, and take on another person’s perspective.

The web-based Skill Coach application developed by Israeli startup Arrows lets users engage in conversations in a variety of scenarios – from chatting with a stranger in a café to making small talk with a work colleague.

Angel provides driver and customer support. Naomi teaches high school students. But they are not humans. They’re HumAIns, artificial intelligence (AI) agents.

Designed to handle unscripted communication tasks autonomously and proactively, Angel and Naomi use human-like thought processes to lead conversations instead of merely answering questions. And they get smarter with time and experience.

This generative AI wizardry comes from Jerusalem-based Inpris Innovative Products, founded in 2011 by human-machine interaction expert Nissan Yaron and his father, software architect Ben-Etzion Yaron.

WASHINGTON — Artificial intelligence startup Wallaroo Labs won a $1.5 million contract from the U.S. Space Force to continue the development of machine learning models for edge computers in orbit.

The New York-based company, known as Wallaroo.ai, is partnered with New Mexico State University for the Small Business Technology Transfer Phase 2 contract, announced Aug. 15. The team last year won a Phase 1 award.

Wallaroo.ai created a software platform that helps businesses assess the performance of AI applications when deployed on edge computers.

Elon Musk delves into the groundbreaking potential of Neuralink, a revolutionary venture aimed at interfacing with the human brain to tackle an array of brain-related disorders. Musk envisions a future where Neuralink’s advancements lead to the resolution of conditions like autism, schizophrenia, memory loss, and even spinal cord injuries.

Elon Musk discusses the transformative power of Neuralink, highlighting its role in restoring motor control after spinal cord injuries, revitalizing brain function post-stroke, and combating genetically or trauma-induced brain diseases. Musk’s compelling insights reveal how interfacing with neurons at an intricate level can pave the way for repairing and enhancing brain circuits using cutting-edge technology.

Discover the three-layer framework Musk envisions: the primary layer akin to the limbic system, the more intelligent cortex as the secondary layer, and the potential tertiary layer where digital superintelligence might exist. Musk’s thought-provoking perspective raises optimism about the coexistence of a digital superintelligence with the human brain, fostering a harmonious relationship between these layers of consciousness.

Elon Musk emphasises the urgency of Neuralink’s mission, stressing the importance of developing a human brain interface before the advent of digital superintelligence and the elusive singularity. By doing so, he believes we can mitigate existential risks and ensure a stable future for humanity and consciousness as we navigate the uncharted territories of technological evolution.

We’re just at the beginning of an AI revolution that will redefine how we live and work. In particular, deep neural networks (DNNs) have revolutionized the field of AI and are increasingly gaining prominence with the advent of foundation models and generative AI. But running these models on traditional digital computing architectures limits their achievable performance and energy efficiency. There has been progress in developing hardware specifically for AI inference, but many of these architectures physically split the memory and processing units. This means the AI models are typically stored in a discrete memory location, and computational tasks require constantly shuffling data between the memory and processing units. This process slows down computation and limits the maximum achievable energy efficiency.
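The data-shuffling bottleneck described above can be made concrete with a back-of-the-envelope sketch. The numbers below are illustrative assumptions, not measurements of any particular chip: for one dense layer, every inference must stream the full weight matrix from memory, while the useful arithmetic is just one multiply-accumulate per weight, so the ratio of compute to data movement is fixed and low.

```python
# Illustrative sketch: why shuttling weights between separate memory and
# processing units limits a conventional AI accelerator.
# All numbers are assumptions for illustration, not vendor figures.

def layer_traffic(n_in, n_out, bytes_per_weight=2):
    """Bytes fetched and multiply-accumulate ops for one dense layer,
    assuming 16-bit weights streamed from off-chip memory each inference."""
    weights = n_in * n_out
    bytes_moved = weights * bytes_per_weight  # every weight crosses the bus
    macs = weights                            # one MAC per weight
    return bytes_moved, macs

bytes_moved, macs = layer_traffic(4096, 4096)

# Arithmetic intensity: useful ops per byte moved. A low value means the
# chip spends its time (and energy) moving data rather than computing --
# exactly the cost that keeping weights in place in memory aims to remove.
intensity = macs / bytes_moved
print(bytes_moved, macs, intensity)  # → 33554432 16777216 0.5
```

The point of the sketch is that the intensity (0.5 ops/byte here) does not improve with a faster processor; only keeping the weights stationary, as in-memory computing does, changes the ratio.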

The chip showcases critical building blocks of a scalable mixed-signal architecture.

The researchers also found a spot in the brain’s temporal lobe that reacted when volunteers heard the 16th notes of the song’s guitar groove. They proposed that this particular area might be involved in our perception of rhythm.

The findings offer a first step toward creating more expressive devices to assist people who can’t speak. Over the past few years, scientists have made major breakthroughs in extracting words from the electrical signals produced by the brains of people with muscle paralysis when they attempt to speak.

But a significant amount of the information conveyed through speech comes from what linguists call “prosodic” elements, like tone — “the things that make us a lively speaker and not a robot,” Dr. Schalk said.

Venture capital firm a16z has rebuilt a research paper’s simulation as “AI Town” and released the code. AI Town uses a language model to simulate a Sims-like virtual world in which all characters can flexibly pursue their motives and make decisions based on prompts.

In April, a team of researchers from Google and Stanford published a research paper in which OpenAI’s GPT-3.5 simulated AI agents in a small digital town, Smallville, based solely on prompts.

Each character has an occupation, personality, and relationships with other characters, which are specified in an initial description. With further prompts, the AI agents begin to observe, plan, and make decisions.
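The observe-plan-act pattern described above can be sketched in a few lines. This is a minimal illustration of the idea, not the actual AI Town or Smallville code; the `ask_llm` function is a hypothetical stand-in for a real language-model call, stubbed here with a canned reply.

```python
# Minimal sketch of a generative agent in the style described above.
# `ask_llm` is a hypothetical stand-in for a real LLM API call.

def ask_llm(prompt):
    # Stub: a real implementation would send the prompt to a language model.
    return "walk to the cafe and greet Maria"

class Agent:
    def __init__(self, name, occupation, personality):
        # Initial description: occupation, personality, and (via memory)
        # relationships and observations, as in the paper's setup.
        self.name = name
        self.occupation = occupation
        self.personality = personality
        self.memory = []  # stream of observations the agent draws on

    def observe(self, event):
        self.memory.append(event)

    def plan(self):
        # Build a prompt from the agent's description and recent memories,
        # then let the language model decide what to do next.
        prompt = (f"{self.name} is a {self.occupation}, {self.personality}. "
                  f"Recent observations: {self.memory[-3:]}. "
                  f"What does {self.name} do next?")
        return ask_llm(prompt)

agent = Agent("Naomi", "teacher", "patient and curious")
agent.observe("Maria waved from across the street")
print(agent.plan())  # → walk to the cafe and greet Maria
```

In the real systems, the loop runs continuously for every character, and the memories are summarized and ranked before being fed back into the prompt; the sketch only shows the skeleton of that cycle.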