Google’s AI Still Giving Idiotic Answers Nearly a Year After Launch
Despite having been in public beta mode for nearly a year, Google’s search AI is still spitting out confusing and often incorrect answers.
When the Washington Post assessed Google’s Search Generative Experience, or SGE for short, it found that the AI-powered update to the tech giant’s classic search bar is still giving incorrect or misleading answers nearly a year after its introduction last May.
While SGE no longer tells users that they can melt eggs or that slavery was good, it does still hallucinate, which is AI terminology for confidently making stuff up. A search for a made-up Chinese restaurant called “Danny’s Dan Dan Noodles” in San Francisco, for example, spat out references to “long lines and crazy wait times” and even gave phony citations about 4,000-person lines and a two-year waitlist.
The Neuron vs the Synapse: Which One Is in the Driving Seat?
A new theoretical framework for plastic neural networks predicts dynamical regimes where synapses rather than neurons primarily drive the network’s behavior, leading to an alternative candidate mechanism for working memory in the brain.
The brain is an immense network of neurons, whose dynamics underlie its complex information processing capabilities. A neuronal network is often classed as a complex system, as it is composed of many constituents, neurons, that interact in a nonlinear fashion (Fig. 1). Yet, there is a striking difference between a neural network and the more traditional complex systems in physics, such as spin glasses: the strength of the interactions between neurons can change over time. This so-called synaptic plasticity is believed to play a pivotal role in learning. Now David Clark and Larry Abbott of Columbia University have derived a formalism that puts neurons and the connections that transmit their signals (synapses) on equal footing [1]. By studying the interacting dynamics of the two objects, the researchers take a step toward answering the question: Are neurons or synapses in control?
Clark and Abbott are the latest in a long line of researchers to use theoretical tools to study neuronal networks with and without plasticity [2, 3]. Past studies—without plasticity—have yielded important insights into the general principles governing the dynamics of these systems and their functions, such as classification capabilities [4], memory capacities [5, 6], and network trainability [7, 8]. These works studied how temporally fixed synaptic connectivity in a network shapes the collective activity of neurons. Adding plasticity to the system complicates the problem because then the activity of neurons can dynamically shape the synaptic connectivity [9, 10].
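To illustrate what putting neurons and synapses "on equal footing" can look like, here is a generic textbook-style sketch of a rate network with Hebbian plasticity — an assumed example for intuition, not the specific equations of Clark and Abbott's formalism:

```latex
\begin{aligned}
\tau_x \,\frac{dx_i}{dt} &= -x_i + \sum_{j} J_{ij}\,\phi(x_j), \\
\tau_J \,\frac{dJ_{ij}}{dt} &= -J_{ij} + \eta\,\phi(x_i)\,\phi(x_j),
\end{aligned}
```

Here x_i is the activity of neuron i, J_ij is the synaptic weight from neuron j to neuron i, phi is a nonlinear transfer function, and eta sets the plasticity rate. The first equation is the classic fixed-connectivity picture; the second makes the weights themselves dynamical variables, and depending on the timescale tau_J and the gain eta, it can be the synapses rather than the neurons that effectively drive the network's behavior.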
Clark and Abbott’s framework appears in their paper “Theory of Coupled Neuronal-Synaptic Dynamics” [1].
When It Comes to Making Generative AI Food Smart, Small Language Models Are Doing the Heavy Lifting
Since ChatGPT debuted in the fall of 2022, much of the interest in generative AI has centered around large language models. Large language models, or LLMs, are the massive, compute-intensive models that power the chatbots and image generators seemingly everyone is using and talking about nowadays.
While there’s no doubt that LLMs produce impressive and human-like responses to most prompts, the reality is that most general-purpose LLMs suffer when it comes to deep domain knowledge in areas like health, nutrition, or cooking. Not that this has stopped folks from using them anyway, with occasionally bad or even laughable results when we ask for a personalized nutrition plan or a recipe.
LLMs’ shortcomings in producing credible and trusted results in those specific domains have led to growing interest in what the AI community calls small language models (SLMs). What are SLMs? Essentially, they are smaller and simpler language models that have far fewer parameters, require less computational power, and are often specialized in their focus.
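To make the "less computational power" point concrete, here is a rough back-of-envelope sketch comparing the memory needed just to hold model weights. The parameter counts are illustrative assumptions, not figures for any particular model:

```python
# Back-of-envelope comparison of a general-purpose LLM vs. a
# domain-specialized SLM. Parameter counts below are illustrative
# assumptions, not measurements of any specific model.

def inference_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory (GB) needed just to hold the weights.

    bytes_per_param defaults to 2, i.e. 16-bit (fp16/bf16) weights.
    """
    return n_params * bytes_per_param / 1e9

llm_params = 70e9  # assumed size of a large general-purpose model
slm_params = 1e9   # assumed size of a small, food/nutrition-focused model

print(f"LLM weights: ~{inference_memory_gb(llm_params):.0f} GB")
print(f"SLM weights: ~{inference_memory_gb(slm_params):.0f} GB")
```

Even this crude estimate, which ignores activations and the KV cache, shows why a specialized SLM can run on a single consumer GPU while a general-purpose LLM cannot.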
CEO of Google DeepMind Says AI Industry Full of Hype and Grifters
Talk about the call coming from inside the house!
In an interview with The Financial Times, Google DeepMind CEO Demis Hassabis likened the frothiness shrouding the AI gold rush to that of the crypto industry’s high-dollar funding race, saying that the many billions being funneled into AI companies and projects bring a “bunch of hype and maybe some grifting and some other things that you see in other hyped-up areas, crypto or whatever.”
“Some of that has now spilled over into AI,” the CEO added, “which I think is a bit unfortunate.”
Boston Dynamics’ Spot Becomes the World’s First Hero Robodog
A Boston Dynamics Spot robotic dog has officially become the first of its kind to be hailed as a police hero. The robodog in question, a Massachusetts State Police Spot unit, was shot in the line of duty.
According to the police department, the robodog’s actions may have saved human lives. Called “Roscoe,” the robot dog was assisting in a police response to a person barricaded inside their home.