
Despite having been in public beta mode for nearly a year, Google’s search AI is still spitting out confusing and often incorrect answers.

As the Washington Post found when assessing Google’s Search Generative Experience, or SGE for short, the AI-powered update to the tech giant’s classic search bar is still giving incorrect or misleading answers nearly a year after it was introduced last May.

While SGE no longer tells users that they can melt eggs or that slavery was good, it does still hallucinate, which is AI terminology for confidently making stuff up. A search for a made-up Chinese restaurant called “Danny’s Dan Dan Noodles” in San Francisco, for example, spat out references to “long lines and crazy wait times” and even gave phony citations about 4,000-person lines and a two-year waitlist.

A new theoretical framework for plastic neural networks predicts dynamical regimes where synapses rather than neurons primarily drive the network’s behavior, leading to an alternative candidate mechanism for working memory in the brain.

The brain is an immense network of neurons, whose dynamics underlie its complex information processing capabilities. A neuronal network is often classed as a complex system, as it is composed of many constituents, neurons, that interact in a nonlinear fashion (Fig. 1). Yet, there is a striking difference between a neural network and the more traditional complex systems in physics, such as spin glasses: the strength of the interactions between neurons can change over time. This so-called synaptic plasticity is believed to play a pivotal role in learning. Now David Clark and Larry Abbott of Columbia University have derived a formalism that puts neurons and the connections that transmit their signals (synapses) on equal footing [1]. By studying the interacting dynamics of the two objects, the researchers take a step toward answering the question: Are neurons or synapses in control?

Clark and Abbott are the latest in a long line of researchers to use theoretical tools to study neuronal networks with and without plasticity [2, 3]. Past studies—without plasticity—have yielded important insights into the general principles governing the dynamics of these systems and their functions, such as classification capabilities [4], memory capacities [5, 6], and network trainability [7, 8]. These works studied how temporally fixed synaptic connectivity in a network shapes the collective activity of neurons. Adding plasticity to the system complicates the problem because then the activity of neurons can dynamically shape the synaptic connectivity [9, 10].
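The coupled neuron–synapse picture can be made concrete with a toy simulation. The following is a minimal illustrative sketch, not Clark and Abbott's actual formalism: a rate network in which the neuron activities and the synaptic weight matrix both evolve in time, with the weights shaped by a Hebbian-style term driven by correlated activity. All constants, time scales, and the specific plasticity rule here are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                       # number of neurons
dt = 0.01                    # integration step
tau_x, tau_J = 1.0, 10.0     # neuron and (slower) synapse time constants

x = rng.standard_normal(N)                     # neuron activities
J = rng.standard_normal((N, N)) / np.sqrt(N)   # synaptic weights

for _ in range(1000):
    r = np.tanh(x)                       # firing rates
    # Neuron dynamics: leak plus recurrent input through the current weights
    x += dt / tau_x * (-x + J @ r)
    # Synaptic dynamics: weights decay toward a Hebbian (activity-correlation) target,
    # so the neurons' activity dynamically reshapes the connectivity
    J += dt / tau_J * (-J + np.outer(r, r) / N)

print(x.shape, J.shape)  # (50,) (50, 50)
```

Because the synapses here evolve more slowly than the neurons (tau_J > tau_x), the weight matrix can retain a trace of past activity patterns, which gestures at the kind of synapse-driven memory mechanism the new framework makes precise.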

Since ChatGPT debuted in the fall of 2022, much of the interest in generative AI has centered on large language models. Large language models, or LLMs, are the massive, compute-intensive models powering the chatbots and image generators that seemingly everyone is using and talking about nowadays.

While there’s no doubt that LLMs produce impressive, human-like responses to most prompts, the reality is that most general-purpose LLMs struggle with deep domain knowledge in areas like, say, health, nutrition, or cooking. Not that this has stopped folks from using them, with occasionally bad or even laughable results when we ask for a personalized nutrition plan or a recipe.

LLMs’ shortcomings in producing credible and trusted results in those specific domains have led to growing interest in what the AI community is calling small language models (SLMs). What are SLMs? Essentially, they are smaller, simpler language models with far fewer parameters, which require less computational power to train and run, and they are often specialized in their focus.

Talk about the call coming from inside the house!

In an interview with The Financial Times, Google DeepMind CEO Demis Hassabis likened the frothiness surrounding the AI gold rush to that of the crypto industry’s high-dollar funding race, saying that the many billions being funneled into AI companies and projects bring a “bunch of hype and maybe some grifting and some other things that you see in other hyped-up areas, crypto or whatever.”

“Some of that has now spilled over into AI,” the CEO added, “which I think is a bit unfortunate.”

A Boston Dynamics SPOT robotic dog has officially become the first of its kind to be recognized as a police hero. The robodog in question, a Massachusetts State Police SPOT unit, was shot in the line of duty.

According to the police department, the robodog’s actions may have saved human lives. Called “Roscoe,” the robot dog was involved in a police action to deal with a person barricaded in their home.

Tesla seems to be making some serious headway with its Full Self-Driving (FSD) suite with the release of V12.3. But while FSD is currently the company’s flagship advanced driver-assist system, basic Autopilot still plays a huge role in Tesla’s electric cars. With this in mind, Tesla seems to be doubling down on educating drivers about the proper use of basic Autopilot, as well as the system’s limitations.

As seen on the company’s Tesla Tutorials channel on YouTube, Tesla has released a thorough tutorial focused on basic Autopilot’s features and proper use. The video is over four minutes long, and throughout its duration, Tesla highlights that basic Autopilot’s features do not make vehicles autonomous. The company also emphasizes that basic Autopilot is designed to work with a fully attentive driver.

The video fully covers the capabilities and limitations of basic Autopilot’s two main features, Traffic-Aware Cruise Control (TACC) and Autosteer (Beta): how to engage each feature, how to set its specific parameters, and how it is disengaged. Overall, it is quite encouraging to see Tesla publishing a tutorial that’s purely focused on basic Autopilot.