When It Comes to Making Generative AI Food Smart, Small Language Models Are Doing the Heavy Lifting

Since ChatGPT debuted in the fall of 2022, much of the interest in generative AI has centered on large language models. Large language models, or LLMs, are the massive, compute-intensive models powering the chatbots and image generators that seemingly everyone is using and talking about nowadays.

While there’s no doubt that LLMs produce impressive, human-like responses to most prompts, the reality is that most general-purpose LLMs suffer when it comes to deep domain knowledge in areas like health, nutrition, or the culinary arts. Not that this has stopped folks from using them, sometimes with bad or even laughable results when we ask for a personalized nutrition plan or a recipe.

LLMs’ shortcomings in creating credible and trusted results around those specific domains have led to growing interest in what the AI community is calling small language models (SLMs). What are SLMs? Essentially, they are smaller and simpler language models that require less computational power and fewer lines of code, and often, they are specialized in their focus.
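To make the compute gap concrete, here is a back-of-the-envelope sketch of the memory needed just to hold a model's weights. The parameter counts (70B for a general-purpose LLM, 3B for a small specialized model) are illustrative assumptions, not figures from the article:

```python
def model_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Rough memory footprint of a model's weights.

    Assumes 16-bit (fp16) weights, i.e. 2 bytes per parameter; ignores
    activations, KV cache, and other runtime overhead.
    """
    return num_params * bytes_per_param / 1e9

# Hypothetical sizes: a 70B-parameter LLM vs. a 3B-parameter SLM.
llm_gb = model_memory_gb(70e9)
slm_gb = model_memory_gb(3e9)

print(f"70B model weights: ~{llm_gb:.0f} GB")  # needs multiple data-center GPUs
print(f"3B model weights:  ~{slm_gb:.0f} GB")  # fits on a single consumer GPU
```

Even this rough arithmetic shows why SLMs are attractive for focused, domain-specific tasks: a model an order of magnitude smaller can run on hardware an order of magnitude cheaper.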

CEO of Google DeepMind Says AI Industry Full of Hype and Grifters

Talk about the call coming from inside the house!

In an interview with The Financial Times, Google DeepMind CEO Demis Hassabis likened the frothiness surrounding the AI gold rush to that of the crypto industry’s high-dollar funding race, saying that the many billions being funneled into AI companies and projects bring a “bunch of hype and maybe some grifting and some other things that you see in other hyped-up areas, crypto or whatever.”

“Some of that has now spilled over into AI,” the CEO added, “which I think is a bit unfortunate.”

Boston Dynamics’ Spot becomes the world’s first hero robodog

A Boston Dynamics Spot robotic dog has officially become the first of its kind to be hailed as a police dog hero. The robodog in question, a Massachusetts State Police Spot unit, was shot in the line of duty.

According to the police department, the robodog’s actions may have saved human lives. Called “Roscoe,” the robot dog was involved in a police action to deal with a person barricaded in their home.

Tesla’s new basic Autopilot video tutorial is clear and focused on safety

Tesla seems to be making some serious headway with its Full Self-Driving (FSD) suite with the release of V12.3. But while FSD is currently the company’s flagship advanced driver-assist system, basic Autopilot still plays a huge role in Tesla’s electric cars. With this in mind, Tesla seems to be doubling down on educating drivers about the proper use of basic Autopilot, as well as the system’s limitations.

As can be seen on the company’s Tesla Tutorials channel on YouTube, Tesla has released a thorough tutorial focused on basic Autopilot’s features and proper use. The video is over four minutes long, and throughout it, Tesla highlights that basic Autopilot’s features do not make vehicles autonomous. The company also emphasizes that basic Autopilot is designed to work with a fully attentive driver.

The video fully covers the capabilities and limitations of basic Autopilot’s two main features, Traffic-Aware Cruise Control (TACC) and Autosteer (Beta). The tutorial explains how to engage both features, how to set their specific parameters, and how they are disengaged. Overall, it is quite encouraging to see Tesla publishing a tutorial that’s purely focused on basic Autopilot.

Pushing material boundaries for better electronics

A recently tenured faculty member in MIT’s departments of Mechanical Engineering and Materials Science and Engineering, Kim has made numerous discoveries about the nanostructure of materials and is funneling them directly into the advancement of next-generation electronics.

His research aims to push electronics past the inherent limits of silicon — a material that has reliably powered transistors and most other electronic elements but is reaching a performance limit as more computing power is packed into ever smaller devices.

Today, Kim and his students at MIT are exploring materials, devices, and systems that could take over where silicon leaves off. Kim is applying his insights to design next-generation devices, including low-power, high-performance transistors and memory devices, artificial intelligence chips, ultra-high-definition micro-LED displays, and flexible electronic “skin.” Ultimately, he envisions that such beyond-silicon devices could be built into supercomputers small enough to fit in your pocket.