The AI behavior models controlling how robots interact with the physical world haven’t been advancing at the crazy pace that GPT-style language models have – but new multiverse ‘world simulators’ from Nvidia and Google could change that rapidly.
There’s a chicken-and-egg issue slowing things down for AI robotics: large language models (LLMs) have enjoyed the benefit of massive troves of training data, since the Internet already holds an extraordinary wealth of text, image, video and audio data.
But there’s far less data for large behavior models (LBMs) to train on. Robots and autonomous vehicles are expensive and annoyingly physical, so data capturing 3D representations of real-world physical situations is taking a lot longer to collect and incorporate into AI models.