
Language models are rarely exposed to fruitful mistakes during training, which hinders their ability to anticipate consequences beyond the next token. LMs must improve their capacity for complex decision-making, planning, and reasoning. Transformer-based models struggle with planning because errors compound and lookahead is difficult. While some efforts have integrated symbolic search algorithms to address these issues, they merely supplement language models at inference time. Enabling language models to search during training, by contrast, could facilitate self-improvement and foster more adaptable strategies for challenges such as error compounding and lookahead.

Researchers from Stanford University, MIT, and Harvey Mudd have devised a method to teach language models how to search and backtrack by representing the search process as a serialized string, a Stream of Search (SoS). They proposed a unified language for search, demonstrated through the game of Countdown. Pretraining a transformer-based language model on streams of search increased accuracy by 25%, while further finetuning with policy improvement methods led to solving 36% of previously unsolved problems. This shows that language models can learn to solve problems via search, self-improve, and discover new strategies autonomously.
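For intuition, here is a minimal sketch of the idea; it assumes a simplified Countdown setup and an ad-hoc serialization format rather than the paper's actual unified search language, and the operator table and `dfs` helper are introduced purely for illustration.

```python
from itertools import permutations

# Illustrative operators for a simplified Countdown: combine two numbers,
# keeping only positive, exact-integer results.
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b if a > b else None,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a // b if b != 0 and a % b == 0 else None,
}

def dfs(numbers, target, trace):
    """Depth-first search over Countdown states, logging every step --
    including failed branches and backtracking -- into `trace`."""
    trace.append(f"state {sorted(numbers)}")
    if target in numbers:
        trace.append(f"goal {target}")
        return True
    if len(numbers) == 1:
        trace.append("backtrack")
        return False
    for a, b in permutations(numbers, 2):  # both operand orders, so 10-4 and 4-10 are tried
        rest = list(numbers)
        rest.remove(a)
        rest.remove(b)
        for sym, op in OPS.items():
            val = op(a, b)
            if val is None or val <= 0:
                continue
            trace.append(f"try {a}{sym}{b}={val}")
            if dfs(rest + [val], target, trace):
                return True
    trace.append("backtrack")
    return False

# The entire search -- dead ends included -- becomes one flat string,
# a "stream of search" that a language model can be trained to generate.
trace = []
dfs([4, 9, 10, 13], 24, trace)
stream = " ; ".join(trace)
print(stream[:120], "...")
```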

Recent studies integrate language models into search and planning systems, using them to generate and assess candidate actions or states, while symbolic search algorithms such as BFS or DFS provide the exploration strategy. In these systems, however, the LM is used only at inference time, so its underlying reasoning ability does not improve. Alternatively, in-context demonstrations can illustrate search procedures in language, letting the LM carry out a tree search accordingly, but such methods are limited to the procedures demonstrated. Process supervision instead trains an external verifier model to provide detailed feedback for LM training; it outperforms outcome supervision but requires extensive labeled data.
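A schematic of that division of labor might look like the sketch below, where the language model only proposes and scores candidate states while a symbolic breadth-first loop with beam-style pruning controls exploration; `propose_actions`, `score_state`, `apply_action`, and `is_goal` are hypothetical stand-ins for prompted LM calls and task-specific logic, not any particular system's API.

```python
from collections import deque

def lm_guided_bfs(start, goal, propose_actions, score_state, apply_action, is_goal,
                  beam_width=5, max_depth=10):
    """Breadth-first search in which a language model generates and evaluates
    candidate states, but the exploration strategy itself stays symbolic."""
    frontier = deque([(start, [])])          # (state, actions taken so far)
    for _ in range(max_depth):
        candidates = []
        while frontier:
            state, path = frontier.popleft()
            if is_goal(state, goal):
                return path
            for action in propose_actions(state):      # LM as generator
                nxt = apply_action(state, action)
                score = score_state(nxt, goal)          # LM as evaluator
                candidates.append((score, nxt, path + [action]))
        # Keep only the most promising states; the LM's weights are never
        # updated here -- it is used purely at inference time.
        candidates.sort(key=lambda c: c[0], reverse=True)
        frontier = deque((s, p) for _, s, p in candidates[:beam_width])
    return None
```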

The use of Artificial Intelligence (AI) in education has increased in recent years, and the technology's rapid development is having a major impact on the sector. In this edition of Edtech Mondays we will discuss the applications and benefits of AI in education, the frameworks needed to implement it, and the policy support required, and we will also hear from end users about the impact AI has had or could have.

Tesla executive Rohan Patel clarified some facts about Supercharger access over NACS for non-Tesla vehicles from Rivian and Ford.

Patel—Tesla’s Vice President of Public Policy and Business Development—recently replied to a question from Teslavangelist, who questioned the number of Supercharger stalls non-Tesla owners actually had access to with NACS connectors.

Tesla recently opened the Supercharger Network to Ford and Rivian electric vehicles (EVs) through its NACS connector. Both automakers claim that NACS connectors give Ford and Rivian owners access to more than 15,000 Tesla Supercharger locations. Teslavangelist pointed out that non-Tesla EV owners only have access to V3 and V4 Superchargers and doubted that this adds up to 15,000 Supercharger stalls.


Nvidia presents Driving Everywhere with Large Language Model Policy Adaptation.

LLaDA is a simple yet powerful tool that enables human drivers and autonomous vehicles alike to drive everywhere by adapting their tasks and motion plans to the traffic rules of new locations.
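As a purely conceptual illustration of LLM-based policy adaptation (not LLaDA's actual implementation), adapting a plan to local traffic rules can be framed as a single rewrite request to a language model; the `adapt_plan` helper, its prompt, and the `llm` callable below are all hypothetical.

```python
def adapt_plan(llm, plan, local_rules):
    """Illustrative only: ask a language model to rewrite a motion plan or
    driving instruction so it complies with the traffic rules of the current
    locale. `llm` is any callable that maps a prompt string to generated text."""
    prompt = (
        "You are a driving assistant. Rewrite the plan below so that it "
        "complies with the local traffic rules.\n"
        f"Local rules: {local_rules}\n"
        f"Current plan: {plan}\n"
        "Adapted plan:"
    )
    return llm(prompt)

# Example usage with any text-generation backend:
# adapted = adapt_plan(my_llm,
#                      plan="Turn right on red after a full stop.",
#                      local_rules="Right turns on red are prohibited here.")
```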

Paper page: https://huggingface.co/papers/2312.14150



On Saturday, Chinese scholars unveiled a draft proposal in Beijing that could shape the nation’s forthcoming artificial intelligence (AI) law.

The draft focuses on practical development issues for industry in three areas: data, computing power, and algorithms, Zhao Jingwu, an associate professor at BeiHang University Law School, told the Global Times.

Zhao said the proposal also introduces an AI insurance system that uses policy incentives to encourage the insurance market to get involved and to explore insurance products suited to the AI industry. In addition, it proposes improving citizens’ digital literacy, aiming to prevent and control the technology’s security risks on the user end.

India will lower import taxes on certain electric vehicles for companies that commit to investing at least $500 million and setting up a local manufacturing facility within three years, a policy shift that could bolster Tesla’s plans to enter the South Asian market.

Companies must invest a minimum of $500 million in the country and will have three years to establish local manufacturing for EVs with at least 25% of components sourced domestically, according to a government press release on Friday. Firms meeting these requirements will be allowed to import 8,000 EVs a year at a reduced import duty of 15% on cars costing $35,000 and above. India currently levies a tax of 70% to 100% on imported cars depending on their value.
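For a rough sense of the concession, assuming the duty is applied directly to the car’s import value (actual customs valuation is more involved), a hypothetical $40,000 vehicle would attract the following duties under the current and the new regimes:

```python
def import_duty(price_usd, rate):
    """Duty owed on a car imported at the given price and duty rate."""
    return price_usd * rate

price = 40_000  # hypothetical car above the $35,000 threshold

current_low  = import_duty(price, 0.70)   # 70% band  -> $28,000
current_high = import_duty(price, 1.00)   # 100% band -> $40,000
new_scheme   = import_duty(price, 0.15)   # 15% concessional rate -> $6,000

print(f"Current duty: ${current_low:,.0f}-${current_high:,.0f}")
print(f"Concessional duty: ${new_scheme:,.0f}")
```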

The policy change is likely to pave the way for Tesla to enter India, as the Elon Musk-led company has been in talks with the government for years to lower import duties on its electric cars. The move also aligns with India’s goal of boosting EV adoption and reducing its dependence on oil imports; the country has set a target of 30% electric car sales by 2030.