
Language processing programs can assign many kinds of information to a single word, like the human brain

From search engines to voice assistants, computers are getting better at understanding what we mean. That’s thanks to language-processing programs that make sense of a staggering number of words, without ever being told explicitly what those words mean. Such programs infer meaning instead through statistics—and a new study reveals that this computational approach can assign many kinds of information to a single word, just like the human brain.

The study, published April 14 in the journal Nature Human Behaviour, was co-led by Gabriel Grand, a graduate student in electrical engineering and computer science who is affiliated with MIT's Computer Science and Artificial Intelligence Laboratory, and Idan Blank Ph.D. '16, an assistant professor at the University of California at Los Angeles. The work was supervised by McGovern Institute for Brain Research investigator Ev Fedorenko, a cognitive neuroscientist who studies how the human brain uses and understands language, and Francisco Pereira at the National Institute of Mental Health. Fedorenko says the rich knowledge her team was able to find within computational language models demonstrates just how much can be learned about the world through language alone.

The research team began its analysis of statistics-based language processing models in 2015, when the approach was new. Such models derive meaning by analyzing how often pairs of words co-occur in texts and using those relationships to assess the similarities of words' meanings. For example, such a program might conclude that "bread" and "apple" are more similar to one another than they are to "notebook," because "bread" and "apple" are often found in proximity to words like "eat" or "snack," whereas "notebook" is not.
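To make that idea concrete, here is a minimal Python sketch of the co-occurrence approach. The toy corpus and window size are illustrative assumptions, not details from the study (the actual models were trained on far larger text collections); words are compared by the cosine similarity of their co-occurrence count vectors.

```python
from collections import Counter, defaultdict
import math

# Toy corpus, purely illustrative; real models use large text collections.
corpus = (
    "i eat bread for a snack . i eat an apple for a snack . "
    "i write notes in a notebook . she reads her notebook ."
).split()

WINDOW = 2  # words within +/- 2 positions count as co-occurring

# counts[word][context_word] = number of times they co-occur
counts = defaultdict(Counter)
for i, word in enumerate(corpus):
    lo, hi = max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)
    for j in range(lo, hi):
        if j != i:
            counts[word][corpus[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u if w in v)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# "bread" and "apple" share contexts like "eat" and "for", so they come
# out more similar to each other than either does to "notebook".
print(cosine(counts["bread"], counts["apple"]))     # relatively high
print(cosine(counts["bread"], counts["notebook"]))  # relatively low
```

Production systems refine this with weighting schemes such as pointwise mutual information and with dimensionality reduction, but the underlying signal is the same: raw co-occurrence statistics.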

Meta AI is sharing OPT-175B, the first 175-billion-parameter language model to be made available to the broader AI research community

Large language models — natural language processing (NLP) systems with more than 100 billion parameters — have transformed NLP and AI research over the last few years. Trained on a massive and varied volume of text, they show surprising new capabilities to generate creative text, solve basic math problems, answer reading comprehension questions, and more. While in some cases the public can interact with these models through paid APIs, full research access is still limited to only a few highly resourced labs. This restricted access has limited researchers’ ability to understand how and why these large language models work, hindering progress on efforts to improve their robustness and mitigate known issues such as bias and toxicity.

In line with Meta AI's commitment to open science, we are sharing Open Pretrained Transformer (OPT-175B), a language model with 175 billion parameters trained on publicly available data sets, to allow for more community engagement in understanding this foundational new technology. For the first time for a language technology system of this size, the release includes both the pretrained models and the code needed to train and use them. To maintain integrity and prevent misuse, we are releasing our model under a noncommercial license to focus on research use cases. Access to the model will be granted to academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world.
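As a rough illustration of what working with the release looks like, here is a minimal generation sketch. It assumes the smaller-scale OPT baselines are available through the Hugging Face transformers library under names like facebook/opt-125m; the full 175-billion-parameter weights themselves are gated behind the research-access process described above.

```python
# Minimal sketch: greedy text generation with a smaller OPT baseline.
# Assumes the Hugging Face transformers library is installed and that
# "facebook/opt-125m" is one of the released smaller checkpoints.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tokenizer("Open science means", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```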

We believe the entire AI community — academic researchers, civil society, policymakers, and industry — must work together to develop clear guidelines around responsible AI in general and responsible large language models in particular, given their centrality in many downstream language applications. A much broader segment of the AI community needs access to these models in order to conduct reproducible research and collectively drive the field forward. With the release of OPT-175B and smaller-scale baselines, we hope to increase the diversity of voices defining the ethical considerations of such technologies.

Doing More with Spot

Over the last couple of years, we’ve continued to make improvements to Spot to better enable our customers. Today we’re adding to the list! https://bit.ly/3y68Ow1


Finally, Spot’s charger is now smarter and faster, bringing Spot’s newest battery models to full capacity in an hour or less. Users can refer to the OLED display for real-time information on battery charge and can continue to charge the robot directly or hot-swap batteries for continuous operation.

Expanded Payload Ecosystem

The robot itself is just one piece of the puzzle. The full Spot solution includes the community, customization options, and collaboration ecosystem that helps deliver the most value from the robot. Through Boston Dynamics and our partners, customers can outfit Spot with a variety of payloads, including additional cameras, sensors, laser scanners, and more. These payloads, paired with specialized software, enable Spot to collect and process the data that gives industrial teams valuable insights into what’s happening in their facilities. This ecosystem is constantly evolving, and today we are excited to announce two new pieces of hardware that will enable next-level computation, radio communications, and 5G connectivity.

Startups apply artificial intelligence to supply chain disruptions

LONDON, May 3 (Reuters) — Over the last two years a series of unexpected events has scrambled global supply chains. Coronavirus, war in Ukraine, Brexit and a container ship wedged in the Suez Canal have combined to delay deliveries of everything from bicycles to pet food.

In response, a growing group of startups and established logistics firms has created a multi-billion-dollar industry applying the latest technology to help businesses minimize the disruption.

Interos Inc, Fero Labs, KlearNow Corp and others are using artificial intelligence and other cutting-edge tools so manufacturers and their customers can react more swiftly to supplier snarl-ups, monitor raw material availability and get through the bureaucratic thicket of cross-border trade.

Fast-acting enzyme breaks down plastics in as little as 24 hours

The idea of deploying enzymes to break down plastic waste is gaining momentum through a string of breakthroughs demonstrating how they can do so with increasing efficiency, and even reduce the material to simple molecules. A new study marks yet another step forward, with scientists leveraging machine learning to engineer an enzyme that degrades some forms of plastic in just 24 hours, with a stability that makes it well-suited to large-scale adoption.

Scientists have been exploring the potential of enzymes to aid in plastics recycling for more than a decade, but the last six years or so have seen some significant advances. In 2016, researchers in Japan unearthed a bacterium that used enzymes to break down PET plastics in a matter of weeks. One of these enzymes, dubbed PETase, was subsequently engineered to improve its performance, and in 2020 we saw scientists develop an even more powerful version that digested PET plastics at six times the speed.

A team at the University of Texas set out to address some of the shortcomings of these enzymes. According to the scientists, the application of the technology has been held back by the enzymes' inability to function well at low temperatures and across different pH ranges, their lack of effectiveness in directly tackling untreated plastic waste, and their slow reaction rates.