
Meta’s AI translation work could provide a killer app for AR.


Social media conglomerate Meta has created a single AI model capable of translating across 200 different languages, including many not supported by current commercial tools. The company is open-sourcing the project in the hopes that others will build on its work.

The AI model is part of an ambitious R&D project by Meta to create a so-called “universal speech translator,” which the company sees as important for growth across its many platforms — from Facebook and Instagram, to developing domains like VR and AR. Machine translation not only allows Meta to better understand its users (and so improve the advertising systems that generate 97 percent of its revenue) but could also be the foundation of a killer app for future projects like its augmented reality glasses.

Meta, Facebook’s parent company, has announced a new artificial intelligence (AI)-driven language translation model that it says can translate between 200 languages in real time. In a blog post published earlier today, Meta said this is the first AI translation model to cover a large number of lesser-known languages from around the world, including low-resource languages and dialects from Asia and Africa.

The AI model can also carry out these translations directly, without first translating the source language into English and then into the intended target language. Meta said this not only speeds up translation, but is also a breakthrough of sorts, since many of the 200 languages the model understands had little to no public data available for AI training.
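This direct, no-pivot translation can be tried with the open-sourced NLLB-200 checkpoints. The sketch below is only an illustration, assuming the publicly released Hugging Face checkpoint facebook/nllb-200-distilled-600M, the transformers library, and FLORES-200 language codes; the Hindi-to-Swahili pair is an arbitrary example, not something singled out by Meta.

```python
# Minimal sketch: translate Hindi directly into Swahili with an NLLB-200 checkpoint,
# with no intermediate English step. Model name and language codes are assumptions
# based on the public Hugging Face release of NLLB-200.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="hin_Deva")  # Hindi (Devanagari)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("नमस्ते, आप कैसे हैं?", return_tensors="pt")

# Forcing the decoder to begin with the Swahili language token selects the target language.
output_ids = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("swh_Latn"),
    max_length=64,
)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```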

The initiative is part of the company’s No Language Left Behind (NLLB) project, announced in February this year. The new AI model, called NLLB-200, achieves scores up to 44% higher than previous models on BLEU (Bilingual Evaluation Understudy), a standard measure of translation accuracy and quality. For Indian languages and dialects, NLLB-200 is 70% better than existing AI models.
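BLEU, the metric behind that 44% figure, scores a machine translation by its n-gram overlap with human reference translations. The snippet below is a generic illustration using the sacrebleu package; it does not reproduce Meta’s actual NLLB-200 evaluation setup.

```python
# Toy BLEU computation with sacrebleu: higher scores (0-100) mean the system output
# shares more n-grams with the reference translations.
import sacrebleu

hypotheses = ["the cat sat on the mat"]           # system translations
references = [["the cat is sitting on the mat"]]  # one aligned reference stream
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(round(bleu.score, 1))
```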

Circa 2018


Debugging code is drudgery. But SapFix, a new AI hybrid tool created by Facebook engineers, can significantly reduce the amount of time engineers spend on debugging, while also speeding up the process of rolling out new software. SapFix can automatically generate fixes for specific bugs, and then propose them to engineers for approval and deployment to production.

SapFix has been used to accelerate the process of shipping robust, stable code updates to millions of devices using the Facebook Android app — the first such use of AI-powered testing and debugging tools in production at this scale. We intend to share SapFix with the engineering community, as it is the next step in the evolution of automating debugging, with the potential to boost the production and stability of new code for a wide range of companies and research organizations.

SapFix is designed to operate as an independent tool, able to run either with or without Sapienz, Facebook’s intelligent automated software testing tool, which was announced at F8 and has already been deployed to production. In its current, proof-of-concept state, SapFix is focused on fixing bugs found by Sapienz before they reach production. The process starts with Sapienz, along with Facebook’s Infer static analysis tool, helping localize the point in the code to patch. Once Sapienz and Infer pinpoint a specific portion of code associated with a crash, they pass that information to SapFix, which automatically picks from a few strategies to generate a patch.
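Facebook has not published SapFix’s internals here, but the loop the post describes (localize the crash, generate candidate patches from a handful of strategies, validate them, then hand the survivors to an engineer for approval) can be sketched roughly as below. Every name in the sketch is a hypothetical placeholder, not Facebook code.

```python
# Illustrative-only sketch of an automated program-repair loop in the spirit of the
# SapFix description above. None of this is Facebook's implementation; the strategy
# and test-suite callables are hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class CandidatePatch:
    strategy_name: str
    patched_source: str


# A "strategy" takes the source and the crash line reported by Sapienz/Infer and
# either returns a candidate patch (template fix, mutation, revert) or gives up.
Strategy = Callable[[str, int], Optional[CandidatePatch]]


def propose_fixes(source: str, crash_line: int,
                  strategies: List[Strategy],
                  passes_tests: Callable[[str], bool]) -> List[CandidatePatch]:
    """Generate candidate patches at the crash site and keep only those that pass tests.

    Surviving candidates would still be reviewed and approved by an engineer
    before deployment, as the post describes.
    """
    candidates = [p for s in strategies if (p := s(source, crash_line)) is not None]
    return [p for p in candidates if passes_tests(p.patched_source)]
```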

Chinese researchers have reportedly developed artificial intelligence (AI) that can read the minds of Chinese Communist Party (CCP) officials.

A video report detailed the software’s features and attributed it to the Hefei Comprehensive National Science Center, a relatively new institute focused on health and environment, energy research, information management and artificial intelligence.

The technology essentially tests one’s level of loyalty to the CCP. According to the center, it would “further solidify their [members’] confidence and determination to be grateful to the party, listen to the party and follow the party.”

Graphcore and Korea’s Electronics and Telecommunications Research Institute (ETRI) have entered a multi-year partnership to develop new software approaches for high-efficiency AI compute.

Running from 2022 through 2025 and funded by the Korean government, the partnership will combine the world-leading capabilities of ETRI—Korea’s largest public research institute by R&D expenditure and license income—with Graphcore’s proven leadership in developing and commercialising efficient, high-performance compute systems for machine intelligence.

“We just open-sourced an AI model we built that can translate across 200 different languages — many of which aren’t supported by current translation systems,” he said. “We call this project No Language Left Behind, and the AI modeling techniques we used from NLLB are helping us make high quality translations on Facebook and Instagram for languages spoken by billions of people around the world.”

Meta invests heavily in AI research, with hubs of scientists across the globe building realistic avatars for use in virtual worlds and tools to reduce hate speech across its platforms, among many other weird and wonderful things. This investment allows the company to stay at the cutting edge of innovation by working with top AI researchers, while also maintaining a link with the wider research community by open-sourcing projects such as No Language Left Behind.

The major challenge in creating a translation model that will work across rarer languages is that the researchers have a much smaller pool of data — in this case examples of sentences — to train the model versus, say, English. In many cases, they had to find people who spoke those languages to help them provide the data, and then check that the translations were correct.

A multi-institutional collaboration, which includes the U.S. Department of Energy’s (DOE) Argonne National Laboratory, has created a material that can be used to build computer chips capable of rewiring themselves the way the brain does as it learns. It achieves this by using so-called “neuromorphic” circuitry and computer architecture to replicate brain functions. Purdue University professor Shriram Ramanathan led the team.

“Human brains can actually change as a result of learning new things,” said Subramanian Sankaranarayanan, a paper co-author with a joint appointment at Argonne and the University of Illinois Chicago. “We have now created a device for machines to reconfigure their circuits in a brain-like way.”

With this capability, artificial intelligence-based computers could tackle difficult jobs, such as analyzing complicated medical images, more quickly and accurately while using far less energy. A more futuristic example would be autonomous cars and space robots that rewire their circuits based on experience.

When it comes to our state of mind and emotions, our faces can be quite telling. Facial expression is an essential aspect of nonverbal communication in humans. Even if we cannot explain how we do it, we can usually see in another person’s face how they are feeling. In many situations, reading facial expressions is particularly important. For example, a teacher might do it to check if their students are engaged or bored, and a nurse may do it to check if a patient’s condition has improved or worsened.

Thanks to advances in technology, computers can do a pretty good job when it comes to recognizing faces. Recognizing facial expressions, however, is a whole different story. Many researchers working in the field of artificial intelligence (AI) have tried to tackle this problem using various modeling and classification techniques, including the popular convolutional neural networks (CNNs). However, facial expression recognition is complex and calls for intricate neural networks, which require a lot of training and are computationally expensive.

In an effort to address these issues, a research team led by Dr. Jia Tian from Jilin Engineering Normal University in China has recently developed a new CNN model for facial expression recognition. As described in an article published in the Journal of Electronic Imaging, the team focused on striking a good balance between the training speed, memory usage, and recognition accuracy of the model.
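The article does not reproduce Dr. Tian’s architecture, so the sketch below is just a generic, compact CNN of the kind used for facial expression recognition; the 48×48 grayscale input and seven expression classes are assumptions borrowed from the common FER-2013 benchmark setup, and the small layer sizes illustrate the training-speed, memory, and accuracy trade-off the team was balancing.

```python
# Hedged sketch of a compact facial-expression CNN (NOT the published model):
# 48x48 grayscale input, 7 expression classes, kept deliberately small so that
# training is fast and memory use is low, at some cost in accuracy.
import torch
import torch.nn as nn


class SmallExpressionCNN(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48x48 -> 24x24
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24x24 -> 12x12
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 12x12 -> 6x6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 6 * 6, 128), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


logits = SmallExpressionCNN()(torch.randn(1, 1, 48, 48))
print(logits.shape)  # torch.Size([1, 7])
```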

On Tuesday, July 5, space physics and human studies dominated the science agenda aboard the International Space Station. The Expedition 67 crew also reconfigured a US airlock and put a new 3D printer through its paces.

The lack of gravity in space affects a wide range of physical processes, revealing new phenomena that researchers are studying to improve life for humans on and off the Earth. One such project uses artificial intelligence to adapt complicated glass manufacturing processes to microgravity, with the goal of benefitting numerous Earth- and space-based industries. On Tuesday afternoon, NASA

Established in 1958, the National Aeronautics and Space Administration (NASA) is an independent agency of the United States Federal Government that succeeded the National Advisory Committee for Aeronautics (NACA). It is responsible for the civilian space program, as well as aeronautics and aerospace research. Its vision is “To discover and expand knowledge for the benefit of humanity.” Its core values are “safety, integrity, teamwork, excellence, and inclusion.”