
Tech major Google is reportedly slowing down its hiring processes for the rest of 2022. According to a memo from CEO Sundar Pichai to employees, obtained by The Verge, Google will still support its “most important opportunities” and focus on hiring for engineering, technical and other critical roles.

Until now, Google has remained relatively immune to economic uncertainty; its sister brand YouTube even thrived in Q4 2020, the first year of the Covid-19 pandemic, when its ad revenue reportedly hit $6.9 billion, up 46% year-on-year. In the memo, Pichai also highlighted that the company hired approximately 10,000 employees in the second quarter of this year and has a “number of commitments for Q3”, adding that “Google will pause the hiring process for the rest of the year”.

“For the balance of 2022 and 2023, we’ll focus our hiring on engineering, technical and other critical roles, and make sure the great talent we do hire is aligned with our long-term priorities,” he reportedly wrote in the memo.

What seems like a sci-fi movie could become reality if Japan’s plans pan out: humans may travel between planets on a train in the near future. Yes, you read that right. Japan has laid out plans to send humans to Mars and the Moon, according to The Weather Channel India.

Japan plans to build a glass habitat structure that would replicate Earth’s gravity, atmosphere and topography to make residents feel at home.

Researchers from Japan’s Kyoto University, in collaboration with Kajima Construction, are working on this plan that could shake up space travel, The Weather Channel reported. The researchers announced the project at a press conference last week, the EurAsian Times reported.

“We know now that in the early years of the twentieth century this world was being watched closely by intelligences greater than man’s…across an immense ethereal gulf, minds that are to our minds as ours are to the beasts in the jungle, intellects vast, cool and unsympathetic, regarded this earth with envious eyes and slowly and surely drew their plans against us.” So began actor Orson Welles’ chilling Mercury Theatre radio performance on October 30, 1938, which led terrified listeners to believe that Earth was under attack by hostile Martians.

Welles’ chilling performance was a dramatization of the H.G. Wells science-fiction classic, “The War of the Worlds,” and was part of a weekly series of dramatic broadcasts created in collaboration with the Mercury Theatre on the Air for CBS, according to a transcript of the program.

Using reinforcement learning (RL) to train robots directly in real-world environments has been considered impractical due to the huge amount of trial and error typically required before the agent finally gets it right. The use of deep RL in simulated environments has thus become the go-to alternative, but this approach is far from ideal: it requires designing simulated tasks and collecting expert demonstrations. Moreover, simulations can fail to capture the complexities of real-world environments, are prone to inaccuracies, and the resulting robot behaviours will not adapt to real-world environmental changes.

The Dreamer algorithm proposed by Hafner et al. at ICLR 2020 introduced an RL agent capable of solving long-horizon tasks purely via latent imagination. Although Dreamer has demonstrated its potential for learning from small amounts of interaction in the compact state space of a learned world model, learning accurate real-world models remains challenging, and it was unknown whether Dreamer could enable faster learning on physical robots.

In the new paper DayDreamer: World Models for Physical Robot Learning, Hafner and a research team from the University of California, Berkeley leverage recent advances in the Dreamer world model to enable online RL for robot training without simulators or demonstrations. The novel approach achieves promising results and establishes a strong baseline for efficient real-world robot training.
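The “latent imagination” idea behind Dreamer can be illustrated with a toy sketch: instead of stepping a real robot, the agent rolls a learned dynamics model forward in a compact latent state space and scores candidate actions against predicted rewards. Everything below (the dimensions, the random linear “dynamics” and “reward head”, and the stand-in policy) is a hypothetical placeholder for Dreamer’s learned neural networks, not the paper’s actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM, ACTION_DIM = 4, 2  # hypothetical sizes for illustration

# In Dreamer these are learned networks; here they are random linear maps.
dynamics = rng.normal(scale=0.1, size=(LATENT_DIM + ACTION_DIM, LATENT_DIM))
reward_head = rng.normal(size=LATENT_DIM)

def imagine_rollout(z0, policy, horizon=15):
    """Roll the world model forward in latent space -- no real-robot steps."""
    z, total_reward = z0, 0.0
    for _ in range(horizon):
        a = policy(z)                                    # actor picks an action
        z = np.tanh(np.concatenate([z, a]) @ dynamics)   # predicted next latent
        total_reward += float(z @ reward_head)           # predicted reward
    return total_reward

# Stand-in for the learned actor.
policy = lambda z: np.tanh(z[:ACTION_DIM])
imagined_return = imagine_rollout(np.ones(LATENT_DIM), policy)
```

Because the entire rollout happens inside the model, the actor-critic can be trained on many such imagined trajectories per real-world interaction, which is what makes the approach sample-efficient enough for physical robots.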

Research in the field of machine learning and AI, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron, aims to collect some of the most relevant recent discoveries and papers — particularly in, but not limited to, artificial intelligence — and explain why they matter.

In this batch of recent research, Meta open-sourced a language system that it claims is the first capable of translating 200 different languages with “state-of-the-art” results. Not to be outdone, Google detailed a machine learning model, Minerva, that can solve quantitative reasoning problems, including mathematical and scientific questions. And Microsoft released a language model, GODEL, for generating “realistic” conversations, along the lines of Google’s widely publicized LaMDA. And then we have some new text-to-image generators with a twist.

Meta’s new model, NLLB-200, is part of the company’s No Language Left Behind initiative to develop machine-powered translation capabilities for most of the world’s languages. Trained to understand languages such as Kamba (spoken by the Kamba, a Bantu ethnic group) and Lao (the official language of Laos), as well as 55 African languages not supported well, or at all, by previous translation systems, NLLB-200 will be used to translate languages on the Facebook News Feed and Instagram, in addition to the Wikimedia Foundation’s Content Translation Tool, Meta recently announced.