
The pretraining of BERT-type large language models, which can scale up to billions of parameters, is crucial for obtaining state-of-the-art performance on many natural language processing (NLP) tasks. This pretraining process, however, is expensive and has become a bottleneck hindering the industrial application of such models.

In the new paper Token Dropping for Efficient BERT Pretraining, a research team from Google, New York University, and the University of Maryland proposes a simple but effective “token dropping” technique that significantly reduces the pretraining cost of transformer models such as BERT, without degrading performance on downstream fine-tuning tasks.
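The summary does not spell out the paper's exact recipe, but the general shape of token dropping can be sketched: run the full sequence through the first few layers, route only the higher-importance tokens through the middle layers (where most of the compute lives), and merge everything back before the final layers so the masked-language-model head still sees every position. The PyTorch-style sketch below is an illustration under assumed details; the importance scores, the layer split, and the function name are hypothetical rather than taken from the paper.

```python
import torch

def forward_with_token_dropping(layers, hidden, importance,
                                keep_ratio=0.5, full_layers=2):
    """Illustrative token-dropping forward pass (not the paper's exact method).

    layers:     list of transformer encoder layers, each mapping [b, s, d] -> [b, s, d]
    hidden:     [batch, seq_len, dim] token representations
    importance: [batch, seq_len] per-token score (e.g. a running MLM loss);
                low-scoring tokens skip the middle layers
    """
    batch, seq_len, dim = hidden.shape
    n_keep = max(1, int(seq_len * keep_ratio))

    # The first few layers see the full sequence.
    for layer in layers[:full_layers]:
        hidden = layer(hidden)

    # Select the most "important" tokens, preserving their original order.
    keep_idx = importance.topk(n_keep, dim=1).indices.sort(dim=1).values
    gather = keep_idx.unsqueeze(-1).expand(-1, -1, dim)
    kept = hidden.gather(1, gather)            # [batch, n_keep, dim]

    # The middle layers process only the kept tokens (the cost savings).
    for layer in layers[full_layers:-full_layers]:
        kept = layer(kept)

    # Merge back: dropped tokens keep their early-layer representation,
    # so the last layers and the MLM head see every position again.
    hidden = hidden.scatter(1, gather, kept)
    for layer in layers[-full_layers:]:
        hidden = layer(hidden)
    return hidden
```

Since self-attention cost grows with sequence length, shrinking the sequence for the bulk of the layers is where the pretraining savings would come from.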

The team summarizes their main contributions as:

To effectively interact with humans in crowded social settings, such as malls, hospitals, and other public spaces, robots should be able to actively participate in both group and one-to-one interactions. Most existing robots, however, have been found to perform much better when communicating with individual users than with groups of conversing humans.

Hooman Hedayati and Daniel Szafir, two researchers at the University of North Carolina at Chapel Hill, have recently developed a new data-driven technique that could improve how robots communicate with groups of humans. This method, introduced in a paper presented at the 2022 ACM/IEEE International Conference on Human-Robot Interaction (HRI ‘22), allows robots to predict the positions of humans in conversational groups, so that they do not mistakenly ignore a person when their sensors are fully or partly obstructed.

“Being in a conversational group is easy for humans but challenging for robots,” Hooman Hedayati, one of the researchers who carried out the study, told TechXplore. “Imagine that you are talking with a group of friends, and whenever one of your friends blinks, she stops talking and asks if you are still there. This potentially annoying scenario is roughly what can happen when a robot is in conversational groups.”
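Hedayati and Szafir's predictor is trained on data from real conversational groups; as a far simpler geometric stand-in, one can exploit the tendency of such groups to arrange themselves on a rough circle (an "F-formation"). The numpy sketch below, in which the function name and gap threshold are invented for illustration, fits a circle to the members the robot can currently see and places a guess inside any suspiciously large angular gap where an occluded person might be standing.

```python
import numpy as np

def guess_occluded_positions(detected_xy, min_gap_deg=120.0):
    """Toy stand-in for a data-driven model: fit a circle to detected group
    members and guess a position in any large angular gap.

    detected_xy: (n, 2) array of detected people's floor positions.
    Returns a list of (x, y) guesses for possibly occluded members.
    """
    center = detected_xy.mean(axis=0)
    radius = np.linalg.norm(detected_xy - center, axis=1).mean()

    # Sort members by their angle around the group's center.
    d = detected_xy - center
    angles = np.sort(np.arctan2(d[:, 1], d[:, 0]))

    guesses = []
    for a0, a1 in zip(angles, np.roll(angles, -1)):
        gap = (a1 - a0) % (2 * np.pi)          # handles the wrap-around pair
        if np.degrees(gap) > min_gap_deg:
            mid = a0 + gap / 2.0               # middle of the empty arc
            guesses.append((center[0] + radius * np.cos(mid),
                            center[1] + radius * np.sin(mid)))
    return guesses
```

For three detected people at (0, 1), (1, 0), and (0, -1), the only large gap faces the empty left side of the circle, so the function returns a single guess there, roughly where a fourth, occluded person would stand.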

Scientists at The University of Texas at Austin have redesigned a key component of a widely used CRISPR-based gene-editing tool, called Cas9, to be thousands of times less likely to target the wrong stretch of DNA while remaining just as efficient as the original version, potentially making it much safer.

Other labs have redesigned Cas9 to reduce off-target interactions, but so far all of these versions have improved accuracy by sacrificing speed. SuperFi-Cas9, as the new version has been dubbed, is 4,000 times less likely to cut off-target sites but just as fast as naturally occurring Cas9. Jack Bravo, a postdoctoral researcher who worked on the project, says you can think of the different lab-generated versions of Cas9 as different models of self-driving cars: most are really safe, but they have a top speed of 10 miles per hour.

“They’re safer than the naturally occurring Cas9, but it comes at a big cost: They’re going extremely slowly,” said Bravo. “SuperFi-Cas9 is like a self-driving car that has been engineered to be extremely safe, but it can still go at full speed.”

In this episode of the Physics World Stories podcast, Andrew Glester catches up with two engineers from the UK Atomic Energy Authority to learn more about this latest development. Leah Morgan, a physicist-turned-engineer, explains why JET’s recent success is great news for the ITER project – a larger experimental fusion reactor currently under construction in Cadarache, France. Later in the episode, mechanical design engineer Helena Livesey talks about the important role of robotics in accessing equipment within the extreme conditions inside a tokamak device.

I’m trying to recall a sci-fi short story that I once read, about a spacecraft that’s attempting to travel farther from Earth than anyone ever has before. As it gets farther away, the crew start to experience unexplained psychological and neurological symptoms. One by one, they eventually become catatonic and need to be cared for in the ship’s infirmary, while their crewmates desperately try to determine the cause.

The protagonist is the last person to be affected, and just as they are starting to succumb, they come up with a theory: human consciousness is not just an individual phenomenon, but is somehow dependent on the collective effect of all the other human minds on Earth. So as the ship leaves Earth’s “sphere of influence”, its passengers lose their consciousness and intelligence. Having realized this, the protagonist is barely able to program the autopilot to turn around, and the narration describes their descent into insanity and subsequent return to consciousness.

The title might have contained a reference to “closeness”, “distance”, “solitude”, “togetherness”, or something along those lines. I have a vague sense that the theme and style reminded me of David Brin’s work, but having looked through his bibliography, I don’t think it’s one of his stories.


You are on the PRO Robots channel, and in this video we present the March 2022 news digest: Expo 2022 in Dubai, the largest technology exhibition; artificial intelligence that could replace programmers; new arms for the Atlas robot; an emotional android; and the opening of Gigafactory Berlin by Elon Musk. All the most interesting news from the world of high-tech in one issue!


Standard image sensors, like the billion or so already installed in practically every smartphone in use today, capture light intensity and color. Relying on common, off-the-shelf sensor technology—known as CMOS—these cameras have grown smaller and more powerful by the year and now offer tens-of-megapixels resolution. But they’ve still seen in only two dimensions, capturing images that are flat, like a drawing—until now.

Researchers at Stanford University have created a new approach that allows standard image sensors to see in three dimensions. That is, these common cameras could soon be used to measure the distance to objects.

The engineering possibilities are dramatic. Measuring distance between objects with light is currently possible only with specialized and expensive lidar (short for “light detection and ranging”) systems. If you’ve seen a self-driving car tooling around, you can spot it right off by the hunchback of technology mounted to the roof. Most of that gear is the car’s lidar crash-avoidance system, which uses lasers to determine distances between objects.
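The ranging arithmetic behind lidar is plain time-of-flight: a laser pulse travels to the target and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the 200-nanosecond pulse time is an invented example):

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_distance_m(round_trip_seconds: float) -> float:
    """Distance to a target from a laser pulse's round-trip time of flight."""
    return C * round_trip_seconds / 2.0

# A pulse that returns after 200 ns puts the target about 30 m away.
print(tof_distance_m(200e-9))  # ~29.98
```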