
BladeBUG Robots Clean Massive Wind Turbine Blades ‘There were the cleaners, with large padded feet, who were apparently polishing their way the whole length…’ — Arthur C. Clarke, 1972.

IceBot Antarctic (Planetary?) Robotic Explorers Made Of Ice ‘Some will combine in place to form more complicated structures, like excavators or centipedes.’ — Greg Bear, 2015.

Study: Robots Encourage Humans To Take Risks Not exactly Three Laws compliant.


Researchers at Harvard University have devised a system based on Wi-Fi sensing that could enhance collaboration between robots operating in unmapped environments. The system, presented in a paper pre-published on arXiv, essentially emulates antenna arrays in the air as a robot moves freely in a 2D or 3D environment.

“The main goal of our paper was to leverage arbitrary 3D trajectories for a (UAV or UGV) equipped with an on-board estimation sensor,” Ninad Jadhav, one of the researchers who carried out the study, told TechXplore. “This allows a Wi-Fi-signal-receiving robot to estimate the spatial direction (in azimuth and elevation) of other neighboring robots by capturing all the wireless signal paths traveling between the transmitting and receiving robot (which we call AOA profile). Additionally, we also characterized how the trajectory shape impacts the AOA profile using Cramér-Rao bound.”
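The "emulated antenna array" idea can be sketched with standard synthetic-aperture reasoning: as the receiver moves along a known trajectory, each position acts like one element of a virtual array, and a Bartlett-style projection over candidate directions yields an angle-of-arrival (AOA) profile. The sketch below is a minimal illustration under idealized assumptions (known positions, noise-free plane-wave phases, a single 5 GHz path, 2D azimuth only); it is not the authors' implementation.

```python
import numpy as np

C = 3e8          # speed of light (m/s)
FREQ = 5.0e9     # assumed 5 GHz Wi-Fi carrier
LAM = C / FREQ   # wavelength (m)

def aoa_profile(positions, phases, azimuths):
    """Bartlett-style AOA profile for a receiver moving in 2D.

    positions: (K, 2) receiver positions along the trajectory (m)
    phases:    (K,) measured carrier phases at those positions (rad)
    azimuths:  (M,) candidate arrival angles (rad)
    """
    profile = []
    for az in azimuths:
        d = np.array([np.cos(az), np.sin(az)])
        # expected phase at each position for a plane wave arriving from az
        expected = 2 * np.pi / LAM * positions @ d
        # coherent sum peaks when expected phases match the measurements
        profile.append(abs(np.sum(np.exp(1j * (phases - expected)))))
    return np.array(profile) / len(positions)

# Simulate: transmitter at azimuth 0, receiver tracing a small circle
K = 64
t = np.linspace(0, 1, K)
positions = 0.5 * np.column_stack([np.cos(2 * np.pi * t),
                                   np.sin(2 * np.pi * t)])
true_dir = np.array([1.0, 0.0])                    # azimuth 0
phases = 2 * np.pi / LAM * positions @ true_dir    # ideal measured phases

azimuths = np.radians(np.arange(-180, 181))        # 1-degree grid
profile = aoa_profile(positions, phases, azimuths)
est_deg = np.degrees(azimuths[np.argmax(profile)])
print(f"estimated azimuth: {est_deg:.1f} degrees")
```

The profile peaks where the candidate direction agrees with the true one; the same principle extends to azimuth and elevation jointly when the trajectory is genuinely 3D, which is the regime the paper targets.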

In their previous studies, Jadhav and his colleagues focused on robot collaboration scenarios in which the robots followed 2D trajectories with a limited set of geometries (e.g., linear or curved). The new system they created, on the other hand, is applicable to scenarios where robots are moving freely, following a wider range of trajectories.

The brain’s knack for learning from a handful of examples prompted a pair of neuroscientists to see whether they could design an AI that learns from few data points by borrowing principles from how we think the brain solves this problem. In a paper in Frontiers in Computational Neuroscience, they report that the approach significantly boosts AI’s ability to learn new visual concepts from few examples.

“Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples,” Maximilian Riesenhuber, from Georgetown University Medical Center, said in a press release. “We can get computers to learn much better from few examples by leveraging prior learning in a way that we think mirrors what the brain is doing.”

Several decades of neuroscience research suggest that the brain’s ability to learn so quickly depends on its ability to use prior knowledge to understand new concepts based on little data. When it comes to visual understanding, this can rely on similarities of shape, structure, or color, but the brain can also leverage abstract visual concepts thought to be encoded in a brain region called the anterior temporal lobe (ATL).
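The "leverage prior learning" idea is commonly illustrated with prototype-based few-shot classification: embed a handful of examples of a new concept in a feature space learned beforehand, and classify queries by their nearest class prototype. The toy sketch below simulates such a feature space with Gaussian clusters; it illustrates the general principle, not the paper's biologically inspired model, and all names and numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_class(center, n, scale=0.3):
    # stand-in for embedding n examples of a concept with a pretrained network
    return center + scale * rng.standard_normal((n, center.size))

# Pretend these are locations of two new concepts in a prior feature space
centers = {"cat": np.array([1.0, 0.0]), "dog": np.array([0.0, 1.0])}

# Few-shot regime: only 5 labeled examples ("shots") per new concept
support = {name: make_class(c, 5) for name, c in centers.items()}

# One prototype per concept: the mean of its support embeddings
prototypes = {name: feats.mean(axis=0) for name, feats in support.items()}

def classify(x):
    # nearest prototype in the shared feature space
    return min(prototypes, key=lambda name: np.linalg.norm(x - prototypes[name]))

query = make_class(centers["cat"], 1)[0]
print(classify(query))
```

The heavy lifting is done by the reused feature space, which is the rough analogue of the prior knowledge the brain brings to a new concept; the few-shot step itself is just averaging and a distance comparison.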

Weird, right?

The team’s critical insight was to construct a “viral language” of sorts, based purely on its genetic sequences. This language, if given sufficient examples, can then be analyzed using NLP techniques to predict how changes to its genome alter its interaction with our immune system. That is, using artificial language techniques, it may be possible to hunt down key areas in a viral genome that, when mutated, allow it to escape roaming antibodies.

It’s a seriously kooky idea. Yet when tested on some of our greatest viral foes, like influenza (the seasonal flu), HIV, and SARS-CoV-2, the algorithm was able to discern critical mutations that “transform” each virus just enough to escape the grasp of our immune surveillance system.
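The likelihood-scoring half of the idea can be sketched with even a character-level bigram model over amino acids: fit it on known sequences, then ask how "grammatical" a mutant looks relative to the wild type. The sequences and the mutation below are invented for illustration, and the actual work uses far richer neural language models; this only shows the scoring mechanic.

```python
import math
from collections import Counter

# Toy training set of protein fragments (made up for illustration)
training = ["MKTAYIAK", "MKTAYIAR", "MKSAYIAK", "MKTAYLAK"]

bigram = Counter()
context = Counter()
for seq in training:
    for a, b in zip(seq, seq[1:]):
        bigram[(a, b)] += 1
        context[a] += 1

def log_likelihood(seq, alpha=1.0, alphabet=20):
    """Add-alpha smoothed bigram log-probability of a sequence."""
    ll = 0.0
    for a, b in zip(seq, seq[1:]):
        p = (bigram[(a, b)] + alpha) / (context[a] + alpha * alphabet)
        ll += math.log(p)
    return ll

wild_type = "MKTAYIAK"
mutant = "MKTWYIAK"  # hypothetical single substitution (T4 -> W)
print(log_likelihood(wild_type), log_likelihood(mutant))
```

A mutation that tanks the score is "ungrammatical" (likely breaks the protein); the escape candidates the paper hunts for are the opposite corner, mutations that stay grammatical while shifting the sequence's "meaning" away from what antibodies recognize.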

Moscow has revealed a plan to spend $2.4 million on a giant database containing information about every single city resident, including passport numbers, insurance policies, salaries, car registrations – and even their pets.

It will also include work and tax details, school grades, and data from their ‘Troika’ card – Moscow’s unified transport payment system, used on the metro, buses and trains.

The new proposal will undoubtedly increase fears about ever-growing surveillance in the Russian capital, where the number of facial recognition cameras has recently been increased.

The far side of the moon is poised to become our newest and best window on the hidden history of the cosmos. Over the course of the next decade, astronomers are planning to perform unprecedented observations of the early universe from that unique lunar perch using radio telescopes deployed on a new generation of orbiters and robotic rovers.

These instruments will study the universe’s initial half-billion years—the first few hundred million or so of which make up the so-called cosmic “dark ages,” when stars and galaxies had yet to form. Bereft of starlight, this era is invisible to optical observations. Radio telescopes, however, can tune in to long-wavelength, low-frequency radio emissions produced by the gigantic clouds of neutral hydrogen that then filled the universe. But these emissions are difficult, if not downright impossible, to detect from Earth because they are either blocked or distorted by our planet’s atmosphere or swamped by human-generated radio noise.
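The emission in question is the 21 cm hyperfine line of neutral hydrogen (rest frequency about 1420.4 MHz), stretched by cosmic expansion to an observed frequency of f_rest / (1 + z). A quick calculation for representative dark-ages redshifts (the exact range probed depends on the instrument) shows why these observations land at frequencies Earth's ionosphere and radio chatter make so hostile:

```python
F_REST_MHZ = 1420.4  # rest frequency of the neutral-hydrogen 21 cm line

def observed_mhz(z):
    """Observed frequency (MHz) of the 21 cm line emitted at redshift z."""
    return F_REST_MHZ / (1 + z)

for z in (30, 50, 80):  # representative dark-ages redshifts
    print(f"z = {z}: {observed_mhz(z):5.1f} MHz")
```

At z ≈ 30 the line arrives near 46 MHz, and deeper into the dark ages it drops below 20 MHz – a band that the ionosphere reflects or distorts and that terrestrial transmitters crowd, which is exactly what makes the radio-quiet lunar far side so attractive.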

For decades, scientists have dreamed of conducting such studies from the moon’s far side, where instruments would be shielded from earthly transmissions and untroubled by any significant atmosphere that could impede cosmic views. Now, with multiple space agencies pursuing lunar missions, those dreams are set to become reality.