
Robot dog may get to go to the moon

The robotic explorer GLIMPSE, created at ETH Zurich and the University of Zurich, has made it into the final round of a competition for prospecting resources in space. The long-term goal is for the robot to explore the south polar region of the moon.

The south polar region of the moon is believed to contain many resources that would be useful for lunar base operations, such as metals, water in the form of ice, and oxygen stored in rocks. But to find them, an explorer robot that can withstand the extreme conditions of this part of the moon is needed. Numerous craters make moving around difficult, while the low angle of the sunlight and thick layers of dust impede the use of light-based measuring instruments. Strong fluctuations in temperature pose a further challenge.

The European Space Agency (ESA) and the European Space Resources Innovation Center ESRIC called on European and Canadian engineering teams to develop robots and tools capable of mapping and prospecting the shadowy south polar region of the moon, between the Shoemaker and the Faustini craters. To do this, the researchers had to adapt terrestrial exploration technologies for the harsh conditions on the moon.

A weakly supervised machine learning model to extract features from microscopy images

Deep learning models have proved to be highly promising tools for analyzing large numbers of images. Over the past decade or so, they have thus been introduced in a variety of settings, including research laboratories.

In the field of biology, deep learning could potentially facilitate the quantitative analysis of microscopy images, allowing researchers to extract meaningful information from these images and interpret their observations. Training models to do this, however, can be very challenging, as it often requires the extraction of features (i.e., number of cells, area of cells, etc.) from microscopy images and the manual annotation of training data.

Researchers at CERVO Brain Research Center, the Institute for Intelligence and Data, and Université Laval in Canada have recently developed an artificial neural network that could perform in-depth analyses of microscopy images using simpler, image-level annotations. This model, dubbed MICRA-Net (MICRoscopy Analysis Network), was introduced in a paper published in Nature Machine Intelligence.
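The appeal of image-level (weak) supervision can be illustrated with a toy example. The sketch below is not MICRA-Net: it trains a plain logistic-regression classifier on synthetic 16-pixel "images" that carry only a single contains-a-blob label, then shows that the learned weights nevertheless localize the blob pixels, the same principle that lets a weakly supervised model extract features without pixel-level masks. All names and values are invented for illustration.

```python
import math
import random

random.seed(0)

# Weak supervision in miniature: each 16-pixel "image" gets only one
# image-level label (1 = contains a bright blob, 0 = background),
# never a per-pixel mask.
def make_image(has_blob):
    img = [random.gauss(0.2, 0.05) for _ in range(16)]
    if has_blob:
        for i in range(6, 10):            # the blob always sits at pixels 6-9
            img[i] += 0.8
    return img

data = [(make_image(label), label) for label in [0, 1] * 50]

# Train a logistic-regression classifier with plain stochastic gradient descent.
w, b = [0.0] * 16, 0.0
for _ in range(200):
    for img, y in data:
        z = sum(wi * xi for wi, xi in zip(w, img)) + b
        p = 1.0 / (1.0 + math.exp(-z))
        g = p - y                          # gradient of the log-loss
        for i in range(16):
            w[i] -= 0.1 * g * img[i]
        b -= 0.1 * g

# Although no pixel was ever labeled, the largest learned weights
# coincide with the blob: image-level labels localized the feature.
important = sorted(range(16), key=lambda i: -w[i])[:4]
print(sorted(important))
```

The same gradient information is what deep models exploit at scale: a classifier forced to predict an image-level property ends up concentrating weight on the pixels that carry it.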

Lighting up artificial neural networks with optomemristors

An international team of scientists has performed difficult machine learning computations using a nanoscale device named an “optomemristor.”

The chalcogenide thin-film device uses both light and electrical signals, which interact to emulate multi-factor biological computations of the mammalian brain while consuming very little energy.

To date, research on hardware for artificial intelligence and machine learning applications has concentrated mainly on developing electronic or photonic synapses and neurons, and on combining these to carry out basic forms of neural-type processing.

Evolvable neural units that can mimic the brain’s synaptic plasticity

Machine learning techniques are designed to mathematically emulate the functions and structure of neurons and neural networks in the brain. However, biological neurons are very complex, which makes artificially replicating them particularly challenging.

Researchers at Korea University have recently tried to reproduce the complexity of biological neurons more effectively by approximating the function of individual neurons and synapses. Their paper, published in Nature Machine Intelligence, introduces a network of evolvable neural units (ENUs) that can adapt to mimic specific neurons and mechanisms of synaptic plasticity.

“The inspiration for our paper comes from the observation of the complexity of biological neurons, and the fact that it seems almost impossible to model all of that complexity produced by nature mathematically,” Paul Bertens, one of the researchers who carried out the study, told TechXplore. “Current artificial neural networks used in deep learning are very powerful in many ways, but they do not really match biological neural network behavior. Our idea was to use these existing artificial neural networks not to model the entire brain, but to model each individual neuron and synapse.”
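The idea of modeling each neuron with its own small, adaptable network can be sketched in a few lines. This is not the authors' implementation; it is a minimal illustration in which a neuron's transfer function is a tiny tanh network whose parameters improve under a simple (1+1) evolutionary loop, here evolving toward a ReLU-like response.

```python
import math
import random

random.seed(1)

# Minimal illustration (not the authors' code): a neuron whose transfer
# function is itself a tiny trainable network, improved by mutation.
class EvolvableNeuron:
    def __init__(self, n_hidden=4):
        self.w1 = [random.uniform(-1, 1) for _ in range(n_hidden)]
        self.w2 = [random.uniform(-1, 1) for _ in range(n_hidden)]

    def activate(self, x):
        hidden = [math.tanh(w * x) for w in self.w1]
        return sum(w * h for w, h in zip(self.w2, hidden))

    def mutated(self, scale=0.1):
        child = EvolvableNeuron()
        child.w1 = [w + random.gauss(0, scale) for w in self.w1]
        child.w2 = [w + random.gauss(0, scale) for w in self.w2]
        return child

# Evolve one neuron toward a ReLU-like target with a (1+1) loop:
# keep a mutated child only when it reduces the squared error.
target = lambda x: max(0.0, x)
xs = [i / 10 - 1.0 for i in range(21)]
error = lambda n: sum((n.activate(x) - target(x)) ** 2 for x in xs)

best = EvolvableNeuron()
init_err = best_err = error(best)
for _ in range(500):
    child = best.mutated()
    err = error(child)
    if err < best_err:
        best, best_err = child, err

print(round(init_err, 3), round(best_err, 3))
```

The point of the sketch: because the neuron's response is learned rather than fixed, the same unit could in principle evolve toward many different biological transfer functions or plasticity rules.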

A perspective on the study of artificial and biological neural networks

Evolution, the process by which living organisms adapt to their surrounding environment over time, has been widely studied over the years. As first hypothesized by Darwin in the mid-1800s, research evidence suggests that most biological species, including humans, continuously adapt to new environmental circumstances and that this ultimately enables their survival.

In recent years, researchers have been developing advanced computational techniques based on artificial neural networks, architectures inspired by the networks of biological neurons in the brain. Models based on artificial neural networks are trained to optimize millions of synaptic weights over millions of observations in order to make accurate predictions or classify data.

Researchers at Princeton University have recently carried out a study investigating the similarities and differences between artificial and biological neural networks from an evolutionary standpoint. Their paper, published in Neuron, compares the evolution of biological neural networks with that of artificial ones using psychology theory.

Artificial intelligence is becoming sustainable

A research group from Politecnico di Milano has developed a new computing circuit that can execute advanced operations, typical of neural networks for artificial intelligence, in one single operation.

The circuit's performance in terms of speed and energy consumption paves the way for a new generation of computing accelerators that are more energy efficient and more sustainable on a global scale. The study was recently published in the prestigious journal Science Advances.

Recognizing a face or an object, or correctly interpreting a word or a musical tune, are operations that are possible today on the most common electronic gadgets, such as smartphones and tablets, thanks to artificial intelligence. For this to happen, complicated neural networks need to be appropriately trained, a process so energy-demanding that, according to some studies, the CO2 emissions deriving from the training of a complex neural network can equal the emissions of five cars over their whole life cycle.
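Part of why such a circuit can be more efficient: when a weight matrix is stored as device conductances G, a memristive crossbar produces the output currents I = G·V in a single physical step (Ohm's law per device, current summation along a shared output line), which is precisely the matrix-vector product at the heart of neural-network inference. A digital sketch of what the analog array computes, with all values invented for illustration:

```python
# Digital sketch of what a memristive crossbar computes in one step.
G = [[0.2, 0.5, 0.1],    # conductances (stored weights), in siemens
     [0.4, 0.1, 0.3]]
V = [1.0, 0.5, 2.0]      # applied voltages (input activations)

# Ohm's law gives each device a current g*v; summing the currents on a
# shared output line yields the whole matrix-vector product at once.
I = [sum(g * v for g, v in zip(row, V)) for row in G]
print([round(i, 2) for i in I])   # [0.65, 1.05]
```

A digital processor needs one multiply-accumulate per matrix entry; the analog array gets the same result from the physics of the circuit, which is where the energy saving comes from.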

What producers of Star Wars movies are getting wrong about androids

Robin Murphy, a roboticist at Texas A&M University, has published a Focus piece in the journal Science Robotics outlining her views on the robots portrayed in “Star Wars,” particularly those featured in “The Mandalorian” and “The Book of Boba Fett.” In her article, she argues that while the robot portrayals in both series are quite creative, they are not wild enough to compete with robots made and used in the real world today.

Murphy begins by noting that one robot in particular, IG-11 in “The Mandalorian,” makes for good viewing with a rotating head that lets it shoot at targets in any direction. She also notes, however, that such a robot would very likely be highly susceptible to joint failure and saddled with huge computational demands; a more practical design, she suggests, would use fixed-array sensors.

Murphy also notes that robots in “Star Wars” do fail on occasion, generally during suspenseful scenes, which she further notes might explain why the empire met with its demise. As just one example, she wonders why the stormtroopers so often miss their targets. She also notes that in some ways, droids in “Star Wars” movies tend to be far more advanced than droids in the real world, allowing them to hold human-like jobs such as bartending, teaching or translating. In so doing, she points out, producers of the movies have shied away from showing them doing more mundane work, like mining.

The UK’s First Autonomous Passenger Bus Started Road Tests This Week

The steering wheel, gas, and brakes that safety drivers will use if they need to take over are separate from the system the buses use to navigate autonomously. During the initial two-week testing period, buses will run without passengers, but the companies involved are aiming to have riders on board by summer.

The self-driving software made by Fusion Processing, called CAVstar for “connected and autonomous vehicles,” isn’t limited to radar, lidar, or cameras, but rather integrates all three. The buses are clearly marked as autonomous so nearby drivers are aware that a computer’s running the show. The question is, how much will this impact drivers’ behavior and relevant driving decisions? Would you feel less rude cutting off a driverless bus? More obliged to let it pass you? Or just sort of confused by the whole situation?
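Fusion Processing has not published CAVstar's fusion algorithm, so purely as an illustration of why integrating radar, lidar, and cameras helps, here is a generic confidence-weighted fusion of three range estimates for one detected object. All numbers and the weighting scheme are invented for this sketch.

```python
# Generic sensor-fusion sketch (not CAVstar's actual algorithm): fuse
# three sensors' range estimates for one object by confidence weighting.
readings = {                 # (distance estimate in m, confidence 0-1)
    "radar":  (42.1, 0.9),   # reliable range, even in rain or darkness
    "lidar":  (41.8, 0.8),   # precise geometry, degraded by bad weather
    "camera": (43.0, 0.5),   # rich semantics, weaker absolute depth
}

total_conf = sum(c for _, c in readings.values())
fused = sum(d * c for d, c in readings.values()) / total_conf
print(round(fused, 2))       # 42.2: pulled toward the high-confidence sensors
```

The benefit of integration is that the modalities fail in different conditions, so a confidence-weighted estimate degrades gracefully when any single sensor does.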

Each bus can carry 36 passengers, and the number of planned trips per day means the autonomous buses could move up to 10,000 passengers a week. The project’s leaders anticipate that the self-driving buses will reduce average trip times and improve the schedule reliability of the route. This sounds like it’ll mostly be a good thing, but what will happen when, say, an elderly or disabled passenger needs some extra time to get on or off the bus?
