
At 870 degrees Fahrenheit and 90 times Earth’s atmospheric pressure, we’re going to need something a little more robust than your MacBook to run future rovers.

Humanity has sent four rovers to Mars, and worldwide four more missions are in the works to continue populating the Red Planet with robotic explorers. Why haven’t we sent a rover to Venus, our other next-door planetary neighbor? Because the caustic surface of Venus would incinerate electronics with its 872°F temperatures and seize mechanical components with its immense atmospheric pressure. At 90 times the surface pressure of Earth, standing on the surface of Venus is the equivalent of being almost 3,000 feet underwater.
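That underwater figure checks out with simple hydrostatics: pressure under water grows as P = ρgh, so the depth matching Venus’s 89 atmospheres of excess pressure follows directly. A minimal back-of-the-envelope sketch in Python (my own arithmetic, not the article’s; assumed constants noted in the comments):

```python
# Back-of-the-envelope check of "90 atmospheres ≈ 3,000 feet underwater".
# Assumed constants (not from the article): seawater density and standard gravity.
ATM = 101_325            # Pa per standard atmosphere
RHO_SEAWATER = 1025      # kg/m^3, typical near-surface seawater
G = 9.81                 # m/s^2

# The water column supplies the pressure in excess of the 1 atm at the surface.
excess_pressure = (90 - 1) * ATM                  # Pa
depth_m = excess_pressure / (RHO_SEAWATER * G)    # from P = rho * g * h
print(f"{depth_m:.0f} m ≈ {depth_m * 3.28084:.0f} ft")  # ~897 m ≈ ~2943 ft
```

which indeed lands just shy of 3,000 feet.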

The Great Galactic Ghoul might devour half the spacecraft we send to Mars, but Venus torched any ghouls living there long ago.

Fortunately, NASA recently took a big step toward achieving the dream of a Venusian rover. As reported by Ars Technica, researchers at the NASA Glenn Research Center built a computer chip that survived Venus-like conditions for an impressive 521 hours, almost 22 days. Even then, the experiment had to end not because the chip was breaking down, but because the Glenn Extreme Environments Rig (GEER), the chamber that maintains simulated Venus temperatures and pressures, needed to be shut down after running for over three weeks straight.

Terms such as ‘Artificial Intelligence’ or ‘Neurotechnology’ were new not so long ago. We can’t evolve faster than our language does: we think in concepts, and evolution itself is a linguistic, code-theoretic process. Do yourself a humongous favor and look over these 33 transhumanist neologisms. Here’s a fairly comprehensive glossary of thirty-three newly introduced concepts and terms from The Syntellect Hypothesis: Five Paradigms of the Mind’s Evolution by Russian-Amer…

The U.S. #military, like many others around the world, is investing significant time and resources into expanding its electronic #warfare capabilities across the board, for offensive and defensive purposes, in the air, at sea, on land, and even in space. Now, advances in #machinelearning and #artificialintelligence mean that electronic warfare systems, no matter their specific function, may all benefit from a new underlying concept known as advanced “Cognitive Electronic Warfare,” or Cognitive EW. The main goal is to increasingly automate and otherwise speed up critical processes, from analyzing electronic intelligence to developing new electronic warfare measures and countermeasures, potentially in real time and across large swathes of networked platforms.

The holy grail of this concept is electronic warfare systems that can spot new or otherwise unexpected threats and immediately begin adapting to them.
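To make that “spot and adapt” loop concrete, here is a toy, purely illustrative sketch in Python. It stands in for no real EW system, and the signal features are hypothetical; it simply keeps a running statistical model of what it has seen, flags anything far outside that model as novel, and folds every observation back in so the model keeps adapting:

```python
import numpy as np

class OnlineAnomalyDetector:
    """Toy stand-in for the 'spot new threats, then adapt' loop: flags feature
    vectors (e.g. an emitter's pulse width, frequency, pulse-repetition interval)
    that sit far from everything observed so far, then updates its model."""

    def __init__(self, dim, threshold=4.0):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros(dim)      # running sum of squared deviations (Welford)
        self.threshold = threshold   # z-score beyond which a signal counts as novel

    def observe(self, x):
        x = np.asarray(x, dtype=float)
        novel = False
        if self.n > 10:              # need some history before judging novelty
            std = np.sqrt(self.m2 / self.n) + 1e-9
            novel = bool(np.abs((x - self.mean) / std).max() > self.threshold)
        # Adapt: fold the observation into the running statistics either way.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return novel

detector = OnlineAnomalyDetector(dim=3)
rng = np.random.default_rng(0)
for _ in range(200):                 # familiar emitters, small natural variation
    detector.observe(rng.normal([1.0, 3.0, 0.5], 0.1))
print(detector.observe([9.0, 3.0, 0.5]))  # an unexpected signature -> True
```

A real cognitive EW system would swap these running statistics for learned models over far richer signal representations, but the feedback loop is the same in spirit.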

SkyWatch Space Applications, the Canadian startup whose EarthCache platform helps software developers embed geospatial data and imagery in applications, announced a partnership Oct. 5 with Picterra, a Swiss startup with a self-service platform to help customers autonomously extract information from aerial and satellite imagery.

“One of the things that has been very difficult to achieve is this ability to easily and affordably access satellite data in a way that is fast but also in a way in which you can derive the insights you need for your particular business,” James Slifierz, SkyWatch CEO, told SpaceNews. “What if you can merge both the accessibility of this data with an ease of developing and applying intelligence to the data so that any company in the world could have the tools to derive insights?”

SkyWatch’s EarthCache platform is designed to ease access to aerial and satellite imagery. However, SkyWatch doesn’t provide data analysis.

Picterra is not a data provider. Instead, the company helps customers build their own machine-learning algorithms to detect things like building footprints in imagery that customers either upload or find in Picterra’s library of open-source imagery.
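For a flavor of what such self-service detection involves under the hood, here is a generic sketch with scikit-learn; it is not Picterra’s platform or API, and the training data below is entirely hypothetical. The idea is simply to learn a patch classifier from labeled examples and sweep it across a scene:

```python
# Illustrative only: a toy building-footprint detector in the spirit of what
# self-service platforms automate. Picterra's real models/API are not shown.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_features(patch):
    """Crude per-patch features from an RGB array: channel means plus variance."""
    return [patch[..., c].mean() for c in range(3)] + [patch.var()]

# Hypothetical training data: 16x16 RGB patches labeled building / not building.
rng = np.random.default_rng(1)
building = rng.uniform(0.55, 0.9, size=(50, 16, 16, 3))  # brighter rooftops (toy)
ground = rng.uniform(0.1, 0.45, size=(50, 16, 16, 3))    # darker terrain (toy)
X = np.array([patch_features(p) for p in np.concatenate([building, ground])])
y = np.array([1] * 50 + [0] * 50)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Sweep the classifier across a (toy, random) scene tile patch by patch.
scene = rng.uniform(0.1, 0.9, size=(64, 64, 3))
for i in range(0, 64, 16):
    for j in range(0, 64, 16):
        if clf.predict([patch_features(scene[i:i + 16, j:j + 16])])[0]:
            print(f"building-like patch at ({i}, {j})")
```

Real footprint extraction uses deep segmentation models on actual imagery, but the workflow (label a little, train, apply everywhere) is what these platforms package up.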

CERN’s Timepix particle detectors, developed by the Medipix2 Collaboration, help unravel the secret of a long-lost painting by the great Renaissance master, Raphael.

Five hundred years ago, the Italian painter Raphael passed away, leaving behind many works of art: paintings, frescoes, and engravings. Like that of his contemporaries Michelangelo and Leonardo da Vinci, Raphael’s work delighted imitators and fed the greed of counterfeiters, who bequeathed to us many copies, pastiches, and forgeries of the great master of the Renaissance.

For a long time, it was thought that The Madonna and Child, a painting on canvas from a private collection, was not created directly by the master himself. Once the property of popes and later part of Napoleon’s war treasure, the painting changed hands several times before arriving in Prague during the 1930s. Due to its history and numerous inconclusive examinations, its authenticity was long questioned. It has now been attributed to Raphael by a group of independent experts. One of the technologies that provided them with key information was a robotic X-ray scanner using CERN-designed chips.

An international team of Johannes Kepler University researchers is developing robots made from soft materials. A new article in the journal Communications Materials demonstrates how these kinds of soft machines can use weak magnetic fields to move very quickly, even grabbing a fast-moving fly that has landed on them.

“When we imagine a moving machine, such as a robot, we picture something largely made out of hard materials,” says Martin Kaltenbrunner. He and his team of researchers at the JKU’s Department of Soft Matter Physics and the LIT Soft Materials Lab have been working to build a soft-materials-based system. The basic idea underlying these kinds of systems is to create conditions that support close robot-human interaction in the future, without a rigid machine physically harming humans.

Robots of the future will be dexterous, capable of deep, nuanced conversations, and fully autonomous. They are going to be indispensable to humans.

Let me know in the comment section what you wish a robot could help you do.

~ 2020s & The Future Beyond.
#Iconickelx

#artificialintelligence #Robots #AI #automation #Future #4IR #Innovation #digitaltransformation

The first artificial neural networks weren’t abstractions inside a computer, but actual physical systems made of whirring motors and big bundles of wire. Here I’ll describe how you can build one for yourself using SnapCircuits, a kid’s electronics kit. I’ll also muse about how to build a network that works optically using a webcam. And I’ll recount what I learned talking to the artist Ralf Baecker, who built a network using strings, levers, and lead weights.

I showed the SnapCircuits network last year to John Hopfield, a Princeton University physicist who pioneered neural networks in the 1980s, and he quickly got absorbed in tweaking the system to see what he could get it to do. I was a visitor at the Institute for Advanced Study and spent hours interviewing Hopfield for my forthcoming book on physics and the mind.

The type of network that Hopfield became famous for is a bit different from the deep networks that power image recognition and other A.I. systems today. It still consists of basic computing units—“neurons”—that are wired together so that each responds to what the others are doing. But the neurons are not arrayed into layers: there are no dedicated input, output, or intermediate stages. Instead, the network is a big tangle of signals that can loop back on themselves, forming a highly dynamic system.
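To see what such a tangle does in code rather than in motors and wire, here is a minimal Hopfield-style network in Python; this is the standard textbook formulation, not the SnapCircuits wiring itself. Patterns are stored in symmetric weights, and repeated asynchronous updates let the signals loop back until the state settles:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two memories stored as +/-1 neuron states via the Hebbian outer-product rule.
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])
n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)  # no neuron connects to itself

# Start from a corrupted copy of the first memory.
state = patterns[0].copy()
state[1] *= -1  # flip one neuron

# Asynchronous updates: each neuron responds to what all the others are doing,
# and the signals loop back until the network settles into a fixed point.
for _ in range(5):
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(np.array_equal(state, patterns[0]))  # True: the stored memory is recalled
```

Because every neuron feeds back into every other, a corrupted starting state slides into the nearest stored pattern, which is exactly the dynamic, layerless behavior described above.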