
One of my favorite science fiction authors is/was Isaac Asimov (should we use the past tense since he is no longer with us, or the present tense because we still enjoy his writings?). In many ways Asimov was a futurist, but — like all who attempt to foretell what is to come — he occasionally managed to miss the mark.

Take his classic Foundation Trilogy, for example (before he added the two prequels and two sequels). On the one hand we have a Galactic Empire that spans the Milky Way with millions of inhabited worlds and quadrillions of people. Also, we have mighty space vessels equipped with hyperdrives that can convey people from one side of the galaxy to the other while they are still young enough to enjoy the experience.

On the other hand, in Foundation and Empire, when a message arrives at a spaceship via hyperwave for the attention of General Bel Riose, it’s transcribed onto a metal spool that’s placed in a message capsule that will open only to his thumbprint. Asimov simply never conceived of things like today’s wireless networks and tablet computers and suchlike.

Nanobots, tiny robots and vehicles that can navigate through blood vessels to reach the site of a disease, could be used to deliver drugs to tumours that are otherwise difficult to treat.

Once injected or swallowed, most drugs rely on the movement of body fluids to find their way around the body. This means that some types of disease can be difficult to treat effectively in this way.

One aggressive type of brain tumour known as glioblastoma, for example, kills hundreds of thousands of people a year. But because it produces finger-like projections into a patient’s brain tissue that damage the blood vessels around them, it is hard for drugs to reach the tumour site.

The era of the human fighter pilot is ending; air combat will come down to who has the best AI fighter aircraft. AI will also take over ground combat vehicles (tanks) and ships, and last will be armed humanoid robot combat soldiers.


The reported testing of AI against Chinese fighter pilots mirrors US military efforts and underscores China’s major investments in this technology.

A recent string of problems suggests facial recognition’s reliability issues are hurting people in a moment of need. Motherboard reports that there are ongoing complaints about the ID.me facial recognition system at least 21 states use to verify people seeking unemployment benefits. People have gone weeks or months without benefits when the Face Match system doesn’t verify their identities, and have sometimes had no luck getting help through a video chat system meant to solve these problems.

ID.me chief Blake Hall blamed the problems on users rather than the technology. Face Match algorithms have “99.9% efficacy,” he said, and there was “no relationship” between skin tone and recognition failures. Hall instead suggested that people weren’t sharing selfies properly or otherwise weren’t following instructions.

Motherboard noted, though, that at least some people get only three attempts to pass the facial recognition check. The outlet also pointed out that the company’s claims of national unemployment fraud costs have ballooned rapidly in just the past few months, from a reported $100 billion to $400 billion. While Hall attributed that to expanding “data points,” he didn’t say how his firm calculated the damage. In other words, it’s not clear what the real fraud threat is.

We may have progressed beyond drinking mercury to try to prolong life. Instead, by a British government estimate, we have what might be called the ‘immortality industrial research complex’: an enterprise built on genomics, artificial intelligence and other advanced sciences, supported worldwide by governments, big business, academics and billionaires, worth US$110 billion today and projected to reach US$610 billion by 2025.


We are living longer than at any time in human history. And while the search is on for increased longevity if not immortality, new research suggests biological constraints will ultimately determine when you die.

But now a spy swims among the creatures of the deep: Mesobot. Today in the journal Science Robotics, a team of engineers and oceanographers describes how they got a new autonomous underwater vehicle to lock onto the movements of organisms and follow them around the ocean’s “twilight zone,” a chronically understudied band between 650 and 3,200 feet deep that scientists also refer to as mid-water. Thanks to some clever engineering, the researchers did so without flustering these highly sensitive animals, making Mesobot a groundbreaking new tool for oceanographers.

“It’s super cool from an engineering standpoint,” says Northeastern University roboticist Hanumant Singh, who develops ocean robots but wasn’t involved in this research. “It’s really an amazing piece of work, in terms of looking at an area that’s unexplored in the ocean.”

Mesobot looks like a giant yellow-and-black AirPods case, only it’s rather more waterproof and weighs 550 pounds. It can operate with a fiber-optic tether attached to a research vessel at the surface, or it can swim around freely.
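
The excerpt doesn’t describe Mesobot’s actual control software, but the idea of locking onto an animal and following it gently can be illustrated with a simple vision-based control loop. The sketch below is purely hypothetical: it finds a bright blob in a camera frame, measures its offset from the image centre, and issues small proportional thruster commands so that a vehicle creeps after its target rather than lunging at it. Every name and number here is an assumption for illustration.

```python
import numpy as np

def target_offset(frame: np.ndarray, threshold: float = 0.6):
    """Return the (dx, dy) offset of the brightest blob from the image centre,
    in normalised coordinates (-1 to 1), or None if nothing stands out."""
    mask = frame > threshold * frame.max()
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()              # blob centroid, in pixels
    h, w = frame.shape
    return (cx - w / 2) / (w / 2), (cy - h / 2) / (h / 2)

def thruster_command(offset, gain: float = 0.3, max_cmd: float = 0.2):
    """Proportional controller with a low command ceiling, so the vehicle
    creeps after the target instead of lunging at it."""
    if offset is None:
        return 0.0, 0.0                        # hold position if the target is lost
    dx, dy = offset
    return (float(np.clip(gain * dx, -max_cmd, max_cmd)),
            float(np.clip(gain * dy, -max_cmd, max_cmd)))

# Example: a synthetic frame with a bright "animal" up and to the right of centre.
frame = np.zeros((480, 640))
frame[100:120, 500:520] = 1.0
print(thruster_command(target_offset(frame)))   # small corrective command
```

The real vehicle also has to cope with its own drift, changing light, and animals that leave the frame, which is where the clever engineering mentioned above comes in.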

Observing the secrets of the universe’s “Dark Ages” will require capturing ultra-long radio wavelengths—and we can’t do that on Earth.


The universe is constantly beaming its history to us. For instance: information about what happened long, long ago, carried in the ultra-long radio waves that are ubiquitous throughout the universe, likely holds the details of how the first stars and black holes were formed. There’s a problem, though. Because of our atmosphere and the noisy radio signals generated by modern society, we can’t read them from Earth.
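
To put rough numbers on that, here is a small back-of-the-envelope sketch. The specifics are general background rather than anything stated in the article: the signal cosmologists hope to read from this era is the highly redshifted 21 cm line of neutral hydrogen, and Earth’s ionosphere distorts radio below a few tens of megahertz and becomes essentially opaque below roughly 10 MHz.

```python
C = 299_792_458.0          # speed of light, m/s
F_21CM = 1_420.4e6         # rest frequency of the neutral-hydrogen 21 cm line, Hz
IONOSPHERE_CUTOFF = 10e6   # below roughly 10 MHz the ionosphere is essentially opaque

def observed_frequency(z: float) -> float:
    """Frequency at which the 21 cm line arrives after being redshifted by z."""
    return F_21CM / (1.0 + z)

for z in (50, 100, 200):   # representative Dark Ages redshifts (illustrative)
    f = observed_frequency(z)
    wavelength = C / f
    status = ("ionosphere opaque: space or the lunar far side only"
              if f < IONOSPHERE_CUTOFF
              else "severely distorted from the ground")
    print(f"z = {z:>3}: {f / 1e6:5.1f} MHz, wavelength ≈ {wavelength:5.1f} m  ({status})")
```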

That’s why NASA is in the early stages of planning what it would take to build an automated research telescope on the far side of the moon. One of the most ambitious proposals, the Lunar Crater Radio Telescope, would be by far the largest filled-aperture radio telescope dish ever built. Another pair of projects, called FarSide and FarView, would connect a vast array of antennas (eventually more than 100,000, many built on the moon itself out of its surface material) to pick up the signals. The projects are all part of the NASA Innovative Advanced Concepts (NIAC) program, which awards funding to innovators and entrepreneurs to advance radical ideas in hopes of creating breakthrough aerospace concepts. While they are still hypothetical, and years away from reality, the findings from these projects could reshape our cosmological model of the universe.
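
A related bit of arithmetic helps explain why FarSide and FarView call for so many antennas spread over a wide area: for an interferometer, angular resolution scales roughly as wavelength divided by the maximum antenna separation, so ultra-long wavelengths demand very long baselines. The wavelength and baseline below are illustrative assumptions, not figures from either proposal.

```python
import math

WAVELENGTH_M = 21.0    # e.g. the 21 cm hydrogen line redshifted by z ≈ 100 (illustrative)
BASELINE_M = 10_000.0  # assumed maximum antenna separation of 10 km (illustrative)

# Diffraction-limited angular resolution of an interferometer: theta ≈ wavelength / baseline.
theta_rad = WAVELENGTH_M / BASELINE_M
theta_arcmin = math.degrees(theta_rad) * 60
print(f"resolution ≈ {theta_arcmin:.1f} arcminutes")   # ≈ 7.2 arcminutes
```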

“With our telescopes on the moon, we can reverse-engineer the radio spectra that we record, and infer for the first time the properties of the very first stars,” said Jack Burns, a cosmologist at the University of Colorado Boulder and the co-investigator and science lead for both FarSide and FarView. “We care about those first stars because we care about our own origins—I mean, where did we come from? Where did the Sun come from? Where did the Earth come from? The Milky Way?”

Stimulation of the nervous system with neurotechnology has opened up new avenues for treating human disorders. Examples include prosthetic arms and legs that restore the sense of touch in amputees, prosthetic fingertips that provide detailed sensory feedback with varying touch resolution, and intraneural stimulation that helps the blind by giving them sensations of sight.

Scientists in a European collaboration have shown that optic nerve stimulation is a promising neurotechnology for helping the blind, with the constraint that current technology can provide only simple visual signals.

Nevertheless, the scientists’ vision (no pun intended) is to design these simple visual signals to be meaningful in assisting the blind with daily living. Optic nerve stimulation also avoids more invasive procedures such as directly stimulating the brain’s visual cortex. But how does one go about optimizing stimulation of the optic nerve to produce consistent and meaningful visual sensations?

Now, the results of a collaboration between EPFL, Scuola Superiore Sant’Anna and Scuola Internazionale Superiore di Studi Avanzati, published today in Patterns, show that a new optic nerve stimulation protocol is a promising way to develop personalized visual signals for the blind, one that also takes into account signals from the visual cortex. For the moment, the protocol has been tested on convolutional neural networks (CNNs), the artificial neural networks commonly used in computer vision for detecting and classifying objects, here used to simulate the entire visual system. The scientists also performed psychophysical tests on ten healthy subjects, imitating what one would see from optic nerve stimulation, and showed that successful object identification is consistent with the results obtained from the CNN.
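
The excerpt doesn’t spell out the protocol itself, so the sketch below only illustrates the general idea it describes: treat a CNN as a stand-in for the visual system, simulate the percept that a set of electrode amplitudes would evoke, and tune those amplitudes by gradient descent so that the network’s response matches its response to a target image. Every model, shape and number here is a made-up placeholder, not something taken from the study.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a model of the visual system (the study uses CNNs from
# computer vision; this tiny untrained network is only a placeholder).
visual_system = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(8 * 4 * 4, 16),
)

# Hypothetical forward model: each of 64 "electrodes" evokes a fixed Gaussian
# spot of light (a phosphene) on a 32x32 visual field, scaled by its amplitude.
grid = torch.stack(torch.meshgrid(torch.arange(32.0), torch.arange(32.0),
                                  indexing="ij"), dim=-1)          # (32, 32, 2)
centres = torch.rand(64, 2) * 32                                   # (64, 2)
phosphenes = torch.exp(-((grid[None] - centres[:, None, None]) ** 2)
                       .sum(-1) / (2 * 2.0 ** 2))                  # (64, 32, 32)

def percept(amplitudes: torch.Tensor) -> torch.Tensor:
    """Simulated percept produced by a vector of 64 electrode amplitudes."""
    return (amplitudes.view(64, 1, 1) * phosphenes).sum(dim=0)     # (32, 32)

# Pretend this is the image we want the person to perceive.
target_response = visual_system(percept(torch.rand(64)).view(1, 1, 32, 32)).detach()

# Tune the stimulation amplitudes so the CNN "sees" the same thing.
amplitudes = torch.full((64,), 0.1, requires_grad=True)
optimizer = torch.optim.Adam([amplitudes], lr=0.05)
for _ in range(200):
    optimizer.zero_grad()
    response = visual_system(percept(amplitudes).view(1, 1, 32, 32))
    loss = nn.functional.mse_loss(response, target_response)
    loss.backward()
    optimizer.step()

print(f"final response mismatch: {loss.item():.5f}")
```

The point is only that, once the forward model and the surrogate network are differentiable, standard machine-learning optimizers can search the space of stimulation parameters, which is the broader claim made in the quotes below.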

“We are not just trying to stimulate the optic nerve to elicit a visual perception,” explains Simone Romeni, EPFL scientist and first author of the study. “We are developing a way to optimize stimulation protocols that takes into account how the entire visual system responds to optic nerve stimulation.”

“The research shows that you can optimize optic nerve stimulation using machine learning approaches. It shows more generally the full potential of machine learning to optimize stimulation protocols for neuroprosthetic devices,” continues Silvestro Micera, EPFL Bertarelli Foundation Chair in Translational Neural Engineering and Professor of Bioelectronics at the Scuola Superiore Sant’Anna.