
Stanford University engineers have developed an airborne method for imaging underwater objects by combining light and sound to break through the seemingly impassable barrier at the interface of air and water.

The researchers envision their hybrid optical-acoustic system one day being used to conduct drone-based biological marine surveys from the air, carry out large-scale aerial searches of sunken ships and planes, and map the ocean depths with a speed and level of detail similar to what has been achieved for Earth’s landscapes. Their “Photoacoustic Airborne Sonar System” is detailed in a recent study published in the journal IEEE Access.

“Airborne and spaceborne radar and laser-based, or LIDAR, systems have been able to map Earth’s landscapes for decades. Radar signals are even able to penetrate cloud coverage and canopy coverage. However, seawater is much too absorptive for imaging into the water,” said study leader Amin Arbabian, an associate professor of electrical engineering in Stanford’s School of Engineering. “Our goal is to develop a more robust system which can image even through murky water.”

Radar and LiDAR have been incredibly quick and effective tools for mapping and surveying the Earth’s surface from aircraft and satellites, but while they can deliver accurate readings through cloud and even forest canopy cover, they can’t tell you what’s below the surface of the sea. Seawater absorbs far too much of the signal.

Sonar remains the most effective way to map the sea floor – but the vast majority of the oceans, which cover 70 percent of the Earth’s surface, remain unmapped, because sonar has so far had to be deployed from within the water itself. Sound waves sent from air into water lose more than 99.9 percent of their energy crossing the boundary; it’s why the outside world goes so wonderfully silent when you dive down to the bottom of the pool. The meagre 0.1 percent that gets through does create a sonar signal, but the returning echo loses a further 99.9 percent of its energy on the way back up from the water into the air.
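
To put those percentages together: if roughly 0.1 percent of the energy survives each crossing of the air-water boundary, a round trip preserves only about one millionth of the original signal, a 60-decibel loss. Here is a quick back-of-the-envelope sketch; note the 0.1 percent figure is just the rough value quoted above, not a precise transmission coefficient, which in reality varies with angle, frequency and sea state.

```python
# Rough round-trip energy budget for sending sonar from the air.
# Assumes ~0.1% of acoustic energy survives each air-water crossing,
# the approximate figure quoted above.
import math

per_crossing = 0.001                    # fraction surviving one crossing
round_trip = per_crossing ** 2          # air -> water -> air
loss_db = -10 * math.log10(round_trip)  # total energy loss in decibels

print(f"surviving fraction: {round_trip:.0e}")  # 1e-06
print(f"round-trip loss: {loss_db:.0f} dB")     # 60 dB
```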

Sonar is commonly used for submarine detection, among other things, by military forces the world over, chiefly using devices mounted on the undersides of ships. But the closest things thus far to an airborne sonar system are “dippers” like Thales’ FLASH system: low-frequency, wide-band sonars that dangle from cables out the bottom of helicopters and dip into the sea below like noisy teabags. These methods are slow, expensive, and no good at covering large areas.

COVID-19 Research: Chinese researchers from the Beijing Institute of Biotechnology, Academy of Military Medical Sciences (AMMS), led by Professor Dr. Wei Congwen, have found that an HDL (high-density lipoprotein) receptor facilitates entry of SARS-CoV-2 into human host cells.

SARS-CoV-2 infects host cells through binding of the viral spike protein (SARS-2-S) to the cell-surface receptor angiotensin-converting enzyme 2 (ACE2).

The US Army wants to be able to read soldiers’ minds. This would enable machines to detect stress and soldiers’ intentions, and to correct them. It could also allow soldiers to communicate with each other using just their brain signals.


Communicating silently through the mind sounds more at home in a Marvel film, but the US Army is now developing the technology to do it. That said, it may be a while before tangible results are seen.

Research funded by the US Army has decoded brain signals that influence action, and has separated the signals that change behaviour from those that do not.

As a result of this breakthrough, it’s hoped that machines will be able to understand soldiers’ intentions and correct them before action is taken. This could protect soldiers by detecting stress, and the technology could find even more significant uses if further research succeeds.

Artificial intelligence is being developed that can analyze whether its own decision or prediction is reliable.

…An AI that is aware of, and can determine or analyze, its own weaknesses. Basically, it should help those relying on the AI, such as doctors or the passengers of a self-driving car, quickly gauge the risk involved.


How might The Terminator have played out if Skynet had decided it probably wasn’t responsible enough to hold the keys to the entire US nuclear arsenal? As it turns out, scientists may just have saved us from such a future AI-led apocalypse, by creating neural networks that know when they’re untrustworthy.

These deep learning neural networks are designed to mimic the human brain by weighing up a multitude of factors in balance with each other, spotting patterns in masses of data that humans don’t have the capacity to analyse.
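
The excerpt above doesn’t spell out the study’s method, so as an illustration of the general idea, here is a minimal sketch of one widely used way to make a network report its own uncertainty: Monte Carlo dropout, where the same input is passed through the network many times with dropout left on and the spread of the predictions serves as a confidence signal. This is a stand-in technique, not the approach from the study; the model and data below are invented for the example.

```python
# Illustrative sketch: Monte Carlo dropout as one way a neural network
# can flag its own untrustworthy predictions. Hypothetical model and data.
import torch
import torch.nn as nn

class MCDropoutNet(nn.Module):
    def __init__(self, in_dim=8, hidden=64, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p=0.2),  # deliberately left active at inference
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, x, n_samples=50):
    """Average many stochastic forward passes; the spread is the uncertainty."""
    model.train()  # keeps dropout active during prediction
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

model = MCDropoutNet()
x = torch.randn(4, 8)  # four dummy inputs
mean, std = predict_with_uncertainty(model, x)
print(mean.squeeze())
print(std.squeeze())   # a large std marks a prediction to distrust
```

In practice, a deployed system would compare that spread against a threshold and hand uncertain cases to a human, which is the kind of self-check described above.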

Consumer drones have struggled for years with an image of being little more than expensive, delicate toys. But applications in industrial, military and enterprise scenarios have shown that there is indeed a market for unmanned aerial vehicles, and today a startup that makes drones for some of those purposes is announcing a large round of funding and a partnership that provides a picture of how the drone industry will look in years to come.

Percepto, which makes drones — both the hardware and software — to monitor and analyze industrial sites and other physical work areas largely unattended by people, has raised $45 million in a Series B round of funding.

Alongside this, Percepto is now working with Boston Dynamics, integrating the latter’s Spot robots with its own Sparrow drones, with the aim of better infrastructure assessments, and potentially more as Spot’s agility improves.