
X-AR uses wireless signals and computer vision to enable users to perceive things that are invisible to the human eye (i.e., to deliver non-line-of-sight perception). It combines new antenna designs, wireless signal processing algorithms, and AI-based fusion of different sensors.

This design introduces three main innovations:

1) AR-conformal wide-band antenna that tightly matches the shape of the AR headset visor and provides the headset with Radio Frequency (RF) sensing capabilities. The antenna is flexible, lightweight, and fits on existing headsets without obstructing any of their cameras or the user’s field of view.

The field equations of Einstein’s General Relativity admit solutions, such as warp-drive metrics and traversable wormholes, that would permit effective faster-than-light (FTL) travel, so a handful of researchers are working to see whether a Star Trek-style warp drive, or perhaps a kind of artificial wormhole, could be created with our technology.

But even if shown feasible tomorrow, it’s possible that designs for an FTL system could be as far ahead of a functional starship as Leonardo da Vinci’s 16th century drawings of flying machines were ahead of the Wright Flyer of 1903. But this need not be a showstopper against human interstellar flight in the next century or two. Short of FTL travel, there are technologies in the works that could enable human expeditions to planets orbiting some of the nearest stars.

Certainly, the feasibility of such missions will depend on geopolitical and economic factors. But it will also depend on the distance to the nearest Earth-like exoplanet. Located roughly 4.37 light years away, the Alpha Centauri system is the Sun’s closest neighbor; thus science fiction, including Star Trek, has envisioned it as humanity’s first interstellar destination.


Artificial intelligence algorithms have had a meteoric impact on protein structure prediction, such as when DeepMind’s AlphaFold2 predicted the structures of 200 million proteins. Now, David Baker and his team of biochemists at the University of Washington have taken protein-folding AI a step further. In a Nature publication from February 22, they outlined how they used AI to design tailor-made, functional proteins that they could synthesize and produce in live cells, creating new opportunities for protein engineering. Ali Madani, founder and CEO of Profluent, a company that uses other AI technology to design proteins, says this study “went the distance” in protein design and remarks that we’re now witnessing “the burgeoning of a new field.”

Proteins are made up of different combinations of amino acids linked together in folded chains, producing a boundless variety of 3D shapes. Predicting a protein’s 3D structure based on its sequence alone is an impossible task for the human mind, owing to numerous factors that govern protein folding, such as the sequence and length of the biomolecule’s amino acids, how it interacts with other molecules, and the sugars added to its surface. Instead, scientists have determined protein structure for decades using experimental techniques such as X-ray crystallography, which can resolve protein folds in atomic detail by diffracting X-rays through crystallized protein. But such methods are expensive, time-consuming, and depend on skillful execution. Still, scientists using these techniques have managed to resolve thousands of protein structures, creating a wealth of data that could then be used to train AI algorithms to determine the structures of other proteins. DeepMind famously demonstrated that machine learning could predict a protein’s structure from its amino acid sequence with the AlphaFold system and then improved its accuracy by training AlphaFold2 on 170,000 protein structures.
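To make the sequence-to-structure setup concrete, here is a minimal sketch of how a protein sequence is typically turned into numeric input for a machine-learning model. This is a generic one-hot encoding for illustration only, not AlphaFold2’s actual featurization (which uses multiple sequence alignments and many additional features).

```python
# The 20 standard amino acids, each identified by its one-letter code.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def one_hot_encode(sequence):
    """Encode an amino acid sequence as a (length x 20) one-hot matrix.

    Each row has a single 1 in the column of that residue's amino acid.
    This is a generic illustration, not any specific model's input format.
    """
    index = {aa: i for i, aa in enumerate(AMINO_ACIDS)}
    matrix = [[0] * len(AMINO_ACIDS) for _ in sequence]
    for row, aa in zip(matrix, sequence):
        row[index[aa]] = 1
    return matrix

features = one_hot_encode("MKTAYIAK")  # an 8 x 20 matrix for an 8-residue peptide
```

A structure-prediction network then maps such a representation (plus evolutionary and geometric features) to 3D atomic coordinates.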

A NASA-led research team used satellite imagery and artificial intelligence methods to map billions of discrete tree crowns down to a 50-cm scale. The images encompassed a large swath of arid northern Africa, from the Atlantic to the Red Sea. Allometric equations based on previous tree sampling allowed the researchers to convert imagery into estimates of tree wood, foliage, root size, and carbon sequestration.
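An allometric equation relates an easily measured quantity, such as crown area, to biomass via a fitted power law. The sketch below illustrates the general form; the coefficients and the carbon fraction are placeholder values, not the ones fitted by the NASA team.

```python
def allometric_carbon_kg(crown_area_m2, a=0.1, b=1.2, carbon_fraction=0.47):
    """Estimate one tree's carbon stock (kg) from its crown area (m^2).

    Power-law allometry: dry biomass = a * crown_area ** b, then scaled by
    the carbon fraction of dry wood. The coefficients a, b, and
    carbon_fraction are illustrative placeholders, not the paper's values.
    """
    biomass_kg = a * crown_area_m2 ** b
    return biomass_kg * carbon_fraction

# Summing per-tree estimates over every detected crown yields a regional
# stock without area-based extrapolation.
crown_areas = [12.5, 30.0, 8.2]  # m^2, e.g. from detected tree crowns
total_carbon = sum(allometric_carbon_kg(area) for area in crown_areas)
```

Because each detected crown contributes its own estimate, this per-tree accounting is what lets the method count "only the trees that are actually there" rather than extrapolating from sample plots.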

The new NASA estimate, published in the journal Nature, was surprisingly low. While a typical estimate of a region’s tree stock might rely on counting small areas and extrapolating the results upward, the technique NASA demonstrated counts only the trees that are actually there, down to the individual tree. Jules Bayala and Meine van Noordwijk published a News & Views article in the same journal commenting on the NASA team’s work.

The expectation that counting every scattered tree, in areas that previous models often represented with zero values, would raise the totals was offset by large overestimations elsewhere in the earlier assessments. In previous satellite-based attempts, cropland and ground vegetation adversely affected optical images; where radar was used, topography, wetlands, and irrigated areas affected the radar backscatter, predicting higher stocks than the current NASA estimates.

Short and sweet. Everyone needs a daily dose of Sabine.


Is science close to explaining everything about our universe? Physicist Sabine Hossenfelder reacts.


In his 1996 book “The End of Science”, John Horgan argued that scientists were close to answering nearly all of the big questions about our Universe. Was he right?

The theoretical physicist Sabine Hossenfelder doesn’t think so. As she points out, the Standard Model of particle physics, which describes the behavior of particles and their interactions, is still incomplete, as it does not include gravity. What’s more, the measurement problem in quantum mechanics remains unsolved, and understanding it could lead to significant technological advancements.

ALGORITHMS TURN PHOTO SNAPSHOTS INTO 3D VIDEO AND/OR IMMERSIVE SPACES. This technique has been termed “Neural Radiance Fields” (NeRF). Now Google wants to turn Google Maps into a gigantic 3D space. Three videos below demonstrate the method: 1) a simple demonstration, 2) Google’s immersive maps, and 3) using this principle to make dark, grainy photographs clear and immersive.
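At the core of NeRF is a simple volume-rendering rule: the color of each pixel is a weighted sum of colors sampled along a camera ray, where the weights come from the predicted density at each sample and the transmittance accumulated in front of it. A minimal sketch of that compositing step (the sample densities and colors would normally come from a trained network):

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """NeRF-style volume-rendering quadrature along a single ray.

    sigmas: densities at N samples along the ray
    colors: (N, 3) RGB values at those samples
    deltas: distances between consecutive samples

    alpha_i = 1 - exp(-sigma_i * delta_i); each sample's weight is its
    alpha times the transmittance through everything in front of it.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * transmittance
    return (weights[:, None] * colors).sum(axis=0)
```

An opaque sample near the camera receives nearly all the weight, which is how the model learns to place surfaces: training adjusts the densities so that rendered pixels match the input photographs.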

This technique is different from “time of flight” cameras, which build a 3D snapshot from the time light takes to travel to and from objects. Combined with that technology, and with a constellation of microsatellites no larger than cell phones, a new version of “Google Earth” with live, continual imaging of the whole planet could eventually be envisioned.
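The time-of-flight principle mentioned above reduces to one line of arithmetic: light travels to the object and back, so the distance is half the round-trip time multiplied by the speed of light.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance_m(round_trip_seconds):
    """Distance to an object from a time-of-flight measurement.

    The light pulse covers the distance twice (out and back),
    so distance = c * t / 2.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A 20-nanosecond round trip corresponds to an object about 3 m away.
distance = tof_distance_m(20e-9)
```

The nanosecond-scale timing this requires is why time-of-flight sensors need specialized hardware rather than an ordinary camera shutter.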

2) https://www.youtube.com/watch?v=EUP5Fry24ao

3) We present RawNeRF, a method for optimizing neural radiance fields directly on linear raw image data. More details at https://bmild.github.io/rawnerf.

Summary: Examining the cognitive abilities of the AI language model GPT-3, researchers found the algorithm can keep up with humans in some areas but falls behind in others due to a lack of real-world experience and interaction.

Source: Max Planck Institute.

Researchers at the Max Planck Institute for Biological Cybernetics in Tübingen have examined the general intelligence of the language model GPT-3, a powerful AI tool.