Archive for the ‘mapping’ category: Page 25

Sep 5, 2022

Apple Researchers Develop NeuMan: A Novel Computer Vision Framework that can Generate Neural Human Radiance Field from a Single Video

Posted by in categories: augmented reality, computing, mapping, neuroscience

Neural Radiance Fields (NeRF) greatly improved the quality of novel view synthesis when they were first introduced. NeRF was originally proposed as a way to reconstruct a static scene from a series of posed photographs, but it has been swiftly extended to dynamic and uncalibrated scenes. With the help of sizable controlled datasets, recent work also concentrates on animating these human radiance field models, broadening the application domain of radiance-field-based modeling to augmented reality experiences. In this study, the researchers focus on the case where just one video is given. They aim to reconstruct the human and static scene models and enable novel-pose rendering of the person without the need for pricey multi-camera setups or manual annotations.

Neural Actor can render novel human poses, but it requires multiple videos. Even with the most recent improvements in NeRF techniques, this is far from a simple task: the NeRF models must be trained with many cameras, constant lighting and exposure, transparent backgrounds, and precise human geometry. According to the table below, HyperNeRF reconstructs a dynamic scene from a single video but cannot be controlled by human poses. ST-NeRF uses many cameras to reconstruct each person with a time-dependent NeRF model, although editing is limited to changing the bounding box. HumanNeRF creates a human model from a single video with carefully annotated masks, but it does not demonstrate generalization to novel poses.

Vid2Actor can produce novel human poses with a model trained on a single video, but it cannot model the surroundings. They solve these issues by proposing NeuMan, a framework that can render novel human poses and novel viewpoints while reconstructing both the person and the scene from a single in-the-wild video. The high-quality pose-driven rendering in Figure 1 is made possible by NeuMan, a cutting-edge framework for training NeRF models for both the human and the scene. They first estimate the camera poses, the sparse scene model, the depth maps, the human pose, the human shape, and the human masks from a moving camera’s video.
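All of the systems compared here build on NeRF’s differentiable volume rendering: the color of a camera ray is an integral of emitted radiance weighted by accumulated transmittance. For context, the standard NeRF formulation (not NeuMan-specific) is:

```latex
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma\big(\mathbf{r}(t)\big)\,\mathbf{c}\big(\mathbf{r}(t), \mathbf{d}\big)\,dt,
\qquad
T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma\big(\mathbf{r}(s)\big)\,ds\right)
```

where $\sigma$ is the volume density and $\mathbf{c}$ the view-dependent color along ray $\mathbf{r}(t)$ with direction $\mathbf{d}$; NeuMan trains models of this kind for the human and the scene separately.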

Sep 5, 2022

Anders Sandberg — Grand Futures — Thinking Truly Long Term

Posted by in categories: computing, mapping, space

Synopsis: How can we think rigorously about the far future, and use this to guide near-term projects? In this talk I will outline my “grand futures” project of mapping the limits of what advanced civilizations can achieve – in terms of survival, expanding in space, computation, mastery over matter and energy, and so on – and how this may interact with different theories about what truly has value.

For some fun background reading, see ‘What is the upper limit of value?’, which Anders Sandberg co-authored with David Manheim.

Sep 2, 2022

Google’s immersive Street View could be glimpse of metaverse

Posted by in categories: internet, mapping

Fifteen years after its launch, a Google Maps feature that lets people explore faraway places as though standing right there is providing a glimpse of the metaverse being heralded as the future of the internet.

There was not yet talk of online life moving to virtual worlds when a “far-fetched” musing by Google co-founder Larry Page prompted Street View, which lets users of the company’s free navigation service see imagery of map locations from the perspective of being there.

Now the metaverse is a tech-world buzz, with companies including Facebook parent Meta investing in creating online realms where people represented by videogame-like characters work, play, shop and more.

Sep 1, 2022

Robot Dogs and Drones 3D Mapping ‘Ghost Ships’ with Laser-based Sensors

Posted by in categories: drones, mapping, military, robotics/AI, virtual reality

Sounds like a sci-fi movie, right? But it’s not. Naval Surface Warfare Center, Philadelphia Division is testing laser-based sensors on robot dogs and drones as a way to perform battle damage assessment, repair, installation, and modernization – all remotely.

NSWCPD’s Advanced Data Acquisition Prototyping Technology Virtual Environments (ADAPT.VE) engineers and scientists are testing new applications for light detection and ranging (LiDAR) to build 3D ship models aboard the ‘mothballed’ fleet of decommissioned ships at the Philadelphia Navy Yard.
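At its core, building a 3D ship model from LiDAR means converting each return (a range plus two beam angles) into a Cartesian point. A minimal sketch of that conversion, purely for illustration (not the ADAPT.VE pipeline):

```python
import math

def lidar_to_points(scans):
    """Convert LiDAR returns (range_m, azimuth_rad, elevation_rad)
    into Cartesian (x, y, z) points for a 3D point cloud."""
    pts = []
    for r, az, el in scans:
        x = r * math.cos(el) * math.cos(az)
        y = r * math.cos(el) * math.sin(az)
        z = r * math.sin(el)
        pts.append((x, y, z))
    return pts

# A single return 10 m straight ahead at deck level:
pts = lidar_to_points([(10.0, 0.0, 0.0)])
# pts[0] is (10.0, 0.0, 0.0)
```

Millions of such points, registered across scanner positions, form the 3D ship model.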

Aug 29, 2022

Seven New Areas in the Insular Cortex Identified

Posted by in categories: mapping, neuroscience

Summary: Researchers at the Human Brain Project have identified and mapped 7 new areas of the insular cortex.

Source: Human Brain Project.

All newly detected areas are now available as 3D probability maps in the Julich Brain Atlas, and can be openly accessed via the HBP’s EBRAINS infrastructure.

Aug 24, 2022

Human Brain Project researchers map four new brain areas involved in many cognitive processes

Posted by in categories: mapping, neuroscience

The human dorsolateral prefrontal cortex is involved in cognitive control including attention selection, working memory, decision making and planning of actions. Changes in this brain region are suspected to play a role in schizophrenia, obsessive-compulsive disorder, depression and bipolar disorder, making it an important research target. Researchers from Forschungszentrum Jülich and Heinrich-Heine University Düsseldorf now provide detailed, three-dimensional maps of four new areas of the brain region.

In order to identify the borders between brain areas, the researchers statistically analysed the distribution of cells (the cytoarchitecture) in 10 post mortem human brains. After reconstructing the mapped areas in 3D, the researchers superimposed the maps of the 10 different brains and generated probability maps that reflect how much the localization and size of each area varies among individuals.

High inter-subject variability has been a major challenge for prior attempts to map this brain region, leading to considerable discrepancies in pre-existing maps and inconclusive information, which made it very difficult to understand the specific involvement of individual brain areas in the different cognitive functions. The new probabilistic maps account for the variability between individuals and can be directly superimposed with datasets from functional studies in order to correlate structure and function of the areas.
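The superposition step described above can be sketched in a few lines: given spatially aligned binary masks of one area, one per subject, the voxelwise mean is the probability map. This is a toy illustration; the actual Julich Brain workflow involves far more elaborate registration and reconstruction:

```python
import numpy as np

def probability_map(masks):
    """Average aligned binary masks (one per subject) into a voxelwise
    probability map: the fraction of subjects in which each voxel
    belongs to the mapped area."""
    stack = np.stack(masks).astype(float)  # shape: (n_subjects, ...)
    return stack.mean(axis=0)

# Toy example: 10 "subjects", each contributing a 2x2 mask of one area.
masks = [np.array([[1, 0], [1, 1]])] * 7 + [np.array([[1, 1], [0, 1]])] * 3
pmap = probability_map(masks)
# pmap[0, 0] == 1.0: that voxel lies inside the area in all 10 subjects;
# pmap[0, 1] == 0.3: in only 3 of 10.
```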

Aug 23, 2022

Bionic underwater vehicle inspired by fish with enlarged pectoral fins

Posted by in categories: cyborgs, mapping, robotics/AI, transhumanism, transportation

Underwater robots are being widely used as tools in a variety of marine tasks. The RobDact is one such bionic underwater vehicle, inspired by a fish called Dactylopteridae known for its enlarged pectoral fins. A research team has combined computational fluid dynamics and a force measurement experiment to study the RobDact, creating an accurate hydrodynamic model of the RobDact that allows them to better control the vehicle.

The team published their findings in Cyborg and Bionic Systems on May 31, 2022.

Underwater robots are now used for many marine tasks, including in the fishery industry, underwater exploration, and mapping. Most traditional underwater robots are driven by a propeller, which is effective for cruising at a stable speed. However, underwater robots often need to move or hover at low speeds in turbulent waters while performing a specific task, and it is difficult for a propeller to move the robot in these conditions. Another factor when an underwater robot is moving at low speeds in unstable flowing waters is the propeller’s “twitching” movement, which generates unpredictable fluid pulses that reduce the robot’s efficiency.

Aug 16, 2022

Uncovering nature’s patterns at the atomic scale in living color

Posted by in categories: information science, mapping, robotics/AI

Color coding makes aerial maps much more easily understood. Through color, we can tell at a glance where there is a road, forest, desert, city, river or lake.

Working with several universities, the U.S. Department of Energy’s (DOE) Argonne National Laboratory has devised a method for creating color-coded graphs of large volumes of data from X-ray analysis. This new tool uses computational data sorting to find clusters related to physical properties, such as an atomic distortion in a material. It should greatly accelerate future research on structural changes at the atomic scale induced by varying temperature.

The research team published their findings in the Proceedings of the National Academy of Sciences in an article titled “Harnessing interpretable and unsupervised machine learning to address big data from modern X-ray diffraction.”
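The “computational data sorting” amounts to unsupervised clustering of per-pixel intensity-versus-temperature trajectories, with the cluster label serving as the color code. A toy numpy sketch with a minimal k-means (an illustration of the idea, not the authors’ code):

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Minimal k-means: cluster the rows of X into k groups.
    Deterministic init: centers spread evenly over the row indices."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(iters):
        # Assign each row to its nearest center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Recompute centers, keeping the old one if a cluster empties.
        new = []
        for j in range(k):
            pts = X[labels == j]
            new.append(pts.mean(axis=0) if len(pts) else centers[j])
        centers = np.array(new)
    return labels

# Toy "diffraction" data: intensity-vs-temperature trajectories for 10
# reciprocal-space pixels, showing two distinct temperature behaviors.
T = np.linspace(0, 1, 8)
rising = np.stack([T + 0.01 * i for i in range(5)])              # order-parameter-like
flat = np.stack([np.full_like(T, 0.5) + 0.01 * i for i in range(5)])  # featureless
X = np.vstack([rising, flat])
labels = kmeans(X, 2)
# Pixels with the same temperature behavior get the same label, which can
# then be painted as a color on the reciprocal-space map.
```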

Aug 16, 2022

Powerful Radio Pulses Originating Deep in the Cosmos Probe Hidden Matter Around Galaxies

Posted by in categories: mapping, space

Powerful cosmic radio pulses originating deep in the universe can be used to study hidden pools of gas cocooning nearby galaxies, according to a new study that was published last month in the journal Nature Astronomy.

So-called fast radio bursts, or FRBs, are pulses of radio waves that typically originate millions to billions of light-years away. (Radio waves are electromagnetic radiation, like the light we see with our eyes, but with longer wavelengths and lower frequencies.) The first FRB was discovered in 2007, and since then, hundreds more have been detected. In 2020, Caltech’s STARE2 instrument (Survey for Transient Astronomical Radio Emission 2) and Canada’s CHIME (Canadian Hydrogen Intensity Mapping Experiment) detected a massive FRB that went off in our own Milky Way galaxy. Those earlier findings helped confirm the theory that the energetic events most likely originate from dead, magnetized stars called magnetars.

As more and more FRBs roll in, scientists are now investigating how they can be used to study the gas that lies between us and the bursts. Specifically, they would like to use the FRBs to probe halos of diffuse gas that surround galaxies. As the radio pulses travel toward Earth, the gas enveloping the galaxies is expected to slow the waves down and disperse the radio frequencies. In the new study, the research team looked at a sample of 474 distant FRBs detected by CHIME, which has discovered the most FRBs to date. They showed that the subset of two dozen FRBs that passed through galactic halos were indeed slowed down more than non-intersecting FRBs.
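The slowing described above is quantified by the dispersion measure (DM), the column density of free electrons along the line of sight: lower frequencies arrive later in proportion to the DM. A small sketch using the standard dispersion-delay formula (the DM value below is invented for illustration):

```python
# Dispersion constant, approximately 4.149 ms GHz^2 per (pc cm^-3).
K_DM_MS = 4.149

def dispersion_delay_ms(dm, f_low_ghz, f_high_ghz):
    """Arrival-time delay (ms) between the high- and low-frequency
    edges of the band, for a dispersion measure dm in pc cm^-3."""
    return K_DM_MS * dm * (f_low_ghz ** -2 - f_high_ghz ** -2)

# A hypothetical burst with DM = 500 pc cm^-3 observed across
# CHIME's 0.4-0.8 GHz band arrives ~9.7 seconds later at the bottom
# of the band than at the top:
delay = dispersion_delay_ms(500, 0.4, 0.8)
```

Extra dispersion beyond what the intervening intergalactic medium predicts is the signature of gas in a foreground galaxy’s halo.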

Aug 14, 2022

Visual-Inertial Multi-Instance Dynamic SLAM with Object-level Relocalisation

Posted by in categories: electronics, mapping

Simultaneous Localisation and Mapping (SLAM) is the task of simultaneously estimating the sensor pose and the surrounding scene geometry. However, most existing SLAM systems assume a static world, which is unrealistic.

A recent paper on arXiv.org proposes a robust object-level dynamic SLAM system.
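The pose-estimation half of SLAM can be illustrated with dead-reckoning in the plane: composing a robot pose with a relative motion expressed in the robot’s own frame. This is a toy sketch of the prediction step only; real SLAM systems jointly optimize poses and map geometry against sensor observations:

```python
import math

def compose(pose, motion):
    """Compose a 2D pose (x, y, heading) with a relative motion
    (dx, dy, dtheta) given in the robot frame: rotate the motion
    into the world frame, then add it."""
    x, y, th = pose
    dx, dy, dth = motion
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

# Drive 1 m forward while turning 90 degrees, then 1 m forward again:
p = compose((0.0, 0.0, 0.0), (1.0, 0.0, math.pi / 2))
p = compose(p, (1.0, 0.0, 0.0))
# p is approximately (1.0, 1.0, pi/2)
```

Dynamic SLAM, as in the paper above, must additionally decide which observations belong to moving objects so they do not corrupt the static map.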
