
A few select users can now enjoy Google Maps’ immersive view, according to a company blog post published last month. The new feature is meant to let users reimagine how they explore and navigate, while helping them make more sustainable choices.

“Immersive view is an entirely new way to explore a place — letting you feel like you’re right there, even before you visit. Using advances in AI and computer vision, immersive view fuses billions of Street View and aerial images to create a rich, digital model of the world. And it layers helpful information on top like the weather, traffic, and how busy a place is,” said Chris Phillips, VP & General Manager, Geo, in the blog.

In this video, we will explore the positional system of the brain — hippocampal place cells. We will see how it relates to contextual memory and mapping of more abstract features.

OUTLINE:
0:00 Introduction.
0:53 Hippocampus.
1:27 Discovery of place cells.
2:56 3D navigation.
3:51 Role of place cells.
4:11 Virtual reality experiment.
7:47 Remapping.
11:17 Mapping of non-spatial dimension.
13:36 Conclusion.

_____________
REFERENCES:

1) Anderson, M.I., Jeffery, K.J., 2003. Heterogeneous Modulation of Place Cell Firing by Changes in Context. J. Neurosci. 23, 8827–8835. https://doi.org/10.1523/JNEUROSCI.23-26-08827.

2) Aronov, D., Nevers, R., Tank, D.W., 2017. Mapping of a non-spatial dimension by the hippocampal–entorhinal circuit. Nature 543, 719–722. https://doi.org/10.1038/nature21692

3) Bostock, E., Muller, R.U., Kubie, J.L., 1991. Experience-dependent modifications of hippocampal place cell firing. Hippocampus 1, 193–205. https://doi.org/10.1002/hipo.



For decades we have dreamed of true holographic displays for entertainment, communication, and education. Star Wars had 3D projections rendered in real-time — the definition wasn’t great, but they were communicating across interplanetary distances — and Avatar had holographic maps showcasing the terrain of Pandora. In reality, we mostly have 2D images which show dimension and depth when viewed from different angles. That might be on the verge of changing.

Pierre-Alexandre Blanche from the Wyant College of Optical Sciences at the University of Arizona recently published a paper in Light: Advanced Manufacturing which acts as a roadmap toward true 3D holographic displays.

“3D movies exist already, and the effects are amazing,” Blanche told SYFY WIRE. “But we’re working toward diffraction-based display that will produce all the human visual cues. That’s what’s missing today in the world of 3D display. They’re always missing one or more visual cues.”

Deep learning has made significant strides in text generation, translation, and completion in recent years. Algorithms trained to predict words from their surrounding context have been instrumental in these advances. However, despite access to vast amounts of training data, deep language models still struggle with tasks like long story generation, summarization, coherent dialogue, and information retrieval. These models have been shown to capture syntactic and semantic properties only imperfectly, and their linguistic understanding remains superficial. Predictive coding theory suggests that the human brain makes predictions over multiple timescales and levels of representation across the cortical hierarchy. Although studies have previously shown evidence of speech predictions in the brain, the nature of the predicted representations and their temporal scope remain largely unknown. Recently, researchers analyzed the brain signals of 304 individuals listening to short stories and found that enhancing deep language models with long-range and multi-level predictions improved brain mapping.

The results of this study revealed a hierarchical organization of language predictions in the cortex. These findings align with predictive coding theory, which suggests that the brain makes predictions over multiple levels and timescales of expression. Researchers can bridge the gap between human language processing and deep learning algorithms by incorporating these ideas into deep language models.

The current study evaluated specific hypotheses of predictive coding theory by examining whether the cortical hierarchy predicts several levels of representation, spanning multiple timescales, beyond the short-range, word-level predictions usually learned by deep language algorithms. The researchers compared modern deep language models against the brain activity of 304 people listening to spoken tales, and found that the activations of deep language algorithms supplemented with long-range and high-level predictions best describe brain activity.
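The comparison at the heart of such studies is typically an encoding model: a linear map fitted from a language model's activations to recorded brain responses, scored by how well its predictions match the recordings. The sketch below illustrates that idea on synthetic data; the array shapes, the ridge penalty, and the `encoding_score` helper are all invented for illustration, and the actual study's pipeline differs.

```python
import numpy as np

def encoding_score(activations, brain_signal, alpha=1.0):
    """Fit a ridge-regression encoding model mapping model activations
    (time x features) to a brain signal (time,), then return the Pearson
    correlation between the model's fit and the signal.
    Toy sketch: real studies score on held-out data, per voxel/sensor."""
    X, y = activations, brain_signal
    n_feat = X.shape[1]
    # Closed-form ridge solution: w = (X^T X + alpha * I)^-1 X^T y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ y)
    pred = X @ w
    return np.corrcoef(pred, y)[0, 1]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # hypothetical model activations
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=200)  # synthetic "brain" signal
score = encoding_score(X, y)
```

Because the synthetic signal is mostly linear in the activations, the score comes out close to 1; in real recordings the correlations are far smaller, and the study's finding is that they rise when the activations include long-range, high-level predictions.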

After 12 years of work, a large team of researchers from the UK, US, and Germany has completed the largest and most complex brain map to date, describing every neural connection in the brain of a larval fruit fly.

Though nowhere near the size and complexity of a human brain, it still covers a respectable 548,000 connections between a total of 3,016 neurons.

The mapping identifies the different types of neurons and their pathways, including interactions between the two sides of the brain, and between the brain and the ventral nerve cord. This brings scientists closer to understanding how signals passing from neuron to neuron give rise to behavior and learning.
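A connectome like this is naturally represented as a directed, weighted graph: neurons are nodes, and each edge records how many synapses connect one neuron to another. A minimal sketch with invented neuron names and synapse counts (the real map has 3,016 neurons and 548,000 connections):

```python
from collections import defaultdict

# Toy connectome: directed synaptic connections as (pre, post, synapse count).
# All IDs and counts here are hypothetical, for illustration only.
edges = [
    ("sensory_1", "inter_1", 5),
    ("sensory_1", "inter_2", 2),
    ("inter_1", "motor_1", 7),
    ("inter_2", "motor_1", 3),
]

# Adjacency list: neuron -> list of (downstream neuron, synapse count).
adjacency = defaultdict(list)
for pre, post, count in edges:
    adjacency[pre].append((post, count))

# Downstream partners of a neuron, strongest connection first.
partners = sorted(adjacency["sensory_1"], key=lambda p: -p[1])
```

Tracing pathways, such as a sensory-to-motor route, then becomes an ordinary graph traversal over this structure.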

South Korea’s giant leap into space started with a small step on the internet.

With treaties banning certain tech transfers, South Korea’s rocket scientists turned to a search service to find an engine they could mimic as the country embarked on an ambitious plan to build an indigenous space program. The nation launched its first home-grown rocket called Nuri in October 2021.

First evidence of magnetic fields in the Universe’s galactic web.

The cosmic web is the name astronomers give to the structure of our Universe. It refers to the clusters, filaments, dark matter, and voids that make up the basis of this ever-expanding Universe. We can observe this via optical telescopes by mapping the locations of galaxies.

In new research published in Science Advances, scientists claim to have observed, for the first time, shockwaves moving through the galaxy clusters and filaments that make up the galactic or cosmic web, a phenomenon that had long remained a mystery.

Algorithms can now turn photo snapshots into 3D video or an immersive space, a technique termed “neural radiance fields.” Now Google wants to turn Google Maps into a gigantic 3D space. Three videos below demonstrate the method: 1) a simple demonstration, 2) Google’s immersive maps, and 3) using this principle to make dark, grainy photographs clear and immersive.
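At the core of a neural radiance field is a function that maps a 3D position to a color and a density, which are then alpha-composited along each camera ray to form a pixel. A toy sketch of that volume-rendering step, assuming a trivial constant-density “field” in place of a trained network (function names and parameters are ours; real NeRF adds view direction and hierarchical sampling):

```python
import numpy as np

def render_ray(field, origin, direction, n_samples=64, near=0.1, far=4.0):
    """Alpha-composite colors along one ray through a radiance field.
    `field(points)` returns (rgb, sigma) per sample point."""
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction          # sample positions on the ray
    rgb, sigma = field(pts)                        # color and density per point
    delta = t[1] - t[0]                            # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)           # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha                        # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)    # composited pixel color

def red_fog(pts):
    """Hypothetical stand-in for a trained network: uniform red fog."""
    rgb = np.tile([1.0, 0.0, 0.0], (len(pts), 1))
    sigma = np.full(len(pts), 2.0)
    return rgb, sigma

pixel = render_ray(red_fog, np.zeros(3), np.array([0.0, 0.0, 1.0]))
```

A trained NeRF replaces `red_fog` with a neural network fitted so that rendering rays from the original camera poses reproduces the input photographs.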

This technique is different from “time of flight” cameras, which build a 3D snapshot from the time light takes to travel to and from objects. Combined with that technology, and with a constellation of microsatellites the size of cell phones, a new version of “Google Earth” with live, continual imaging of the whole planet could eventually be envisioned.
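The time-of-flight principle itself is simple arithmetic: the light covers the camera-to-object distance twice, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the helper name is ours):

```python
# Speed of light in vacuum, meters per second.
C = 299_792_458.0

def tof_distance_m(round_trip_s):
    """Distance implied by a time-of-flight measurement: light travels
    out and back, so divide the round trip by two."""
    return C * round_trip_s / 2.0

# A 20-nanosecond round trip corresponds to an object about 3 m away.
print(round(tof_distance_m(20e-9), 2))  # → 3.0
```

The practical challenge is timing resolution: resolving depth to a few millimeters requires measuring delays on the order of tens of picoseconds.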

2) https://www.youtube.com/watch?v=EUP5Fry24ao

3)


We present RawNeRF, a method for optimizing neural radiance fields directly on linear raw image data. More details at https://bmild.github.io/rawnerf.


GenAug, developed by Meta AI and the University of Washington, utilizes pre-trained text-to-image generative artificial intelligence models to enable imitation-based learning in practical robots. Stanford artificial intelligence researchers have proposed a method, called ATCON, that drastically improves the quality of attention maps and classification performance on unseen data. Google’s new SingSong AI can generate instrumental music that complements your singing.

AI News Timestamps:
0:00 New AI Robot Tech From Meta.
2:43 Computer Vision Breakthrough From Stanford.
4:15 Masterworks Art.
6:10 Google AI Music Generator.

#technology #tech #ai