Archive for the ‘mapping’ category: Page 29

Nov 1, 2022

Digital mapmaking innovations are revolutionizing travel

Posted by in categories: mapping, transportation

GPS for scuba divers, cars with built-in holographic maps, and other new tech mean ‘you’ll never get lost again.’

Oct 27, 2022

Martian Impacts Seen and Heard

Posted by in categories: mapping, space

Linking acoustic and seismic signals from meteorite strikes to orbiter images is a step toward mapping the planet’s interior.

Oct 26, 2022

Entanglement-enhanced matter-wave interferometry in a high-finesse cavity

Posted by in categories: mapping, particle physics, quantum physics

Light-pulse matter-wave interferometers exploit the quantized momentum kick given to atoms during absorption and emission of light to split atomic wave packets so that they traverse distinct spatial paths at the same time. Additional momentum kicks then return the atoms to the same point in space to interfere the two matter-wave wave packets. The key to the precision of these devices is the encoding of information in the phase ϕ that appears in the superposition of the two quantum trajectories within the interferometer. This phase must be estimated from quantum measurements to extract the desired information. For N atoms, the phase estimation is fundamentally limited by the independent quantum collapse of each atom to an r.m.s. angular uncertainty \(\Delta\theta_{\rm SQL} = 1/\sqrt{N}\) rad, known as the standard quantum limit (SQL) [2].
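
To make the 1/√N scaling concrete, here is a minimal Monte Carlo sketch (an illustration built only on the standard two-port readout picture, not code from the paper): each of N independent atoms is detected in one of two output ports with probability set by the interferometer phase, the phase is estimated from the measured port fraction, and the r.m.s. estimation error at mid-fringe tracks the standard quantum limit of 1/√N.

```python
import numpy as np

rng = np.random.default_rng(0)

def phase_estimate_error(N, phi=np.pi / 2, trials=2000):
    """Monte Carlo r.m.s. error of a phase estimate from N independent atoms.

    Each atom is detected in one of two interferometer output ports with
    probability p = (1 + cos(phi)) / 2; the phase is estimated by inverting
    the measured port fraction. Independent collapse of each atom limits the
    r.m.s. error to roughly 1/sqrt(N) near mid-fringe (phi = pi/2).
    """
    p = 0.5 * (1.0 + np.cos(phi))
    counts = rng.binomial(N, p, size=trials)      # atoms detected in port A
    p_hat = counts / N
    phi_hat = np.arccos(np.clip(2.0 * p_hat - 1.0, -1.0, 1.0))
    return np.sqrt(np.mean((phi_hat - phi) ** 2))

for N in (100, 1_000, 10_000):
    print(f"N = {N:6d}: rms error = {phase_estimate_error(N):.4f} rad, "
          f"SQL = 1/sqrt(N) = {1 / np.sqrt(N):.4f} rad")
```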

Here we demonstrate a matter-wave interferometer [31,32] with a directly observed interferometric phase noise below the SQL, a result that combines two of the most striking features of quantum mechanics: the concept that a particle can appear to be in two places at once and entanglement between distinct particles. This work is also a harbinger of future quantum many-body simulations with cavities [26,27,28,29] that will explore beyond-mean-field physics by directly modifying and probing quantum fluctuations, or in which the quantum measurement process induces a phase transition [30].

Quantum entanglement between the atoms allows the atoms to conspire together to reduce their total quantum noise relative to their total signal [1,3]. Such entanglement has been generated between atoms using direct collisional [33,34,35,36,37,38,39] or Coulomb [40,41] interactions, including relative atom number squeezing between matter waves in spatially separated traps [33,35,39] and mapping of internal entanglement onto the relative atom number in different momentum states [42]. A trapped matter-wave interferometer with relative number squeezing was realized in ref. 35, but the interferometer’s phase was antisqueezed and thus the phase resolution was above the SQL.

Oct 19, 2022

NASA telescope takes 12-year time-lapse movie of entire sky

Posted by in categories: entertainment, mapping, space travel

Pictures of the sky can show us cosmic wonders; movies can bring them to life. Movies from NASA’s NEOWISE space telescope are revealing motion and change across the sky.

Every six months, NASA’s Near-Earth Object Wide-field Infrared Survey Explorer, or NEOWISE, completes one trip halfway around the Sun, taking images in all directions. Stitched together, those images form an “all-sky” map showing the location and brightness of hundreds of millions of objects. Using 18 all-sky maps produced by the spacecraft (with the 19th and 20th to be released in March 2023), scientists have created what is essentially a time-lapse movie of the sky, revealing changes that span a decade.

Oct 16, 2022

The neural basis of syllogistic reasoning: An event-related potential study

Posted by in categories: mapping, neuroscience

Spatiotemporal analysis of brain activation during syllogistic reasoning and during the execution of a baseline task (BST) was performed in 14 healthy adult participants using high-density event-related brain potentials (ERPs). The following results were obtained: First, the valid syllogistic reasoning task (VSR) elicited a greater positive ERP deflection than the invalid syllogistic reasoning task (ISR) and the BST between 300 and 400 ms after the onset of the minor premise. Dipole source analysis of the difference waves (VSR-BST and VSR-ISR) indicated that the positive components were localized in the vicinity of the occipito-temporal cortex, possibly related to visual premise processing. Second, VSR and ISR developed greater negativity than BST at 600–700 ms. Dipole source analysis of the difference waves (VSR-BST and ISR-BST) indicated that the negative components were mainly localized near the medial frontal cortex/anterior cingulate cortex, possibly related to the manipulation and integration of premise information. Third, both VSR and ISR elicited a more positive ERP deflection than BST between 2,500 and 3,000 ms. Voltage maps of the difference waves (VSR-BST and VSR-ISR) showed strong activity over right frontal scalp regions. The results indicate that the reasoning tasks may require more mental effort for spatial processing in working memory.
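
For readers unfamiliar with the terminology, the following is a generic sketch (with made-up array shapes, placeholder random data, and an assumed 1 kHz sampling rate, not the study's actual pipeline) of how a condition difference wave such as VSR-BST is formed from trial-averaged ERPs and how a mean amplitude is then read out in a window such as 300–400 ms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes only: (trials, channels, time points), 1 kHz sampling assumed,
# epoch running from -200 ms to 3000 ms relative to minor-premise onset.
t = np.arange(-200, 3001)                              # time axis in ms
vsr_trials = rng.standard_normal((40, 128, t.size))    # placeholder VSR epochs
bst_trials = rng.standard_normal((40, 128, t.size))    # placeholder baseline epochs

# Trial-averaged ERPs per condition, then the VSR-BST difference wave.
erp_vsr = vsr_trials.mean(axis=0)                      # (channels, time)
erp_bst = bst_trials.mean(axis=0)
diff_wave = erp_vsr - erp_bst                          # the "VSR-BST" difference wave

# Mean amplitude of the difference wave in the 300-400 ms window.
window = (t >= 300) & (t <= 400)
mean_amp_300_400 = diff_wave[:, window].mean(axis=1)   # one value per channel
print(mean_amp_300_400.shape)                          # -> (128,)
```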

Oct 11, 2022

Study upgrades one of the largest databases of neuronal types

Posted by in categories: bioengineering, computing, mapping, neuroscience

A study led by researchers from the Institute Cajal of the Spanish National Research Council (CSIC) in Madrid, Spain, in collaboration with the Bioengineering Department of George Mason University in Virginia, U.S., has updated one of the world’s largest databases of neuronal types, Hippocampome.org.

The study, which is published in the journal PLOS Biology, represents the most comprehensive mapping performed to date between in vivo recordings and identified neuronal types. This major breakthrough may enable biologically meaningful computer modeling of the full neuronal circuit of the hippocampus, a region of the brain involved in memory function.

Circuits of the mammalian cerebral cortex are made up of two types of neurons: Excitatory neurons, which release a neurotransmitter called glutamate, and inhibitory neurons, which release GABA (gamma-aminobutyric acid), the main inhibitory neurotransmitter of the central nervous system. “A balanced dialogue between the ‘excitatory’ and ‘inhibitory’ activities is critical for . Identifying the contribution from the several types of excitatory and inhibitory cells is essential to better understand brain operation,” explains Liset Menendez de la Prida, the Director of the Laboratorio de Circuitos Neuronales at the Institute Cajal, who leads the study at the CSIC.

Oct 5, 2022

Latest Machine Learning Research at MIT Presents a Novel ‘Poisson Flow’ Generative Model (PFGM) That Maps any Data Distribution into a Uniform Distribution on a High-Dimensional Hemisphere

Posted by in categories: mapping, physics, robotics/AI, transportation

Deep generative models are a popular data generation strategy used to generate high-quality samples in pictures, text, and audio and to improve semi-supervised learning, domain generalization, and imitation learning. Current deep generative models, however, have shortcomings such as unstable training objectives (GANs) and low sample quality (VAEs, normalizing flows). Although recent developments in diffusion and score-based models attain sample quality comparable to GANs without adversarial training, the stochastic sampling procedure in these models is sluggish. New strategies for stabilizing the training of CNN-based or ViT-based GAN models have also been presented.

Backward ODE samplers (normalizing flows) have been suggested to accelerate the sampling process, but these approaches have yet to outperform their SDE equivalents. The researchers introduce a novel “Poisson flow” generative model (PFGM) that takes advantage of a surprising physics fact that extends to N dimensions. They interpret N-dimensional data items x (say, pictures) as positive electric charges in the z = 0 plane of an N+1-dimensional environment filled with a viscous liquid like honey. Motion in such a viscous fluid converts any planar charge distribution into a uniform angular distribution.

A positive charge with z > 0 will be repelled by the other charges and will move away from the plane, ultimately reaching an imaginary sphere of radius r. They demonstrate that, in the r → ∞ limit, if the initial charge distribution is released slightly above z = 0, this rule of motion yields a uniform distribution of hemisphere crossings. They reverse the forward process by generating a uniform distribution of negative charges on the hemisphere, then tracking their paths back to the z = 0 plane, where they end up distributed according to the data distribution.
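
As a rough, self-contained illustration of the idea (a toy sketch, not the authors' released implementation), the snippet below treats a 2-D toy dataset as unit charges in the z = 0 plane of a 3-D space, evaluates the empirical Poisson field they source, and generates samples by starting high on a large sphere and taking small Euler steps against the field until the neighbourhood of the data is reached. The ring dataset, step sizes, radii, and stopping distance are arbitrary choices made for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D "data": 512 points on the unit circle, treated as unit positive
# charges sitting in the z = 0 plane of a 3-D ambient space (N = 2, so N+1 = 3).
angles = rng.uniform(0.0, 2.0 * np.pi, size=512)
data = np.stack([np.cos(angles), np.sin(angles), np.zeros_like(angles)], axis=1)

def poisson_field(x, charges, eps=1e-3):
    """Empirical field at x: sum_i (x - y_i) / ||x - y_i||^3 (Coulomb in 3-D)."""
    diff = x - charges                                   # (M, 3)
    dist = np.linalg.norm(diff, axis=1, keepdims=True) + eps
    return (diff / dist**3).sum(axis=0)

def generate_one(radius=6.0, step=1e-2, n_steps=5000, stop_dist=0.05):
    """Crude generative pass: start high on a sphere of the given radius and
    take small Euler steps *against* the field until a charge is approached.

    With the raw empirical field the trajectory terminates near one of the
    training charges; the actual PFGM instead learns a neural approximation
    of the (normalized) field so that samples are not just copies of the data.
    """
    v = rng.normal(size=3)
    v[2] = abs(v[2]) + 0.2            # keep the start clearly above the plane
    x = radius * v / np.linalg.norm(v)
    for _ in range(n_steps):
        if np.linalg.norm(x - data, axis=1).min() < stop_dist:
            break                     # reached the neighbourhood of the data
        e = poisson_field(x, data)
        x = x - step * e / (np.linalg.norm(e) + 1e-12)   # move against the field
    return x[:2]                      # drop z, keep the 2-D sample

samples = np.array([generate_one() for _ in range(5)])
print(np.linalg.norm(samples, axis=1))   # radii should be close to 1 (the ring)
```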

Oct 3, 2022

High-throughput mapping of a whole rhesus monkey brain at micrometer resolution

Posted by in categories: mapping, neuroscience

A whole monkey brain is imaged at high resolution in 100 hours.

Sep 28, 2022

BrainComp 2022: Experts in neuroscience and computing discuss the digital transformation of neuroscience and benefits of collaborating

Posted by in categories: mapping, neuroscience, robotics/AI, supercomputing

A new field of science has been emerging at the intersection of neuroscience and high-performance computing — this is the takeaway from the 2022 BrainComp conference, which took place in Cetraro, Italy from the 19th to the 22nd of September. The meeting, which featured international experts in brain mapping, machine learning, simulation, research infrastructures, neuro-derived hardware, neuroethics and more, strengthened the current collaborations in this emerging field and forged new ones.

Now in its 5th edition, BrainComp first started in 2013 and is jointly organised by the Human Brain Project and the EBRAINS digital research infrastructure, University of Calabria in Italy, the Heinrich Heine University of Düsseldorf and the Forschungszentrum Jülich in Germany. It is attended by researchers from inside and outside the Human Brain Project. This year was dedicated to the computational challenges of brain connectivity. The brain is the most complex system in the observable universe due to the tight connections between areas down to the wiring of the individual neurons: decoding this complexity through neuroscientific and computing advances benefits both fields.

Hosted by an organising committee of Katrin Amunts, Scientific Research Director of the HBP; Thomas Lippert, Leader of EBRAINS Computing Services at the Jülich Supercomputing Centre; and Lucio Grandinetti from the University of Calabria, the sessions covered a variety of topics over four days.

Sep 24, 2022

Musing on Understanding & AI — Hugo de Garis, Adam Ford, Michel de Haan

Posted by in categories: education, existential risks, information science, mapping, mathematics, physics, robotics/AI

What started out as an interview ended up being a discussion between Hugo de Garis and (off camera) Adam Ford and Michel de Haan.
00:11 The concept of understanding under-recognised as an important aspect of developing AI
00:44 Re-framing perspectives on AI — the Chinese Room argument — and how can consciousness or understanding arise from billions of seemingly discrete neurons firing? (Should there be a binding problem of understanding similar to the binding problem of consciousness?)
04:23 Is there a difference between generality in intelligence and understanding? (and, by extension, between AGI and artificial understanding?)
05:08 Ah Ha! moments — where the penny drops — what’s going on when this happens?
07:48 Is there an ideal form of understanding? Coherence & debugging — ah ha moments.
10:18 Webs of knowledge — contextual understanding.
12:16 Early childhood development — concept formation and navigation.
13:11 The intuitive ability for concept navigation isn’t complete.
Is the concept of understanding a catch-all?
14:29 Is it possible to develop AGI that doesn’t understand? Are generality and understanding the same thing?
17:32 Why is understanding (the nature of) understanding important?
Is understanding reductive? Can it be broken down?
19:52 What would the most basic, primitive understanding be?
22:11 If (strong) AI is important, and understanding is required to build (strong) AI, what sorts of things should we be doing to make sense of understanding?
Approaches — engineering, and copy the brain.
24:34 Is common sense the same thing as understanding? How are they different?
26:24 What concepts do we take for granted around the world — which, when strong AI comes about, will dissolve into illusions and then tell us how they actually work under the hood?
27:40 Compression and understanding.
29:51 Knowledge, Gettier problems and justified true belief. Is knowledge different from understanding and if so how?
31:07 A hierarchy of intel — data, information, knowledge, understanding, wisdom.
33:37 What is wisdom? Experience can help situate knowledge in a web of understanding — is this wisdom? Is the ostensible appearance of wisdom necessarily wisdom? Think pulp remashings of existing wisdom in the form of trashy self-help literature.
35:38 Is understanding mapping knowledge into a useful framework? Or is it making accurate / novel predictions?
36:00 Is understanding like high resolution carbon copy like models that accurately reflect true nature or a mechanical process?
37:04 Does understanding come in gradients of topologies? Are there degrees, or is it just on or off?
38:37 What comes first — understanding or generality?
40:47 Minsky’s ‘Society of Mind’
42:46 Is vitalism alive and well in the AI field? Do people actually think there are ghosts in the machines?
48:15 Anthropomorphism in AI literature.
50:48 Deism — James Gates and error correction in super-symmetry.
52:16 Why are the laws of nature so mathematical? Why is there so much symmetry in physics? Is this confusing the map with the territory?
52:35 The Drake equation, and the concept of the Artilect — does this make Deism plausible? What about the Fermi Paradox?
55:06 Hyperintelligence is tiny — the transcension hypothesis — therefore civs go tiny — an explanation for the Fermi Paradox.
56:36 Why would *all* civs go tiny? Why not go tall, wide and tiny? What about selection pressures that seem to necessitate cosmic land grabs?
01:01:52 The Great Filter and the Fermi Paradox.
01:02:14 Is it possible for an AGI to have a deep command of knowledge across a wide variety of topics/categories without understanding being an internal dynamic? Is the Turing test good enough to test for understanding? What kinds of behavioral tests could reliably test for understanding? (Of course, without the luxury of peering under the hood.)
01:03:09 Does AlphaGo understand Go, or DeepBlue understand chess? Revisiting the Chinese Room argument.
01:04:23 More on behavioral tests for AI understanding.
01:06:00 Zombie machines — David Chalmers’ zombie argument.
01:07:26 Complex enough algorithms — is there a critical point of complexity beyond which general intelligence likely emerges? Or understanding emerges?
01:08:11 Revisiting behavioral ‘Turing’ tests for understanding.
01:13:05 Shape sorters and reverse shape sorters.
01:14:03 Would slightly changing the rules of Go confuse AlphaGo (after it had been trained)? Need for adaptivity — understanding concept boundaries, predicting where they occur, and the ability to mine outwards from these boundaries…
01:15:11 Neural nets and adaptivity.
01:16:41 AlphaGo documentary — worth a watch. Progress in AI challenges human dignity, which is a concern, but DeepMind and the AlphaGo documentary seemed respectful. Can we manage a transition from human labor to full-on automation while preserving human dignity?

Filmed in the Dandenong Ranges in Victoria, Australia.

Many thanks for watching!
