For all of science’s impressive advancements, one problem has stubbornly eluded us: why do we have consciousness? How does inert, unconscious matter give rise to the light of conscious experience? Cognitive scientist Donald Hoffman has been pondering this question throughout his career. His thinking has gradually led him to a surprising possibility: that consciousness itself is fundamental to reality. Donald’s theory, however, differs from those of the growing number of other scientists and philosophers now arriving at this conclusion.
“We’ve been stuck on the same problem for centuries. It’s time to take a different approach.”
The fundamental nature of reality, Donald theorizes, consists of an infinite network of interacting conscious agents. Uniquely, Donald offers a precise mathematical definition of a conscious agent. He believes the theory can be used to reconstruct the universe, and existing scientific discoveries, purely from the interactions of these units of consciousness.
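To give a flavour of that formalism, the sketch below paraphrases, from memory, the kind of definition Hoffman has published with Chetan Prakash; the exact symbols and details may differ from their papers and should be checked against the original work.

```latex
% Sketch only: paraphrased from memory of Hoffman & Prakash's formulation;
% consult their published papers for the exact definition.
\[
  C \;=\; \big( (X,\mathcal{X}),\; (G,\mathcal{G}),\; P,\; D,\; A,\; N \big)
\]
% (X, \mathcal{X}): measurable space of the agent's experiences
% (G, \mathcal{G}): measurable space of the agent's possible actions
% (W, \mathcal{W}): measurable space standing for the world
\[
  P : W \times \mathcal{X} \to [0,1], \qquad
  D : X \times \mathcal{G} \to [0,1], \qquad
  A : G \times \mathcal{W} \to [0,1]
\]
% P, D and A are Markovian kernels for perception, decision and action,
% and N counts the perceive-decide-act cycles. Agents "interact" when one
% agent's action kernel feeds into another agent's perception kernel.
```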
During the 1930s, the venerable theoretical physicist Albert Einstein returned to the field of quantum mechanics, which his early work on light quanta had helped to create. Hoping to develop a more complete theory of how particles behave, Einstein was instead horrified by the prospect of quantum entanglement, something he described as “spooky action at a distance.”
Despite Einstein’s misgivings, quantum entanglement has gone on to become an accepted part of quantum mechanics. Now, for the first time, a team of physicists from the University of Glasgow has captured an image of a form of quantum entanglement (known as Bell entanglement) at work. In doing so, they obtained the first piece of visual evidence of a phenomenon that baffled even Einstein.
The paper that described their findings, titled “Imaging Bell-type nonlocal behavior,” recently appeared in the journal Science Advances. The study was led by Dr. Paul-Antoine Moreau, a Leverhulme Early Career Fellow at the University of Glasgow, and included multiple researchers from Glasgow’s School of Physics & Astronomy.
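For context on what “Bell-type nonlocal behavior” means, the sketch below states the textbook CHSH form of Bell’s inequality; this is standard quantum-optics material rather than anything taken from the Glasgow paper itself.

```latex
% CHSH form of Bell's inequality (textbook statement, not from the paper).
% E(a,b) is the correlation of paired measurement outcomes for analyser settings a and b.
\[
  S \;=\; E(a,b) - E(a,b') + E(a',b) + E(a',b')
\]
% Any local hidden-variable model satisfies |S| <= 2, whereas quantum mechanics
% applied to a maximally entangled photon pair allows values up to Tsirelson's
% bound, |S| <= 2*sqrt(2). Measuring |S| > 2 is the experimental signature of
% Bell entanglement.
```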
How’s this for 2020 vision? Over the holidays, I took a series of high-res photos of my hometown on Mars. This panorama is made up of a crisp 1.8 billion pixels. It’s my most detailed view to date.
Zoom in: https://go.nasa.gov/3ap38hB
If news from Earth has got you down, maybe this update from the Red Planet will take your mind off things. NASA’s Curiosity rover mission has produced an incredible 1.8-billion-pixel image of the surface of Mars.
The image above doesn’t come close to doing it justice, so be sure to watch the video below. You can also use this NASA webpage to explore the panorama in detail.
A spontaneous hole in the fabric of reality could theoretically end the universe, but don’t worry: physicists are studying the idea for what it can teach us about the cosmos.
The last half century has seen humanity take its first, tentative steps into outer space: initially through American and Russian astronaut missions into Earth orbit and then to the Moon, and more recently through robotic probes that have ventured into interstellar space.
The scientific revolution was ushered in at the beginning of the 17th century with the development of two of the most important inventions in history: the telescope and the microscope. With the telescope, Galileo turned his attention skyward, while advances in optics led Robert Hooke and Antonie van Leeuwenhoek to establish the microscope as a scientific instrument, beginning around 1665. Today, we are witnessing an information-technology-era revolution in microscopy, supercharged by the deep learning algorithms that have propelled artificial intelligence to transform industry after industry.
One of the major breakthroughs in deep learning came in 2012, when Hinton and colleagues [1] demonstrated the superior performance of a deep convolutional neural network, trained on GPUs, for image classification in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). In AI’s current innovation and implementation phase, deep learning algorithms are propelling nearly all computer-vision-intensive applications, including autonomous vehicles (transportation, military), facial recognition (retail, IT, communications, finance), biomedical imaging (healthcare), autonomous weapons and targeting systems (military), and automation and robotics (military, manufacturing, heavy industry, retail).
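To make the kind of model behind that 2012 result concrete, here is a minimal, hedged sketch of a convolutional image classifier in PyTorch. It illustrates the general technique only; it is not the architecture from [1], and the layer sizes, input resolution, and class count are arbitrary placeholders.

```python
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    """Minimal convolutional image classifier (illustrative only; far smaller
    than the 2012 ImageNet-winning network)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # RGB input -> 16 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),                              # halve spatial resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # 16 -> 32 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # assumes 32x32 input images

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Pairing the network with a GPU is what made the 2012 result practical;
# here we simply use one if it is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = SmallConvNet().to(device)
logits = model(torch.randn(1, 3, 32, 32, device=device))  # dummy 32x32 RGB image
print(logits.shape)  # torch.Size([1, 10])
```

Swapping in a much deeper stack of convolutions, GPU-scale training data, and an output layer over a thousand classes is essentially what separates this toy from an ILSVRC-class network.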
It should come as no surprise that the field of microscopy is ripe for transformation by artificial-intelligence-aided image processing, analysis, and interpretation. In biological research, microscopy generates prodigious amounts of image data; a single experiment with a transmission electron microscope can generate a data set containing over 100 terabytes of images [2]. The myriad instruments and image-processing techniques available today can resolve structures ranging in size across nearly 10 orders of magnitude, from single molecules to entire organisms, and capture spatial (3D) as well as temporal (4D) dynamics on time scales from femtoseconds to seconds.
Bottom line: DON’T PANIC!
This is a Cary production!
If you’d like to use this video in your own projects, contact me through Twitter DMs at @realCarykh! I almost always say yes, but just make sure it’s ok with me first. Thanks!
Humany “behind-the-scenes” channel: https://www.youtube.com/channel/UCx_JS-Fzrq-bXUYP0mk9Zag
Sources: https://pastebin.com/R4N9m94g