
“The number of coronavirus disease 2019 (COVID-19) cases in Austria dropped from 90 to 10 cases per one million people, two weeks after the government required everyone to wear a face mask on April 6.”


Austria slowed the rate of new coronavirus cases by roughly 90% after requiring everyone to wear a face mask. Meanwhile, other countries are still struggling to keep case counts and fatalities low.

Every once in a while I have a contentious discussion with someone about traveling to Mars, and the risks involved. One of the hardest risks to describe is the threat from galactic cosmic rays. Here is a short article about a new facility investigating the effects of galactic cosmic rays.

The very important point here is that we are not discussing electromagnetic radiation. These ions have been shown to sometimes penetrate spacecraft and inflict damage on astronauts' brains. Earthlings do not have to worry about them as much because our magnetosphere shields us from most of these ions.


To better understand and mitigate the health risks faced by astronauts from exposure to space radiation, we ideally need to be able to test the effects of Galactic Cosmic Rays (GCRs) here on Earth under laboratory conditions. An article published on May 19, 2020 in the open access journal PLOS Biology from Lisa Simonsen and colleagues at the NASA Langley Research Center, USA, describes how NASA has developed a ground-based GCR Simulator at the NASA Space Radiation Laboratory (NSRL), located at Brookhaven National Laboratory.

Galactic cosmic rays comprise a mixture of highly energetic protons, helium ions, and higher-charge, higher-energy ions ranging from lithium to iron, and they are extremely difficult to shield against. These ions interact with spacecraft materials to create a complex mixed field of primary and secondary particles.

Intel’s fifth-generation Loihi chip uses neuromorphic computing to learn faster on less training data than traditional artificial intelligence techniques — including how to smell like a human does and draw accurate conclusions from a tiny dataset of essentially just one sample.

“That’s really one of the main things we’re trying to understand and map into silicon … the brain’s ability to learn with single examples,” Mike Davies, the director of Intel’s Neuromorphic Computing Lab, told me recently on The AI Show podcast. “So with just showing one clean presentation of an odor, we can store that in this high dimensional representation in the chip, and then it allows it to then recognize a variety of noisy, corrupted, occluded odors like you would be faced with in the real world.”

A team of researchers at Heidelberg University has succeeded in building an apparatus that allowed them to observe Pauli crystals for the first time. They have written a paper describing their efforts and have uploaded it to the arXiv preprint server.

The Pauli exclusion principle is quite simple: it asserts that no two fermions can have the same quantum numbers. But as with many principles in physics, this simple assertion has had a profound impact on quantum mechanics. Looking more closely, the principle also implies that no two fermions can occupy the same quantum state. That means electrons must occupy different orbitals around a nucleus, which in turn explains why atoms have volume. This understanding of the self-ordering of fermions has led to other predictions, for instance that fermions should form crystals with a specific geometry, now known as Pauli crystals. When this prediction was first made, it was understood that such crystals could form only under very particular circumstances. In this new effort, the researchers worked out those circumstances and, in doing so, built an apparatus that allowed them to observe Pauli crystals for the first time.
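The exclusion principle can be illustrated numerically: a two-fermion wavefunction is antisymmetric, which can be written as a Slater determinant of single-particle states. Here is a minimal sketch in Python; the orbitals `phi0`/`phi1` and the helper `slater_amplitude` are illustrative choices, not anything from the paper:

```python
import numpy as np

# Toy illustration: a two-fermion wavefunction is the Slater determinant of
# single-particle orbitals. If both fermions occupy the same orbital, the
# matrix has two identical rows, so the determinant (the wavefunction
# amplitude) vanishes identically -- the exclusion principle in miniature.

def slater_amplitude(states, positions):
    """states: list of callables phi_i(x); positions: iterable of x_j.
    Returns the unnormalized amplitude det[phi_i(x_j)]."""
    M = np.array([[phi(x) for x in positions] for phi in states])
    return np.linalg.det(M)

phi0 = lambda x: np.exp(-x**2)        # ground-state-like orbital
phi1 = lambda x: x * np.exp(-x**2)    # orthogonal excited orbital
```

Swapping the two particle positions swaps the determinant's columns, flipping the amplitude's sign, which is exactly the antisymmetry the principle rests on.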

The work involved a setup in which lasers trapped a cloud of lithium-6 atoms, supercooled to their lowest energy state so that they were forced to adhere to the exclusion principle, in a flat layer one atom thick. The team then used a technique that allowed them to photograph only those atoms that were in a particular given state. They took 20,000 pictures with the camera but kept only those showing the right number of atoms, indicating that the atoms were adhering to the Pauli exclusion principle. Next, the team processed the remaining images to remove the effect of the atom cloud's overall momentum, rotated them into a common orientation, and superimposed thousands of them, revealing the momentum distribution of the individual atoms. That was the point at which crystal structures began to emerge in the photographs, just as theory predicted. The researchers note that their technique could also be used to study other effects in fermion-based gases.
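The processing pipeline described above (keep only shots with the right atom count, subtract the cloud's overall momentum, rotate into a common orientation, superimpose) can be sketched as follows. `superimpose_shots` and its align-first-atom rule are hypothetical simplifications for illustration, not the team's actual analysis code:

```python
import numpy as np

def superimpose_shots(shots, n_expected):
    """Superimpose single-shot momentum images.
    shots: list of (n, 2) arrays, each row the momentum of one detected atom.
    Keeps only shots with exactly n_expected atoms, removes each shot's
    center-of-mass momentum, and rotates each shot so its first atom lies
    on the +x axis before stacking them."""
    accum = []
    for pts in shots:
        if len(pts) != n_expected:        # discard shots with the wrong count
            continue
        rel = pts - pts.mean(axis=0)      # remove overall (c.o.m.) momentum
        # rotate so rel[0] lands on the +x axis, aligning orientation
        theta = -np.arctan2(rel[0, 1], rel[0, 0])
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        accum.append(rel @ R.T)
    return np.concatenate(accum) if accum else np.empty((0, 2))
```

A 2D histogram of the returned points would then play the role of the superimposed photograph in which the crystal structure emerges.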

It sounds like a riddle: What do you get if you take two small diamonds, put a small magnetic crystal between them and squeeze them together very slowly?

The answer is a magnetic liquid, which seems counterintuitive: liquids generally become solids under pressure, not the other way around. This unusual discovery, unveiled by a team of researchers working at the Advanced Photon Source (APS), a U.S. Department of Energy (DOE) Office of Science User Facility at DOE’s Argonne National Laboratory, may provide scientists with new insight into superconductivity and quantum computing.

Though scientists and engineers have been making use of superconducting materials for decades, the exact process by which superconductors conduct electricity without resistance remains a quantum mechanical mystery. The telltale signs of a superconductor are a loss of resistance and a loss of magnetism. High-temperature superconductors can operate at temperatures above that of liquid nitrogen (−320 degrees Fahrenheit), making them attractive for lossless transmission lines in power grids and other applications in the energy sector.

A real-time deep-learning model is proposed to classify the volume of cuttings from a shale shaker on an offshore drilling rig by analyzing the real-time monitoring video stream. As opposed to the traditional, time-consuming video-analytics method, the proposed model can implement a real-time classification and achieve remarkable accuracy. The approach is composed of three modules. Compared with results manually labeled by engineers, the model can achieve highly accurate results in real time without dropping frames.
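As a rough illustration of per-frame cuttings-volume classification, here is a deliberately simple numpy baseline that bins each frame by its fraction of bright ("cuttings-like") pixels. The class labels, thresholds, and function names are assumptions for the sketch; the paper's actual model is a deep network, not a threshold rule:

```python
import numpy as np

# Assumed volume classes for the sketch (the paper's labels may differ).
LABELS = ("low", "medium", "high")

def classify_frame(gray, thresholds=(0.05, 0.20), cutoff=0.6):
    """Classify one grayscale frame (2D array of floats in [0, 1]) by the
    fraction of pixels brighter than `cutoff` -- a crude proxy for the
    cuttings volume on the shaker screen."""
    frac = float((gray > cutoff).mean())   # foreground pixel fraction
    if frac < thresholds[0]:
        return "low"
    if frac < thresholds[1]:
        return "medium"
    return "high"

def classify_stream(frames):
    """Classify every frame in order, dropping none (the real-time analog)."""
    return [classify_frame(f) for f in frames]
```

The point of the sketch is only the shape of the problem: a per-frame label stream produced fast enough that no frames are dropped, which is what the learned model achieves with far better accuracy.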

Introduction

A complete work flow already exists at many oil and gas companies to guide borehole maintenance and cleaning. A well-formulated work flow helps support well integrity and reduce drilling risks and costs. One traditional method relies on human observation of cuttings at the shale shaker together with a hydraulic and torque-and-drag model; the operation includes a number of cleanup cycles. Continuous manual monitoring of the cuttings volume at the shale shaker becomes the bottleneck of this work flow and cannot provide a consistent evaluation of the hole-cleaning condition, because human labor is not available around the clock and the torque-and-drag analysis is discrete, with a break between cycles.

Modeling and rendering of dynamic scenes is challenging, as natural scenes often contain complex phenomena such as thin structures, evolving topology, translucency, scattering, occlusion, and biological motion. Mesh-based reconstruction and tracking often fail in these cases, and other approaches (e.g., light field video) typically rely on constrained viewing conditions, which limit interactivity. We circumvent these difficulties by presenting a learning-based approach to representing dynamic objects inspired by the integral projection model used in tomographic imaging. The approach is supervised directly from 2D images in a multi-view capture setting and does not require explicit reconstruction or tracking of the object. Our method has two primary components: an encoder-decoder network that transforms input images into a 3D volume representation, and a differentiable ray-marching operation that enables end-to-end training. By virtue of its 3D representation, our construction extrapolates better to novel viewpoints compared to screen-space rendering techniques. The encoder-decoder architecture learns a latent representation of a dynamic scene that enables us to produce novel content sequences not seen during training. To overcome memory limitations of voxel-based representations, we learn a dynamic irregular grid structure implemented with a warp field during ray-marching. This structure greatly improves the apparent resolution and reduces grid-like artifacts and jagged motion. Finally, we demonstrate how to incorporate surface-based representations into our volumetric-learning framework for applications where the highest resolution is required, using facial performance capture as a case in point.
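The integral-projection idea behind the ray-marching step can be sketched with the standard emission-absorption compositing model along a single ray. This is the generic volume-rendering accumulation, not the paper's learned, differentiable implementation with its warp field; the function name and signature are invented for the sketch:

```python
import numpy as np

def march_ray(densities, colors, step):
    """Front-to-back compositing of samples along one ray.
    densities: (n,) volume densities at the sample points.
    colors:    (n, 3) RGB emission at the sample points.
    step:      spacing between samples along the ray.
    Returns the accumulated RGB color and total opacity."""
    alpha = 1.0 - np.exp(-densities * step)          # per-sample opacity
    # transmittance: how much light survives to reach each sample
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = trans * alpha                          # contribution of each sample
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights.sum()
```

Because every operation here is differentiable, gradients of a 2D image loss can flow back through the accumulated colors into the volume, which is what lets the encoder-decoder network be trained end-to-end from multi-view images.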
