
The DESI collaboration is conducting a groundbreaking experiment to understand the universe’s expansion and acceleration. Their work with the DESI instrument has enabled them to map the cosmos from its early stages to the present, challenging existing models of the universe. Initial findings suggest there may be more to discover about dark energy and cosmic acceleration. The project’s innovative approach, including a fully blinded analysis, ensures that their conclusions are based on unbiased data, paving the way for future discoveries in astrophysics. Credit: SciTechDaily.com.

The DESI collaboration is examining the universe’s accelerating expansion through comprehensive mapping from its early stages to the present. Their findings challenge traditional cosmic models and suggest new insights into dark energy, all while utilizing groundbreaking, unbiased research methods.

A team of researchers that includes an astrophysicist from The University of Texas at Dallas is part of the Dark Energy Spectroscopic Instrument (DESI) collaboration, which is leading a groundbreaking experiment to explore the universe’s expansion and acceleration.

Lowtek Games combined the multifunctionality of a screen with the beauty of a pop-up book in a unique project that will take your imagination to another level. Codenamed Lowtek Lightbook, this interactive experience allows you to not only read stories but also play various games.

For example, you can color pictures with digital paints, find hidden objects, run away from aliens, or deliver food to them – all thanks to projection mapping and Lowtek Games’ clever thinking. Moreover, the story of Bib Goes Home becomes even more engaging when you guide Bib home yourself and explore his surroundings using a controller and a projector.

We are all very familiar with the concept of the Earth’s magnetic field. It turns out that most objects in space have magnetic fields, but they are quite tricky to measure. Astronomers have developed an ingenious way to measure the magnetic field of the Milky Way using polarized light from interstellar dust grains that align themselves with the magnetic field lines. A new survey has begun this mapping process and has already covered an area of sky equivalent to 15 full moons.

Many people will remember school experiments with iron filings and a bar magnet to reveal the magnet’s field. Capturing the magnetic field of the Milky Way is not quite so easy. The new method relies on the tiny dust grains that permeate the space between the stars.

The grains of dust are similar in size to smoke particles, but they are not spherical. Rather like a boat turning into the current, the dust particles align themselves with the local magnetic field, their long axes settling perpendicular to the field lines. As they do, they emit a faint glow at frequencies overlapping those of the cosmic microwave background, and it is this polarized emission that astronomers have been tuning in to.
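For readers who want a feel for how such polarization measurements become a field map, here is a minimal, hypothetical sketch (not the survey’s actual pipeline): it takes made-up Stokes Q and U values for a few sky pixels, computes the dust polarization angle, and rotates it by 90 degrees to get the plane-of-sky magnetic field orientation.

```python
import numpy as np

# Hypothetical Stokes Q and U values for a handful of sky pixels
# (same arbitrary units); real surveys provide full maps.
Q = np.array([0.12, -0.05, 0.08])
U = np.array([0.03, 0.10, -0.02])
I = np.array([1.00, 1.20, 0.90])  # total intensity, for the polarization fraction

# Polarization angle of the dust emission: chi = 0.5 * arctan2(U, Q).
chi = 0.5 * np.arctan2(U, Q)

# The grains' long axes lie perpendicular to the field, so the plane-of-sky
# magnetic field orientation is the polarization angle rotated by 90 degrees.
B_angle = chi + np.pi / 2.0

# Polarized intensity and polarization fraction.
P = np.sqrt(Q**2 + U**2)
p_frac = P / I

print("B orientation (deg):", np.degrees(B_angle))
print("polarization fraction:", p_frac)
```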

“Big machine learning models have to consume lots of power to crunch data and come out with the right parameters, whereas our model and training is so extremely simple that you could have systems learning on the fly,” said Robert Kent.


How can machine learning be made more efficient in the future? This is what a recent study published in Nature Communications hopes to address, as a team of researchers from The Ohio State University investigated how digital twins (virtual copies of a physical system) could be used to improve the machine learning-based controllers currently found in applications such as self-driving cars, which require large amounts of computing power and are often challenging to use. The study could help researchers better understand how future machine learning algorithms can deliver better control and efficiency, thus improving the products that rely on them.

“The problem with most machine learning-based controllers is that they use a lot of energy or power, and they take a long time to evaluate,” said Robert Kent, who is a graduate student in the Department of Physics at The Ohio State University and lead author of the study. “Developing traditional controllers for them has also been difficult because chaotic systems are extremely sensitive to small changes.”

For the study, the researchers created a fingertip-sized digital twin that can function without an internet connection, with the goal of improving the productivity and capabilities of a machine learning-based controller. In the end, the researchers found a decrease in the controller’s power needs thanks to a machine learning method known as reservoir computing, in which input data are fed through a fixed, randomly connected network (the reservoir) and only a simple readout layer is trained to map the reservoir’s state to the desired target. According to the researchers, this method can be used to simplify the control of complex systems, including self-driving cars, while decreasing the amount of power and energy required to run them.
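To make the idea of reservoir computing concrete, here is a minimal echo-state-network sketch in Python. It is not the team’s controller: the toy task (one-step-ahead prediction of a noisy sine wave), the reservoir size, and all parameters are illustrative assumptions. The point is simply that the recurrent “reservoir” stays fixed and random, and only the small linear readout is trained, which is what keeps the learning step cheap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: one-step-ahead prediction of a noisy sine wave.
t = np.linspace(0, 40 * np.pi, 4000)
u = np.sin(t) + 0.05 * rng.standard_normal(t.size)   # input signal
y_target = np.roll(u, -1)                            # predict the next sample

# Fixed random reservoir: only W_out is ever trained.
N = 300                                   # reservoir size
W_in = rng.uniform(-0.5, 0.5, (N, 1))     # input weights (untrained)
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))      # spectral radius below 1

# Drive the reservoir and collect its states.
x = np.zeros(N)
states = np.empty((u.size, N))
for i, ui in enumerate(u):
    x = np.tanh(W_in[:, 0] * ui + W @ x)
    states[i] = x

# Train the linear readout by ridge regression (the only learning step).
washout = 100
X, Y = states[washout:-1], y_target[washout:-1]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ Y)

pred = states[-1000:] @ W_out
print("test RMSE:", np.sqrt(np.mean((pred - y_target[-1000:]) ** 2)))
```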

Reconstruction of 1 mm³ of human brain (at 1.4 petabytes of EM data) published by @stardazed0 (@GoogleAI) & Lichtman lab.

Paper: https://science.org/doi/10.1126/science.adk4858

Blog:


Marking ten years of connectomics research at Google, we are releasing a publication in Science about a reconstruction at the synaptic level of a small piece of the human brain. We discuss the reconstruction process and dataset, and we present several new neuron structures discovered in the data.

To keep our bodies properly oriented, our brains perform impressive feats of calculation that track our stumbling meat sack through a mental map of our surrounds.

While a lot of research has focussed on the mapping, little has managed to determine how our neurological wiring monitors our direction within it.

A team of researchers from the University of Birmingham in the UK and the Ludwig Maximilian University of Munich in Germany has identified signature brain activity that describes a kind of ‘neural compass’ in the hope of understanding how we find our way through the world.

Inspired by the tetromino shapes in the classic video game Tetris, researchers in the US have designed a simple radiation detector that can monitor radioactive sources both safely and efficiently. Created by Mingda Li and colleagues at the Massachusetts Institute of Technology, the device employs a machine learning algorithm to process data, allowing it to build up accurate maps of sources using just four detector pixels.


Wherever there is a risk of radioactive materials leaking into the environment, it is critical for site managers to map out radiation sources as accurately as possible.
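As a rough illustration of the counting principle behind such mapping (not the MIT group’s actual algorithm, which relies on the tetromino-inspired pixel layout and a trained machine learning model), the sketch below localizes a single hypothetical source from the counts in four pixels by fitting a simple inverse-square model with a brute-force grid search. All positions, strengths, and counts are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Four detector pixels at the corners of a 10 cm square (positions in cm).
pixels = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])

# Hypothetical true source: position (x, y) in cm and strength (arbitrary units).
true_pos, strength = np.array([6.5, 3.0]), 2.0e4

def expected_counts(pos, s):
    """Inverse-square counting model: counts fall off as 1/r^2 from the source."""
    r2 = np.sum((pixels - pos) ** 2, axis=1)
    return s / (r2 + 1e-6)   # small epsilon avoids division by zero at a pixel

# Simulated noisy measurement (Poisson counting statistics).
measured = rng.poisson(expected_counts(true_pos, strength))

# Brute-force grid search for the position (and best-fit strength) that
# reproduces the four measured counts in a least-squares sense.
best, best_err = None, np.inf
for x in np.linspace(0, 10, 201):
    for y in np.linspace(0, 10, 201):
        model = expected_counts(np.array([x, y]), 1.0)
        s_hat = measured @ model / (model @ model)   # best-fit strength
        err = np.sum((measured - s_hat * model) ** 2)
        if err < best_err:
            best, best_err = (x, y, s_hat), err

print("true position:", true_pos, "estimated:", best[:2])
```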

Driving at night might be a scary challenge for a new driver, but with hours of practice it soon becomes second nature. For self-driving cars, however, practice may not be enough because the lidar sensors that often act as these vehicles’ “eyes” have difficulty detecting dark-colored objects. Research published in ACS Applied Materials & Interfaces describes a highly reflective black paint that could help these cars see dark objects and make autonomous driving safer.

Lidar, short for light detection and ranging, is a system used in a variety of applications, including geologic mapping and self-driving vehicles. The system works like echolocation, but instead of emitting sound waves, lidar emits tiny pulses of near-infrared light. The light pulses bounce off objects and back to the sensor, allowing the system to map its 3D environment. But lidar falls short when objects absorb more of that near-infrared light than they reflect, which can occur on black-painted surfaces.

Lidar can’t detect these dark objects on its own, so one common solution is to have the system rely on other sensors or software to fill in the information gaps. However, this can still lead to accidents in some situations. Rather than reinventing the lidar sensor, Chang-Min Yoon and colleagues wanted to make dark objects easier to detect with existing technology by developing a specially formulated, highly reflective black paint.
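The ranging principle is easy to see in a few lines: the sensor converts each pulse’s round-trip time into a distance, and if a dark surface absorbs the pulse, there is simply no return to time. The snippet below is an illustrative toy with made-up numbers, not a model of any particular sensor.

```python
# Lidar ranging: distance = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light in m/s

def range_from_time_of_flight(t_seconds: float) -> float:
    """Return the one-way distance in metres for a measured round-trip time."""
    return C * t_seconds / 2.0

# Hypothetical returns for two surfaces at roughly the same true distance (~15 m):
# a reflective one, and a dark absorbing one whose echo falls below the
# detection threshold and so produces no return at all.
returns = {"reflective surface, ~15 m": 100.1e-9, "dark absorbing surface, ~15 m": None}

for label, t in returns.items():
    if t is None:
        print(f"{label}: no detectable return -> object invisible to the lidar")
    else:
        print(f"{label}: {range_from_time_of_flight(t):.2f} m")
```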