
Neutrino detectors are about to get a lot bigger.

Neutrinos are among the most mysterious particles in the universe, with only dark matter baffling scientists more.

And while neutrino detectors are already in operation hunting for these elusive particles, we may need to resort to the colossal scale of the Pacific Ocean to detect a class of ultra-powerful neutrinos, according to a recent study shared on a preprint server.

And, with a small-scale demo in the works, we may soon see whether this idea pans out and transforms our grasp of the universe.

Meet the ambitious P-ONE proposal.


The P-ONE design currently involves seven 10-string clusters, with each string hosting 20 optical elements. That's a grand total of 1,400 photodetectors floating around an area of the Pacific several miles across, providing much more coverage than IceCube.
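As a sanity check on those numbers, here is the arithmetic in a few lines of Python (the 7/10/20 breakdown comes straight from the proposal as described above; the variable names are just illustrative):

```python
# Tally of the P-ONE detector described above; the 7/10/20 breakdown
# comes from the article, the variable names are mine.
clusters = 7             # ten-string clusters
strings_per_cluster = 10
modules_per_string = 20  # optical elements per string

total_modules = clusters * strings_per_cluster * modules_per_string
print(total_modules)  # 1400 photodetectors in total
```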

Once it’s up and running, you just need to wait. Eventually, a neutrino will strike the ocean water and give off a little flash of light, which the detectors will trace.

Of course, it’s harder than it sounds. The strands will be moving constantly, waving back and forth with the ocean itself. And the Pacific Ocean is … less than pure, with salt and plankton and all manner of fish excrement floating around. That will change the behavior of light between the strands, making precise measurement difficult.
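One standard way to reason about that murkiness is exponential (Beer–Lambert-style) attenuation: light traveling a distance d through water retains a fraction exp(−d/λ) of its intensity, where λ is the attenuation length. A minimal sketch, with an attenuation length chosen purely for illustration rather than taken from the P-ONE study:

```python
import math

# Beer-Lambert-style attenuation sketch: how much of a flash survives
# the trip between two optical modules. The ~30 m attenuation length
# is an illustrative figure for clear ocean water, not a P-ONE number.
def surviving_fraction(distance_m: float, attenuation_length_m: float = 30.0) -> float:
    """Fraction of photons left after travelling `distance_m` through water."""
    return math.exp(-distance_m / attenuation_length_m)

for d in (10, 50, 100):
    print(f"{d:>4} m: {surviving_fraction(d):.3f}")
```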

A University of Melbourne-led team has perfected a technique for embedding single atoms in a silicon wafer one-by-one. Their technology offers the potential to make quantum computers using the same methods that have given us cheap and reliable conventional devices containing billions of transistors.

“We could ‘hear’ the electronic click as each atom dropped into one of 10,000 sites in our prototype device. Our vision is to use this technique to build a very, very large-scale quantum device,” says Professor David Jamieson of The University of Melbourne, lead author of the Advanced Materials paper describing the process.

His co-authors are from UNSW Sydney, Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Leibniz Institute of Surface Engineering (IOM), and RMIT Microscopy and Microanalysis Facility.
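The paper's ion-counting scheme is more sophisticated than this, but the basic idea of registering a "click" per implanted atom can be sketched as pulse counting above a noise floor. Everything below, including the signal levels and the threshold, is invented for illustration:

```python
# Toy version of single-ion "click" counting: each implanted atom
# deposits a charge pulse, and we count pulses above a noise threshold.
# Signal values and the threshold are made up for illustration.
import random

random.seed(0)
signal = [random.gauss(0.0, 0.05) for _ in range(1000)]
# Inject five ion-impact pulses well above the noise floor.
for i in (100, 300, 450, 700, 900):
    signal[i] += 1.0

THRESHOLD = 0.5
clicks = sum(1 for sample in signal if sample > THRESHOLD)
print(f"ions counted: {clicks}")  # expect 5
```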

From the discovery of microorganisms in the field of biology to imaging atoms in the field of physics, microscopic imaging has improved our understanding of the world and has been responsible for many scientific advances. Now, with the advent of spintronics and miniature magnetic devices, there is a growing need for imaging at nanometer scales to detect quantum properties of matter, such as electron spins, magnetic domain structure in ferromagnets, and magnetic vortices in superconductors.

Typically, this is done by complementing standard microscopy techniques, such as scanning tunneling microscopy and atomic force microscopy (AFM), with magnetic sensors to create “scanning magnetometry probes” that can achieve nanoscale imaging and sensing. However, these probes often require ultrahigh vacuum conditions and extremely low temperatures, and their spatial resolution is limited by the probe size.

In this regard, nitrogen-vacancy (NV) centers in diamond (defects in diamond structure formed by nitrogen atoms adjacent to “vacancies” created by missing atoms) have gained significant interest. The NV pair, it turns out, can be combined with AFM to accomplish local magnetic imaging and can operate at room temperature and pressure. However, fabricating these probes involves complex techniques that do not allow much control over the probe shape and size.
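The sensing principle behind NV magnetometry is well established: a magnetic field along the NV axis Zeeman-splits the m_s = ±1 spin levels by 2γB, where γ ≈ 28 GHz/T is the electron gyromagnetic ratio. A small sketch of that conversion (the 6 MHz splitting below is an invented example measurement):

```python
# How an NV magnetometer turns a measured spin-resonance splitting into
# a field value. GAMMA is the electron gyromagnetic ratio (~28 GHz/T);
# the 6 MHz splitting below is an invented example measurement.
GAMMA_GHZ_PER_T = 28.0  # NV electron spin, approximate

def field_from_splitting(splitting_ghz: float) -> float:
    """Magnetic field (tesla) along the NV axis from the m_s = +/-1 splitting."""
    return splitting_ghz / (2.0 * GAMMA_GHZ_PER_T)

print(field_from_splitting(0.006))  # 6 MHz splitting -> ~107 microtesla
```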

For quantum computers to surpass their classical counterparts in speed and capacity, their qubits—which in superconducting machines are circuits that can exist in a superposition of binary states—need to be on the same wavelength. Achieving this, however, has come at the cost of size. Whereas the transistors used in classical computers have been shrunk down to nanometer scales, superconducting qubits these days are still measured in millimeters—one millimeter is one million nanometers.

Combine qubits together into larger and larger circuit chips, and you end up with, relatively speaking, a big physical footprint, which means quantum computers take up a lot of physical space. These are not yet devices we can carry in our backpacks or wear on our wrists.

To shrink qubits down while maintaining their performance, the field needs a new way to build the capacitors that store the energy that “powers” the qubits. In collaboration with Raytheon BBN Technologies, Wang Fong-Jen Professor James Hone’s lab at Columbia Engineering recently demonstrated a superconducting qubit built with 2D materials that’s a fraction of previous sizes.
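The size-versus-performance tension follows from the standard transmon relations: the charging energy is E_C = e²/2C, so a smaller capacitor means a larger E_C, and the qubit frequency goes roughly as √(8·E_J·E_C) − E_C. A rough sketch using those textbook formulas, with illustrative parameter values that are not specific to the Columbia device:

```python
import math

E = 1.602e-19  # electron charge [C]
H = 6.626e-34  # Planck constant [J s]

def charging_energy_ghz(capacitance_f: float) -> float:
    """Transmon charging energy E_C/h in GHz for a shunt capacitance in farads."""
    return (E**2) / (2.0 * capacitance_f) / H / 1e9

def transmon_freq_ghz(ec_ghz: float, ej_ghz: float) -> float:
    """Standard transmon approximation: f01 ~ sqrt(8*EJ*EC) - EC (all in GHz)."""
    return math.sqrt(8.0 * ej_ghz * ec_ghz) - ec_ghz

ec = charging_energy_ghz(70e-15)  # ~70 fF, a typical transmon-scale shunt
print(f"E_C ~ {ec:.3f} GHz, f01 ~ {transmon_freq_ghz(ec, 15.0):.2f} GHz")
```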

The atomic nucleus is a tough nut to crack. The strong interaction between the protons and neutrons that make it up depends on many quantities, and these particles, collectively known as nucleons, are subject to not only two-body forces but also three-body ones. These and other features make the theoretical modeling of atomic nuclei a challenging endeavor.
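One way to see why three-body forces make the problem so hard is simple combinatorics: the number of nucleon pairs grows as A-choose-2, but the number of triplets grows as A-choose-3. A quick illustration:

```python
from math import comb

# Why three-body forces make nuclear modeling expensive: the number of
# interacting triplets grows much faster with nucleon count A than pairs do.
for A in (4, 16, 56, 208):  # He-4, O-16, Fe-56, Pb-208
    print(f"A={A:>3}: pairs={comb(A, 2):>6}, triplets={comb(A, 3):>9}")
```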

Circa 2020


We discuss the possibility to predict the QCD axion mass in the context of grand unified theories. We investigate the implementation of the DFSZ mechanism in the context of renormalizable SU(5) theories. In the simplest theory, the axion mass can be predicted with good precision in the range m_a = (2–16) neV, and there is a strong correlation between the predictions for the axion mass and proton decay rates. In this context, we predict an upper bound for the proton decay channels with antineutrinos, τ(p → K⁺ν̄) ≲ 4 × 10³⁷ yr and τ(p → π⁺ν̄) ≲ 2 × 10³⁶ yr. This theory can be considered as the minimal realistic grand unified theory with the DFSZ mechanism, and it can be fully tested by proton decay and axion experiments.
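For scale: the QCD axion mass is inversely proportional to the decay constant, m_a ≈ 5.7 µeV × (10¹² GeV/f_a), so a neV-range prediction corresponds to f_a near the grand-unification scale. A quick check of that inversion (the 5.7 µeV normalization is the standard QCD result, not a number from this paper):

```python
# The QCD axion mass scales inversely with the decay constant f_a:
# m_a ~ 5.7 ueV * (1e12 GeV / f_a), the standard QCD normalization.
# Inverting it shows the 2-16 neV window above points at the GUT scale.
def fa_gev(ma_nev: float) -> float:
    """Decay constant f_a in GeV for an axion mass given in nano-eV."""
    return 5.7e3 / ma_nev * 1e12  # 5.7 ueV = 5.7e3 neV

for ma in (2.0, 16.0):
    print(f"m_a = {ma:>4} neV  ->  f_a ~ {fa_gev(ma):.1e} GeV")
```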

While the Large Hadron Collider (LHC) at CERN is well known for smashing protons together, it is actually the quarks and gluons inside the protons—collectively known as partons—that are really interacting. Thus, in order to predict the rate of a process occurring in the LHC—such as the production of a Higgs boson or a yet-unknown particle—physicists have to understand how partons behave within the proton. This behavior is described in parton distribution functions (PDFs), which describe what fraction of a proton’s momentum is taken by its constituent quarks and gluons.

Knowledge of these PDFs has traditionally come from lepton–proton colliders, such as HERA at DESY. These machines use point-like particles, such as electrons, to directly probe the partons within the proton. Their research revealed that, in addition to the well-known up and down valence quarks that are inside a proton, there is also a sea of quark–antiquark pairs in the proton. This sea is theoretically made of all types of quarks, bound together by gluons. Now, studies of the LHC’s proton–proton collisions are providing a detailed look into PDFs, in particular the proton’s gluon and quark-type composition.
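A useful constraint behind any PDF fit is the momentum sum rule: summed over all parton species, the integral of x·f(x) from 0 to 1 must equal 1, since the partons together carry the proton's whole momentum. A toy numerical check, with invented parton shapes normalized only so the rule comes out right:

```python
# Toy illustration of the PDF momentum sum rule: integrating x*f(x) over
# all parton species must give 1 (the proton's whole momentum). The toy
# shapes below are invented; real PDFs come from global fits like this one.
def toy_valence(x):  # mimics valence quarks: peaked at moderate x
    return 19.8 * x**0.5 * (1 - x)**3

def toy_gluon(x):    # mimics gluons: steep rise at small x
    return 7.9 * x**-0.5 * (1 - x)**5

def momentum_integral(f, n=100000):
    """Midpoint-rule estimate of the integral of x*f(x) on (0, 1)."""
    h = 1.0 / n
    return sum(x * f(x) * h for x in ((i + 0.5) * h for i in range(n)))

total = momentum_integral(toy_valence) + momentum_integral(toy_gluon)
print(f"total momentum fraction ~ {total:.3f}")  # ~1 by construction
```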

The physicists at CERN’s ATLAS Experiment have just released a new paper combining LHC and HERA data to determine PDFs. The result uses ATLAS data from several different Standard Model processes, including the production of W and Z bosons, pairs of top quarks, and hadronic jets (collimated sprays of particles). It was traditionally thought that the strange-quark PDF would be suppressed by a factor of ~2 compared to that of the lighter up- and down-type quarks, because of its larger mass. The new paper confirms a previous ATLAS result, which found that the strange quark is not substantially suppressed at small momentum fractions, and extends this result to show how suppression kicks in at higher momentum fractions (x ≳ 0.05), as shown in Figure 1.
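Quantitatively, suppression is often expressed through a ratio such as r_s = (s + s̄)/(ū + d̄), where r_s ≈ 1 means an unsuppressed strange sea and r_s ≈ 0.5 the traditional factor-of-two suppression; whether this is the exact ratio used in the ATLAS paper is an assumption here. A toy illustration with invented PDF values:

```python
# One common way to quantify strange-sea suppression is the ratio
# r_s = (s + sbar) / (ubar + dbar): r_s ~ 1 means no suppression,
# r_s ~ 0.5 is the traditional factor-of-two suppression. The PDF
# values below are invented to illustrate the two regimes.
def r_s(s, sbar, ubar, dbar):
    return (s + sbar) / (ubar + dbar)

print(r_s(0.30, 0.30, 0.31, 0.29))  # ~1.0: unsuppressed (small x)
print(r_s(0.08, 0.08, 0.16, 0.16))  # ~0.5: suppressed (larger x)
```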