
Most of the mass in the everyday matter around us resides in the protons and neutrons inside the atomic nucleus. However, a free neutron, one not bound to a nucleus, is unstable and decays by a process called beta decay. For neutrons, beta decay involves the emission of a proton, an electron, and an antineutrino. Beta decay is a common process.
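In symbols, the free-neutron decay described above (a textbook relation, written out here for illustration) is

\[
n \;\to\; p + e^{-} + \bar{\nu}_{e}
\]

with a measured mean lifetime of roughly 880 seconds, about 15 minutes.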

However, significant uncertainties remain about the neutron lifetime, and about cases where a neutron decaying inside a nucleus leads to the emission of a proton. This is called beta-delayed proton emission. There are only a few neutron-rich nuclei for which beta-delayed proton emission is energetically allowed. The radioactive nucleus beryllium-11 (¹¹Be), an isotope consisting of 4 protons and 7 neutrons, with its last neutron very weakly bound, is among those rare cases. Scientists recently observed a surprisingly large beta-delayed proton decay rate for ¹¹Be. Their work is published in Physical Review Letters.
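Schematically, and assuming the standard two-step picture of beta-delayed proton emission, the beta decay of ¹¹Be feeds an excited state of boron-11, which then sheds a proton:

\[
{}^{11}\mathrm{Be} \;\to\; {}^{11}\mathrm{B}^{*} + e^{-} + \bar{\nu}_{e},
\qquad
{}^{11}\mathrm{B}^{*} \;\to\; {}^{10}\mathrm{Be} + p
\]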

The discovery of an exotic near-threshold state that favors proton decay is a key to explaining the beta-delayed proton decay of ¹¹Be. The discovery is also a remarkable and not fully understood manifestation of quantum many-body physics. Many-body physics involves systems of many interacting particles. While scientists may know the physics that applies to each individual particle, the complete system can be too complex to understand.

A team of physicists has created a new way to self-assemble particles—an advance that offers new promise for building complex and innovative materials at the microscopic level.

Self-assembly, introduced in the early 2000s, gives scientists a means to “pre-program” particles, allowing for the building of materials without further human intervention—the microscopic equivalent of Ikea furniture that can assemble itself.

The breakthrough, reported in the journal Nature, centers on emulsions (droplets of oil immersed in water) and their use in the self-assembly of foldamers: chain-like structures that fold into unique shapes, which can be theoretically predicted from the sequence of droplet interactions.

Have you ever been faced with a problem where you had to find an optimal solution out of many possible options, such as finding the quickest route to a certain place, considering both distance and traffic?

If so, the problem you were dealing with is what is formally known as a “combinatorial optimization problem.” While mathematically formulated, these problems are common in the real world and spring up across several fields, including logistics, network routing, and machine learning.

However, large-scale combinatorial optimization problems are very computationally intensive to solve using standard computers, leading researchers to turn to other approaches. One such approach is based on the “Ising model,” which mathematically represents the magnetic orientation of atoms, or “spins,” in a ferromagnetic material.
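As a concrete illustration of the idea (a minimal sketch, not code from the research), the snippet below encodes a toy max-cut problem as an Ising energy and minimizes it with simulated annealing; the graph, couplings, and annealing schedule are all illustrative choices.

```python
import math
import random

# Illustrative example: encode max-cut of a small graph as an Ising model.
# Each vertex gets a spin s_i in {-1, +1}; each edge contributes s_i * s_j
# to the energy, so minimizing the energy maximizes the number of cut edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # a toy 4-vertex graph
n = 4

def energy(spins):
    # Ising energy H = sum over edges of s_i * s_j (coupling J_ij = +1 here),
    # which is lowest when neighboring spins disagree (the edge is "cut").
    return sum(spins[i] * spins[j] for i, j in edges)

def simulated_annealing(steps=10_000, t_start=2.0, t_end=0.01):
    spins = [random.choice([-1, 1]) for _ in range(n)]
    e = energy(spins)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        i = random.randrange(n)
        spins[i] *= -1                     # propose flipping one spin
        e_new = energy(spins)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new
        else:
            spins[i] *= -1                 # reject the move: undo the flip
    return spins, e

spins, e = simulated_annealing()
cut = [(i, j) for i, j in edges if spins[i] != spins[j]]
print(f"spins={spins}, energy={e}, cut edges={cut}")
```

Because uphill moves are accepted with the Boltzmann probability, the system can escape local minima while the temperature is high and then settle into a low-energy, near-optimal spin configuration as it cools; dedicated Ising machines aim to perform this kind of energy minimization in hardware.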

Artificial Intelligence (AI), a term first coined at a Dartmouth workshop in 1956, has seen several boom and bust cycles over the last 66 years. Is the current boom different?

The most exciting advance in the field since 2017 has been the development of “Large Language Models,” giant neural networks trained on massive databases of text on the web. Still highly experimental, Large Language Models haven’t yet been deployed at scale in any consumer product — smart/voice assistants like Alexa, Siri, Cortana, or the Google Assistant are still based on earlier, more scripted approaches.

Large Language Models do far better at routine tasks involving language processing than their predecessors. Although not always reliable, they can give a strong impression of really understanding us and holding up their end of an open-ended dialog. Unlike previous forms of AI, which could only perform specific jobs involving rote perception, classification, or judgment, Large Language Models seem to be capable of a lot more, including possibly passing the Turing Test. Named after computing pioneer Alan Turing’s thought experiment, the test posits that when an AI in a chat cannot reliably be distinguished from a human, it will have achieved general intelligence.

But can Large Language Models really understand anything, or are they just mimicking the superficial “form” of language? What can we say about our progress toward creating real intelligence in a machine? What do “intelligence” and “understanding” even mean? Blaise Agüera y Arcas, a Fellow at Google Research, and Melanie Mitchell, the Davis Professor of Complexity at the Santa Fe Institute, take on these thorny questions in a wide-ranging presentation and discussion.

A new field of science has been emerging at the intersection of neuroscience and high-performance computing — this is the takeaway from the 2022 BrainComp conference, which took place in Cetraro, Italy from the 19th to the 22nd of September. The meeting, which featured international experts in brain mapping, machine learning, simulation, research infrastructures, neuro-derived hardware, neuroethics and more, strengthened the current collaborations in this emerging field and forged new ones.

Now in its 5th edition, BrainComp first started in 2013 and is jointly organised by the Human Brain Project, the EBRAINS digital research infrastructure, the University of Calabria in Italy, the Heinrich Heine University of Düsseldorf, and the Forschungszentrum Jülich in Germany. It is attended by researchers from inside and outside the Human Brain Project. This year was dedicated to the computational challenges of brain connectivity. The brain is the most complex system in the observable universe due to the tight connections between its areas, down to the wiring of individual neurons; decoding this complexity through neuroscientific and computing advances benefits both fields.

Hosted by an organising committee of Katrin Amunts, Scientific Research Director of the HBP; Thomas Lippert, Leader of EBRAINS Computing Services at the Jülich Supercomputing Centre; and Lucio Grandinetti from the University of Calabria, the sessions covered a variety of topics over four days.

Bjarke Ingels Group and Barcode Architects collaborated to create ‘Sluishuis’, an angular residential complex perched above the IJ Lake in Amsterdam. The structure’s sharp geometric ends meet air, water, and land, creating a mesmerizing form that seems to jut into the sky while resembling the bow of a ship! Constructed on an artificial island, it forms a geometrically intriguing gateway from the lake.

Designer: Bjarke Ingels Group x Barcode Architects.

A well-known game studio has allegedly been using AI voices for a video game. Its clarification includes a commitment to human creativity. It is another footnote in the debate over the value of human labor, one that will become ever more common in the future.

It’s the very debate that has erupted so vehemently around AI-generated images in recent months. Are AI images art? If so, can they be equated with human art? Are they detrimental to art? Are they even plagiarism, given that the AI examines human works during training (in the inspiration phase, so to speak) and then imitates traces of them?

Image generators like DALL-E 2, Stable Diffusion, and Midjourney alone raise myriad questions about the value and interplay of human and machine labor. These questions are likely to multiply because generative AI will not stop at 2D images. Another area it is already tapping into is audio.