
If new particles are out there, the Large Hadron Collider (LHC) is the ideal place to search for them. The theory of supersymmetry suggests that a whole new family of partner particles exists for each of the known fundamental particles. While this might seem extravagant, these partner particles could address various shortcomings in current scientific knowledge, such as the source of the mysterious dark matter in the universe, the “unnaturally” small mass of the Higgs boson, the anomalous way that the muon spins and even the relationship between the various forces of nature. But if these supersymmetric particles exist, where might they be hiding?

This is what physicists at the LHC have been trying to find out, and in a recent study of proton–proton data from Run 2 of the LHC (2015–2018), the ATLAS collaboration provides the most comprehensive overview yet of its searches for some of the most elusive types of supersymmetric particles—those that would only rarely be produced through the “weak” nuclear force or the electromagnetic force. The lightest of these weakly interacting supersymmetric particles could be the source of dark matter.

The increased collision energy and the higher collision rate provided by Run 2, as well as new search algorithms and machine-learning techniques, have allowed for deeper exploration into this difficult-to-reach territory of supersymmetry.

Tech companies are investing billions in developing and deploying generative AI. That money needs to be recouped, and recent reports and analyses show that doing so is not easy.

According to an anonymous source cited by the Wall Street Journal, Microsoft lost more than $20 per user per month on GitHub Copilot, its generative AI coding assistant, in the first few months of the year. Some users reportedly cost the company as much as $80 per month. Microsoft charges $10 per user per month.

The service loses money because the AI model that generates the code is expensive to run. GitHub Copilot is popular with developers and currently has about 1.5 million users, who constantly prompt the model to write more code.
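As a rough illustration of those economics, the sketch below backs out the implied cost of serving a user from the reported figures. The inferred cost numbers are assumptions derived from the reported average loss, not published figures, and the uniform-loss assumption is a simplification.

# Rough per-user economics implied by the reported GitHub Copilot figures.
# All derived values are back-of-the-envelope estimates, not published data.

subscription_price = 10.0      # monthly charge per user (USD)
average_loss_per_user = 20.0   # reported average monthly loss per user (USD)
heaviest_user_cost = 80.0      # reported monthly figure for the heaviest users (USD)
users = 1_500_000              # approximate user count cited above

# Implied cost to serve an average user: price charged plus the loss absorbed.
implied_average_cost = subscription_price + average_loss_per_user  # roughly $30/month

# Implied monthly shortfall across the user base, assuming the $20 average
# loss applies uniformly to all users (a simplification).
implied_monthly_shortfall = users * average_loss_per_user

print(f"Implied cost to serve an average user: ${implied_average_cost:.0f}/month")
print(f"Implied monthly shortfall: ${implied_monthly_shortfall:,.0f}")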

When a group of researchers asked an AI to design a robot that could walk, it created a “small, squishy and misshapen” thing that walks by spasming when filled with air.

The researchers — affiliated with Northwestern University, MIT, and the University of Vermont — published their findings in an article for the Proceedings of the National Academy of Sciences on October 3.

“We told the AI that we wanted a robot that could walk across land. Then we simply pressed a button and presto!” Sam Kriegman, an assistant professor at Northwestern University and the lead researcher behind the study, wrote in a separate blog post.

We are arguably at "the knee" of the curve. More breakthroughs have happened in the first nine months of 2023 than in all the previous years since the turn of the century (2001–2022).

Will AGI kill us all? Will we join with it? Is it even close? Is it just “cool stuff”? Will we have bootstrapping self-improving AI?

The podcast crew today:
On the panel (left to right): Stefan Van Der Wel, Oliver Engelmann, Brendan Clarke.
Host camera: Roy Sherfan, Simon Carter.
Off camera: Peter Xing.

“It’s never really the goal of anybody to write papers — it’s to do science,” says Michael Eisen, a computational biologist at the University of California, Berkeley, who is also editor-in-chief of the journal eLife. He predicts that generative AI tools could even fundamentally transform the nature of the scientific paper.

But the spectre of inaccuracies and falsehoods threatens this vision. LLMs are merely engines for generating stylistically plausible output that fits the patterns of their inputs, rather than for producing accurate information. Publishers worry that a rise in their use might lead to greater numbers of poor-quality or error-strewn manuscripts — and possibly a flood of AI-assisted fakes.

“Anything disruptive like this can be quite worrying,” says Laura Feetham, who oversees peer review for IOP Publishing in Bristol, UK, which publishes physical-sciences journals.