
Argonne National Laboratory scientists have used anomaly detection in the ATLAS collaboration to search for new particles, identifying a promising anomaly that could indicate new physics beyond the Standard Model.

Scientists used a neural network, a type of brain-inspired machine learning algorithm, to sift through large volumes of particle collision data, in what the researchers describe as the first use of a neural network to analyze data from a collider experiment.

Particle physicists are tasked with mining this massive and growing store of collision data for evidence of undiscovered particles. In particular, they’re searching for particles not included in the Standard Model of particle physics, our current understanding of the universe’s makeup that scientists suspect is incomplete.
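The general idea behind such an anomaly-detection search can be sketched in a few lines: learn a compact model of "ordinary" events, then flag events the model reconstructs poorly. The sketch below is purely illustrative (toy two-feature data standing in for collision records, and a linear principal-component "autoencoder" rather than the deep network used in the actual study); it is not the ATLAS analysis itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "background" events with correlated features, standing in for
# Standard Model processes (illustrative data, not real collision records).
background = rng.normal(size=(5000, 2)) @ np.array([[1.0, 0.8], [0.0, 0.6]])

# Learn a 1-D linear "autoencoder" (top principal component) from background only.
mean = background.mean(axis=0)
_, _, vt = np.linalg.svd(background - mean, full_matrices=False)
component = vt[:1]  # direction capturing most background variance

def anomaly_score(events):
    """Reconstruction error: large when an event doesn't fit the learned model."""
    centered = events - mean
    reconstructed = centered @ component.T @ component
    return np.linalg.norm(centered - reconstructed, axis=1)

# Flag the rarest 1% of events as candidate anomalies.
threshold = np.quantile(anomaly_score(background), 0.99)

# An event far off the background's correlation pattern scores as anomalous.
odd_event = np.array([[3.0, -3.0]])
print(anomaly_score(odd_event)[0] > threshold)  # True
```

The key design point is that the model is trained only on background-like data, so it needs no prior hypothesis about what the new physics looks like: anything it cannot reconstruct well is surfaced for further study.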

The problem of personal identity is a longstanding philosophical topic without final consensus. This article discusses the somewhat analogous problem of AI identity, which has not yet gained much traction, although the investigation is increasingly relevant to several fields, such as ownership issues, personhood of AI, AI welfare, brain–machine interfaces, and the distinction between singletons and multi-agent systems; it may also support progress on the problem of personal identity itself. The AI identity problem asks what criteria must hold for two AIs to be considered the same at different points in time. Two approaches to the problem are proposed: one builds on the personal identity problem and the concept of computational irreducibility, while the other applies multi-factor authentication to AI identity. A range of scenarios regarding AI identity is also examined, such as replication, fission, fusion, switch-off, resurrection, change of hardware, transition from non-sentient to sentient, journey to the past, offspring, and identity change.
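The multi-factor idea can be illustrated with a toy sketch: represent an AI snapshot as several independent identity factors and deem two snapshots "the same AI" when enough factors agree, which tolerates scenarios like a change of hardware. The factor names (weights, memory, hardware) and the two-of-three rule below are my own illustrative assumptions, not the article's formalism.

```python
import hashlib

def fingerprint(factors: dict) -> str:
    """Combine several identity factors into one digest (illustrative only)."""
    h = hashlib.sha256()
    for name in sorted(factors):
        h.update(name.encode())
        h.update(repr(factors[name]).encode())
    return h.hexdigest()

def same_identity(a: dict, b: dict, required_matches: int = 2) -> bool:
    """Count agreeing factors; identity persists if enough of them match."""
    matches = sum(a.get(k) == b.get(k) for k in set(a) | set(b))
    return matches >= required_matches

before = {"weights": "w1", "memory": "m1", "hardware": "gpu-0"}
after_hw_swap = {"weights": "w1", "memory": "m1", "hardware": "gpu-1"}
print(same_identity(before, after_hw_swap))  # True: 2 of 3 factors still match
```

Under this toy rule, a hardware change preserves identity, while replacing both weights and memory would not, mirroring how multi-factor authentication survives the loss of any single factor.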


From UPenn, Google DeepMind, and NVIDIA: introducing DrEureka, our latest effort pushing the frontier of robot learning using LLMs!



The code is available in the eureka-research/DrEureka repository on GitHub.

Microsoft is said to be building an OpenAI competitor despite its multi-billion-dollar partnership with the firm — and according to at least one insider, it’s using GPT-4 data to do so.

First reported by The Information, the new large language model (LLM) is apparently called MAI-1, and an inside source told the outlet that Microsoft is using GPT-4 and public information from the web to train it.

MAI-1 may also be trained on datasets from Inflection, the startup previously run by DeepMind cofounder Mustafa Suleyman before he joined Microsoft as the CEO of its Microsoft AI division earlier this year. When it hired Suleyman, Microsoft also brought over most of Inflection’s staff and folded them into Microsoft AI.

In this talk at Mindfest 2024, Hartmut Neven proposes that conscious moments are generated by the formation of quantum superpositions, challenging traditional views on the origins of consciousness.
