
Archive for the ‘robotics/AI’ category: Page 104

Jun 10, 2024

ATLAS chases long-lived particles with the Higgs boson

Posted in categories: cosmology, information science, particle physics, robotics/AI

The Higgs boson has an extremely short lifespan, living for about 10⁻²² seconds before decaying into other particles. For comparison, in that brief time light can only travel about the width of a small atomic nucleus. Scientists study the Higgs boson by detecting its decay products in particle collisions at the Large Hadron Collider. But what if the Higgs boson could also decay into unexpected new particles that are long-lived? What if these particles could travel a few centimeters through the detector before decaying? Such long-lived particles (LLPs) could shed light on some of the universe’s biggest mysteries, such as the reason matter prevailed over antimatter in the early universe and the nature of dark matter.

Searching for LLPs is extremely challenging because they rarely interact with matter, making them difficult to observe in a particle detector. However, their unusual signatures provide exciting prospects for discovery. Unlike particles that leave a continuous track, LLPs produce a noticeable displacement between their production and decay points within the detector. Identifying such a signature requires dedicated algorithms. In a new study submitted to Physical Review Letters, ATLAS scientists used a new algorithm to search for LLPs produced in the decay of Higgs bosons.

Boosting sensitivity with a new algorithm

Figure 1: A comparison of the radial distributions of reconstructed displaced vertices in a simulated long-lived particle (LLP) sample using the legacy and updated track-reconstruction configurations. Circular markers represent reconstructed vertices matched to LLP decay vertices; dashed lines represent reconstructed vertices from background (non-LLP) decay vertices. (Image: ATLAS Collaboration/CERN)

Despite being critical to LLP searches, dedicated reconstruction algorithms were previously so resource-intensive that they could only be applied to less than 10% of all recorded ATLAS data. Recently, however, ATLAS scientists implemented a new “Large-Radius Tracking” (LRT) algorithm, which significantly speeds up the reconstruction of charged-particle trajectories in the ATLAS Inner Detector that do not point back to the primary proton-proton collision point, while drastically reducing backgrounds and random combinations of detector signals. The LRT algorithm is executed after the primary tracking iteration, using exclusively the detector hits (energy deposits from charged particles recorded in individual detector elements) not already assigned to primary tracks. As a result, ATLAS saw an enormous increase in the efficiency of identifying LLP decays (see Figure 1). The new algorithm also improved CPU processing time more than tenfold compared to the legacy implementation, and disk-space usage per event was reduced by more than a factor of 50. These improvements enabled physicists to fully integrate the LRT algorithm into the standard ATLAS event-reconstruction chain. Now every recorded collision event can be scrutinized for the presence of new LLPs, greatly enhancing the discovery potential of such signatures.

Physicists are searching for Higgs bosons decaying into new long-lived particles, which may leave a “displaced” signature in the ATLAS detector.
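To make the two-pass idea concrete, here is a minimal toy sketch of that control flow. This is hypothetical illustration code, not ATLAS software: the hits, the seeding, and the “fit” are invented stand-ins, and only the structure (a prompt pass that claims hits, followed by a large-radius pass over the leftovers with a looser impact-parameter cut) reflects the description above.

```python
# Toy sketch of a two-pass tracking strategy in the spirit of Large-Radius
# Tracking (LRT). NOT ATLAS software: the hits, the seeding, and the "fit"
# are invented stand-ins; only the control flow mirrors the description above.
import math
from dataclasses import dataclass

@dataclass
class Hit:
    x: float
    y: float
    assigned: bool = False   # True once a first-pass track claims this hit

def toy_impact_parameter(seed):
    """Toy 'fit': the distance of the seed's first hit from the origin
    stands in for the transverse impact parameter d0 of a fitted track."""
    return math.hypot(seed[0].x, seed[0].y)

def make_seeds(hits, size=3):
    """Toy seeding: group consecutive hits into fixed-size candidates."""
    return [hits[i:i + size] for i in range(0, len(hits) - size + 1, size)]

def run_pass(hits, max_d0, claim):
    """One tracking pass: keep seeds whose d0 is within max_d0 (mm)."""
    tracks = []
    for seed in make_seeds(hits):
        d0 = toy_impact_parameter(seed)
        if d0 <= max_d0:
            if claim:                      # the primary pass claims its hits
                for h in seed:
                    h.assigned = True
            tracks.append((seed, d0))
    return tracks

hits = [Hit(0.1 * i, 0.05 * i) for i in range(30)]
prompt_tracks = run_pass(hits, max_d0=2.0, claim=True)
# The LRT pass runs only on hits left unclaimed by the primary pass,
# with a much looser impact-parameter cut to catch displaced tracks.
leftovers = [h for h in hits if not h.assigned]
displaced_tracks = run_pass(leftovers, max_d0=300.0, claim=False)
print(len(prompt_tracks), "prompt;", len(displaced_tracks), "displaced")
```

Restricting the second pass to leftover hits is what tames the combinatorial background and the computing cost described above.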
Exploring the dark with the Higgs boson

Figure 2: Observed 95% confidence-level limits on the decay of the Higgs boson to a pair of long-lived s particles that decay back to Standard-Model particles, shown as a function of the mean proper decay length (cτ) of the long-lived particle. The observed limits for the Higgs-portal model from the previous ATLAS search are shown with dotted lines. (Image: ATLAS Collaboration/CERN)

In their new result, ATLAS scientists employed the LRT algorithm to search for LLPs that decay hadronically, leaving a distinct signature of one or more hadronic “jets” of particles originating at a position significantly displaced from the proton–proton collision point (a displaced vertex). Physicists focused on the Higgs “portal” model, in which the Higgs boson mediates interactions with dark-matter particles through its coupling to a neutral boson s, resulting in exotic decays of the Higgs boson to a pair of long-lived s particles that decay into Standard-Model particles.

The ATLAS team studied collision events with characteristics consistent with the production of a Higgs boson. The background processes that mimic the LLP signature are complex and challenging to model. To achieve good discrimination between signal and background, ATLAS physicists used a machine-learning algorithm trained to isolate events with jets arising from LLP decays. Complementary to this, a dedicated displaced-vertex reconstruction algorithm was used to pinpoint the origin of hadronic jets from LLP decays.

The new search did not uncover any events featuring Higgs-boson decays to LLPs. It improves the bounds on such decays by a factor of 10 to 40 compared to the previous search using the same dataset (see Figure 2). For the first time at the LHC, bounds on exotic decays of the Higgs boson at low LLP masses (below 16 GeV) have surpassed those from direct searches for exotic Higgs-boson decays to undetected states.

About the event display: A 13 TeV collision event recorded by the ATLAS experiment containing two decay vertices (blue circles) significantly displaced from the beam line, together with “prompt” non-displaced decay vertices (pink circles). The event characteristics are compatible with a Higgs boson produced in association with a Z boson (decaying to two electrons, indicated by green towers) and decaying into two LLPs (each decaying into two b-quarks). Tracks are shown in yellow and jets are indicated by cones. The green and yellow blocks correspond to energy deposits in the electromagnetic and hadronic calorimeters, respectively. (Image: ATLAS Collaboration/CERN)

Learn more:

Search for light long-lived particles in proton-proton collisions at 13 TeV using displaced vertices in the ATLAS inner detector (submitted to PRL, arXiv:2403.15332)

Performance of the reconstruction of large impact parameter tracks in the inner detector of ATLAS (Eur. Phys. J. C 83 (2023) 1081, arXiv:2304.12867)

Search for exotic decays of the Higgs boson into long-lived particles in proton-proton collisions at 13 TeV using displaced vertices in the ATLAS inner detector (JHEP 11 (2021) 229, arXiv:2107.06092)

Jun 10, 2024

AI Used to Predict Potential New Antibiotics

Posted in categories: biotech/medical, chemistry, robotics/AI

A new study used machine learning to predict potential new antibiotics in the global microbiome, which study authors say marks a significant advance in the use of artificial intelligence in antibiotic resistance research…

For this study, the researchers collected genomes and metagenomes stored in publicly available databases and looked for DNA snippets that could have antimicrobial activity. To validate those predictions, they used chemistry to synthesize 100 of those molecules in the laboratory and then tested them to determine whether they could actually kill bacteria, including “some of the most dangerous pathogens in our society,” de la Fuente said.
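As a rough illustration of what one step of such a mining pipeline can look like, here is a hypothetical sketch (the scoring function, sequences, and threshold are all invented for the example, not the study’s actual model): candidate peptides translated from mined DNA are scored by a classifier, and the top predictions are shortlisted for synthesis and lab testing.

```python
# Hypothetical sketch of an antimicrobial-peptide mining step: score
# candidate sequences from (meta)genomic data and shortlist the best
# for laboratory synthesis. The scoring model is a toy stand-in.

def toy_amp_score(peptide: str) -> float:
    """Toy score: fraction of cationic residues (K/R), a crude proxy for
    one property real antimicrobial-peptide classifiers learn."""
    if not peptide:
        return 0.0
    return sum(aa in "KR" for aa in peptide) / len(peptide)

# Invented candidate snippets (for illustration only).
candidates = {
    "cand_001": "GLFKKLLKKVKKV",
    "cand_002": "AAAGGGSSSTTT",
    "cand_003": "KWKLFKKIGAVLKVL",
}

THRESHOLD = 0.3  # arbitrary cut-off for this toy example
shortlist = {name: seq for name, seq in candidates.items()
             if toy_amp_score(seq) >= THRESHOLD}
print(sorted(shortlist))  # these would go on to synthesis and testing
```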

Jun 10, 2024

New method could allow multi-robot teams to autonomously and reliably explore other planets

Posted in categories: robotics/AI, space

While roboticists have developed increasingly sophisticated systems over the past decades, ensuring that these systems can autonomously operate in real-world settings without mishaps often proves challenging. This is particularly difficult when these robots are designed to be deployed in complex environments, including space and other planets.

Jun 9, 2024

Mixture-of-Agents Enhances Large Language Model Capabilities

Posted in category: robotics/AI

From Duke University, Together AI, the University of Chicago, and Stanford University.

Recent advances in large language models (LLMs) demonstrate substantial capabilities in natural language understanding and…
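As a rough sketch of the layered mixture-of-agents pattern (hypothetical code: `call_llm`, the prompt wording, and the layer layout are invented stand-ins, not the authors’ implementation), each layer of LLM “agents” answers the prompt with the previous layer’s answers as auxiliary context, and an aggregator model synthesizes the final layer’s outputs:

```python
# Minimal sketch of a mixture-of-agents pipeline. `call_llm` is a
# hypothetical stand-in for any chat-completion API call.
from typing import Callable, List

def mixture_of_agents(
    prompt: str,
    layers: List[List[str]],              # model names for each layer
    aggregator: str,                      # model that writes the final answer
    call_llm: Callable[[str, str], str],  # (model, full_prompt) -> response
) -> str:
    previous: List[str] = []
    for layer in layers:
        context = "\n\n".join(
            f"Candidate answer {i + 1}:\n{r}" for i, r in enumerate(previous)
        )
        # First layer sees only the prompt; later layers also see the
        # previous layer's candidate answers as auxiliary information.
        full = prompt if not context else (
            f"{prompt}\n\nYou are given candidate answers from other "
            f"assistants. Use them to produce a better answer.\n\n{context}"
        )
        previous = [call_llm(model, full) for model in layer]
    synthesis = (
        f"{prompt}\n\nSynthesize the following candidate answers into a "
        f"single high-quality response.\n\n"
        + "\n\n".join(f"Candidate {i + 1}:\n{r}" for i, r in enumerate(previous))
    )
    return call_llm(aggregator, synthesis)
```

With a real chat-completion client plugged in as `call_llm`, `layers` might be two or three lists of model names, with a stronger model used as the `aggregator`.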

Jun 9, 2024

The Future of AI and 5G: Scientists Develop the First Universal, Programmable, and Multifunctional Photonic Chip

Posted in categories: drones, internet, quantum physics, robotics/AI, satellites

Researchers from the Photonics Research Laboratory (PRL)-iTEAM at the Universitat Politècnica de València, in collaboration with iPRONICS, have developed a groundbreaking photonic chip. This chip is the world’s first to be universal, programmable, and multifunctional, making it a significant advancement for the telecommunications industry, data centers, and AI computing infrastructures. It is poised to enhance a variety of applications including 5G communications, quantum computing, data centers, artificial intelligence, satellites, drones, and autonomous vehicles.

The development of this revolutionary chip is the main result of the European project UMWP-Chip, led by researcher José Capmany and funded by an ERC Advanced Grant from the European Research Council. The work has been published in Nature Communications.

Jun 9, 2024

LightSpy Spyware’s macOS Variant Found with Advanced Surveillance Capabilities

Posted in categories: cybercrime/malcode, robotics/AI, surveillance

Cybersecurity researchers have disclosed that the LightSpy spyware recently identified as targeting Apple iOS users is in fact a previously undocumented macOS variant of the implant.

The findings come from both Huntress Labs and ThreatFabric, which separately analyzed the artifacts associated with the cross-platform malware framework that likely possesses capabilities to infect Android, iOS, Windows, macOS, Linux, and routers from NETGEAR, Linksys, and ASUS.

“The Threat actor group used two publicly available exploits (CVE-2018-4233, CVE-2018-4404) to deliver implants for macOS,” ThreatFabric said in a report published last week. “Part of the CVE-2018-4404 exploit is likely borrowed from the Metasploit framework. macOS version 10 was targeted using those exploits.”

Jun 9, 2024

Thousands of companies using Ray framework exposed to cyberattacks, researchers say

Posted in categories: cybercrime/malcode, robotics/AI

Researchers are warning that hackers are actively exploiting a disputed vulnerability in a popular open-source AI framework known as Ray.

This tool is commonly used to develop and deploy large-scale Python applications, particularly for tasks like machine learning, scientific computing and data processing.

According to Ray’s developer, Anyscale, the framework is used by major tech companies such as Uber, Amazon and OpenAI.

Jun 9, 2024

AI firm Hugging Face discloses leak of secrets on its Spaces platform

Posted in categories: robotics/AI, security

The disclosure notice also noted several security changes made to the Spaces platform in response to the leak, including the removal of org tokens to improve traceability and auditing capabilities, and the implementation of a key management service (KMS) for Spaces secrets.

Hugging Face said it plans to deprecate traditional read and write tokens “in the near future,” replacing them with fine-grained access tokens, which are currently the default.

Spaces users are advised to switch their Hugging Face tokens to fine-grained access tokens if they are not already using them, and to refresh any key or token that may have been exposed.
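As a minimal sketch of that advice in practice (the environment-variable name is a placeholder; `HfApi.whoami` is a real huggingface_hub call), one way to confirm what a newly issued fine-grained token resolves to before relying on it:

```python
# Minimal sketch: after rotating to a fine-grained access token, verify
# what the token authenticates as before relying on it. Reading the token
# from the environment (rather than hard-coding it) avoids re-leaking it.
import os
from huggingface_hub import HfApi

token = os.environ["HF_TOKEN"]       # placeholder variable name
info = HfApi().whoami(token=token)   # raises if the token is invalid/revoked
print(info["name"])                  # the account this token acts as
```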

Jun 9, 2024

AI Will Become Mathematicians’ ‘Co-Pilot’

Posted in categories: mathematics, robotics/AI

Fields Medalist Terence Tao explains how proof checkers and AI programs are dramatically changing mathematics.

By Christoph Drösser

Mathematics is traditionally a solitary science. In 1986 Andrew Wiles withdrew to his study for seven years to prove Fermat’s Last Theorem. The resulting proofs are often difficult for colleagues to understand, and some are still controversial today. But in recent years ever-larger areas of mathematics have been so strictly broken down into their individual components (“formalized”) that proofs can be checked and verified by computers.
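To see what “formalized” means in practice, here is a tiny machine-checkable example in Lean 4 (a standard textbook fact, not taken from the article): the proof checker accepts the file only if every inference step is valid, so colleagues need not take the author’s word for it.

```lean
-- A machine-checkable statement and proof in Lean 4, citing a library lemma:
theorem sum_comm (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- The same fact proved from scratch by induction; the kernel verifies
-- each rewrite step rather than trusting the author.
theorem sum_comm' (a b : Nat) : a + b = b + a := by
  induction b with
  | zero => rw [Nat.add_zero, Nat.zero_add]
  | succ n ih => rw [Nat.add_succ, Nat.succ_add, ih]
```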

Jun 9, 2024

The Missing Piece: Combining Foundation Models and Open-Endedness for Artificial Superhuman Intelligence (ASI)

Posted in categories: information science, robotics/AI

Recent advances in artificial intelligence, primarily driven by foundation models, have enabled impressive progress. However, achieving artificial general intelligence, which involves reaching human-level performance across various tasks, remains a significant challenge. A critical missing component is a formal description of what it would take for an autonomous system to self-improve towards increasingly creative and diverse discoveries without end, a “Cambrian explosion” of emergent capabilities, behaviors, and artifacts; that is, the creation of open-ended, ever-self-improving AI remains elusive. This open-ended invention is how humans and society accumulate new knowledge and technology, making it essential for artificial superhuman intelligence.

DeepMind researchers propose a concrete formal definition of open-endedness in AI systems from the perspective of novelty and learnability. They illustrate a path towards achieving artificial superhuman intelligence (ASI) by developing open-ended systems built upon foundation models. These open-ended systems would be capable of making robust, relevant discoveries that are understandable and beneficial to humans. The researchers argue that such open-endedness, enabled by the combination of foundation models and open-ended algorithms, is an essential property for any ASI system to continuously expand its capabilities and knowledge in a way that can be utilized by humanity.

The researchers provide a formal definition of open-endedness from the perspective of an observer. An open-ended system produces a sequence of artifacts that are both novel and learnable. Novelty is defined as artifacts becoming increasingly unpredictable to the observer’s model over time. Learnability requires that conditioning on a longer history of past artifacts makes future artifacts more predictable. The observer uses a statistical model to predict future artifacts based on the history, judging the quality of predictions using a loss metric. Interestingness is represented by the observer’s choice of loss function, capturing which features they find useful to learn about. This formal definition quantifies the key intuition that an open-ended system endlessly generates artifacts that are both novel and meaningful to the observer.
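One way to write those two conditions symbolically (my paraphrase of the description above, not necessarily the paper’s exact notation):

```latex
% Notation: x_t is the artifact produced at time t and x_{1:t} the history.
% Let L(t' \mid t) denote the observer's expected loss when predicting
% artifact x_{t'} with a model conditioned on the history x_{1:t}:
\[
  L(t' \mid t) \;=\; \mathbb{E}\big[\,\ell\big(\hat{x}_{t'}(x_{1:t}),\, x_{t'}\big)\,\big].
\]
% Novelty: for a fixed history, later artifacts are harder to predict,
\[
  t' > s' > t \;\Longrightarrow\; L(t' \mid t) \;>\; L(s' \mid t);
\]
% Learnability: for a fixed target, a longer history makes it easier,
\[
  t' > t > s \;\Longrightarrow\; L(t' \mid t) \;<\; L(t' \mid s).
\]
```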
