Archive for the ‘information science’ category: Page 52

Feb 8, 2023

Deep learning for quantum sensing

Posted by in categories: information science, quantum physics, robotics/AI

Quantum sensing represents one of the most promising applications of quantum technologies, with the aim of using quantum resources to improve measurement sensitivity. In particular, sensing of optical phases is one of the most investigated problems, considered key to developing mass-produced technological devices.

Optimal usage of quantum sensors requires regular characterization and calibration. In general, such calibration is an extremely complex and resource-intensive task—especially when considering systems for estimating multiple parameters, due to the sheer volume of required measurements as well as the computational time needed to analyze those measurements. Machine-learning algorithms present a powerful tool to address that complexity. The discovery of suitable protocols for algorithm usage is vital for the development of sensors for precise quantum-enhanced measurements.

A particular type of machine-learning algorithm known as “reinforcement learning” (RL) relies on an intelligent agent guided by rewards: Depending on the rewards it receives, it learns to perform the right actions to achieve the desired optimization. The first experimental realizations using RL algorithms for the optimization of quantum problems have been reported only very recently. Most of them still rely on prior knowledge of the model describing the system. What is desirable is instead a completely model-free approach, which is possible when the agent’s reward does not depend on the explicit system model.
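The reward-driven loop described above can be sketched with a tiny model-free example: a tabular epsilon-greedy agent that learns which of three actions pays off best purely from observed rewards, with no model of the system. All names and numbers are illustrative, not taken from any of the experiments above.

```python
import random

def train_bandit(reward_probs, episodes=5000, eps=0.1, lr=0.1, seed=0):
    """Model-free learning: the agent sees only rewards, never the
    underlying reward probabilities (the 'system model')."""
    rng = random.Random(seed)
    q = [0.0] * len(reward_probs)          # estimated value of each action
    for _ in range(episodes):
        if rng.random() < eps:             # explore occasionally...
            a = rng.randrange(len(q))
        else:                              # ...otherwise exploit the best guess
            a = max(range(len(q)), key=q.__getitem__)
        reward = 1.0 if rng.random() < reward_probs[a] else 0.0
        q[a] += lr * (reward - q[a])       # incremental value update
    return q

q = train_bandit([0.2, 0.8, 0.5])
best = max(range(3), key=q.__getitem__)    # agent's preferred action
```

Nothing in the update rule references the reward probabilities themselves, which is what makes the approach model-free.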

Feb 7, 2023

N-Electron Valence Perturbation Theory with Reference Wave Functions from Quantum Computing: Application to the Relative Stability of Hydroxide Anion and Hydroxyl Radical

Posted by in categories: computing, information science, quantum physics

Quantum simulations of the hydroxide anion and hydroxyl radical are reported, employing variational quantum algorithms for near-term quantum devices. The energy of each species is calculated along the dissociation curve, to obtain information about the stability of the molecular species being investigated. It is shown that simulations restricted to valence spaces incorrectly predict the hydroxyl radical to be more stable than the hydroxide anion. Inclusion of dynamical electron correlation from nonvalence orbitals is demonstrated, through the integration of the variational quantum eigensolver and quantum subspace expansion methods in the workflow of N-electron valence perturbation theory, and shown to correctly predict the hydroxide anion to be more stable than the hydroxyl radical, provided that basis sets with diffuse orbitals are also employed.
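The variational quantum eigensolver at the heart of this workflow minimizes the energy expectation value of a parameterized trial state. As a toy classical stand-in (not the paper's actual workflow, ansatz, or Hamiltonian), one can scan a one-parameter ansatz over a 2x2 Hamiltonian and check that the variational minimum recovers the exact ground-state energy:

```python
import numpy as np

# Toy 2x2 "Hamiltonian"; its lowest eigenvalue is the true ground-state energy.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def energy(theta):
    """Expectation value <psi(theta)|H|psi(theta)> for a one-parameter ansatz."""
    psi = np.array([np.cos(theta), np.sin(theta)])
    return psi @ H @ psi

# Classical outer loop: scan the variational parameter and keep the minimum,
# mimicking the optimizer that drives a VQE.
thetas = np.linspace(0, np.pi, 2001)
e_min = min(energy(t) for t in thetas)

exact = np.linalg.eigvalsh(H)[0]  # exact ground-state energy for comparison
```

Because this ansatz can reach every real two-component unit vector, the variational minimum coincides with the exact answer; in real molecular simulations the ansatz is only an approximation, which is why corrections such as perturbation theory matter.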

Feb 7, 2023

A New AI Research From MIT Reduces Variance in Denoising Score-Matching, Improving Image Quality, Stability, and Training Speed in Diffusion Models

Posted by in categories: information science, robotics/AI

Diffusion models have recently produced outstanding results on various generative tasks, including the creation of images, 3D point clouds, and molecular conformers. Itô stochastic differential equations (SDEs) provide a unified framework that encompasses these models. The models learn time-dependent score fields through score matching, which then guide the reverse SDE during generative sampling. Variance-exploding (VE) and variance-preserving (VP) SDEs are common diffusion models. EDM offers the best performance to date by building on these formulations. Despite these outstanding empirical results, the existing training method for diffusion models can still be improved.

The Stable Target Field (STF) objective is a generalized variation of the denoising score-matching objective. In particular, the high variance of the denoising score matching (DSM) objective's training targets can result in subpar performance. To better understand the cause of this variance, the authors divide the score field into three regimes. According to their analysis, the problem arises mostly in the intermediate regime, where multiple modes or data points have a comparable influence on the scores. In other words, in this regime it is ambiguous which data point a noisy sample produced during the forward process originated from. Figure 1(a) illustrates the differences between the DSM objective and the proposed STF objective.

Figure 1: Contrasts between the DSM objective and the proposed STF objective.
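A minimal numpy sketch of the intermediate-regime ambiguity: the plain DSM target points back to the single data point that generated the noisy sample, while a stabilized target (in the spirit of STF, though this is an illustrative simplification, not the authors' code) averages the presumed origin over a reference batch, weighted by how likely each point is to have produced the sample.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(256, 2))       # toy dataset standing in for images
sigma = 0.5                            # noise level of the forward process

x0 = data[0]
xt = x0 + sigma * rng.normal(size=2)   # one noisy sample from the forward process

# DSM target: points back toward the single data point that generated xt.
dsm_target = -(xt - x0) / sigma**2

# Stabilized target (sketch): average the presumed origin over a reference
# batch, weighted by how likely each data point is to have produced xt.
logw = -((xt - data) ** 2).sum(axis=1) / (2 * sigma**2)
w = np.exp(logw - logw.max())          # shift for numerical stability
w /= w.sum()
posterior_mean = w @ data
stf_target = -(xt - posterior_mean) / sigma**2
```

As the reference batch grows, the weighted mean approaches the true posterior mean of the origin, which is what reduces the variance of the training target.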

Feb 7, 2023

Echolocation could give small robots the ability to find lost people

Posted by in categories: drones, information science, robotics/AI

Scientists and roboticists have long looked to nature for inspiration when developing new features for machines. In this case, researchers from École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland were inspired by bats and other animals that rely on echolocation to design a method that gives small robots the ability to navigate on their own, without expensive hardware or components too large or too heavy for tiny machines. In fact, according to PopSci, the team used only the integrated audio hardware of an interactive puck robot, and built an audio extension deck from a cheap microphone and speakers for a tiny flying drone that fits in the palm of your hand.

The system works much like bat echolocation: it emits sounds across a range of frequencies, and a robot’s microphone picks them up as they bounce off the walls. An algorithm the team created then analyzes the sound waves and creates a map of the room’s dimensions.
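The core geometry is simple: an echo delayed by t seconds corresponds to a wall at distance d = c·t/2. A hedged numpy sketch (not the EPFL team's algorithm, which builds full room maps) recovers that distance from a synthetic chirp and its echo via cross-correlation:

```python
import numpy as np

FS = 44100          # sample rate (Hz)
C = 343.0           # speed of sound (m/s)

def echo_distance(emitted, recorded, fs=FS, c=C):
    """Estimate wall distance from the delay between an emitted chirp
    and its echo, found as the cross-correlation peak."""
    corr = np.correlate(recorded, emitted, mode="full")
    delay = corr.argmax() - (len(emitted) - 1)   # samples until echo peak
    return c * (delay / fs) / 2                  # round trip -> one-way distance

# Synthetic check: a chirp echoed back after the round trip to a wall 2 m away.
t = np.linspace(0, 0.01, int(FS * 0.01), endpoint=False)
chirp = np.sin(2 * np.pi * (2000 + 3e5 * t) * t)       # 2-8 kHz sweep
delay_samples = int(round(2 * 2.0 / C * FS))           # round-trip samples for 2 m
recorded = np.concatenate([np.zeros(delay_samples), 0.3 * chirp])

d = echo_distance(chirp, recorded)
```

The factor of two converts the round-trip delay into a one-way distance; the 2 m test wall is recovered to within the timing resolution of a single sample.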

In a paper published in IEEE Robotics and Automation Letters, the researchers said existing “algorithms for active echolocation are less developed and often rely on hardware requirements that are out of reach for small robots.” They also said their “method is model-based, runs in real time and requires no prior calibration or training.” Their solution could give small machines the capability to be sent on search-and-rescue missions or to previously uncharted locations that bigger robots wouldn’t be able to reach. And since the system only needs onboard audio equipment or cheap additional hardware, it has a wide range of potential applications.

Feb 7, 2023

AI can predict the effectiveness of breast cancer chemotherapy

Posted by in categories: biotech/medical, information science, robotics/AI

Engineers at the University of Waterloo have developed artificial intelligence (AI) technology to predict if women with breast cancer would benefit from chemotherapy prior to surgery.

The new AI algorithm, part of the open-source Cancer-Net initiative led by Dr. Alexander Wong, could help unsuitable candidates avoid the serious side effects of chemotherapy and pave the way for better surgical outcomes for those who are suitable.

“Determining the right treatment for a given breast cancer patient is very difficult right now, and it is crucial to avoid unnecessary side effects from using treatments that are unlikely to have real benefit for that patient,” said Wong, a professor of systems design engineering.

Feb 7, 2023

An extension of FermiNet to discover quantum phase transitions

Posted by in categories: chemistry, information science, quantum physics, robotics/AI

Architectures based on artificial neural networks (ANNs) have proved to be very helpful in research settings, as they can quickly analyze vast amounts of data and make accurate predictions. In 2020, Google’s British AI subsidiary DeepMind used a new ANN architecture dubbed the Fermionic neural network (FermiNet) to solve the Schrödinger equation for electrons in molecules, a central problem in the field of chemistry.

The Schrödinger equation is a partial differential equation grounded in the well-established theory of energy conservation; it can be used to derive information about the behavior of electrons and to solve problems related to the properties of matter. Using FermiNet, a conceptually simple method, DeepMind was able to solve this equation in the context of chemistry, attaining very accurate results comparable to those obtained using highly sophisticated quantum chemistry techniques.
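FermiNet itself is far beyond a snippet, but the equation it solves can be illustrated on the simplest possible case. A finite-difference sketch (atomic units, particle in an infinite square well, purely illustrative) discretizes the time-independent equation and compares the lowest eigenvalues to the known analytic result:

```python
import numpy as np

# Finite-difference sketch of -1/2 psi'' + V psi = E psi on a grid
# (atomic units), for a particle in an infinite square well of width L.
n, L = 500, 1.0
dx = L / (n + 1)
x = np.linspace(dx, L - dx, n)
V = np.zeros(n)                       # V = 0 inside the well

# Hamiltonian from the standard three-point second-derivative stencil.
main = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)[:3]              # lowest three eigenvalues
exact = [(k * np.pi) ** 2 / 2 for k in (1, 2, 3)] # analytic levels k^2 pi^2 / 2
```

Real electronic-structure problems replace this one-dimensional grid with a many-electron wavefunction, which is exactly where neural-network ansätze such as FermiNet come in.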

Researchers at Imperial College London, DeepMind, Lancaster University, and the University of Oxford recently adapted the FermiNet architecture to tackle a quantum physics problem. In their paper, published in Physical Review Letters, they used FermiNet to calculate the ground states of periodic Hamiltonians and to study the homogeneous electron gas (HEG), a simplified quantum mechanical model of electrons interacting in solids.

Feb 6, 2023

Code-generating platform Magic challenges GitHub’s Copilot with $23M in VC backing

Posted by in categories: information science, robotics/AI

Magic, a startup developing a code-generating platform similar to GitHub’s Copilot, today announced that it raised $23 million in a Series A funding round led by Alphabet’s CapitalG with participation from Elad Gil, Nat Friedman and Amplify Partners. So what’s its story?

Magic’s CEO and co-founder, Eric Steinberger, says that he was inspired by the potential of AI at a young age. In high school, he and his friends wired up the school’s computers for machine learning algorithm training, an experience that planted the seeds for Steinberger’s computer science degree and his job at Meta as an AI researcher.

“I spent years exploring potential paths to artificial general intelligence, and then large language models (LLMs) were invented,” Steinberger told TechCrunch in an email interview. “I realized that combining LLMs trained on code with my research on neural memory and reinforcement learning might allow us to build an AI software engineer that feels like a true colleague, not just a tool. This would be extraordinarily useful for companies and developers.”

Feb 6, 2023

Vectors of Cognitive AI: Attention

Posted by in categories: information science, robotics/AI

Panelists: Michael Graziano, Jonathan Cohen, Vasudev Lal, Joscha Bach.

The seminal contribution “Attention Is All You Need” (Vaswani et al., 2017), which introduced the Transformer architecture, triggered a small revolution in machine learning. Unlike convolutional neural networks, which construct each feature out of a fixed neighborhood of signals, Transformers learn which data a feature on the next layer of a neural network should attend to. However, attention in neural networks is very different from the integrated attention of a human mind. In our minds, attention appears to be part of a top-down mechanism that actively creates a coherent, dynamic model of reality and plays a crucial role in planning, inference, reflection, and creative problem solving. Our consciousness appears to be involved in maintaining the control model of our attention.
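The attention mechanism the paper introduced can be written in a few lines of numpy. This is the standard scaled dot-product form, with random matrices standing in for learned queries, keys, and values:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)    # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each output row is a data-dependent
    mixture of the rows of V, weighted by query-key similarity."""
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # who attends to whom
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, dimension 8
K = rng.normal(size=(6, 8))   # 6 key/value positions
V = rng.normal(size=(6, 8))
out, w = attention(Q, K, V)
```

Each row of `w` is a probability distribution over the six key positions, so every output row is a convex combination of rows of `V`; this learned, input-dependent routing is what distinguishes attention from a fixed convolutional neighborhood.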


Feb 5, 2023

Generalist AI beyond Deep Learning

Posted by in categories: biological, information science, robotics/AI

Generative AI represents a big breakthrough towards models that can make sense of the world by dreaming up visual, textual, and conceptual representations, and such models are becoming increasingly generalist. While these AI systems are currently based on scaling up deep learning algorithms with massive amounts of data and compute, biological systems seem to be able to make sense of the world using far fewer resources. This phenomenon of efficient intelligent self-organization still eludes AI research, creating an exciting new frontier for the next wave of developments in the field. Our panelists will explore the potential of incorporating principles of intelligent self-organization from biology and cybernetics into technical systems as a way to move closer to general intelligence. Join in on this exciting discussion about the future of AI and how we can move beyond traditional approaches like deep learning!

This event is hosted and sponsored by Intel Labs as part of the Cognitive AI series.

Feb 4, 2023

Google’s ChatGPT rival to be released in coming ‘weeks and months’

Posted by in categories: information science, robotics/AI

“We are just at the beginning of our AI journey, and the best is yet to come,” said Google’s CEO.

Search engine giant Google is looking to make its artificial intelligence (A.I.)-based large language models available as a “companion to search,” CEO Sundar Pichai said during an earnings report on Thursday, according to Bloomberg.

A large language model (LLM) is a deep learning algorithm that can recognize and summarize content from massive datasets and use it to predict or generate text. OpenAI’s GPT-3 is one such LLM that powers the hugely popular chatbot, ChatGPT.
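Prediction and generation in an LLM reduce to repeatedly scoring candidate next tokens and appending one. The sketch below replaces the neural network with a tiny hand-built bigram table (purely illustrative) so the greedy decoding loop itself is visible:

```python
import numpy as np

# Toy "language model": a bigram logit table standing in for a trained LLM.
# A real LLM produces next-token logits with a deep network; here a lookup
# table plays that role.
vocab = ["the", "best", "is", "yet", "to", "come"]
idx = {w: i for i, w in enumerate(vocab)}
logits_table = np.full((6, 6), -10.0)
for a, b in [("the", "best"), ("best", "is"), ("is", "yet"),
             ("yet", "to"), ("to", "come")]:
    logits_table[idx[a], idx[b]] = 5.0   # strongly preferred continuation

def generate(prompt, steps):
    """Greedy decoding: repeatedly pick the highest-scoring next token."""
    tokens = [idx[prompt]]
    for _ in range(steps):
        tokens.append(int(logits_table[tokens[-1]].argmax()))
    return " ".join(vocab[t] for t in tokens)

text = generate("the", 5)
```

Sampling from the softmax of the logits instead of taking the argmax is what gives chatbots like ChatGPT their varied, non-deterministic output.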
