Archive for the ‘information science’ category: Page 160

Jan 12, 2021

Deconstructing Schrödinger’s Cat – Solving the Paradox

Posted in categories: information science, particle physics, quantum physics

The French theoretical physicist Franck Laloë presents a modification of Schrödinger’s famous equation that ensures all measured states are unique, helping to resolve the problem neatly encapsulated in the Schrödinger’s cat paradox.

The paradox of Schrödinger’s cat – the feline that is, famously, both alive and dead until its box is opened – is the most widely known example of a recurrent problem in quantum mechanics: its dynamics seems to predict that macroscopic objects (like cats) can, sometimes, exist simultaneously in more than one completely distinct state. Many physicists have tried to solve this paradox over the years, but no approach has been universally accepted. Now, however, theoretical physicist Franck Laloë from Laboratoire Kastler Brossel (ENS-Université PSL) in Paris has proposed a new interpretation that could explain many features of the paradox. He sets out a model of this possible theory in a new paper in EPJ D.

One approach to solving this problem involves adding a small, random extra term to the Schrödinger equation, which allows the quantum state vector to ‘collapse’, ensuring that – as is observed in the macroscopic universe – the outcome of each measurement is unique. Laloë’s theory combines this interpretation with another from de Broglie and Bohm and relates the origins of the quantum collapse to the universal gravitational field. This approach can be applied equally to all objects, quantum and macroscopic: that is, to cats as much as to atoms.
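As a generic illustration of this family of ‘collapse’ modifications (a textbook-style form, not Laloë’s specific gravitationally motivated model), a small stochastic term with collapse operator A, coupling strength λ and Wiener increment dW_t is appended to the usual evolution:

```latex
d|\psi_t\rangle = \left[ -\frac{i}{\hbar} H\,dt
  + \sqrt{\lambda}\,\bigl(A - \langle A\rangle_t\bigr)\,dW_t
  - \frac{\lambda}{2}\,\bigl(A - \langle A\rangle_t\bigr)^{2}\,dt \right] |\psi_t\rangle
```

The first term is ordinary Schrödinger evolution; the extra terms are negligible for microscopic systems but drive a macroscopic superposition rapidly toward a single outcome, which is why each measurement result is unique.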

Jan 12, 2021

Diffractive networks improve optical image classification accuracy

Posted in categories: information science, robotics/AI

Recently, there has been a resurgence of interest in optical computing platforms for artificial intelligence-related applications. Optics is ideally suited for realizing neural network models because of the high speed, large bandwidth and high interconnectivity of optical information processing. Introduced by UCLA researchers, Diffractive Deep Neural Networks (D2NNs) constitute such an optical computing framework, comprising successive transmissive and/or reflective diffractive surfaces that process input information through light-matter interaction. The surfaces are designed in a computer using standard deep learning techniques, then fabricated and assembled into a physical optical network. Experiments at terahertz wavelengths demonstrated the capability of D2NNs to classify objects all-optically. Beyond object classification, D2NNs have also been shown to perform a variety of optical design and computation tasks, including spectral filtering, spectral information encoding and optical pulse shaping.
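A minimal numerical sketch of a single diffractive layer, a per-pixel phase mask followed by free-space propagation, here computed with the angular-spectrum method. All physical parameters, and the random mask standing in for a trained surface, are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of one diffractive layer: a phase mask followed by
# free-space propagation (angular-spectrum method).
# All parameters below are illustrative assumptions.

N = 128            # grid size (pixels per side)
wl = 0.75e-3       # wavelength in metres (~0.4 THz, terahertz regime)
pitch = 0.4e-3     # pixel pitch in metres
z = 0.03           # layer-to-layer propagation distance in metres

def propagate(field, distance):
    """Angular-spectrum free-space propagation of a complex field."""
    fx = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wl**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent terms dropped
    H = np.exp(1j * kz * distance)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A 'trained' layer is just a per-pixel phase delay; here it is random.
phase_mask = np.exp(1j * 2 * np.pi * np.random.rand(N, N))

field_in = np.zeros((N, N), dtype=complex)
field_in[48:80, 48:80] = 1.0                # toy input aperture
field_out = propagate(field_in * phase_mask, z)
intensity = np.abs(field_out) ** 2          # what a detector would measure
print(intensity.max())
```

In the real system, training adjusts the phase values by backpropagating through a differentiable forward model of this kind, and the optimized masks are then fabricated as physical surfaces.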

In their latest paper, published in Light: Science & Applications, the UCLA team reports a leapfrog advance in D2NN-based image classification accuracy through ensemble learning. The key ingredient behind their approach can be intuitively understood through an experiment by Sir Francis Galton (1822–1911), the English polymath and statistician, who, while visiting a livestock fair, asked the participants to guess the weight of an ox. None of the hundreds of participants guessed the weight exactly, but to Galton’s astonishment, the median of all the guesses, 1,207 pounds, came within 1% of the true weight of 1,198 pounds. The experiment reveals the power of combining many imperfect predictions into a single, much more accurate one. Ensemble learning applies this idea to machine learning, improving predictive performance by combining multiple models.

In their scheme, the UCLA researchers formed an ensemble of multiple D2NNs operating in parallel, each individually trained and diversified by optically filtering its inputs with a different filter. An initial pool of 1,252 uniquely designed D2NNs was then reduced with an iterative pruning algorithm so that the resulting physical ensemble is not prohibitively large. The final prediction is a weighted average of the decisions of all the constituent D2NNs in the ensemble. The researchers evaluated the resulting D2NN ensembles on the CIFAR-10 image dataset, which contains 60,000 natural images categorized into 10 classes and is widely used for benchmarking machine learning algorithms. Simulations of the designed ensemble systems revealed that diffractive optical networks can benefit significantly from the ‘wisdom of the crowd’.
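A schematic version of the pruning-and-averaging step. The toy scores, uniform weights and greedy accuracy-based pruning below are simplifying assumptions; the actual system learns weighted averages over the filtered D2NNs:

```python
import numpy as np

# Sketch of ensembling + iterative pruning over per-model class scores.
# preds[i] holds model i's scores on a validation set: (n_samples, n_classes).
# Uniform weights and greedy backward pruning are simplifying assumptions.

rng = np.random.default_rng(0)
n_models, n_samples, n_classes = 12, 500, 10
labels = rng.integers(0, n_classes, n_samples)

# Toy predictions: noisy scores boosted around the true label.
preds = rng.random((n_models, n_samples, n_classes))
for i in range(n_models):
    preds[i, np.arange(n_samples), labels] += rng.uniform(0.2, 0.8)

def ensemble_accuracy(members):
    avg = preds[members].mean(axis=0)           # uniform-weight average
    return (avg.argmax(axis=1) == labels).mean()

members = list(range(n_models))
while len(members) > 4:                          # target ensemble size
    # Drop whichever member hurts validation accuracy least when removed.
    best = max(members, key=lambda m: ensemble_accuracy([x for x in members if x != m]))
    members.remove(best)

print("kept models:", members, "accuracy:", round(ensemble_accuracy(members), 3))
```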

Jan 11, 2021

Solar flow battery efficiently stores renewable energy in liquid form

Posted in categories: information science, solar power, sustainability

Capturing energy from the Sun with solar panels is only half the story – that energy needs to be stored somewhere for later use. In the case of flow batteries, storage is relegated to vats of liquid. Now, an international team led by University of Wisconsin-Madison scientists has created a new version of these solar flow batteries that’s efficient and long-lasting.

To make the new device, the team combined several existing technologies. It’s a silicon/perovskite tandem solar cell, paired with a redox flow battery, which the team says will allow people to harvest and store renewable energy in one device. Not only is it efficient, but it should be inexpensive and simple enough to scale up for home use.

The energy-harvesting part of the equation combines the long-time industry-leading material – silicon – with a promising young upstart called perovskite. These tandem solar cells have proved better than either material alone, since the two materials capture different wavelengths of light.

Jan 11, 2021

High-Frequency Traders Push Closer to Light Speed With Cutting-Edge Cables

Posted in categories: business, habitats, information science, internet

The cable, called hollow-core fiber, is a next-generation version of the fiber-optic cable used to deliver broadband internet to homes and businesses. Made of glass, such cables carry data encoded as beams of light. But instead of being solid, hollow-core fiber is empty inside, with dozens of parallel, air-filled channels narrower than a human hair.

Because light travels nearly 50% faster through air than glass, it takes about one-third less time to send data through hollow-core fiber than through the same length of standard fiber.
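As a back-of-envelope check of that arithmetic (the 1,000 km route is an assumed length for illustration):

```python
# Latency comparison for an assumed 1,000 km route.
C_KM_PER_S = 299_792   # speed of light in vacuum (air is nearly the same), km/s
N_GLASS = 1.5          # typical refractive index of solid silica fiber

route_km = 1_000
t_glass = route_km * N_GLASS / C_KM_PER_S   # light slows to c/n in glass
t_hollow = route_km / C_KM_PER_S            # hollow core: essentially c

print(f"solid fiber : {t_glass * 1e3:.2f} ms")   # ~5.00 ms
print(f"hollow core : {t_hollow * 1e3:.2f} ms")  # ~3.34 ms, about one-third less
```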

The difference is often just a minuscule fraction of a second. But in high-frequency trading, that can make the difference between profits and losses. HFT firms use sophisticated algorithms and ultrafast data networks to execute rapid-fire trades in stocks, options and futures. Many are secretive about their trading strategies and technology.

Jan 11, 2021

Team creates hybrid chips with processors and memory to run AI on battery-powered devices

Posted in categories: information science, robotics/AI

Smartwatches and other battery-powered electronics would be even smarter if they could run AI algorithms. But efforts to build AI-capable chips for mobile devices have so far hit the so-called “memory wall”: the separation between the processors and the memory chips that must work together to meet the massive and continually growing computational demands imposed by AI.

“Transactions between processors and memory can consume 95 percent of the energy needed to do machine learning and AI, and that severely limits battery life,” said computer scientist Subhasish Mitra, senior author of a new study published in Nature Electronics.
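A toy cost model shows why data movement dominates (every number below is an assumed order-of-magnitude placeholder, not a figure from the study):

```python
# Toy energy budget for one inference pass; all numbers are illustrative assumptions.
E_MAC = 1e-12         # joules per multiply-accumulate (assumed)
E_DRAM_BYTE = 2e-11   # joules to move one byte from off-chip memory (assumed)

macs = 5e8            # arithmetic operations in a small model
bytes_moved = 2e8     # weights and activations fetched from off-chip memory

e_compute = macs * E_MAC
e_memory = bytes_moved * E_DRAM_BYTE
print(f"memory share of energy: {e_memory / (e_compute + e_memory):.0%}")  # ~89%
```

Putting each processor directly beside its memory attacks the second, dominant term.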

Now, a team that includes Stanford computer scientist Mary Wootters and electrical engineer H.-S. Philip Wong has designed a system that can run AI tasks faster, and with less energy, by harnessing eight hybrid chips, each with its own data processor built right next to its own memory storage.

Jan 10, 2021

MIT Deep-Learning Algorithm Finds Hidden Warning Signals in Measurements Collected Over Time

Posted in categories: biotech/medical, information science, robotics/AI, satellites

A new deep-learning algorithm could provide advanced notice when systems — from satellites to data centers — are falling out of whack.

When you’re responsible for a multimillion-dollar satellite hurtling through space at thousands of miles per hour, you want to be sure it’s running smoothly. And time series can help.

A time series is simply a record of a measurement taken repeatedly over time. It can keep track of a system’s long-term trends and short-term blips. Examples include the infamous Covid-19 curve of new daily cases and the Keeling curve that has tracked atmospheric carbon dioxide concentrations since 1958. In the age of big data, “time series are collected all over the place, from satellites to turbines,” says Kalyan Veeramachaneni. “All that machinery has sensors that collect these time series about how they’re functioning.”
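The MIT system uses deep learning, but a far simpler stand-in conveys the task: flag a reading when it strays too far from the rolling statistics of recent history (the window size and threshold below are assumptions):

```python
import numpy as np

# Simple stand-in for time-series anomaly detection: flag points that deviate
# strongly from a rolling window of recent history. (The MIT work uses deep
# learning; this rolling z-score only illustrates the task.)

def flag_anomalies(series, window=50, threshold=4.0):
    flags = []
    for t in range(window, len(series)):
        recent = series[t - window:t]
        mu, sigma = recent.mean(), recent.std() + 1e-9
        if abs(series[t] - mu) / sigma > threshold:
            flags.append(t)
    return flags

rng = np.random.default_rng(1)
telemetry = rng.normal(0.0, 1.0, 1000)   # toy sensor stream
telemetry[700] += 8.0                     # injected fault
print(flag_anomalies(telemetry))          # typically flags index 700
```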

Jan 9, 2021

Artificial Intelligence Finds Hidden Roads Threatening Amazon Ecosystems

Posted in categories: information science, mapping, robotics/AI

(Inside Science) — It took years of painstaking work for Carlos Souza and his colleagues to map out every road in the Brazilian Amazon biome. Official maps of the 4.2 million-square-kilometer region show only roads built by federal and local governments. But by carefully tracing lines on satellite images, the researchers concluded in 2016 that the true combined length of all roads was nearly 13 times the official figure.

“When we don’t have a good understanding of how much roadless areas we have on the landscape, we probably will misguide any conservation plans for that territory,” said Souza, a geographer at a Brazil-based environmental nonprofit organization called Imazon.

Now, Imazon researchers have built an artificial intelligence algorithm to find such roads automatically. Currently, the algorithm is reaching about 70% accuracy, which rises to 87%-90% with some additional automated processing, said Souza. Analysts then confirm potential roads by examining the satellite images. Souza presented the research last month at a virtual meeting of the American Geophysical Union.

Jan 5, 2021

New Quantum Algorithms Finally Crack Nonlinear Equations

Posted in categories: computing, information science, quantum physics

Two teams found different ways for quantum computers to process nonlinear systems by first disguising them as linear ones.
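One classic way to disguise a nonlinear system as a linear one is Carleman embedding, sketched here for a toy scalar ODE. The coefficients and truncation order are assumptions; the published algorithms work with far larger systems and quantum linear-algebra subroutines:

```python
import numpy as np

# Carleman embedding sketch: rewrite the nonlinear ODE  dx/dt = a*x + b*x^2
# as a *linear* system in the monomials y_k = x^k, truncated at order N.
# Since dy_k/dt = k*x^(k-1)*(a*x + b*x^2) = k*a*y_k + k*b*y_(k+1),
# the truncated system is y' = A y with a bidiagonal matrix A.

a, b, N = -1.0, 0.5, 8          # toy coefficients and truncation order (assumed)
A = np.zeros((N, N))
for k in range(1, N + 1):
    A[k - 1, k - 1] = k * a
    if k < N:
        A[k - 1, k] = k * b      # the y_(N+1) term is dropped: truncation error

x0, T, steps = 0.4, 2.0, 20000
dt = T / steps

y = np.array([x0 ** k for k in range(1, N + 1)])  # linear, Carleman-embedded
x = x0                                            # direct nonlinear reference
for _ in range(steps):
    y = y + dt * (A @ y)
    x = x + dt * (a * x + b * x * x)

print(f"Carleman x(T) = {y[0]:.6f}   direct x(T) = {x:.6f}")  # nearly equal
```

Roughly speaking, a quantum computer then solves the resulting large, sparse linear system with established quantum linear-algebra routines; the truncation error stays controlled when the nonlinearity is weak.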

Jan 4, 2021

DUAL takes AI to the next level

Posted in categories: cybercrime/malcode, information science, robotics/AI

Scientists at DGIST in Korea, and UC Irvine and UC San Diego in the US, have developed a computer architecture that processes unsupervised machine learning algorithms faster, while consuming significantly less energy than state-of-the-art graphics processing units. The key is processing data where it is stored in computer memory and in an all-digital format. The researchers presented the new architecture, called DUAL, at the 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture.

“Today’s computer applications generate a large amount of data that needs to be processed by algorithms,” says Yeseong Kim of Daegu Gyeongbuk Institute of Science and Technology (DGIST), who led the effort.

Powerful “unsupervised” machine learning involves training an algorithm to recognize patterns in data without providing labeled examples for comparison. One popular approach is clustering, which groups similar data into different classes. These algorithms are used for a wide variety of data analyses, such as identifying communities on social media, filtering spam email and detecting criminal or fraudulent activity online.
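For concreteness, a minimal k-means loop in plain NumPy shows the kind of reference computation such architectures accelerate (the two-blob toy data are invented):

```python
import numpy as np

# Minimal k-means: the kind of unsupervised clustering workload that
# processing-in-memory architectures aim to accelerate.

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center (the memory-heavy step:
        # every point must be compared against every center).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(3, 0.5, (100, 2))])
labels, centers = kmeans(X, k=2)
print(centers.round(2))   # two centers, near (0,0) and (3,3)
```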

Jan 4, 2021

Breakthrough for Healthcare, Agriculture, Energy: Artificial Intelligence Reveals Recipe for Building Artificial Proteins

Posted in categories: biotech/medical, chemistry, food, information science, robotics/AI

Proteins are essential to cells, carrying out complex tasks and catalyzing chemical reactions. Scientists and engineers have long sought to harness this power by designing artificial proteins that can perform new tasks, such as treating disease, capturing carbon or harvesting energy, but many of the processes designed to create such proteins are slow and complex, with a high failure rate.

In a breakthrough that could have implications across the healthcare, agriculture, and energy sectors, a team led by researchers in the Pritzker School of Molecular Engineering at the University of Chicago has developed an artificial intelligence-led process that uses big data to design new proteins.

By developing machine-learning models that can review protein information culled from genome databases, the researchers found relatively simple design rules for building artificial proteins. When the team constructed these artificial proteins in the lab, the proteins carried out chemical processes so well that they rivaled those found in nature.
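A heavily simplified sketch of the statistical idea: learn per-position amino-acid frequencies from an aligned protein family, then sample new sequences consistent with them. The real models also capture couplings between positions, and the five-sequence ‘alignment’ below is invented for illustration:

```python
import numpy as np

# Toy version of sequence-statistics-based protein design: learn per-position
# amino-acid frequencies from an aligned family, then sample new sequences.
# (Real models also capture pairwise couplings; this alignment is invented.)

ALPHABET = list("ACDEFGHIKLMNPQRSTVWY")   # the 20 standard amino acids
family = [                                 # toy multiple-sequence alignment
    "MKVLA", "MKVLG", "MRVLA", "MKILA", "MKVMA",
]

L = len(family[0])
counts = np.ones((L, len(ALPHABET)))       # pseudocount of 1 per residue
for seq in family:
    for pos, aa in enumerate(seq):
        counts[pos, ALPHABET.index(aa)] += 1
profile = counts / counts.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
new_seq = "".join(rng.choice(ALPHABET, p=profile[pos]) for pos in range(L))
print(new_seq)   # a novel sequence statistically similar to the family
```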