Quantum computing algorithms can simulate infinitely large quantum systems thanks to mathematical tools known as tensor networks.
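Tensor networks make this possible by encoding a translationally invariant, infinitely long chain in a single repeated tensor. As a toy illustration (a generic uniform matrix product state, not any specific paper's method; all numbers here are random), one can compute a per-site observable of the infinite chain from one small tensor:

```python
import numpy as np

# A uniform matrix product state (MPS) describes an infinite, translationally
# invariant spin chain with a single repeated tensor A[physical, left, right].
d, D = 2, 4  # physical dimension (spin-1/2) and bond dimension
A = np.random.default_rng(1).normal(size=(d, D, D))

# The transfer matrix E = sum_s A[s] (x) conj(A[s]) summarizes one site; its
# dominant left/right eigenvectors act as the infinite environment on
# either side of any given site.
E = np.einsum('sij,skl->ikjl', A, A.conj()).reshape(D * D, D * D)
w_r, v_r = np.linalg.eig(E)
r = v_r[:, np.argmax(np.abs(w_r))].reshape(D, D)   # right environment
w_l, v_l = np.linalg.eig(E.T)
l = v_l[:, np.argmax(np.abs(w_l))].reshape(D, D)   # left environment

# Per-site expectation value of Pauli-Z on the infinite chain:
Z = np.diag([1.0, -1.0])
num = np.einsum('ij,sik,st,tjl,kl->', l, A, Z, A.conj(), r)
den = np.einsum('ij,sik,sjl,kl->', l, A, A.conj(), r)
print((num / den).real)  # <Z> per site, from finitely many numbers
```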
Google, Nvidia, and others are training algorithms in the dark arts of designing semiconductors—some of which will be used to run artificial intelligence programs.
The US Defense Advanced Research Projects Agency (DARPA) has selected three research teams, led by Raytheon, BAE Systems, and Northrop Grumman, to develop event-based infrared (IR) camera technologies under the Fast Event-based Neuromorphic Camera and Electronics (FENCE) program. The program is designed to make computer vision cameras more efficient by mimicking how the human brain processes information. It aims to develop a new class of low-latency, low-power, event-based infrared focal plane array (FPA), along with the digital signal processing (DSP) and machine learning (ML) algorithms to go with it. These neuromorphic camera technologies will enable intelligent sensors that can handle more dynamic scenes and aid future military applications.
New intelligent event-based — or neuromorphic — cameras can handle more dynamic scenes.
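By way of background (a toy model of the general event-camera principle, not DARPA's FENCE hardware): instead of streaming full frames, an event-based sensor emits a sparse event whenever a pixel's log-brightness changes by more than a threshold, which is what keeps latency and power low.

```python
import numpy as np

def frames_to_events(prev_frame, curr_frame, threshold=0.15):
    """Emit sparse (row, col, polarity) events wherever the per-pixel
    log-brightness change exceeds a threshold, instead of a full frame."""
    eps = 1e-6  # avoid log(0)
    diff = np.log(curr_frame + eps) - np.log(prev_frame + eps)
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    polarity = np.sign(diff[rows, cols]).astype(int)  # +1 brighter, -1 darker
    return list(zip(rows.tolist(), cols.tolist(), polarity.tolist()))

# A static scene produces no events; only the changed pixel is reported.
prev = np.full((4, 4), 0.5)
curr = prev.copy()
curr[2, 3] = 0.9
print(frames_to_events(prev, curr))  # [(2, 3, 1)]
```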
An elegant new algorithm developed by Danish researchers can significantly reduce the resource consumption of the world’s computer servers. Computer servers are as taxing on the climate as all global air traffic combined, making the green transition in IT an urgent matter. The researchers, from the University of Copenhagen, expect major IT companies to deploy the algorithm immediately.
One of the flipsides of our runaway internet usage is its climate impact, due to the massive amount of electricity consumed by computer servers. Current CO2 emissions from data centers are as high as those from all global air traffic, and they are expected to double within just a few years.
Only a handful of years have passed since Professor Mikkel Thorup was among a group of researchers behind an algorithm that addressed part of this problem by producing a groundbreaking recipe to streamline computer server workflows. Their work saved energy and resources. Tech giants including Vimeo and Google enthusiastically implemented the algorithm in their systems, with online video platform Vimeo reporting that it had cut the company’s bandwidth usage by a factor of eight.
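The blurb doesn't name the algorithm, but Thorup is a co-author of consistent hashing with bounded loads, the scheme Vimeo publicly credited for its bandwidth savings. A minimal sketch of the core idea, with simplified hashing and a plain list standing in for the usual ring of virtual nodes:

```python
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def assign(key: str, servers: list[str], loads: dict[str, int], capacity: int) -> str:
    """Consistent-hashing-style assignment with a load cap: walk the servers
    in hash order starting from the key's position and take the first one
    that still has spare capacity, so no server gets overloaded."""
    ring = sorted(servers, key=_hash)      # stable scan order across calls
    start = _hash(key) % len(ring)
    for i in range(len(ring)):
        server = ring[(start + i) % len(ring)]
        if loads[server] < capacity:
            loads[server] += 1
            return server
    raise RuntimeError("all servers at capacity")

servers = ["s1", "s2", "s3"]
loads = {s: 0 for s in servers}
for req in ["video-a", "video-b", "video-c", "video-d"]:
    print(req, "->", assign(req, servers, loads, capacity=2))
```

In the real scheme the cap is set a small factor above the average load, which bounds every server's load while keeping reassignments rare as servers and content come and go.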
“Because nothing can protect hardware, software, applications or data from a quantum-enabled adversary, encryption keys and data will require re-encrypting with a quantum-resistant algorithm and deleting or physically securing copies and backups.”
To ease the disruption caused by moving away from quantum-vulnerable cryptographic code, NIST has released a draft document describing the first steps of that journey.
Physics-informed machine learning might help verify microchips.
Physicists love recreating the world in software. A simulation lets you explore many versions of reality to find patterns or to test possibilities. But if you want one that’s realistic down to individual atoms and electrons, you run out of computing juice pretty quickly.
Machine-learning models can approximate detailed simulations, but they often require lots of expensive training data. A new method shows that physicists can lend their expertise to machine-learning algorithms, helping them train on a few small simulations of a few atoms each and then predict the behavior of systems with hundreds of atoms. In the future, similar techniques might even characterize microchips with billions of atoms, predicting failures before they occur.
The researchers started with simulated units of 16 silicon and germanium atoms, two elements often used to make microchips. They employed high-performance computers to calculate the quantum-mechanical interactions between the atoms’ electrons. Given a certain arrangement of atoms, the simulation generated unit-level characteristics such as its energy bands, the energy levels available to its electrons. But “you realize that there is a big gap between the toy models that we can study using a first-principles approach and realistic structures,” says Sanghamitra Neogi, a physicist at the University of Colorado, Boulder, and the paper’s senior author. Could she and her co-author, Artem Pimachev, bridge the gap using machine learning?
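To make the surrogate-model idea concrete (this is an illustrative sketch, not Neogi and Pimachev's actual pipeline; the descriptors and the band-gap formula below are invented), one can train a cheap regressor on descriptors computed from small simulated cells and then query it for much larger structures:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def descriptors(ge_fraction, strain):
    """Toy per-cell features standing in for real structural descriptors
    (composition, bond lengths, local atomic environments, ...)."""
    return np.array([ge_fraction, strain, ge_fraction * strain])

# Pretend these targets came from expensive first-principles runs on
# small Si/Ge cells (the linear formula and noise are made up):
X = np.array([descriptors(f, s) for f in np.linspace(0, 1, 8)
                                for s in np.linspace(-0.02, 0.02, 5)])
y = 1.12 - 0.41 * X[:, 0] + 3.0 * X[:, 1] + rng.normal(0, 0.005, len(X))  # band gap, eV

model = Ridge(alpha=1e-3).fit(X, y)

# The trained surrogate is then queried for the composition/strain of a
# much larger structure, at negligible cost compared with a new simulation.
print(model.predict([descriptors(0.3, 0.01)]))
```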
The story of humanity is one of progress: from our origins, with slow, disjointed progress, through the agricultural revolution, with linear progress, to the industrial revolution, with exponential, almost unfathomable progress.
This accelerating rate of progress is due to the compounding effect of technology, in which each advance enables countless more: 3D printing, autonomous vehicles, blockchain, batteries, remote surgery, virtual and augmented reality, robotics; the list goes on. These technologies will in turn lead to sweeping changes in society, from energy generation and monetary systems to space colonization, automation and much more!
This is only the Beginning.
Quantum physicist Mario Krenn remembers sitting in a café in Vienna in early 2016, poring over computer printouts, trying to make sense of what MELVIN had found. MELVIN was a machine-learning algorithm Krenn had built, a kind of artificial intelligence. Its job was to mix and match the building blocks of standard quantum experiments and find solutions to new problems. And it did find many interesting ones. But there was one that made no sense.
“The first thing I thought was, ‘My program has a bug, because the solution cannot exist,’” Krenn says. MELVIN had seemingly solved the problem of creating highly complex entangled states involving multiple photons (entangled states being those that once made Albert Einstein invoke the specter of “spooky action at a distance”). Krenn and his colleagues had not explicitly provided MELVIN the rules needed to generate such complex states, yet it had found a way. Eventually, he realized that the algorithm had rediscovered a type of experimental arrangement that had been devised in the early 1990s. But those experiments had been much simpler. MELVIN had cracked a far more complex puzzle.
“When we understood what was going on, we were immediately able to generalize [the solution],” says Krenn, who is now at the University of Toronto. Since then, other teams have started performing the experiments identified by MELVIN, allowing them to test the conceptual underpinnings of quantum mechanics in new ways. Meanwhile Krenn, Anton Zeilinger of the University of Vienna and their colleagues have refined their machine-learning algorithms. Their latest effort, an AI called THESEUS, has upped the ante: it is orders of magnitude faster than MELVIN, and humans can readily parse its output. While it would take Krenn and his colleagues days or even weeks to understand MELVIN’s meanderings, they can almost immediately figure out what THESEUS is saying.
There is a big confluence between AI and social media, and it is a two-way relationship: AI not only shapes social media; social media also plays a great role in the development of AI.
AI is developed through data, and lots of it (big data). One of the easiest ways to generate and source data at that scale is from the content and interactions on social media.
Most social media platforms operate at scale, so for tasks such as monitoring or censoring what is posted, platform administrators have to rely on automation and AI for management and policing.
AI algorithms such as sentiment analysis and recommendation engines (used by Facebook and YouTube to suggest posts based on the AI's understanding of what you will like) are an integral part of any social platform's architecture.
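To illustrate the recommendation-engine idea (a toy item-based collaborative filter on made-up data; production systems at Facebook or YouTube use learned embeddings and far richer signals):

```python
import numpy as np

# Rows = users, columns = posts; 1 means the user engaged with the post.
interactions = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
])

def recommend(user: int, k: int = 1) -> list[int]:
    """Score unseen posts by cosine similarity between post columns and the
    user's past engagements (item-based collaborative filtering)."""
    norms = np.linalg.norm(interactions, axis=0) + 1e-9
    sim = (interactions.T @ interactions) / np.outer(norms, norms)  # post-post similarity
    scores = sim @ interactions[user]          # aggregate similarity to liked posts
    scores[interactions[user] > 0] = -np.inf   # never re-recommend seen posts
    return list(np.argsort(scores)[::-1][:k])

print(recommend(user=0))  # post 2, which co-occurs with posts 0 and 1 via user 1
```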
Whenever there’s an issue, there’s no support. It’s you against the machine, so you don’t even try.
Amazon’s contract Flex delivery drivers already have to deal with various indignities, and you can now add the fact that they can be hired — and fired — by algorithms, according to a Bloomberg report.
To ensure same-day and other deliveries arrive on time, Amazon uses millions of subcontracted drivers for its Flex delivery program, launched in 2015. Drivers sign up through a smartphone app, which they use to choose shifts, coordinate deliveries and report problems. The reliance on technology doesn’t end there, though: drivers are also monitored for performance and fired by algorithms, with little human intervention.
However, the system can often fire workers seemingly without good cause, according to the report. One worker said her rating (on a scale of Fantastic, Great, Fair and At Risk) fell after she was forced to halt deliveries because of a nail in her tire. She raised it back to Great over the next several weeks, but her account was eventually terminated for violating Amazon’s terms of service. She contested the firing, but the company wouldn’t reinstate her.