Blog

Archive for the ‘information science’ category: Page 3

Feb 28, 2020

Witnessing the birth of baby universes 46 times: The link between gravity and soliton

Posted by in categories: information science, quantum physics

Scientists have long been attempting to come up with an equation that unifies the micro and macro laws of the Universe: quantum mechanics and gravity. We are one step closer with a paper demonstrating that this unification is successfully realized in JT gravity. In this simplified toy model, restricted to a single dimension, the holographic principle, whereby information stored on a boundary manifests in another dimension, is revealed.

How did the universe begin? How does quantum mechanics, the study of the smallest things, relate to gravity, the study of the biggest? These are some of the questions physicists have been working to solve ever since Einstein published his theory of relativity.

Formulas show that baby universes pop in and out of the main Universe. However, we neither notice nor experience this as humans. To make such calculations tractable, physicists devised so-called JT gravity, which turns the Universe into a toy-like model with only one dimension of time or space. These restricted parameters allow for a model in which scientists can test their theories.

Feb 26, 2020

Scientists propose new regulatory framework to make AI safer

Posted by in categories: information science, robotics/AI

Scientists from Imperial College London have proposed a new regulatory framework for assessing the impact of AI, called the Human Impact Assessment for Technology (HIAT).

The researchers believe the HIAT could identify the ethical, psychological and social risks of technological progress, which are already being exposed in a growing range of applications, from voter manipulation to algorithmic sentencing.

Feb 26, 2020

We’re Making Progress in Explainable AI, but Major Pitfalls Remain

Posted by in categories: information science, robotics/AI

Even in this experiment, though, the “psychology” of the algorithm’s decision-making is counter-intuitive. For example, in the basketball case, the most important factor in the decision was actually the players’ jerseys rather than the basketball itself.

Can You Explain What You Don’t Understand?

While it may seem trivial, the conflict here is a fundamental one in approaches to artificial intelligence. Namely, how far can you get with mere statistical associations between huge sets of data, and how much do you need to introduce abstract concepts for real intelligence to arise?
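As a concrete illustration of that conflict, a model can score well while leaning almost entirely on a spurious correlate, the “jerseys” rather than the “basketball”. The sketch below is entirely synthetic: a toy logistic regression is trained on two invented features, then permutation importance (a standard model-inspection technique) reveals which feature actually drives its decisions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
label = rng.integers(0, 2, n)                  # 1 = "contains a basketball"
ball = label * 0.3 + rng.normal(0, 1.0, n)     # weak genuine signal
jersey = label * 2.0 + rng.normal(0, 0.5, n)   # strong spurious correlate

X = np.column_stack([np.ones(n), ball, jersey])  # intercept + two features

# Minimal logistic regression trained by gradient descent.
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - label) / n

def accuracy(Xm):
    return np.mean((Xm @ w > 0) == label)

# Permutation importance: how much accuracy falls when a feature is shuffled.
base = accuracy(X)
drops = []
for j in (1, 2):  # columns: ball, jersey
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    drops.append(base - accuracy(Xp))

# The spurious "jersey" feature dominates the model's decision.
assert drops[1] > drops[0]
```

The model is accurate, yet shuffling the jersey feature destroys it while shuffling the ball feature barely matters, exactly the kind of counter-intuitive "psychology" the article describes.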

Continue reading “We’re Making Progress in Explainable AI, but Major Pitfalls Remain” »

Feb 25, 2020

Progressing Towards Assuredly Safer Autonomous Systems

Posted by in categories: information science, mathematics, robotics/AI, transportation

The sophistication of autonomous systems currently being developed across various domains and industries has markedly increased in recent years, due in large part to advances in computing, modeling, sensing, and other technologies. While much of the technology that has enabled this technical revolution has moved forward expeditiously, formal safety assurances for these systems still lag behind. This is largely due to their reliance on data-driven machine learning (ML) technologies, which are inherently unpredictable and lack the mathematical framework needed to provide guarantees on correctness. Without such assurances, trust in any learning-enabled cyber-physical system’s (LE-CPS’s) safety and correct operation is limited, impeding broad deployment and adoption for critical defense situations or capabilities.

To address this challenge, DARPA’s Assured Autonomy program is working to provide continual assurance of an LE-CPS’s safety and functional correctness, both at the time of its design and while operational. The program is developing mathematically verifiable approaches and tools that can be applied to different types and applications of data-driven ML algorithms in these systems to enhance their autonomy and assure they are achieving an acceptable level of safety. To help ground the research objectives, the program is prioritizing challenge problems in the defense-relevant autonomous vehicle space, specifically related to air, land, and underwater platforms.
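One common assurance pattern in this space, shown here only as a minimal hypothetical sketch rather than anything from DARPA’s actual toolchain, is a runtime monitor: a simple, verifiable wrapper that checks each command from the learned controller against a safety invariant and substitutes a provably safe fallback action when the invariant would be violated.

```python
def learned_controller(distance, speed):
    """Stand-in for an unpredictable learned policy (here: always speed up)."""
    return 1.0  # commanded acceleration, m/s^2

def stopping_distance(speed, max_brake=5.0):
    """Worst-case distance needed to stop from `speed` under full braking."""
    return speed * speed / (2.0 * max_brake)

def monitored_step(distance, speed, dt=0.1, max_brake=5.0, margin=2.0):
    a = learned_controller(distance, speed)
    # Predict one step ahead and check the invariant: the vehicle must
    # always be able to brake to a stop with `margin` metres to spare.
    next_speed = speed + a * dt
    next_dist = distance - speed * dt
    if next_dist <= stopping_distance(next_speed, max_brake) + margin:
        a = -max_brake  # override with the verified fallback: full braking
    return a

# Simulate approaching an obstacle 30 m ahead at 10 m/s.
dist, speed = 30.0, 10.0
for _ in range(200):
    a = monitored_step(dist, speed)
    dist -= speed * 0.1
    speed = max(0.0, speed + a * 0.1)

assert dist > 0.0  # the monitor kept the system out of the unsafe set
```

The learned policy here is deliberately unsafe; the safety argument rests entirely on the small, analyzable monitor, which is the piece amenable to formal verification.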

The first phase of the Assured Autonomy program recently concluded. To assess the technologies in development, research teams integrated them into a small number of autonomous demonstration systems and evaluated each against various defense-relevant challenges. After 18 months of research and development on the assurance methods, tools, and learning-enabled capabilities (LECs), the program is exhibiting early signs of progress.

Feb 24, 2020

Berkeley Lab to Tackle Particle Physics with Quantum Computing

Posted by in categories: computing, information science, particle physics, quantum physics

Massive-scale particle physics produces correspondingly large amounts of data – and this is particularly true of the Large Hadron Collider (LHC), the world’s largest particle accelerator, which is housed at the European Organization for Nuclear Research (CERN) in Switzerland. In 2026, the LHC will receive a massive upgrade through the High Luminosity LHC (HL-LHC) Project. This will increase the LHC’s data output by five to seven times – billions of particle events every second – and researchers are scrambling to prepare big data computing for this deluge of particle physics data. Now, researchers at Lawrence Berkeley National Laboratory are working to tackle high volumes of particle physics data with quantum computing.

When a particle accelerator runs, particle detectors record data points wherever particles cross certain thresholds in the accelerator. Researchers then attempt to reconstruct precisely how the particles traveled through the accelerator, typically using some form of computer-aided pattern recognition.
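The following toy sketch, with invented geometry and no relation to real LHC detectors, shows a classical version of that pattern-recognition step: hits from a few straight tracks are generated, and a simple Hough-style vote over angle bins recovers the track candidates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hits from 3 straight tracks through the origin, at these angles (degrees).
true_angles = [20.5, 45.5, 70.5]
hits = []
for ang in true_angles:
    r = np.linspace(1.0, 10.0, 8)   # 8 concentric detector layers
    t = np.radians(ang)
    hits += list(zip(r * np.cos(t), r * np.sin(t)))
hits = np.array(hits)
hits += rng.normal(0, 0.005, hits.shape)  # measurement smearing

# Vote: each hit falls into the 1-degree angle bin it points along.
angles = np.degrees(np.arctan2(hits[:, 1], hits[:, 0]))
votes, edges = np.histogram(angles, bins=90, range=(0, 90))

# Track candidates = angle bins collecting many aligned hits.
found = edges[:-1][votes >= 5]
print(found)  # bins near 20, 45 and 70 degrees
```

Real detectors deal with curved tracks, millions of hits, and heavy combinatorial backgrounds, which is why the search step is a candidate for quantum speedup.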

This project, which is led by Heather Gray, a professor at the University of California, Berkeley, and a particle physicist at Berkeley Lab, is called Quantum Pattern Recognition for High-Energy Physics (or HEP.QPR). In essence, HEP.QPR aims to use quantum computing to speed this pattern recognition process. HEP.QPR also includes Berkeley Lab scientists Wahid Bhimji, Paolo Calafiura and Wim Lavrijsen.

Continue reading “Berkeley Lab to Tackle Particle Physics with Quantum Computing” »

Feb 23, 2020

RAFT 2035: Roadmap to Abundance, Flourishing, and Transcendence, by 2035 by David Wood

Posted by in categories: biotech/medical, drones, information science, nanotechnology, robotics/AI

I’ve been reading an excellent book by David Wood entitled RAFT 2035, which was recommended by my pal Steele Hawes. I’ve come to an excellent segment of the book, which I will quote now.

“One particular challenge that international trustable monitoring needs to address is the risk of ever more powerful weapon systems being placed under autonomous control by AI systems. New weapons systems, such as swarms of miniature drones, increasingly change their configuration at speeds faster than human reactions can follow. This will lead to increased pressures to transfer control of these systems, at critical moments, from human overseers to AI algorithms. Each individual step along the journey from total human oversight to minimal human oversight might be justified, on grounds of a balance of risk and reward. However, that series of individual decisions adds up to an overall change that is highly dangerous, given the potential for unforeseen defects or design flaws in the AI algorithms being used.”


The fifteen years from 2020 to 2035 could be the most turbulent of human history. Revolutions are gathering pace in four overlapping fields of technology: nanotech, biotech, infotech, and cognotech, or NBIC for short. In combination, these NBIC revolutions offer enormous new possibilities: enormous opportunities and enormous risks.

Continue reading “RAFT 2035: Roadmap to Abundance, Flourishing, and Transcendence, by 2035 by David Wood” »

Feb 23, 2020

AI Just Discovered a New Antibiotic to Kill the World’s Nastiest Bacteria

Posted by in categories: biotech/medical, information science, robotics/AI

An AI algorithm found an antibiotic that wipes out dozens of bacterial strains, including some of the most dangerous drug-resistant bacteria in the world.

Feb 21, 2020

Solving a Higgs optimization problem with quantum annealing for machine learning

Posted by in categories: information science, particle physics, quantum physics, robotics/AI

A machine learning algorithm implemented on a quantum annealer—a D-Wave machine with 1,098 superconducting qubits—is used to identify Higgs-boson decays from background standard-model processes.
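The general idea behind annealing-based classification, sketched here with invented numbers and a brute-force search standing in for the D-Wave hardware (the actual Higgs study differs in its details), is to turn training into a binary optimization: select the subset of weak classifiers whose combined vote best matches the labels, which expands into a quadratic unconstrained binary optimization (QUBO) problem.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n_events, n_weak, n_good = 200, 8, 3

y = rng.choice([-1, 1], n_events)   # +1 = signal event, -1 = background

# Weak-classifier outputs: the first n_good agree with the truth 80% of
# the time, the rest are pure noise; votes are normalized.
c = np.empty((n_weak, n_events))
for i in range(n_weak):
    p = 0.8 if i < n_good else 0.5
    agree = rng.random(n_events) < p
    c[i] = np.where(agree, y, -y)
c /= n_weak

# The squared error sum_e (sum_i s_i c_ie - y_e)^2 expands into a QUBO in
# the binary selection variables s_i.
Q = c @ c.T          # quadratic couplings
h = -2.0 * (c @ y)   # linear terms

def energy(s):
    s = np.asarray(s)
    return s @ Q @ s + h @ s

# Exhaustively minimize over all 2^8 bit strings; this search is the part
# a quantum annealer performs over its qubits.
best = min(itertools.product([0, 1], repeat=n_weak), key=energy)
print(best)  # the informative classifiers (first three bits) are selected
```

Exhaustive search is only feasible for a handful of variables; the appeal of annealing is tackling the same energy landscape at scales where enumeration is hopeless.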

Feb 20, 2020

Mixed-signal hardware security thwarts powerful electromagnetic attacks

Posted by in categories: encryption, information science, internet, security

Security of embedded devices is essential in today’s internet-connected world. That security is typically guaranteed mathematically using a small secret key to encrypt private messages.

When these computationally secure encryption algorithms are implemented on physical hardware, they leak critical side-channel information in the form of power consumption or electromagnetic radiation. Now, Purdue University innovators have developed technology to kill the problem at the source itself, tackling physical-layer vulnerabilities with physical-layer solutions.
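A toy simulation makes the leakage concrete. This is not Purdue’s countermeasure, just the textbook attack such countermeasures defend against: if power draw correlates with the Hamming weight of a key-dependent intermediate value, a correlation attack recovers the secret key byte from a few hundred traces.

```python
import numpy as np

rng = np.random.default_rng(3)
secret_key = 0xA7
n_traces = 500

plaintexts = rng.integers(0, 256, n_traces)

def hamming_weight(values):
    return np.array([bin(int(v)).count("1") for v in values])

# Leakage model: power draw ~ Hamming weight of (plaintext XOR key) + noise.
traces = hamming_weight(plaintexts ^ secret_key) + rng.normal(0, 1.0, n_traces)

# Correlation power analysis: the correct key guess predicts the traces best.
def corr(guess):
    model = hamming_weight(plaintexts ^ guess)
    return np.corrcoef(model, traces)[0, 1]

recovered = max(range(256), key=corr)
print(hex(recovered))  # matches the secret key
```

Note that the cipher’s mathematical strength is irrelevant here; the key falls to statistics on the physical emissions alone, which is why physical-layer defenses matter.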

Continue reading “Mixed-signal hardware security thwarts powerful electromagnetic attacks” »

Feb 20, 2020

New artificial intelligence algorithm better predicts corn yield

Posted by in categories: food, information science, robotics/AI

With some reports predicting the precision agriculture market will reach $12.9 billion by 2027, there is an increasing need to develop sophisticated data-analysis solutions that can guide management decisions in real time. A new study from an interdisciplinary research group at the University of Illinois offers a promising approach to efficiently and accurately process precision ag data.
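As a hypothetical illustration of the kind of data-driven model involved (the Illinois study’s actual approach is more sophisticated), the sketch below fits synthetic yield data against a few management and weather features by ordinary least squares, including a quadratic term for diminishing returns to nitrogen.

```python
import numpy as np

rng = np.random.default_rng(4)
n_fields = 300

nitrogen = rng.uniform(100, 250, n_fields)    # kg/ha applied
rainfall = rng.uniform(300, 700, n_fields)    # mm over the season
seed_rate = rng.uniform(60, 90, n_fields)     # thousand seeds/ha

# Synthetic "true" response with diminishing returns to nitrogen.
yield_t = (4.0 + 0.03 * nitrogen - 5e-5 * nitrogen**2
           + 0.004 * rainfall + 0.02 * seed_rate
           + rng.normal(0, 0.3, n_fields))

# Design matrix with a quadratic nitrogen term, fitted by least squares.
X = np.column_stack([np.ones(n_fields), nitrogen, nitrogen**2,
                     rainfall, seed_rate])
coef, *_ = np.linalg.lstsq(X, yield_t, rcond=None)

# Predict yield for a new field.
new = np.array([1.0, 180.0, 180.0**2, 500.0, 75.0])
print(round(float(new @ coef), 2))  # close to the true response of ~11.3 t/ha
```

Fitting such response curves per management zone, and doing it fast enough to guide in-season decisions, is the core of the real-time analytics the article describes.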

Page 3 of 106