Archive for the ‘information science’ category: Page 178

Mar 1, 2020

How China is using AI and big data to combat coronavirus outbreak

Posted in categories: biotech/medical, information science, robotics/AI, surveillance

Authorities in China step up surveillance and roll out new artificial intelligence tools to fight deadly epidemic.

Mar 1, 2020

Meet Xenobot, an Eerie New Kind of Programmable Organism

Posted in categories: bioengineering, information science

Under the watchful eye of a microscope, busy little blobs scoot around in a field of liquid—moving forward, turning around, sometimes spinning in circles. Drop cellular debris onto the plain and the blobs will herd them into piles. Flick any blob onto its back and it’ll lie there like a flipped-over turtle.

Their behavior is reminiscent of a microscopic flatworm in pursuit of its prey, or even a tiny animal called a water bear—a creature complex enough in its bodily makeup to manage sophisticated behaviors. The resemblance is an illusion: These blobs consist of only two things, skin cells and heart cells from frogs.

Writing today in the Proceedings of the National Academy of Sciences, researchers describe how they’ve engineered so-called xenobots (named after the species of frog, Xenopus laevis, whence their cells came) with the help of evolutionary algorithms. They hope that this new kind of organism—contracting cells and passive cells stuck together—and its eerily advanced behavior can help scientists unlock the mysteries of cellular communication.
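
To give a rough picture of what "engineered with evolutionary algorithms" means in practice, here is a minimal sketch of that search loop: candidate body plans are small grids of passive versus contractile cells, scored and mutated over generations. The grid size, mutation rate, and stand-in fitness function below are illustrative assumptions; the actual work scored each candidate design in a physics simulator.

# Minimal sketch of the evolutionary-search idea behind the xenobot designs.
# The representation, fitness proxy, and hyperparameters are assumptions for
# illustration, not the authors' pipeline (which scored designs in simulation).
import random

GRID = 5          # assumed body-plan resolution (5x5 cells)
POP, GENS = 30, 40
MUT_RATE = 0.05

def random_body():
    # 1 = contractile (heart) cell, 0 = passive (skin) cell
    return [[random.randint(0, 1) for _ in range(GRID)] for _ in range(GRID)]

def fitness(body):
    # Stand-in for a physics simulation: reward bodies whose contractile cells
    # cluster on one side, a crude proxy for producing net forward motion.
    left = sum(body[r][c] for r in range(GRID) for c in range(GRID // 2))
    right = sum(body[r][c] for r in range(GRID) for c in range(GRID // 2 + 1, GRID))
    return left - right

def mutate(body):
    # Flip each cell type with a small probability.
    return [[1 - cell if random.random() < MUT_RATE else cell for cell in row]
            for row in body]

population = [random_body() for _ in range(POP)]
for gen in range(GENS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP - len(survivors))]

best = max(population, key=fitness)
print("best fitness:", fitness(best))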

Feb 28, 2020

AI Is an Energy-Guzzler. We Need to Re-Think Its Design, and Soon

Posted in categories: information science, robotics/AI

Of course, the computers and data centers that support AI’s complex algorithms are very much dependent on electricity. While that may seem pretty obvious, it may be surprising to learn that AI can be extremely power-hungry, especially when it comes to training the models that enable machines to recognize your face in a photo or Alexa to understand a voice command.

The scale of the problem is difficult to measure, but there have been some attempts to put hard numbers on the environmental cost.

For instance, one paper published on the open-access repository arXiv claimed that the carbon emissions for training a basic natural language processing (NLP) model—algorithms that process and understand language-based data—are equal to the CO2 produced by the average American lifestyle over two years. A more robust model required the equivalent of about 17 years’ worth of emissions.
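
Estimates like these generally come from a simple chain of multiplications: accelerator power draw, training time, data-center overhead, and the carbon intensity of the local grid. A back-of-envelope sketch follows, with every number an illustrative assumption rather than a figure from the cited paper.

# Rough template for how training-emission figures are typically derived.
# All values below are assumptions for illustration only.
gpu_power_kw = 0.3          # assumed average draw of one accelerator, in kW
num_gpus = 8                # assumed size of the training cluster
training_hours = 24 * 7     # assumed one week of training
pue = 1.5                   # assumed data-center overhead (power usage effectiveness)
grid_kg_co2_per_kwh = 0.45  # assumed carbon intensity of the electricity grid

energy_kwh = gpu_power_kw * num_gpus * training_hours * pue
emissions_kg = energy_kwh * grid_kg_co2_per_kwh
print(f"~{energy_kwh:.0f} kWh of electricity, ~{emissions_kg:.0f} kg of CO2")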

Feb 28, 2020

Witnessing the birth of baby universes 46 times: The link between gravity and soliton

Posted in categories: information science, quantum physics

Scientists have long been trying to find an equation that unifies the micro and macro laws of the Universe: quantum mechanics and gravity. We are one step closer thanks to a paper demonstrating that this unification is successfully realized in JT gravity. In this simplified toy model with a one-dimensional domain, the holographic principle (how information stored on a boundary manifests in another dimension) is revealed.

How did the universe begin? How does quantum mechanics, the study of the smallest things, relate to gravity and the study of big things? These are some of the questions physicists have been working to solve ever since Einstein released his theory of relativity.

Formulas show that baby universes pop in and out of the main Universe. However, we don’t realize or experience this as humans. To calculate how this scales, physicists devised so-called JT (Jackiw–Teitelboim) gravity, which turns the Universe into a toy-like model with only one dimension of time or space. These restricted parameters allow for a model in which scientists can test their theories.
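
For readers who want the equation behind the words, one commonly quoted form of the JT action is shown below; prefactors and boundary counterterms vary between papers, so treat this as indicative of the model's structure rather than as the exact starting point of the paper discussed here.

\[ I_{\mathrm{JT}} = -\frac{1}{16\pi G_N}\left[\int_{\mathcal{M}} d^2x\,\sqrt{g}\,\phi\,(R+2) + 2\int_{\partial\mathcal{M}} du\,\sqrt{h}\,\phi_b\,(K-1)\right] \]

Here \(\phi\) is the dilaton field, \(R\) the two-dimensional curvature scalar (the \(R+2\) term fixes constant negative curvature), \(K\) the extrinsic curvature of the boundary, and \(\phi_b\) the boundary value of the dilaton. The boundary piece is what carries the model's effectively one-dimensional dynamics, which is where the holographic reading comes in.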

Feb 26, 2020

Scientists propose new regulatory framework to make AI safer

Posted in categories: information science, robotics/AI

Scientists from Imperial College London have proposed a new regulatory framework for assessing the impact of AI, called the Human Impact Assessment for Technology (HIAT).

The researchers believe the HIAT could identify the ethical, psychological and social risks of technological progress, which are already being exposed in a growing range of applications, from voter manipulation to algorithmic sentencing.

Feb 26, 2020

We’re Making Progress in Explainable AI, but Major Pitfalls Remain

Posted in categories: information science, robotics/AI

Even in this experiment, though, the “psychology” of the algorithm in decision-making is counter-intuitive. For example, in the basketball case, the most important factor in making the decision was actually the player’s jerseys rather than the basketball.
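
A claim like "the most important factor was the jerseys" is typically backed by an attribution method. As a minimal illustration of one such method, permutation importance, here is a toy sketch; the tabular data and the threshold "model" are assumptions for illustration (the article's experiment used images, not tabular features).

# Permutation importance: measure how much accuracy drops when one input
# feature is shuffled. Toy data and model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: feature 0 ("jersey colour") determines the label,
# feature 1 ("ball position") is pure noise.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

def model(X):
    # Stand-in "trained model": a fixed threshold on feature 0.
    return (X[:, 0] > 0).astype(int)

baseline = (model(X) == y).mean()
for j, name in enumerate(["jersey colour", "ball position"]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # shuffle a single feature
    drop = baseline - (model(Xp) == y).mean()
    print(f"{name}: accuracy drop {drop:.3f}")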

Can You Explain What You Don’t Understand?

While it may seem trivial, the conflict here is a fundamental one in approaches to artificial intelligence. Namely, how far can you get with mere statistical associations between huge sets of data, and how much do you need to introduce abstract concepts for real intelligence to arise?

Feb 25, 2020

Progressing Towards Assuredly Safer Autonomous Systems

Posted in categories: information science, mathematics, robotics/AI, transportation

The sophistication of autonomous systems currently being developed across various domains and industries has markedly increased in recent years, due in large part to advances in computing, modeling, sensing, and other technologies. While much of the technology that has enabled this technical revolution has moved forward expeditiously, formal safety assurances for these systems still lag behind. This is largely due to their reliance on data-driven machine learning (ML) technologies, which are inherently unpredictable and lack the necessary mathematical framework to provide guarantees on correctness. Without assurances, trust in any learning-enabled cyber-physical system’s (LE-CPS’s) safety and correct operation is limited, impeding the broad deployment and adoption of these systems for critical defense situations or capabilities.

To address this challenge, DARPA’s Assured Autonomy program is working to provide continual assurance of an LE-CPS’s safety and functional correctness, both at the time of its design and while operational. The program is developing mathematically verifiable approaches and tools that can be applied to different types and applications of data-driven ML algorithms in these systems to enhance their autonomy and assure they are achieving an acceptable level of safety. To help ground the research objectives, the program is prioritizing challenge problems in the defense-relevant autonomous vehicle space, specifically related to air, land, and underwater platforms.
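
The program's actual tools are not described in this excerpt, but the general shape of "assurance around a learned component" is often a runtime monitor plus a verified fallback (a simplex-style architecture): a simple, checkable rule gates each action proposed by the ML policy. The sketch below illustrates that pattern only; the vehicle model, limits, and both controllers are assumptions, not Assured Autonomy deliverables.

# Runtime assurance sketch: a learned controller is overridden by a simple,
# verifiable fallback whenever a forward-simulated safety check fails.
def learned_controller(speed, gap):
    # Stand-in for an ML policy: always accelerate toward the obstacle.
    return 2.0  # commanded acceleration, m/s^2

def fallback_controller(speed, gap):
    # Conservative, easy-to-verify policy: brake.
    return -3.0

def safe(speed, gap, accel, dt=0.1, horizon=20):
    # Verifiable check: roll the simple kinematics forward and require a
    # positive gap to the obstacle at every step of the horizon.
    for _ in range(horizon):
        speed = max(0.0, speed + accel * dt)
        gap -= speed * dt
        if gap <= 0.0:
            return False
    return True

speed, gap = 10.0, 30.0  # m/s, metres to obstacle
for _ in range(50):
    a = learned_controller(speed, gap)
    if not safe(speed, gap, a):
        a = fallback_controller(speed, gap)   # override the learned policy
    speed = max(0.0, speed + a * 0.1)
    gap -= speed * 0.1
print(f"final speed {speed:.1f} m/s, remaining gap {gap:.1f} m")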

The first phase of the Assured Autonomy program recently concluded. To assess the technologies in development, research teams integrated them into a small number of autonomous demonstration systems and evaluated each against various defense-relevant challenges. After 18 months of research and development on the assurance methods, tools, and learning-enabled capabilities (LECs), the program is exhibiting early signs of progress.

Feb 24, 2020

Berkeley Lab to Tackle Particle Physics with Quantum Computing

Posted in categories: computing, information science, particle physics, quantum physics

Massive-scale particle physics produces correspondingly large amounts of data – and this is particularly true of the Large Hadron Collider (LHC), the world’s largest particle accelerator, which is housed at the European Organization for Nuclear Research (CERN) in Switzerland. In 2026, the LHC will receive a massive upgrade through the High Luminosity LHC (HL-LHC) Project. This will increase the LHC’s data output by five to seven times – billions of particle events every second – and researchers are scrambling to prepare big data computing for this deluge of particle physics data. Now, researchers at Lawrence Berkeley National Laboratory are working to tackle high volumes of particle physics data with quantum computing.

When a particle accelerator runs, particle detectors offer data points for where particles crossed certain thresholds in the accelerator. Researchers then attempt to reconstruct precisely how the particles traveled through the accelerator, typically using some form of computer-aided pattern recognition.

This project, which is led by Heather Gray, a professor at the University of California, Berkeley, and a particle physicist at Berkeley Lab, is called Quantum Pattern Recognition for High-Energy Physics (or HEP.QPR). In essence, HEP.QPR aims to use quantum computing to speed this pattern recognition process. HEP.QPR also includes Berkeley Lab scientists Wahid Bhimji, Paolo Calafiura and Wim Lavrijsen.
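
One concrete way quantum hardware has been applied to this pattern-recognition step is to phrase track finding as a quadratic unconstrained binary optimization (QUBO): each candidate track segment becomes a binary variable, compatible segments attract and conflicting ones repel, and a solver searches for the lowest-energy selection. The toy below illustrates that formulation with a brute-force search standing in for a quantum device; the detector layout and scoring are assumptions, and whether this matches HEP.QPR's exact formulation is not confirmed by the article.

# Toy QUBO formulation of track finding: pick the set of hit-to-hit segments
# that minimises an "energy" rewarding smooth, aligned segment pairs.
from itertools import product
import math

# Hits on three detector layers at x = 1, 2, 3 (y positions per layer).
hits = {1: [0.0, 1.0], 2: [0.1, 1.9], 3: [0.2, 2.8]}

# Candidate segments connect a hit on layer L to a hit on layer L+1.
segments = ([((1, i), (2, j)) for i in range(2) for j in range(2)]
            + [((2, i), (3, j)) for i in range(2) for j in range(2)])

def slope(seg):
    (l1, i), (l2, j) = seg
    return hits[l2][j] - hits[l1][i]   # layers are 1 unit of x apart

def coupling(a, b):
    # Reward consecutive segments that share a hit and have similar slopes;
    # penalise shared-hit combinations that bend sharply (fake tracks).
    if a[1] == b[0]:
        return -1.0 if abs(slope(a) - slope(b)) < 0.3 else +1.0
    return 0.0

# Brute-force minimisation of the QUBO energy, standing in for an annealer.
best_energy, best_choice = math.inf, None
for choice in product([0, 1], repeat=len(segments)):
    energy = sum(coupling(segments[p], segments[q]) * choice[p] * choice[q]
                 for p in range(len(segments)) for q in range(p + 1, len(segments)))
    if energy < best_energy:
        best_energy, best_choice = energy, choice

kept = [segments[p] for p, on in enumerate(best_choice) if on]
print("selected segments:", kept)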

Feb 23, 2020

RAFT 2035: Roadmap to Abundance, Flourishing, and Transcendence, by 2035 by David Wood

Posted in categories: biotech/medical, drones, information science, nanotechnology, robotics/AI

I’ve been reading an excellent book by David Wood, entitled RAFT 2035, which was recommended by my pal Steele Hawes. I’ve come to an excellent segment of the book that I will quote now.

“One particular challenge that international trustable monitoring needs to address is the risk of ever more powerful weapon systems being placed under autonomous control by AI systems. New weapons systems, such as swarms of miniature drones, increasingly change their configuration at speeds faster than human reactions can follow. This will lead to increased pressures to transfer control of these systems, at critical moments, from human overseers to AI algorithms. Each individual step along the journey from total human oversight to minimal human oversight might be justified, on grounds of a balance of risk and reward. However, that series of individual decisions adds up to an overall change that is highly dangerous, given the potential for unforeseen defects or design flaws in the AI algorithms being used.”


The fifteen years from 2020 to 2035 could be the most turbulent of human history. Revolutions are gathering pace in four overlapping fields of technology: nanotech, biotech, infotech, and cognotech, or NBIC for short. In combination, these NBIC revolutions offer enormous new possibilities: enormous opportunities and enormous risks.

Feb 23, 2020

AI Just Discovered a New Antibiotic to Kill the World’s Nastiest Bacteria

Posted in categories: biotech/medical, information science, robotics/AI

An AI algorithm found an antibiotic that wipes out dozens of bacterial strains, including some of the most dangerous drug-resistant bacteria in the world.