
Researchers at the National Aeronautics and Space Administration (NASA) recently announced that they are using artificial intelligence to calibrate images of the Sun.

NASA launched its Solar Dynamics Observatory (SDO) back in early 2010 to conduct research and capture high-definition images of the Sun.

The new artificial intelligence-powered technique now helps scientists calibrate the captured images quickly and precisely, producing accurate, usable data. NASA uses the Atmospheric Imaging Assembly (AIA) aboard the SDO to image the Sun across various wavelengths of ultraviolet light every 12 seconds.
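NASA's announcement describes a model that learns how the instrument's sensitivity degrades over time and corrects for it. As a rough illustration of that concept only, and not NASA's actual pipeline, the toy fit below estimates a synthetic dimming factor and divides it back out; the signal model, decay constant, and data are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for instrument degradation: a known baseline signal is
# dimmed by an unknown factor that decays over time (all numbers invented).
days = np.arange(2000)
baseline = 1000.0 + 50.0 * np.sin(days / 200.0)     # hypothetical "true" brightness
true_decay = np.exp(-days / 4000.0)                 # hypothetical sensitivity loss
observed = baseline * true_decay + rng.normal(0, 5, days.size)

# Estimate the decay constant with a least-squares fit in log space,
# then divide the degradation back out to "calibrate" the observations.
ratio = np.clip(observed / baseline, 1e-6, None)
slope, _ = np.polyfit(days, np.log(ratio), 1)
tau = -1.0 / slope

calibrated = observed / np.exp(-days / tau)
print(f"estimated decay constant: {tau:.0f} days (true value 4000)")
print(f"mean calibration error: {np.mean(np.abs(calibrated - baseline)):.2f}")
```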

A trio of researchers at Cornell University has found that it is possible to hide malware code inside of AI neural networks. Zhi Wang, Chaoge Liu and Xiang Cui have posted a paper describing their experiments with injecting code into neural networks on the arXiv preprint server.

As artificial intelligence grows ever more complex, so do attempts by criminals to break into machines running the new technology for their own purposes, such as destroying data or encrypting it and demanding payment for its return. In this new study, the team found a new way to infect certain kinds of computer systems running artificial intelligence applications.

AI systems do their work by processing data in ways similar to the human brain. But such networks, the research trio found, are vulnerable to infiltration by foreign code.
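The core trick is to hide payload bytes inside a trained network's parameters, where small changes to individual weights barely affect accuracy. A minimal, benign sketch of the same idea, stashing an arbitrary byte string in the least-significant byte of float32 weights and reading it back out, might look like the following; the weight matrix and payload are placeholders, and it assumes a little-endian platform.

```python
import numpy as np

def embed(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the least-significant byte of float32 weights."""
    flat = weights.astype(np.float32).ravel()
    raw = flat.view(np.uint8).reshape(-1, 4)            # 4 bytes per float32
    if len(payload) > raw.shape[0]:
        raise ValueError("payload too large for this weight tensor")
    raw[: len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)  # overwrite LSBs
    return flat.reshape(weights.shape)

def extract(weights: np.ndarray, length: int) -> bytes:
    """Read the hidden bytes back out of the weight tensor."""
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    return raw[:length, 0].tobytes()

# Placeholder "model": one random weight matrix, and a harmless payload.
weights = np.random.default_rng(1).normal(size=(256, 256)).astype(np.float32)
payload = b"hello, hidden payload"

stego = embed(weights, payload)
print(extract(stego, len(payload)))                         # b'hello, hidden payload'
print("max weight change:", np.max(np.abs(stego - weights)))  # tiny perturbation
```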

Cigarette butts are a common type of litter in marine environments, but AI-powered robot litter pickers could be the solution.


It seems many people leave behind more than just sandcastles when they go home after a trip to the beach. Beach litter is a recurring issue, and it is damaging our coastal environments and wildlife.

And there is one small item that is causing a big problem: cigarette butts. They may only be a few centimetres long, but they are full of microplastics and toxic chemicals that harm the marine environment. They don’t easily decompose, and when they come into contact with the water, harmful substances can leach out.

Unfortunately, they are also the most common type of litter, with an estimated 4.5 trillion discarded annually.

CORVALLIS, Ore. – Cassie the robot, invented at Oregon State University and produced by OSU spinout company Agility Robotics, has made history by traversing 5 kilometers, completing the route in just over 53 minutes.

Cassie was developed under the direction of robotics professor Jonathan Hurst with a 16-month, $1 million grant from the Defense Advanced Research Projects Agency (DARPA) of the U.S. Department of Defense.

Since Cassie’s introduction in 2017, OSU students funded by the National Science Foundation have been exploring machine learning options for the robot.

I’ve been suggesting for a long time that we drop these AIs into open-world games.


EDIT: Also see paper and results compilation video!

Today, we published “Open-Ended Learning Leads to Generally Capable Agents,” a preprint detailing our first steps to train an agent capable of playing many different games without needing human interaction data. … The result is an agent with the ability to succeed at a wide spectrum of tasks — from simple object-finding problems to complex games like hide and seek and capture the flag, which were not encountered during training. We find the agent exhibits general, heuristic behaviours such as experimentation, behaviours that are widely applicable to many tasks rather than specialised to an individual task.

What I would suggest is landing Atlas robots on the Moon in waves: the first wave builds a solar panel farm for power, the second repairs the first, the third joins the first two to begin building large-scale runways, and the fourth joins the first three to begin building permanent structures.

The Moon is close enough for teleoperations, and in the 2030s, when we actually do Mars, the AI could repeat the whole thing there.


Before they explore Mars, the robots will first explore Martian-like caves on Earth.

Although effective uncertainty estimation is a key consideration in the development of safe and fair artificial intelligence systems, most of today’s large-scale deep learning applications are lacking in this regard.

To accelerate research in this field, a team from DeepMind has proposed epistemic neural networks (ENNs) as an interface for uncertainty modelling in deep learning, and the KL divergence from a target distribution as a precise metric to evaluate ENNs. In the paper Epistemic Neural Networks, the team also introduces a computational testbed based on inference in a neural network Gaussian process, and validates that the proposed ENNs can improve performance in terms of statistical quality and computational cost.

The researchers say all existing approaches to uncertainty modelling in deep learning can be expressed as ENNs, presenting a new perspective on the potential of neural networks as computational tools for approximate posterior inference.
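In the paper, an ENN is specified by a function f(x, z) together with a reference distribution over an "epistemic index" z; holding the input x fixed and varying z exposes the model's epistemic uncertainty, and a conventional deep ensemble is recovered when z simply selects a member. A rough sketch of that interface with toy linear members follows; the class name, dimensions, and member count are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

class EnsembleENN:
    """A deep ensemble expressed through the ENN interface f(x, z):
    the epistemic index z simply selects which ensemble member answers.
    (Toy linear members; names and sizes are illustrative only.)"""

    def __init__(self, num_members: int, in_dim: int):
        self.weights = rng.normal(size=(num_members, in_dim))
        self.num_members = num_members

    def sample_index(self) -> int:
        # Reference distribution over the epistemic index: uniform over members.
        return int(rng.integers(self.num_members))

    def forward(self, x: np.ndarray, z: int) -> float:
        # f(x, z): the prediction of member z for input x.
        return float(self.weights[z] @ x)

enn = EnsembleENN(num_members=10, in_dim=3)
x = np.array([1.0, -0.5, 2.0])

# Epistemic uncertainty shows up as variation over z for a fixed input x.
preds = [enn.forward(x, enn.sample_index()) for _ in range(1000)]
print("predictive mean:", np.mean(preds))
print("epistemic spread (std over z):", np.std(preds))
```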