
Autonomous and semi-autonomous systems need active illumination to navigate at night or underground. Switching on visible headlights or some other emitting system like lidar, however, has a significant drawback: It allows adversaries to detect a vehicle’s presence, in some cases from long distances away.

To eliminate this vulnerability, DARPA announced the Invisible Headlights program. The fundamental research effort seeks to discover and quantify information contained in ambient thermal emissions in a wide variety of environments and to create new passive 3D sensors and algorithms to exploit that information.

“We’re aiming to make completely passive navigation in pitch dark conditions possible,” said Joe Altepeter, program manager in DARPA’s Defense Sciences Office. “In the depths of a cave or in the dark of a moonless, starless night with dense fog, current autonomous systems can’t make sense of the environment without radiating some signal—whether it’s a laser pulse, radar or visible light beam—all of which we want to avoid. If it involves emitting a signal, it’s not invisible for the sake of this program.”

The news: A new type of artificial eye, made by combining light-sensing electronics with a neural network on a single tiny chip, can make sense of what it’s seeing in just a few nanoseconds, far faster than existing image sensors.

Why it matters: Computer vision is integral to many applications of AI—from driverless cars to industrial robots to smart sensors that act as our eyes in remote locations—and machines have become very good at responding to what they see. But most image recognition needs a lot of computing power to work. Part of the problem is a bottleneck at the heart of traditional sensors, which capture a huge amount of visual data, regardless of whether or not it is useful for classifying an image. Crunching all that data slows things down.

A sensor that captures and processes an image at the same time, without converting or passing around data, makes image recognition much faster using much less power. The design, published in Nature today by researchers at the Institute of Photonics in Vienna, Austria, mimics the way animals’ eyes pre-process visual information before passing it on to the brain.
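To make the idea concrete, here is a minimal numerical sketch of in-sensor classification: an array of photodiodes whose per-pixel responsivities act as the weights of a single-layer network, so the summed photocurrents are already the classifier's outputs. The array size, the toy patterns and the off-line training loop are illustrative assumptions, not details of the Vienna chip.

```python
# Minimal sketch of "in-sensor" classification: a photodiode array whose
# per-pixel responsivities act as the weights of a single-layer network,
# so the summed photocurrents *are* the classifier outputs.
# Illustrative only; array size, patterns and training scheme are assumptions,
# not details of the Nature chip.
import numpy as np

rng = np.random.default_rng(0)
PIXELS, CLASSES = 9, 3            # a 3x3 "sensor" distinguishing 3 patterns

# Three simple 3x3 light patterns the sensor should tell apart
patterns = np.array([
    [1,0,1, 0,1,0, 1,0,1],        # "X"
    [1,1,1, 1,0,1, 1,1,1],        # "O"
    [0,1,0, 0,1,0, 0,1,0],        # "|"
], dtype=float)

# Responsivity matrix: one tunable responsivity per pixel per output line.
# Training it off-line stands in for programming the physical photodiodes.
W = rng.normal(scale=0.1, size=(PIXELS, CLASSES))
labels = np.eye(CLASSES)

for _ in range(500):              # simple gradient steps on a softmax loss
    logits = patterns @ W         # photocurrents summed per output line
    p = np.exp(logits); p /= p.sum(axis=1, keepdims=True)
    W -= 0.5 * patterns.T @ (p - labels) / CLASSES

# "Exposure": a noisy X pattern is classified by the output currents alone
noisy_x = patterns[0] + rng.normal(scale=0.2, size=PIXELS)
print("predicted class:", np.argmax(noisy_x @ W))   # expect 0 ("X")
```

Because the weighting happens where the light is detected, there is no frame of raw pixel data to read out, convert and shuttle to a separate processor, which is where the speed and power savings come from.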

The scientific revolution was ushered in at the beginning of the 17th century with the development of two of the most important inventions in history — the telescope and the microscope. With the telescope, Galileo turned his attention skyward, and advances in optics led Robert Hooke and Antonie van Leeuwenhoek toward the first use of the compound microscope as a scientific instrument, circa 1665. Today, we are witnessing an information technology-era revolution in microscopy, supercharged by deep learning algorithms that have propelled artificial intelligence to transform industry after industry.

One of the major breakthroughs in deep learning came in 2012, when the performance superiority of a deep convolutional neural network combined with GPUs for image classification was revealed by Hinton and colleagues [1] for the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). In AI’s current innovation and implementation phase, deep learning algorithms are propelling nearly all computer vision-intensive applications, including autonomous vehicles (transportation, military), facial recognition (retail, IT, communications, finance), biomedical imaging (healthcare), autonomous weapons and targeting systems (military), and automation and robotics (military, manufacturing, heavy industry, retail).
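For readers who have not seen one, below is a minimal convolutional classifier in the spirit of those 2012-era networks (convolution, pooling, then a fully connected readout), written as a hedged PyTorch sketch. It illustrates the architecture family, not the ImageNet-winning model itself; the layer sizes and 32×32 input are arbitrary choices.

```python
# A tiny convolutional image classifier in the spirit of the 2012 ImageNet
# networks (conv -> pool -> conv -> pool -> fully connected). Toy model only;
# it is an illustration of the architecture family, not AlexNet.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = TinyConvNet()
# Runs on a GPU if one is available -- the pairing that made 2012 possible.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
batch = torch.randn(4, 3, 32, 32, device=device)  # stand-in for real images
print(model(batch).shape)                          # torch.Size([4, 10])
```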

It should come as no surprise that the field of microscopy is ripe for transformation by artificial intelligence-aided image processing, analysis and interpretation. In biological research, microscopy generates prodigious amounts of image data; a single experiment with a transmission electron microscope can generate a data set containing over 100 terabytes of images [2]. The myriad instruments and image processing techniques available today can resolve structures ranging in size across nearly 10 orders of magnitude, from single molecules to entire organisms, and capture spatial (3D) as well as temporal (4D) dynamics on time scales of femtoseconds to seconds.

Circa 2015


University of Utah engineers have taken a step forward in creating the next generation of computers and mobile devices capable of speeds millions of times faster than current machines.

The Utah engineers have developed an ultracompact beamsplitter—the smallest on record—for dividing light waves into two separate channels of information. The device brings researchers closer to producing silicon photonic chips that compute and shuttle data with light instead of electrons. Electrical and computer engineering associate professor Rajesh Menon and colleagues describe their invention today in the journal Nature Photonics.
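As a point of reference, the textbook model of an ideal lossless 50/50 beamsplitter is a unitary 2×2 transfer matrix acting on the complex field amplitudes at two ports. The short sketch below shows that model splitting power equally while conserving it; it is not a simulation of the Utah group's actual nanophotonic design.

```python
# Textbook model of an ideal lossless 50/50 beamsplitter as a unitary 2x2
# transfer matrix acting on the complex field amplitudes of two ports.
# This only illustrates what "dividing light into two channels" means
# numerically; it does not model the Utah device's geometry.
import numpy as np

B = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                 [1j, 1]])

a_in = np.array([1.0 + 0j, 0.0 + 0j])    # all light enters port 1
a_out = B @ a_in                          # field amplitudes at the two outputs

p_out = np.abs(a_out) ** 2
print("output powers:", p_out)                       # [0.5, 0.5] -> even split
print("power conserved:", np.isclose(p_out.sum(), np.sum(np.abs(a_in) ** 2)))
```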

Silicon photonics could significantly increase the power and speed of machines such as supercomputers, data center servers and the specialized computers that direct autonomous cars and drones with collision detection. Eventually, the technology could reach home computers and mobile devices and improve applications from gaming to video streaming.

A new robot has overcome a fundamental challenge of locomotion by teaching itself how to walk.

Researchers from Google developed algorithms that helped the four-legged bot to learn how to walk across a range of surfaces within just hours of practice, annihilating the record times set by its human overlords.

Their system uses deep reinforcement learning, a form of AI that trains through trial and error by rewarding certain actions.
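The reward-driven loop itself is simple to illustrate. The sketch below runs tabular Q-learning on a toy one-dimensional "walk to the goal" task; Google's system used deep reinforcement learning on a physical quadruped, so the states, actions and rewards here are stand-ins chosen only to show trial and error shaped by rewards.

```python
# A minimal reward-driven learning loop: tabular Q-learning on a toy 1-D
# "walk to the goal" task. This is only an illustration of the
# trial-and-error-with-rewards principle, not Google's deep RL system.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, GOAL = 6, 5            # positions 0..5, goal at position 5
ACTIONS = [-1, +1]               # step back or step forward
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.9, 0.1

for episode in range(200):
    s = 0
    while s != GOAL:
        # Explore occasionally, otherwise take the best-known action
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
        r = 1.0 if s_next == GOAL else -0.01   # the reward shapes the behaviour
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

# After learning, the greedy policy walks straight to the goal (+1 everywhere).
print("learned action per state:", [ACTIONS[int(np.argmax(Q[s]))] for s in range(GOAL)])
```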

AI/Humans, our brave new world, happening now.


Are we facing a golden digital age or will robots soon run the world? We need to establish ethical standards in dealing with artificial intelligence — and to answer the question: What still makes us as human beings unique?

Mankind is still decades away from self-learning machines that are as intelligent as humans. But already today, chatbots, robots, digital assistants and other artificially intelligent entities exist that can emulate certain human abilities. Scientists and AI experts agree that we are in a race against time: we need to establish ethical guidelines before technology catches up with us. AI professor Jürgen Schmidhuber predicts artificial intelligence will be able to control robotic factories in space, while the Swedish-American physicist Max Tegmark warns against a totalitarian AI surveillance state and the philosopher Thomas Metzinger foresees a deadly AI arms race. But Metzinger also believes that Europe in particular can play a pioneering role on the threshold of this new era by creating a binding international code of ethics.


Computer scientists from Rice, supported by collaborators from Intel, will present their results today at the Austin Convention Center as a part of the machine learning systems conference MLSys.

Many companies are investing heavily in GPUs and other specialized hardware to implement deep learning, a powerful form of artificial intelligence that’s behind digital assistants like Alexa and Siri, facial recognition, product recommendation systems and other technologies. For example, Nvidia, the maker of the industry’s gold-standard Tesla V100 Tensor Core GPUs, recently reported a 41% increase in its fourth quarter revenues compared with the previous year.

Rice researchers created a cost-saving alternative to GPUs: an algorithm called the “sub-linear deep learning engine” (SLIDE) that uses general-purpose central processing units (CPUs) without specialized acceleration hardware.
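The broad idea, as we understand it from the SLIDE paper, is to use locality-sensitive hashing to pick out a handful of neurons likely to fire for a given input and compute only those, instead of a full dense matrix multiply. The toy sketch below uses a SimHash (random hyperplane) scheme for illustration; the hash scheme, table sizes and layer shapes are assumptions, not Rice's implementation.

```python
# Toy sketch of the idea behind SLIDE (as we understand it): use
# locality-sensitive hashing to select a small set of likely-active neurons
# per input and compute only those, skipping the full dense multiply.
# Hash scheme, sizes and layer shapes here are illustrative assumptions.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
D_IN, N_NEURONS, N_BITS = 128, 10_000, 8

W = rng.normal(size=(N_NEURONS, D_IN))          # hidden-layer weight vectors
planes = rng.normal(size=(N_BITS, D_IN))        # SimHash random hyperplanes

def simhash(v):
    """Random-hyperplane signature: vectors pointing in similar directions collide."""
    bits = (planes @ v > 0).astype(int)
    return int("".join(map(str, bits)), 2)

# Index every neuron's weight vector into a hash bucket (done once, up front).
buckets = defaultdict(list)
for j in range(N_NEURONS):
    buckets[simhash(W[j])].append(j)

def sparse_forward(x):
    """Compute activations only for the neurons sharing the input's bucket."""
    active = buckets.get(simhash(x), [])
    return {j: float(W[j] @ x) for j in active}   # a tiny fraction of 10,000

x = rng.normal(size=D_IN)
acts = sparse_forward(x)
print(f"computed {len(acts)} of {N_NEURONS} neurons")
```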

Chinese technology giant Alibaba recently developed an AI system for diagnosing COVID-19 (the novel coronavirus).

Alibaba is like Amazon, Microsoft, a video game company, and a nationwide healthcare network all rolled into one, with every branch fed solutions by the company’s world-class AI department.

Per a report from Nikkei’s Asian Review (h/t TechSpot), Alibaba claims its new system can detect coronavirus in CT scans of patients’ chests with 96% accuracy against viral pneumonia cases. And it only takes 20 seconds for the AI to make a determination – according to the report, humans generally take about 15 minutes to diagnose the illness as there can be upwards of 300 images to evaluate.
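Alibaba has not published its pipeline, but a common shape for this kind of system is a per-slice classifier whose scores are aggregated into one patient-level call. The sketch below shows that generic pattern with a placeholder scoring function standing in for a trained CNN; the thresholds and the 300-slice volume are illustrative assumptions, not details of Alibaba's system.

```python
# Hedged sketch of one common way to turn ~300 per-slice predictions into a
# single patient-level call: score every CT slice, then aggregate.
# Generic pipeline shape only; slice_score is a stand-in for a trained CNN.
import numpy as np

rng = np.random.default_rng(0)

def slice_score(ct_slice: np.ndarray) -> float:
    """Stand-in for a per-slice classifier returning P(COVID-19-like pattern)."""
    return float(1 / (1 + np.exp(-ct_slice.mean())))   # placeholder logic

def patient_prediction(ct_volume: np.ndarray, threshold: float = 0.5):
    scores = np.array([slice_score(s) for s in ct_volume])   # ~300 slices
    positive_fraction = (scores > threshold).mean()
    # Flag the patient if enough slices look suspicious
    # (the 10% rule here is chosen purely for illustration).
    return positive_fraction > 0.1, positive_fraction

volume = rng.normal(size=(300, 64, 64))     # one chest CT stack (downsampled)
flagged, frac = patient_prediction(volume)
print(f"flagged={flagged}, suspicious slice fraction={frac:.2f}")
```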