
The scientific revolution was ushered in at the beginning of the 17th century with the development of two of the most important inventions in history — the telescope and the microscope. With the telescope, Galileo turned his attention skyward, and advances in optics led Robert Hooke and Antonie van Leeuwenhoek toward the first use of the compound microscope as a scientific instrument, circa 1665. Today, we are witnessing an information technology-era revolution in microscopy, supercharged by deep learning algorithms that have propelled artificial intelligence to transform industry after industry.

One of the major breakthroughs in deep learning came in 2012, when Hinton and colleagues [1] demonstrated the performance superiority of a deep convolutional neural network combined with GPUs for image classification in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). In AI’s current innovation and implementation phase, deep learning algorithms are propelling nearly all computer vision-intensive applications, including autonomous vehicles (transportation, military), facial recognition (retail, IT, communications, finance), biomedical imaging (healthcare), autonomous weapons and targeting systems (military), and automation and robotics (military, manufacturing, heavy industry, retail).

It should come as no surprise that the field of microscopy is ripe for transformation by artificial intelligence-aided image processing, analysis and interpretation. In biological research, microscopy generates prodigious amounts of image data; a single experiment with a transmission electron microscope can generate a data set containing over 100 terabytes’ worth of images [2]. The myriad instruments and image processing techniques available today can resolve structures ranging in size across nearly 10 orders of magnitude, from single molecules to entire organisms, and capture spatial (3D) as well as temporal (4D) dynamics on time scales from femtoseconds to seconds.

Circa 2015


University of Utah engineers have taken a step forward in creating the next generation of computers and mobile devices capable of speeds millions of times faster than current machines.

The Utah engineers have developed an ultracompact beamsplitter—the smallest on record—for dividing light waves into two separate channels of information. The device brings researchers closer to producing silicon photonic chips that compute and shuttle data with light instead of electrons. Electrical and computer engineering associate professor Rajesh Menon and colleagues describe their invention today in the journal Nature Photonics.

Silicon photonics could significantly increase the power and speed of machines such as supercomputers, data center servers and the specialized computers that direct autonomous cars and drones with collision detection. Eventually, the technology could reach home computers and mobile devices and improve applications from gaming to video streaming.

A new robot has overcome a fundamental challenge of locomotion by teaching itself how to walk.

Researchers from Google developed algorithms that helped the four-legged bot to learn how to walk across a range of surfaces within just hours of practice, annihilating the record times set by its human overlords.

Their system uses deep reinforcement learning, a form of AI that teaches through trial and error by providing rewards for certain actions.
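For readers unfamiliar with the technique, the sketch below shows the reward-driven trial-and-error loop at the heart of deep reinforcement learning in miniature: a tiny policy network learns which action to take in each state purely from reward feedback. It is a toy illustration only, not Google's locomotion system; the environment, reward function, network sizes and variable names are all invented for the example.

```python
# Toy sketch of deep reinforcement learning: a small policy network learns,
# purely from trial-and-error reward signals, which action to take in each
# "gait phase". Everything here (environment, rewards, sizes) is invented
# for illustration; it is not the system described in the article.
import numpy as np

rng = np.random.default_rng(0)
N_PHASES, N_ACTIONS, HIDDEN = 4, 2, 16
CORRECT = np.array([0, 1, 1, 0])            # hypothetical "right move" per phase

# Two-layer policy network: one-hot phase in, action probabilities out.
W1 = rng.normal(0.0, 0.5, (N_PHASES, HIDDEN))
W2 = rng.normal(0.0, 0.5, (HIDDEN, N_ACTIONS))

def policy(state):
    h = np.tanh(state @ W1)
    logits = h @ W2
    p = np.exp(logits - logits.max())
    return h, p / p.sum()

def run_episode(steps=20):
    """Act in the toy environment and record states, actions and rewards."""
    traj, phase = [], 0
    for _ in range(steps):
        s = np.eye(N_PHASES)[phase]
        h, probs = policy(s)
        a = rng.choice(N_ACTIONS, p=probs)
        r = 1.0 if a == CORRECT[phase] else 0.0   # reward only for "good" moves
        traj.append((s, h, probs, a, r))
        phase = (phase + 1) % N_PHASES
    return traj

LR, GAMMA = 0.1, 0.95
for episode in range(300):
    traj = run_episode()
    G, returns = 0.0, []                      # discounted return from each step
    for *_, r in reversed(traj):
        G = r + GAMMA * G
        returns.append(G)
    returns.reverse()
    baseline = np.mean(returns)
    for (s, h, probs, a, _), G in zip(traj, returns):
        # REINFORCE: push the policy toward actions that beat the average return.
        dlogits = -probs
        dlogits[a] += 1.0                     # gradient of log pi(a|s) w.r.t. logits
        dh = (W2 @ dlogits) * (1.0 - h ** 2)
        adv = G - baseline
        W2 += LR * adv * np.outer(h, dlogits)
        W1 += LR * adv * np.outer(s, dh)

print("average reward per step:", np.mean([r for *_, r in run_episode()]))
```

In the Google work, the same loop reportedly operates at far greater scale, with real sensor readings as the state, motor commands as the actions, and rewards that encourage stable forward motion.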

AI and humans: our brave new world, happening now.


Are we facing a golden digital age or will robots soon run the world? We need to establish ethical standards in dealing with artificial intelligence — and to answer the question: What still makes us as human beings unique?

Mankind is still decades away from self-learning machines that are as intelligent as humans. But already today, chatbots, robots, digital assistants and other artificially intelligent entities exist that can emulate certain human abilities. Scientists and AI experts agree that we are in a race against time: we need to establish ethical guidelines before technology catches up with us. While AI Professor Jürgen Schmidhuber predicts artificial intelligence will be able to control robotic factories in space, the Swedish-American physicist Max Tegmark warns against a totalitarian AI surveillance state, and the philosopher Thomas Metzinger predicts a deadly AI arms race. But Metzinger also believes that Europe in particular can play a pioneering role on the threshold of this new era: creating a binding international code of ethics.

Rice University computer scientists have overcome a major obstacle in the burgeoning artificial intelligence industry by showing it is possible to speed up deep learning technology without specialized acceleration hardware like graphics processing units (GPUs).

Computer scientists from Rice, supported by collaborators from Intel, will present their results today at the Austin Convention Center as a part of the machine learning systems conference MLSys.

Many companies are investing heavily in GPUs and other specialized hardware to implement deep learning, a powerful form of artificial intelligence that’s behind digital assistants like Alexa and Siri, facial recognition, product recommendation systems and other technologies. For example, Nvidia, the maker of the industry’s gold-standard Tesla V100 Tensor Core GPUs, recently reported a 41% increase in its fourth quarter revenues compared with the previous year.

Rice researchers created a cost-saving alternative to GPUs, an algorithm called the “sub-linear deep learning engine” (SLIDE) that uses general-purpose central processing units (CPUs) without specialized acceleration hardware.
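SLIDE reportedly achieves its sub-linear behavior through locality-sensitive hashing: instead of computing every neuron in a layer, hash tables retrieve the handful of neurons likely to respond strongly to a given input. The snippet below is a heavily simplified, single-layer illustration of that general idea, not the SLIDE codebase; the random-hyperplane hash, the layer sizes and the variable names are assumptions made for the example.

```python
# Simplified sketch of the hashing idea behind sub-linear deep learning:
# rather than computing all neurons in a layer, use locality-sensitive
# hashing (random hyperplanes) to retrieve only the neurons whose weight
# vectors are likely to align with the input, and evaluate just those.
# Illustrative only; this is not the SLIDE implementation.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
IN_DIM, N_NEURONS, N_TABLES, N_BITS = 128, 4096, 8, 12

weights = rng.normal(size=(N_NEURONS, IN_DIM))   # one weight vector per neuron
biases = np.zeros(N_NEURONS)

# Each table hashes a vector by the signs of its projections onto N_BITS
# random hyperplanes; similar vectors tend to land in the same bucket.
planes = rng.normal(size=(N_TABLES, N_BITS, IN_DIM))
tables = [defaultdict(list) for _ in range(N_TABLES)]

def bucket(t, vec):
    return ((planes[t] @ vec) > 0).tobytes()     # hashable bucket key

for t in range(N_TABLES):
    for n in range(N_NEURONS):
        tables[t][bucket(t, weights[n])].append(n)

def sparse_layer(x):
    """Evaluate only the neurons retrieved from the hash tables."""
    active = set()
    for t in range(N_TABLES):
        active.update(tables[t].get(bucket(t, x), []))
    if not active:                               # rare: fall back to a random sample
        active = set(rng.choice(N_NEURONS, size=32, replace=False).tolist())
    idx = np.fromiter(active, dtype=int)
    acts = np.maximum(weights[idx] @ x + biases[idx], 0.0)   # ReLU on the subset
    return idx, acts

x = rng.normal(size=IN_DIM)
idx, acts = sparse_layer(x)
print(f"evaluated {len(idx)} of {N_NEURONS} neurons "
      f"({100 * len(idx) / N_NEURONS:.2f}% of the layer)")
```

In the real system, this kind of adaptive sparsity is reportedly applied during training as well, so both the forward and backward passes touch only the sampled neurons, which is what lets commodity CPUs keep pace with specialized hardware.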

Chinese technology giant Alibaba recently developed an AI system for diagnosing COVID-19, the disease caused by the novel coronavirus.

Alibaba is like Amazon, Microsoft, a video game company, and a nationwide healthcare network all rolled into one, with every branch fed solutions by the company’s world-class AI department.

Per a report from Nikkei’s Asian Review (h/t TechSpot), Alibaba claims its new system can detect coronavirus in chest CT scans with 96% accuracy, distinguishing it from ordinary viral pneumonia. And it only takes 20 seconds for the AI to make a determination – according to the report, humans generally take about 15 minutes to diagnose the illness, as there can be upwards of 300 images to evaluate.

The fact that self-driving trucks did not initially capture the public imagination is perhaps not entirely shocking. After all, most people have never been inside a truck, let alone a self-driving one, and don’t give them more than a passing thought. But just because trucks aren’t foremost in most people’s thoughts doesn’t mean they don’t impact everyone’s lives day in and day out. Trucking is an $800 billion industry in the US. Virtually everything we buy — from our food to our phones to our furniture — reaches us via truck. Automating the movement of goods could, therefore, have at least as profound an impact on our lives as automating how we move ourselves. And people are starting to take notice.

As self-driving industry pioneers, we’re not surprised: we have been saying this for years. We founded Kodiak Robotics in 2018 with the vision of launching a freight carrier that would drive autonomously on highways, while continuing to use traditional human drivers for first- and last-mile pickup and delivery. We developed this model because our experience in the industry convinced us that today’s self-driving technology is best-suited for highway driving. While training self-driving vehicles to drive on interstate highways is complicated, hard work, it’s a much simpler, more constrained problem than driving on city streets, which have pedestrians, public transportation, bikes, pets, and other things that make cities great to live in but difficult for autonomous technology to understand and navigate.

Last summer, the National Security Commission on Artificial Intelligence put out a call for original, creative ideas about how the United States can maintain global leadership in a future enabled by artificial intelligence. RAND researchers stepped up to the challenge.


“Send us your ideas!” That was the open call for submissions about emerging technology’s role in global order put out last summer by the National Security Commission on Artificial Intelligence (NSCAI). RAND researchers stepped up to the challenge, and a wide range of ideas were submitted. Ten essays were ultimately accepted for publication.

The NSCAI, co-chaired by Eric Schmidt, the former chief executive of Alphabet (Google’s parent company), and Robert Work, the former deputy secretary of defense, is a congressionally mandated, independent federal commission set up last year “to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies by the United States to comprehensively address the national security and defense needs of the United States.”

The commission’s ultimate role is to elevate awareness and to inform better legislation. As part of its mission, the commission is tasked with helping the Department of Defense better understand and prepare for a world where AI might impact national security in unexpected ways.
