The supercomputer Fugaku – jointly developed by RIKEN and Fujitsu, based on Arm technology – has taken first place on Top500, a ranking of the world’s fastest supercomputers.
It swept other rankings too, claiming the top spot on HPCG, a ranking of supercomputers running real-world applications; HPL-AI, which ranks supercomputers on their performance in artificial intelligence applications; and Graph500, which ranks systems on data-intensive workloads.
This is the first time in history that the same supercomputer has taken the number-one spot on Top500, HPCG, and Graph500 simultaneously. The awards were announced today at ISC High Performance 2020 Digital, an international high-performance computing conference.
There are many ways artificial intelligence can be used for good, helping to solve some of the world's biggest problems, and many researchers and organizations are prioritizing projects that do exactly that. Here are my top 10 ways AI is being used responsibly.
Both robots, Flippy and ROAR, are AI-enabled, allowing them to take in their surroundings and learn and evolve over time. They know when to start cooking a well-done burger so that it finishes at exactly the same time as a medium-rare burger on the same order, for instance, and they can learn how to optimize oil use to minimize waste.
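To make that finish-together timing concrete, here is a minimal Python sketch of the scheduling idea; the cook times and function names are hypothetical illustrations, not Miso Robotics' actual values or code.

```python
# Minimal sketch of "finish-together" scheduling. Cook times are
# made-up numbers, not Miso Robotics' actual values.

COOK_TIMES_MIN = {        # assumed minutes per doneness level
    "medium-rare": 4.0,
    "medium": 6.0,
    "well-done": 8.0,
}

def start_offsets(order):
    """Return how many minutes to wait before starting each item so
    that everything on the order finishes at the same moment."""
    longest = max(COOK_TIMES_MIN[item] for item in order)
    return {item: longest - COOK_TIMES_MIN[item] for item in order}

print(start_offsets(["well-done", "medium-rare"]))
# {'well-done': 0.0, 'medium-rare': 4.0} -> drop the medium-rare 4 min later
```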
In a pre-pandemic time of restaurant labor shortages, Flippy kept kitchen productivity high and costs low, a big deal in an industry known for razor-thin margins. Introducing Flippy into a kitchen can reportedly increase profit margins by as much as 300 percent, not to mention significantly reduce the stress managers feel when trying to fill shifts.
But even if restaurants have an easier time finding workers as places reopen, Flippy and ROAR aren’t gunning for people’s jobs. They’re designed to be collaborative robots, or cobots, the cost-effective machines created to work with humans, not against them.
Rapid progress has been made in recent years toward building these tiny machines, known as molecular robots, thanks to supramolecular chemists, chemical and biomolecular engineers, and nanotechnologists, among others, working closely together. But one area that still needs improvement is controlling the movements of swarms of molecular robots so they can perform multiple tasks simultaneously.
Flying automobiles have long been a staple of science fiction's optimistic visions of tomorrow, right up there with rocket jetpacks, holidays on the moon, and robot butlers. And who wouldn't want to climb into a vehicle capable of rising above the clogged arteries of traffic on most major boulevards, highways, and freeways?
Now a lofty new air taxi being built by Israeli startup Urban Aeronautics hopes to cash in on those promises: a vertical-takeoff-and-landing (VTOL) vehicle that unites real technology with Jetsons-like futuristic dreams previously seen mostly in films like Blade Runner, The Fifth Element, and Back to the Future, and most recently on TV in season 3 of HBO's Westworld.
Like many things about Elon Musk, Tesla's approach to achieving autonomous driving is polarizing. Bucking the map-based trend set by industry veterans such as Waymo, Tesla opted instead to dedicate its resources to a vision-based approach to full self-driving. This involves a lot of hard, tedious work on Tesla's part, but today there are indications that the company's controversial strategy is finally paying off.
In a recent talk, Tesla AI Director Andrej Karpathy discussed the key differences between Waymo's map-based approach and Tesla's camera-based strategy. According to Karpathy, Waymo's use of pre-mapped data and LiDAR makes scaling difficult, since each vehicle's autonomous capabilities are practically tied to a geofenced area. Tesla's vision-based approach, which uses cameras and artificial intelligence, has no such constraint, which means Autopilot and FSD improvements can be rolled out to the entire fleet and will function anywhere.
This rather ambitious plan for Tesla's full self-driving system has drawn a lot of skepticism in the past, with critics arguing that a map-based approach is the way to go. Tesla, in response, dug in its heels and doubled down on its vision-based initiative. This has, in a way, meant that Autopilot improvements and FSD feature rollouts take a long time, particularly since training the neural networks that recognize objects and driving behavior on the road requires massive amounts of real-world data.
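For a sense of what "vision-based" means in practice, here is a deliberately simplified Python sketch of a camera-only detection loop. It uses an off-the-shelf torchvision detector purely as a stand-in; it illustrates the general technique, not Tesla's actual Autopilot or FSD code.

```python
# Illustrative sketch of a camera-only perception loop.
# NOT Tesla's stack: a generic pretrained detector stands in for the
# much larger, custom-trained networks a production system would use.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect(frame):
    """Run object detection on a single camera frame (HxWx3 image)."""
    with torch.no_grad():
        preds = model([to_tensor(frame)])[0]
    keep = preds["scores"] > 0.5          # keep only confident detections
    return preds["boxes"][keep], preds["labels"][keep]

# Note: nothing in this loop consults a pre-built HD map, which is the
# crux of the scaling argument -- the same weights run on any road.
```

The design point the article is making is visible even in this toy version: the model's weights travel with the car, so improving them improves behavior everywhere at once, with no geofence.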
For instance, suppose a neural network has labeled the image of a skin mole as cancerous. Is that because it found malignant patterns in the mole, or because of irrelevant elements such as image lighting, camera type, or some other artifact in the image, such as pen markings or rulers?
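One common way to probe such a question is a gradient-based saliency map, which highlights the pixels that most influenced the network's score. Here is a minimal PyTorch sketch, using a placeholder model rather than any real dermatology classifier:

```python
# Minimal gradient-saliency sketch (placeholder model, illustrative only).
import torch
import torch.nn as nn

# Stand-in classifier; a real system would use a trained medical model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 2))
model.eval()

def saliency(image, target_class):
    """Return |d(score)/d(pixel)|: large values mark the pixels that most
    influenced the prediction, e.g. the mole itself vs. a ruler."""
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return image.grad.abs()

img = torch.rand(3, 224, 224)          # placeholder "skin lesion" image
heatmap = saliency(img, target_class=1)
print(heatmap.shape)                    # torch.Size([3, 224, 224])
```

If the brightest regions of such a heatmap fall on a ruler or pen marking rather than on the mole, the model's decision is suspect.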
Researchers have developed various interpretability techniques that help investigate decisions made by various machine learning algorithms. But these methods are not enough to address AI’s explainability problem and create trust in deep learning models, argues Daniel Elton, a scientist who researches the applications of artificial intelligence in medical imaging.
Elton discusses why we need to shift from techniques that interpret AI decisions to AI models that can explain their decisions themselves, as humans do. His paper, "Self-explaining AI as an alternative to interpretable AI," recently published on the arXiv preprint server, expands on this idea.
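As a rough sketch of the general idea, one possible design (an assumption for illustration, not the specific architecture in Elton's paper) is a network with two heads: one produces the prediction, the other produces scores over human-readable concepts that accompany it.

```python
# Sketch of a two-headed "self-explaining" model: one head predicts,
# the other emits an explanation signal. Illustrative only; not the
# specific design proposed in Elton's paper.
import torch
import torch.nn as nn

class SelfExplainingNet(nn.Module):
    def __init__(self, in_dim=32, n_classes=2, n_concepts=4):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.predict = nn.Linear(64, n_classes)
        # Explanation head: scores over human-readable concepts
        # (e.g. "irregular border", "color variation" -- assumed labels).
        self.explain = nn.Linear(64, n_concepts)

    def forward(self, x):
        h = self.backbone(x)
        return self.predict(h), torch.sigmoid(self.explain(h))

net = SelfExplainingNet()
logits, concepts = net(torch.rand(1, 32))
print(logits.shape, concepts.shape)  # torch.Size([1, 2]) torch.Size([1, 4])
```

Because the explanation is produced by the model itself rather than reconstructed after the fact, it can in principle be trained and evaluated alongside the prediction, which is the shift Elton argues for.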