
https://vimeo.com/234073915

Both are AI-enabled, allowing them to take in their surroundings and learn and evolve over time. They know when to start cooking a well-done burger so that it finishes at exactly the same time as a medium-rare burger on the same order, and they can learn to optimize oil use to minimize waste, for instance.
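
To make that timing idea concrete, here is a toy sketch of finish-together scheduling. It is not the robot's actual software, and the cook times are assumed values for illustration only:

```python
# Toy sketch of finish-together scheduling; cook times are assumed values.
cook_times_min = {"well-done burger": 9.0, "medium-rare burger": 5.0}

finish_at = max(cook_times_min.values())      # the slowest item sets the finish time
start_offsets = {item: finish_at - t for item, t in cook_times_min.items()}

for item, offset in sorted(start_offsets.items(), key=lambda kv: kv[1]):
    print(f"start the {item} {offset:.1f} minutes after the order arrives")
```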

In a pre-pandemic time of restaurant labor shortages, Flippy kept kitchen productivity high and costs low, a big deal in an industry known for tiny margins. Introducing Flippy into a kitchen can increase profit margins by a whopping 300%, not to mention significantly reduce the stress managers feel when trying to fill shifts.

But even if restaurants have an easier time finding workers as places reopen, Flippy and ROAR aren’t gunning for people’s jobs. They’re designed to be collaborative robots, or cobots, the cost-effective machines created to work with humans, not against them.

Rapid progress has been made in recent years to build these tiny machines, thanks to supramolecular chemists, chemical and biomolecular engineers, and nanotechnologists, among others, working closely together. But one area that still needs improvement is controlling the movements of swarms of molecular robots, so they can perform multiple tasks simultaneously.

Flying automobiles have long been a staple of science fiction’s optimistic visions of tomorrow, right up there with rocket jetpacks, holidays on the moon, and robot butlers. And who wouldn’t want to climb into a vehicle capable of rising up into the air above the clogged arteries of traffic experienced on most major boulevards, highways, and freeways?

Now the Israeli startup Urban Aeronautics hopes to cash in on those promises with its new vertical takeoff and landing (VTOL) air taxi, a vehicle that unites real-world technology with the Jetsons-style futuristic dreams previously seen mostly in films like Blade Runner, The Fifth Element, and Back to the Future, and most recently on TV in Season 3 of HBO’s Westworld.

[Image: Urban Aeronautics CityHawk]

Like many things about Elon Musk, Tesla’s approach to achieving autonomous driving is polarizing. Bucking the map-based trend set by industry veterans such as Waymo, Tesla opted instead to dedicate its resources to a vision-based approach to full self-driving. This involves a lot of hard, tedious work on Tesla’s part, but today there are indications that the company’s controversial strategy is finally paying off.

In a recent talk, Tesla AI Director Andrej Karpathy discussed the key differences between Waymo’s map-based approach and Tesla’s camera-based strategy. According to Karpathy, Waymo’s use of pre-mapped data and LiDAR makes scaling difficult, since each vehicle’s autonomous capabilities are effectively tied to a geofenced area. Tesla’s vision-based approach, which uses cameras and artificial intelligence, is not. This means that Autopilot and FSD improvements can be rolled out to the entire fleet and will function anywhere.
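
As a rough illustration of that scaling point, here is a purely toy sketch, not anything from Waymo’s or Tesla’s software; the area names and coordinates are invented. A map-based system can only engage where high-definition map coverage exists, while a vision-based system carries its “map” in the network weights:

```python
# Toy illustration of geofenced vs. vision-based availability; all values invented.
MAPPED_AREAS = {  # hypothetical bounding boxes: (lat_min, lat_max, lon_min, lon_max)
    "phoenix_suburbs": (33.2, 33.5, -112.1, -111.7),
}

def map_based_autonomy_available(lat: float, lon: float) -> bool:
    # Only engages inside an area that has already been HD-mapped.
    return any(lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
               for lat_min, lat_max, lon_min, lon_max in MAPPED_AREAS.values())

def vision_based_autonomy_available(lat: float, lon: float) -> bool:
    # Cameras plus a trained network travel with the car, so no geofence check.
    return True

print(map_based_autonomy_available(40.7, -74.0))     # False: outside the mapped area
print(vision_based_autonomy_available(40.7, -74.0))  # True
```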

This rather ambitious plan for Tesla’s full self-driving system has drawn a lot of skepticism in the past, with critics arguing that a map-based approach to FSD is the way to go. Tesla, in response, dug in its heels and doubled down on its vision-based initiative. One consequence is that Autopilot improvements and the rollout of FSD features have taken a long time, particularly since training the neural networks that recognize objects and driving behavior on the road requires massive amounts of real-world data.

For instance, suppose a neural network has labeled the image of a skin mole as cancerous. Is it because it found malignant patterns in the mole or is it because of irrelevant elements such as image lighting, camera type, or the presence of some other artifact in the image, such as pen markings or rulers?

Researchers have developed various interpretability techniques that help investigate decisions made by various machine learning algorithms. But these methods are not enough to address AI’s explainability problem and create trust in deep learning models, argues Daniel Elton, a scientist who researches the applications of artificial intelligence in medical imaging.
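For a sense of what such interpretability techniques look like in practice, here is a minimal sketch of one of the most common ones, a gradient-based saliency map, written in PyTorch with a stand-in pretrained classifier and a random placeholder image. A real analysis would load an actual dermoscopy image and a model trained for that task:

```python
import torch
import torchvision.models as models

# Stand-in classifier and placeholder input; replace with a real model and image.
model = models.resnet18(pretrained=True)
model.eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Gradient of the top class score with respect to the input pixels.
output = model(image)
top_class = output.argmax().item()
output[0, top_class].backward()

# Per-pixel importance: large values mark pixels that most sway the prediction,
# which is how artifacts like rulers or pen markings can be exposed.
saliency = image.grad.abs().max(dim=1)[0]
print(saliency.shape)  # torch.Size([1, 224, 224])
```
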

Elton discusses why we need to shift from techniques that interpret AI decisions to AI models that can explain their decisions by themselves, as humans do. His paper, “Self-explaining AI as an alternative to interpretable AI,” recently posted on the arXiv preprint server, expands on this idea.

There’s a lot of hope that artificial intelligence could help speed up the time it takes to make a drug and also increase the rate of success. Several startups have emerged to capitalize on this opportunity. But Insitro is a bit different from some of these other companies, which rely more heavily on machine learning than on biology.


Machine learning can speed up the creation of new drugs and unlock the mysteries of major diseases, says Insitro CEO Daphne Koller.

[Photo: Ivan-balvan/iStock]

MIT engineers have designed a “brain-on-a-chip,” smaller than a piece of confetti, that is made from tens of thousands of artificial brain synapses known as memristors — silicon-based components that mimic the information-transmitting synapses in the human brain.

The researchers borrowed from principles of metallurgy to fabricate each memristor from alloys of silver and copper, along with silicon. When they ran the chip through several visual tasks, the chip was able to “remember” stored images and reproduce them many times over, in versions that were crisper and cleaner compared with existing memristor designs made with unalloyed elements.

Their results, published on June 8, 2020, in the journal Nature Nanotechnology, demonstrate a promising new memristor design for neuromorphic devices — electronics that are based on a new type of circuit that processes information in a way that mimics the brain’s neural architecture. Such brain-inspired circuits could be built into small, portable devices, and would carry out complex computational tasks that only today’s supercomputers can handle.
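
To see why memristor arrays are attractive for neuromorphic computing, consider the core operation they perform in analog: a vector-matrix multiply. The sketch below is an idealized software model with made-up conductance values, not a simulation of the MIT chip:

```python
import numpy as np

# Each crossbar junction stores a synaptic weight as a conductance (in siemens).
# Rows are input lines, columns are output lines; the values here are invented.
conductances = np.array([[1.0, 0.2, 0.5],
                         [0.3, 0.9, 0.1]])

input_voltages = np.array([0.8, 0.4])  # volts applied to the input lines

# Ohm's law gives a current V_i * G_ij at each junction, and Kirchhoff's current
# law sums the currents on each output line: I_j = sum_i V_i * G_ij.
output_currents = input_voltages @ conductances
print(output_currents)  # the analog result of the vector-matrix product
```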

When opportunity knocks, open the door: No one has taken heed of that adage like Nvidia, which has transformed itself from a company focused on catering to the needs of video gamers to one at the heart of the artificial-intelligence revolution. In 2001, no one predicted that the same processor architecture developed to draw realistic explosions in 3D would be just the thing to power a renaissance in deep learning. But when Nvidia realized that academics were gobbling up its graphics cards, it responded, supporting researchers with the launch of the CUDA parallel computing software framework in 2006.

Since then, Nvidia has been a big player in the world of high-end embedded AI applications, where teams of highly trained (and paid) engineers have used its hardware for things like autonomous vehicles. Now the company claims to be making it easy for even hobbyists to use embedded machine learning, with its US $100 Jetson Nano dev kit, which was originally launched in early 2019 and rereleased this March with several upgrades. So, I set out to see just how easy it was: Could I, for example, quickly and cheaply make a camera that could recognize and track chosen objects?

Embedded machine learning is evolving rapidly. In April 2019, Hands On looked at Google’s Coral Dev Board, which incorporates the company’s Edge tensor processing unit (TPU), and in July 2019, IEEE Spectrum featured Adafruit’s software library, which lets even a handheld game device do simple speech recognition. The Jetson Nano is closer to the Coral Dev Board: with its 128 parallel processing cores, it is, like the Coral, powerful enough to handle a real-time video feed, and both have Raspberry Pi–style 40-pin GPIO connectors for driving external hardware.
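
To give a sense of what that object-recognizing camera involves, here is a minimal sketch along the lines of NVIDIA’s own jetson-inference examples. It assumes the jetson-inference Python bindings are installed on the Nano and a CSI camera and display are attached; the model name and camera URI below are library defaults and may need adjusting for other setups:

```python
import jetson.inference
import jetson.utils

# Pretrained SSD-MobileNet detector bundled with jetson-inference.
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

camera = jetson.utils.videoSource("csi://0")       # CSI camera; "/dev/video0" for USB
display = jetson.utils.videoOutput("display://0")  # render to the attached screen

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)                   # draws boxes and labels on img
    display.Render(img)
    display.SetStatus(f"Object Detection | {net.GetNetworkFPS():.0f} FPS")
```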