Last week, Amazon patented a delivery system involving self-driving trucks carrying several small robots that deliver packages to homes.

Once all the small delivery bots are back on board, the truck (which would have a human driver in the near future but likely be autonomous in the less-near future) drives off to the next block—its fleet of mini-me’s restocking with new packages en route—and the scene repeats itself.

Cool/creepy? Good/bad? Depends on your perspective. On the one hand, employing fewer humans would bring Amazon cost savings in the long run, savings it would ideally pass on to customers and reinvest in other parts of the business, eventually hiring more people in a virtuous circle.

But on the other hand, it’s not hard to imagine the secondary vehicles going awry. There would be plenty of obstacles for them to get around (dogs, bikes, sprinklers, and children are just a few that come to mind), and given how hard it has been to bring self-driving cars to market, Amazon may be underestimating the challenge of maneuvering the small delivery vehicles even the 100 feet from truck to doorstep.

Lambda, an AI infrastructure company, this week announced it raised $15 million in a venture funding round from 1517, Gradient Ventures, Razer, Bloomberg Beta, Georges Harik, and others, plus a $9.5 million debt facility. The $24.5 million investment brings the company’s total raised to $28.5 million, following an earlier $4 million seed tranche.

In 2013, San Francisco, California-based Lambda controversially launched a facial recognition API for developers working on apps for Google Glass, Google’s ill-fated heads-up augmented reality display. The API — which soon expanded to other platforms — enabled apps to do things like “remember this face” and “find your friends in a crowd,” Lambda CEO Stephen Balaban told TechCrunch at the time. The API has been used by thousands of developers and was, at least at one point, seeing over 5 million API calls per month.

Since then, however, Lambda has pivoted to selling hardware systems designed for AI, machine learning, and deep learning applications. Among these are the TensorBook, a laptop with a dedicated GPU, and a workstation product with up to four desktop-class GPUs for AI training. Lambda also offers servers, including one designed to be shared between teams and a server cluster, called Echelon, that Balaban describes as “datacenter-scale.”

A dazzling new animation puts you aboard NASA’s robotic Juno spacecraft during last month’s epic flybys of Jupiter and its huge moon Ganymede.

On June 7, Juno zoomed within just 645 miles (1038 kilometers) of Ganymede, the largest moon in the solar system. It was the closest a probe had gotten to the icy, heavily cratered world since May 2000, when NASA’s Galileo spacecraft flew by at a distance of about 620 miles (1000 km).

The world’s first 3D-printed steel bridge has opened in Amsterdam, the Netherlands. It was created by robotic arms using welding torches to deposit the structure of the bridge layer by layer, and is made of 4500 kilograms of stainless steel.

The 12-metre-long MX3D Bridge was built by four commercially available industrial robots and took six months to print. The structure was transported to its location over the Oudezijds Achterburgwal canal in central Amsterdam last week and is now open to pedestrians and cyclists.

More than a dozen sensors attached to the bridge after the printing was completed will monitor strain, movement, vibration and temperature across the structure as people pass over it and the weather changes. This data will be fed into a digital model of the bridge.
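To make that pipeline concrete, here is a minimal sketch of how such readings might be structured and fed into the digital model. Every name in it (SensorReading, BridgeTwin, ingest) is hypothetical; the article does not describe the actual software behind the bridge.

    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        sensor_id: str    # e.g. "strain-03"
        kind: str         # "strain", "movement", "vibration", or "temperature"
        value: float      # in the sensor's native unit
        timestamp: float  # seconds since the epoch

    class BridgeTwin:
        """Toy digital model: keeps the latest reading from each sensor."""

        def __init__(self):
            self.state = {}

        def ingest(self, reading):
            # A real digital twin would also update a structural model here;
            # this sketch just records the most recent value per sensor.
            self.state[reading.sensor_id] = reading

    twin = BridgeTwin()
    twin.ingest(SensorReading("strain-03", "strain", 412.0, 1626300000.0))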

Scientists have waited months for access to highly accurate protein structure prediction since DeepMind presented remarkable progress in this area at the 2020 Critical Assessment of Structure Prediction, or CASP14, conference. The wait is now over.

Researchers at the Institute for Protein Design at the University of Washington School of Medicine in Seattle have largely recreated the performance achieved by DeepMind on this important task. These results will be published online by the journal Science on Thursday, July 15.

Unlike DeepMind’s system, the UW Medicine team’s method, which they dubbed RoseTTAFold, is freely available. Scientists from around the world are now using it to build models to accelerate their own research. Since July, the program has been downloaded from GitHub by over 140 independent research teams.

The Google Quantum AI team has found that increasing the number of physical qubits that make up a logical qubit on the company’s quantum computer reduces the logical error rate exponentially. In their paper published in the journal Nature, the group describes their work with logical qubits as an error correction technique and outlines what they have learned so far.

One of the hurdles standing in the way of usable quantum computers is figuring out how to either prevent errors from occurring or fix them before they corrupt a computation. On traditional computers, the problem is mostly solved by adding parity bits, but that approach will not work with quantum computers because of the nature of qubits: attempting to measure them directly destroys the data they hold. Prior research has suggested that one possible solution is to group physical qubits into clusters called logical qubits. In this new effort, the Google Quantum AI team tested the idea on Google’s Sycamore quantum computer.

Sycamore works with 54 physical qubits. In their experiments, the researchers created logical qubits of different sizes, ranging from five to 21 qubits, to see how each would perform. They found that adding qubits reduced the logical error rate exponentially. They were able to measure the extra qubits in a way that did not involve collapsing their state, but that still provided enough information for the logical qubits to be used in computations.
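The intuition behind that exponential suppression can be seen in a toy model. Google’s experiment used quantum repetition codes with repeated stabilizer measurements; the sketch below is only a classical stand-in for that idea (majority voting over independent bit flips), not the actual quantum circuit, and its 10 percent physical error rate is an illustrative assumption rather than a hardware figure.

    from math import comb

    def logical_error_rate(n, p):
        # A majority-vote decoder over n copies fails only when more than
        # half of the copies flip, each independently with probability p.
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    # Code sizes matching the five-to-21-qubit range described above.
    for n in (5, 9, 13, 17, 21):
        print(f"{n:2d} qubits -> logical error rate {logical_error_rate(n, 0.10):.2e}")

Each step up in code size suppresses the logical error rate by roughly the same multiplicative factor, which is the exponential behavior the Google team reported, though real hardware also contends with correlated errors and imperfect measurements that this toy model ignores.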

Today, in a peer-reviewed paper published in the prestigious scientific journal Nature, DeepMind offered further details of how exactly its A.I. software was able to perform so well. It has also open-sourced the code it used to create AlphaFold 2 for other researchers to use.

But it’s still not clear when researchers and drug companies will have easy access to AlphaFold’s structure predictions.