The existing jacket can perform one logical operation per second, compared to the more than a billion operations per second typical of a home computer, says Preston. In practice, this means the jacket can only execute short command sequences. Due to the speed of the logic, along with some other engineering challenges, Zhang says he thinks it’ll take five to 10 years for these textile-based robots to reach commercial maturity.

In the future, Preston’s team plans to do away with the carbon dioxide canister, which is impractical. (You have to refill it like you would a SodaStream.) Instead, his team wants to just use ambient air to pump up the jacket. As a separate project, the team has already developed a foam insole for a shoe that pumps the surrounding air into a bladder worn around the waist when the wearer takes a step. They plan to integrate a similar design into the jacket.

Preston also envisions clothing that senses and responds to the wearer’s needs. For example, a sensor on a future garment could detect when the wearer is beginning to lift their arm and inflate without any button-pressing. “Based on some stimulus from the environment and the current state, the logic system can allow the wearable robot to choose what to do,” he says. We’ll be waiting for this fashion trend to blow up.
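To make that idea concrete, here is a minimal software sketch of the stimulus-plus-state decision logic Preston describes. It is purely illustrative: the sensor reading, threshold, and actions below are hypothetical and do not come from the actual jacket, whose logic is implemented pneumatically rather than in code.

```python
# Illustrative sketch only: a software analogue of "stimulus + current state -> action".
# The sensor name, threshold, and actions are hypothetical; the real garment performs
# this kind of logic with pneumatic valves, not code.

def decide(arm_angle_deg: float, jacket_inflated: bool) -> str:
    """Choose an action from the current stimulus and the garment's state."""
    LIFT_THRESHOLD = 30.0  # degrees; hypothetical trigger angle

    if arm_angle_deg > LIFT_THRESHOLD and not jacket_inflated:
        return "inflate"   # wearer is starting to lift the arm -> assist
    if arm_angle_deg <= LIFT_THRESHOLD and jacket_inflated:
        return "deflate"   # arm is back down -> release the assistance
    return "hold"          # no change needed

# Example: the wearer raises an arm while the jacket is deflated.
print(decide(arm_angle_deg=45.0, jacket_inflated=False))  # -> "inflate"
```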

This year’s Breakthrough Prize in Life Sciences has a strong physical sciences element. The prize was divided among six individuals. Demis Hassabis and John Jumper of the London-based AI company DeepMind were awarded a third of the prize for developing AlphaFold, a machine-learning algorithm that can accurately predict the 3D structure of proteins from just the amino-acid sequence of their polypeptide chain. Emmanuel Mignot of Stanford University School of Medicine and Masashi Yanagisawa of the University of Tsukuba, Japan, were awarded another third for their work on the sleep disorder narcolepsy.

The remainder of the prize went to Clifford Brangwynne of Princeton University and Anthony Hyman of the Max Planck Institute of Molecular Cell Biology and Genetics in Germany for discovering that the molecular machinery within a cell—proteins and RNA—organizes by phase separating into liquid droplets. This phase separation process has since been shown to be involved in several basic cellular functions, including gene expression, protein synthesis and storage, and stress responses.

The award for Brangwynne and Hyman shows “the transformative role that the physics of soft matter and the physics of polymers can play in cell biology,” says Rohit Pappu, a biophysicist and bioengineer at Washington University in St. Louis. “[The discovery] could only have happened the way it did: a creative young physicist working with an imaginative cell biologist in an ecosystem where boundaries were always being pushed at the intersection of multiple disciplines.”

Today, Meta announced Make-A-Video, an AI-powered video generator that can create novel video content from text or image prompts, similar to existing image synthesis tools like DALL-E and Stable Diffusion. It can also make variations of existing videos, though it’s not yet available for public use.

The key technology behind Make-A-Video—and why it has arrived sooner than some experts anticipated—is that it builds off existing work with text-to-image synthesis used with image generators like OpenAI’s DALL-E. In July, Meta announced its own text-to-image AI model called Make-A-Scene.
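For a feel for the text-to-image step these video models build on, here is a minimal sketch using the open-source Stable Diffusion pipeline from Hugging Face’s diffusers library. It is not Meta’s Make-A-Video or Make-A-Scene, whose code has not been released; the checkpoint name and settings are just illustrative defaults.

```python
# Minimal text-to-image sketch with Stable Diffusion via the diffusers library.
# This illustrates the text-to-image synthesis that text-to-video systems build on;
# it is not Meta's Make-A-Video or Make-A-Scene.
import torch
from diffusers import StableDiffusionPipeline

# A publicly available Stable Diffusion checkpoint (illustrative choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU; use float32 on CPU, but it is much slower

prompt = "a teddy bear painting a portrait, soft studio lighting"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("teddy_bear.png")
```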

We wonder what Tesla is going to reveal.

Once again, it is that time of the year. The annual Tesla AI Day, a demonstration of the most cutting-edge technologies from all of the company’s operating divisions, is tomorrow. While Tesla vehicles receive the majority of press attention, the company has a wide range of other applications and products that it is constantly developing and improving.

Tesla’s AI Day has grown in popularity over time, and its livestream draws millions of viewers.

Some people watch because of Elon Musk’s penchant for spectacle.

Whatever the motivation, on September 30, 2022, we can anticipate some surprises and the introduction of fresh concepts.

We expect four announcements about what Tesla is working on; below are our expectations for Tesla AI Day 2022.

A recent video published on Amazon’s science blog features a new “pinch-grasping” robot system that could one day do a lot of the work that humans in Amazon warehouses do today. Or, potentially, help workers do their jobs more easily.

The topic of warehouse automation is more relevant than ever in the retail and e-commerce industries, especially for Amazon, which is the largest online retailer and the second-largest private sector employer in the US. Recode reported in June that research conducted inside Amazon predicted that the company could run out of workers to hire in the US by 2024 if it did not execute a series of sweeping changes, including increasing automation in its warehouses.

📝 The paper “LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action” is available below. Note that this is a collaboration between UC Berkeley, University of Warsaw, and Robotics at Google.
https://sites.google.com/view/lmnav.

Website layout with GPT-3: https://twitter.com/sharifshameem/status/1283322990625607681
Image interpolation video with Stable Diffusion: https://twitter.com/xsteenbrugge/status/1558508866463219712


My head is currently swirling and whirling with a cacophony of conceptions. This maelstrom of meditations was triggered by NVIDIA’s recent announcement of their Jetson Orin Nano system-on-modules, which deliver up to 80x the performance of the prior generation and are, in NVIDIA’s own words, “setting a new standard for entry-level edge AI and robotics.”

One of my contemplations centers on their use of the “entry level” qualifier in this context. When I was coming up, this bodacious beauty would have qualified as the biggest, baddest supercomputer on the planet.

I’m being serious. In 1975, which was the year I entered university, Cray Research announced their Cray-1 Supercomputer. Conceived by Seymour Cray, this was the first computer to successfully implement a vector processing architecture.

We are on the cusp of a paradigm shift brought by generative AI — but it isn’t about making creativity “quick and easy.” Generative technology opens new heights of human expression and helps creators find their authentic voices.

How we create is changing. The blog post you read earlier today may have been written with generative AI. Within 10 years, most creative content will be produced with generative technologies.

Researchers at QuTech—a collaboration between the Delft University of Technology and TNO—have engineered a record six silicon-based spin qubits in a fully interoperable array. Importantly, the qubits can be operated with a low error rate, achieved with a new chip design, an automated calibration procedure, and new methods for qubit initialization and readout. These advances will contribute to a scalable quantum computer based on silicon. The results are published in Nature today.

Different materials can be used to produce qubits, the quantum analog of the bit in a classical computer, but no one knows which material will turn out to be best for building a large-scale quantum computer. To date, there have only been smaller demonstrations of quantum chips with high-quality qubit operations. Now, researchers from QuTech, led by Prof. Lieven Vandersypen, have produced a six-qubit chip in silicon that operates with low error rates. This is a major step towards a fault-tolerant quantum computer using silicon.

To make the qubits, individual electrons are placed in a linear array of six “quantum dots” spaced 90 nanometers apart. The array of quantum dots is made in a silicon chip with structures that closely resemble the transistor—a common component in every computer chip. A quantum mechanical property called spin is used to define a qubit, with its orientation defining the 0 or 1 logical state. The team used finely tuned microwave radiation, magnetic fields, and electric potentials to control and measure the spin of individual electrons and make them interact with each other.
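As a rough numerical illustration of how a resonant microwave drive rotates a spin between its logical 0 and 1 states, here is a minimal single-qubit Rabi-oscillation sketch. The Rabi frequency and time scale are arbitrary, and the model ignores noise, decoherence, and the dot-to-dot coupling used in the actual six-qubit device.

```python
# Minimal sketch: Rabi oscillation of one idealized spin qubit under a resonant
# microwave drive. Parameters are arbitrary; decoherence and qubit-qubit coupling
# are ignored.
import numpy as np

# Pauli-X generates rotations between the spin states that encode logical 0 and 1.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
identity = np.eye(2, dtype=complex)

rabi_freq = 2 * np.pi * 1e6            # 1 MHz Rabi frequency (illustrative)
times = np.linspace(0, 2e-6, 201)      # 0 to 2 microseconds

state0 = np.array([1, 0], dtype=complex)  # start in the logical 0 state

for t in times[::25]:
    # Evolution under H = (hbar * Omega / 2) * sigma_x gives
    # U(t) = exp(-i * Omega * t / 2 * sigma_x).
    theta = rabi_freq * t / 2
    U = np.cos(theta) * identity - 1j * np.sin(theta) * sigma_x
    state = U @ state0
    p1 = abs(state[1]) ** 2  # probability of reading out logical 1
    print(f"t = {t * 1e6:4.2f} us   P(|1>) = {p1:.3f}")
```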