
Alibaba Shuts Down its Quantum Computing Effort

In case you missed it, China’s e-commerce giant Alibaba has shut down its quantum computing research effort. It’s not entirely clear what drove the change. Reuters reported earlier this week that Alibaba “cut a quantum computing laboratory and team from its research arm, donating both the lab and related experimental equipment to Zhejiang University.”

Alibaba was a relatively early entrant among giant e-commerce/cloud providers into quantum computing research, placing the effort in its DAMO Academy research organization. There are reports it had invested on the order of $15 billion in the effort. According to the Reuters report, about 30 employees are being released, with an effort under way to find positions for them at Zhejiang.

Rather than being tied to specific issues with the quantum research, the prevailing opinion seems to be that the quantum work was caught in the larger turmoil surrounding Alibaba and its ongoing reorganization. The company said its DAMO organization will deepen its work on AI and machine learning research, which may have a nearer-term impact on Alibaba’s business.

These ‘anthrobots’ created from human cells are healing neurons

The researchers are excited by the potential of understanding how cells cooperate and communicate in the body, and how they can be reprogrammed to create new structures and functions.


With the help of Simon Garnier at the New Jersey Institute of Technology, the team characterized the different types of Anthrobots that were produced.

They observed that bots fell into a few discrete categories of shape and movement, ranging in size from 30 to 500 micrometers (from the thickness of a human hair to the point of a sharpened pencil), filling an important niche between nanotechnology and larger engineered devices.

Some were spherical and fully covered in cilia, and some were irregular or football-shaped with more patchy coverage of cilia or just covered with cilia on one side. They traveled in straight lines, moved in tight circles, combined those movements, or just sat around and wiggled.

Researchers use 2D material to reshape 3D electronics for AI hardware

Multifunctional computer chips have evolved to do more with integrated sensors, processors, memory and other specialized components. However, as chips have expanded, the time required to move information between functional components has also grown.

“Think of it like building a house,” said Sang-Hoon Bae, an assistant professor of mechanical engineering at the McKelvey School of Engineering at Washington University in St. Louis. “You build out laterally and up vertically to get more function, more room to do more specialized activities, but then you have to spend more time moving or communicating between rooms.”

To address this challenge, Bae and a team of international collaborators, including researchers from the Massachusetts Institute of Technology, Yonsei University, Inha University, Georgia Institute of Technology and the University of Notre Dame, demonstrated monolithic 3D integration of layered 2D material into novel processing hardware for artificial intelligence (AI) computing.

AI-Enhanced Imaging: Probing Brain’s Visual Processing

Summary: Researchers used AI to select and generate images for studying the brain’s visual processing. Functional MRI (fMRI) recorded heightened brain activity in response to these images, surpassing control images.

The approach enabled tuning visual models to individual responses, enhancing the study of the brain’s reaction to visual stimuli. This method, offering an unbiased, systematic view of visual processing, could revolutionize neuroscience and therapeutic approaches.

Tiny robots made from human cells heal damaged tissue

Xenobots are robots made from frog cells (Xenopus laevis).


Scientists have developed tiny robots made of human cells that are able to repair damaged neural tissue. The ‘anthrobots’ were made using human tracheal cells and might, in future, be used in personalized medicine.

Developmental biologist Michael Levin at Tufts University in Medford, Massachusetts, and his colleagues had previously developed tiny robots using clumps of embryonic frog cells. But the medical applications of these ‘xenobots’ were limited, because they weren’t derived from human cells and because they had to be manually carved into the desired shape. The researchers have now developed self-assembling anthrobots and are investigating their therapeutic potential using human tissue grown in the laboratory. They published their findings in Advanced Science.

Levin and his team grew spheroids of human tracheal skin cells in a gel for two weeks, before removing the clusters and growing them for one week in a less viscous solution. This caused tiny hairs on the cells called cilia to move to the outside of the spheroids instead of the inside. These cilia acted as oars, and the researchers found that the resulting anthrobots — each containing a few hundred cells — often swam in one of several patterns. Some swam in straight lines, others swam in circles or arcs, and some moved chaotically.

Robot Dog Designed as Astronaut Companion

A companion robot dog, designed to provide emotional support to astronauts, has been unveiled by a student from South Korea’s Hongik University.

The small-scale robot dog Laika is named after the first dog to orbit the Earth aboard Sputnik 2.

A video shows Laika running, walking, barking and sitting. It’s designed to replicate the movements and behavior of real dogs, offering an approachable form that enables emotional connection for astronauts during lengthy missions.

The Military’s Big Bet on Artificial Intelligence

Number 4 Hamilton Place is a be-columned building in central London, home to the Royal Aeronautical Society and four floors of event space. In May, the early 20th-century Edwardian townhouse hosted a decidedly more modern meeting: Defense officials, contractors, and academics from around the world gathered to discuss the future of military air and space technology.

Things soon went awry. At that conference, Tucker Hamilton, chief of AI test and operations for the United States Air Force, seemed to describe a disturbing simulation in which an AI-enabled drone had been tasked with taking down missile sites. But when a human operator started interfering with that objective, he said, the drone killed its operator, and cut the communications system.
