
Neoadjuvant chemotherapy and radiation followed by surgical resection of the rectum is a standard treatment for locally advanced rectal cancer. A subset of rectal cancer is caused by a deficiency in mismatch repair. Because mismatch repair–deficient colorectal cancer is responsive to programmed death 1 (PD-1) blockade in the context of metastatic disease, it was hypothesized that checkpoint blockade could be effective in patients with mismatch repair–deficient, locally advanced rectal cancer.

We initiated a prospective phase 2 study in which single-agent dostarlimab, an anti–PD-1 monoclonal antibody, was administered every 3 weeks for 6 months in patients with mismatch repair–deficient stage II or III rectal adenocarcinoma. This treatment was to be followed by standard chemoradiotherapy and surgery. Patients who had a clinical complete response after completion of dostarlimab therapy would proceed without chemoradiotherapy and surgery. The primary end points are sustained clinical complete response 12 months after completion of dostarlimab therapy or pathological complete response after completion of dostarlimab therapy with or without chemoradiotherapy and overall response to neoadjuvant dostarlimab therapy with or without chemoradiotherapy.

A total of 12 patients have completed treatment with dostarlimab and have undergone at least 6 months of follow-up. All 12 patients (100%; 95% confidence interval, 74 to 100) had a clinical complete response, with no evidence of tumor on magnetic resonance imaging, 18F-fluorodeoxyglucose–positron-emission tomography, endoscopic evaluation, digital rectal examination, or biopsy. At the time of this report, no patients had received chemoradiotherapy or undergone surgery, and no cases of progression or recurrence had been reported during follow-up (range, 6 to 25 months). No adverse events of grade 3 or higher have been reported.

The first exascale computer has officially arrived.

The world’s fastest supercomputer performed more than a quintillion calculations per second, entering the realm of exascale computing. That’s according to a ranking of the world’s speediest supercomputers called the TOP500, announced on May 30. The computer, known as Frontier, is the first exascale computer to be included on the biannual list.

Researchers at Meta Reality Labs report that their work on Codec Avatars 2.0 has reached a level where the avatars are approaching complete realism. They built a prototype virtual reality headset with a custom accelerator chip designed specifically to handle the AI processing needed to render Meta's photorealistic Codec Avatars on standalone headsets.

The prototype virtual reality avatars are driven by advanced machine learning techniques.

Meta first showcased its work on the sophisticated Codec Avatars in March 2019. The avatars are powered by multiple neural networks and are generated with a special capture rig containing 171 cameras. Once generated, an avatar is driven in real time by a prototype virtual reality headset with five cameras: two internal cameras viewing each eye and three external cameras viewing the lower face. It is thought that such advanced, photorealistic avatars may one day replace video conferencing.

After nine years working at NASA's Jet Propulsion Laboratory, Olivier Toupet now develops cutting-edge AI algorithms that enable the self-driving Zoox vehicle to understand and make decisions based on its surroundings, and to optimize trajectories so it reaches its destination safely and comfortably.

Learn why he says the work he’s doing at Zoox is, in some ways, more challenging than his previous work.


Zoox principal software engineer Olivier Toupet on the company's autonomous robotaxi technology.

As far as data security is concerned, there is an even greater danger than remote cyberattacks: tampering with hardware, which can be used to read out information such as credit card data from a card reader. Researchers in Bochum have developed a new method to detect such manipulations. They monitor systems with radio waves that react to the slightest changes in the ambient conditions. Unlike conventional methods, this can protect entire systems, not just individual components, and at a lower cost. The RUB's science magazine Rubin features a report by the team from Ruhr-Universität Bochum (RUB), the Max Planck Institute for Security and Privacy and the IT company PHYSEC.
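The core idea, recording a known-good radio measurement of a system and flagging later measurements that deviate from it, can be sketched as follows. This is a hedged toy illustration only: the simulated samples, the FFT-magnitude "fingerprint," and the correlation threshold are our assumptions, not the Bochum team's actual method.

```python
import numpy as np

def channel_fingerprint(samples: np.ndarray) -> np.ndarray:
    """Reduce raw radio samples to a frequency-domain magnitude fingerprint."""
    return np.abs(np.fft.rfft(samples))

def is_tampered(baseline: np.ndarray, current: np.ndarray,
                threshold: float = 0.99) -> bool:
    """Flag tampering when the current fingerprint correlates poorly
    with the stored baseline fingerprint."""
    corr = np.corrcoef(channel_fingerprint(baseline),
                       channel_fingerprint(current))[0, 1]
    return bool(corr < threshold)

# Simulated measurements: normal operation only adds small measurement
# noise, while a physical change to the hardware perturbs the radio
# environment much more strongly.
rng = np.random.default_rng(0)
baseline = rng.standard_normal(1024)
untouched = baseline + 0.01 * rng.standard_normal(1024)
tampered = baseline + 0.5 * rng.standard_normal(1024)

print(is_tampered(baseline, untouched))  # small noise: below threshold
print(is_tampered(baseline, tampered))   # large perturbation: flagged
```

The design choice here mirrors the article's point: because the radio waves propagate through the whole enclosure, a single fingerprint can cover an entire system rather than one component at a time.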

Paul Staat and Johannes Tobisch presented their findings at the IEEE Symposium on Security and Privacy, which took place in the U.S. from 23 to 25 May 2022. Both researchers are doing their Ph.D.s at RUB and conducting research at the Max Planck Institute for Security and Privacy in Bochum in Professor Christof Paar’s team. For their research, they are cooperating with Dr. Christian Zenger from the RUB spin-off company PHYSEC.

Artificial intelligence (AI) plays an important role in many systems, from predictive text to medical diagnoses. Inspired by the human brain, many AI systems are implemented based on artificial neural networks, where electrical equivalents of biological neurons are interconnected, trained with a set of known data, such as images, and then used to recognize or classify new data points.
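The train-then-classify loop described above can be sketched with a single artificial neuron (a classic perceptron) on synthetic two-dimensional data. The data, learning rate, and epoch count are illustrative choices of ours, not part of the reported work.

```python
import numpy as np

# Two known classes of training points, one cluster near (-2, -2)
# and one near (2, 2).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# A single linear "neuron": weights w and bias b, trained with the
# perceptron update rule (adjust only on mistakes).
w, b = np.zeros(2), 0.0
for _ in range(20):
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += 0.1 * (yi - pred) * xi
        b += 0.1 * (yi - pred)

def classify(point: np.ndarray) -> int:
    """Classify a new, unseen data point with the trained neuron."""
    return 1 if point @ w + b > 0 else 0

print(classify(np.array([2.5, 2.0])), classify(np.array([-2.5, -1.5])))
```

Real image-recognition networks stack many such units in layers, but the workflow is the same: train on labeled examples, then apply the learned weights to new inputs.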

In traditional neural networks used for image recognition, the image of the target object is first formed on an image sensor, such as the one in a smartphone camera. The image sensor then converts light into electrical signals, and ultimately into binary data, which can be processed, analyzed, stored and classified using computer chips. Speeding up these abilities is key to improving any number of applications, such as face recognition, automatically detecting text in photos, or helping self-driving cars recognize obstacles.

While current, consumer-grade image classification technology on a digital chip can perform billions of computations per second, making it fast enough for most applications, more sophisticated image classification tasks, such as identifying moving objects, 3D object identification, or classification of microscopic cells in the body, are pushing the computational limits of even the most powerful technology. The current speed limit of these technologies is set by the clock-based schedule of computation steps in a computer processor, where computations occur one after another on a linear schedule.

Electro-optic modulators, which control aspects of light in response to electrical signals, are essential for everything from sensing to metrology and telecommunications. Today, most research into these modulators is focused on applications that take place on chips or within fiber optic systems. But what about optical applications outside the wire and off the chip, like distance sensing in vehicles?

Current technologies to modulate light in free space are bulky, slow, static, or inefficient. Now, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), in collaboration with researchers in the Department of Chemistry at the University of Washington, have developed a compact and tunable electro-optic modulator for free-space applications that can modulate light at gigahertz speed.

“Our work is the first step toward a class of free-space electro-optic modulators that provide compact and efficient intensity modulation at gigahertz speed of free-space beams at telecom wavelengths,” said Federico Capasso, Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering, senior author of the paper.

Researchers have pioneered a technique that can dramatically accelerate certain types of computer programs automatically, while ensuring program results remain accurate.

Their system boosts the speeds of programs that run in the Unix shell, a ubiquitous programming environment created 50 years ago that is still widely used today. Their method parallelizes these programs, which means that it splits program components into pieces that can be run simultaneously on multiple computer processors.
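The split-and-run-simultaneously idea can be sketched with a process pool: divide the input into chunks, hand each chunk to a separate processor, and merge the partial results. This is an illustrative sketch only; word counting stands in for a shell pipeline stage, and the function names are our own, not the researchers' system.

```python
from multiprocessing import Pool

def count_words(lines):
    """Sequential stage: count the words in a list of lines."""
    return sum(len(line.split()) for line in lines)

def parallel_count(lines, workers=4):
    """Parallelized version: split the input into one chunk per worker,
    run the stage on all chunks simultaneously, and merge the results."""
    chunks = [lines[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(count_words, chunks))

if __name__ == "__main__":
    text = ["the quick brown fox"] * 1000
    print(parallel_count(text))  # prints 4000, same as the sequential stage
```

The key property, and the hard part the researchers had to guarantee for arbitrary shell programs, is that the merged parallel result is identical to the sequential one.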

This enables programs to execute tasks like web indexing or data analysis in a fraction of their original runtime.