UMass Amherst researchers have pushed the boundaries of biomedical engineering with a new DNA-detection method that improves sensitivity roughly one hundredfold.

“DNA detection is in the center of bioengineering,” says Jinglei Ping, lead author of the paper that appeared in Proceedings of the National Academy of Sciences.

Ping is an assistant professor of mechanical and industrial engineering, an adjunct assistant professor of biomedical engineering, and is affiliated with the Center for Personalized Health Monitoring of the Institute for Applied Life Sciences. “Everyone wants to detect the DNA at a low concentration with a high sensitivity. And we just developed this method to improve the sensitivity by about 100 times with no cost.”

An internationally renowned Iranian scientist and this year’s winner of Iran’s prestigious Mustafa Prize for science and technology has hailed the country’s great advances in the fields of nanotechnology and medicine.

“Iran has always been far ahead in the field of nanotechnology,” Omid Farokhzad, who has won the prize for design, development, and clinical translation of novel polymeric nanomedicines used to treat various diseases, especially cancer, said on Monday.

It’s been a busy week for IonQ, the quantum computing start-up focused on developing trapped-ion-based systems. At the Quantum World Congress today, the company announced two new systems (Forte Enterprise and Tempo) intended to be rack-mountable and deployable in a traditional data center. Yesterday, speaking at the HPC and AI on Wall Street conference hosted by Tabor Communications (HPCwire’s parent organization), the company made a strong pitch for reaching quantum advantage in 2–3 years using the new systems.

If you’ve been following quantum computing, you probably know that deploying quantum computers in the data center is a rare occurrence. Access to the vast majority of NISQ-era computers has been through web portals. The latest announcement from IonQ, along with a somewhat similar announcement from neutral-atom specialist QuEra in August and increased IBM efforts (Cleveland Clinic and PINQ2) to selectively place on-premises quantum systems, suggests change is coming to the market.

IonQ’s two rack-mounted solutions are designed for businesses and governments that want to integrate quantum capabilities into their existing infrastructure. “Businesses will be able to harness the power of quantum directly from their own data centers, making the technology significantly more accessible and easy to apply to key workflows and business processes,” the company stated. IonQ is calling the new systems enterprise-grade. (See the official announcement.)

Benchmarks are a key driver of progress in AI. But they also have many shortcomings. The new GPT-Fathom benchmark suite aims to reduce some of these pitfalls.

Benchmarks allow AI developers to measure the performance of their models on a variety of tasks; for language models, these include answering knowledge questions or solving logic problems. Depending on its performance, a model receives a score that can then be compared with the results of other models.
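In essence, a benchmark reduces to running a model over a fixed task set and computing a score such as accuracy. A minimal sketch of that loop (the task set, toy "models," and scoring function here are illustrative, not part of GPT-Fathom or any real suite):

```python
# Illustrative benchmark scoring sketch: a "model" is anything callable
# that maps a prompt to an answer; the score is plain accuracy.

def score_model(model, tasks):
    """Return the fraction of (prompt, answer) tasks the model gets right."""
    correct = sum(1 for prompt, answer in tasks if model(prompt) == answer)
    return correct / len(tasks)

# A tiny hypothetical task set of knowledge questions.
tasks = [
    ("Capital of France?", "Paris"),
    ("2 + 2 = ?", "4"),
    ("Largest planet in the solar system?", "Jupiter"),
]

# Two toy "models": one that knows every answer, one that only knows arithmetic.
model_a = {"Capital of France?": "Paris",
           "2 + 2 = ?": "4",
           "Largest planet in the solar system?": "Jupiter"}.get
model_b = {"2 + 2 = ?": "4"}.get

print(score_model(model_a, tasks))  # 1.0
print(score_model(model_b, tasks))  # 1/3
```

Real suites differ mainly in scale and in details such as prompt formatting, sampling settings, and answer extraction, which is exactly where benchmark results become hard to compare across papers.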

These benchmarking results form the basis for further research decisions and, ultimately, investments. They also provide information about the strengths and weaknesses of individual methods.

Robots are great specialists, but poor generalists. Typically, you have to train a model for each task, robot, and environment. Changing a single variable often requires starting from scratch. But what if we could combine the knowledge across robotics and create a way to train a general-purpose robot?

Today, we are launching a new set of resources for general-purpose robotics learning across different robot types, or embodiments. Together with partners from 33 academic labs, we have pooled data from 22 different robot types to create the Open X-Embodiment dataset. We are also releasing RT-1-X, a robotics transformer (RT) model derived from RT-1 and trained on our dataset, which shows skill transfer across many robot embodiments.

In this work, we show that training a single model on data from multiple embodiments leads to significantly better performance across many robots than training on data from individual embodiments. We tested our RT-1-X model in five different research labs, demonstrating a 50% average improvement in success rate across five commonly used robots compared with methods developed independently for each robot. We also showed that training our vision-language-action model, RT-2, on data from multiple embodiments tripled its performance on real-world robotic skills.
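At its core, cross-embodiment training starts by merging per-robot datasets into one training set while tagging each episode with the embodiment it came from. A toy sketch of that pooling step (the episode format and robot names are illustrative assumptions; the actual Open X-Embodiment data is released in RLDS/TFRecord format, not this structure):

```python
# Illustrative sketch of pooling per-robot datasets into one
# cross-embodiment training set, keeping the source embodiment as a tag.

def pool_episodes(datasets):
    """Merge {embodiment_name: [episode, ...]} dicts into one flat list,
    annotating every episode with its source embodiment."""
    pooled = []
    for embodiment, episodes in datasets.items():
        for ep in episodes:
            pooled.append({"embodiment": embodiment, **ep})
    return pooled

# Hypothetical per-robot datasets with a simplified (observation, action) format.
datasets = {
    "franka_arm": [{"obs": "img_0", "action": [0.1, 0.0]}],
    "ur5": [{"obs": "img_1", "action": [0.0, 0.2]},
            {"obs": "img_2", "action": [0.3, 0.1]}],
}

pooled = pool_episodes(datasets)
print(len(pooled))  # 3
```

The embodiment tag matters because observation and action spaces differ across robots; a single policy trained on the pooled set has to learn a shared representation across them, which is where the reported transfer gains come from.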