
3D printed robotic gripper doesn’t need electronics to function

A new soft robotic gripper is not only 3D printed in one print, it also doesn’t need any electronics to work. The device was developed by a team of roboticists at the University of California San Diego, in collaboration with researchers at the BASF corporation, who detailed their work in Science Robotics.

The researchers wanted to design a soft gripper that would be ready to use right as it comes off the 3D printer, equipped with built-in gravity and touch sensors. As a result, the gripper can pick up, hold, and release objects. No such gripper existed before this work.

“We designed functions so that a series of valves would allow the gripper to both grip on contact and release at the right time,” said Yichen Zhai, a postdoctoral researcher in the Bioinspired Robotics and Design Lab at the University of California San Diego and the leading author of the paper. “It’s the first time such a gripper can both grip and release. All you have to do is turn the gripper horizontally. This triggers a change in the airflow in the valves, making the two fingers of the gripper release.”

Tesla Commences Production of Dojo Supercomputer for Autonomous Vehicle Training

In its second-quarter earnings report for 2023, Tesla revealed its ambitious plan to address vehicle autonomy at scale with four key technology pillars: an extensive real-world dataset, neural net training, vehicle hardware, and vehicle software. Notably, the electric vehicle manufacturer asserted its commitment to developing each of these pillars in-house. A significant milestone in this endeavor was announced as Tesla started the production of its custom-built Dojo training computer, a critical component in achieving faster and more cost-effective neural net training.

While Tesla already possesses one of the world’s most potent Nvidia GPU-based supercomputers, the Dojo supercomputer takes a different approach by utilizing chips specifically designed by Tesla. Back in 2019, Tesla CEO Elon Musk christened this project as “Dojo,” envisioning it as an exceptionally powerful training computer. He claimed that Dojo would be capable of performing an exaflop, or one quintillion (10^18) floating-point operations per second, an astounding level of computational power. To put that in perspective, a person performing one calculation every second would need more than 31 billion years to match what a one-exaFLOP system does in a single second, as reported by Network World.
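The 31-billion-year figure can be checked with quick back-of-the-envelope arithmetic (the calculation below is our own sanity check, not from the article):

```python
# An exaFLOP machine performs 1e18 floating-point operations per second.
ops_in_one_exaflop_second = 1e18

# Doing those same operations sequentially, one per second:
seconds_per_year = 365.25 * 24 * 3600       # ~3.156e7 seconds in a year
years = ops_in_one_exaflop_second / seconds_per_year

print(f"{years / 1e9:.1f} billion years")   # ~31.7 billion years
```

The result, roughly 31.7 billion years, matches the "over 31 billion years" comparison reported by Network World.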

The development of Dojo has been a continuous process. At Tesla’s AI Day in 2021, the automaker showcased its initial chip and training tiles, which would eventually form a complete Dojo cluster, also known as an “exapod.” Tesla’s plan involves combining two sets of three tiles in a tray, and then placing two trays in a computer cabinet to achieve over 100 petaflops per cabinet. With a 10-cabinet system, Tesla’s Dojo exapod will exceed the exaflop barrier of compute power.
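The capacity math described above can be sketched as follows (the per-cabinet figure is the article's "over 100 petaflops"; everything else is simple multiplication, not official Tesla specifications):

```python
# Dojo exapod layout as described in the article.
tiles_per_tray = 2 * 3          # two sets of three tiles in a tray
trays_per_cabinet = 2
cabinets_per_exapod = 10

tiles_per_cabinet = tiles_per_tray * trays_per_cabinet
petaflops_per_cabinet = 100     # "over 100 petaflops" per cabinet

exapod_petaflops = petaflops_per_cabinet * cabinets_per_exapod

print(tiles_per_cabinet)        # 12 tiles per cabinet
print(exapod_petaflops)         # 1000 petaflops = 1 exaflop
```

At 1,000 petaflops, a 10-cabinet exapod crosses the 1-exaflop threshold, which is why the article describes it as exceeding "the exaflop barrier."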

GitHub CEO says Copilot will write 80% of code “sooner than later”

By simply pressing the tab key, a developer using Copilot can finish a line, generate blocks of code, or even write entire programs. According to GitHub, over 10,000 organizations, ranging from Coca-Cola to Airbnb, have signed up for Copilot’s enterprise version, and more than 30,000 employees at Microsoft itself now regularly code with assistance from Copilot.

“Sooner than later, 80% of the code is going to be written by Copilot. And that doesn’t mean the developer is going to be replaced.”

Recently, Freethink spoke with Thomas Dohmke, GitHub’s CEO, to learn more about how Copilot promises to refashion programming as a profession, and the questions AI-powered development raises about the future of innovation itself. We also talked about why coding with Copilot is so much fun, how AI is going to change the way we learn, and whether Copilot can fix banks that are still running COBOL on mainframes.

RT-2: New model translates vision and language into action

Robotic Transformer 2 (RT-2) is a novel vision-language-action (VLA) model that learns from both web and robotics data, and translates this knowledge into generalised instructions for robotic control.

High-capacity vision-language models (VLMs) are trained on web-scale datasets, making these systems remarkably good at recognising visual or language patterns and operating across different languages. But for robots to achieve a similar level of competency, they would need to collect robot data, first-hand, across every object, environment, task, and situation.

In our paper, we introduce Robotic Transformer 2 (RT-2), a novel vision-language-action (VLA) model that learns from both web and robotics data, and translates this knowledge into generalised instructions for robotic control, while retaining web-scale capabilities.

Improved AI model boosts GitHub Copilot’s code generation capabilities

GitHub Copilot is getting an upgrade with an improved AI model and enhanced contextual filtering, resulting in faster and more tailored code suggestions for developers.

The new AI model delivers a 13% improvement in latency, while enhanced contextual filtering delivers a 6% relative improvement in code acceptance. These improvements are coming to GitHub Copilot for Individuals and GitHub Copilot for Business.
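Note that the 6% figure is a relative improvement, which is easy to misread as 6 percentage points. A quick illustration (the baseline acceptance rate below is hypothetical; GitHub reports only the relative gain):

```python
# Relative vs. absolute improvement in code-acceptance rate.
baseline_acceptance = 0.30              # hypothetical: 30% of suggestions accepted
relative_gain = 0.06                    # the reported 6% relative improvement

new_acceptance = baseline_acceptance * (1 + relative_gain)
print(f"{new_acceptance:.3f}")          # 0.318, i.e. 31.8% (not 36%)
```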

According to GitHub, the new model was developed together with OpenAI and Azure AI, and the 13% improvement in latency means that GitHub Copilot generates code suggestions for developers faster than ever before, promising a significant increase in overall productivity.

Unearthing Our Past, Predicting Our Future: Scientists Discover the Genes That Shape Our Bones

This groundbreaking study, which was published as the cover article in the journal Science, not only sheds light on our evolutionary history but also paves the way for a future where physicians could more accurately assess a patient’s likelihood of suffering from ailments like back pain or arthritis later in life.

“Our research is a powerful demonstration of the impact of AI in medicine, particularly when it comes to analyzing and quantifying imaging data, as well as integrating this information with health records and genetics rapidly and at large scale,” said Vagheesh Narasimhan, an assistant professor of integrative biology and of statistics and data science, who led the multidisciplinary team of researchers that produced the genetic map of skeletal proportions.

Has JWST shown the Universe is TWICE as old as we think?!

Go to https://brilliant.org/drbecky to get a 30-day free trial and the first 200 people will get 20% off their annual subscription. A new research study has come out claiming that, to explain the massive galaxies found at huge distances in James Webb Space Telescope images, the Universe must be older than we think, at 26.7 billion years (rather than 13.8 billion years old). In this video I’m diving into that study, looking at what model they used to get to that claim (a combination of the expansion of the universe and “tired light” ideas of redshift), how this impacts our best model of the Universe and the so-called “Crisis in Cosmology”, and why I’m not convinced yet!

#astronomy #JWST #cosmology.

My previous YouTube video on how JWST’s massive galaxies are no longer “impossible” — https://youtu.be/W4KH1Jw6HBI

Gupta et al. (2023; is the universe 26.7 billion years old?) — https://academic.oup.com/mnras/advance-article/doi/10.1093/m…32/7221343
Labbé et al. (2023; over-massive galaxies spotted in JWST data) — https://arxiv.org/pdf/2207.12446.pdf.
Arrabal Haro et al. (2023; z~16 candidate galaxy turns out to be z=4.9) — https://arxiv.org/pdf/2303.15431.pdf.
Zwicky (1929; “tired light” hypothesis raised for first time) — https://www.pnas.org/doi/epdf/10.1073/pnas.15.10.

JWST observing schedules (with public access!): https://www.stsci.edu/jwst/science-execution/observing-schedules.
JWST data archive: https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html.
Twitter bot for JWST current observations: https://twitter.com/JWSTObservation.
The successful proposals in Cycle 2 (click on the proposal number and then “public PDF” to see details): https://www.stsci.edu/jwst/science-execution/approved-progra…cycle-2-go.

00:00 — Introduction: JWST’s massive galaxy problem.
