
growasentrepreneurs on September 13, 2024: Nvidia’s Blackwell chip is an engineering marvel, crafted from two of the largest chips ever made using TSMC’s 4-nanometer process. It took $10 billion and three years to develop, and is supported by high-speed networking, software, and incredible I/O capabilities.

This chip powers AI factories in data centers, designed to emulate human intelligence—specifically how we read, finish sentences, and summarize information.

Jensen Huang compares it to the intelligence of thousands of people, showcasing Blackwell’s potential to revolutionize AI in an unprecedented way.

It takes years of intense study and a steady hand for humans to perform surgery, but robots might have an easier time picking it up with today’s AI technology.

Researchers at Johns Hopkins University (JHU) and Stanford University have taught a robotic surgical system to perform a range of surgical tasks as capably as human doctors, simply by training it on videos of those procedures.

The team leveraged a da Vinci Surgical System for this study. It’s a robotic system that’s typically remote controlled by a surgeon with arms that manipulate instruments for tasks like dissection, suction, and cutting and sealing vessels. Systems like these give surgeons much greater control, precision, and a closer look at patients on the operating table. The latest version is estimated to cost over US$2 million, and that doesn’t include accessories, sterilizing equipment, or training.
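Training a system on recorded procedures is, at its core, imitation learning (behavior cloning): fit a policy so that, given the states a surgeon saw, it reproduces the actions the surgeon took. The sketch below illustrates only that broad idea with a toy linear policy and synthetic data; it is not the JHU/Stanford team's code, and every name in it is invented for illustration.

```python
# Minimal behavior-cloning sketch: fit a policy to (observation, action)
# pairs extracted from expert demonstrations. Toy linear setting only.
import numpy as np

rng = np.random.default_rng(0)

# Toy "demonstrations": observations (e.g., instrument poses) paired with
# the actions the expert issued at each moment.
obs = rng.normal(size=(200, 4))
true_w = np.array([[0.5], [-1.0], [0.3], [2.0]])
actions = obs @ true_w  # expert behavior the policy should imitate

# Behavior cloning: least-squares fit of the policy to the demonstration data.
w, *_ = np.linalg.lstsq(obs, actions, rcond=None)

def policy(o):
    """Predict an action for a new observation."""
    return o @ w

# On held-out states, the cloned policy should closely match the expert.
test_obs = rng.normal(size=(10, 4))
err = float(np.max(np.abs(policy(test_obs) - test_obs @ true_w)))
print(f"max action error: {err:.2e}")
```

Real systems replace the linear map with a deep network and the toy vectors with video frames and robot kinematics, but the training objective is the same: minimize the gap between predicted and demonstrated actions.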

Amazon is poised to roll out its newest artificial intelligence chips as the Big Tech group seeks returns on its multibillion-dollar semiconductor investments and to reduce its reliance on market leader Nvidia.

Executives at Amazon’s cloud computing division are spending big on custom chips in the hopes of boosting the efficiency inside its dozens of data centers, ultimately bringing down its own costs as well as those of Amazon Web Services’ customers.

The effort is spearheaded by Annapurna Labs, an Austin-based chip start-up that Amazon acquired in early 2015 for $350 million. Annapurna’s latest work is expected to be showcased next month when Amazon announces widespread availability of ‘Trainium 2,’ part of a line of AI chips aimed at training the largest models.

A team of AI researchers and mathematicians affiliated with several institutions in the U.S. and the U.K. has developed a math benchmark that allows scientists to test the ability of AI systems to solve exceptionally difficult math problems. Their paper is posted on the arXiv preprint server.
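A benchmark like this is ultimately an evaluation harness: pose each problem to the model, extract its final answer, and compare it against a reference. The sketch below shows only that generic scoring loop under invented names; the `toy_model` and two-problem set are stand-ins, and the paper's actual problems are expert-written and far harder.

```python
# Generic benchmark-scoring loop: query a model on each problem and
# compare its answer to the reference. All names here are illustrative.
from fractions import Fraction

problems = [
    {"question": "1/2 + 1/3", "answer": Fraction(5, 6)},
    {"question": "2/3 * 3/4", "answer": Fraction(1, 2)},
]

def toy_model(question: str) -> Fraction:
    # Stand-in for an AI system; here we simply evaluate the expression.
    a, op, b = question.split()
    x, y = Fraction(a), Fraction(b)
    return x + y if op == "+" else x * y

def score(model, problems) -> float:
    # Fraction of problems where the model's answer matches the reference.
    correct = sum(model(p["question"]) == p["answer"] for p in problems)
    return correct / len(problems)

print(f"accuracy: {score(toy_model, problems):.0%}")
```

Exact-answer comparison (here via `Fraction`) is one common design choice for math benchmarks, since it avoids the ambiguity of grading free-form text.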

MIT CSAIL researchers have developed a generative AI system, LucidSim, to train robots in virtual environments for real-world navigation. Using ChatGPT and physics simulators, robots learn to traverse complex terrains. This method outperforms traditional training, suggesting a new direction for robotic training.


A team of roboticists and engineers at MIT CSAIL, Institute for AI and Fundamental Interactions, has developed a generative AI approach to teaching robots how to traverse terrain and move around objects in the real world.

The group has published a paper describing their work and its possible uses on the arXiv preprint server. They also presented their ideas at the recent Conference on Robot Learning (CoRL 2024), held in Munich, Nov. 6–9.
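The general recipe behind generative-simulation training is to pair physically grounded states from a simulator with visually diverse renderings, so the learned policy does not overfit to one look of the world. The sketch below shows only that pipeline shape; every function and name is hypothetical, and LucidSim's actual system uses language models to propose scene descriptions and generative image models to render them over the simulator's geometry.

```python
# Illustrative pipeline shape for generative-simulation training:
# advance a physics simulator, then render each state under a randomly
# sampled visual description. All components are stand-ins.
import random

random.seed(0)

SCENE_PROMPTS = [
    "mossy stone stairs in fog",
    "sunlit gravel trail",
    "wet concrete steps at night",
]

def physics_step(state: float) -> float:
    # Stand-in for a physics simulator advancing the robot's state.
    return state + 0.1

def generate_frame(prompt: str, state: float) -> dict:
    # Stand-in for a generative image model conditioned on the prompt
    # and on geometry taken from the simulator state.
    return {"prompt": prompt, "state": round(state, 2)}

def collect_rollout(steps: int) -> list:
    state, frames = 0.0, []
    for _ in range(steps):
        state = physics_step(state)
        prompt = random.choice(SCENE_PROMPTS)  # visual diversity per frame
        frames.append(generate_frame(prompt, state))
    return frames

rollout = collect_rollout(5)
print(len(rollout), all(f["prompt"] in SCENE_PROMPTS for f in rollout))
```

The key property is that visual appearance varies while the underlying physics stays consistent, which is what lets policies trained this way transfer to real terrain.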

Waymo has expanded its robotaxi service to the general public in Los Angeles, allowing anyone with the Waymo One app to request a ride. This marks a significant step in autonomous vehicle technology, as Waymo continues to lead the industry with over 50,000 weekly passengers and a strong safety record.


Waymo on Tuesday opened its robotaxi service to anyone who wants a ride around Los Angeles, marking another milestone in the evolution of self-driving car technology since the company began as a secret project at Google 15 years ago.

The expansion comes eight months after Waymo began offering rides in Los Angeles to a limited group of passengers chosen from a waiting list that had ballooned to more than 300,000 people. Now, anyone with the Waymo One smartphone app will be able to request a ride around an 80-square-mile (129-square-kilometer) territory spanning the second largest U.S. city.

After Waymo received approval from California regulators to charge for rides 15 months ago, the company initially chose to launch its operations in San Francisco before offering a limited service in Los Angeles.

Researchers at Penn Engineering have developed PanoRadar, a system that uses radio waves and AI to provide robots with detailed 3D environmental views, even in challenging conditions like smoke and fog. This innovation offers a cost-effective alternative to LiDAR, enhancing robotic navigation and perception capabilities.


In the race to develop robust perception systems for robots, one persistent challenge has been operating in bad weather and harsh conditions. For example, traditional light-based vision sensors such as cameras or LiDAR (Light Detection And Ranging) fail in heavy smoke and fog.

However, nature has shown that vision doesn’t have to be constrained by light’s limitations—many organisms have evolved ways to perceive their environment without relying on light. Bats navigate using the echoes of sound waves, while sharks hunt by sensing electrical fields from their prey’s movements.