
Founder Focus

growasentrepreneurs on Instagram, September 13, 2024: Nvidia’s Blackwell chip is an engineering marvel, built from two of the largest dies ever fabricated on TSMC’s 4-nanometer process. It took $10 billion and three years to develop, supported by high-speed networking, software, and impressive I/O capabilities.

This chip powers AI factories in data centers, designed to emulate human intelligence—specifically how we read, finish sentences, and summarize information.

Jensen Huang compares it to the intelligence of thousands of people, showcasing Blackwell’s potential to revolutionize AI in an unprecedented way.

Robot learns to perform surgical tasks expertly just by watching videos

It takes years of intense study and a steady hand for humans to perform surgery, but robots might have an easier time picking it up with today’s AI technology.

Researchers at Johns Hopkins University (JHU) and Stanford University have taught a robotic surgical system to perform several surgical tasks as capably as human doctors, simply by training it on videos of those procedures.

The team leveraged a da Vinci Surgical System for this study. It’s a robotic system that’s typically remote controlled by a surgeon with arms that manipulate instruments for tasks like dissection, suction, and cutting and sealing vessels. Systems like these give surgeons much greater control, precision, and a closer look at patients on the operating table. The latest version is estimated to cost over US$2 million, and that doesn’t include accessories, sterilizing equipment, or training.
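The training approach described here, learning from recorded demonstrations rather than hand-coded control, is a form of imitation learning, often implemented as behavior cloning: a policy is fit to state–action pairs extracted from expert recordings. The sketch below is a toy linear-policy version on synthetic data, not the JHU/Stanford pipeline; all names are illustrative.

```python
import numpy as np

def behavior_cloning(states, actions, lr=0.1, epochs=500):
    """Fit a linear policy a = s @ W by gradient descent on mean-squared error.

    A stand-in for the deep policies used in video-based imitation learning:
    the robot learns to reproduce the expert's action for each observed state.
    """
    n, d = states.shape
    k = actions.shape[1]
    W = np.zeros((d, k))
    for _ in range(epochs):
        pred = states @ W
        grad = states.T @ (pred - actions) / n  # gradient of 0.5*MSE
        W -= lr * grad
    return W

# Synthetic "demonstrations": expert actions are a fixed linear map of states.
rng = np.random.default_rng(0)
S = rng.normal(size=(200, 4))        # 200 observed states, 4 features each
W_true = rng.normal(size=(4, 2))     # the expert's (unknown) policy
A = S @ W_true                       # expert actions for those states

W_learned = behavior_cloning(S, A)
print(np.allclose(W_learned, W_true, atol=1e-2))  # prints: True
```

Real systems replace the linear map with a neural network and extract the state–action pairs from video, but the objective, imitating demonstrated actions, is the same.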

Graph-based AI model finds hidden links between science and art to suggest novel materials

Amazon is poised to roll out its newest artificial intelligence chips as the Big Tech group seeks returns on its multibillion-dollar semiconductor investments and to reduce its reliance on market leader Nvidia.

Executives at Amazon’s cloud computing division are spending big on custom chips in the hopes of boosting efficiency across its dozens of data centers, ultimately bringing down its own costs as well as those of Amazon Web Services’ customers.

The effort is spearheaded by Annapurna Labs, an Austin-based chip start-up that Amazon acquired in early 2015 for $350 million. Annapurna’s latest work is expected to be showcased next month when Amazon announces widespread availability of ‘Trainium 2,’ part of a line of AI chips aimed at training the largest models.

Virtual training uses generative AI to teach robots how to traverse real world terrain

MIT CSAIL researchers have developed a generative AI system, LucidSim, to train robots in virtual environments for real-world navigation. Using ChatGPT and physics simulators, robots learn to traverse complex terrains. This method outperforms traditional training, suggesting a new direction for robotic training.


A team of roboticists and engineers at MIT CSAIL and the Institute for AI and Fundamental Interactions has developed a generative AI approach to teaching robots how to traverse terrain and move around objects in the real world.

The group has published a paper describing their work and its possible uses on the arXiv preprint server. They also presented their ideas at the recent Conference on Robot Learning (CoRL 2024), held in Munich Nov. 6–9.

Getting robots to navigate the real world typically involves either teaching them to learn on the fly or training them on videos of similar robots in real-world environments. While such training has proven effective in limited settings, it tends to fail when a robot encounters something novel. In this new effort, the MIT team developed virtual training that translates better to the real world.
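The strategy just described, training in many varied virtual scenes so the learned policy holds up against novelty, is closely related to domain randomization: each training episode draws a freshly randomized environment. The sketch below illustrates only that sampling idea; the parameter names are hypothetical and this is not the CSAIL LucidSim code.

```python
import random

def sample_terrain(rng):
    """Sample one randomized virtual terrain configuration.

    Each training episode draws a different configuration, so the
    policy cannot overfit to any single simulated environment.
    """
    return {
        "friction": rng.uniform(0.4, 1.2),        # ground friction coefficient
        "slope_deg": rng.uniform(0.0, 25.0),      # incline of the terrain
        "obstacle_count": rng.randint(0, 8),      # scattered objects to avoid
        "lighting": rng.choice(["day", "dusk", "fog"]),
    }

rng = random.Random(42)
terrains = [sample_terrain(rng) for _ in range(3)]
for t in terrains:
    print(t)
```

In LucidSim’s case, the variation comes from generative models producing diverse scene imagery on top of a physics simulator, rather than from hand-chosen parameter ranges like these.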

Waymo’s robotaxis now open to anyone who wants a driverless ride in Los Angeles

Waymo has expanded its robotaxi service to the general public in Los Angeles, allowing anyone with the Waymo One app to request a ride. This marks a significant step in autonomous vehicle technology, as Waymo continues to lead the industry with over 50,000 weekly passengers and a strong safety record.


Waymo on Tuesday opened its robotaxi service to anyone who wants a ride around Los Angeles, marking another milestone in the evolution of self-driving car technology since the company began as a secret project at Google 15 years ago.

The expansion comes eight months after Waymo began offering rides in Los Angeles to a limited group of passengers chosen from a waiting list that had ballooned to more than 300,000 people. Now, anyone with the Waymo One smartphone app will be able to request a ride around an 80-square-mile (129-square-kilometer) territory spanning the second largest U.S. city.

After Waymo received approval from California regulators to charge for rides 15 months ago, the company initially chose to launch its operations in San Francisco before offering a limited service in Los Angeles.

Giving robots superhuman vision using radio signals

Researchers at Penn Engineering have developed PanoRadar, a system that uses radio waves and AI to provide robots with detailed 3D environmental views, even in challenging conditions like smoke and fog. This innovation offers a cost-effective alternative to LiDAR, enhancing robotic navigation and perception capabilities.


In the race to develop robust perception systems for robots, one persistent challenge has been operating in bad weather and harsh conditions. For example, traditional, light-based vision sensors such as cameras or LiDAR (Light Detection And Ranging) fail in heavy smoke and fog.

However, nature has shown that vision doesn’t have to be constrained by light’s limitations—many organisms have evolved ways to perceive their environment without relying on light. Bats navigate using the echoes of sound waves, while sharks hunt by sensing electrical fields from their prey’s movements.

Radio waves, whose wavelengths are orders of magnitude longer than those of light, can better penetrate smoke and fog, and can even see through certain materials—all capabilities beyond human vision. Yet robots have traditionally relied on a limited toolbox: they either use cameras and LiDAR, which provide detailed images but fail in challenging conditions, or traditional radar, which can see through walls and other occlusions but produces crude, low-resolution images.

China’s military robot-wolf follows voice commands in real time

While labeled a “robot wolf” by its designers, this platform presents itself as a powerful tactical tool likely aimed at military or security applications, where its design and capabilities stand to offer significant operational value.

The robot’s four-legged design is an immediate indicator of its…


At China’s Zhuhai Air Show, a new robotic quadruped known as the “robot wolf” stole the spotlight, demonstrating its ability to respond to voice commands in real time.
