
Jacopo Pantaleoni joined Nvidia in 2001, when the company had fewer than 500 employees. He worked on what was then a small research project to improve Nvidia’s graphics processing units so they could better render images on computers and gaming consoles.

More than two decades later, Nvidia has more than 26,000 employees and its GPUs are at the center of the generative AI explosion. Pantaleoni had climbed the ranks to become a principal engineer and research scientist, one of the highest-ranking positions for an individual contributor, he says. Then, in July, as Nvidia boomed like no other company, Pantaleoni says he resigned, giving up a substantial amount of unvested stock units after coming to a realization.

“This market of machine learning, artificial intelligence” is “almost entirely driven by the big players — Googles, Amazons, Metas” that have the “enormous amounts of data and enormous amounts of capital” to develop AI at scale, he said. Those companies are also Nvidia’s biggest customers. “This was not the world I wanted to help build.”

Waymo is offering some Los Angeles residents the chance to ride in its self-driving cars across the city.

The company is kicking off the tour of Waymo One, which it calls “the world’s first fully autonomous ride-hailing service,” in October.

“Chauffeured by the Waymo Driver — Waymo’s autonomous driving technology that never gets distracted, drowsy or drunk — Angelenos will discover new and stress-free ways to explore their city, whether it’s finding hidden gem thrift spots in Mid City, trying a new cafe in Koreatown or catching a concert in DTLA,” the company said.

Google’s new Bard extension will apparently summarize emails, plan your travels, and — oh, yeah — fabricate emails that you never actually sent.

Last week, Google plugged Bard, its chatbot powered by a large language model, into a bevy of Google products, including Gmail, Google Drive, Google Docs, Google Maps, and the Google-owned YouTube. While it’s understandable that Google would want to marry its newer generative AI efforts with its established product lineup, it seems the company may have moved a little too fast.

According to New York Times columnist Kevin Roose, Bard isn’t the helpful inbox assistant that Google apparently wants it to be — at least not yet. In his testing, Roose says, the AI hallucinated entire email correspondences that never took place.

Scientists have been exploring both experimental and theoretical ways to prove quantum supremacy.

Ramis Movassagh, a researcher at Google Quantum AI, recently published a study in the journal Nature Physics. In it, he reportedly demonstrates theoretically that simulating random quantum circuits and determining their output is extremely difficult for classical computers. In other words, if a quantum computer can solve this problem, it achieves quantum supremacy.

But why do such problems exist?
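One way to build intuition is brute force. A classical state-vector simulation of an n-qubit circuit has to track all 2^n complex amplitudes, so each added qubit doubles the memory and work; around 50 qubits, the state no longer fits in any existing machine. The NumPy sketch below is purely illustrative of that exponential blow-up and is not the construction analyzed in the paper.

```python
# A minimal state-vector simulator for a random quantum circuit.
# The state has 2**n complex amplitudes, so cost grows exponentially in n.
import numpy as np

rng = np.random.default_rng(0)

def random_single_qubit_gate():
    """Haar-random 2x2 unitary via QR decomposition."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    phases = np.diagonal(r) / np.abs(np.diagonal(r))
    return q * phases

def apply_single(state, gate, qubit, n):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [qubit]))
    psi = np.moveaxis(psi, 0, qubit)
    return psi.reshape(-1)

def apply_cz(state, q1, q2, n):
    """Controlled-Z between two qubits: flip the sign where both are |1>."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1] = idx[q2] = 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

n = 12                              # already 2**12 = 4096 amplitudes
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                      # start in |00...0>

for layer in range(20):             # 20 layers of random gates
    for q in range(n):
        state = apply_single(state, random_single_qubit_gate(), q, n)
    for q in range(layer % 2, n - 1, 2):   # entangling gates on alternating pairs
        state = apply_cz(state, q, q + 1, n)

# The output probabilities of such circuits are what the hardness result
# concerns: sampling them classically is believed to be intractable.
probs = np.abs(state) ** 2
print(f"{2**n} amplitudes tracked; total probability = {probs.sum():.6f}")
```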

I have said it time and time again. Ironically, I have been an electronic music producer for decades.

Electronic producer, singer, and AI advocate Holly Herndon has drawn a comparison between AI music and sampling, saying that AI could shape music the way sampling shaped hip-hop.

Herndon made the statement during a recent interview with Mixmag, as part of a feature entitled “The rise of AI music: a force for good or a new low for artistic creativity?” The feature explores the advantages and disadvantages of using AI technology to create music.

“Sampling old records to create something new led to the formation of genres like hip hop and innovative new forms of artistic expression,” she says. “AI music has the potential to do something very similar.”


What does it take to bring life-changing medical robotic devices to reality? This is a question Dr. Gregory Fischer, founder and CEO of AiM Medical Robotics, explored in his keynote “From Concept to Commercialization: It’s not Brain Surgery, or is it?” at BIOMEDevice Boston, MA. As a researcher, professor, and lead investigator supported by federal government grants, director of a state-funded medtech accelerator, and founder of multiple medical device companies, Fischer has a unique perspective on conceptualizing, refining, and commercializing medical devices, as well as the challenges that come with each step.

Focusing on neurosurgery, he highlighted specific challenges clinicians face during procedures, including the inability to leverage real-time intraoperative MR imaging for precision: surgeons must transfer a patient mid-surgery to an MRI scanner in a separate room, and sometimes even a separate building within the hospital complex, resulting in inefficient workflows and interruptions in sterility and anesthesia during transfers. He also cited limited compatibility with various MRI scanners and an increased risk of human error from complex manual processes.

Integrating robotic assistance, he said, enhances the reachable target area and improves dexterity and precision of motion during difficult procedures such as neurosurgery, adds enhanced feedback and virtual fixtures, reduces procedure time, and avoids ergonomic issues. Increased intervention accuracy through inherent integration with image-guidance tools, along with improved diagnostic and therapeutic outcomes, are further advantages of robotic assistance, according to Fischer.

A team of Japanese researchers claims to have used AI to translate the clucks and other noises of chickens.

AI Chicken Language Translator?

This novel feat was documented in a preprint that has yet to undergo peer review. The research team reportedly developed a system that can interpret chickens’ different emotional states, covering anger, fear, hunger, excitement, contentment, and distress.

The technology relied on an AI technique that the researchers refer to as Deep Emotional Analysis Learning, according to Professor Adrian David Cheok of the University of Tokyo.
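The details of Deep Emotional Analysis Learning are not publicly specified, but the general shape of such a system is conventional supervised audio classification: extract acoustic features from each vocalization and train a model to map them to one of the six emotional states. The sketch below is a generic illustration of that pipeline with placeholder features and synthetic data, not the researchers’ actual method.

```python
# Generic audio-emotion-classification sketch; features and data are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

STATES = ["anger", "fear", "hunger", "excitement", "contentment", "distress"]

def spectral_features(clip, n_bands=32):
    """Crude log-spectral features: band-averaged log power spectrum."""
    spectrum = np.abs(np.fft.rfft(clip)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.log1p(np.array([b.mean() for b in bands]))

# Placeholder dataset: synthetic one-second "clips" with a per-class bias so
# the model has something to learn. Real work would use labeled recordings.
rng = np.random.default_rng(0)
X, y = [], []
for label in range(len(STATES)):
    for _ in range(200):
        clip = rng.normal(size=22050) * (1.0 + 0.1 * label)
        X.append(spectral_features(clip))
        y.append(label)
X, y = np.array(X), np.array(y)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
print("example prediction:", STATES[clf.predict(X_test[:1])[0]])
```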

Imagine typing “dramatic intro music” and hearing a soaring symphony or writing “creepy footsteps” and getting high-quality sound effects. That’s the promise of Stable Audio, a text-to-audio AI model announced Wednesday by Stability AI that can synthesize stereo 44.1 kHz music or sounds from written descriptions. Before long, similar technology may challenge musicians for their jobs.

Now Stability and Harmonai want to break into commercial AI audio production with Stable Audio. Judging by the production samples, it seems like a significant audio-quality upgrade over previous AI audio generators we’ve seen.

The company announced a slew of AI-powered tools, including AI-generated backgrounds and video topic suggestions, at its creator event.

More content on YouTube is going to be created at least in part using generative AI. The video platform announced several new AI-powered tools for creators at its annual Made on YouTube event on Thursday. Among the features coming later this year or next are AI-generated photo and video backgrounds, AI video topic suggestions, and music search.

A new feature called Dream Screen will create AI-generated videos and photos that creators can place in the background of their YouTube Shorts. Initially, creators will be able to type in prompts to generate backgrounds; eventually…

