
Even the exact definition of AGI is still heavily debated, making it a murky milestone.

Regardless, the stakes are high: the AI industry has poured untold billions of dollars into building out datacenters to train AI models, an investment that’s likely many years away from paying off.

Naturally, OpenAI CEO and hypeman Sam Altman has remained optimistic. During a Reddit AMA this week, he even claimed that AGI is “achievable with current hardware.”

On a recent Friday afternoon, Kashif Hoda was waiting for a train near Harvard Square when a young man asked him for directions.


The encounter struck him as a little strange. A month later, he found out just how strange. He had been an unwitting guinea pig in an experiment meant to show just how easy it was to rig artificial intelligence tools to identify someone and retrieve the person’s biographical information — potentially including a phone number and home address — without the person’s realizing it.

A friend texted Mr. Hoda to tell him that he was in a video that was going viral. AnhPhu Nguyen and a fellow Harvard student, Caine Ardayfio, had built glasses that could identify strangers in real time, and had demonstrated them on two “real people” at the subway station, including Mr. Hoda, whose name was incorrectly transcribed in the video captions as “Vishit.”

Disney is adding another layer to its AI and extended reality strategies. As first reported by Reuters, the company recently formed a dedicated emerging technologies unit. Dubbed the Office of Technology Enablement, the group will coordinate the company’s exploration, adoption and use of artificial intelligence, AR and VR tech.

It has tapped Jamie Voris, previously the CTO of its Studios Technology division, to oversee the effort. Before joining Disney in 2010, Voris was the chief technology officer at the National Football League. More recently, he led the development of the company’s Apple Vision Pro app. Voris will report to Alan Bergman, the co-chairman of Disney Entertainment. Reuters reports the company eventually plans to grow the group to about 100 employees.

“The pace and scope of advances in AI and XR are profound and will continue to impact consumer experiences, creative endeavors, and our business for years to come — making it critical that Disney explore the exciting opportunities and navigate the potential risks,” Bergman wrote in an email Disney shared with Engadget. “The creation of this new group underscores our dedication to doing that and to being a positive force in shaping responsible use and best practices.”

About a year and a half ago, quantum control startup Quantum Machines and Nvidia announced a deep partnership that would bring together Nvidia’s DGX Quantum computing platform and Quantum Machines’ advanced quantum control hardware. We didn’t hear much about the results of this partnership for a while, but it is now starting to bear fruit, bringing the industry one step closer to the holy grail of an error-corrected quantum computer.

In a presentation earlier this year, the two companies showed that they are able to use an off-the-shelf reinforcement learning model running on Nvidia’s DGX platform to better control the qubits in a Rigetti quantum chip by keeping the system calibrated.

Yonatan Cohen, the co-founder and CTO of Quantum Machines, noted that his company has long sought to use general-purpose classical compute engines to control quantum processors. Those compute engines were small and limited, but that’s no longer a problem with Nvidia’s extremely powerful DGX platform. The holy grail, he said, is to run quantum error correction. We’re not there yet. Instead, this collaboration focused on calibration, and specifically on calibrating the so-called “π pulses” that control the rotation of a qubit inside a quantum processor.
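To make that concrete, here is a minimal sketch of what closed-loop π-pulse calibration with a reinforcement-learning-style agent can look like. Everything in it is an assumption made for illustration: the simulated qubit, the drift model, the shot count and all of the names. It is not the Quantum Machines or Nvidia DGX Quantum API, and a real setup would act on live hardware measurements rather than a toy simulator.

```python
"""
Illustrative closed-loop pi-pulse calibration with a tiny policy-gradient agent.
Everything here is a stand-in: the simulated qubit, the drift model, and all
names are assumptions for this sketch, not the DGX Quantum / Quantum Machines API.
"""
import numpy as np

rng = np.random.default_rng(0)

SHOTS = 200            # measurements per candidate pulse
RAD_PER_AMP = 31.0     # hypothetical rotation (rad) per unit drive amplitude


def measure_excited_population(amp: float, rad_per_amp: float) -> float:
    """Simulate driving the qubit with amplitude `amp` and return the measured
    |1> population, including binomial shot noise. A perfect pi pulse gives ~1."""
    p1 = np.sin(rad_per_amp * amp / 2.0) ** 2
    return rng.binomial(SHOTS, p1) / SHOTS


# Gaussian policy over the pulse amplitude: the agent's only job is to keep
# the mean amplitude pinned to whatever currently produces a pi rotation.
mean, std, lr = 0.08, 0.02, 0.5

for step in range(300):
    # The qubit's response drifts slowly, which is why calibration must run
    # continuously instead of once at startup.
    rad_per_amp = RAD_PER_AMP * (1.0 + 0.05 * np.sin(step / 50.0))

    # Sample candidate amplitudes and score each one on the (simulated) hardware.
    amps = mean + std * rng.standard_normal(16)
    rewards = np.array([measure_excited_population(a, rad_per_amp) for a in amps])

    # REINFORCE-style update with a mean baseline: push the policy toward
    # amplitudes that yielded higher |1> population.
    advantage = rewards - rewards.mean()
    mean += lr * np.mean(advantage * (amps - mean))

    if step % 50 == 0:
        print(f"step {step:3d}  amplitude {mean:.4f}  mean reward {rewards.mean():.3f}")
```

The design point the sketch tries to capture is the one Cohen describes: a classical learning loop that keeps nudging pulse parameters as the hardware drifts, rather than a one-time calibration at startup.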