
Daniel Lidar, the Viterbi Professor of Engineering at USC and Director of the USC Center for Quantum Information Science & Technology, and first author Dr. Bibek Pokharel, a Research Scientist at IBM Quantum, achieved this quantum speedup advantage in the context of a “bitstring guessing game.”

By effectively mitigating the errors often encountered at this level, they successfully handled bitstrings up to 26 bits long, significantly longer than previously possible. (For context, a bit is a binary digit that can be either a zero or a one.)
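The article does not spell out the rules of the guessing game, but such games are commonly formalized as an oracle problem: a hidden n-bit string can only be learned by querying an oracle, and the quantum algorithm needs fewer queries than any classical strategy. The sketch below is a hypothetical classical baseline under one standard formalization (a parity oracle, as in the Bernstein–Vazirani problem), where recovering the string takes n classical queries, one per bit:

```python
import random

def make_parity_oracle(s):
    """Oracle answering queries x with the parity of the bitwise AND of s and x."""
    def oracle(x):
        return bin(s & x).count("1") % 2
    return oracle

def classical_guess(oracle, n):
    """Recover the hidden n-bit string using n single-bit probes."""
    s = 0
    for i in range(n):
        if oracle(1 << i):  # probe the i-th basis vector: reveals bit i of s
            s |= 1 << i
    return s

n = 26  # the bitstring length reached in the experiment
secret = random.getrandbits(n)
recovered = classical_guess(make_parity_oracle(secret), n)
```

Under this formalization a quantum computer can find the string with a single oracle query, which is where the speedup comes from; the classical loop above needs all 26.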

Quantum computers promise to solve certain problems with an advantage that grows as the problems increase in complexity. However, they are also highly prone to errors, or noise. The challenge, says Lidar, is “to obtain an advantage in the real world where today’s quantum computers are still ‘noisy.’”


The skies above where I reside near New York City were noticeably apocalyptic last week. But to some in Silicon Valley, the fact that we wimpy East Coasters were dealing with a sepia hue and a scent profile that mixed cigar bar, campfire and old-school happy hour was nothing to worry about. After all, it is AI, not climate change, that appears to be top of mind to this cohort, who believe future superintelligence is either going to kill us all, save us all, or almost kill us all if we don’t save ourselves first.

Whether they predict the “existential risks” of runaway AGI that could lead to human “extinction” or foretell an AI-powered utopia, this group seems to have equally strong, fixed opinions (for now, anyway — perhaps they are “loosely held”) that easily tip into biblical prophet territory.

Finally, stories born from paranoia teach you to see A.I. as the ultimate surveillance tool, watching your every eye movement and jiggle of your mouse. But what if it’s used instead to catch you doing things well, and to foster trust between managers and employees?

With the ability to compile reports of your accomplishments—or even assess their quality—A.I. can help managers better appreciate the output of their employees, rather than relying on quantified inputs, like time spent at your desk. It can watch out for deadlines and critical paths, automatically steering you toward the work that’s most urgent. And if you do fall behind on deadlines, A.I. can let your manager know: They don’t have to poke their nose in all the time just to catch the one time you fell behind. With A.I. helping everyone focus their attention to match intentions as they do their work, managers can instead spend their time investing in ways to support their team and grow individuals.

The way we work right now will soon look vestigial, a kind of social scaffolding in our journey to build something better. We know that A.I. will transform the future of work. Will the future edifices of our labor be austere, brutalist towers that callously process resources? Or will they be beautiful, intricate monuments to growth and thriving?


Hexagon, a Nordic company focused on digital reality, is collaborating with Nvidia to enable industrial digital twins to capture data in real time.

Digital twins are digital replicas of real-world designs such as factories or buildings. The idea of the collaboration is to unite reality capture, manufacturing twins, AI, simulation and visualization to deliver real-time comparisons against real-world models.

Connecting artificial intelligence systems to the real world through robots and designing them using principles from evolution is the most likely way AI will gain human-like cognition, according to research from the University of Sheffield.

In a paper published in Science Robotics, Professor Tony Prescott and Dr. Stuart Wilson from the University’s Department of Computer Science say that AI systems are unlikely to resemble real brain processing, no matter how large their neural networks or training datasets become, if they remain disembodied.

Current AI systems, such as ChatGPT, use large neural networks to solve difficult problems, such as generating intelligible written text. These networks teach AI to process data in a way that is inspired by the human brain, and they also learn from their mistakes in order to improve and become more accurate.
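“Learning from mistakes” here refers to gradient-based training: the network makes a prediction, measures its error, and nudges its weights to shrink that error. As a minimal illustration (a single artificial neuron, not anything resembling ChatGPT’s scale), the core loop looks like this:

```python
def train(data, lr=0.1, epochs=200):
    """Fit y = w*x + b by repeatedly correcting prediction errors."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in data:
            pred = w * x + b        # forward pass: make a prediction
            error = pred - target   # the "mistake"
            w -= lr * error * x     # nudge weights to reduce the error
            b -= lr * error         # (gradient descent on squared error)
    return w, b

# Learn the rule y = 2x + 1 purely from examples
w, b = train([(0, 1), (1, 3), (2, 5)])
```

Large language models repeat essentially this update, with billions of weights and vast text datasets instead of one neuron and three points.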

In the film “Top Gun: Maverick,” Maverick, played by Tom Cruise, is charged with training young pilots to complete a seemingly impossible mission—to fly their jets deep into a rocky canyon, staying so low to the ground they cannot be detected by radar, then rapidly climb out of the canyon at an extreme angle, avoiding the rock walls. Spoiler alert: With Maverick’s help, these human pilots accomplish their mission.

A machine, on the other hand, would struggle to complete the same pulse-pounding task. To an autonomous aircraft, for instance, the most straightforward path toward the target conflicts with what the machine needs to do to avoid colliding with the canyon walls or being detected. Many existing AI methods aren’t able to overcome this conflict, known as the stabilize-avoid problem, and would be unable to reach their goal safely.

MIT researchers have developed a new technique that can solve complex stabilize-avoid problems better than other methods. Their machine-learning approach matches or exceeds the safety of existing methods while providing a tenfold increase in stability, meaning the agent reaches and remains stable within its goal region.
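The core tension can be shown with a toy example. This is not the MIT technique (the article says only that it is a machine-learning approach); it is a hypothetical grid planner where a plain breadth-first search forbids unsafe cells, so the straight-line route through the “canyon wall” is rejected in favor of a safe detour to the goal:

```python
from collections import deque

def safe_path(grid, start, goal):
    """BFS shortest path that never enters an 'X' (avoid) cell."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != 'X' and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None  # no safe route exists

# The straight line from S to G is blocked by avoid cells (X),
# so the planner must dip down and around them.
canyon = ["S..X..G",
          "...X...",
          "......."]
path = safe_path(canyon, (0, 0), (0, 6))
```

The real problem is far harder: the aircraft has continuous dynamics and must also remain *stable* at the goal, which is why the researchers turn to machine learning rather than exhaustive search.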