
Imagine you’re in an airplane with two pilots, one human and one computer. Both have their “hands” on the controls, but they’re always looking out for different things. If they’re both paying attention to the same thing, the human gets to steer. But if the human gets distracted or misses something, the computer quickly takes over.

Meet the Air-Guardian, a system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). As modern pilots grapple with an onslaught of information from multiple monitors, especially during critical moments, Air-Guardian acts as a proactive co-pilot: a partnership between human and machine, rooted in understanding attention.

But how does it determine attention, exactly? For humans, it uses eye-tracking, and for the machine, it relies on something called “saliency maps,” which pinpoint where attention is directed. The maps serve as visual guides highlighting key regions within an image, aiding in grasping and deciphering the behavior of intricate algorithms. Air-Guardian identifies early signs of potential risks through these attention markers, instead of only intervening during safety breaches like traditional autopilot systems.
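The article doesn’t include Air-Guardian’s code, but the core saliency idea can be sketched: for a differentiable model, the gradient of the output with respect to each input pixel shows which regions most influence the decision. The toy linear “model” below is a hypothetical stand-in, not CSAIL’s system.

```python
import numpy as np

# Toy gradient-based saliency map. Real systems use deep networks, but
# the principle is the same: rank input regions by how strongly they
# influence the model's output.
rng = np.random.default_rng(0)
image = rng.random((8, 8))      # toy 8x8 "camera frame"
weights = rng.random((8, 8))    # toy linear model: score = sum(weights * image)

score = float((weights * image).sum())

# For a linear model, d(score)/d(pixel) is simply the weight, so the
# saliency map is the absolute gradient, normalized to [0, 1].
saliency = np.abs(weights)
saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min())

# The most "attended" region is where the gradient magnitude peaks.
peak = np.unravel_index(saliency.argmax(), saliency.shape)
```

With a deep network, the gradient would be computed by backpropagation instead of read off directly, but the resulting map is interpreted the same way.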

As we plunge head-on into the game-changing dynamic of general artificial intelligence, observers are weighing in on just how huge an impact it will have on global societies. Will it drive explosive economic growth as some economists project, or are such claims unrealistically optimistic?

Few question the potential for change that AI presents. But in a world of litigation and ethical boundaries, will AI be able to thrive?

Two researchers from Epoch, a research group evaluating the progression of artificial intelligence and its potential impacts, decided to explore arguments for and against the likelihood that innovation ushered in by AI will lead to explosive growth comparable to the Industrial Revolution of the 18th and 19th centuries.


Microsoft has joined the race for large language model (LLM) application frameworks with its open source Python library, AutoGen.

As described by Microsoft, AutoGen is “a framework for simplifying the orchestration, optimization, and automation of LLM workflows.” The fundamental concept behind AutoGen is the creation of “agents,” which are programming modules powered by LLMs such as GPT-4. These agents interact with each other through natural language messages to accomplish various tasks.
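The multi-agent pattern AutoGen popularizes can be illustrated with a minimal mock: agents that exchange natural-language messages until the task is done. The class and function names below are illustrative stand-ins (with a canned reply in place of a GPT-4 call), not AutoGen’s actual API.

```python
# Illustrative sketch of LLM-style agents passing messages. A real
# AutoGen agent would call a model such as GPT-4 to produce replies.
class Agent:
    def __init__(self, name, reply_fn):
        self.name = name
        self.reply_fn = reply_fn  # stand-in for an LLM call

    def receive(self, message, sender):
        return self.reply_fn(message)

def run_chat(a, b, opening, max_turns=4):
    """Alternate messages between two agents until one says TERMINATE."""
    transcript = [(a.name, opening)]
    speaker, listener, msg = a, b, opening
    for _ in range(max_turns):
        reply = listener.receive(msg, speaker)
        transcript.append((listener.name, reply))
        if "TERMINATE" in reply:
            break
        speaker, listener, msg = listener, speaker, reply
    return transcript

assistant = Agent("assistant", lambda m: "2 + 2 = 4. TERMINATE")
user = Agent("user", lambda m: m)
log = run_chat(user, assistant, "What is 2 + 2?")
```

The termination convention (an agent emitting a stop token like “TERMINATE”) mirrors how conversational agent frameworks typically decide a task is complete.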

In case you missed the hype, Humane is a startup founded by ex-Apple executives that’s working on a device called the “Ai Pin” that uses projectors, cameras and AI tech to act as a sort of wearable AI assistant. Now, the company has unveiled the AI Pin in full at a Paris fashion show (Humane x Coperni) as a way to show off the device’s new form factor. “Supermodel Naomi Campbell is the first person outside of the company to wear the device in public, ahead of its full unveiling on November 9,” Humane wrote.

The company describes the device as a “screenless, standalone device and software platform built from the ground up for AI.” It’s powered by an “advanced” Qualcomm Snapdragon platform and equipped with a mini-projector that takes the place of a smartphone screen, along with a camera and speaker. It can perform functions like AI-powered optical recognition, but is also supposedly “privacy-first” thanks to qualities like no wake word and thus no “always on” listening.

It’s been a busy week for IonQ, the quantum computing start-up focused on developing trapped-ion-based systems. At the Quantum World Congress today, the company announced two new systems (Forte Enterprise and Tempo) intended to be rack-mountable and deployable in a traditional data center. Yesterday, speaking at Tabor Communications’ (HPCwire’s parent organization) HPC and AI on Wall Street conference, the company made a strong pitch for reaching quantum advantage in 2–3 years using the new systems.

If you’ve been following quantum computing, you probably know that deploying quantum computers in the data center is a rare occurrence. Access to the vast majority of NISQ-era computers has been through web portals. The latest announcement from IonQ, along with a somewhat similar announcement from neutral atom specialist QuEra in August, and increased IBM efforts (Cleveland Clinic and PINQ2) to selectively place on-premise quantum systems, suggests change is coming to the market.

IonQ’s two rack-mounted solutions are designed for businesses and governments wanting to integrate quantum capabilities within their existing infrastructure. “Businesses will be able to harness the power of quantum directly from their own data centers, making the technology significantly more accessible and easy to apply to key workflows and business processes,” reported the company. IonQ is calling the new systems enterprise-grade. (See the official announcement.)

Benchmarks are a key driver of progress in AI. But they also have many shortcomings. The new GPT-Fathom benchmark suite aims to reduce some of these pitfalls.

Benchmarks allow AI developers to measure the performance of their models on a variety of tasks. For language models, these tasks include, for example, answering knowledge questions or solving logic problems. Depending on its performance, the model receives a score that can then be compared with the results of other models.
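In its simplest form, this kind of scoring reduces to checking a model’s answers against a reference set. The sketch below assumes a benchmark is just a list of (question, expected answer) pairs and a model is a callable; the names are illustrative and not taken from GPT-Fathom itself.

```python
# Minimal benchmark-scoring sketch: exact-match accuracy over a task set.
def accuracy(model, tasks):
    """Fraction of tasks the model answers exactly right."""
    correct = sum(1 for question, expected in tasks if model(question) == expected)
    return correct / len(tasks)

# Toy benchmark of three question/answer pairs.
tasks = [
    ("Capital of France?", "Paris"),
    ("2 + 2?", "4"),
    ("Largest planet?", "Jupiter"),
]

# Toy "model" that knows two of the three answers.
def toy_model(question):
    answers = {"Capital of France?": "Paris", "2 + 2?": "4"}
    return answers.get(question, "unknown")

score = accuracy(toy_model, tasks)  # 2 of 3 correct
```

Real suites add nuance on top of this skeleton (few-shot prompting, answer normalization, multiple metrics), which is precisely where inconsistencies between benchmark results tend to creep in.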

These benchmarking results form the basis for further research decisions and, ultimately, investments. They also provide information about the strengths and weaknesses of individual methods.