AI-powered crimefighting dog ‘Beth’ patrols Atlanta apartment complex

ATLANTA — An innovative approach to public safety is taking shape on Cleveland Avenue, where Atlanta City Councilman Antonio Lewis has partnered with the 445 Cleveland apartment complex to deploy AI-powered robotic dogs to deter crime.

The robotic dog, named “Beth,” is equipped with 360-degree cameras, a siren, and stair-climbing capabilities. Unlike other artificial intelligence robots like “Spunky” on Boulevard, Beth is monitored in real time by a human operator located in Bogotá, Colombia.

“Our operator who is physically watching these cameras needs to deploy the dog. It’s all in one system, and they are just controlling it, like a video game at home, except it’s not a video game—it’s Beth,” said Avi Wolf, the owner of 445 Cleveland.

Ingenuity’s Last Hop: Lessons from Mars’ First Aircraft

What can NASA’s Ingenuity helicopter on Mars teach us about flying on other planets? Engineers at NASA’s Jet Propulsion Laboratory have been investigating that question since the robotic pioneer performed its last flight on the Red Planet’s surface on January 18, 2024. The investigation set out to determine the likely causes of Ingenuity’s final flight ending the mission, after the team spotted damage to the helicopter’s rotor blades in images sent back to Earth. Its findings could help scientists and engineers improve on Ingenuity’s design for future flying robots on other worlds.

“When running an accident investigation from 100 million miles away, you don’t have any black boxes or eyewitnesses,” said Dr. Håvard Grip, who is a research technologist at NASA JPL and Ingenuity’s first pilot. “While multiple scenarios are viable with the available data, we have one we believe is most likely: Lack of surface texture gave the navigation system too little information to work with.”

Ingenuity was “retired” because of damage its rotor blades sustained during Flight 72, which turned out to be its final flight. The navigation system failed to identify a safe landing spot, and engineers hypothesize that the resulting hard landing subjected the blades to higher-than-expected loads, breaking them. The findings from this investigation will help engineers improve designs for NASA’s upcoming Mars Sample Return mission, which is currently in the design phase with an anticipated launch date of 2026.

AI-enhanced brain sensor tracks poorly understood chemistry

Researchers have developed a device that can simultaneously measure six markers of brain health. The sensor, which is inserted through the skull into the brain, can pull off this feat thanks to an artificial intelligence (AI) system that pieces apart the six signals in real time.

Being able to continuously monitor biomarkers in patients with traumatic brain injury could improve outcomes by catching swelling or bleeding early enough for doctors to intervene. But most existing devices measure just one marker at a time. They also tend to be made with metal, so they can’t easily be used in combination with magnetic resonance imaging.
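The article does not describe how the AI separates the six overlapping signals. As a purely illustrative sketch of the general idea of signal unmixing, here is a toy example that assumes the six biomarker traces arrive as known linear mixtures (a hypothetical calibration matrix, not the published device's method) and recovers them with a linear solve:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup (not the published device): six biomarker signals are
# read out as linear mixtures of one another. If the mixing matrix is known
# from calibration, each sample can be unmixed with a linear solve.
n_markers, n_samples = 6, 100
sources = rng.normal(size=(n_markers, n_samples))   # true biomarker traces
mixing = rng.normal(size=(n_markers, n_markers))    # calibration matrix
readings = mixing @ sources                         # what the probe records

recovered = np.linalg.solve(mixing, readings)       # unmix all six at once
print("max reconstruction error:", np.abs(recovered - sources).max())
```

In practice the mixing is unlikely to be linear or known in advance, which is presumably why a learned model is needed; this sketch only shows what "piecing apart" mixed signals means in the simplest case.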


Simultaneous access to measurements could improve outcomes for brain injuries.

Why Do Neural Networks Hallucinate (And What Are Experts Doing About It)?

Originally published on Towards AI.

AI hallucinations are a strange and sometimes worrying phenomenon. They happen when an AI, like ChatGPT, generates responses that sound real but are actually wrong or misleading. This issue is especially common in large language models (LLMs), the neural networks that drive these AI tools. They produce sentences that flow well and seem human, but without truly “understanding” the information they’re presenting. So, sometimes, they drift into fiction. For people or companies who rely on AI for correct information, these hallucinations can be a big problem — they break trust and sometimes lead to serious mistakes.

So, why do these models, which seem so advanced, get things so wrong? The reason isn’t only about bad data or training limitations; it goes deeper, into the way these systems are built. AI models operate on probabilities, not concrete understanding, so they occasionally guess — and guess wrong. Interestingly, there’s a historical parallel that helps explain this limitation. Back in 1931, a mathematician named Kurt Gödel made a groundbreaking discovery. He showed that every consistent mathematical system has boundaries — some truths can’t be proven within that system. His findings revealed that even the most rigorous systems have limits, things they just can’t handle.
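The point that models "operate on probabilities, not concrete understanding" can be made concrete with a toy sketch. The distribution below is invented for illustration (no real model is queried): the model assigns most probability to the correct continuation, but sampling still sometimes picks a fluent wrong answer — which is, mechanically, what a hallucination is.

```python
import random

# Toy next-token distribution for "The capital of Australia is ...".
# These probabilities are made up for illustration only.
next_token_probs = {
    "Canberra": 0.55,    # correct
    "Sydney": 0.35,      # fluent but wrong -- a "hallucination" if sampled
    "Melbourne": 0.10,   # also fluent, also wrong
}

def sample_token(probs, rng):
    """Pick one token at random, weighted by the model's probabilities."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
draws = [sample_token(next_token_probs, rng) for _ in range(1000)]
wrong = sum(1 for t in draws if t != "Canberra")
print(f"{wrong} of 1000 samples were fluent but factually wrong")
```

Every sampled answer reads equally smoothly; nothing in the generation step distinguishes the true one, which is why fluency alone is no guarantee of accuracy.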

Meta announces the construction of its largest data center to date

The company has unveiled plans for a new data center in Richland Parish, Louisiana, marking a significant expansion of its global infrastructure. The $10 billion investment will be Meta’s largest data center to date, spanning a massive 4 million square feet. This state-of-the-art facility will be a crucial component in the company’s ongoing efforts to support the rapid growth of artificial intelligence (AI) technologies. Meta did not disclose an estimated completion or operational date for this facility.

The new Richland Parish data center will create over 500 full-time operational jobs, providing a substantial boost to the local economy. During peak construction, the project is expected to employ more than 5,000 workers.

Meta’s decision to build in Richland Parish was driven by several factors, including the region’s robust infrastructure, reliable energy grid, and business-friendly environment. The company also cited the strong support from local community partners, which played a critical role in facilitating the development of the data center.

Google DeepMind’s Breakthrough “AlphaQubit” Closing in on the Holy Grail of Quantum Computing

The dream of building a practical, fault-tolerant quantum computer has taken a significant step forward.

In a breakthrough study recently published in Nature, researchers from Google DeepMind and Google Quantum AI said they have developed an AI-based decoder, AlphaQubit, which drastically improves the accuracy of quantum error correction—a critical challenge in quantum computing.

“Our work illustrates the ability of machine learning to go beyond human-designed algorithms by learning from data directly, highlighting machine learning as a strong contender for decoding in quantum computers,” researchers wrote.
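To see what a "decoder" does in quantum error correction: it maps measured error syndromes back to a guess about which physical errors occurred, so they can be corrected. AlphaQubit learns this mapping from data with a neural network; the sketch below is not Google's method, just the simplest hand-written analogue — a majority-vote decoder for a three-bit repetition code:

```python
def encode(bit):
    """Protect one logical bit by repeating it across three physical bits."""
    return [bit, bit, bit]

def decode(bits):
    """Majority vote: recover the logical bit despite one flipped bit."""
    return 1 if sum(bits) >= 2 else 0

codeword = encode(1)
codeword[0] ^= 1                 # simulate a single bit-flip error
print("received:", codeword)
print("recovered logical bit:", decode(codeword))
```

Real surface-code decoding is vastly harder — errors are correlated and the syndrome data is noisy — which is where a learned decoder like AlphaQubit reportedly outperforms human-designed algorithms.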
