
Originally published on Towards AI.

In its most basic form, Bayesian inference is a method of statistical inference that describes how likely a hypothesis is in light of new evidence. The method comes from Bayes’ Theorem, which provides a way to update the probability of an event, given prior knowledge of conditions that might be related to it:

Here’s a somewhat rough rendering of Bayes’ Theorem:

P(H | E) = P(E | H) × P(H) / P(E)

where P(H) is the prior probability of the hypothesis H, P(E | H) is the likelihood of the evidence E under that hypothesis, and P(E) is the overall probability of the evidence.
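To make the update rule concrete, here is a minimal numeric sketch of Bayes’ Theorem. The diagnostic-test scenario and all of its rates are invented illustration values, not from the article:

```python
# Hypothetical scenario: a diagnostic test for a condition affecting 1% of
# a population. We ask: given a positive test, how likely is the condition?
p_h = 0.01              # prior: P(hypothesis) -- having the condition
p_e_given_h = 0.95      # likelihood: P(evidence | hypothesis) -- true positive rate
p_e_given_not_h = 0.05  # P(evidence | no hypothesis) -- false positive rate

# Total probability of the evidence (a positive test), by the law of
# total probability.
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior: how likely the hypothesis is, given the new evidence.
p_h_given_e = p_e_given_h * p_h / p_e
print(f"P(condition | positive test) = {p_h_given_e:.3f}")
```

Note how the posterior (about 0.16) stays far below the test’s 95% accuracy: the low prior dominates, which is exactly the kind of non-obvious conclusion Bayes’ Theorem makes mechanical.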

UNIVERSITY PARK, Pa. — A recently developed electronic tongue is capable of identifying differences in similar liquids, such as milk with varying water content; diverse products, including soda types and coffee blends; signs of spoilage in fruit juices; and instances of food safety concerns. The team, led by researchers at Penn State, also found that results were even more accurate when artificial intelligence (AI) used its own assessment parameters to interpret the data generated by the electronic tongue.

(Many people have already posted this. This is the press release from Penn State, which did the research.)


The tongue comprises a graphene-based ion-sensitive field-effect transistor, or a conductive device that can detect chemical ions, linked to an artificial neural network, trained on various datasets. Critically, Das noted, the sensors are non-functionalized, meaning that one sensor can detect different types of chemicals, rather than having a specific sensor dedicated to each potential chemical. The researchers provided the neural network with 20 specific parameters to assess, all of which are related to how a sample liquid interacts with the sensor’s electrical properties. Based on these researcher-specified parameters, the AI could accurately detect samples — including watered-down milks, different types of sodas, blends of coffee and multiple fruit juices at several levels of freshness — and report on their content with greater than 80% accuracy in about a minute.
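The press release doesn’t publish the network architecture or the 20 parameters, but the pipeline it describes — fixed, researcher-chosen features in, liquid class out — can be sketched with a tiny from-scratch network on synthetic data. Every number, class count, and feature below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the study's data: each liquid sample is summarized
# by 20 sensor-derived parameters; here, 3 hypothetical liquid classes.
n_classes, n_features, n_per_class = 3, 20, 50
centers = rng.normal(0, 3, size=(n_classes, n_features))
X = np.vstack([c + rng.normal(0, 1, size=(n_per_class, n_features)) for c in centers])
y = np.repeat(np.arange(n_classes), n_per_class)

# One-hidden-layer network trained with softmax cross-entropy.
W1 = rng.normal(0, 0.1, (n_features, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, n_classes)); b2 = np.zeros(n_classes)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)

onehot = np.eye(n_classes)[y]
for _ in range(300):
    h, p = forward(X)
    g = (p - onehot) / len(X)            # gradient of cross-entropy wrt logits
    dW2 = h.T @ g; db2 = g.sum(0)
    dh = (g @ W2.T) * (1 - h**2)         # backprop through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    for param, grad in [(W1, dW1), (b1, db1), (W2, dW2), (b2, db2)]:
        param -= 0.5 * grad              # plain gradient descent step

_, p = forward(X)
accuracy = (p.argmax(1) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

On this easily separable synthetic data the toy network classifies nearly perfectly; the study’s harder, real measurements are what make its 80%-plus figure meaningful.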

“After achieving a reasonable accuracy with human-selected parameters, we decided to let the neural network define its own figures of merit by providing it with the raw sensor data. We found that the neural network reached a near ideal inference accuracy of more than 95% when utilizing the machine-derived figures of merit rather than the ones provided by humans,” said co-author Andrew Pannone, a doctoral student in engineering science and mechanics advised by Das. “So, we used a method called Shapley additive explanations, which allows us to ask the neural network what it was thinking after it makes a decision.”
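Shapley additive explanations attribute a model’s output to its inputs using Shapley values from game theory. As a sketch of the underlying idea — not the SHAP library or the study’s model — here is a brute-force Shapley computation over a toy scoring function; the feature names and the function itself are invented for illustration:

```python
from itertools import combinations
from math import factorial

# Hypothetical sensor-derived features (illustrative names only).
features = ["capacitance_shift", "hysteresis", "dirac_voltage", "noise_floor"]

def model(present):
    # Toy score with an interaction term, so attribution is non-obvious.
    s = 0.0
    if "capacitance_shift" in present: s += 2.0
    if "hysteresis" in present: s += 1.0
    if "capacitance_shift" in present and "dirac_voltage" in present: s += 1.5
    return s

def shapley(feature):
    # Average the feature's marginal contribution over all coalitions of
    # the other features, with the standard Shapley weights.
    n = len(features)
    others = [f for f in features if f != feature]
    value = 0.0
    for k in range(n):
        for coalition in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            value += weight * (model(set(coalition) | {feature}) - model(set(coalition)))
    return value

for f in features:
    print(f, round(shapley(f), 3))
```

The values sum to the model’s full output (the efficiency property), and the interaction credit is split evenly between the two interacting features — which is why this kind of attribution lets researchers “ask the network what it was thinking.” Real SHAP tooling approximates these sums, since the exact computation is exponential in the number of features.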

ATLANTA — An innovative approach to public safety is taking shape on Cleveland Avenue, where Atlanta City Councilman Antonio Lewis has partnered with the 445 Cleveland apartment complex to deploy AI-powered robotic dogs to deter crime.

The robotic dog, named “Beth,” is equipped with 360-degree cameras, a siren, and stair-climbing capabilities. Unlike other artificial intelligence robots like “Spunky” on Boulevard, Beth is monitored in real time by a human operator located in Bogotá, Colombia.

“Our operator who is physically watching these cameras needs to deploy the dog. It’s all in one system, and they are just controlling it, like a video game at home, except it’s not a video game—it’s Beth,” said Avi Wolf, the owner of 445 Cleveland.

What can NASA’s Ingenuity helicopter on Mars teach us about flying on other planets? This is what engineers at NASA’s Jet Propulsion Laboratory have been investigating since the robotic pioneer performed its last flight on the Red Planet’s surface on January 18, 2024. The purpose of the investigation was to ascertain the likely cause of the damage Ingenuity sustained on its final flight, which the team spotted on the helicopter’s rotor blades in images sent back to Earth. The investigation could help scientists and engineers improve on Ingenuity’s design for future flying robots on other worlds.

“When running an accident investigation from 100 million miles away, you don’t have any black boxes or eyewitnesses,” said Dr. Håvard Grip, who is a research technologist at NASA JPL and Ingenuity’s first pilot. “While multiple scenarios are viable with the available data, we have one we believe is most likely: Lack of surface texture gave the navigation system too little information to work with.”

Ingenuity was “retired” because of damage its rotor blades sustained during Flight 72, which turned out to be its final flight, after the navigation system failed to identify a safe landing spot. Engineers hypothesized that insufficient navigation data led to a hard landing, and that higher-than-expected loads broke the rotor blades. The findings from this investigation will help engineers improve designs for NASA’s upcoming Mars Sample Return mission, which is currently in the design phase with an anticipated launch date of 2026.

Researchers have developed a device that can simultaneously measure six markers of brain health. The sensor, which is inserted through the skull into the brain, can pull off this feat thanks to an artificial intelligence (AI) system that pieces apart the six signals in real time.

Being able to continuously monitor biomarkers in patients with traumatic brain injury could improve outcomes by catching swelling or bleeding early enough for doctors to intervene. But most existing devices measure just one marker at a time. They also tend to be made with metal, so they can’t easily be used in combination with magnetic resonance imaging.
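The article doesn’t describe the AI that separates the six signals, so as a generic stand-in here is a linear-unmixing sketch: several raw sensor channels each pick up a mixture of the six biomarker signals, and given a calibration (mixing) matrix, least squares recovers all six simultaneously. All shapes and noise levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Six underlying biomarker signals, observed only through eight sensor
# channels that each respond to a mixture of them.
n_markers, n_channels, n_samples = 6, 8, 500
sources = rng.normal(size=(n_markers, n_samples))   # true biomarker signals
mixing = rng.normal(size=(n_channels, n_markers))   # per-channel sensitivities
observed = mixing @ sources + 0.01 * rng.normal(size=(n_channels, n_samples))

# Recover all six signals at once from the multiplexed readout by solving
# mixing @ recovered ≈ observed in the least-squares sense.
recovered, *_ = np.linalg.lstsq(mixing, observed, rcond=None)
err = np.abs(recovered - sources).max()
print(f"max reconstruction error: {err:.3f}")
```

A neural network, as in the reported device, would replace the fixed calibration matrix with a learned, possibly nonlinear separation — but the goal is the same: one multiplexed measurement stream in, six concurrent biomarker traces out.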


Simultaneous access to measurements could improve outcomes for brain injuries.

Originally published on Towards AI.

AI hallucinations are a strange and sometimes worrying phenomenon. They happen when an AI, like ChatGPT, generates responses that sound real but are actually wrong or misleading. This issue is especially common in large language models (LLMs), the neural networks that drive these AI tools. They produce sentences that flow well and seem human, but without truly “understanding” the information they’re presenting. So, sometimes, they drift into fiction. For people or companies who rely on AI for correct information, these hallucinations can be a big problem — they break trust and sometimes lead to serious mistakes.

So, why do these models, which seem so advanced, get things so wrong? The reason isn’t only about bad data or training limitations; it goes deeper, into the way these systems are built. AI models operate on probabilities, not concrete understanding, so they occasionally guess — and guess wrong. Interestingly, there’s a historical parallel that helps explain this limitation. Back in 1931, a mathematician named Kurt Gödel made a groundbreaking discovery. He showed that every consistent mathematical system has boundaries — some truths can’t be proven within that system. His findings revealed that even the most rigorous systems have limits, things they just can’t handle.
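The “probabilities, not understanding” point can be made concrete with a toy next-token step. A language model scores every candidate continuation and then samples from the resulting distribution, so even when the correct continuation is the single most likely one, a fluent-but-wrong alternative gets drawn a meaningful fraction of the time. The vocabulary and logits below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate continuations for some factual prompt, with hypothetical
# model scores (logits). "1889" plays the role of the correct answer.
vocab = np.array(["1889", "1887", "banana", "1901"])
logits = np.array([3.0, 2.2, -4.0, 1.5])

# Softmax turns scores into a probability distribution.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Sampling many times shows how often plausible-but-wrong tokens appear.
draws = rng.choice(vocab, size=10_000, p=probs)
for tok in vocab:
    print(tok, round((draws == tok).mean(), 3))
```

Here the correct token is the most probable, yet roughly four draws in ten land on something else — a miniature version of how a model can confidently emit the wrong fact without any notion of truth.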

Meta has unveiled plans for a new data center in Richland Parish, Louisiana, marking a significant expansion of its global infrastructure. The $10 billion investment will be the company’s largest data center to date, spanning a massive 4 million square feet. This state-of-the-art facility will be a crucial component in Meta’s ongoing efforts to support the rapid growth of artificial intelligence (AI) technologies. Meta did not disclose an estimated completion or operational date for the facility.

The new Richland Parish data center will create over 500 full-time operational jobs, providing a substantial boost to the local economy. During peak construction, the project is expected to employ more than 5,000 workers.

Meta’s decision to build in Richland Parish was driven by several factors, including the region’s robust infrastructure, reliable energy grid, and business-friendly environment. The company also cited the strong support from local community partners, which played a critical role in facilitating the development of the data center.


Soukaina Filali Boubrahimi and Shah Muhammad Hamdi lead NSF-funded research to craft a comprehensive solar flare catalog of images and data, along with data-driven, machine learning models poised to deploy in an operational environment.