
An Australian cultured meat startup has “resurrected” the woolly mammoth — in the hope that people will think about eating it.

The challenge: Our traditional way of producing meat — by raising and slaughtering animals — is both bad for the environment and arguably unethical, yet demand for meat continues to increase.

Cultured meat, which is grown from muscle cells in a lab, can perfectly replicate the flavor of meat that comes from animals, so carnivores may prefer it to plant-based alternatives — once prices come down, at least. But some people may hesitate to even try cultured beef or pork when they could just keep eating the “real” stuff.

Tissue contamination distracts AI models from making accurate real-world diagnoses. Human pathologists are extensively trained to detect when tissue samples from one patient mistakenly end up on another patient’s microscope slides (a problem known as tissue contamination). But such contamination can easily confuse artificial intelligence (AI) models, which are often trained in pristine, simulated environments, reports a new Northwestern Medicine study.

“We train AIs to tell ‘A’ versus ‘B’ in a very clean, artificial environment, but, in real life, the AI will see a variety of materials that it hasn’t trained on. When it does, mistakes can happen,” said corresponding author Dr. Jeffery Goldstein, director of perinatal pathology and an assistant professor of perinatal pathology and autopsy at Northwestern University Feinberg School of Medicine.

“Our findings serve as a reminder that AI that works incredibly well in the lab may fall on its face in the real world. Patients should continue to expect that a human expert is the final decider on diagnoses made on biopsies and other tissue samples. Pathologists fear — and AI companies hope — that the computers are coming for our jobs. Not yet.”
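The failure mode Goldstein describes, a model trained only on clean slides encountering foreign tissue at inference time, can be probed with a simple stress test. The sketch below is illustrative only and is not the Northwestern study's code: it compares a stand-in classifier's accuracy on clean image patches against the same patches with a contaminant region pasted in.

```python
import numpy as np

def paste_contaminant(image, contaminant, top_left=(0, 0)):
    """Composite a foreign-tissue patch onto a clean image patch."""
    out = image.copy()
    y, x = top_left
    h, w = contaminant.shape[:2]
    out[y:y + h, x:x + w] = contaminant
    return out

def accuracy(model, images, labels):
    preds = [model(img) for img in images]
    return float(np.mean([int(p == y) for p, y in zip(preds, labels)]))

def contamination_stress_test(model, images, labels, contaminant):
    """Compare accuracy on clean patches vs. the same patches with contaminant pasted in."""
    clean = accuracy(model, images, labels)
    dirty = accuracy(model, [paste_contaminant(img, contaminant) for img in images], labels)
    return clean, dirty

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-ins: 64x64 grayscale "slides" classified by a brightness threshold.
    images = [rng.uniform(0.0, 0.5, (64, 64)) for _ in range(20)] + \
             [rng.uniform(0.5, 1.0, (64, 64)) for _ in range(20)]
    labels = [0] * 20 + [1] * 20
    model = lambda img: int(img.mean() > 0.5)   # stand-in classifier
    contaminant = np.full((48, 48), 0.95)       # bright foreign-tissue patch
    print(contamination_stress_test(model, images, labels, contaminant))  # e.g. (1.0, 0.5)
```

In practice the same harness would wrap a real slide-patch classifier and real contaminant crops; a large gap between the two numbers is the kind of brittleness the study reports.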

A new type of ultra-sensitive sensor has been developed to detect extremely low levels of lead ions in water. The advance may pave the way for next-generation water quality monitoring systems.

What distinguishes the sensor is its ability to detect lead ions at concentrations as low as one femtomole per liter of water.

According to researchers at the University of California, San Diego, this is “one million times” more sensitive than any known sensing technology for water contamination monitoring.
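To put femtomolar sensitivity in perspective, here is a short back-of-the-envelope conversion (not a figure from the UC San Diego team) of one femtomole of lead per liter into absolute quantities:

```python
# Back-of-the-envelope: what does 1 femtomole of lead per liter amount to?
AVOGADRO = 6.02214076e23     # ions per mole
PB_MOLAR_MASS_G = 207.2      # grams per mole of lead

femtomoles_per_liter = 1.0
moles_per_liter = femtomoles_per_liter * 1e-15

ions_per_liter = moles_per_liter * AVOGADRO          # ~6.0e8 ions
grams_per_liter = moles_per_liter * PB_MOLAR_MASS_G  # ~2.1e-13 g
picograms_per_liter = grams_per_liter * 1e12         # ~0.21 pg

print(f"{ions_per_liter:.2e} Pb ions per liter")
print(f"{picograms_per_liter:.2f} picograms of lead per liter")
```

In other words, the claimed detection floor corresponds to roughly a fifth of a picogram of lead in a liter of water.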

Imagine you work for a car rental agency or a package delivery company and you’re in charge of a fleet of vehicles. If you’re switching to EVs, managing the fleet becomes more complex due to long charging times and limited charging point availability.

Guided Energy, a French startup that raised $5.2 million from Sequoia Capital and Dynamo Ventures at the end of 2023, is building a software tool that helps EV fleet operators with charge management and dispatch. The company aggregates data from vehicles and from public and private charging points, and uses machine learning to tell operators when and where to charge their vehicles.

“The beauty of the EV ecosystem is that it is all online. This means we connect to both EVs and charging points directly. Where customers already have telematics or supervision platforms in place, we can integrate with them using APIs into our platform, giving them a single, real-time, unified view of their EV operations,” co-founder and CEO Anant Kapoor told me.
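Guided Energy hasn’t published its scheduling logic, but the core dispatch decision it describes, choosing which vehicles to send to which chargers given their state of charge and charger availability, can be sketched with a simple greedy heuristic. The `Vehicle` and `Charger` fields and the `plan_charging` function below are illustrative assumptions, not the company’s actual API or model.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    vehicle_id: str
    state_of_charge: float    # 0.0-1.0, fraction of battery remaining
    km_until_next_job: float  # distance needed before the next assignment
    km_per_full_charge: float

@dataclass
class Charger:
    charger_id: str
    available: bool

def plan_charging(vehicles, chargers, reserve=0.15):
    """Greedy sketch: send the vehicles least able to cover their next job
    to whatever chargers are currently free, most urgent first."""
    def margin(v):
        range_left = v.state_of_charge * v.km_per_full_charge
        return range_left - v.km_until_next_job  # negative = cannot finish the job

    needy = sorted(
        (v for v in vehicles if v.state_of_charge < reserve or margin(v) < 0),
        key=margin,
    )
    free = [c for c in chargers if c.available]
    return [(v.vehicle_id, c.charger_id) for v, c in zip(needy, free)]

if __name__ == "__main__":
    fleet = [
        Vehicle("van-1", 0.10, 40, 250),
        Vehicle("van-2", 0.60, 30, 250),
        Vehicle("van-3", 0.20, 80, 250),
    ]
    points = [Charger("dc-fast-1", True), Charger("ac-slow-2", False)]
    print(plan_charging(fleet, points))  # [('van-3', 'dc-fast-1')]: most at-risk vehicle first
```

A production system would presumably replace this hand-rolled urgency rule with the learned forecasts of charging times and point availability that the startup describes.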

Meta is promising to roll out auto-labeling for AI-generated images — as soon as it figures out how, that is.

Nick Clegg, Meta’s president of global affairs, said in a policy update that the company is currently working with “industry partners” to formulate criteria that will help identify AI content. Once those criteria are determined, Meta will begin automatically labeling posts featuring any AI-generated images, video, or audio “in the coming months.”

“This approach represents the cutting edge of what’s technically possible right now. But it’s not yet possible to identify all AI-generated content, and there are ways that people can strip out invisible markers,” Clegg wrote. “So we’re pursuing a range of options. We’re working hard to develop classifiers that can help us to automatically detect AI-generated content, even if the content lacks invisible markers.”
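Clegg’s “invisible markers” refer to provenance signals such as embedded metadata and invisible watermarks. As a rough illustration of the metadata side only, the sketch below scans a file’s raw bytes for the IPTC “trainedAlgorithmicMedia” digital-source-type string; it is a toy heuristic, not Meta’s detection pipeline, and as the post notes, such markers can be stripped.

```python
from pathlib import Path

# IPTC's digital-source-type vocabulary uses "trainedAlgorithmicMedia" for
# AI-generated media; some tools write it into an image's embedded XMP metadata.
AI_PROVENANCE_MARKER = b"trainedAlgorithmicMedia"

def has_ai_provenance_marker(path: str) -> bool:
    """Crude check: scan the raw file bytes for an embedded provenance marker.
    Absence proves nothing, since such markers can be stripped or never added."""
    return AI_PROVENANCE_MARKER in Path(path).read_bytes()

if __name__ == "__main__":
    sample = "example.jpg"  # placeholder path, not a real file in this story
    if Path(sample).exists():
        print(has_ai_provenance_marker(sample))
```

The classifiers Clegg mentions are the complement to this: models that try to recognize AI-generated content from the pixels themselves when no marker survives.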