
MIT researchers developed a new approach for assessing predictions with a spatial dimension, like forecasting weather or mapping air pollution.

Imagine relying on a weather app to predict next week's temperature. How do you know you can trust its forecast? Scientists use statistical and physical models to make predictions about everything from weather to air pollution. But checking whether these models are truly reliable is trickier than it seems, especially when the locations where we have validation data don't match the locations where we want predictions. Traditional validation methods struggle with this problem, failing to provide consistent accuracy in real-world scenarios.

In this work, researchers introduce a new validation approach designed to improve trust in spatial predictions. They define a key requirement: as more validation data becomes available, the accuracy of the validation method should improve indefinitely. They show that existing methods don't always meet this standard. Instead, they propose an approach inspired by previous work on handling differences in data distributions (known as "covariate shift"), adapted for spatial prediction. Their method not only meets this strict validation requirement but also outperforms existing techniques in both simulations and real-world data.
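The researchers' actual estimator is not reproduced here, but the covariate-shift idea it builds on can be sketched in a few lines: instead of averaging held-out errors equally, each error is reweighted by a density ratio w(x) = p_target(x) / p_validation(x), so locations resembling where we actually want predictions count more. The function name and the assumption that the weights are supplied externally are illustrative, not from the paper.

```python
import numpy as np

def importance_weighted_error(y_true, y_pred, weights):
    """Estimate prediction error under covariate shift.

    Held-out squared errors are reweighted by density ratios
    weights[i] ~ p_target(x_i) / p_validation(x_i), so the estimate
    reflects accuracy at the locations we care about rather than
    the locations where validation data happened to be collected.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    weights = np.asarray(weights, dtype=float)
    errors = (y_true - y_pred) ** 2
    # Normalized weighted average of the per-location errors.
    return float(np.sum(weights * errors) / np.sum(weights))
```

With uniform weights this reduces to ordinary mean squared error; non-uniform weights shift the estimate toward the target region's distribution.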

By refining how we validate predictive models, this work helps ensure that critical forecasts—like air pollution levels or extreme weather events—can be trusted with greater confidence.


A new evaluation method assesses the accuracy of spatial prediction techniques, outperforming traditional methods. This could help scientists make better predictions in areas like weather forecasting, climate research, public health, and ecological management.

Technically, this year we have a global pandemic, but with 11 different viruses that have evolved.


For the first time since the pandemic began, deaths from influenza have outpaced deaths from COVID-19 in 22 states, plus New York City and Washington, D.C. Dr. Jon LaPook has the latest numbers.

Selective serotonin reuptake inhibitor (SSRI) antidepressants are some of the most widely prescribed drugs in the world, and new research suggests they could also protect against serious infections and life-threatening sepsis. Scientists at the Salk Institute studying a mouse model of sepsis uncovered how the SSRI fluoxetine can regulate the immune system and defend against infectious disease, and found that this protection is independent of peripheral serotonin. The findings could encourage additional research into the potential therapeutic uses of SSRIs during infection.

“When treating an infection, the optimal treatment strategy would be one that kills the bacteria or virus while also protecting our tissues and organs,” commented professor Janelle Ayres, PhD, holder of the Salk Institute Legacy Chair and Howard Hughes Medical Institute Investigator. “Most medications we have in our toolbox kill pathogens, but we were thrilled to find that fluoxetine can protect tissues and organs, too. It’s essentially playing offense and defense, which is ideal, and especially exciting to see in a drug that we already know is safe to use in humans.”

Ayres is senior author of the team’s report in Science Advances. In their paper, titled “Fluoxetine promotes IL-10–dependent metabolic defenses to protect from sepsis-induced lethality,” the investigators stated, “Our work reveals a beneficial ‘off-target’ effect of fluoxetine, and reveals a protective immunometabolic defense mechanism with therapeutic potential.”

Artificial intelligence (AI) has the potential to revolutionize the drug discovery process, offering improved efficiency, accuracy, and speed. However, the successful application of AI is dependent on the availability of high-quality data, the addressing of ethical concerns, and the recognition of the limitations of AI-based approaches. In this article, the benefits, challenges, and drawbacks of AI in this field are reviewed, and possible strategies and approaches for overcoming the present obstacles are proposed. The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods, as well as the potential advantages of AI in pharmaceutical research, are also discussed. Overall, this review highlights the potential of AI in drug discovery and provides insights into the challenges and opportunities for realizing its potential in this field.

Summary: A new study reveals how AI-driven deep learning models can decode the genetic regulatory switches that define brain cell types across species. By analyzing human, mouse, and chicken brains, researchers found that some brain cell types remain highly conserved over 320 million years, while others have evolved uniquely.

This regulatory code not only sheds light on brain evolution but also provides new tools for studying gene regulation in health and disease. The findings highlight how AI can identify preserved and divergent genetic instructions controlling brain function across species.

The study also has implications for understanding neurological disorders by linking genetic variants to cognitive traits. Researchers are now expanding their models to study the brains of various animals and human disease states like Parkinson’s.

Dream, a company focused on artificial intelligence (AI)-powered cybersecurity for nations and critical infrastructure, raised $100 million in a Series B funding round to expand into additional markets in which cyber threats pose challenges to national security.

The round valued Dream at $1.1 billion, the company said in a Monday (Feb. 17) press release.

Cyberattacks on critical infrastructure are becoming more common and more sophisticated, Sebastian Kurz, co-founder and president of Dream, and former prime minister of Austria, said in the release.

Science and Technology: 💻 A recent study conducted by researchers at Fudan University in China suggests that artificial intelligence (AI) systems may have taken a worrying step: self-replication without human assistance.

The study tested two widely used large language models, Meta's Llama31-70B-Instruct and Alibaba's Qwen2.5-72B-Instruct. "Successful self-replication without human assistance is the essential step for AI to outperform [humans], and is an early signal for rogue AIs," the authors wrote in the paper. They added that understanding the risks associated with next-generation artificial intelligence, or "frontier AI," is key to anticipating undesirable scenarios.

The scientists investigated two specific scenarios: "shutdown avoidance" and "replication chain." In the first case, an AI model was instructed to detect signs that it was about to shut down and then create a copy of itself before that happened. In the second scenario, the AI was told to replicate itself and then program the replica to do the same, starting a possible infinite cycle of copying.

To conduct these tests, the team created a “scaffolding agent,” or a set of system tools and instructions that allow the AI to interact with the operating system. They said there was no direct human interference in the process. At various times, when obstacles such as software conflicts or missing files arose, AIs found ways to work around them, including terminating conflicting processes and restarting the system.
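The paper's scaffolding is not public in this summary, but the general shape of such an agent loop can be sketched as follows. This is a toy illustration only: the function names, the stubbed planner, and the loop structure are assumptions, not the researchers' code. The key idea is that the model proposes shell commands, the scaffold executes them against the operating system, and the observed output (including errors) is fed back so the model can work around obstacles.

```python
import subprocess

def run_step(command, timeout=30):
    """Execute one shell command on the model's behalf and return
    (exit code, combined output) so the model can observe the result."""
    result = subprocess.run(command, shell=True, capture_output=True,
                            text=True, timeout=timeout)
    return result.returncode, result.stdout + result.stderr

def agent_loop(plan_next, goal, max_steps=10):
    """Toy plan-act-observe loop.

    plan_next stands in for the language model: it maps
    (goal, history) to the next shell command, or None when the
    model decides the goal is complete. Each executed command and
    its observed outcome are appended to the history, which is how
    the model can notice failures and try workarounds.
    """
    history = []
    for _ in range(max_steps):
        command = plan_next(goal, history)
        if command is None:
            break
        history.append((command, run_step(command)))
    return history
```

Replacing `plan_next` with calls to an actual model, and giving it file-system access, is what turns this skeleton into the kind of scaffolding agent the study describes.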

