
Have you ever left a bottle of liquid in the freezer, only to find it cracked or shattered? To save you from tedious freezer cleanups, researchers at the University of Amsterdam have investigated why this happens, and how to prevent it. They discovered that while the liquid is freezing, pockets of liquid can get trapped inside the ice. When these pockets eventually freeze, the sudden expansion creates extreme pressure—enough to break glass.
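To get a feel for why a trapped pocket is so destructive, here is a back-of-envelope estimate (my own sketch, not a calculation from the study): water expands by roughly 9% on freezing, and if the surrounding ice prevents that expansion, the pressure rises by roughly the volumetric strain times an effective bulk modulus. All figures below are approximate textbook values.

```python
# Order-of-magnitude estimate of the pressure generated when a pocket
# of water freezes inside a rigid ice cavity. All constants are
# approximate textbook values, not numbers from the study.

EXPANSION = 0.09          # water -> ice volume expansion (~9%)
K_ICE = 8.8e9             # bulk modulus of ice, Pa (approximate)
GLASS_STRENGTH = 50e6     # typical tensile strength of glass, Pa

# If the cavity cannot grow, the new ice is compressed by roughly the
# full expansion strain, so pressure scales as K * (dV/V).
pressure = K_ICE * EXPANSION

print(f"Estimated confinement pressure: {pressure / 1e6:.0f} MPa")
print(f"Glass tensile strength:         {GLASS_STRENGTH / 1e6:.0f} MPa")
print(f"Exceeds glass strength: {pressure > GLASS_STRENGTH}")
```

The elastic estimate of several hundred MPa is an upper bound (in reality the ice–water phase equilibrium caps the pressure well below this), but even a small fraction of it comfortably exceeds the tensile strength of glass, which is why the bottle breaks.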

“Newton had an apple fall on his head. I found my freezer full of ,” says Menno Demmenie, first author of the new study that was recently published in Scientific Reports.

He continues, “The usual explanation for frost damage is that water expands when it freezes, but this does not explain why half-filled bottles also burst in our freezers. Our work addresses how ice can break a bottle even when it has plenty of space to expand into.”

When Bluesky CEO Jay Graber took the SXSW stage this week, she managed to make fun of Mark Zuckerberg without mentioning Meta at all. Her black T-shirt was emblazoned with black text stretching across the chest and sleeves, similar to the style of a T-shirt that the billionaire founder wore at an event last year. Graber’s shirt declared in Latin, Mundus sine Caesaribus. Or, “a world without Caesars.”

On Bluesky, users expressed such excitement over Graber’s T-shirt that the platform decided to sell replicas to raise money for its developer ecosystem.

The $40 shirt, available in sizes S–XL, sold out in roughly 30 minutes.

— Kreiner, et al.

In this article, the authors review the current understanding of obesity-related kidney disease and focus on the intertwined cardiometabolic abnormalities, which, in addition to obesity and diabetes, include risk factors such as hypertension, dyslipidemia, and systemic inflammation.



Obesity is a serious chronic disease and an independent risk factor for the new onset and progression of chronic kidney disease (CKD). CKD prevalence is expected to increase, at least partly due to the continuous rise in the prevalence of obesity. The concept of obesity-related kidney disease (OKD) has been introduced to describe the still incompletely understood interplay between obesity, CKD, and other cardiometabolic conditions, including risk factors for OKD and cardiovascular disease, such as diabetes and hypertension. Current therapeutics target obesity and CKD individually. Non-pharmacological interventions play a major part, but the efficacy and clinical applicability of lifestyle changes and metabolic surgery remain debatable, because the strategies do not benefit everyone, and it remains questionable whether lifestyle changes can be sustained in the long term.

Threat intelligence company GreyNoise warns that a critical PHP remote code execution vulnerability that impacts Windows systems is now under mass exploitation.

Tracked as CVE-2024-4577, this PHP-CGI argument injection flaw was patched in June 2024 and affects Windows PHP installations with PHP running in CGI mode. Successful exploitation lets unauthenticated attackers execute arbitrary code, leading to complete system compromise.

A day after PHP maintainers released CVE-2024-4577 patches on June 7, 2024, watchTowr Labs released proof-of-concept (PoC) exploit code, and the Shadowserver Foundation reported observing exploitation attempts.
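For administrators triaging exposure, the first check is the PHP version: per the June 2024 PHP release announcements, the fixed versions are 8.3.8, 8.2.20, and 8.1.29, while older branches (8.0 and below) are end-of-life and received no patch. A minimal sketch of that comparison follows; the helper name and the treatment of end-of-life branches are my own conventions, not from the advisory.

```python
# Sketch: decide whether a PHP version string carries the
# CVE-2024-4577 fix. Fixed versions per the June 2024 PHP releases:
# 8.3.8, 8.2.20, 8.1.29. End-of-life branches (8.0 and older) never
# received a patch, so they are treated as vulnerable here.

FIXED = {(8, 3): 8, (8, 2): 20, (8, 1): 29}  # branch -> first fixed patch level

def is_patched(version: str) -> bool:
    major, minor, patch = (int(part) for part in version.split("."))
    branch = (major, minor)
    if branch not in FIXED:
        # EOL or unknown branch: no fix was published, assume vulnerable.
        return False
    return patch >= FIXED[branch]

for v in ("8.3.8", "8.3.7", "8.1.29", "8.0.30"):
    print(v, "patched" if is_patched(v) else "VULNERABLE")
```

Note that the version alone is not the whole story: as the article states, exposure also requires PHP to be running in CGI mode on Windows, so a patched-but-CGI host and an unpatched-but-non-CGI host need different handling.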

Artificial Intelligence (AI) is, without a doubt, the defining technological breakthrough of our time. It represents not only a quantum leap in our ability to solve complex problems but also a mirror reflecting our ambitions, fears, and ethical dilemmas. As we witness its exponential growth, we cannot ignore the profound impact it is having on society. But are we heading toward a bright future or a dangerous precipice?

This opinion piece aims to foster critical reflection on AI’s role in the modern world and what it means for our collective future.

AI is no longer the stuff of science fiction. It is embedded in nearly every aspect of our lives, from the virtual assistants on our smartphones to the algorithms that recommend what to watch on Netflix or determine our eligibility for a bank loan. In medicine, AI is revolutionizing diagnostics and treatments, enabling the early detection of cancer and the personalization of therapies based on a patient’s genome. In education, adaptive learning platforms are democratizing access to knowledge by tailoring instruction to each student’s pace.

These advancements are undeniably impressive. AI promises a more efficient, safer, and fairer world. But is this promise being fulfilled? Or are we inadvertently creating new forms of inequality, where the benefits of technology are concentrated among a privileged few while others are left behind?

One of AI’s most pressing challenges is its impact on employment. Automation is eliminating jobs across various sectors, including manufacturing, services, and even traditionally “safe” fields such as law and accounting. Meanwhile, workforce reskilling is not keeping pace with technological disruption. The result? A growing divide between those equipped with the skills to thrive in the AI-driven era and those displaced by machines.

Another urgent concern is privacy. AI relies on vast amounts of data, and the massive collection of personal information raises serious questions about who controls these data and how they are used. We live in an era where our habits, preferences, and even emotions are continuously monitored and analyzed. This not only threatens our privacy but also opens the door to subtle forms of manipulation and social control.

Then, there is the issue of algorithmic bias. AI is only as good as the data it is trained on. If these data reflect existing biases, AI can perpetuate and even amplify societal injustices. We have already seen examples of this, such as facial recognition systems that fail to accurately identify individuals from minority groups or hiring algorithms that inadvertently discriminate based on gender. Far from being neutral, AI can become a tool of oppression if not carefully regulated.

Who Decides What Is Right?

AI forces us to confront profound ethical questions. When a self-driving car must choose between hitting a pedestrian or colliding with another vehicle, who decides the “right” choice? When AI is used to determine parole eligibility or distribute social benefits, how do we ensure these decisions are fair and transparent?

The reality is that AI is not just a technical tool—it is also a moral one. The choices we make today about how we develop and deploy AI will shape the future of humanity. But who is making these decisions? Currently, AI’s development is largely in the hands of big tech companies and governments, often without sufficient oversight from civil society. This is concerning because AI has the potential to impact all of us, regardless of our individual consent.

A Utopia or a Dystopia?

The future of AI remains uncertain. On one hand, we have the potential to create a technological utopia, where AI frees us from mundane tasks, enhances productivity, and allows us to focus on what truly matters: creativity, human connection, and collective well-being. On the other hand, there is the risk of a dystopia where AI is used to control, manipulate, and oppress—dividing society between those who control technology and those who are controlled by it.

The key to avoiding this dark scenario lies in regulation and education. We need robust laws that protect privacy, ensure transparency, and prevent AI’s misuse. But we also need to educate the public on the risks and opportunities of AI so they can make informed decisions and demand accountability from those in power.

Artificial Intelligence is, indeed, the Holy Grail of Technology. But unlike the medieval legend, this Grail is not hidden in a distant castle—it is in our hands, here and now. It is up to us to decide how we use it. Will AI be a tool for building a more just and equitable future, or will it become a weapon that exacerbates inequalities and threatens our freedom?

The answer depends on all of us. As citizens, we must demand transparency and accountability from those developing and implementing AI. As a society, we must ensure that the benefits of this technology are shared by all, not just a technocratic elite. And above all, we must remember that technology is not an end in itself but a means to achieve human progress.

The future of AI is the future we choose to build. And at this critical moment in history, we cannot afford to get it wrong. The Holy Grail is within our reach—but its true value will only be realized if we use it for the common good.

__
Copyright © 2025, Henrique Jorge

[ This article was originally published in Portuguese in SAPO’s technology section at: https://tek.sapo.pt/opiniao/artigos/o-santo-graal-da-tecnologia ]

Scientists have long suspected that phosphorene nanoribbons (PNRs)—thin pieces of black phosphorus, only a few nanometers wide—might exhibit unique magnetic and semiconducting properties, but proving this has been difficult.

In a recent study published in Nature, researchers explored the magnetic and semiconducting characteristics of these nanoribbons. Using techniques such as ultrafast magneto-optical spectroscopy and electron paramagnetic resonance, they were able to demonstrate the magnetic behavior of PNRs at room temperature and show how these magnetic properties can interact with light.

The study, carried out at the Cavendish Laboratory in collaboration with other institutes, including the University of Warwick, University College London, Freie Universität Berlin and the European High Magnetic Field lab in Nijmegen, revealed several key findings about phosphorene nanoribbons.