
Non-invasive sensor measures intracranial pressure more accurately, aiding early intervention

A technology developed by the Brazilian company brain4care can measure absolute values of intracranial pressure (ICP) more accurately than existing non-invasive methods, according to a study published in the journal npj Digital Medicine by researchers from the University of São Paulo in Brazil, the University of Cambridge in the United Kingdom, Emory University in the United States, and the company itself.

“The study included the largest number of patients and showed that the technology we developed had the lowest error in estimating the value of intracranial pressure among all the non-invasive methods already available in the world,” Gustavo Frigieri, scientific director of brain4care and one of the authors of the study, told Agência FAPESP.

The technology developed by the Brazilian company consists of a sensor placed on the patient’s head that registers the nanometric expansions of the skull in each cardiac cycle and generates, in real time, a wave that indicates the variations in volume and intracranial pressure. The data obtained are processed by an artificial intelligence platform that generates reports to help doctors make decisions.

With $200M in seed funding, Flagship-backed Lila Sciences touts ambitious AI vision

Biotech incubator Flagship Pioneering has uncorked its latest company. Lila Sciences is looking to use $200 million in seed funding to develop new advanced artificial intelligence that can power fully autonomous research labs, according to a March 10 press release.

In addition to Flagship, the financing comes from General Catalyst, March Capital, the ARK Venture Fund, Altitude Life Science Ventures, Blue Horizon Advisors, the State of Michigan Retirement System, Modi Ventures and a wholly owned subsidiary of the Abu Dhabi Investment Authority, according to the release.

Researchers develop innovative method for secure operations on encrypted data without decryption

A hospital that wants to use a cloud computing service to perform artificial intelligence data analysis on sensitive patient records needs a guarantee that those data will remain private during computation. Homomorphic encryption is a special type of security scheme that can provide this assurance.

The technique encrypts data in such a way that anyone can perform computations on it without decrypting it, preventing others from learning anything about the underlying patient records. However, there are only a few ways to achieve homomorphic encryption, and they are so computationally intensive that it is often infeasible to deploy them in the real world.

MIT researchers have developed a new theoretical approach to building homomorphic encryption schemes that is simple and relies on computationally lightweight cryptographic tools. Their technique combines two tools so they become more powerful than either would be on its own. The researchers leverage this to construct a “somewhat homomorphic” encryption scheme—that is, it enables users to perform a limited number of operations on encrypted data without decrypting it, as opposed to fully homomorphic encryption that can allow more complex computations.
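The MIT construction itself is not detailed here, but the core idea of homomorphic encryption can be illustrated with a classic textbook scheme, Paillier, which is additively homomorphic: multiplying two ciphertexts produces an encryption of the sum of the plaintexts, so a server can add encrypted values without ever seeing them. A toy sketch with deliberately tiny primes (never use key sizes like this in practice):

```python
from math import lcm

# Toy Paillier cryptosystem. Additively homomorphic: multiplying
# ciphertexts modulo n^2 yields an encryption of the plaintext sum.

def keygen(p=101, q=103):
    n = p * q
    lam = lcm(p - 1, q - 1)        # Carmichael function of n
    g = n + 1                      # standard simple generator choice
    mu = pow(lam, -1, n)           # modular inverse of lam mod n
    return (n, g), (lam, mu)

def encrypt(pub, m, r):
    # r must be coprime to n; in real use it is drawn at random per message.
    n, g = pub
    n2 = n * n
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    ell = (pow(c, lam, n2) - 1) // n   # the "L" function from the paper
    return (ell * mu) % n

pub, priv = keygen()
n = pub[0]
c1 = encrypt(pub, 7, r=42)
c2 = encrypt(pub, 35, r=99)
# The server adds the plaintexts by multiplying ciphertexts:
c_sum = (c1 * c2) % (n * n)
total = decrypt(pub, priv, c_sum)   # recovers 7 + 35 = 42
```

Like the "somewhat homomorphic" scheme described above, Paillier supports only a restricted class of operations (here, addition); fully homomorphic schemes, which allow arbitrary computation, are far more expensive.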

Exo 2: A new programming language for high-performance computing, with much less code

Many companies invest heavily in hiring talent to create the high-performance library code that underpins modern artificial intelligence systems. NVIDIA, for instance, developed some of the most advanced high-performance computing (HPC) libraries, creating a competitive moat that has proven difficult for others to breach.

But what if a couple of students could, within a few months, rival state-of-the-art HPC libraries using a few hundred lines of code instead of tens or hundreds of thousands?

That’s what researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have shown with a new programming language called Exo 2.

Most AI struggles to read clocks and calendars, study finds

Some of the world’s most advanced AI systems struggle to tell the time and work out dates on calendars, a study suggests.

While AI models can perform tasks such as writing essays and generating art, they have yet to master some skills that humans carry out with ease, researchers say.

A team from the University of Edinburgh has shown that state-of-the-art AI models are unable to reliably interpret clock-hand positions or correctly answer questions about dates on calendars.

Tesla Robotaxi Spotted Driving Autonomously at Giga Texas

Fueling excitement, Tesla’s Cybercab was spotted navigating the expansive grounds of Gigafactory Texas autonomously. The Cybercab, also known as the Robotaxi, was unveiled by CEO Elon Musk in October 2024 during the ‘We Robot’ event in California. The two-seat vehicle has no steering wheel or pedals; it represents Tesla’s end goal of a completely autonomous transportation network.

The Cybercab has butterfly doors that open automatically, a hatchback layout for cargo, and inductive charging that eliminates the need for a conventional charging port. Tesla expects to start production of the Cybercab before 2027, at an estimated price of $30,000.

The Tesla Cybercab is an autonomous vehicle that Tesla plans to deploy in its upcoming ride-hailing system. It is a distinct vehicle type: designed without any human driver controls, it trades operability for efficiency, premium passenger comfort, and an extended service life.

The Holy Grail of Technology

Artificial Intelligence (AI) is, without a doubt, the defining technological breakthrough of our time. It represents not only a quantum leap in our ability to solve complex problems but also a mirror reflecting our ambitions, fears, and ethical dilemmas. As we witness its exponential growth, we cannot ignore the profound impact it is having on society. But are we heading toward a bright future or a dangerous precipice?

This opinion piece aims to foster critical reflection on AI’s role in the modern world and what it means for our collective future.

AI is no longer the stuff of science fiction. It is embedded in nearly every aspect of our lives, from the virtual assistants on our smartphones to the algorithms that recommend what to watch on Netflix or determine our eligibility for a bank loan. In medicine, AI is revolutionizing diagnostics and treatments, enabling the early detection of cancer and the personalization of therapies based on a patient’s genome. In education, adaptive learning platforms are democratizing access to knowledge by tailoring instruction to each student’s pace.

These advancements are undeniably impressive. AI promises a more efficient, safer, and fairer world. But is this promise being fulfilled? Or are we inadvertently creating new forms of inequality, where the benefits of technology are concentrated among a privileged few while others are left behind?

One of AI’s most pressing challenges is its impact on employment. Automation is eliminating jobs across various sectors, including manufacturing, services, and even traditionally “safe” fields such as law and accounting. Meanwhile, workforce reskilling is not keeping pace with technological disruption. The result? A growing divide between those equipped with the skills to thrive in the AI-driven era and those displaced by machines.

Another urgent concern is privacy. AI relies on vast amounts of data, and the massive collection of personal information raises serious questions about who controls these data and how they are used. We live in an era where our habits, preferences, and even emotions are continuously monitored and analyzed. This not only threatens our privacy but also opens the door to subtle forms of manipulation and social control.

Then, there is the issue of algorithmic bias. AI is only as good as the data it is trained on. If these data reflect existing biases, AI can perpetuate and even amplify societal injustices. We have already seen examples of this, such as facial recognition systems that fail to accurately identify individuals from minority groups or hiring algorithms that inadvertently discriminate based on gender. Far from being neutral, AI can become a tool of oppression if not carefully regulated.

Who Decides What Is Right?

AI forces us to confront profound ethical questions. When a self-driving car must choose between hitting a pedestrian or colliding with another vehicle, who decides the “right” choice? When AI is used to determine parole eligibility or distribute social benefits, how do we ensure these decisions are fair and transparent?

The reality is that AI is not just a technical tool—it is also a moral one. The choices we make today about how we develop and deploy AI will shape the future of humanity. But who is making these decisions? Currently, AI’s development is largely in the hands of big tech companies and governments, often without sufficient oversight from civil society. This is concerning because AI has the potential to impact all of us, regardless of our individual consent.

A Utopia or a Dystopia?

The future of AI remains uncertain. On one hand, we have the potential to create a technological utopia, where AI frees us from mundane tasks, enhances productivity, and allows us to focus on what truly matters: creativity, human connection, and collective well-being. On the other hand, there is the risk of a dystopia where AI is used to control, manipulate, and oppress—dividing society between those who control technology and those who are controlled by it.

The key to avoiding this dark scenario lies in regulation and education. We need robust laws that protect privacy, ensure transparency, and prevent AI’s misuse. But we also need to educate the public on the risks and opportunities of AI so they can make informed decisions and demand accountability from those in power.

Artificial Intelligence is, indeed, the Holy Grail of Technology. But unlike the medieval legend, this Grail is not hidden in a distant castle—it is in our hands, here and now. It is up to us to decide how we use it. Will AI be a tool for building a more just and equitable future, or will it become a weapon that exacerbates inequalities and threatens our freedom?

The answer depends on all of us. As citizens, we must demand transparency and accountability from those developing and implementing AI. As a society, we must ensure that the benefits of this technology are shared by all, not just a technocratic elite. And above all, we must remember that technology is not an end in itself but a means to achieve human progress.

The future of AI is the future we choose to build. And at this critical moment in history, we cannot afford to get it wrong. The Holy Grail is within our reach—but its true value will only be realized if we use it for the common good.

__
Copyright © 2025, Henrique Jorge

[ This article was originally published in Portuguese in SAPO’s technology section at: https://tek.sapo.pt/opiniao/artigos/o-santo-graal-da-tecnologia ]

Episode 51: Anthony Aguirre on Cosmology, Zen, Entropy, and Information

Blog post with audio player, show notes, and transcript: https://www.preposterousuniverse.com/podcast/2019/06/17/epis…formation/

Patreon: https://www.patreon.com/seanmcarroll.

Cosmologists have a standard set of puzzles they think about: the nature of dark matter and dark energy, whether there was a period of inflation, the evolution of structure, and so on. But there are also even deeper questions, having to do with why there is a universe at all, and why the early universe had low entropy, that most working cosmologists don’t address. Today’s guest, Anthony Aguirre, is an exception. We talk about these deep issues, and how tackling them might lead to a very different way of thinking about our universe. At the end there’s an entertaining detour into AI and existential risk.

Anthony Aguirre received his Ph.D. in Astronomy from Harvard University. He is currently associate professor of physics at the University of California, Santa Cruz, where his research involves cosmology, inflation, and fundamental questions in physics. His new book, Cosmological Koans, is an exploration of the principles of contemporary cosmology illustrated with short stories in the style of Zen Buddhism. He is the co-founder of the Foundational Questions Institute, the Future of Life Institute, and the prediction platform Metaculus.

Scientists develop AI-powered digital twin model that can control and adapt its physical doppelganger

Scientists say they have developed a new AI-assisted digital twin that can adapt to and control its physical counterpart in real time.

The discovery, reported in the journal IEEE Access, adds a new dimension to the digital copies of real-world machines, like robots, drones, or even autonomous cars, according to the authors.

Digital twins are exact replicas of things in the physical world. They have been likened to video game versions of the real machines they mirror, constantly updated with real-time data.
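The paper's model is not reproduced here, but the basic loop a digital twin runs can be sketched in a few lines. All names (`DigitalTwin`, `target_temp`, `heater_power`) are hypothetical; the point is only the pattern: ingest real-time sensor data to stay in sync, then compute a control command to send back to the physical machine.

```python
class DigitalTwin:
    """Minimal digital-twin sketch: mirrors a machine's state from
    streamed sensor readings and issues corrective commands back."""

    def __init__(self, target_temp):
        self.target_temp = target_temp
        self.state = {}                      # the twin's live copy of the machine

    def update(self, sensor_reading):
        self.state.update(sensor_reading)    # sync with real-time data

    def control_command(self):
        # Adapt: simple proportional control toward the setpoint,
        # computed on the mirrored state rather than the machine itself.
        error = self.target_temp - self.state.get("temp", self.target_temp)
        return {"heater_power": max(0.0, min(1.0, 0.1 * error))}

twin = DigitalTwin(target_temp=70.0)
twin.update({"temp": 65.0, "rpm": 1200})
cmd = twin.control_command()   # {'heater_power': 0.5}, sent to the machine
```

The AI-assisted twin described in the article goes further, learning and adapting its control behavior rather than using a fixed rule like the one above.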
