
Researchers develop innovative method for secure operations on encrypted data without decryption

A hospital that wants to use a cloud computing service to perform artificial intelligence data analysis on sensitive patient records needs a guarantee that those data will remain private during computation. Homomorphic encryption is a special type of security scheme that can provide this assurance.

The technique encrypts data in a way that allows anyone to perform computations on it without decrypting it first, preventing others from learning anything about the underlying patient records. However, there are only a few ways to achieve homomorphic encryption, and they are so computationally intensive that deploying them in the real world is often infeasible.

MIT researchers have developed a new theoretical approach to building homomorphic encryption schemes that is simple and relies on computationally lightweight cryptographic tools. Their technique combines two tools so they become more powerful than either would be on its own. The researchers leverage this to construct a “somewhat homomorphic” encryption scheme—that is, it enables users to perform a limited number of operations on encrypted data without decrypting it, as opposed to fully homomorphic encryption that can allow more complex computations.
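The core idea of an additively homomorphic scheme can be illustrated with a toy one-time-pad construction. This is not the MIT scheme described above (whose details are not given here), only a minimal sketch of what "performing operations on encrypted data" means: the sum of two ciphertexts decrypts, under the combined keys, to the sum of the plaintexts, so the party doing the arithmetic never sees the data.

```python
import secrets

MODULUS = 2**64

def keygen():
    # fresh random pad per message (one-time pad over integers mod 2^64)
    return secrets.randbelow(MODULUS)

def encrypt(m, k):
    return (m + k) % MODULUS

def decrypt(c, k):
    return (c - k) % MODULUS

# Additive homomorphism: add the ciphertexts, then decrypt once with the
# summed keys -- no decryption happens during the computation itself.
k1, k2 = keygen(), keygen()
c1, c2 = encrypt(120, k1), encrypt(80, k2)
c_sum = (c1 + c2) % MODULUS
assert decrypt(c_sum, (k1 + k2) % MODULUS) == 200
```

Real somewhat-homomorphic schemes replace the pad with public-key machinery and support a bounded number of multiplications as well, which is where the computational cost comes from.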

Exo 2: A new programming language for high-performance computing, with much less code

Many companies invest heavily in hiring talent to create the high-performance library code that underpins modern artificial intelligence systems. NVIDIA, for instance, developed some of the most advanced high-performance computing (HPC) libraries, creating a competitive moat that has proven difficult for others to breach.

But what if a couple of students, within a few months, could compete with state-of-the-art HPC libraries with a few hundred lines of code, instead of tens or hundreds of thousands?

That’s what researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have shown with a new programming language called Exo 2.

Most AI struggles to read clocks and calendars, study finds

Some of the world’s most advanced AI systems struggle to tell the time and work out dates on calendars, a study suggests.

While AI models can perform tasks such as writing essays and generating art, they have yet to master some skills that humans carry out with ease, researchers say.

A team from the University of Edinburgh has shown that state-of-the-art AI models are unable to reliably interpret clock-hand positions or correctly answer questions about dates on calendars.

Tesla Robotaxi Spotted Driving Autonomously at Giga Texas

Fueling excitement, Tesla’s Cybercab was spotted navigating the expansive grounds of Gigafactory Texas autonomously. Tesla Cybercab, also labeled as the Robotaxi, was unveiled by CEO Elon Musk in October 2024, during the ‘We Robot’ event in California. The two-seat vehicle has no steering wheel or pedals – it represents Tesla’s end goal for a completely autonomous transportation network.

The Cybercab has butterfly doors that open automatically, a hatchback layout for the cargo room, and an inductive charging technique that eliminates the need for conventional charging ports. Tesla expects to start production of the Cybercabs before 2027, and the price is estimated at $30,000.

The Tesla Cybercab is an autonomous vehicle that Tesla plans to use in its upcoming ride-hailing system. The Cybercab is a distinct vehicle type: it is designed without any human driver controls, a choice intended to improve efficiency, passenger comfort, and product life span.

Interpretable deep learning for deconvolutional analysis of neural signals

Recent methods applying deep learning to model neural activity often rely on “black-box” approaches that lack an interpretable connection between neural activity and network parameters. Here, building on advances in interpretable AI, Tolooshams et al. propose a new method, deconvolutional unrolled neural learning (DUNL), and apply it to several datasets.
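The "unrolled" idea behind methods like DUNL can be sketched in general terms (this is a generic illustration of algorithm unrolling for sparse coding, not the authors' actual model): each network layer is one step of a classical optimization algorithm, so the learned parameters keep an interpretable meaning instead of being black-box weights.

```python
import numpy as np

def soft_threshold(x, lam):
    # proximal operator of the L1 norm (shrinks small coefficients to zero)
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unrolled_ista(y, D, n_steps=10, lam=0.1):
    """Unroll n_steps of ISTA for the sparse-coding problem y ~ D @ z.

    Each 'layer' is one interpretable optimization step: a gradient
    step on the reconstruction error followed by soft-thresholding.
    In an unrolled network, D and lam would be learned end-to-end
    while retaining their meaning as a dictionary and sparsity level.
    """
    L = np.linalg.norm(D, ord=2) ** 2      # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_steps):
        grad = D.T @ (D @ z - y)
        z = soft_threshold(z - grad / L, lam / L)
    return z
```

In an interpretable deconvolutional setting, D would be a convolutional dictionary and the recovered sparse code z would correspond to event-like components of the neural signal.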

The Holy Grail of Technology

Artificial Intelligence (AI) is, without a doubt, the defining technological breakthrough of our time. It represents not only a quantum leap in our ability to solve complex problems but also a mirror reflecting our ambitions, fears, and ethical dilemmas. As we witness its exponential growth, we cannot ignore the profound impact it is having on society. But are we heading toward a bright future or a dangerous precipice?

This opinion piece aims to foster critical reflection on AI’s role in the modern world and what it means for our collective future.

AI is no longer the stuff of science fiction. It is embedded in nearly every aspect of our lives, from the virtual assistants on our smartphones to the algorithms that recommend what to watch on Netflix or determine our eligibility for a bank loan. In medicine, AI is revolutionizing diagnostics and treatments, enabling the early detection of cancer and the personalization of therapies based on a patient’s genome. In education, adaptive learning platforms are democratizing access to knowledge by tailoring instruction to each student’s pace.

These advancements are undeniably impressive. AI promises a more efficient, safer, and fairer world. But is this promise being fulfilled? Or are we inadvertently creating new forms of inequality, where the benefits of technology are concentrated among a privileged few while others are left behind?

One of AI’s most pressing challenges is its impact on employment. Automation is eliminating jobs across various sectors, including manufacturing, services, and even traditionally “safe” fields such as law and accounting. Meanwhile, workforce reskilling is not keeping pace with technological disruption. The result? A growing divide between those equipped with the skills to thrive in the AI-driven era and those displaced by machines.

Another urgent concern is privacy. AI relies on vast amounts of data, and the massive collection of personal information raises serious questions about who controls these data and how they are used. We live in an era where our habits, preferences, and even emotions are continuously monitored and analyzed. This not only threatens our privacy but also opens the door to subtle forms of manipulation and social control.

Then, there is the issue of algorithmic bias. AI is only as good as the data it is trained on. If these data reflect existing biases, AI can perpetuate and even amplify societal injustices. We have already seen examples of this, such as facial recognition systems that fail to accurately identify individuals from minority groups or hiring algorithms that inadvertently discriminate based on gender. Far from being neutral, AI can become a tool of oppression if not carefully regulated.

Who Decides What Is Right?

AI forces us to confront profound ethical questions. When a self-driving car must choose between hitting a pedestrian or colliding with another vehicle, who decides the “right” choice? When AI is used to determine parole eligibility or distribute social benefits, how do we ensure these decisions are fair and transparent?

The reality is that AI is not just a technical tool—it is also a moral one. The choices we make today about how we develop and deploy AI will shape the future of humanity. But who is making these decisions? Currently, AI’s development is largely in the hands of big tech companies and governments, often without sufficient oversight from civil society. This is concerning because AI has the potential to impact all of us, regardless of our individual consent.

A Utopia or a Dystopia?

The future of AI remains uncertain. On one hand, we have the potential to create a technological utopia, where AI frees us from mundane tasks, enhances productivity, and allows us to focus on what truly matters: creativity, human connection, and collective well-being. On the other hand, there is the risk of a dystopia where AI is used to control, manipulate, and oppress—dividing society between those who control technology and those who are controlled by it.

The key to avoiding this dark scenario lies in regulation and education. We need robust laws that protect privacy, ensure transparency, and prevent AI’s misuse. But we also need to educate the public on the risks and opportunities of AI so they can make informed decisions and demand accountability from those in power.

Artificial Intelligence is, indeed, the Holy Grail of Technology. But unlike the medieval legend, this Grail is not hidden in a distant castle—it is in our hands, here and now. It is up to us to decide how we use it. Will AI be a tool for building a more just and equitable future, or will it become a weapon that exacerbates inequalities and threatens our freedom?

The answer depends on all of us. As citizens, we must demand transparency and accountability from those developing and implementing AI. As a society, we must ensure that the benefits of this technology are shared by all, not just a technocratic elite. And above all, we must remember that technology is not an end in itself but a means to achieve human progress.

The future of AI is the future we choose to build. And at this critical moment in history, we cannot afford to get it wrong. The Holy Grail is within our reach—but its true value will only be realized if we use it for the common good.

Copyright © 2025, Henrique Jorge

[ This article was originally published in Portuguese in SAPO’s technology section at: https://tek.sapo.pt/opiniao/artigos/o-santo-graal-da-tecnologia ]

Episode 51: Anthony Aguirre on Cosmology, Zen, Entropy, and Information

Blog post with audio player, show notes, and transcript: https://www.preposterousuniverse.com/podcast/2019/06/17/epis…formation/

Patreon: https://www.patreon.com/seanmcarroll.

Cosmologists have a standard set of puzzles they think about: the nature of dark matter and dark energy, whether there was a period of inflation, the evolution of structure, and so on. But there are also even deeper questions, having to do with why there is a universe at all, and why the early universe had low entropy, that most working cosmologists don’t address. Today’s guest, Anthony Aguirre, is an exception. We talk about these deep issues, and how tackling them might lead to a very different way of thinking about our universe. At the end there’s an entertaining detour into AI and existential risk.

Anthony Aguirre received his Ph.D. in Astronomy from Harvard University. He is currently associate professor of physics at the University of California, Santa Cruz, where his research involves cosmology, inflation, and fundamental questions in physics. His new book, Cosmological Koans, is an exploration of the principles of contemporary cosmology illustrated with short stories in the style of Zen Buddhism. He is the co-founder of the Foundational Questions Institute, the Future of Life Institute, and the prediction platform Metaculus.

Scientists develop AI-powered digital twin model that can control and adapt its physical doppelganger

Scientists say they have developed a new AI-assisted model of a digital twin with the ability to adapt and control the physical machine in real time.

The discovery, reported in the journal IEEE Access, adds a new dimension to the digital copies of real-world machines, like robots, drones, or even autonomous cars, according to the authors.

Digital twins are exact replicas of things in the physical world. They are often likened to video-game versions of the real machines they mirror, and are constantly updated with real-time data.

Advanced Skin Disease Diagnosis and Treatment Leveraging Convolutional Neural Networks for Image-Based Prediction and Comprehensive Health Assistance

Full article authored by Noshin Un Noor, et al., from World University of Bangladesh, Bangladesh.

Skin diseases are a major global health concern, encompassing a wide range of conditions with varying severity. Prompt and precise diagnosis is critical for effective treatment. However, traditional methods often rely on dermatologists, creating disparities in access to care. This study creates and assesses a highly accurate Convolutional Neural Network (CNN) model that can use digital photos of skin lesions to diagnose a variety of skin conditions, and investigates how various CNN architectures and pre-trained models can improve the precision and efficiency of diagnosis.


Abstract

Skin conditions are a worldwide health issue that requires prompt and accurate diagnosis in order to be effectively treated. This study presents a Convolutional Neural Network (CNN)-based automated skin disease diagnostic method. The work uses preprocessing methods like scaling, normalization, and augmentation to improve model robustness using the DermNet dataset, which consists of 19,500 pictures from 23 disease categories. TensorFlow and Keras were used to create a unique CNN architecture, which produced an impressive accuracy of 94.65%. Metrics like precision, recall, and F1-score were used to validate the model’s performance, showing that it outperformed more conventional machine learning techniques like SVM and KNN. The system incorporates patient-reported symptoms in addition to diagnosis to provide a comprehensive approach to health support, allowing for remote accessibility and tailored therapy suggestions. This work recognizes issues like dataset variability and processing needs while showcasing the revolutionary potential of AI in dermatology. In order to improve model interpretability and clinical integration, future possibilities include dataset extension, real-world validation, and the use of explainable AI.
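The preprocessing pipeline the abstract names (scaling, normalization, augmentation) can be sketched in a few lines. The image size and the use of a horizontal flip are assumptions for illustration; the paper's exact settings and custom CNN architecture are not reproduced here.

```python
import numpy as np

def preprocess(img, size=128):
    """Toy version of the preprocessing the paper describes:
    scaling, normalization, and (flip) augmentation.

    Assumes img is an HxWx3 uint8 array; size and augmentation
    choices are illustrative, not the paper's actual values.
    """
    # scaling: nearest-neighbour resize to size x size
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = img[rows][:, cols]
    # normalization: map pixel values into [0, 1]
    norm = resized.astype(np.float32) / 255.0
    # augmentation: a horizontal flip doubles the sample
    return np.stack([norm, norm[:, ::-1]])

batch = preprocess(np.random.randint(0, 256, (300, 200, 3), dtype=np.uint8))
# batch has shape (2, size, size, 3) with values in [0, 1]
```

A batch like this would then feed a Keras CNN ending in a 23-way softmax, one unit per DermNet disease category.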

Skin Disease Diagnosis, Dermatological Image Analysis, Medical Image Classification, Convolutional Neural Networks (CNNs), Healthcare Accessibility, Deep Learning Applications, DermNet Dataset