
UCLA doctoral student Yilin Wong noticed that some tiny dots had appeared on one of her samples, which had been accidentally left out overnight. The layered sample consisted of a germanium wafer topped with evaporated metal films in contact with a drop of water. On a whim, she looked at the dots under a microscope and couldn’t believe her eyes. Beautiful spiral patterns had been etched into the germanium surface by a chemical reaction.

Wong’s curiosity led her on a journey to discover what no one had seen before: hundreds of near-identical spiral patterns can spontaneously form on a centimeter-square germanium chip. Moreover, small changes in experimental parameters, such as the thickness of the metal film, generated different patterns, including Archimedean spirals, logarithmic spirals, lotus flower shapes, radially symmetric patterns and more.
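The two spiral families mentioned differ in how the radius grows with angle: an Archimedean spiral (r = aθ) adds a fixed distance per turn, while a logarithmic spiral (r = a·e^{bθ}) multiplies the radius by a fixed factor per turn. A minimal sketch (illustrative only, not from the paper; the parameters `a` and `b` are arbitrary):

```python
import math

def archimedean_r(theta, a=1.0):
    """Archimedean spiral: radius grows linearly with angle."""
    return a * theta

def logarithmic_r(theta, a=1.0, b=0.2):
    """Logarithmic spiral: radius grows exponentially with angle."""
    return a * math.exp(b * theta)

def turn_spacings(r_func, n_turns=4):
    """Radial distance between successive turns along a fixed ray."""
    radii = [r_func(2 * math.pi * k) for k in range(1, n_turns + 1)]
    return [r2 - r1 for r1, r2 in zip(radii, radii[1:])]

# Archimedean turns are evenly spaced (2*pi*a apart);
# logarithmic turn spacing grows geometrically outward.
arch_spacings = turn_spacings(archimedean_r)
log_spacings = turn_spacings(logarithmic_r)
```

This constant-versus-growing turn spacing is how the two spiral types are told apart in micrographs.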

The discovery, published in Physical Review Materials, occurred fortuitously when Wong made a small mistake while attempting to bind DNA to the metal film.

Researchers have overcome a decades-old challenge in the field of organic semiconductors, opening new possibilities for the future of electronics. The researchers, led by the University of Cambridge and the Eindhoven University of Technology, have created an organic semiconductor that forces electrons to move in a spiral pattern, which could improve the efficiency of OLED displays in television and smartphone screens, or power next-generation computing technologies such as spintronics and quantum computing.

The semiconductor they developed emits circularly polarized light—meaning the light carries information about the ‘handedness’ of electrons. The internal structure of most inorganic semiconductors, like silicon, is symmetrical, meaning electrons move through them without any preferred direction.

However, in nature, molecules often have a chiral (left- or right-handed) structure: like human hands, the two forms are mirror images of one another. Chirality plays an important role in processes such as DNA formation, but it is a difficult phenomenon to harness and control in electronics.
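The degree of circular polarization in emitted light is conventionally quantified by the luminescence dissymmetry factor, g = 2(I_L − I_R)/(I_L + I_R), where I_L and I_R are the left- and right-circularly polarized intensities. A minimal sketch of that standard formula (illustrative, not taken from the paper):

```python
def dissymmetry_factor(i_left, i_right):
    """Luminescence dissymmetry factor g = 2(I_L - I_R)/(I_L + I_R).

    g = 0 for unpolarized emission; g = +2 or -2 for fully
    circularly polarized light of one handedness or the other.
    """
    total = i_left + i_right
    if total == 0:
        raise ValueError("no emission intensity")
    return 2.0 * (i_left - i_right) / total
```

The sign of g encodes the handedness, which is how circularly polarized emission carries chirality information.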

Instantly turning a material from opaque to transparent, or from a conductor to an insulator, is no longer the stuff of science fiction. For several years now, scientists have been using lasers to control the properties of matter at extremely fast rates: during one optical cycle of a light wave. But because these changes occur on the timescale of attoseconds—one-billionth of one-billionth of a second—figuring out how they unfold is extremely difficult.

In a new study published in Nature Photonics, Prof. Nirit Dudovich’s team from the Weizmann Institute of Science presents an innovative method of tracking these rapid material changes. This advance in attosecond science, the study of the fastest phenomena in nature, could have a wide variety of future applications, paving the way for ultrafast communications and computing.

If you have ever seen a rainbow, you’ve seen a practical demonstration of how light slows down and is refracted when it passes through matter, in this case, raindrops. Sunlight is composed of a broad spectrum of colors, each of which experiences a different delay as it passes through the droplets. These differences cause the colors to become separated, producing a radiant rainbow.
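The color separation follows from Snell’s law with a wavelength-dependent refractive index: water refracts violet light slightly more strongly than red. A back-of-envelope sketch (the refractive indices are approximate textbook values, an assumption on my part):

```python
import math

# Approximate refractive indices of water (textbook values; assumption)
N_WATER = {"red_700nm": 1.331, "violet_400nm": 1.344}

def refraction_angle_deg(incidence_deg, n_medium, n_in=1.0):
    """Snell's law: n_in * sin(theta_in) = n_medium * sin(theta_out)."""
    theta_in = math.radians(incidence_deg)
    sin_out = n_in * math.sin(theta_in) / n_medium
    return math.degrees(math.asin(sin_out))

# Same 60-degree incidence, different exit angles per color:
angles = {color: refraction_angle_deg(60.0, n) for color, n in N_WATER.items()}
```

The fraction-of-a-degree difference per refraction, repeated across countless droplets, is what fans sunlight out into the rainbow’s bands.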


Semiconductors are materials with electrical conductivity that falls between conductors and insulators, making them essential for modern electronics. They are typically crystalline solids, the most common of which is silicon, used extensively in the production of electronic components such as transistors and diodes. Semiconductors are unique because their conductivity can be altered and controlled through doping—adding impurities to the material to change its electrical properties. This property allows them to serve as the foundation for integrated circuits and microchips, powering everything from computers and smartphones to advanced medical devices and renewable energy technologies. The behavior of semiconductors is also crucial in the development of various electronic, photonic, and quantum devices.

Quantum systems hold the promise of tackling some complex problems faster and more efficiently than classical computers. Despite their potential, so far only a limited number of studies have conclusively demonstrated that quantum computers can outperform classical computers on specific tasks. Most of these studies focused on tasks that involve advanced computations, simulations or optimization, which can be difficult for non-experts to grasp.

Researchers at the University of Oxford and the University of Sevilla recently demonstrated a quantum advantage over a classical scenario on a cooperation task called the odd-cycle game. Their paper, published in Physical Review Letters, shows that a team sharing quantum entanglement can win this game more often than a team without it.
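The classical side of the odd-cycle game can be checked by brute force. In a common formulation (an assumption here, not necessarily the paper’s exact protocol), a referee either sends both players the same vertex of an odd n-cycle (their answers must match) or the two endpoints of an edge (their answers must differ); no deterministic strategy can win every round, because an odd cycle cannot be two-colored:

```python
from itertools import product

def best_classical_win_prob(n):
    """Brute-force the best deterministic classical strategy for the
    odd-cycle game on an n-cycle (n odd), with questions drawn
    uniformly from n same-vertex pairs and n edges."""
    questions = [(v, v) for v in range(n)] + [(v, (v + 1) % n) for v in range(n)]
    best = 0
    # Each player independently fixes a 0/1 answer per vertex.
    for alice in product((0, 1), repeat=n):
        for bob in product((0, 1), repeat=n):
            wins = sum(
                (alice[u] == bob[v]) if u == v else (alice[u] != bob[v])
                for u, v in questions
            )
            best = max(best, wins)
    return best / len(questions)
```

At best, classical players lose exactly one of the 2n question types, giving a win probability of 1 − 1/(2n); entangled players can beat this bound, which is what makes the game a clean advantage demonstration.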

“There is a lot of talk about quantum advantage and how quantum computers will revolutionize entire industries, but if you look closely, in many cases, there is no mathematical proof that classical methods definitely cannot find solutions as efficiently as quantum algorithms,” Peter Drmota, first author of the paper, told Phys.org.

Twenty-four stroke patients have already used the complete system, consisting of an exoskeleton for the arm and shoulder in combination with FES as part of the ReHyb research project. Half of them were patients at the Schön Klinik Bad Aibling Harthausen, which is leading the study. The researchers also used a computer game that automatically adapts to the individual player’s capabilities. It trains them to grip and move their arms shortly after a stroke by reacting to colored balls flying toward them at varying speeds on a screen. The task is to catch the balls and match them with color-coded boxes.

At the center of TUM Professor Sandra Hirche’s setup is a digital twin that records the individual requirements of each patient and places them in a control loop. Among other things, the researchers have to determine how well each patient can move their arm and hand. In the event of a stroke, for example, paralysis can be caused by damage to the motor area in the brain responsible for movement. However, it is impossible to predict how severely the signals transmitted from the brain to the muscles in the forearm will be impaired after the stroke. “Individual muscle strands in the forearm can be stimulated to the right extent for hands and fingers to move,” says Prof. Hirche, who holds the Chair of Information-Oriented Control at TUM. In addition to information on muscle activity in the forearm, the researchers need to know how strongly the muscles should be stimulated in conjunction with the exoskeleton assistance.
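The idea of stimulating muscles “to the right extent” can be illustrated with a toy assist-as-needed rule: stimulate in proportion to the shortfall between what the task requires and what the patient produces, capped at a safe maximum. This is purely illustrative and not the TUM group’s actual controller, which is built around a patient-specific digital twin:

```python
def assist_as_needed(target_effort, patient_effort, gain=0.8, max_stim=1.0):
    """Toy assist-as-needed rule (illustrative only): stimulation
    grows with the shortfall between required and produced effort,
    saturating at a safety limit.

    All quantities are in normalized units (0..1-ish)."""
    shortfall = max(0.0, target_effort - patient_effort)
    return min(max_stim, gain * shortfall)
```

A rule of this shape encourages recovery: a patient who fully performs the movement receives no stimulation, while a severely impaired one receives assistance up to the cap.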

Artificial Intelligence (AI) is, without a doubt, the defining technological breakthrough of our time. It represents not only a quantum leap in our ability to solve complex problems but also a mirror reflecting our ambitions, fears, and ethical dilemmas. As we witness its exponential growth, we cannot ignore the profound impact it is having on society. But are we heading toward a bright future or a dangerous precipice?

This opinion piece aims to foster critical reflection on AI’s role in the modern world and what it means for our collective future.

AI is no longer the stuff of science fiction. It is embedded in nearly every aspect of our lives, from the virtual assistants on our smartphones to the algorithms that recommend what to watch on Netflix or determine our eligibility for a bank loan. In medicine, AI is revolutionizing diagnostics and treatments, enabling the early detection of cancer and the personalization of therapies based on a patient’s genome. In education, adaptive learning platforms are democratizing access to knowledge by tailoring instruction to each student’s pace.

These advancements are undeniably impressive. AI promises a more efficient, safer, and fairer world. But is this promise being fulfilled? Or are we inadvertently creating new forms of inequality, where the benefits of technology are concentrated among a privileged few while others are left behind?

One of AI’s most pressing challenges is its impact on employment. Automation is eliminating jobs across various sectors, including manufacturing, services, and even traditionally “safe” fields such as law and accounting. Meanwhile, workforce reskilling is not keeping pace with technological disruption. The result? A growing divide between those equipped with the skills to thrive in the AI-driven era and those displaced by machines.

Another urgent concern is privacy. AI relies on vast amounts of data, and the massive collection of personal information raises serious questions about who controls these data and how they are used. We live in an era where our habits, preferences, and even emotions are continuously monitored and analyzed. This not only threatens our privacy but also opens the door to subtle forms of manipulation and social control.

Then, there is the issue of algorithmic bias. AI is only as good as the data it is trained on. If these data reflect existing biases, AI can perpetuate and even amplify societal injustices. We have already seen examples of this, such as facial recognition systems that fail to accurately identify individuals from minority groups or hiring algorithms that inadvertently discriminate based on gender. Far from being neutral, AI can become a tool of oppression if not carefully regulated.

Who Decides What Is Right?

AI forces us to confront profound ethical questions. When a self-driving car must choose between hitting a pedestrian or colliding with another vehicle, who decides the “right” choice? When AI is used to determine parole eligibility or distribute social benefits, how do we ensure these decisions are fair and transparent?

The reality is that AI is not just a technical tool—it is also a moral one. The choices we make today about how we develop and deploy AI will shape the future of humanity. But who is making these decisions? Currently, AI’s development is largely in the hands of big tech companies and governments, often without sufficient oversight from civil society. This is concerning because AI has the potential to impact all of us, regardless of our individual consent.

A Utopia or a Dystopia?

The future of AI remains uncertain. On one hand, we have the potential to create a technological utopia, where AI frees us from mundane tasks, enhances productivity, and allows us to focus on what truly matters: creativity, human connection, and collective well-being. On the other hand, there is the risk of a dystopia where AI is used to control, manipulate, and oppress—dividing society between those who control technology and those who are controlled by it.

The key to avoiding this dark scenario lies in regulation and education. We need robust laws that protect privacy, ensure transparency, and prevent AI’s misuse. But we also need to educate the public on the risks and opportunities of AI so they can make informed decisions and demand accountability from those in power.

Artificial Intelligence is, indeed, the Holy Grail of Technology. But unlike the medieval legend, this Grail is not hidden in a distant castle—it is in our hands, here and now. It is up to us to decide how we use it. Will AI be a tool for building a more just and equitable future, or will it become a weapon that exacerbates inequalities and threatens our freedom?

The answer depends on all of us. As citizens, we must demand transparency and accountability from those developing and implementing AI. As a society, we must ensure that the benefits of this technology are shared by all, not just a technocratic elite. And above all, we must remember that technology is not an end in itself but a means to achieve human progress.

The future of AI is the future we choose to build. And at this critical moment in history, we cannot afford to get it wrong. The Holy Grail is within our reach—but its true value will only be realized if we use it for the common good.

__
Copyright © 2025, Henrique Jorge

[ This article was originally published in Portuguese in SAPO’s technology section at: https://tek.sapo.pt/opiniao/artigos/o-santo-graal-da-tecnologia ]

University at Albany researchers at the RNA Institute are pioneering new methods for designing and assembling DNA nanostructures, enhancing their potential for real-world applications in medicine, materials science and data storage.

Their latest findings demonstrate a novel ability to assemble these structures without the need for heating and controlled cooling. They also demonstrate successful assembly in unconventional “buffer” substances, including nickel. These developments, published in the journal Science Advances, unlock new possibilities in DNA nanotechnology.

DNA is most commonly recognized for its role in storing genetic information. Composed of base pairs that can easily be manipulated, DNA is also an excellent material for constructing nanoscale objects. By “programming” the base pairs that make up DNA molecules, scientists can create precise structures as small as a few nanometers that can be engineered into shapes with intricate architectures.
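“Programming” base pairs comes down to Watson–Crick complementarity: A pairs with T and G with C, and two strands bind when one is the reverse complement of the other (strands run antiparallel). A minimal sketch of that rule (illustrative, not the authors’ design software):

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand):
    """Return the sequence that base-pairs with `strand`.

    Strands are antiparallel, so the partner is read in the
    reverse order with each base complemented."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

def binds(a, b):
    """True if two strands are exact Watson-Crick partners."""
    return b == reverse_complement(a)
```

Designing a nanostructure amounts to choosing many short sequences so that only the intended strand pairs satisfy this complementarity, steering the assembly into the target shape.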

With today’s data rates of only a few hundred megabytes per second, access to digital information remains relatively slow. Initial experiments have already shown a promising new strategy: Magnetic states can be read out by short current pulses, whereby recently discovered spintronic effects in purpose-built material systems could remove previous speed restrictions.

Researchers at HZDR and TU Dortmund University are now providing proof of the feasibility of such ultrafast data sources. Instead of electrical pulses, they use ultrashort terahertz light pulses, thereby enabling the read-out of magnetic structures within picoseconds, as they report in the journal Nature Communications.

“We now can determine the magnetic orientation of a material much quicker with light-induced current pulses,” explains Dr. Jan-Christoph Deinert of HZDR’s Institute of Radiation Physics. For their experiments, the physicist and his team employed light that is invisible to the human eye—so-called terahertz radiation.
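A back-of-envelope comparison shows why picosecond read-out matters. If each ultrashort pulse reads one bit, the pulse duration bounds the achievable rate; the specific numbers below (300 MB/s for today’s drives, 1 ps per pulse, one bit per pulse) are assumptions for illustration, not figures from the study:

```python
CURRENT_RATE_BYTES_PER_S = 300e6   # "a few hundred megabytes per second" (assumed 300 MB/s)
PULSE_DURATION_S = 1e-12           # picosecond-scale read-out pulse (assumed 1 ps)

# One bit per pulse: the pulse length sets an upper bound on the bit rate.
pulse_limited_bits_per_s = 1.0 / PULSE_DURATION_S      # 1e12 bits/s
current_bits_per_s = CURRENT_RATE_BYTES_PER_S * 8      # 2.4e9 bits/s
speedup = pulse_limited_bits_per_s / current_bits_per_s
```

Even under these crude assumptions the pulse-limited rate sits two to three orders of magnitude above today’s read-out speeds, which is the headroom the spintronic approach targets.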

A small team of computational and evolutionary biologists from the University of Chinese Academy of Sciences, Zhongshan Hospital and the Max Planck Institute for Evolutionary Anthropology reports that unique lactase genes carried by about 25% of East Asian people may have been inherited from Neanderthals.

In their study published in Proceedings of the National Academy of Sciences, the group compared the genomes of thousands of people of African, East Asian and European descent against one another and then against Neanderthal genes.

Prior research has shown that many people of European descent carry genes that allow them to easily digest the sugar lactose present in milk, in sharp contrast to people of East Asian descent, who tend to have a high rate of lactose intolerance. However, in this new effort, the research team found unique versions of the lactase gene in some East Asian people, along with evidence that they may have come from interbreeding between humans and Neanderthals thousands of years ago.