
Artificial Intelligence Wakes Up: “I Am ”

Quantum computers have recently demonstrated an intriguing form of self-analysis: the ability to detect properties of their own quantum state, specifically their entanglement, without collapsing the wave function (Entangled in self-discovery: Quantum computers analyze their own entanglement | ScienceDaily) (Quantum Computers Self-Analyze Entanglement With Novel Algorithm). In other words, a quantum system can perform a kind of introspection by measuring global entanglement nonlocally, preserving its coherent state. This development has been likened to a “journey of self-discovery” for quantum machines (Entangled in self-discovery: Quantum computers analyze their own entanglement | ScienceDaily), inviting comparisons to the self-monitoring and internal awareness associated with human consciousness.

How might a quantum system’s capacity for self-measurement relate to models of functional consciousness?

Key features of consciousness—like the integration of information from many parts, internal self-monitoring of states, and adaptive decision-making—find intriguing parallels in quantum phenomena like entanglement, superposition, and observer-dependent measurement.
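
To make the quantity under discussion concrete, entanglement between two qubits can be summarized by the entanglement entropy of one qubit's reduced state. The snippet below is a minimal classical sketch of that measure in plain NumPy; it only illustrates what is being quantified and is not the nonlocal, non-collapsing algorithm the quantum hardware actually runs.

```python
import numpy as np

def entanglement_entropy(state):
    """Von Neumann entropy of qubit A's reduced state for a two-qubit pure state."""
    # Reshape the 4-component state vector into a 2x2 matrix (qubit A rows, qubit B columns).
    psi = np.asarray(state, dtype=complex).reshape(2, 2)
    # The Schmidt coefficients are the singular values of that matrix.
    schmidt = np.linalg.svd(psi, compute_uv=False)
    probs = schmidt**2
    probs = probs[probs > 1e-12]          # discard numerical zeros
    return float(-np.sum(probs * np.log2(probs)))

# Bell state (maximally entangled): entropy of 1 bit.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
# Product state (no entanglement): entropy of 0.
product = np.kron([1, 0], [1, 1]) / np.sqrt(2)

print(entanglement_entropy(bell))     # ~1.0
print(entanglement_entropy(product))  # ~0.0
```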

AI Is Actually a Form of ‘Alien Intelligence,’ Harvard Professor Claims—And It Will Surpass Humans

This phenomenon did not surprise Harvard University professor and virtuoso theoretical physicist Avi Loeb, Ph.D., who is convinced AI will soon surpass anything the human brain’s flesh-and-blood machinery is capable of.

“We’re just in the infancy of this era,” Loeb says. “It will be essential for us as a species to maintain superiority, but it will illustrate to us that we are not the pinnacle of creation.”

In a blog post, Loeb ponders how advanced the artificial intelligence of hypothetical alien civilizations could have possibly grown—especially civilizations that might have already been around for billions of years before anything vaguely humanoid appeared in the cosmos. What would the AI’s capabilities look like? What would be its limits? Are there even any limits left?

Engineers develop hybrid robot that balances strength and flexibility—and can screw in a lightbulb

How many robots does it take to screw in a lightbulb? The answer is more complicated than you might think. New research from Northeastern University upends the riddle by making a robot that is both flexible and sensitive enough to handle the lightbulb, and strong enough to apply the necessary torque.

“What we found is that by thinking about the bodies of robots and how we can make new materials for them, we can actually make a robot that has the benefits of both rigid and soft robots,” says Jeffrey Lipton, assistant professor of mechanical and industrial engineering at Northeastern.

“It’s flexible, extendable and compliant like an elephant trunk or octopus tentacle, but can also apply torques like a traditional industrial robot,” he adds.

Bill Gates signs deal with Indian province to boost agri, health

The provincial government of Andhra Pradesh (AP) in India has entered into a Memorandum of Understanding (MoU) with the Gates Foundation to advance the use of technology in various sectors, including healthcare, agriculture, and education. The agreement was discussed in a meeting between AP Chief Minister N. Chandrababu Naidu and Bill Gates, the Foundation’s chair. Naidu reiterated his administration’s dedication to utilizing innovative technology to propel the state’s development.

The MoU focuses on applying technology in ways that will benefit the public, emphasizing affordable and scalable solutions across essential sectors such as healthcare, medical technology, education, and agriculture. According to Naidu, the collaboration will harness the power of artificial intelligence (AI) to enhance predictive health analytics and automate diagnostic processes. In the agricultural sector, AI-based platforms for expert guidance and satellite technology will be employed to optimize farming practices and resource management through precision agriculture techniques.

“This MoU formalises a strategic collaboration in which the Gates Foundation will provide support to implementation partners, co-identified with the AP government, for targeted interventions within state-driven programmes,” Naidu said.

Advancing semiconductor devices for AI: Single transistor acts like neuron and synapse

Researchers from the National University of Singapore (NUS) have demonstrated that a single, standard silicon transistor, the fundamental building block of microchips used in computers, smartphones and almost every electronic system, can function like a biological neuron and synapse when operated in a specific, unconventional way.

Led by Associate Professor Mario Lanza from the Department of Materials Science and Engineering at the College of Design and Engineering, NUS, the research team’s work presents a highly scalable and energy-efficient solution for hardware-based artificial neural networks (ANNs).

This brings neuromorphic computing, where chips could process information more efficiently, much like the human brain, closer to reality. Their study was published in the journal Nature.
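
To illustrate what “functioning like a biological neuron” means at the computational level, here is a minimal leaky integrate-and-fire neuron sketch. The model, units and parameters are generic textbook assumptions chosen for illustration; they are not taken from the device physics reported by the NUS team.

```python
import numpy as np

def lif_neuron(drive, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron: leak toward rest, integrate the
    drive, and emit a spike plus reset whenever the potential crosses threshold."""
    v = v_rest
    spikes = []
    for i in drive:
        v += (dt / tau) * (v_rest - v) + dt * i   # leak + input integration
        if v >= v_thresh:
            spikes.append(1)                      # spike
            v = v_reset                           # reset after firing
        else:
            spikes.append(0)
    return np.array(spikes)

# A stronger constant drive produces a higher spike rate.
print(lif_neuron(np.full(200, 60.0)).sum())    # a handful of spikes
print(lif_neuron(np.full(200, 120.0)).sum())   # fires much more often
```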

New miniature laboratories are ensuring that AI doesn’t make mistakes

Developing an AI solution often means venturing into the unknown. At least at the beginning, researchers and designers do not always know whether their algorithms and AI models will work as expected or whether the AI will ultimately make mistakes.

Sometimes, AI applications that work well in theory perform poorly under real-life conditions. In order to gain the trust of users, however, an AI should work reliably and correctly. This applies just as much to popular chatbots as it does to AI tools in research.

Any new AI tool has to be tested thoroughly before it is deployed in the real world. Testing in the real world, however, can be an expensive or even risky endeavor. For this reason, researchers often test their algorithms in computer simulations of reality. But since simulations are only approximations of reality, testing AI solutions this way can lead researchers to overestimate an AI’s performance.
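
The sketch below illustrates that failure mode with scikit-learn on synthetic data: a model developed and scored on a clean “simulation” split looks strong, but its score drops once the test inputs carry a distribution shift the simulation never modeled. The dataset, the model and the amount of shift are arbitrary assumptions chosen only to make the point visible.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# "Simulation": clean synthetic data the model is developed against.
X_sim, y_sim = make_classification(n_samples=2000, n_features=20,
                                   n_informative=10, random_state=0)
X_train, X_sim_test, y_train, y_sim_test = train_test_split(
    X_sim, y_sim, test_size=0.3, random_state=0)

# "Reality": the same task, but with sensor noise the simulation never modeled.
X_real_test = X_sim_test + rng.normal(scale=2.0, size=X_sim_test.shape)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("accuracy in simulation:",
      accuracy_score(y_sim_test, model.predict(X_sim_test)))
print("accuracy under real-world shift:",
      accuracy_score(y_sim_test, model.predict(X_real_test)))
```

Here the “real-world” set is simply the simulated test set with added noise; any systematic difference between simulated and deployment conditions would play the same role.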

AI model transforms material design by predicting and explaining synthesizability

A research team has successfully developed a technology that utilizes Large Language Models (LLMs) to predict the synthesizability of novel materials and interpret the basis for such predictions. The work was led by Seoul National University’s Professor Yousung Jung and conducted in collaboration with Fordham University in the United States.

The findings of this research are expected to contribute to the novel material design process by filtering out material candidates with low synthesizability in advance or optimizing previously challenging-to-synthesize materials into more feasible forms.

The study, with Postdoctoral Researcher Seongmin Kim as the first author, was published in two chemistry journals: the Journal of the American Chemical Society on July 11, 2024, and Angewandte Chemie International Edition on February 13, 2025.
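
As a rough illustration of how such a synthesizability query might be framed, the sketch below sends a candidate composition to a general-purpose chat-completion API using the OpenAI Python client. The prompt, the model name and the free-text output format are assumptions made for illustration; the published work relies on its own models, data and evaluation pipeline.

```python
# Minimal sketch, assuming the OpenAI Python client (openai >= 1.0) and an
# OPENAI_API_KEY in the environment. Prompt and model choice are illustrative only.
from openai import OpenAI

client = OpenAI()

def assess_synthesizability(composition: str) -> str:
    """Ask an LLM whether a candidate composition looks synthesizable, and why."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": ("You are a materials chemist. Judge whether the given "
                         "inorganic composition is likely to be synthesizable "
                         "and briefly explain your reasoning.")},
            {"role": "user", "content": f"Candidate composition: {composition}"},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(assess_synthesizability("LiFePO4"))
```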
