
AI agent autonomously solves complex cybersecurity challenges using text-based tools

Artificial intelligence agents—AI systems that can work independently toward specific goals without constant human guidance—have demonstrated strong capabilities in software development and web navigation. Their effectiveness in cybersecurity has remained limited, however.

That may soon change, thanks to a research team from NYU Tandon School of Engineering, NYU Abu Dhabi and other universities that developed an AI agent capable of autonomously solving complex cybersecurity challenges.

The system, called EnIGMA, was presented this month at the International Conference on Machine Learning (ICML) 2025 in Vancouver, Canada.

Nanotechnology in AI: Building Faster, Smaller, and Smarter Systems

As artificial intelligence (AI) rapidly advances, the physical limitations of conventional semiconductor hardware have become increasingly apparent. AI models today demand vast computational resources, high-speed processing, and extreme energy efficiency—requirements that traditional silicon-based systems struggle to meet. However, nanotechnology is stepping in to reshape the future of AI by offering solutions that are faster, smaller, and smarter at the atomic scale.

The recent article published by AZoNano provides a compelling overview of how nanotechnology is revolutionizing the design and operation of AI systems, pushing beyond the constraints of Moore’s Law and Dennard scaling. Through breakthroughs in neuromorphic computing, advanced memory devices, spintronics, and thermal management, nanomaterials are enabling the next generation of intelligent systems.

AI can evolve to feel guilt—but only in certain social environments

Guilt is a highly advantageous quality for society as a whole. It might not prevent an initial wrongdoing, but guilt allows people to judge their own past actions as harmful and helps keep them from being repeated. The internal distress caused by guilt often—but not always—leads the person to take on some kind of penance to relieve that turmoil. This might be something as simple as admitting the wrongdoing to others and accepting the slight stigma of being seen as morally flawed. That upfront cost can be painful at first, but it relieves further guilt and can lead to better cooperation within the group in the future.

As we interact more and more with AI and use it in almost every aspect of modern society, finding ways to instill ethical decision-making in these systems becomes more critical. In a recent study, published in the Journal of the Royal Society Interface, researchers explored how and when guilt can evolve in multi-agent systems.

The researchers used the “prisoner’s dilemma”—a game in which two players must each choose between cooperating and defecting. Defecting gives an agent a higher payoff, but it means betraying the partner, which in turn makes it more likely that the partner will also defect. However, if the game is repeated over and over, mutual cooperation results in a better payoff for both agents.
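To make that payoff structure concrete, here is a minimal Python sketch comparing repeated defection with repeated cooperation. The payoff values (5, 3, 1, 0) are the conventional textbook choices, not numbers taken from the study, and the code is an illustration rather than the researchers’ model.

# Standard prisoner's dilemma payoffs (conventional textbook values,
# not taken from the study): temptation 5, reward 3, punishment 1, sucker 0.
PAYOFFS = {            # (my move, partner's move) -> my payoff
    ("C", "C"): 3,     # both cooperate
    ("C", "D"): 0,     # I cooperate, partner defects
    ("D", "C"): 5,     # I defect, partner cooperates
    ("D", "D"): 1,     # both defect
}

def total_payoff(my_moves, partner_moves):
    """Sum one player's payoff over repeated rounds of the game."""
    return sum(PAYOFFS[(mine, theirs)] for mine, theirs in zip(my_moves, partner_moves))

# A single defection against a cooperator beats cooperating (5 vs. 3)...
print(total_payoff(["D"], ["C"]), total_payoff(["C"], ["C"]))

# ...but over ten repeated rounds, two players who always defect each earn far
# less than two players who always cooperate (10 vs. 30).
rounds = 10
print(total_payoff(["D"] * rounds, ["D"] * rounds))   # 10
print(total_payoff(["C"] * rounds, ["C"] * rounds))   # 30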

Liquid droplets trained to play tic-tac-toe

Artificial intelligence and high-performance computing are driving up the demand for massive sources of energy. But neuromorphic computing, which aims to mimic the structure and function of the human brain, could present a new paradigm for energy-efficient computing.

To this end, researchers at Lawrence Livermore National Laboratory (LLNL) created a droplet-based platform that uses ions to perform simple neuromorphic computations. Using its ability to retain memory, the team trained the droplet system to recognize handwritten digits and play tic-tac-toe. The work was published in Science Advances.

The authors were inspired by the brain, which computes with ions instead of electrons. Ions move through fluids, and moving them may require less energy than moving electrons in solid-state devices.

Stitched for strength: The physics of jamming in stiff, knitted fabrics

School of Physics Associate Professor Elisabetta Matsumoto is unearthing the secrets of the centuries-old practice of knitting through experiments, models, and simulations. Her goal? Leveraging knitting for breakthroughs in advanced manufacturing—including more sustainable textiles, wearable electronics, and soft robotics.

Matsumoto, who is also a principal investigator at the International Institute for Sustainability with Knotted Chiral Meta Matter (WPI-SKCM2) at Hiroshima University, is the corresponding author on a new study exploring the physics of ‘jamming’—a phenomenon in which soft or stretchy materials become rigid under low stress but soften under higher tension.

The study, “Pulling Apart the Mechanisms That Lead to Jammed Knitted Fabrics,” is published in Physical Review E, and also includes Georgia Tech Matsumoto Group graduate students Sarah Gonzalez and Alexander Cachine in addition to former postdoctoral fellow Michael Dimitriyev, who is now an assistant professor at Texas A&M University.

Microsoft in talks to maintain access to OpenAI’s tech beyond AGI milestone

Microsoft is reportedly in advanced talks with OpenAI for a new agreement that would give it ongoing access to the startup’s technology even if OpenAI achieves what it defines as AGI, or artificial general intelligence. If the deal goes through, it would clear a key hurdle in OpenAI’s transition toward becoming a fully commercial enterprise.

The companies have been negotiating regularly and could reach an agreement within a few weeks, Bloomberg reports, citing three anonymous sources. According to some of the sources, the talks have been positive, but roadblocks could still emerge in the form of regulatory scrutiny and Elon Musk’s lawsuit seeking to block OpenAI’s for-profit transition.

OpenAI is currently structured as a mission-driven nonprofit that oversees a capped for-profit company — a setup that’s meant to limit how fully it can commercialize or raise money. That structure hasn’t stopped it from raising billions and operating like a traditional tech company, but OpenAI still wants to shake off its constraints.

From AI to Organoids: How Growing Brain-like Structures are Advancing Machine Learning

Artificial Intelligence (AI) is usually built with silicon chips and code. But scientists are now exploring something very different. In 2025, they are growing brain organoids, which are small, living structures made from human stem cells. These organoids act like simple versions of the human brain. They form real neural connections and send electrical signals. They even show signs of learning and memory.

By linking organoids with AI systems, researchers are beginning to explore new computational approaches. Recent studies have shown that organoids possess the ability to recognize speech, detect patterns, and respond to input. Living brain tissue may help create AI models that learn and adapt faster than traditional machines. Early results indicate that organoid-based systems could offer a more flexible and energy-efficient form of intelligence.

Brain Organoids and the Emergence of Organoid Intelligence.
