
Researchers have developed a new AI algorithm, Torque Clustering, that mimics natural intelligence more closely than existing methods, greatly enhancing an AI system’s ability to learn and identify patterns in data on its own, without human input.

Torque Clustering is designed to efficiently analyze large datasets across various fields, including biology, chemistry, astronomy, psychology, finance, and medicine. By uncovering hidden patterns, it can provide valuable insights, such as detecting disease trends, identifying fraudulent activities, and understanding human behavior.
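To make the task concrete, here is a stand-in sketch of unsupervised pattern discovery using scikit-learn; Torque Clustering’s own implementation is not shown here, so an ordinary clustering algorithm is used purely for illustration:

```python
# Illustrative stand-in only: Torque Clustering's own API is not shown
# here, so scikit-learn is used to show the same kind of task --
# grouping data with no human-provided labels.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs

# Synthetic measurements containing three hidden groups; labels withheld.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# The algorithm must recover the grouping on its own.
labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```

Note that a fully autonomous method like Torque Clustering also infers the number of clusters by itself, which this stand-in does not.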

To test their distributed quantum computing system, the team executed what is known as Grover’s search algorithm, first described by Indian-American computer scientist Lov Grover in 1996. Grover’s algorithm searches for a particular item in a large, unstructured dataset, exploiting superposition and entanglement to examine entries in parallel. It also exhibits a quadratic speedup: the number of steps a quantum computer needs grows only with the square root of the number of entries, rather than linearly. The authors report that the system achieved a 71 percent success rate.
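A back-of-the-envelope illustration of that quadratic speedup (not the team’s code): Grover’s algorithm needs roughly (π/4)·√N iterations to search N unsorted entries, versus on the order of N classical queries.

```python
import math

def grover_iterations(n_items: int) -> int:
    """Optimal number of Grover iterations for unstructured search
    over n_items entries: roughly (pi / 4) * sqrt(N)."""
    return math.floor((math.pi / 4) * math.sqrt(n_items))

for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"N = {n:>13,}: ~{n:,} classical queries vs "
          f"~{grover_iterations(n):,} Grover iterations")
```

For a billion entries, the quantum search needs only about 25,000 iterations.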

While operating a successful distributed system is a big step forward for quantum computing, the team reiterates that the engineering challenges remain daunting. However, networking together quantum processors into a distributed network using quantum teleportation provides a small glimmer of light at the end of a long, dark quantum computing development tunnel.

“Scaling up quantum computers remains a formidable technical challenge that will likely require new physics insights as well as intensive engineering effort over the coming years,” David Lucas, principal investigator of the study from Oxford University, said in a press statement. “Our experiment demonstrates that network-distributed quantum information processing is feasible with current technology.”

Artificial Intelligence (AI) is revolutionizing industries globally, and medical education is no exception. For a nation like India, where the healthcare system faces immense pressure, AI integration in medical learning is more than a convenience; it is a necessity. AI-powered tools offer medical students transformative benefits: personalized learning pathways that adapt to individual knowledge gaps, advanced clinical simulation platforms for risk-free practice, intelligent tutoring systems that provide immediate feedback, and sophisticated diagnostic training algorithms that enhance clinical reasoning skills. From offering personalized guidance to transforming clinical training, chatbots and digital assistants are redefining how future healthcare professionals prepare for their complex and demanding roles, enabling more efficient, interactive, and comprehensive medical education.

Personalized learning

One of AI’s greatest contributions to medical education is its ability to create and extend personalized learning experiences. Conventional methods, by contrast, often take a one-size-fits-all approach, leaving students to fend for themselves when they struggle. AI can change this by analyzing a student’s performance and crafting study plans tailored to their strengths and weaknesses. This means students can focus on the areas where they need the most help, saving time and effort.
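As a toy illustration of the idea (hypothetical data and scoring, not any particular product’s method), a system can rank topics by a student’s error rate so the weakest areas come first in the study plan:

```python
# Hypothetical sketch: order study topics from weakest to strongest
# by per-topic accuracy, so the plan targets knowledge gaps first.

# Per-topic (correct, attempted) counts for one student -- made-up data.
performance = {
    "cardiology":   (42, 60),
    "pharmacology": (18, 50),
    "anatomy":      (55, 60),
    "biochemistry": (30, 55),
}

def study_plan(perf: dict[str, tuple[int, int]]) -> list[str]:
    """Order topics from weakest to strongest by accuracy."""
    return sorted(perf, key=lambda t: perf[t][0] / perf[t][1])

print(study_plan(performance))
# ['pharmacology', 'biochemistry', 'cardiology', 'anatomy']
```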

Breyt Coakley, Principal Investigator at Helios Remote Sensing Systems, Inc., discusses Cognitive Software Algorithm Techniques for Electronic Warfare. Helios is developing machine learning algorithms to detect agile emitters not yet in Signals Intelligence (SIGINT) databases, without fragmentation. Traditional deinterleaving fragments these emitters into multiple unknown emitters or, even worse, misidentifies them as matching multiple incorrect SIGINT database entries.
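Helios’s actual algorithms are not public; purely to illustrate the underlying task, here is a toy deinterleaving sketch that clusters intercepted pulses by assumed features (radio frequency and pulse width):

```python
# Toy illustration of pulse deinterleaving (not Helios's method):
# group intercepted pulses by RF frequency and pulse width so that
# each cluster hopefully corresponds to one emitter.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic pulse descriptors from two emitters: (frequency MHz, width us).
emitter_a = rng.normal([9400.0, 1.0], [5.0, 0.05], size=(40, 2))
emitter_b = rng.normal([9600.0, 2.5], [5.0, 0.05], size=(40, 2))
pulses = np.vstack([emitter_a, emitter_b])

labels = DBSCAN(eps=0.5).fit_predict(StandardScaler().fit_transform(pulses))
print("emitters found:", len(set(labels) - {-1}))
```

A frequency-agile emitter would defeat this naive feature clustering, splitting across several clusters, which is exactly the fragmentation problem described above.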

How can machine learning help determine the best times and ways to use solar energy? This is what a recent study published in Advances in Atmospheric Sciences hopes to address, as a team of researchers from the Karlsruhe Institute of Technology investigated how machine learning algorithms can be used to predict and forecast weather patterns to enable more cost-effective use of solar energy. The study could help enhance renewable energy technologies by correcting errors common in current weather prediction models, leading to more efficient use of solar power by predicting when weather conditions will make sunlight available for solar energy needs.

For the study, the researchers used a combination of statistical methods and machine learning algorithms to predict the times of day when photovoltaic (PV) power generation will achieve maximum output. Their methods rely on what’s known as post-processing, which corrects weather forecasting errors before that data enters PV models, yielding more accurate PV output predictions.

“One of our biggest takeaways was just how important the time of day is,” said Dr. Sebastian Lerch, who is a professor at the Karlsruhe Institute of Technology and a co-author on the study. “We saw major improvements when we trained separate models for each hour of the day or fed time directly into the algorithms.”
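To make the post-processing idea concrete, here is a minimal sketch of the hour-specific correction Dr. Lerch describes, with synthetic data standing in for the study’s actual forecasts and observations:

```python
# Minimal sketch of hour-specific forecast post-processing (synthetic
# data, not the study's models): learn a separate linear correction of
# forecast irradiance for each hour of the day.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
hours = rng.integers(0, 24, size=2000)
forecast = rng.uniform(0, 800, size=2000)  # raw forecast irradiance, W/m^2
# Pretend the true forecast bias depends on the hour (worst around midday).
observed = forecast * (1 - 0.1 * np.sin(np.pi * hours / 24)) \
           + rng.normal(0, 20, 2000)

# One post-processing model per hour of the day.
models = {
    h: LinearRegression().fit(forecast[hours == h, None], observed[hours == h])
    for h in range(24)
}

def corrected(hour: int, raw: float) -> float:
    """Apply the hour-specific correction to a raw forecast value."""
    return float(models[hour].predict([[raw]])[0])

print(corrected(12, 500.0))  # midday forecasts get the largest correction
```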

What if time is not as fixed as we thought? Imagine that instead of flowing in one direction—from past to future—time could flow forward or backwards due to processes taking place at the quantum level. This is the thought-provoking discovery made by researchers at the University of Surrey, as a new study reveals that opposing arrows of time can theoretically emerge from certain quantum systems.

For centuries, scientists have puzzled over the arrow of time—the idea that time flows irreversibly from past to future. While this seems obvious in our experienced reality, the underlying laws of physics do not inherently favor a single direction. Whether time moves forward or backwards, the equations remain the same.

Dr. Andrea Rocco, Associate Professor in Physics and Mathematical Biology at the University of Surrey and lead author of the study, said, “One way to explain this is when you look at a process like spilled milk spreading across a table, it’s clear that time is moving forward. But if you were to play that in reverse, like a movie, you’d immediately know something was wrong—it would be hard to believe milk could just gather back into a glass.”

A game of chess requires its players to think several moves ahead, a skill that computer programs have mastered over the years. In 1997, IBM’s Deep Blue supercomputer famously beat then world chess champion Garry Kasparov. Later, in 2017, an artificial intelligence (AI) program developed by Google DeepMind, called AlphaZero, triumphed over the best computerized chess engines of the time after training itself to play the game in a matter of hours.

More recently, some mathematicians have begun to actively pursue the question of whether AI programs can also help in cracking some of the world’s toughest problems. But, whereas an average game of chess lasts about 30 to 40 moves, these research-level math problems require solutions that take a million or more steps, or moves.

In a paper appearing on the arXiv preprint server, a team led by Caltech’s Sergei Gukov, the John D. MacArthur Professor of Theoretical Physics and Mathematics, describes developing a new type of machine-learning algorithm that can solve math problems requiring extremely long sequences of steps. The team used their algorithm to solve families of problems related to an overarching decades-old math problem called the Andrews–Curtis conjecture. In essence, the algorithm can think farther ahead than even advanced programs like AlphaZero.

Quantum computing is an alternative computing paradigm that exploits the principles of quantum mechanics to enable intrinsic and massive parallelism in computation. This potential quantum advantage could have significant implications for the design of future computational intelligence systems, where the increasing availability of data will necessitate ever-increasing computational power. However, in the current NISQ (Noisy Intermediate-Scale Quantum) era, quantum computers face limitations in qubit quality, coherence, and gate fidelity. Computational intelligence can play a crucial role in optimizing and mitigating these limitations by enhancing error correction, guiding quantum circuit design, and developing hybrid classical-quantum algorithms that maximize the performance of NISQ devices. This webinar aims to explore the intersection of quantum computing and computational intelligence, focusing on efficient strategies for using NISQ-era devices in the design of quantum-based computational intelligence systems.
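As a toy illustration of the hybrid classical-quantum loop mentioned above, here is a NumPy simulation of a classical optimizer tuning a single simulated qubit (a sketch of the concept, not code for real NISQ hardware):

```python
# Toy hybrid classical-quantum loop (simulated, not real hardware):
# a classical optimizer tunes the angle of one RY rotation so the
# "measured" energy <Z> of the qubit is minimized.
import numpy as np

def energy(theta: float) -> float:
    """<psi(theta)|Z|psi(theta)> for |psi> = RY(theta)|0>."""
    # RY(theta)|0> = [cos(theta/2), sin(theta/2)], so <Z> = cos(theta).
    return np.cos(theta / 2) ** 2 - np.sin(theta / 2) ** 2

theta, lr = 0.3, 0.2
for _ in range(100):
    # Parameter-shift rule: exact gradient from two circuit evaluations.
    grad = 0.5 * (energy(theta + np.pi / 2) - energy(theta - np.pi / 2))
    theta -= lr * grad

print(f"theta = {theta:.3f}, energy = {energy(theta):.3f}")  # ~pi, -1.0
```

On real NISQ devices the same loop runs with noisy sampled expectation values, which is where the error-mitigation and circuit-design strategies discussed in the webinar come in.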

Speaker Biography:
Prof. Giovanni Acampora is a Professor of Artificial Intelligence and Quantum Computing at the Department of Physics “Ettore Pancini,” University of Naples Federico II, Italy. He earned his M.Sc. (cum laude) and Ph.D. in Computer Science from the University of Salerno. His research focuses on computational intelligence and quantum computing. He is Chair of the IEEE-SA 1855 Working Group and Founder and Editor-in-Chief of Quantum Machine Intelligence. Acampora has received multiple awards, including the IEEE-SA Emerging Technology Award, the IBM Quantum Experience Award, and the Fujitsu Quantum Challenge Award, for his contributions to computational intelligence and quantum AI.

David Furman, an immunologist and data scientist at the Buck Institute for Research on Aging and Stanford University, uses artificial intelligence to parse big data to identify interventions for healthy aging.


David Furman uses computational power, collaborations, and cosmic inspiration to tease apart the role of the immune system in aging.