
As Shor looked for applications for his quantum period-finding algorithm, he rediscovered a previously known but obscure mathematical theorem: For every number, there exists a periodic function whose periods are related to the number’s prime factors. So if there’s a number you want to factor, you can compute the corresponding function and then solve the problem using period finding — “exactly what quantum computers are so good at,” Regev said.

On a classical computer, this would be an agonizingly slow way to factor a large number — slower even than trying every possible factor. But Shor’s method speeds up the process exponentially, making period finding an ideal way to construct a fast quantum factoring algorithm.
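As a concrete illustration, here is a toy classical version of that reduction in Python (the numbers and names are invented for the example). The period-finding step is done by brute force, which is exactly the exponentially slow part that Shor's algorithm replaces with quantum period finding:

```python
import random
from math import gcd

def find_order(a, n):
    """Smallest r > 0 with a^r ≡ 1 (mod n), found by brute force here.
    This search is the exponentially slow step that Shor's algorithm
    performs efficiently on a quantum computer."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_period_finding(n):
    """Recover a nontrivial factor of n from the period of f(x) = a^x mod n."""
    while True:
        a = random.randrange(2, n)
        if gcd(a, n) > 1:
            return gcd(a, n)          # lucky draw: a already shares a factor
        r = find_order(a, n)
        if r % 2 == 0:
            y = pow(a, r // 2, n)     # a "square root of 1" modulo n
            d = gcd(y - 1, n)
            if 1 < d < n:
                return d              # nontrivial factor found

print(factor_via_period_finding(3127))  # e.g. 53 (3127 = 53 * 59)
```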

Shor’s algorithm was one of a few key early results that transformed quantum computing from an obscure subfield of theoretical computer science to the juggernaut it is today. But putting the algorithm into practice is a daunting task, because quantum computers are notoriously susceptible to errors: In addition to the qubits required to perform their computations, they need many others doing extra work to keep them from failing. A recent paper by Ekerå and the Google researcher Craig Gidney estimates that using Shor’s algorithm to factor a security-standard 2,048-bit number (about 600 digits long) would require a quantum computer with 20 million qubits. Today’s state-of-the-art machines have at most a few hundred.

Jailbroken large language models (LLMs) and generative AI chatbots — the kind any hacker can access on the open Web — are capable of providing in-depth, accurate instructions for carrying out large-scale acts of destruction, including bio-weapons attacks.

An alarming new study from RAND, the US nonprofit think tank, offers a canary in the coal mine for how bad actors might weaponize this technology in the (possibly near) future.

In an experiment, experts asked an uncensored LLM to plot out theoretical biological weapons attacks against large populations. The model was detailed in its response and more than forthcoming with advice on how to cause the most damage possible and how to acquire the relevant chemicals without raising suspicion.

The new book Minding the Brain from Discovery Institute Press is an anthology in which 25 renowned philosophers, scientists, and mathematicians seek to address that question. Contributor Angus Menuge, a philosopher from Concordia University Wisconsin, argues that materialism shouldn't be the only option for how we think about ourselves or the universe at large. He writes:

Neuroscience in particular has implicitly dualist commitments, because the correlation of brain states with mental states would be a waste of time if we did not have independent evidence that these mental states existed. It would make no sense, for example, to investigate the neural correlates of pain if we did not have independent evidence of the existence of pain from the subjective experience of what it is like to be in pain. This evidence, though, is not scientific evidence: it depends on introspection (the self becomes aware of its own thoughts and experiences), which again assumes the existence of mental subjects. Further, Richard Swinburne has argued that scientific attempts to show that mental states are epiphenomenal are self-refuting, since they require that mental states reliably cause our reports of being in those states. The idea, therefore, that science has somehow shown the irrelevance of the mind to explaining behavior is seriously confused.

The AI optimists can’t get away from the problem of consciousness. Nor can they ignore the unique capacity of human beings to reflect back on themselves and ask questions that are peripheral to their survival needs. Functions like that can’t be defined algorithmically or by a materialistic conception of the human person. To counter the idea that computers can be conscious, we must cultivate an understanding of what it means to be human. Then maybe all the technology humans create will find a more modest, realistic place in our lives.

Computer vision algorithms have become increasingly advanced over the past decades, enabling the development of sophisticated technologies to monitor specific environments, detect objects of interest in video footage and uncover suspicious activities in CCTV recordings. Some of these algorithms are specifically designed to detect and isolate moving objects or people of interest in a video, a task known as moving target segmentation.

While some conventional algorithms for moving target segmentation have attained promising results, most of them perform poorly in real time (i.e., when analyzing video as it is being captured rather than pre-recorded footage). Some research teams have thus been trying to tackle this problem using alternative types of algorithms, such as quantum algorithms.
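For context, the classical task can be sketched with a minimal frame-differencing baseline in NumPy (the sizes, threshold, and synthetic frames below are invented for illustration, not drawn from the paper). Per-pixel comparisons like this are what become costly at real-time frame rates on large videos:

```python
import numpy as np

def segment_moving_targets(prev_frame, frame, threshold=25):
    """Classical frame-differencing baseline for moving target segmentation:
    pixels whose grayscale intensity changes by more than `threshold`
    between consecutive frames are marked as moving."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)  # binary mask: 1 = moving

# Synthetic demo: a bright 8x8 "target" shifts 3 pixels to the right.
prev_frame = np.zeros((64, 64), dtype=np.uint8)
frame = np.zeros((64, 64), dtype=np.uint8)
prev_frame[20:28, 20:28] = 200
frame[20:28, 23:31] = 200

mask = segment_moving_targets(prev_frame, frame)
print(mask.sum(), "pixels flagged as moving")
```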

Researchers at Nanjing University of Information Science and Technology and Southeast University in China recently developed a new quantum algorithm for the segmentation of moving targets in grayscale videos. This algorithm, published in Advanced Quantum Technologies, was found to outperform classical approaches in tasks that involve analyzing videos in real time.

Robotic dogs face a massive hurdle: autonomous navigation in crowded spaces. Robot navigation in crowds has applications in various fields, including shopping mall service robots, transportation, and healthcare.

To facilitate rapid and efficient movement, it is crucial to develop new methods that enable robots to navigate crowded spaces and obstacles safely. One such approach is based on reinforcement learning (RL) algorithms that allow for quick robot movement.
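The article gives no implementation details, but the basic reinforcement learning loop behind such navigation can be sketched. The toy Q-learning agent below learns to reach a goal on a small grid while avoiding obstacles; the grid, rewards, and hyperparameters are all invented for illustration and are not the method used for the robotic dogs:

```python
import random
import numpy as np

GRID = 5
OBSTACLES = {(1, 2), (2, 2), (3, 1)}
GOAL = (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

Q = np.zeros((GRID, GRID, len(ACTIONS)))      # state-action value table
alpha, gamma, epsilon = 0.5, 0.9, 0.2         # illustrative hyperparameters

def step(state, action):
    """Move within the grid; bumping a wall or obstacle keeps the state."""
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < GRID and 0 <= c < GRID) or (r, c) in OBSTACLES:
        return state, -1.0           # collision penalty
    if (r, c) == GOAL:
        return (r, c), 10.0          # goal reward
    return (r, c), -0.1              # small cost per move encourages speed

for _ in range(2000):                # training episodes
    s = (0, 0)
    while s != GOAL:
        # epsilon-greedy action selection
        a = (random.randrange(len(ACTIONS)) if random.random() < epsilon
             else int(np.argmax(Q[s[0], s[1]])))
        s2, reward = step(s, ACTIONS[a])
        # standard Q-learning update
        Q[s[0], s[1], a] += alpha * (
            reward + gamma * np.max(Q[s2[0], s2[1]]) - Q[s[0], s[1], a])
        s = s2

print("Greedy action from start:", ACTIONS[int(np.argmax(Q[0, 0]))])
```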



The CIA and other US intelligence agencies will soon have an AI chatbot similar to ChatGPT. The program, revealed on Tuesday by Bloomberg, will train on publicly available data and provide sources alongside its answers so agents can confirm their validity. The aim is for US spies to more easily sift through ever-growing troves of information, although the exact nature of what constitutes “public data” could spark some thorny privacy issues.

“We’ve gone from newspapers and radio, to newspapers and television, to newspapers and cable television, to basic internet, to big data, and it just keeps going,” Randy Nixon, the CIA’s director of Open Source Enterprise, said in an interview with Bloomberg. “We have to find the needles in the needle field.” Nixon’s division plans to distribute the AI tool to US intelligence agencies “soon.”


The CIA confirmed that it’s developing an AI chatbot for all 18 US intelligence agencies to quickly parse troves of ‘publicly available’ data.
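The tool's design is not public, but the “provide sources alongside its answers” behavior resembles a retrieval step that cites the documents an answer draws on. A deliberately simplified sketch, with the corpus, scoring, and answer format all invented for illustration:

```python
# Toy "answers with sources": retrieve the best-matching public documents
# for a query and cite them alongside the response. Everything here is a
# hypothetical stand-in; the actual system's design is not public.
CORPUS = {
    "doc-001": "Open-source reporting on regional shipping traffic.",
    "doc-002": "Public satellite imagery analysis of port construction.",
    "doc-003": "News coverage of new rail links near the border.",
}

def retrieve(query, k=2):
    """Rank documents by crude keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(CORPUS.items(),
                    key=lambda kv: len(q & set(kv[1].lower().split())),
                    reverse=True)
    return scored[:k]

def answer_with_sources(query):
    hits = retrieve(query)
    citations = ", ".join(doc_id for doc_id, _ in hits)
    return f"Summary based on retrieved documents. [sources: {citations}]"

print(answer_with_sources("What does public reporting say about port construction?"))
```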

Australian researchers have developed an artificial intelligence algorithm to detect and stop a cyberattack on a military robot in seconds.

The researchers tested their algorithm on a replica of a US Army combat ground vehicle and found it was 99% effective in preventing a malicious attack.


The research was conducted by Professor Anthony Finn from the University of South Australia (UniSA) and Dr Fendy Santoso from Charles Sturt University in collaboration with the US Army Futures Command. They simulated a man-in-the-middle (MitM) attack on a GVT-BOT ground vehicle and trained its operating system to respond to it, according to the press release.

According to Professor Finn, an autonomous systems researcher at UniSA, the robot operating system (ROS) is prone to cyberattacks because it is highly networked. He explained that Industry 4.0, characterized by advancements in robotics, automation, and the Internet of Things, requires robots to work collaboratively, with sensors, actuators, and controllers communicating and sharing information via cloud services. This, he added, makes them highly vulnerable to cyberattacks. He also noted that computing power is increasing exponentially every few years, enabling researchers to develop and implement sophisticated AI algorithms to protect systems from digital threats.
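The press release does not describe the detector's internals, but the general idea of flagging a MitM attack from traffic statistics can be sketched. In the toy below, a baseline of packet round-trip times is learned from clean traffic and outliers are flagged; the features, thresholds, and data are invented stand-ins for the AI detector the researchers actually trained:

```python
import numpy as np

def fit_baseline(rtts):
    """Learn normal round-trip-time statistics from clean traffic."""
    return np.mean(rtts), np.std(rtts)

def is_attack(rtt, mean, std, z_threshold=4.0):
    """A relayed (MitM) packet adds latency; flag large z-scores."""
    return abs(rtt - mean) > z_threshold * std

rng = np.random.default_rng(0)
clean = rng.normal(5.0, 0.4, size=1000)   # ms, simulated normal operation
mean, std = fit_baseline(clean)

for rtt in [5.2, 4.9, 9.8]:               # last packet simulates a relay
    print(rtt, "->", "ATTACK" if is_attack(rtt, mean, std) else "ok")
```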