
In the brand-new world of AI, we’re slowly learning that there’s a big difference between a small, routine transaction like buying a hamburger and something much more complex and high-stakes.



For reference, here’s more on the company’s mission statement:

“The company’s mission is to unlock and elevate human agency for everyone, freeing us up from the drudgery of daily boring tasks to focus on doing what we love. The vision is not of machines replacing humans but augmenting them. Through AI Agents, the team aims to amplify the human potential, allowing individuals to truly focus on what they love. With their AI products, MultiOn aspires to unlock parallelization for humanity: Imagine a world where tasks are conducted concurrently, no longer bound by the linear constraints of time of a single person.”

This led to the creation of a “bionic eye” that uses a combination of AI and several advanced scanning techniques, including optical imaging, thermal imaging, and tomography (the technique used for CT scans), to capture differences between parts of the scrolls that were blank and those that contained ink — all without having to physically unroll them.

Where’s Plato? On April 23, team leader Graziano Ranocchia announced that the group had managed to extract about 1,000 words from a scroll titled “The History of the Academy” and that the words revealed Plato’s burial place: a private part of the garden near a shrine to the Muses.

The recovered text, which accounted for about 30% of the scroll, also revealed that Plato may have been sold into slavery between 404 and 399 BC — historians previously thought this had happened later in the philosopher’s life, around 387 BC.

In a recent study merging the fields of quantum physics and computer science, Dr. Jun-Jie Zhang and Prof. Deyu Meng have explored the vulnerabilities of neural networks through the lens of the uncertainty principle in physics. Their work, published in the National Science Review, draws a parallel between the susceptibility of neural networks to targeted attacks and the limitations imposed by the uncertainty principle—a well-established theory in quantum physics that highlights the challenges of measuring certain pairs of properties simultaneously.

The researchers’ quantum-inspired analysis of neural network vulnerabilities suggests that adversarial attacks exploit a trade-off between the precision of input features and their computed gradients. “When considering the architecture of deep neural networks, which involve a loss function for learning, we can always define a conjugate variable for the inputs by determining the gradient of the loss function with respect to those inputs,” wrote Dr. Jun-Jie Zhang, whose expertise lies in mathematical physics, in the paper.

The researchers hope this work will prompt a reevaluation of the assumed robustness of neural networks and encourage a deeper understanding of their limitations. By subjecting a neural network model to adversarial attacks, Dr. Zhang and Prof. Meng observed a trade-off between the model’s accuracy and its resilience.
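The “conjugate variable” the authors describe — the gradient of the loss with respect to the inputs — is exactly the quantity a fast-gradient-sign attack exploits. Here is a minimal, self-contained sketch using a toy logistic model (not the authors’ networks; the model, dimensions, and step size are illustrative assumptions) showing how nudging an input along the sign of that gradient drives the loss up:

```python
import numpy as np

def loss(w, x, y):
    """Binary cross-entropy loss of a toy logistic model on input x."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def input_gradient(w, x, y):
    """Analytic gradient of the loss with respect to the INPUT x —
    the 'conjugate variable' of the inputs described in the paper."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))
    return (p - y) * w

def fgsm(w, x, y, eps):
    """Fast-gradient-sign perturbation: step along the sign of the
    input gradient to increase the loss."""
    return x + eps * np.sign(input_gradient(w, x, y))

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # fixed toy weights
x = rng.normal(size=8)   # a clean input
y = 1.0                  # its label

x_adv = fgsm(w, x, y, eps=0.1)
print(loss(w, x, y), loss(w, x_adv, y))  # adversarial loss is larger
```

A tiny input change, invisible in accuracy terms, measurably worsens the loss — the precision/gradient trade-off the uncertainty-principle analogy points at.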

Graph Neural Networks (GNNs) are crucial in processing data from domains such as e-commerce and social networks because they manage complex structures. Traditionally, GNNs operate on data that fits within a system’s main memory. However, with the growing scale of graph data, many networks now require methods to handle datasets that exceed memory limits, introducing the need for out-of-core solutions where data resides on disk.

Despite their necessity, existing out-of-core GNN systems struggle to balance efficient data access with model accuracy. Current systems face a trade-off: either suffer from slow input/output operations due to small, frequent disk reads, or compromise accuracy by handling graph data in disconnected chunks. Previous solutions such as Ginex and MariusGNN, while pioneering, exhibit significant drawbacks in training speed or accuracy as a result.

The DiskGNN framework, developed by researchers from Southern University of Science and Technology, Shanghai Jiao Tong University, Centre for Perceptual and Interactive Intelligence, AWS Shanghai AI Lab, and New York University, emerges as a transformative solution specifically designed to optimize the speed and accuracy of GNN training on large datasets. This system utilizes an innovative offline sampling technique that prepares data for quick access during training. By preprocessing and arranging graph data based on expected access patterns, DiskGNN reduces unnecessary disk reads, significantly enhancing training efficiency.
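As a rough illustration of the offline-sampling idea — and only that; the node counts, file layout, and helper names below are hypothetical, not DiskGNN’s actual implementation — the sketch pre-samples each training batch’s node IDs and packs their feature rows contiguously on disk, so the online phase fetches a whole batch with one sequential read instead of many small random ones:

```python
import numpy as np
import os
import tempfile

# Toy graph features held in memory here for brevity; in a real
# out-of-core setting these would live on disk.
num_nodes, feat_dim = 1000, 16
features = np.random.rand(num_nodes, feat_dim).astype(np.float32)

# Offline phase: sample ahead of time which node IDs each training
# batch will touch (stand-in for neighborhood sampling).
rng = np.random.default_rng(42)
batches = [rng.choice(num_nodes, size=64, replace=False) for _ in range(10)]

# Pack each batch's feature rows into one contiguous region of a file,
# recording the byte offset where each batch starts.
packed_path = os.path.join(tempfile.mkdtemp(), "packed.bin")
offsets = []
with open(packed_path, "wb") as f:
    for ids in batches:
        offsets.append(f.tell())
        f.write(features[ids].tobytes())

def load_batch(i):
    """Online phase: a single seek + sequential read recovers batch i."""
    n = len(batches[i])
    with open(packed_path, "rb") as f:
        f.seek(offsets[i])
        buf = f.read(n * feat_dim * 4)  # 4 bytes per float32
    return np.frombuffer(buf, dtype=np.float32).reshape(n, feat_dim)

batch0 = load_batch(0)
```

The pre-arrangement trades extra disk space (features needed by several batches are duplicated) for strictly sequential reads at training time — the same access-pattern reasoning the paragraph above describes.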

Musk, Gates, Hawking, Altman and Putin all fear artificial general intelligence, AGI. But what is AGI and why might it be an advantage that more people are trying to develop it despite very serious risks?

“We are all so small and weak. Imagine how easy life would be if we had an owl to help us build nests,” said one sparrow to the flock. Others agreed:

“Yes, and we could use it to look after our elderly and our children. And it could give us good advice and keep an eye on the cat.”

A new book by Nick Bostrom is a major publishing and cultural event. His 2014 book, Superintelligence, helped to wake the world up to the impact of the first Big Bang in AI, the arrival of deep learning. Since then we have had a second Big Bang in AI, with the introduction of transformer systems like GPT-4. Bostrom’s previous book focused on the downside potential of advanced AI. His new one explores the upside.

Deep Utopia is an easier read than its predecessor, although its author cannot resist using some of the phraseology of professional philosophers, so readers may have to look up words like “modulo” and “simpliciter.” Despite its density and its sometimes grim conclusions, Superintelligence had a sprinkling of playful self-ridicule and snark. There is much more of this in the current offering.

The structure of Deep Utopia is deeply odd. The book’s core is a series of lectures by an older version of the author, which are interrupted a couple of times by conflicting bookings of the auditorium, and once by a fire alarm. The lectures are attended and commented on by three students, Kelvin, Tessius and Firafax. At one point they break the theatrical fourth wall by discussing whether they are fictional characters in a book, a device reminiscent of the 1991 novel Sophie’s World.