
Sometimes, when you’re considering how to bring the power of AI to a clinical context, it takes a new way of thinking to get inspired about what’s possible.

I was thinking about this the other day, inspired by people who have been working hard on genomics, oncology research, and other biological and anatomical applications. There is suddenly so much of this work, especially at the institutions I’m close to, that calling it a “revolution” isn’t, in my view, hyperbolic.

In the brand-new world of AI, we’re slowly learning that there’s a big difference between a small routine transaction, like buying a hamburger, and something much more complex and high-stakes.



For reference, here is MultiOn’s mission statement:

“The company’s mission is to unlock and elevate human agency for everyone, freeing us up from the drudgery of daily boring tasks to focus on doing what we love. The vision is not of machines replacing humans but augmenting them. Through AI Agents, the team aims to amplify the human potential, allowing individuals to truly focus on what they love. With their AI products, MultiOn aspires to unlock parallelization for humanity: Imagine a world where tasks are conducted concurrently, no longer bound by the linear constraints of time of a single person.”

In another striking example, researchers created a “bionic eye” that uses a combination of AI and several advanced scanning techniques, including optical imaging, thermal imaging, and tomography (the technique used for CT scans), to capture differences between the parts of the scrolls that were blank and those that contained ink, all without having to physically unroll them.

Where’s Plato? On April 23, team leader Graziano Ranocchia announced that the group had managed to extract about 1,000 words from a scroll titled “The History of the Academy” and that the words revealed Plato’s burial place: a private part of the garden near a shrine to the Muses.

The recovered text, which accounted for about 30% of the scroll, also revealed that Plato may have been sold into slavery between 404 and 399 BC — historians previously thought this had happened later in the philosopher’s life, around 387 BC.

In a recent study merging the fields of quantum physics and computer science, Dr. Jun-Jie Zhang and Prof. Deyu Meng have explored the vulnerabilities of neural networks through the lens of the uncertainty principle in physics. Their work, published in the National Science Review, draws a parallel between the susceptibility of neural networks to targeted attacks and the limitations imposed by the uncertainty principle—a well-established theory in quantum physics that highlights the challenges of measuring certain pairs of properties simultaneously.

The researchers’ quantum-inspired analysis of neural network vulnerabilities suggests that adversarial attacks leverage the trade-off between the precision of input features and their computed gradients. “When considering the architecture of deep neural networks, which involve a loss function for learning, we can always define a conjugate variable for the inputs by determining the gradient of the loss function with respect to those inputs,” writes Dr. Jun-Jie Zhang, whose expertise lies in mathematical physics, in the paper.
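The idea of treating the input gradient as a conjugate variable can be made concrete with a toy example. The sketch below uses a simple logistic model and an FGSM-style step (a standard illustration of gradient-based adversarial perturbation, not the authors’ exact method); all the data and parameter values are made up:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_input_grad(x, w, b, y):
    """Binary cross-entropy of a logistic model, plus its gradient w.r.t. the input x.

    The gradient dL/dx is the 'conjugate variable' of the input: it tells an
    attacker which direction in input space increases the loss fastest.
    """
    p = sigmoid(w @ x + b)
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad_x = (p - y) * w          # dL/dx for logistic regression
    return loss, grad_x

rng = np.random.default_rng(0)    # made-up model and input
w = rng.normal(size=4)
b = 0.1
x = rng.normal(size=4)
y = 1.0

loss_clean, g = loss_and_input_grad(x, w, b, y)
x_adv = x + 0.5 * np.sign(g)      # FGSM-style step *up* the loss gradient
loss_adv, _ = loss_and_input_grad(x_adv, w, b, y)

print(loss_adv > loss_clean)      # the small perturbation increases the loss
```

The trade-off the paper describes shows up here: the more precisely the model’s loss responds to input features, the more informative that gradient is to an attacker.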

The researchers hope this work will prompt a reevaluation of the assumed robustness of neural networks and encourage a deeper comprehension of their limitations. By subjecting a neural network model to adversarial attacks, Dr. Zhang and Prof. Meng observed a trade-off between the model’s accuracy and its resilience.

Graph Neural Networks (GNNs) are crucial in processing data from domains such as e-commerce and social networks because they manage complex structures. Traditionally, GNNs operate on data that fits within a system’s main memory. However, with the growing scale of graph data, many networks now require methods to handle datasets that exceed memory limits, introducing the need for out-of-core solutions where data resides on disk.
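To see what “managing complex structures” means in practice, here is a minimal sketch of one GNN message-passing layer on a toy four-node graph. The mean-over-neighbors aggregation and all values are illustrative choices of ours, not tied to any particular system:

```python
import numpy as np

# Toy graph as an adjacency list, plus a 2-d feature per node.
neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
features = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [1.0, 1.0],
                     [0.5, 0.5]])
W = np.array([[0.5, -0.5],
              [0.5,  0.5]])       # the layer's (here fixed) weight matrix

def gnn_layer(feats, nbrs, W):
    """One layer: h_v' = ReLU(W @ mean of h_u over v and its neighbors)."""
    out = np.zeros_like(feats)
    for v, ns in nbrs.items():
        agg = feats[[v] + ns].mean(axis=0)   # aggregate self + neighbors
        out[v] = np.maximum(W @ agg, 0.0)    # transform, then ReLU
    return out

h1 = gnn_layer(features, neighbors, W)
print(h1.shape)   # (4, 2): one updated embedding per node
```

The key point for out-of-core training is the `feats[[v] + ns]` lookup: every layer must gather the features of each node’s neighbors, which becomes scattered random access once those features no longer fit in memory.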

Despite their necessity, existing out-of-core GNN systems struggle to balance efficient data access with model accuracy. Current systems face a trade-off: either suffer slow input/output from small, frequent disk reads or compromise accuracy by handling graph data in disconnected chunks. Previous solutions like Ginex and MariusGNN, while pioneering, have shown significant drawbacks in training speed or accuracy as a result.

The DiskGNN framework, developed by researchers from Southern University of Science and Technology, Shanghai Jiao Tong University, Centre for Perceptual and Interactive Intelligence, AWS Shanghai AI Lab, and New York University, emerges as a transformative solution specifically designed to optimize the speed and accuracy of GNN training on large datasets. This system utilizes an innovative offline sampling technique that prepares data for quick access during training. By preprocessing and arranging graph data based on expected access patterns, DiskGNN reduces unnecessary disk reads, significantly enhancing training efficiency.
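The offline-sampling idea can be sketched in a few lines. This is our toy illustration of the principle the article describes, not DiskGNN’s actual implementation: an offline pass records which node features each mini-batch will touch, those features are written contiguously to disk, and training then does one large sequential read per batch instead of many small scattered ones.

```python
import numpy as np
import tempfile, os

num_nodes, dim = 100, 8
features = np.random.default_rng(0).normal(size=(num_nodes, dim)).astype(np.float32)

# Step 1: offline sampling -- pretend each training batch accesses these node ids.
batch_accesses = [[3, 17, 42], [42, 5, 99], [0, 1, 2, 3]]

# Step 2: pack each batch's features contiguously into one file, recording offsets.
packed_path = os.path.join(tempfile.mkdtemp(), "packed.bin")
offsets = []
with open(packed_path, "wb") as f:
    for ids in batch_accesses:
        offsets.append((f.tell(), len(ids)))
        f.write(features[ids].tobytes())

# Step 3: at training time, each batch is a single contiguous read.
def load_batch(i):
    off, n = offsets[i]
    with open(packed_path, "rb") as f:
        f.seek(off)
        buf = f.read(n * dim * 4)   # float32 = 4 bytes per value
    return np.frombuffer(buf, dtype=np.float32).reshape(n, dim)

batch0 = load_batch(0)
print(np.allclose(batch0, features[[3, 17, 42]]))  # True: round-trips exactly
```

The cost is duplicated storage for features that appear in several batches; the payoff is that the training loop never issues per-node random reads against the disk.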

Musk, Gates, Hawking, Altman and Putin have all voiced fears about artificial general intelligence, AGI. But what is AGI, and why are so many people trying to develop it despite the very serious risks?

“We are all so small and weak. Imagine how easy life would be if we had an owl to help us build nests,” said one sparrow to the flock. Others agreed:

“Yes, and we could use it to look after our elderly and our children. And it could give us good advice and keep an eye on the cat.”