Closing the AI equity gap by focusing on trust and safety

AI is becoming an increasingly powerful technology that can benefit humanity and sustainable development, but empowering local AI ecosystems across all countries needs to be prioritized to build the tools and coalitions necessary to ensure robust safety measures. Here, we make a case for reimagining trust and safety, a critical building block for closing the AI equity gap.

“Trust and safety” is not a term that developing countries are always familiar with, yet people are often impacted by it in their everyday interactions. Traditionally, trust and safety refers to the rules and policies that private sector companies put in place to manage and mitigate risks that affect the use of their digital products and services, as well as the operational enforcement systems that determine reactive or proactive restrictions. The decisions that inform these trust and safety practices carry implications for what users can access online, as well as what they can say or do.

Novel framework can generate images more aligned with user expectations

Generative models, artificial neural networks that can generate images or texts, have become increasingly advanced in recent years. These models can also be advantageous for creating annotated images to train algorithms for computer vision, which are designed to classify images or objects contained within them.

While many generative models, particularly generative adversarial networks (GANs), can produce synthetic images that resemble those captured by cameras, reliably controlling the content of the images they produce has proved challenging. In many cases, the images generated by GANs do not meet the exact requirements of users, which limits their use for various applications.

Researchers at Seoul National University of Science and Technology recently introduced a new image generation framework designed to incorporate the content users would like generated images to contain. This framework, introduced in a paper published on the arXiv preprint server, allows users to exert greater control over the image generation process, producing images that are better aligned with the ones they envision.
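The paper's exact architecture is not detailed here, but the general idea of steering a generator toward user-specified content can be illustrated with a minimal conditional GAN generator in PyTorch. This is a hedged sketch, not the authors' code: the class names, label conditioning scheme, and layer sizes below are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): a conditional GAN
# generator that concatenates a class-label embedding with the noise
# vector, so the user can steer what content the generated image contains.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=100, num_classes=10, img_size=32, channels=3):
        super().__init__()
        self.label_embed = nn.Embedding(num_classes, num_classes)  # learnable label embedding
        self.net = nn.Sequential(
            nn.Linear(noise_dim + num_classes, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, channels * img_size * img_size),
            nn.Tanh(),  # pixel values in [-1, 1]
        )
        self.img_shape = (channels, img_size, img_size)

    def forward(self, noise, labels):
        # Condition generation by concatenating the noise with the label embedding.
        x = torch.cat([noise, self.label_embed(labels)], dim=1)
        return self.net(x).view(-1, *self.img_shape)

# Usage: request four images of class 3 (the "content the user wants").
gen = ConditionalGenerator()
z = torch.randn(4, 100)
labels = torch.full((4,), 3, dtype=torch.long)
images = gen(z, labels)  # tensor of shape (4, 3, 32, 32)
```

In a conditional setup like this, the discriminator is given the same label, so the generator is penalized whenever the image content does not match what the user asked for.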

Behavioral chaos in Alzheimer’s disease mice decoded by machine learning

In a recent study published in the journal Cell Reports, researchers used the machine learning (ML)-based Variational Animal Motion Embedding (VAME) segmentation platform to analyze behavior in Alzheimer's disease (AD) mouse models and tested the effect of blocking fibrinogen-microglia interactions. They found that AD models showed age-dependent behavioral disruptions, including increased randomness and disrupted habituation, which were largely prevented by reducing neuroinflammation, and that VAME outperformed traditional methods in sensitivity and specificity.

Background

Behavioral alterations, central to neurological disorders, are complex and challenging to measure accurately. Traditional task-based tests provide limited insight into disease-induced changes. However, advances in computer vision and ML tools, such as DeepLabCut, SLEAP, and VAME, now enable the segmentation of spontaneous mouse behavior into postural units (motifs) to uncover sequence and hierarchical structure, offering scalable, unbiased measures of brain dysfunction.
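VAME itself learns motifs with a recurrent variational autoencoder over pose trajectories; as a rough, simplified illustration of the underlying idea (not the VAME or DeepLabCut API), the sketch below windows pose-keypoint time series and clusters the windows into discrete motifs with k-means. All names, shapes, and parameters are assumptions for demonstration.

```python
# Simplified illustration of behavioral motif segmentation (not the VAME API):
# slide a window over pose-keypoint time series and cluster the windows into
# discrete "motifs". VAME learns the embedding with a recurrent variational
# autoencoder rather than clustering raw windows, but the output is similar:
# one motif label per time point, from which motif usage and transitions
# (sequence and hierarchy) can be quantified.
import numpy as np
from sklearn.cluster import KMeans

def segment_into_motifs(keypoints, window=30, n_motifs=20, seed=0):
    """keypoints: array of shape (n_frames, n_keypoints * 2) with x/y coordinates."""
    # Build overlapping windows so each sample captures a short movement snippet.
    windows = np.stack([keypoints[i:i + window].ravel()
                        for i in range(len(keypoints) - window)])
    labels = KMeans(n_clusters=n_motifs, random_state=seed, n_init=10).fit_predict(windows)
    return labels  # one motif label per windowed frame

# Example with synthetic data: 10,000 frames, 8 keypoints tracked in 2D.
rng = np.random.default_rng(0)
pose = rng.normal(size=(10_000, 16))
motifs = segment_into_motifs(pose)
print(np.bincount(motifs))  # how often each motif occurs
```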

Platform allows AI to learn from constant, nuanced human feedback rather than large datasets

I had wondered if AI could just learn and advance from its users.


During your first driving class, the instructor probably sat next to you, offering immediate advice on every turn, stop and minor adjustment. If it was a parent, they might have even grabbed the wheel a few times and shouted "Brake!" Over time, those corrections and insights built the experience and intuition that turned you into an independent, capable driver.

Although advancements in artificial intelligence (AI) have turned once-futuristic capabilities into reality, the methods used to train these systems remain a far cry from even the most nervous side-seat driver. Rather than nuance and real-time instruction, AI learns primarily from massive datasets and extensive simulations, regardless of the application.

Now, researchers from Duke University and the Army Research Laboratory have developed a platform to help AI learn to perform more like humans. Nicknamed GUIDE, the AI framework will be showcased at the upcoming Conference on Neural Information Processing Systems (NeurIPS 2024), taking place Dec. 9–15 in Vancouver, Canada. The work is also available on the arXiv preprint server.
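GUIDE's specifics are in the paper, but the contrast with purely dataset-driven training can be illustrated by a learner whose update at every step is shaped by a continuous human feedback signal. The sketch below is a hypothetical example, not the Duke/ARL implementation: the toy environment, the feedback stub, and the blending weight are all assumptions.

```python
# Hedged sketch (not the GUIDE implementation): a tabular Q-learning agent whose
# per-step reward is shaped by a continuous human feedback signal in [-1, 1],
# mimicking an instructor commenting on every turn and stop.
import random

class ChainEnv:
    """Toy 1D environment: start at position 0, reach position 5 for a reward."""
    def reset(self):
        self.pos = 0
        return self.pos
    def actions(self, state):
        return [-1, +1]
    def step(self, action):
        self.pos = max(0, self.pos + action)
        done = self.pos == 5
        return self.pos, (1.0 if done else 0.0), done

def human_feedback(state, action):
    # Stub for a real-time human signal (slider, joystick, voice).
    # Here we pretend the human nudges the agent toward the goal.
    return 1.0 if action == +1 else -1.0

def train(env, episodes=200, alpha=0.1, gamma=0.95, epsilon=0.1, feedback_weight=0.5):
    q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            acts = env.actions(state)
            action = (random.choice(acts) if random.random() < epsilon
                      else max(acts, key=lambda a: q.get((state, a), 0.0)))
            next_state, env_reward, done = env.step(action)
            # Blend the environment reward with the continuous human feedback.
            reward = env_reward + feedback_weight * human_feedback(state, action)
            best_next = max(q.get((next_state, a), 0.0) for a in env.actions(next_state))
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = next_state
    return q

q_table = train(ChainEnv())
print(max(q_table, key=q_table.get))  # the state-action pair the agent values most
```

The key design point the sketch illustrates is that feedback arrives continuously during learning rather than as a pre-collected dataset, so the agent can be corrected the way a driving instructor corrects a student.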