
Boosted by China’s rapid pace of development in artificial intelligence (AI) technologies, more companies have noted the huge business potential of the AI companionship sector, as simulated AI pets with adorable appearances gain increasing popularity among Chinese consumers.

Zhang Yi, CEO of the iiMedia Research Institute, told the Global Times on Sunday that consumers’ demand for emotional support, combined with the capabilities of current AI technologies, gives this type of product greater business potential.

A Beijing-based student in her 20s surnamed Zhang, who is also an AI technology fan, told the Global Times on Sunday that she bought “Boo Boo,” a simulated AI robotic pet developed by Hangzhou-based Genmoor Technology.

Meta might yet teach its AI to more consistently show the right posts at the right time. Still, there’s a bigger lesson it could learn from Bluesky, though it might be an uncomfortable one for a tech giant to confront. It’s that introducing algorithms into a social feed may cause more problems than it solves—at least if timeliness matters, as it does with any service that aspires to scoop up disaffected Twitter users.

For a modern social network, Bluesky stays out of your way to a shocking degree. (So does Mastodon; I’m a fan, but it seems to be more of an acquired taste.) Bluesky’s primary view is “Following”—the most recent posts from the people you choose to follow, just as in the golden age of Twitter. (Present-day Twitter and Threads have equivalent views, but not as their defaults.) Starter Packs, which might be Bluesky’s defining feature, let anyone curate a shareable list of users. You can follow everyone in one with a single click, or pick and choose, but either way, you decide.
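For readers who want the mechanics spelled out, a reverse-chronological feed is simple enough to fit in a few lines. The sketch below uses hypothetical names and is not Bluesky’s actual code or the AT Protocol API; it only illustrates the core idea the article describes: gather posts from the accounts you follow and sort them newest-first, with no ranking model in between.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    created_at: datetime

def following_feed(posts_by_author: dict[str, list[Post]],
                   following: set[str]) -> list[Post]:
    """Build a classic "Following" timeline: posts from followed
    accounts only, in reverse-chronological order. No engagement
    signals, no recommendations."""
    feed = [post
            for author, posts in posts_by_author.items()
            if author in following
            for post in posts]
    return sorted(feed, key=lambda p: p.created_at, reverse=True)
```

The point of the sketch is what is absent: there is no scoring function to tune, so the timeliness problem the algorithmic feeds struggle with never arises.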

Companies currently rely heavily on simulations to ensure that new versions of their autonomous driving systems meet a wide range of requirements. AV 2.0 systems are more sensitive to differences between real-world and simulated data, so simulations need to be as realistic as possible. Instead of using hand-built 3D environments and pre-programmed vehicle behaviors, future testing will need to use advanced machine learning techniques to create highly realistic and scalable simulations.
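One widely used way to narrow the gap between simulated and real-world data is domain randomization: varying simulation parameters such as lighting, weather, and sensor noise so a model never overfits to a single synthetic world. The sketch below is a generic illustration of that idea with made-up parameter names, not the pipeline of any particular AV company.

```python
import random
from dataclasses import dataclass

@dataclass
class SimConfig:
    sun_angle_deg: float     # lighting direction
    rain_intensity: float    # 0 = dry, 1 = heavy rain
    sensor_noise_std: float  # additive noise on sensor readings
    traffic_density: float   # vehicles per 100 m of road

def sample_scenario(rng: random.Random) -> SimConfig:
    """Draw one randomized scenario; training across thousands of
    such draws makes the model less sensitive to any one
    simulated world."""
    return SimConfig(
        sun_angle_deg=rng.uniform(0.0, 180.0),
        rain_intensity=rng.uniform(0.0, 1.0),
        sensor_noise_std=rng.uniform(0.0, 0.05),
        traffic_density=rng.uniform(0.0, 10.0),
    )

rng = random.Random(42)
scenarios = [sample_scenario(rng) for _ in range(1000)]
```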

It’s crucial for car manufacturers and the wider vehicle industry to adopt AI technologies in their development processes and products. There’s enormous potential for improved autonomous driving capabilities, better interaction between humans and machines, and increased productivity for developers.

Just as software revolutionized many industries, AI is set to do the same—but even faster. Companies that quickly embrace these technologies may have a first-mover advantage and the chance to set industry standards. Those that delay may quickly fall behind, as their products will lack features compared to competitors.

AI is becoming an increasingly powerful technology that can benefit humanity and sustainable development, but empowering local AI ecosystems in all countries must be prioritized to build the tools and coalitions needed for robust safety measures. Here, we make a case for reimagining trust and safety, a critical building block for closing the AI equity gap.

“Trust and safety” is not a term that developing countries are always familiar with, yet people are often affected by it in their everyday online interactions. Traditionally, trust and safety refers to the rules and policies that private sector companies put in place to manage and mitigate the risks that affect the use of their digital products and services, as well as the operational enforcement systems that determine reactive or proactive restrictions. The decisions that inform these trust and safety practices carry implications for what users can access online, as well as what they can say or do.

Generative models, artificial neural networks that can generate images or text, have become increasingly advanced in recent years. These models are also useful for creating annotated images to train computer vision algorithms, which are designed to classify images or the objects they contain.

While many generative models, particularly generative adversarial networks (GANs), can produce synthetic images that resemble those captured by cameras, reliably controlling the content of the images they produce has proved challenging. In many cases, the images generated by GANs do not meet the exact requirements of users, which limits their use for various applications.

Researchers at Seoul National University of Science and Technology recently introduced a new image generation framework designed to incorporate the content users would like generated images to contain. This framework, introduced in a paper published on the arXiv preprint server, gives users greater control over the image generation process, producing images that are more closely aligned with what they envision.
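The paper itself is the authoritative source for the new framework. As a rough illustration of the general idea of conditioning generation on desired content, here is a textbook conditional-GAN-style generator in PyTorch, in which a class label is embedded and concatenated with the noise vector so the label, not chance, decides what the image contains. This is a standard cGAN sketch, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Minimal conditional generator: a class label steers the
    content of the generated image (the classic cGAN idea)."""

    def __init__(self, latent_dim: int = 100, num_classes: int = 10,
                 img_shape: tuple = (1, 28, 28)):
        super().__init__()
        self.img_shape = img_shape
        self.label_embed = nn.Embedding(num_classes, num_classes)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + num_classes, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, int(torch.prod(torch.tensor(img_shape)))),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Concatenate noise with the label embedding so the class
        # condition is visible to every layer downstream.
        x = torch.cat([z, self.label_embed(labels)], dim=1)
        return self.net(x).view(-1, *self.img_shape)

# Ask for ten images of class 3: the label determines the content.
generator = ConditionalGenerator()
z = torch.randn(10, 100)
images = generator(z, torch.full((10,), 3, dtype=torch.long))
```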

In a recent study published in the journal Cell Reports, researchers used the machine learning (ML)-based Variational Animal Motion Embedding (VAME) segmentation platform to analyze behavior in Alzheimer’s disease (AD) mouse models and tested the effect of blocking fibrinogen-microglia interactions. They found that AD models showed age-dependent behavioral disruptions, including increased randomness and disrupted habituation, which were largely prevented by reducing neuroinflammation; VAME outperformed traditional methods in sensitivity and specificity.

Background

Behavioral alterations, central to neurological disorders, are complex and challenging to measure accurately. Traditional task-based tests provide limited insight into disease-induced changes. However, advances in computer vision and ML tools, such as DeepLabCut, SLEAP, and VAME, now enable the segmentation of spontaneous mouse behavior into postural units (motifs) to uncover sequence and hierarchical structure, offering scalable, unbiased measures of brain dysfunction.
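VAME’s own documentation describes its actual workflow, which learns the embedding with a variational recurrent autoencoder. Purely as a generic illustration of what “segmenting behavior into motifs” means, the sketch below slides a window over pose-tracking keypoints (such as the output of DeepLabCut or SLEAP) and clusters the windows, so each cluster ID acts as a per-frame motif label. The function name and parameters are hypothetical, and this is not VAME’s API.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_into_motifs(keypoints: np.ndarray, window: int = 30,
                        n_motifs: int = 15) -> np.ndarray:
    """Assign a motif label to each frame of a pose time series.

    keypoints: (n_frames, n_features) array from a pose tracker.
    Each sliding window of frames is flattened and clustered;
    the cluster ID serves as that frame's motif label.
    """
    n_frames = keypoints.shape[0]
    windows = np.stack([keypoints[i:i + window].ravel()
                        for i in range(n_frames - window)])
    labels = KMeans(n_clusters=n_motifs, n_init=10).fit_predict(windows)
    # Pad the tail so every frame receives a label.
    return np.concatenate([labels, np.full(window, labels[-1])])
```

Once each frame carries a motif label, downstream analyses like the ones in the study (motif usage, transition randomness, habituation over sessions) reduce to statistics over these label sequences.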