
AI is an increasingly powerful technology that can benefit humanity and sustainable development, but empowering local AI ecosystems in every country must be prioritized to build the tools and coalitions needed for robust safety measures. Here, we make a case for reimagining trust and safety, a critical building block for closing the AI equity gap.

"Trust and safety" is not a term that people in developing countries are always familiar with, yet it shapes their everyday online interactions. Traditionally, trust and safety refers to the rules and policies that private sector companies put in place to manage and mitigate risks affecting the use of their digital products and services, as well as the operational enforcement systems that apply reactive or proactive restrictions. The decisions behind these trust and safety practices determine what users can access online, as well as what they can say or do.

A new Science Advances study demonstrates a cancer immunotherapy vaccine that increases the efficiency of messenger RNA translation in the cytoplasm and effectively inhibits tumor growth.

An international team of astronomers has observed an extremely radio-loud quasar known as J1601+3102. The observations revealed that the quasar hosts a large extended radio jet. The discovery is reported in a research paper published Nov. 25 on the arXiv preprint server.

Quasars, or quasi-stellar objects (QSOs), are active galactic nuclei (AGN) of very high luminosity powered by supermassive black holes (SMBHs), emitting electromagnetic radiation observable in radio, infrared, visible, ultraviolet and X-ray wavelengths. They are among the brightest and most distant objects in the known universe, and serve as fundamental tools for numerous studies in astrophysics as well as cosmology.

J1601+3102 is an extremely radio-loud quasar at a redshift of 4.9, discovered in 2022. It has a radio flux density of 69 mJy, a bolometric luminosity of about 26 quattuordecillion (2.6 × 10^46) erg/s, and a steep spectral index.
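As a quick sanity check on these figures, the sketch below recomputes the relevant quantities in Python, assuming the Planck 2018 cosmology bundled with astropy; the flux-to-luminosity step ignores the K-correction, so treat it as an order-of-magnitude illustration rather than the paper's own analysis.

```python
# Back-of-the-envelope check of the quoted numbers, assuming the Planck 2018
# cosmology; the flux-to-luminosity conversion ignores the K-correction, so
# this is an order-of-magnitude illustration only.
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck18

z = 4.9                      # redshift of J1601+3102
flux_density = 69 * u.mJy    # reported radio flux density

# Luminosity distance at z = 4.9 under Planck18 parameters (~46,000 Mpc).
d_L = Planck18.luminosity_distance(z)
print(f"Luminosity distance: {d_L:.0f}")

# Monochromatic radio luminosity: L_nu = 4 * pi * d_L^2 * S_nu.
L_nu = (4 * np.pi * d_L**2 * flux_density).to(u.erg / u.s / u.Hz)
print(f"Radio luminosity density: {L_nu:.2e}")

# The quoted bolometric luminosity, "26 quattuordecillion erg/s",
# is 26 * 10^45 = 2.6e46 erg/s in scientific notation.
L_bol = 2.6e46 * u.erg / u.s
print(f"Bolometric luminosity: {L_bol:.1e}")
```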

In this respect, I believe regulators have fallen short. In a world facing ongoing cyber threats, the standards for cybersecurity are set surprisingly low: the rules typically fail to require encryption of all stored data. This is despite the fact that encryption (not firewalls, monitoring, identity management or multifactor authentication) is the purpose-built technology for protecting data against the strongest and most capable adversaries. Stronger regulations are needed to ensure encryption becomes a mandated standard, not just an optional recommendation.

Fortunately, companies need not wait for regulators to realize their folly; they can opt to do better today. Some companies already have. They approach data security as an exercise in risk mitigation rather than in passing an audit. From this perspective, encryption quickly becomes an obvious requirement for all sensitive data from the moment it is ingested into a data store.
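As a minimal sketch of what encrypt-on-ingest can look like, the snippet below uses the Fernet recipe from the widely available Python `cryptography` package. The `ingest_record` helper and the in-memory store are hypothetical stand-ins; a production system would source keys from a key-management service and handle rotation.

```python
# Minimal sketch of encrypt-on-ingest using the `cryptography` package's
# Fernet recipe (AES-128-CBC plus HMAC under the hood). The `ingest_record`
# helper and the in-memory "data store" are hypothetical stand-ins; a real
# deployment would fetch keys from a KMS/HSM rather than generating them
# inline, and would handle key rotation.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production: load from a key-management service
fernet = Fernet(key)

data_store: dict[str, bytes] = {}   # stand-in for a database or object store

def ingest_record(record_id: str, plaintext: bytes) -> None:
    """Encrypt sensitive data before it ever touches the data store."""
    data_store[record_id] = fernet.encrypt(plaintext)

def read_record(record_id: str) -> bytes:
    """Decrypt only at the point of authorized use."""
    return fernet.decrypt(data_store[record_id])

ingest_record("user-42", b"ssn=123-45-6789")
print(data_store["user-42"][:16], b"...")   # ciphertext at rest
print(read_record("user-42"))               # plaintext only on read
```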

Another beneficial development is that encryption has become easier and faster to implement, including the ability to process encrypted data without exposure, a capability known as privacy-enhanced computation. While there will always be some overhead to adopting data encryption, many have found that the return on investment has shifted decisively in favor of encrypting all sensitive data due to its substantial security benefits.
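To make "processing encrypted data without exposure" concrete, here is a small sketch using the open-source python-paillier (`phe`) package, whose Paillier cryptosystem is additively homomorphic. This is one illustrative flavor of privacy-enhanced computation, not a reference to any particular product.

```python
# One concrete flavor of privacy-enhanced computation: the Paillier
# cryptosystem is additively homomorphic, so sums can be computed directly
# on ciphertexts. Uses the open-source python-paillier (`phe`) package;
# this illustrates the idea, not any specific vendor's offering.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# A client encrypts sensitive salary figures before sending them off.
salaries = [52_000, 61_500, 48_250]
encrypted = [public_key.encrypt(s) for s in salaries]

# An untrusted server can total the figures without ever seeing them.
encrypted_total = sum(encrypted[1:], encrypted[0])

# Only the key holder can decrypt the aggregate result.
print(private_key.decrypt(encrypted_total))   # 161750
```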

Generative models, artificial neural networks that can generate images or text, have become increasingly advanced in recent years. These models can also be useful for creating annotated images on which to train computer vision algorithms, which are designed to classify images or the objects they contain.

While many generative models, particularly generative adversarial networks (GANs), can produce synthetic images that resemble those captured by cameras, reliably controlling the content of the images they produce has proved challenging. In many cases, the images generated by GANs do not meet the exact requirements of users, which limits their use for various applications.
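One long-standing way to impose such control is a conditional GAN, which feeds a class label into the generator alongside the noise vector. The PyTorch sketch below is a generic illustration of label conditioning, not the framework from the paper discussed next; the network sizes and names are arbitrary.

```python
# Generic sketch of label-conditioned generation in the style of a
# conditional GAN (Mirza & Osindero, 2014): the class label is embedded and
# concatenated with the noise vector, so the caller chooses what the image
# should contain. Layer sizes here are arbitrary illustration choices.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=100, num_classes=10, img_size=28):
        super().__init__()
        self.label_embed = nn.Embedding(num_classes, num_classes)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + num_classes, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, img_size * img_size),
            nn.Tanh(),   # pixel values in [-1, 1]
        )
        self.img_size = img_size

    def forward(self, noise, labels):
        # Condition generation by concatenating the label embedding
        # with the noise vector.
        x = torch.cat([noise, self.label_embed(labels)], dim=1)
        return self.net(x).view(-1, 1, self.img_size, self.img_size)

g = ConditionalGenerator()
z = torch.randn(4, 100)
labels = torch.tensor([3, 3, 7, 7])   # request two "3"s and two "7"s
images = g(z, labels)                 # shape: (4, 1, 28, 28)
print(images.shape)
```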

Researchers at Seoul National University of Science and Technology recently introduced a new image generation framework designed to incorporate the content that users would like generated images to contain. The framework, introduced in a paper published on the arXiv preprint server, gives users greater control over the image generation process, producing images more closely aligned with what they envision.