AI is an increasingly powerful technology with the potential to benefit humanity and sustainable development, but realizing that potential safely requires prioritizing local AI ecosystems in every country, so that the tools and coalitions needed for robust safety measures can be built. Here, we make the case for reimagining trust and safety, a critical building block for closing the AI equity gap.
“Trust and safety” is not a term that people in developing countries are always familiar with, yet its effects shape their everyday online interactions. Traditionally, trust and safety refers to the rules and policies that private sector companies put in place to manage and mitigate risks affecting the use of their digital products and services, along with the operational enforcement systems that apply reactive or proactive restrictions. The decisions behind these trust and safety practices carry implications for what users can access online, as well as what they can say or do.