Singapore has open-sourced the world’s first AI governance testing framework and toolkit, called “AI Verify”. A single integrated software toolkit that operates within a user’s enterprise environment, AI Verify allows users to conduct technical tests on their AI models and record process checks for reporting to stakeholders or regulatory bodies. Additional toolkits can be built on top of AI Verify, for instance to address sectoral requirements applicable to health care entities. The framework and toolkit can be used by any company developing or deploying AI to validate the performance of an AI system against internationally recognized governance principles such as accountability, safety, human-centricity, and fairness, all of which are consistent with the AI governance principles articulated by the OECD, the European Union, and the World Health Organization, among others. To read more about this development in Singapore, please see our original post on our Privacy World blog.