Singapore launches GenAI governance framework with 9 core pillars
The AI Verify Foundation and the Infocomm Media Development Authority (IMDA) have partnered to launch the "Model AI Governance Framework for Generative AI" (MGF-Gen AI) to address concerns over the technology while facilitating innovation.
The framework comprises nine dimensions to foster a trusted ecosystem. Across these dimensions, it calls on all key stakeholders, including policymakers, industry, the research community and the broader public, to collectively do their part.
The nine dimensions are accountability; data; trusted development and deployment; incident reporting; testing and assurance; security; content provenance; safety and alignment R&D; and AI for public good.
What does each of these dimensions represent?
- Accountability puts in place the right incentive structure for different players in the AI system development life cycle to be responsible to end-users.
- Data ensures data quality and addresses potentially contentious training data in a pragmatic way.
- Trusted development and deployment enhances transparency around baseline safety and hygiene measures based on industry best practices in development, evaluation and disclosure.
- Incident reporting implements an incident management system for timely notification, remediation and continuous improvements.
- Testing and assurance provides external validation and added trust through third-party testing, and develops common AI testing standards for consistency.
- Security addresses new threat vectors that arise through generative AI models.
- Content provenance ensures transparency about where content comes from, providing useful signals for end-users.
- Safety and alignment R&D accelerates research through global cooperation among AI Safety Institutes to improve model alignment with human intentions and values.
- AI for public good includes harnessing AI to benefit the public by democratising access, improving public sector adoption, upskilling workers and developing AI systems sustainably.
At the same time, the development and adoption of AI pose unique challenges for small states. To enable small states to harness AI positively, Singapore will work with Rwanda to lead the development of a 'Digital Forum of Small States' (Digital FOSS) AI governance playbook.
Tailored for small states, the playbook will address the challenges associated with the secure design, development, evaluation, and implementation of AI systems, taking into consideration the unique constraints that small states face. The aim is to facilitate collaboration among policymakers in FOSS, to establish a trusted ecosystem where AI technologies are utilised for the benefit of the public.
Singapore, as the convenor of Digital FOSS, will facilitate the consultation of small states on an outline of the playbook during the Digital FOSS Fellowship Programme, which is in its second run this year, said IMDA in a statement. The outline has been developed in collaboration with Rwanda, following preliminary consultations with a few small states at the beginning of the year.
"The input received from Digital FOSS will play a pivotal role in shaping the playbook as a useful guide for small states and foster an inclusive global discourse on AI," said IMDA in a statement. The playbook will be available in end 2024.
This comes after Singapore unveiled a new framework for AI in January this year and asked the international community for its views. According to IMDA, given that AI's impact is not limited to individual countries, the proposed framework aims to facilitate international conversations among policymakers, industry and the research community to enable trusted development globally.
“While it remains a dynamically developing space, there is growing global consensus that consistent principles are needed to create a trusted environment - one that enables end-users to use AI confidently and safely, while allowing space for cutting-edge innovation,” said IMDA.
The new framework expands on the existing framework covering traditional AI which was last updated in 2020. “With Generative AI, there is a need to update the earlier model governance framework to holistically address new issues that have emerged,” said IMDA.
It added that the proposed framework integrates ideas from the earlier discussion paper on Generative AI, which put forward a conceptual foundation. It also drew on earlier technical work to provide an initial catalogue of, and guidance on, suggested practices for the safety evaluation of Generative AI models.