The Monetary Authority of Singapore (MAS) has recently released a Consultation Paper outlining proposed Guidelines on Artificial Intelligence (AI) Risk Management ("AIRM Guidelines") in the financial sector. As financial institutions increasingly adopt AI, the proposed guidelines set out MAS's expectations for using these tools responsibly and managing the attendant risks.
By way of background, MAS has previously established principles on Fairness, Ethics, Accountability and Transparency (FEAT) to guide the use of AI in the financial sector. The AIRM Guidelines build on and complement these principles by introducing more rigorous and detailed requirements for AI risk management by financial institutions, as outlined below.
This article summarises the key aspects of the AIRM Guidelines and explains how they are designed to operate within financial institutions.
The AIRM Guidelines focus on several critical areas aimed at establishing a robust framework for AI risk management: governance and oversight; the identification, inventory, and risk assessment of AI use cases; controls across the AI life cycle; and the capabilities and infrastructure needed to support them.
Financial institutions should implement strong governance structures to ensure accountability for AI-related risks. This involves defining clear roles and responsibilities at the board and senior management levels to oversee AI deployment, to foster an appropriate risk culture for the use of AI, and to ensure that existing risk management frameworks, policies, and practices across the organisation adequately identify, assess, and address the risks posed by AI.
Before deploying AI technologies, institutions must establish systems and procedures to identify, inventorise, and assess the risk materiality of all AI use cases, systems, or models. This includes identifying AI usage consistently, maintaining an up-to-date inventory, applying a structured methodology to assess risk materiality, and ensuring that risk controls are proportionate to the assessed risks.
Financial institutions should implement robust, proportionate controls across the entire AI life cycle and develop contingency plans for high-risk AI use cases. Key areas include effective data management, ensuring that data is consistently fit for purpose, representative, and of high quality, with strong governance throughout. Financial institutions should also ensure that transparency and explainability are tailored to the risk and impact of each AI system, with higher standards applied where AI decisions have significant effects on customers or risk management outcomes. Fairness must be clearly defined, with controls in place to identify and mitigate harmful biases or discriminatory outcomes. These safeguards should be reinforced by appropriate human oversight, reviewed regularly to ensure it remains effective.
In addition, institutions should manage third-party AI with controls that are adequate for the risk profile of each use case, carefully select AI algorithms and data features based on objectives and risks, and conduct thorough evaluation and testing proportionate to risk materiality.
Technology and cybersecurity risks should be addressed through secure and resilient infrastructure, and the AI development process should be well documented to support reproducibility and auditability. Prior to deployment, independent reviews and cybersecurity assessments should be conducted, and after deployment, robust monitoring and periodic reviews of aggregate risks should be maintained.
Finally, comprehensive change management processes, including clear procedures for decommissioning AI systems, are necessary to ensure ongoing risk control and compliance.
Financial institutions should ensure that personnel involved in developing, deploying, and maintaining AI systems possess the necessary competence and conduct, supported by appropriate recruitment, training, and resources proportionate to the risk profile of each AI use case.
Regular reviews should be conducted to keep skills and capacity up to date, especially as AI technologies evolve.
Additionally, institutions must ensure that their technology infrastructure is robust, resilient, and secure, meeting the performance and risk management needs of AI systems in line with relevant regulatory and industry standards.
The AIRM Guidelines are designed to provide a structured approach for financial institutions to navigate the complexities of AI deployment. They are intended to apply to all financial institutions, with implementation commensurate with the size and nature of each institution's activities.
Here is how the consultation and implementation process is expected to unfold:
MAS is soliciting feedback from various stakeholders, including financial institutions, until 31 January 2026. This consultative approach aims to ensure the guidelines are practical and relevant to industry needs.
MAS proposes a 12-month transition period following the issuance of the AIRM Guidelines, allowing institutions to gradually align their practices with the new standards without disrupting operations.
The AIRM Guidelines represent a significant advancement in ensuring that financial institutions can leverage AI technologies while effectively managing the associated risks. By establishing clear expectations for governance, risk assessment, and ethical practices, MAS aims to create a safe and responsible AI environment in Singapore's financial sector.
As stakeholders engage in the consultation process, their insights will be invaluable in shaping a framework that balances innovation with robust risk management practices.
Singapore law aspects of the article were written by the team from Virtus Law LLP (a member of the Stephenson Harwood (Singapore) Alliance).