ARTIFICIAL INTELLIGENCE APPLICATIONS IN FINANCIAL SERVICES
QUESTIONS FOR BOARDS
Given the financial implications, companies should ensure that senior management and the board have sufficient understanding of AI and other technology used in the business to provide proper oversight. This is particularly important given the increasing expectations for board directors to oversee material issues that affect a company’s long-term value. The board is “responsible for determining the nature and extent of the significant risks it is willing to take in achieving its strategic objectives,” according to the UK Corporate Governance Code.6 It “should maintain sound risk management and internal control systems”7 to ensure that the risk framework is sufficiently up to date, and that the entity’s risk appetite is appropriately set, monitored, and communicated. Decisions about AI, and its implementation and use, must take place within a risk management framework that captures changes to the business. Whether the framework follows the International Organization for Standardization (ISO), the Committee of Sponsoring Organizations (COSO), or another model, it will cover four main activities: risk identification, risk assessment, risk mitigation, and risk monitoring. These will be complemented by early intervention, incident preparedness, crisis response plans, and training.
In addition to the specific questions on AI applications outlined in “How is AI applied in financial services?”, we would expect board directors to address the following questions:
• What is the company’s AI footprint?
• Does the board have any oversight of the company’s use of AI?
• If so, what specific expertise enables the board to oversee the use of AI?
• How does the board oversee the use of AI? What are the related documents that the board reviews? What questions does the board pose to the management team?
• Does the company have a set of AI governance principles? If so, how are these implemented? How does the board assure itself that these principles are fit for purpose and actually implemented?
• Does the board have the appropriate skills and expertise to oversee the risks and opportunities arising from AI? If not, does it at least have access to such skills and expertise?
• Does the company engage with policymakers and other relevant stakeholders on AI governance?