The future of AI in finance: Israel sets the stage for new regulatory framework
A special team releases interim findings on regulating AI in the financial sector, from transparency to competition risks.
A special inter-ministerial team has published an interim report inviting public comments on the use of artificial intelligence (AI) in the financial sector. Established at the end of 2022, the team aims to prepare for the growing integration of AI in finance and to establish guiding principles for financial regulation on the subject.
While AI presents many benefits in the financial domain, there are concerns over its potential misuse, including risks of fraud, disinformation, and privacy breaches, which underscore the need for regulatory oversight. The team comprises representatives from the Ministry of Justice, the Ministry of Finance, the Competition Authority, the Securities Authority, the Capital Market Authority, and the Bank of Israel. The report is open for public comment until December 15.
The team’s primary position is that AI should be encouraged in the financial sector due to numerous advantages, such as reducing operating costs, enhancing product and service quality, expanding financial accessibility, and assisting financial entities with compliance and regulatory enforcement. However, the use of AI also brings risks in terms of transparency, privacy, and reliability. Additionally, specific risks to financial stability, such as the potential for AI to trigger harmful "herd behavior" (e.g., en masse buying or selling of securities or sudden withdrawals from banks), have been identified. Cybersecurity risks, financial fraud, disinformation, and competition concerns—particularly if access to advanced AI is restricted to dominant financial entities—are also noted.
As outlined in the report, the team emphasizes a risk-based regulatory approach where the level of oversight is matched to the importance of the financial service and its impact on the customer. For example, an AI chatbot providing basic customer service would face limited regulatory requirements, while an AI-powered credit underwriting system, with a significant effect on individuals, would be subject to more stringent regulation.
One recommendation addresses the "black box" problem—the difficulty in fully explaining how AI systems arrive at their decisions. The team suggests differentiating between general transparency around how the AI system operates and specific explanations for individual decisions. They recommend a general disclosure requirement for all AI systems, with additional specific disclosure requirements based on factors such as human involvement in the process.
Human involvement is a key consideration in AI use: while greater human oversight can mitigate risks, it can also reduce AI’s efficiency. To balance these concerns, the team proposes a "graded model of human involvement," combining general oversight with direct human involvement in medium- to high-risk decision-making.
The report identifies three financial areas where AI is already being applied: investment advice and portfolio management, banking credit, and underwriting and insurance.
In investment advice and portfolio management, AI offers the advantage of expanding access to investment services. However, the report notes risks such as failure to meet fiduciary responsibilities, "gamification" that may encourage risky behavior, potential declines in service quality, and dependency on a few dominant systems. A key recommendation is to update the 2016 "online services instruction" to address both terminology (e.g., defining "generative" and "explanatory" AI) and substantive requirements, such as clarifying the roles of licensees in evaluating system outputs.
For credit underwriting, the team suggests relying on existing regulations, which are deemed suitable for addressing AI challenges. Nonetheless, there is concern about "credit pushing," where AI might promote excessive borrowing. To mitigate this, disclosure requirements on the use of AI in credit underwriting are recommended to ensure transparency for both customers and regulators.
In underwriting and insurance, AI can enhance the alignment of premiums with risk through advanced modeling. The recommendation is to maintain the current regulatory framework for risk management and consumer protection, including privacy safeguards, while updating it as needed to address AI-specific risks, including model risk management, disclosure, and informing customers when AI is used in customer interactions, such as with chatbots.