Increased Regulatory Review Of Bank AI Highlights Third-Party Risk

As regulators sharpen their focus on how lenders use AI and machine learning, banks need to ensure they can adequately explain and oversee the models they deploy, especially when partnering with a vendor, according to Joe Sergienko, a managing director at Berkeley Research Group.

Does the bank understand what's happening inside the model, and can it sufficiently explain that? For AI and machine learning models, that's a difficult hill to climb, Sergienko added. In May, the Consumer Financial Protection Bureau (CFPB) issued a warning to financial institutions using AI or machine-learning algorithms to underwrite loans or issue credit, advising firms to be ready to explain to customers the precise reasons for denying a credit application.

Rohit Chopra, the CFPB's director, said in a statement that companies are not relieved of their legal obligations when they let a black-box model determine lending decisions. Every applicant has a legal right to a specific explanation of why their credit application was denied, and that right is not diminished simply because a company uses a complex algorithm it doesn't understand. According to Sergienko, as AI is used more frequently in financial services, banks must be able to justify an algorithm's outcomes not only to their clients but also to regulators, who will demand those explanations.

Banks are expected to make judgements based on the output of these models; therefore, Sergienko said, they need to understand what the systems are doing and be able to communicate it. How can a bank make a sound judgement if it doesn't grasp what the model is doing? Telling a regulator, this is the conclusion the model gave us, is not a reasonable answer.
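To make that explainability requirement concrete, here is a minimal sketch, not drawn from the article, of one way a lender could generate adverse-action reasons from an interpretable scoring model by ranking each feature's contribution to a denial. The feature names, synthetic data, and 0.5 approval cutoff are all hypothetical.

```python
# Sketch: deriving adverse-action reasons from an interpretable credit model.
# Feature names, data, and the approval threshold are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
FEATURES = ["credit_utilization", "delinquencies_24m", "income", "account_age_years"]

# Synthetic data standing in for historical applications (y = 1 means approved).
X = rng.normal(size=(1000, len(FEATURES)))
y = (X @ np.array([-1.5, -1.0, 1.2, 0.8]) + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def adverse_action_reasons(applicant, top_n=2):
    """Rank features by how far they pushed this applicant's approval odds
    below the population average: contribution = coefficient * deviation."""
    contributions = model.coef_[0] * (applicant - X.mean(axis=0))
    order = np.argsort(contributions)  # most score-reducing features first
    return [FEATURES[i] for i in order[:top_n] if contributions[i] < 0]

applicant = X[0]
if model.predict_proba(applicant.reshape(1, -1))[0, 1] < 0.5:  # hypothetical cutoff
    print("Denied. Principal reasons:", adverse_action_reasons(applicant))
else:
    print("Approved.")
```

A linear model makes per-feature attribution trivial; with a black-box model a bank would need a post-hoc attribution method to produce comparable reason codes, which is exactly the difficulty Sergienko describes.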

These fintech alliances can complicate the problem as more banks turn to third-party vendors to implement AI in their lending decisions, Sergienko said. Partnering with fintechs that specialise in AI may be the sector's shiny new object, but banks should be cautious given the fintechs' infancy and startup status.

Is the fintech expected to last? What happens if it suddenly goes bust? Sergienko emphasised the need for banks to have a business continuity strategy in place in the event a fintech goes out of business. It's also crucial that the fintech AI model a bank chooses to deploy has been externally validated, Sergienko added.

Since fintechs are still relatively new and aren't directly regulated by the Fed, FDIC, or OCC, banks must either demand that validation from their fintech partners or perform the evaluations themselves. When a vendor's algorithm is employed as part of an institution's Bank Secrecy Act and anti-money laundering programme, Sergienko said, upfront due diligence and ongoing vendor monitoring are crucial.
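As an illustration of what ongoing vendor monitoring might look like in practice, the sketch below computes a population stability index (PSI), a common drift metric, comparing a vendor model's current score distribution against the distribution captured at validation. This is an example of my own construction, not Sergienko's; the data are synthetic, though the 0.25 threshold is a widely used rule of thumb.

```python
# Sketch: monitoring a vendor model's output for drift via PSI.
# Baseline and current scores are synthetic stand-ins.
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between the validated baseline score
    distribution (expected) and the current vendor output (actual)."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf      # catch out-of-range scores
    e = np.histogram(expected, edges)[0] / len(expected) + 1e-6
    a = np.histogram(actual, edges)[0] / len(actual) + 1e-6
    return float(np.sum((a - e) * np.log(a / e)))

baseline = np.random.default_rng(2).beta(2, 5, size=10_000)  # scores at validation
current = np.random.default_rng(3).beta(2, 4, size=10_000)   # this month's scores
print(f"PSI = {psi(baseline, current):.3f}")  # rule of thumb: > 0.25 signals major shift
```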

If a bank suddenly can't access the model's output, or can't trust its accuracy, he said, that's a major compliance risk. One way a bank can add controls or monitoring is to run a more conventional model in parallel with the AI/machine-learning algorithm and compare their outcomes.
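Sergienko's parallel-run suggestion could look something like the following sketch, assuming synthetic data and an invented 5% escalation threshold: a conventional logistic regression scores the same applications as an ML challenger, and the disagreement rate is flagged for review.

```python
# Sketch of a parallel-run control: score the same applications with a
# conventional model and an ML challenger, then flag disagreements.
# Data, features, and the 5% alert threshold are made up for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 6))
y = (np.sin(X[:, 0]) + X[:, 1] + rng.normal(scale=0.5, size=5000) > 0).astype(int)
X_train, X_live, y_train, _ = train_test_split(X, y, test_size=0.2, random_state=1)

conventional = LogisticRegression().fit(X_train, y_train)        # benchmark scorecard
challenger = GradientBoostingClassifier().fit(X_train, y_train)  # "AI/ML" model

# Decide each incoming application with both models and measure disagreement.
disagree = conventional.predict(X_live) != challenger.predict(X_live)
print(f"Models disagree on {disagree.mean():.1%} of live applications")
if disagree.mean() > 0.05:  # hypothetical escalation threshold
    print("Escalate: send disagreeing applications to manual review")
```

The benchmark model acts as a sanity check: sustained disagreement doesn't say which model is right, but it tells the bank where to look.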

He continued, hopefully the AI/machine-learning model is superior. Sergienko said that when reviewing contracts between banks and fintechs, he frequently finds overly simplistic language that omits any requirement for controls or for validating the AI model. Because it's a requirement for the banks, they must also require it of their fintech partners, according to Sergienko.

According to Marcia Tal, a former Citi executive vice president whose company, PositivityTech, uses an AI predictive model to identify bias at financial institutions, as more banks implement AI in their lending and compliance operations, many organisations will need to step up their efforts to effectively manage the algorithms they use.