The Consumer Financial Protection Bureau (CFPB) is closely tracking how generative AI technology such as ChatGPT, when used by banks, could undermine customer care or create new risks, the agency's director, Rohit Chopra, said on April 25.
The CFPB is looking ahead to the future of banking, whether in the metaverse or some form of augmented reality, and is already seeing some of the building blocks in place. The agency is also examining how generative AI could undermine or create risks in customized customer care, given the possibility that biases are introduced or incorrect information is presented.
Chopra's remarks came before the bureau announced an interagency initiative alongside the Justice Department, the Equal Employment Opportunity Commission, and the Federal Trade Commission. The effort aims to crack down on unchecked AI in lending, employment, and housing.
In a joint statement, the agencies committed to enforcing their respective laws and regulations against discriminatory practices by companies deploying AI technology.
Chopra added that unchecked AI poses a grave danger to civil rights in ways that are already being felt. Tech companies and financial institutions, he said, are amassing huge amounts of data about people and using it to make decisions that affect their lives, including whether they get a loan or which advertisements they see.
The interagency statement comes as the growth of machine learning and generative AI, and the attention surrounding ChatGPT, have raised questions about security and bias across a range of industries. The CFPB is already examining how financial firms are using generative AI and how they may deploy the technology in the years ahead. Chopra said he believes generative AI will mostly affect how much people can trust certain messages. The CFPB is also working to encourage tech whistle-blowers to raise the alarm when their own company's technology may be violating the law.