BNP Paribas focuses on 100 use cases

The bank uses generative artificial intelligence to assess its customers against environmental and social criteria, and also to optimize its processes.

In terms of artificial intelligence, BNP Paribas has drawn up a strategic plan for 2025 with the goal of reaching 1,000 use cases in production, which should ultimately represent €500 million in value. The bank has 700 data scientists and AI experts who contribute to this roadmap in conjunction with its business lines. With the advent of generative artificial intelligence at the end of 2022, BNP Paribas launched a project to develop around a hundred additional, exploratory use cases in this area.

“When ChatGPT was launched, we had a reflex of caution, particularly due to the confidential nature of our customers' data. We decided to block the public ChatGPT channel to avoid any information leakage and created 100% secure solutions,” explains Hugues Even, CDO of BNP Paribas. These solutions are intended for deployment either in-house or in the cloud. On the generative AI side, BNP Paribas favors open source large language models (LLMs) such as Llama or Mistral. Upstream, these LLMs go through a series of security tests to ensure they contain no malicious code, malware or backdoors.

In terms of use cases, BNP Paribas unsurprisingly relies on RAG (retrieval-augmented generation). “For example, we use LLMs to query document databases on the fly to assess our clients against ESG criteria (environmental, social and governance, editor's note),” explains Hugues Even. The bank assesses its customers on the basis of questionnaires of 40 to 50 questions, depending on the sector of activity, covering various indicators: greenhouse gas emissions, decarbonization targets, etc. The LLM looks for the answers to these questions in various sources: the client's CSR report, its annual report, its website, press articles… RAG makes it possible to complete the questionnaire semi-automatically. “An analyst then systematically intervenes to verify that the information produced is coherent and accurate,” insists Hugues Even.
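The semi-automatic flow described above (retrieve relevant passages from source documents, draft an answer, then hand it to an analyst) can be sketched minimally as follows. This is an illustrative outline, not BNP Paribas's actual system: the naive keyword-overlap retrieval stands in for a real embedding-based retriever, and the documents and question are invented.

```python
# Minimal sketch of a RAG-style questionnaire step: retrieve evidence,
# then flag the draft for mandatory analyst review.

def retrieve(question: str, passages: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the question
    (a real system would use an embedding-based retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(
        passages,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def draft_answer(question: str, passages: list[str]) -> dict:
    evidence = retrieve(question, passages)
    # In the real pipeline an LLM would generate the answer from this
    # evidence; here we just bundle it with a human-review flag, since
    # the article stresses that an analyst always verifies the output.
    return {"question": question, "evidence": evidence, "needs_review": True}
```

The `needs_review` flag reflects the article's point that the automation is only semi-automatic: every generated answer passes through an analyst.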

Avoid hallucinations

“We adjust the prompts to avoid hallucinations by adding instructions like: 'If you don't know, don't invent'. But also by tuning the AI so that it is not creative but remains formal,” Hugues Even comments. “Ultimately, what is complicated is achieving a system with sufficiently structured documentation to be as accurate as possible, providing targeted links to the sources of the content used.”
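The two controls mentioned here (an explicit "don't invent" instruction and a non-creative, formal setting) might be combined in a request like the sketch below. The payload shape mirrors a generic chat-completion API; the field names and wording are illustrative assumptions, not a specific vendor's interface.

```python
# Sketch of the two hallucination controls from the quote above:
# an explicit refusal instruction in the system prompt, and
# deterministic (temperature 0) sampling for formal, non-creative output.

def build_request(question: str, context: str) -> dict:
    system = (
        "Answer only from the provided context. "
        "If you don't know, don't invent: reply 'NOT FOUND'. "
        "Cite the source passage for every claim."
    )
    return {
        "temperature": 0.0,  # non-creative: always pick the most likely token
        "messages": [
            {"role": "system", "content": system},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    }
```

Asking the model to cite its sources also supports the goal stated above of providing targeted links to the content used.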

In another example, BNP Paribas uses RAG to query its body of process documentation. Starting from a business question, related to a given investment for example, RAG gathers possible answers from a list of processes. “Where traditional AI consists of small models specialized locally on shareable functions, generative AI is based on significantly larger models, which discourages the use of overly fragmented LLMs,” adds Hugues Even.

When to choose the cloud over in-house deployment? “On secure provider tenants, the bank will deploy public data. For confidential, 'privileged' or sensitive information, we will instead go the on-premises route,” Hugues Even answers.

The experiments the bank is running allow it to measure the capacity required to train its models, manage RAG pipelines, and run LLMs in production. “Models like Mistral's medium or small variants interest us because they are efficient while being much more frugal in terms of machine capacity. We estimate that 80% of use cases can be handled by medium-sized models,” confides the CDO.

“We find the open source foundation models to be efficient enough, compared to the cost and footprint of developing from scratch.”

Beyond RAG, BNP Paribas also generates content from structured databases, a use case applied to asset management. For each fund it markets, BNP Paribas produces semi-automatic performance analyses, generally delivered on a monthly or quarterly basis.
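Generating commentary from structured data works differently from RAG: the figures come straight from a database, and the text is built around them. A minimal sketch, with an invented fund name and invented figures, could look like this — a template-driven stand-in for what an LLM would phrase more fluently.

```python
# Illustrative sketch of structured-data-to-text for a fund performance
# note. The fund name and return figures are hypothetical; a production
# system would pull them from the fund database and let an LLM draft
# the surrounding commentary.

def performance_comment(fund: str, ret_pct: float, bench_pct: float) -> str:
    delta = ret_pct - bench_pct
    direction = "outperformed" if delta > 0 else "underperformed"
    return (f"{fund} returned {ret_pct:+.1f}% this quarter and "
            f"{direction} its benchmark by {abs(delta):.1f} points.")
```

Because every number is read from structured data rather than generated, this kind of report avoids the hallucination risk discussed earlier, while still needing review before monthly or quarterly delivery.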

Unlike Crédit Mutuel Arkea, BNP Paribas has not created its own foundation model. “We find open source foundation models to be quite cost-effective compared to the cost and footprint that developing from scratch would entail,” explains Hugues Even. “We manage to get great results with open source foundation models, adapting them to our own content.”

On the IT side, BNP Paribas' IT teams see the LLM as a productivity tool. A tool designed to code, document code, or even translate an application originally written in an older generation language into a newer language. “These experiments give good results,” says Hugues Even.

Monitoring the compliance of operations

At BNP Paribas, LLMs have also brought great progress on the speech-to-text front. Their main advantages in this area: lower error rates, but also the shift towards multilingual models. “This technology is an important topic for 2024. It will allow us to better support our call center teams as well as our advisors on the phone with customers,” explains Hugues Even. “It helps in the in-depth analysis of the quality of the dialogue, but also in identifying key themes, intentions, commercial opportunities and, more generally, the level of customer satisfaction, by generating call reports.”

LLM-based speech-to-text will also enable more efficient control of operations and regulatory compliance. “Voice represents a significant mass of untapped data that we will be able to process with a speech-to-text LLM,” notes Hugues Even.

Regarding generative artificial intelligence, BNP Paribas has set up dedicated group-level governance for the assets it is developing in this area. This governance already applies to the company's AI production platform in the cloud, which is already deployed, and will soon be extended to its on-premises implementation.
