TOO SIMPLE TO BE REAL?
Updated: Mar 17
I am not bringing breaking news when I state that banks and bankinsurers today face two burning challenges. The first relates to regulatory compliance, the second to the disruption of existing business models and revenues by new, mainly digital players. These new players target specific business domains such as payments, loans, securities trading and simple insurance policies.
The common denominator of both challenges is DATA.
Considering compliance first, we see high and still increasing pressure from the regulators on the following:
Reporting (financial, transactions,…)
Market Abuse Detection and reporting
Customer identification (private and corporate)
MiFID (Best Execution, Transaction Cost Analysis,…)
In order to compete properly with the digital players, banks absolutely need to provide customers with high-quality, user-friendly and responsive digital channels.
Fortunately for them, banks and particularly bankinsurers have two major advantages over the new challengers:
A complete service and product offering (payments, securities trading, investments, loans, insurance policies, leasing….)
Massive amounts of client data, historical and real-time, in their records.
So, the good news is: the bankers have all the data needed to satisfy regulators and clients, more than anybody else!
However, the bad news is: the bankers do not master their data!
Banking data sits in multiple systems, often very old ones, and the data itself can be:
static or streaming
structured or unstructured
inconsistent across different sources
hidden in forgotten databases
And as if this were not complicated enough, it can be really difficult to find out where the data originally came from and how it relates to data in other systems and databases…
So, the two key questions are:
how can bankinsurers capitalize on their biggest asset and competitive advantage, in a timely fashion?
how can they avoid deep cuts in profits due to penalties and loss of market share?
We think there are only three steps to take:
DISCOVER the data and find out everything there is to know about it: create a complete metadata repository!
For every single use case (monitoring, reporting, transactional, whatever): CONFIGURE and store the correct data view, and make it reusable and automated, using the metadata stored during the discovery step.
For every single use case, DELIVER the correct set of data, i.e. all relevant data and only relevant data, harmonized and understandable for the analyst, reporting tool or app, at the required speed.
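To make the three steps concrete, here is a minimal, purely illustrative sketch in Python. All system and field names (`core_banking`, `cust_no`, and so on) are hypothetical; a real metadata repository would of course be a governed catalogue, not a dictionary, but the mechanics are the same: discovered mappings drive configured views, which drive delivery.

```python
# Step 1 - DISCOVER: a metadata repository mapping each source system's
# raw field names onto one harmonized vocabulary (all names hypothetical).
METADATA = {
    "core_banking": {"cust_no": "customer_id", "bal": "balance"},
    "insurance_legacy": {"CLIENT_ID": "customer_id", "POLICY_PREMIUM": "premium"},
}

# Step 2 - CONFIGURE: a reusable view is simply a stored selection of
# harmonized fields for one use case (here, a hypothetical report).
REPORT_VIEW = ["customer_id", "balance", "premium"]

def harmonize(system: str, record: dict) -> dict:
    """Rename a raw record's fields using the metadata repository."""
    mapping = METADATA[system]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

def deliver(view: list, records: list) -> list:
    """Step 3 - DELIVER: keep all relevant, and only relevant, fields."""
    return [{f: r[f] for f in view if f in r} for r in records]

raw = [
    ("core_banking", {"cust_no": "C42", "bal": 1500.0}),
    ("insurance_legacy", {"CLIENT_ID": "C42", "POLICY_PREMIUM": 89.9}),
]
report = deliver(REPORT_VIEW, [harmonize(s, r) for s, r in raw])
print(report)
```

Because the view and the mappings are stored as data rather than hard-coded into each report, a new use case only requires configuring a new view, which is exactly the reusability the second step is after.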
Does this sound too simple to be real? An old proverb, often attributed to Shakespeare and Cervantes, answered that question ages ago: “The proof of the pudding is in the eating!”