A different take on data integrity

Accenture’s Technology Vision for Insurance 2018 study gave insight into how well insurance companies feel they can trust their information. According to the study, only 28 % of respondents reported validating their data sources, and 19 % felt that despite their best efforts they often fail to achieve this. These answers are to be expected considering the vast amount of data and the variety of data sources these companies handle on a daily basis.

So, what does this translate to in practice? The study states that 80 % of insurance executives believe shortcomings in the integrity of their information may expose them to data manipulation, biased decision making and broader problems with automated decision making, including artificial intelligence. And indeed, 32 % of respondents said they had been scammed with falsified sensor or location data, with an additional 34 % suspecting this to be the case without the capability to verify it.

It is a commonly shared view that in the coming years technologies such as A.I., blockchains and IoT sensors will also transform the insurance business. All of these technologies are based on the idea of collecting, transmitting and utilising vast amounts of data in all sorts of advanced ways. The study brings up the term “Data Veracity”, which refers to the correctness and trustworthiness of data, and identifies it as a key requirement for these advanced solutions to function properly. So how can we ensure that the data that is collected, transmitted and analysed actually reflects reality? Can we trust what our technology is telling us? Is the technology working in accordance with reality?

A sensor could provide us with data that is well within the parameters of reality. This data is then stored and transmitted, unchanged, via blockchain to an advanced A.I. that processes it further, starting the cycle all over again. Now imagine a situation where this data seems correct but is factually wrong. This could be due to human error, a computer glitch or, in the worst case, intentional manipulation. Multiply this by the number of cycles involved in processing and transmitting the data across different systems. Add to the mix the problems bound to occur through issues in interfaces, interpretation and extraction, which, let’s face it, are not going away any time soon at this pace of technological development. No wonder insurance executives are feeling overwhelmed.
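To make the point concrete, here is a minimal sketch of why “well within the parameters of reality” is not the same as “correct”. The sensor, values and thresholds are invented for illustration: a naive range check accepts a manipulated reading just as readily as a genuine one.

```python
# Hypothetical example: a naive plausibility check on a temperature sensor.
def is_plausible(reading_celsius: float) -> bool:
    """Accept any temperature within a broad 'realistic' range."""
    return -40.0 <= reading_celsius <= 60.0

true_reading = 21.5      # what the sensor actually measured
tampered_reading = 35.0  # manipulated, but still within the realistic range

assert is_plausible(true_reading)
assert is_plausible(tampered_reading)  # the check cannot tell them apart
```

Both readings pass, so every downstream system, however advanced, treats the tampered value as truth.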

So how are these challenges usually tackled? Despite advances in A.I., humans still have the upper hand in assessing context. “Does this make sense in the big picture?” is a question still most effectively answered by a person rather than a machine. However, humans are not well equipped to handle the huge volumes of data that are the trademark of today’s, and especially tomorrow’s, technology.

Tackling the challenges in data veracity requires contextual understanding, which comes from understanding the real-life processes that create the data in the first place. In practice, this means closely monitoring how the data behaves and how information accumulates, paying attention to these underlying processes. It requires some detachment from the world of technology and more focus on real life: the answer lies in bringing technology closer together with real-life processes.

Instead of trying to conquer a massive pool of data all at once, its correctness can truly be maintained by monitoring the individual processes, the streams, that bring the data into the pool. This will also eventually help clean the whole pool. Metaphorically speaking, instead of cleaning the water, efforts should be placed on finding the source of the pollution and fixing the core issue. Rather than merely minimising the impacts, this is a far more sustainable strategy for ensuring the integrity of information in the long run.
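As a rough illustration of the stream-level idea (the class, values and thresholds here are invented for the sketch, not a description of any particular product), a monitor attached to a single stream can compare each new reading against that process’s own recent behaviour, catching deviations that a pool-wide range check would miss:

```python
from collections import deque

class StreamMonitor:
    """Flag readings from a single data stream that deviate sharply from
    the process's recent behaviour (a simple rolling-mean check)."""

    def __init__(self, window: int = 5, max_jump: float = 5.0):
        self.history = deque(maxlen=window)  # recent accepted readings
        self.max_jump = max_jump             # largest plausible deviation

    def check(self, value: float) -> bool:
        if self.history:
            baseline = sum(self.history) / len(self.history)
            if abs(value - baseline) > self.max_jump:
                return False  # implausible jump for this process
        self.history.append(value)
        return True

monitor = StreamMonitor()
results = [monitor.check(v) for v in [20.0, 20.5, 21.0, 35.0, 21.2]]
# → [True, True, True, False, True]: 35.0 is within a "realistic" range
# overall, but implausible for this particular stream's behaviour
```

The point of the sketch is the vantage point, not the statistics: the same reading that passed the pool-wide check is rejected here because it is judged against the process that produced it.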

If you are interested in how we employ this approach in our mission to bring automated data validation and assurance to everyone, see what we have to offer at www.huginn.com