
Question

Recently, forward-looking insurance companies have been digitalizing as many steps of the claims management process as possible. A major impetus has been cost reduction, often through increased efficiency and accuracy in processing claims and through shorter settlement timelines.

In response to clients in its insurance market, the ELIS Innovation Hub initiated an Artificial Intelligence (AI) project to explore the viability of anonymizing sensitive information on insurance claims. The ELIS Innovation Hub, a business unit of the Italian consulting company CONSEL, has over 60 client companies and is committed to innovation, consulting, and training. The AI project was viewed as the first step in an end-to-end digital transformation of the insurance claim management process, intended to make operations more streamlined and personnel allocation more efficient. Other goals were to make the claim management process more effective in terms of time and accuracy, as well as more satisfying to customers because of its personalized and more secure services. Digitally transforming the insurance claims management process is especially complex when its intent is to have the customer take an active part in it and when AI deep learning technology is involved.

This particular project was designed to build a lightweight, scalable prototype (called a use case) that detects and removes sensitive data from claim images. The data needed to be anonymized to comply with the European Union General Data Protection Regulation (GDPR), and the process needed to be automated. Automating the process could shrink the average time to process claims from days to hours. The prototype was built using open-source software and an "over-the-top" cloud platform. It applied a deep-learning-based, six-step workflow developed by the ELIS Innovation Hub:

Problem setting: The problem to be solved needed to be clearly defined from both a business and a technical point of view.
The major problem was identifying sensitive objects in insurance claim forms. This involved the subtasks of figuring out which items were sensitive and specifying their location through a coordinate system. Sensitive data were defined to be license plates, people's faces and shapes, and vehicle identification numbers (VINs). Using the coordinates provided through localization, a box could be drawn around the sensitive data, which could then be blurred or removed from the image.

Functional model analysis, selection, and evaluation: This step focused on identifying algorithmic models for solving the problem and then selecting the one that best met business objectives. The most appropriate functional architecture and relevant performance evaluation metrics needed to be identified and selected as well. The RetinaNet architecture was selected on the basis of performance accuracy, ease of implementation with Python/TensorFlow and high-performance cloud platforms, and ability to handle class imbalance in the available data set through the "focal loss" function. The focal loss function makes it possible to concentrate the algorithm design on learning from "real" data. The training and validation losses were selected as the in-process performance indicators, whereas post-process indicators included Recall and Precision. It was especially important to maximize Recall, to minimize false negatives (FN), and to reduce the need for humans in the process.

Data preparation: Data preparation involved Data Collection and Analysis, Data Annotation, and Data Organization. In Data Collection and Analysis, the data (i.e., 14,000 claim images from business units in several European countries) to be used in training and validating the deep learning model were already collected and available to the developers. The claim image data were analyzed to understand the distribution of classes within the dataset.
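The "focal loss" the case credits with handling class imbalance is a published modification of cross-entropy (introduced with RetinaNet). A minimal NumPy sketch, using the commonly cited default values gamma=2 and alpha=0.25 as assumptions (the case does not state the project's settings), looks like this:

```python
import numpy as np

def focal_loss(p_t, gamma=2.0, alpha=0.25):
    """Focal loss for one example, where p_t is the model's predicted
    probability for the true class:

        FL(p_t) = -alpha * (1 - p_t)**gamma * log(p_t)

    The (1 - p_t)**gamma factor shrinks the loss on easy, already
    well-classified examples, so training concentrates on the rare,
    hard objects (license plates, faces, VINs) -- the class-imbalance
    remedy the case describes.
    """
    p_t = np.asarray(p_t, dtype=float)
    return -alpha * (1.0 - p_t) ** gamma * np.log(p_t)
```

With gamma=0 and alpha=1 this reduces to ordinary cross-entropy; raising gamma increases how aggressively easy examples are down-weighted.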
In Data Annotation, the claim images were labeled for subsequent AI training and learning phases. The labeled data were entered into the Analytical Base Table for a detector model to process during training. Annotations were done manually using open-source software called LabelImg.

During Data Organization, the data set was partitioned into separate training, validation, and testing sets that were fed to the deep learning architectures. Specifically, of the 6,500 claim images with sensitive data, 70% were used for training, 25% for validation, and 5% for testing. The training set was composed in such a way that the model could generalize image content and simultaneously learn the specific features of the sensitive objects to be recognized. Data Annotation and Data Organization were carried out incrementally in batches until all collected image data were used. The manual annotation of the training and validation sets was the most time-consuming aspect of the project.

Model set-up: In the model set-up step, the underlying model was implemented and refined. In this AI case, a machine-learning algorithm to detect sensitive data in images was used. In particular, a convolutional neural network, designed according to the RetinaNet architectural framework, was trained on several samples of images, and its algorithm was fine-tuned.

Model training, validation, and testing: In this step, training, validation, and testing were repeated three times, until the measured values for the most relevant performance metrics satisfied the insurance company's requirements. The training sets were expanded with each iteration. A deep neural network backbone (ResNet-50) was used to extract sensitive features from the claim images for each class (e.g., people's faces, VINs). To do so, pretrained weights were obtained through the Keras library API so that the model could start from a statistical sample with a feature distribution representative of reality.
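The 70% / 25% / 5% partition described in the Data Organization step can be sketched as a simple shuffled split. The function name and the fixed seed are illustrative, not from the case:

```python
import random

def split_dataset(items, train_frac=0.70, val_frac=0.25, seed=42):
    """Partition annotated claim images into training / validation /
    testing sets using the case's 70% / 25% / 5% proportions.
    Shuffling first keeps each partition representative of the
    overall class mix."""
    items = list(items)
    random.Random(seed).shuffle(items)          # reproducible shuffle
    n_train = int(len(items) * train_frac)
    n_val = int(len(items) * val_frac)
    return (items[:n_train],                    # 70% training
            items[n_train:n_train + n_val],     # 25% validation
            items[n_train + n_val:])            # remaining ~5% testing
```

For the 6,500 sensitive-data images in the case, this yields 4,550 training, 1,625 validation, and 325 test images.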
Also, the OpenCV Python library made it possible to blur (anonymize) the sensitive data.

Model deployment: This step involved the creation of a proof of concept (PoC) for a visual web application that claimants could use. The web application employed a user interface that allows claimants to upload their claim images. It relies on high-performing cloud platforms (e.g., Microsoft Azure, Google Cloud, or Amazon Web Services) to deal with scalability, as well as model training issues.

After three training iterations, the prototype (use case) proved to be very sensitive, with an average Recall of 94%. This means that the prototype identified 94% of the actual positives (sensitive data) correctly. The average Precision over the various classes (mAP) was 83%, which was deemed satisfactory by the insurance company. Other metrics were also good. The prototype results were deemed good enough to demonstrate the viability of a digital transformation of the claim management process. It was concluded that the prototype might be a viable first step in digitalizing the entire claim management process.

Sources: Primarily adapted from Alessandra Andreozzi, Lorenzo Ricciardi Celsi, and Antonella Martini, "Enabling the Digitalization of Claim Management in the Insurance Value Chain Through AI-Based Prototypes: The ELIS Innovation Hub Approach," in Digitalization Cases Vol. 2: Mastering Digital Transformation for Global Business, edited by N. Urbach, M. Roeglinger, K. Kautz, R. Alinda Alias, C. Saunders, and M. Wiener (Springer Nature Switzerland AG, Cham, Switzerland, 2021). See also Alessandra Andreozzi, R. Celsi, and A. Martini (2019), "Leveraging Deep Learning for Automated Image Anonymization in the Insurance Domain," R&D Management Conference, Paris; and J. Kremer and P. Peddanagari,
"How Insurers Can Optimize Claims," Ernst & Young, March 2, 2021, "How insurers can optimize claims: Automation and humans in the loop | EY - US" (accessed January 15, 2023); ELIS Innovation Hub, https://www.elis.org/eih/ (accessed January 14, 2023).

Discussion Questions

1. Compare and contrast the development methodology of the ELIS deep-learning-based, six-step workflow with the waterfall (Systems Development Life Cycle) methodology.
2. Was the ELIS deep-learning-based, six-step workflow a good approach for this project? Provide a rationale for your response.
3. Discuss the advantages and disadvantages of developing a prototype in this case.
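The anonymization step in the case applied OpenCV blurring to the detector's bounding boxes. A dependency-free NumPy stand-in shows the idea; the function name and the mean-fill "blur" are illustrative, not from the case:

```python
import numpy as np

def anonymize_region(image, box):
    """Anonymize one detected sensitive region.

    `image` is an H x W (grayscale) NumPy array and `box` = (x, y, w, h)
    is a bounding box from the localization step.  The ELIS prototype
    blurred such regions with the OpenCV Python library; here every
    pixel inside the box is simply replaced by the region's mean value,
    which is equally unreadable and needs only NumPy.
    """
    x, y, w, h = box
    region = image[y:y + h, x:x + w]
    image[y:y + h, x:x + w] = int(region.mean())
    return image
```

In the real pipeline, the box coordinates would come from the RetinaNet localization output, and an OpenCV call such as cv2.GaussianBlur would replace the mean fill.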
