Predictive Models: Right Place + Right Time = Better Decisions

JJ Jagannathan | April 06, 2015

Predictive modeling is a powerful tool that can help carriers make better business decisions in every segment of the P&C insurance value chain. Every day we come across industry statistics on how much P&C carriers plan to increase their investment in predictive analytics and where they plan to put that money to work.

The ‘how we do it’ is just as important as the ‘what we do.’ The efficiency with which a predictive analytics project is run matters more than being able to say, “Yes, we do predictive analytics because it is a recommended industry best practice.” When I stepped back to think about how we run predictive modeling initiatives today, I realized there are plenty of things we can do to reduce the total cost of ownership and improve the impact on business decisions. Over the last several years, I have watched multiple P&C insurance predictive analytics projects suffer from the following efficiency gaps:

Time to Get Modeling Data

The time it takes for the modeling teams to pull policy, claims, billing, and external data from the core systems and data warehouses to build the model far exceeds the time that the teams actually spend on data preparation and model development work. The inefficiencies around pulling partial data, wrong data, and historical data from multiple systems with completely different data formats can impact the quality of the model and the time to create the initial model.

Most teams spend at least two to three months extracting the raw transaction data. Any delays in extracting the data leave teams with very little time for the more important work of validating target data availability, understanding sample bias, and identifying good data surrogates.

Time to Deploy the Initial Model

Even after the teams overcome the data challenge, and even if they are quick to develop a version 1 candidate model that is ready for deployment, the next big delay is the long wait to get access to IT resources. In most situations, the IT team’s priorities are filled 18 to 24 months in advance, and you have the following options: just wait; demonstrate a significant ROI multiple and hope for some luck; or have the business leader with the biggest title and influence champion your project up the IT prioritization queue.

You might have the most sophisticated algorithm, but if you cannot operationalize your model quickly, it delivers no benefit to the organization. Division leaders who want to move quickly to transform their business operations with analytics struggle with this IT resource scarcity. After teams get the necessary IT resources aligned, it typically takes about four months to deploy the initial model.

Whenever predictive models are deployed in a model-hosting application silo that is completely disconnected from the core systems, the models cannot take full advantage of the real-time data stream or the opportunity to embed analytics directly into the business workflow to drive better decisions. It should be noted that carriers on legacy core systems are often forced to choose analytics application silos because their core systems don’t provide the support needed to run predictive modeling initiatives.

The modeling teams usually develop multiple models and test them against a holdout dataset to select the one that performs best and is easiest to implement. Because of all the process constraints around model deployment, the teams are forced to choose just one model for the initial rollout, and banking on a single model limits the probability of success.
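For readers unfamiliar with the holdout comparison step, here is a minimal, hypothetical sketch (not drawn from any carrier's actual process) of training a few candidate models and picking a champion by holdout AUC, assuming Python with scikit-learn and generic feature/target inputs:

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

def select_champion(X, y, seed=42):
    # Hold out 30 percent of the data purely for candidate comparison.
    X_train, X_hold, y_train, y_hold = train_test_split(
        X, y, test_size=0.3, random_state=seed, stratify=y)

    # Two illustrative candidates; real projects would tune many more.
    candidates = {
        "logistic": LogisticRegression(max_iter=1000),
        "gbm": GradientBoostingClassifier(),
    }

    scores = {}
    for name, model in candidates.items():
        model.fit(X_train, y_train)
        scores[name] = roc_auc_score(y_hold, model.predict_proba(X_hold)[:, 1])

    # The champion is the candidate with the best holdout AUC.
    return max(scores, key=scores.get), scores

The comparison itself is cheap to rerun; it is the downstream deployment constraints, not the modeling step, that force teams to bet on a single model.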

You can find a number of stray models within the organization that never get deployed and integrated into the regular business workflow, but are technically considered to be in ‘production’ and are run daily from an expert’s personal desktop. This behavior is primarily driven by the process hassles around model deployment and internal system limitations.

Time to Enhance the Model

After the version 1 model has been in production for some time, and based on the model’s performance in the field, the project teams will decide to enhance the model by adding external datasets, adjusting cut points, overlaying new business rules, or adjusting model coefficients. The time it takes to pull the gap data (the data needed to refresh or rebuild the model), and scenarios where that gap data has to come from a completely different transaction system than the one used to develop the version 1 model, can impact model quality and cause delays.

Next, the project teams have to go through one more round of delays to get IT resources before deploying the enhanced version 2 model. Typically the model refresh, enhancement, and redeployment efforts can take about six to eight months to complete, but there are plenty of carriers that choose to do this once a year.

Time to Learn Model Performance Results

In most cases, the business leadership and modeling teams don’t have a view into the model’s performance in the field for several months, and in some cases up to a year, and I am not talking about rating models here. Modeling teams try to share as much information as possible with the business teams, but they often get stuck waiting for access to the field performance data. The approach of rolling out one model during the initial deployment and then waiting for answers extends the learning curve.

Moving from Predictive Analytics 1.0 to Predictive Analytics 2.0

I believe we have to move beyond the Predictive Analytics 1.0 phase of getting started on this transformative analytics journey; we need next-generation capabilities to overcome the current efficiency gaps. Compressing the time-to-action in the four areas of a predictive modeling project outlined above is critical to lowering the total cost of delivering a predictive analytics project and to making better business decisions.

Here are some options to consider:

  • Look at secure and reliable public or private cloud-hosted analytics platforms that can be integrated with your core systems and data warehouses to overcome data extraction and model deployment delays. An instant-on predictive analytics capability is required to compete effectively in the market, and your IT department leaders will be able to help you select the right solution architecture.
  • Deploy predictive models closely integrated with your core systems to take advantage of the real-time data stream and make your analytics more actionable. The ability to have the right data at the right time is critical, and embedding predictive models within the core systems gets you closer to that optimized state.
  • Eliminate stray models that are run from an expert’s personal desktop for production usage. These are single points of failure, and it is critical that they get integrated into the mainstream business workflow. Wrong business decisions driven by human error in these suboptimal processes can cause financial impact and an unpleasant customer experience.
  • Create a data-driven testing culture and the ability to vet multiple models at the same time in limited pilot rollouts to identify the best model for wider production usage. We can take cues from how world-class e-commerce organizations use A/B testing and multivariate testing to run quick experiments on customer call-to-action response and decide the course of action that maximizes results (a minimal champion/challenger routing sketch follows this list). The mindset of rapid iteration and learning is key.
  • Democratize the information on model performance results so all the stakeholders have a full view of the predictive analytics program’s performance. Failing fast and learning quickly is what we need here. Not knowing for months whether your model is working, and hoping for the best outcome, is not the way to run a predictive analytics project. It is much easier to track model performance results when the model is integrated with the core system workflow. Monthly dashboards that track model performance metrics can be extremely helpful (a minimal monthly rollup sketch also follows this list).
  • Include the right external datasets in your version 1 modeling dataset; don’t rely only on internal data and push external data evaluation to a later phase. Having the right external datasets can help boost your version 1 model’s performance, compress the development timeline, and help create a competitive differentiator. There are plenty of new data streams from smart home sensors and social media exhaust that have demonstrated predictive signal and are already coming into the carrier’s business workflow. It is critical to have the right data management framework in place to support quicker adoption of these newer datasets into your modeling initiatives.
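As a reference point for the pilot-testing bullet above, here is a minimal, hypothetical champion/challenger sketch in Python. The model names and traffic splits are illustrative assumptions, not a prescribed setup: each incoming policy or claim is routed to one of several candidate models by a stable hash of its identifier, so outcomes can be compared fairly later.

import hashlib

# Illustrative traffic split: most records go to the current champion,
# a small share to each challenger being piloted.
PILOT_SPLITS = {"champion": 0.80, "challenger_a": 0.10, "challenger_b": 0.10}

def assign_model(record_id: str) -> str:
    # Hash the record identifier to a stable fraction in [0, 1),
    # so the same claim or policy always routes to the same model.
    bucket = int(hashlib.sha256(record_id.encode()).hexdigest(), 16) % 10_000 / 10_000
    cumulative = 0.0
    for model_name, share in PILOT_SPLITS.items():
        cumulative += share
        if bucket < cumulative:
            return model_name
    return "champion"  # fallback for floating-point edge cases

# Example usage: assign_model("CLM-2015-000123") returns one of the model names above.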
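And for the monitoring bullet, a minimal sketch of the kind of monthly rollup that could feed such a dashboard, assuming a pandas DataFrame with hypothetical columns score_date, model_score, and outcome (1 if the predicted event occurred, 0 otherwise):

import pandas as pd
from sklearn.metrics import roc_auc_score

def monthly_performance(scored: pd.DataFrame) -> pd.DataFrame:
    # Group scored records by calendar month and compute simple health metrics.
    scored = scored.copy()
    scored["month"] = pd.to_datetime(scored["score_date"]).dt.to_period("M")
    rows = []
    for month, grp in scored.groupby("month"):
        if grp["outcome"].nunique() < 2:
            continue  # AUC is undefined when only one class is observed
        rows.append({
            "month": str(month),
            "records_scored": len(grp),
            "observed_rate": grp["outcome"].mean(),
            "auc": roc_auc_score(grp["outcome"], grp["model_score"]),
        })
    return pd.DataFrame(rows)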

By no means is this a comprehensive list of recommendations, but it is a start. Fixing efficiency gaps in predictive analytics projects is more about ‘getting a solid foundation in place’ before we go after bigger business problems and more powerful analytic techniques. Having the right data and the right model in the right place is key to making better business decisions. I invite readers to share what has worked well for them in the past and what we should be doing as an industry.

JJ Jagannathan is senior director of product management at Guidewire Software. He is responsible for product strategy for Guidewire Live, a cloud-hosted platform of instant-on P&C analytics apps. He can be reached via email at jjagannathan@guidewire.com

 

