Thought Leadership
Model Validation Best Practices
Background
Inaccurate or incomplete models can impair decision-making by key stakeholders, lead to significant losses for the organizations that use them, and ultimately erode consumer confidence in financial institutions. Regulatory changes over the last decade have also increased the need for actuarial models to be validated. Compounding all of this, actuarial transformation efforts, including model conversions, mergers and acquisitions activity, and model upgrades, trigger the need for validation.
Hence, ensuring the accuracy, comprehensiveness, and actuarial soundness of models should be a top priority for institutions that rely upon their output. This is also consistent with numerous industry guidelines, regulations and standards of practice, such as ASOP 56. In this whitepaper, we set out best practice model validation techniques for actuaries, and address how AI can be used to support these techniques.
Model Validation
Model validation forms part of an organization’s model risk management framework and is the independent challenge and thorough review of an actuarial model for accuracy, completeness, compliance, ease of use, theoretical soundness, and goodness of fit.
An organization’s model governance framework will typically define the manner in which model validation should be performed. A best-practice validation program will clearly specify the validation scope and the series of steps applicable to the model under review. Unless exclusions are explicitly spelled out, all aspects of the actuarial model should be validated, including its accompanying tools, topside adjustments, and documentation.
Model validation findings should also be clearly categorized, with clear distinctions for unintentional flaws, simplifications, lack of conceptual soundness, methodology misinterpretations, and maintenance and documentation shortcomings. Certain companies allow the validation team to propose recommendations for remediation of findings.
Input Review Best Practices
Best-in-class model validation procedures will place significant emphasis on input validation, particularly ensuring these are accurate, compliant and fit for purpose. The nature of inputs into an actuarial model will vary depending on the model use case and product type, but might include product or service features, valuation, economic or projection assumptions, and inforce or new business data.
Inputs should be reviewed against independent and updated company source data, such as signed-off assumptions memos, experience study conclusions, treaty schedules, admin extracts, product illustrations, credit score cards or rate grids. Inputs should also be reviewed against industry and regulatory source data, such as valuation manuals and incidence tables published by trusted institutions. The actuary needs to confirm that the difference for each cell, entry, or table populating the model nets out to zero unless a difference is intended.
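The cell-level reconciliation described above can be sketched as follows. This is a minimal illustration, not production tooling; the table structure, mortality-rate values, and tolerance are all hypothetical.

```python
# Minimal sketch of a cell-level input reconciliation, assuming the model
# and source assumption tables are loaded as dicts keyed by (age, duration).
# All values and the tolerance below are hypothetical.

TOLERANCE = 1e-9  # differences must net to zero unless intended

model_table = {(45, 1): 0.00112, (45, 2): 0.00118, (46, 1): 0.00121}
source_table = {(45, 1): 0.00112, (45, 2): 0.00119, (46, 1): 0.00121}

def reconcile(model, source, tolerance=TOLERANCE):
    """Return cells where the model input differs from the source."""
    exceptions = {}
    for cell in sorted(set(model) | set(source)):
        diff = model.get(cell, 0.0) - source.get(cell, 0.0)
        if abs(diff) > tolerance:
            exceptions[cell] = diff
    return exceptions

print(reconcile(model_table, source_table))  # flags the (45, 2) cell
```

In practice the same pattern extends to full assumption tables loaded from the model and the signed-off source, with intended differences maintained on an explicit exception list.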
Inputs should also be reviewed for intuitive reasonableness. Identification of outliers using scatterplots and comparisons against prior period inputs are examples of basic ways to assess reasonableness. Reverse stress testing, which back-solves for the inputs needed to reach a pre-defined outcome, provides valuable insight to the model validator. Special attention should be paid to input data that has changed since the prior period due to a higher probability of unintentional error.
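A basic prior-period comparison of the kind described above can be automated along the following lines. The assumption names, values, and the 10% materiality threshold are illustrative only.

```python
# Illustrative reasonableness check: flag inputs that moved materially
# since the prior period. Assumption names, rates, and the 10% relative
# threshold are hypothetical.
prior = {"lapse_y1": 0.050, "lapse_y2": 0.040, "mort_margin": 1.05}
current = {"lapse_y1": 0.051, "lapse_y2": 0.062, "mort_margin": 1.05}

def flag_changes(prior, current, rel_threshold=0.10):
    """Return inputs whose relative change since last period exceeds the threshold."""
    flagged = []
    for key in current:
        base = prior.get(key)
        if base and abs(current[key] - base) / abs(base) > rel_threshold:
            flagged.append(key)
    return flagged

print(flag_changes(prior, current))  # ['lapse_y2'] warrants review
```

Flagged items are candidates for the closer scrutiny the text recommends, since changed inputs carry a higher probability of unintentional error.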
The actuary should exercise judgment to ensure special consideration is given to unique cases or model input types. Stochastic input data could be validated by performing martingale tests and ensuring the stochastic scenario generator output converges to the mean and standard deviation of the assumed distribution. Economic inputs should be double-checked to confirm their start date aligns with the projection or valuation start date. Compressed or grouped input data should be checked to ensure there has not been a loss of integrity due to grouping.
Use of AI to Improve Upon Best Practice Procedures: Certain model validation tasks can be performed by leveraging GenAI tools. Documentation version comparison, ad-hoc queries, model comparisons and data visualization dashboard analyses are practical examples. Regardless of the tool used, the responsibility for the model validation remains with the model validation team. Validation of AI or Machine Learning tools should follow the same model validation principles, with a specific emphasis on model transparency, discrimination testing and model drift.
Model Calculation and Output Validation Best Practices
Best-in-class model validation procedures will closely scrutinize model calculations and the resulting output extracts, ensuring these are accurate, actuarially sound and consistent with regulatory or intended methodology.
Calculation Review: An independently built first-principles model is the gold standard for calculation validation. The independent model should be run on a representative sample of policies or policy groupings, and the output from each policy or policy grouping compared against the model undergoing validation. If any policy or policy group-level output differs by more than the agreed threshold, the drivers need to be traced back to source and understood.
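The policy-level comparison described above reduces to a thresholded difference report. In this sketch the policy identifiers, reserve figures, and the 0.1% materiality threshold are all hypothetical.

```python
# Sketch of a policy-level comparison between the model under validation
# and an independently built first-principles model. Reserve figures and
# the 0.1% relative threshold are hypothetical.
production = {"POL001": 10_250.0, "POL002": 8_400.0, "POL003": 15_100.0}
independent = {"POL001": 10_250.0, "POL002": 8_460.0, "POL003": 15_099.5}

def compare_reserves(model, benchmark, threshold=0.001):
    """Return policies whose relative difference exceeds the threshold."""
    breaches = {}
    for pol, value in model.items():
        diff = value - benchmark[pol]
        if abs(diff) / abs(benchmark[pol]) > threshold:
            breaches[pol] = diff  # trace these back to source
    return breaches

print(compare_reserves(production, independent))  # POL002 needs investigation
```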
A variation on the above approach is the use of a challenger model, whose results can be compared against the model undergoing validation. Using a challenger model, full regression testing can be performed, assessing differences at plan code, product, subsidiary or aggregate level. An analysis of change procedure could also be performed, where each change is applied successively to the challenger model until no major calculation or formulaic differences remain between the two models.
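The analysis-of-change walk described above amounts to applying each change in sequence and attributing the resulting movement. The baseline value, change descriptions, and step impacts below are invented purely to show the attribution pattern.

```python
# Illustrative analysis-of-change walk: apply each model change to the
# challenger in sequence and attribute the movement in results.
# The baseline and step impacts are hypothetical.
baseline = 500_000.0
steps = [
    ("updated mortality table", +12_000.0),
    ("corrected premium timing", -3_500.0),
    ("new expense assumption", +8_200.0),
]

running = baseline
for label, impact in steps:
    running += impact
    print(f"after {label}: {running:,.0f} (step {impact:+,.0f})")
print(f"total movement: {running - baseline:+,.0f}")
```

Each step's impact should be explainable on its own; an unexplained residual at the end of the walk is itself a validation finding.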
Output Review: The actuary should review the stability of the output being produced by the model – across both time periods and varying input data. Various forms of stress testing should be performed. Extreme value testing should confirm the behavior and stability of the output is reasonable under extreme scenarios. Sensitivity testing should assess reasonableness of output following adjustments to individual assumptions. Scenario testing, which involves varying multiple assumptions at a time, enables management to understand how the model performs under particular scenarios.
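A one-at-a-time sensitivity test of the kind described above can be sketched as follows. The toy reserve function and the shock sizes stand in for a full model run; none of the figures are real.

```python
# Illustrative one-at-a-time sensitivity test. The toy reserve function
# and the shocks applied are hypothetical stand-ins for a full model run.
def toy_reserve(mortality=0.01, lapse=0.05, discount=0.03):
    """Hypothetical stand-in for a reserve calculation."""
    return 1_000_000 * mortality / (discount + lapse + mortality)

base = toy_reserve()
shocks = [
    ("mortality +10%", {"mortality": 0.011}),
    ("lapse -10%", {"lapse": 0.045}),
    ("discount +100bps", {"discount": 0.04}),
]
for name, kwargs in shocks:
    shocked = toy_reserve(**kwargs)
    print(f"{name}: {shocked - base:+.0f} ({shocked / base - 1:+.1%})")
```

The validator's focus is on whether each movement has the expected direction and a plausible magnitude; a reserve that falls when mortality rises, for instance, would demand investigation.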
Both static and dynamic validation procedures should be performed. Static validation procedures ensure that all Time 0 balance sheet items input into the model tie out to the output as well as to an independent source. Dynamic validation procedures will plot the trend in historical data against projections to ensure there are no major unexpected kinks. An additional essential procedure is back-testing: the model should be run using prior period data, and compared against actual historical outcomes.
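The back-testing step above can be expressed as a simple deviation report between prior-period projections and emerged actuals. The years, values, and 2% tolerance below are invented for illustration.

```python
# Minimal back-testing sketch: rerun the model as of a prior period and
# compare projected values to what actually emerged. All figures and the
# 2% tolerance are hypothetical.
projected = {"2021": 1_020.0, "2022": 1_055.0, "2023": 1_090.0}
actual = {"2021": 1_018.0, "2022": 1_061.0, "2023": 1_140.0}

def backtest(projected, actual, tolerance=0.02):
    """Return periods where actuals deviate from projection beyond tolerance."""
    return {yr: actual[yr] / projected[yr] - 1
            for yr in projected
            if abs(actual[yr] / projected[yr] - 1) > tolerance}

print(backtest(projected, actual))  # the 2023 deviation merits explanation
```

Large or persistent deviations do not automatically invalidate the model, but each should be traced to an identifiable cause, such as an experience shift or an assumption that has drifted.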
Targeted, manual checks should be layered onto the approaches set out above. Model code that is expected to have the most significant impact on results should be reviewed line by line. An experienced actuary should assess the model output by eye, ensuring that it looks intuitively correct. Random spot checks on output should be performed.
Conceptual Soundness
The actuarial model needs to be assessed for theoretical accuracy, actuarial soundness, and alignment with intent. The calculations and implied probability distributions used to project cash flows need to be based on sound statistical theory. The timing of cash flows, such as the reinvestment of proceeds, needs to be realistic. The method used by the model should be checked against methodology documentation to ensure that the approach is sound. Additional value can be derived from obtaining an independent expert’s opinion on the conceptual soundness of the model, consistent with ASOP 56 Section 3.5.
Model Performance
Given the multitude of calculations demanded of actuarial models at financial institutions today, runtime efficiency has emerged as a significant concern. Addressing efficiency issues often involves coding enhancements and optimal utilization of calculation servers.
The model validator needs to ensure that unnecessary calculations are removed from the model; this is the most direct way to enhance model efficiency. Generating a calculation report that shows which calculations the model executes is typically invaluable, enabling the actuary to identify extraneous ones.
The validator should understand the computing flow and software functionality. Performance optimization typically hinges on factors such as data granularity, the frequency of calculations, and the utilization of CPU cores. The validator should also perform a benchmarking analysis, comparing the performance of models during regulatory reporting runs to gauge scalability and establish benchmarks for acceptable efficiency.
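A benchmarking analysis of the kind described above can start from simple repeated timings of comparable runs. The workloads below are toy stand-ins; a real benchmark would time the model's actual reporting runs on representative hardware.

```python
# Hedged sketch of a runtime benchmark across model configurations, used
# to establish a baseline for acceptable efficiency. The workloads below
# are toy stand-ins for actual model runs.
import time

def benchmark(label, fn, runs=3):
    """Time a run several times and report the best observation."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    best = min(timings)
    print(f"{label}: best of {runs} = {best:.3f}s")
    return best

# Hypothetical seriatim vs grouped runs, to illustrate the comparison.
benchmark("seriatim (100k policies)", lambda: sum(i * 1.02 for i in range(100_000)))
benchmark("grouped (1k model points)", lambda: sum(i * 1.02 for i in range(1_000)))
```

Taking the best of several runs reduces noise from background load; recording these baselines over time lets the validator spot performance regressions as the model evolves.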
Aesthetics and Documentation
Best practice model validation also takes the aesthetics and presentation of the model into account. There should be a clear and detailed model design document which outlines exactly how the model is designed and how future modifications should be structured. Key features that a best practice model design should adhere to include:
- Consistency: The model should be developed in a consistent manner, both within the model and across other models that the organization deploys.
- Formatting: The labeling of the model, color scheme, presentation, and spacing of the code should be well formatted and easy to follow.
- Coding efficiency: The model's code structure should be efficient; the same logic should never be duplicated.
- Model flow: The model should flow logically from the feed of inputs into the modeling infrastructure, all the way through the downstream population of reports or databases using model output.
Additionally, it is essential for model documentation to be comprehensive, accessible and up to date. This includes model user guides, coding standards, methodology and assumptions documentation, model and process maps, and model caveats, biases and limitations. Documentation should be actively maintained: older versions should be archived, and more complex models warrant more detailed documentation than simpler ones.
Conclusion
A well-executed model validation framework ensures that the output of actuarial models can be relied upon for decision-making purposes, improves key stakeholders’ understanding of their business, and increases the overall integrity of the modeling infrastructure deployed at financial institutions. Having validated models also improves auditability of organizations’ models and reduces the risk of material misstatements. This, in turn, will foster greater trust in financial institutions and reduce insolvency risks, fines, and reputational damage. Hence, model validation is an essential component of financial institutions’ enterprise risk programs and the actuary’s toolkit.
The information provided in this whitepaper is for informational purposes only and does not constitute legal, financial, or professional advice. While we strive to ensure the accuracy and completeness of the information contained herein, we make no warranties or representations, express or implied, about the accuracy, reliability, suitability, or availability with respect to the whitepaper or the information, products, services, or related graphics contained in the whitepaper for any purpose. Any reliance you place on such information is therefore strictly at your own risk.