Introduction

Inaccurate or incomplete models can undermine decision-making, expose organizations to significant financial risk, and erode consumer trust in financial institutions. Increasingly stringent regulatory requirements have heightened the need for rigorous actuarial model validation. At the same time, transformation initiatives such as model conversions, mergers and acquisitions, and system upgrades further underscore the demand for thorough validation. 

Given the critical role of actuarial models, ensuring accuracy, comprehensiveness, and conceptual soundness is of paramount importance. This article explores best-practice model validation techniques, offering a roadmap for organizations to enhance model reliability. 


Model validation is crucial, now more than ever

Model validation is a key component of an organization’s model risk management framework, involving an independent and thorough assessment of an actuarial model’s accuracy, completeness, theoretical soundness, and fit for purpose. A governance framework typically outlines how such validation should be conducted. Unless exclusions are explicitly spelled out, all aspects of the actuarial model should be validated, including its accompanying tools, topside adjustments, and documentation.

Model validation procedures depend heavily on the guidelines set out in the model governance framework and on the type of model under review, but a few common procedures, described below, apply to most actuarial models.


Trust in an actuarial model depends on the integrity of its inputs

Leading practices in model validation emphasize rigorous input assessment to ensure accuracy, compliance, and suitability for the model’s intended purpose. All inputs must be reconciled with authoritative internal sources and verified against relevant industry or regulatory benchmarks, with any discrepancies beyond a tight threshold investigated and justified.
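As a minimal sketch of that reconciliation step, the Python snippet below joins model inputs to an authoritative source extract and flags records that are missing from either side or that differ by more than a tolerance. The column names, the 0.1% threshold, and the policy-level figures are illustrative assumptions, not prescribed values.

```python
import pandas as pd

TOLERANCE = 0.001  # illustrative 0.1% relative threshold; set per the governance framework

def reconcile_inputs(model_df: pd.DataFrame, source_df: pd.DataFrame,
                     key: str, value: str) -> pd.DataFrame:
    """Join model inputs to the authoritative source and return material exceptions."""
    merged = model_df.merge(source_df, on=key, how="outer",
                            suffixes=("_model", "_source"), indicator=True)
    merged["rel_diff"] = (
        (merged[f"{value}_model"] - merged[f"{value}_source"]).abs()
        / merged[f"{value}_source"].abs()
    )
    # Flag records missing from either side, plus any breach of the tolerance.
    merged["exception"] = (merged["_merge"] != "both") | (merged["rel_diff"] > TOLERANCE)
    return merged[merged["exception"]]

# Hypothetical policy-level face amounts from the model and the admin system.
model = pd.DataFrame({"policy_id": [1, 2, 3], "face_amount": [100_000, 250_000, 50_000]})
admin = pd.DataFrame({"policy_id": [1, 2, 4], "face_amount": [100_000, 250_500, 75_000]})
print(reconcile_inputs(model, admin, key="policy_id", value="face_amount"))
```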

Review techniques should be calibrated to each input’s specific characteristics. Data transformations, plan code mappings, and compression techniques demand careful scrutiny to ensure the reliability of inputs flowing into actuarial models. Compressed data should be examined to confirm no material loss of integrity. Stochastic scenario inputs may warrant martingale testing to confirm consistency with their intended statistical properties.
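One simple way to test for material loss of integrity after compression, sketched below under assumed column names, is to compare weighted cohort totals from the compressed model points against the full seriatim extract; the 0.5% tolerance is an illustrative placeholder.

```python
import pandas as pd

def compression_check(seriatim: pd.DataFrame, compressed: pd.DataFrame,
                      group_cols: list[str], amount_col: str, weight_col: str,
                      tolerance: float = 0.005) -> pd.DataFrame:
    """Compare cohort-level totals between seriatim records and weighted model points."""
    full_total = seriatim.groupby(group_cols)[amount_col].sum().rename("seriatim_total")
    weighted = compressed.assign(weighted=compressed[amount_col] * compressed[weight_col])
    comp_total = weighted.groupby(group_cols)["weighted"].sum().rename("compressed_total")
    result = pd.concat([full_total, comp_total], axis=1)
    result["rel_diff"] = (result["compressed_total"] - result["seriatim_total"]).abs() / result["seriatim_total"]
    # Cohorts breaching the tolerance suggest the compression has lost material information.
    return result[result["rel_diff"] > tolerance]
```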

Inputs must also undergo reasonableness checks, with heightened scrutiny on any that have changed since the prior period due to their elevated risk of unintended error. Advanced diagnostic methods, including those that back-solve required inputs for a predefined outcome, can highlight potential issues. Professional judgment is crucial for identifying unique scenarios or specialized input considerations.
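To illustrate the back-solving idea, the hedged sketch below goal-seeks the discount rate implied by a reported reserve for a deliberately simplified single cash-flow model. The toy reserve formula and figures are hypothetical; a real diagnostic would back-solve against the production model.

```python
from scipy.optimize import brentq

def toy_reserve(interest_rate: float, cash_flow: float = 1_000.0, years: int = 10) -> float:
    """Present value of a single guaranteed cash flow (deliberately simplified)."""
    return cash_flow / (1.0 + interest_rate) ** years

def implied_rate(target_reserve: float) -> float:
    """Back-solve the discount rate that reproduces a reported reserve."""
    return brentq(lambda i: toy_reserve(i) - target_reserve, 1e-6, 1.0)

# A reported reserve of 744.09 implies a rate of roughly 3%; a result far from the
# documented assumption points to an input or calculation issue worth investigating.
print(f"Implied rate: {implied_rate(744.09):.4%}")
```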

Case Study: Validating Economic Inputs into an Asset Liability Model

An insurance client relying on economic inputs from an Economic Scenario Generator (ESG) sought to validate these inputs by testing whether they adhered to fundamental risk-neutral valuation principles. The validation actuary applied a martingale test to verify that the ESG’s output ensured no-arbitrage conditions. The test used Monte Carlo simulations to generate sample paths for asset prices and interest rates under risk-neutral dynamics, then checked whether the expected discounted future value of these assets matched their current price. 
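The snippet below is a minimal, single-asset version of such a test, assuming geometric Brownian motion under the risk-neutral measure rather than the client’s actual ESG dynamics; parameter values are illustrative.

```python
import numpy as np

def martingale_test(s0: float = 100.0, r: float = 0.03, sigma: float = 0.20,
                    horizon: float = 10.0, n_paths: int = 100_000, seed: int = 1) -> tuple[float, float]:
    """Estimate E[discounted S_T] from simulated risk-neutral paths, with its standard error.

    Under risk-neutral dynamics the discounted asset price is a martingale,
    so the estimate should match s0 within Monte Carlo sampling error.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    s_t = s0 * np.exp((r - 0.5 * sigma**2) * horizon + sigma * np.sqrt(horizon) * z)
    discounted = np.exp(-r * horizon) * s_t
    return discounted.mean(), discounted.std(ddof=1) / np.sqrt(n_paths)

estimate, std_error = martingale_test()
print(f"Discounted mean {estimate:.2f} vs spot 100.00 (std error {std_error:.2f})")
# A gap of more than roughly three standard errors suggests the scenarios breach the martingale property.
```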

Because the results indicated deviations from the martingale property, the validation actuary recommended that the insurer investigate and recalibrate drift terms, refine the discounting framework, and adjust model parameters to restore it.


The validation actuary must ensure that all calculations are precise and sound

An independently developed first-principles model remains the gold standard for validating complex model calculations. In practice, this involves creating an independent model in an alternative software platform, running it on a representative sample of policies or policy groupings, and then comparing the outputs against those of the primary model. Any material divergence must be traced to its underlying source and resolved. The threshold for acceptable differences should be adapted to the granularity of the output to avoid overlooking important errors or expending time on minor discrepancies.
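A hedged sketch of that comparison step is shown below: it flags cells where the independent model diverges from the primary model beyond a threshold that tightens as the output is aggregated. The threshold values and the cohort-level figures are illustrative assumptions.

```python
import pandas as pd

# Illustrative thresholds: tight at the aggregate level, looser for individual cells.
THRESHOLDS = {"total": 0.001, "cohort": 0.005, "policy": 0.02}

def compare_outputs(primary: pd.Series, independent: pd.Series, granularity: str) -> pd.DataFrame:
    """Flag outputs where the independent first-principles model diverges materially."""
    comparison = pd.DataFrame({"primary": primary, "independent": independent})
    comparison["rel_diff"] = (comparison["independent"] - comparison["primary"]).abs() / comparison["primary"].abs()
    comparison["investigate"] = comparison["rel_diff"] > THRESHOLDS[granularity]
    return comparison[comparison["investigate"]]

# Hypothetical cohort-level reserves from the primary model and the independent rebuild.
primary = pd.Series({"cohort_A": 1_250_000, "cohort_B": 830_000, "cohort_C": 410_000})
independent = pd.Series({"cohort_A": 1_251_000, "cohort_B": 845_000, "cohort_C": 410_200})
print(compare_outputs(primary, independent, granularity="cohort"))
```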

Where resources constrain the development of a full-scale independent model, a simplified alternative may be adopted. The simplified model should capture the principal risk drivers and features of the primary model without reproducing its full complexity. It should be noted that such a simplified approach inherently yields wider discrepancies, so the comparison threshold must be adjusted accordingly. 

Layering a reasonableness assessment on top of the approaches above provides an essential further check on the calculations. By performing an analysis of change procedure on the target model, the validation actuary can confirm whether every modification is justified and spot potential calculation errors. Likewise, comparing the target model to a validated benchmark model—and pinpointing the drivers behind any discrepancies—helps determine whether the magnitude and direction of differences in output are expected.
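The sketch below illustrates one common way to structure an analysis of change: assumption updates are applied one step at a time and each step’s impact is attributed to that change. The toy reserve function and figures are hypothetical, and the attribution is order-dependent, so the step order would be fixed by policy.

```python
from typing import Callable

def analysis_of_change(model: Callable[[dict], float], prior: dict, current: dict) -> dict:
    """Attribute the movement in a model result to each assumption change, applied one step at a time."""
    basis = dict(prior)
    running_result = model(basis)
    attribution = {}
    for name, new_value in current.items():
        basis[name] = new_value
        stepped_result = model(basis)
        attribution[name] = stepped_result - running_result
        running_result = stepped_result
    return attribution

# Hypothetical reserve proxy driven by a mortality loading and a discount rate.
def toy_reserve(assumptions: dict) -> float:
    return 1_000_000 * assumptions["mortality_loading"] / (1 + assumptions["discount_rate"])

prior = {"mortality_loading": 1.00, "discount_rate": 0.030}
current = {"mortality_loading": 1.05, "discount_rate": 0.035}
print(analysis_of_change(toy_reserve, prior, current))
```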

Case Study: Building an independent model to validate ULSG reserves

A life insurance client engaged the Graeme Group validation actuary to assess the accuracy of a Moody’s Analytics AXIS model for a block of Universal Life with Secondary Guarantee (ULSG) products, including shadow accounts and complex product-specific guarantees. An Excel model was built from first principles to validate key reporting outputs such as AG38 reserves. Additional reviews verified the integrity of the model design and the appropriate use of data inputs. The validation confirmed the AXIS model’s reliability while enhancing transparency in reserve calculations. All validation tools were delivered to the client, enabling them to perform ongoing validations in line with their model governance policy.


Validating output accuracy is central to building stakeholder confidence

The actuary must thoroughly evaluate the stability and reliability of the model’s output over time and under varying assumptions, ensuring that minor input changes do not yield disproportionate fluctuations in results. Depending on the needs of the modeling exercise and governance framework, the actuary should selectively apply techniques such as stress testing, extreme value testing, and sensitivity analysis to confirm that outputs remain logical under adverse conditions or assumption shifts. 
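As one hedged illustration, the helper below applies one-at-a-time shocks to a model wrapper, reports the resulting movements, and flags any movement whose direction contradicts expectation. The shock sizes, expected signs, and the referenced reserve_model wrapper are placeholders to be defined by the validation actuary.

```python
from typing import Callable

def run_sensitivities(model: Callable[[dict], float], base: dict,
                      shocks: dict, expected_sign: dict) -> list:
    """Apply one-at-a-time multiplicative shocks and flag counter-intuitive output movements."""
    base_result = model(base)
    findings = []
    for name, factor in shocks.items():
        shocked = {**base, name: base[name] * factor}
        movement = model(shocked) / base_result - 1.0
        note = "" if movement * expected_sign[name] >= 0 else "  <-- unexpected direction"
        findings.append(f"{name} x{factor}: {movement:+.2%}{note}")
    return findings

# Hypothetical usage: a mortality shock is expected to increase the reserve (+1),
# while a higher discount rate is expected to decrease it (-1).
# run_sensitivities(reserve_model, base_assumptions,
#                   shocks={"mortality": 1.10, "discount_rate": 1.50},
#                   expected_sign={"mortality": +1, "discount_rate": -1})
```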

Best practice procedures dictate that the actuary should perform static validation by reconciling balance sheet items with both model outputs and independent sources. Dynamic validation should be used to compare historical trends against projected results, helping to detect irregularities. Back-testing provides an additional safeguard, allowing the actuary to compare retrospective model runs to actual historical outcomes and address any gaps in methodology or data that may have caused unintended output. 
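A minimal sketch of the back-testing step, using hypothetical claims figures, is to compute actual-to-expected ratios for a retrospective run and flag periods whose deviation exceeds a chosen tolerance.

```python
import pandas as pd

def actual_to_expected(actuals: pd.Series, projected: pd.Series, tolerance: float = 0.05) -> pd.DataFrame:
    """Compare a retrospective model run against realized experience, period by period."""
    ae = pd.DataFrame({"actual": actuals, "projected": projected})
    ae["a_to_e"] = ae["actual"] / ae["projected"]
    # Periods where the A/E ratio deviates from 1 by more than the tolerance warrant investigation.
    ae["investigate"] = (ae["a_to_e"] - 1).abs() > tolerance
    return ae

# Hypothetical claims experience (in millions) versus a retrospective projection.
history = pd.Series([10.2, 9.8, 11.5], index=[2022, 2023, 2024])
backcast = pd.Series([10.0, 10.0, 10.3], index=[2022, 2023, 2024])
print(actual_to_expected(history, backcast))
```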

Equally critical is the use of targeted, manual checks—ranging from line-by-line code reviews of high-impact modules to intuitive assessments of model output. These efforts should be tailored to each model’s nature and purpose, recognizing, for instance, the distinct compliance needs of regulatory models versus the internal assumptions guiding management models. Ultimately, ensuring that all outputs are consistently understood, securely stored and properly communicated is vital to preserving the model’s accuracy, efficacy, and alignment with professional standards.


Conceptual soundness as the foundation of reliable actuarial models

A robust actuarial model should reflect accepted statistical principles while remaining practical. The probability distributions and underlying assumptions used to project cash flows, such as reinvestment patterns and timing, should be actuarially sound. The model’s approach and design should also be validated against the methodology documentation to confirm that what the model actually does is in line with the documented intent. It is prudent to seek an independent actuarial expert review to confirm that the model’s conceptual underpinnings meet current best practices.

Case Study: Ensuring sound methodology and design for a Prophet CFT model

A leading insurance company engaged Graeme Group to perform a conceptual soundness review of its cash flow testing model within Prophet. The primary objectives were to ensure consistency in product treatment across different model sections and to maintain full compliance with regulatory requirements. During the review, Graeme Group examined ALS products, liability products, and the combined product structures in detail, verifying their alignment with the intended methodology. The model was evaluated against Actuarial Standards of Practice, NAIC Model Regulations, the Standard Valuation Law, and American Academy of Actuaries guidance, and any differences were delivered to the client as actionable recommendations.


Efficient model runs require a strategic approach to computation and system optimization

Ensuring that an actuarial model runs efficiently without compromising accuracy requires a comprehensive review of its computational architecture and data processing pathways. Specifically, the model validator should verify that only those calculations critical to the model’s purpose are performed, thereby reducing superfluous computations. Generating a calculation report can show the full scope of executed routines, which can then be refined or removed if unnecessary for a particular reporting run.

Equally important is a deep understanding of the computing environment—ranging from the model’s internal calculation flow to the capabilities of the underlying software—so that coding techniques or configuration settings are tailored for optimal performance. Benchmarking the model’s speed and resource usage under various scenarios allows institutions to identify acceptable efficiency thresholds.
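A lightweight sketch of such benchmarking is shown below: it times repeated runs of a model wrapper under each named scenario and reports the median wall-clock seconds, which can then be judged against the efficiency thresholds the institution has set. The run_model interface and the run_valuation usage are assumptions for illustration.

```python
import statistics
import time
from typing import Callable

def benchmark(run_model: Callable[[dict], None], scenarios: dict, repeats: int = 3) -> dict:
    """Time repeated runs of the model under each named scenario; report median wall-clock seconds."""
    timings = {}
    for name, config in scenarios.items():
        samples = []
        for _ in range(repeats):
            start = time.perf_counter()
            run_model(config)
            samples.append(time.perf_counter() - start)
        timings[name] = statistics.median(samples)
    return timings

# Hypothetical usage against thresholds agreed in the governance framework:
# timings = benchmark(run_valuation, {"baseline": base_config, "full_stochastic": stochastic_config})
# assert timings["baseline"] < 600  # e.g., the baseline run should finish within ten minutes
```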


Artificial intelligence plays a critical role in best practice model validation

While some organizations refrain from leveraging AI in model validation due to data security concerns, doing so limits their access to AI’s advantages. Automated validation checks, more efficient detection of hidden patterns, and faster regression testing all bolster validation rigor. AI can also sharpen the recommendations arising from findings, such as how best to allocate computational resources.


Additional considerations for best practice validation

The actuarial validator needs to ensure that model documentation and assumption inventories are complete and kept up to date, reducing key-person risk and preventing stale data. An accurate inventory should capture all relevant models and inputs to ensure transparency and reliability. Strong governance, including change management and deployment controls, safeguards against unintended modifications and errors. Secure storage, version control, and restricted access further protect the model and its outputs. A structured escalation process and continuous monitoring framework are essential for promptly identifying and resolving issues, ensuring long-term model integrity. Actuaries must apply professional judgment to tailor their approach to each model’s complexity and materiality.


Call to Action

Strengthen your model validation program with Graeme Group’s actuarial experts. We deliver tailored solutions that ensure compliance, reliability, and alignment with industry best practices. Contact us today to explore our validation and model risk management services.