The rate cycle that defined US personal and commercial lines from 2022 through 2024 accomplished something important: it restored rate adequacy after years of premiums that lagged rising claims costs. But it also extracted a steep customer cost. Industry-wide auto insurance rate levels rose 35% between January 2022 and the end of 2024. [LexisNexis Risk Solutions, 2025] The market’s response was emphatic. Policy shopping reached an all-time high, with more than 45% of policies in force shopped at least once by year-end 2024 [LexisNexis Risk Solutions, 2025] — and by 2025, that figure climbed further, with 57% of customers shopping for auto coverage [J.D. Power, January 2026], up from 49% a year earlier. Retention has fallen five percentage points since 2021, to 78%, leaving annual policy churn at 22%. [LexisNexis Risk Solutions, 2025]

The structural lesson from this cycle is one that pricing, actuarial, and product leaders are already drawing: blunt rate increases applied across an entire book solve a profitability problem while simultaneously creating a customer problem — and they do so in a way that is not self-limiting. The customers most likely to leave in response to broad rate hikes are precisely those the carrier most wants to keep: long-tenured, multi-line policyholders who represent disproportionate lifetime value. Just 51% of high-value customers said they would definitely renew with their insurer in 2025. [J.D. Power, January 2026]

The alternative — risk-based pricing supported by analytics — allows carriers to apply rate precisely: increasing premiums where the underlying risk justifies it, holding or reducing premiums where it does not, and doing so in a way that is demonstrably fairer, more defensible to regulators, and more effective at retaining preferred-risk customers. This guide maps the data foundations, analytical capabilities, customer impact, implementation challenges, technology landscape, and regulatory framework that define the path from blunt rate management to precision pricing — the path that Perceptive Analytics helps insurance carriers navigate.

Talk with our consultants today. Are broad rate increases eroding your preferred-risk book? Perceptive Analytics helps carriers build the data and analytics foundation for precision risk-based pricing. Book a session with our experts now.

1. Define the Goal: From Blanket Rate Hikes to Precision Pricing

A broad rate increase is, at its core, an acknowledgement of inadequate information. When a carrier cannot precisely identify which segments of its book are driving loss ratio deterioration, raising rates uniformly is the only lever available. It works, eventually — but at a cost that is higher than necessary: preferred risks leave, adverse risks stay, and the selection dynamic gradually makes the underlying book worse, not better.

Risk-based pricing inverts this logic. The goal is not simply to charge more — it is to charge the right amount for each risk, based on its actual expected cost. Carriers that have moved furthest along this journey no longer think in terms of rate changes applied to a book. They think in terms of price positions by risk segment: where should this specific risk sit relative to its expected loss cost, relative to competitive market prices, and relative to the carrier’s own target profit margins? That question can only be answered with granular data, analytical models, and a rating engine capable of expressing the result in a compliant, auditable rate structure.
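To make the price-position question concrete, here is a toy calculation for a single risk. Every input (loss cost, loadings, market benchmark) is an illustrative assumption, not a recommended parameterisation.

```python
# Toy price-position arithmetic for a single risk. All inputs are
# illustrative assumptions.
expected_loss_cost = 820.0   # modelled annual loss cost for this risk
expense_loading = 0.25       # expenses as a share of premium
target_margin = 0.05         # underwriting profit target

# Technical price: the premium at which loss cost plus loadings
# exactly meets the target margin
technical_price = expected_loss_cost / (1 - expense_loading - target_margin)

market_price = 1150.0        # competitive benchmark for this segment
print(f"technical price: {technical_price:.0f}")
print(f"position vs market: {technical_price / market_price - 1:+.1%}")
```

The gap between the technical price and the market price, by segment, is the raw material of every pricing decision that follows.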

The Maturity Spectrum

  • Manual and judgment-based: Pricing driven primarily by actuarial tables and underwriter experience, with limited data enrichment. Broad rate adjustments are the primary correction mechanism.
  • Rules-based segmentation: Defined rating factors applied consistently, producing some differentiation but limited granularity. Classification accuracy depends on the quality of inputs, which are typically static and self-reported.
  • Predictive models: GLMs, gradient boosting, or ensemble methods applied to enriched data, producing risk scores that drive more precise rate differentiation. This is the current state-of-the-art for leading carriers.
  • Dynamic risk-based pricing: Real-time data integration — telematics, IoT, third-party signals — enabling continuous pricing recalibration based on actual observed risk behaviour, not proxies. Emerging in personal auto and commercial fleet.

Perceptive Analytics View: The strategic case for risk-based pricing is not primarily a technology argument. It is a retention argument. Carriers that can identify their best risks with precision can price to keep them. Carriers that cannot will continue losing preferred customers to competitors who can.

2. Data Foundations for Effective Risk-Based Pricing

Risk-based pricing is only as precise as the data that informs it. The carriers achieving the strongest pricing outcomes are not necessarily those with the most sophisticated models — they are those with the most complete, accurate, and timely data. Building the data foundation is the highest-priority investment in any pricing transformation.

Internal Data: The Baseline That Is Rarely Clean

Policy data, exposure characteristics, coverage elections, prior claims history, payment behaviour, and renewal patterns are the starting point. But internal data in most carriers carries embedded problems: inconsistent coding across vintages, manual fields that were rarely enforced, and silos between underwriting, claims, and policy administration systems that prevent a unified view of each risk. Before external data can be usefully integrated, internal data quality and completeness must be addressed. Perceptive Analytics’ automated data quality monitoring and data observability infrastructure practices address this baseline problem before model development begins; a minimal sketch of such automated checks follows the list below.

  • Policy characteristics at inception and renewal — exposure details, coverage structures, and any endorsements that shift the underlying risk profile
  • Claims history by segment and line — frequency, severity, and development patterns that distinguish underwriting quality from external volatility
  • Renewal and lapse behaviour — pricing elasticity signals are embedded in who stays, who leaves, and at what price point
  • Payment and engagement patterns — correlated with retention likelihood and, in some studies, with loss experience across personal lines
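As a minimal sketch of what automated baseline checks look like in practice, the pandas snippet below runs completeness, validity, and vintage-consistency tests over a hypothetical policy extract. The file name and column names are illustrative assumptions, not a real schema.

```python
import pandas as pd

# Hypothetical policy extract; file and column names are illustrative
policies = pd.read_csv("policy_extract.csv", parse_dates=["inception_date"])

checks = {
    # Completeness: rating-critical fields must be populated
    "missing_vehicle_age": policies["vehicle_age"].isna().mean(),
    # Validity: earned exposure must be positive and within a plausible range
    "bad_exposure": ((policies["earned_exposure"] <= 0)
                     | (policies["earned_exposure"] > 1.0)).mean(),
    # Consistency: claim counts can never be negative
    "negative_claims": (policies["claim_count"] < 0).mean(),
    # Vintage drift: coding conventions often change across policy years
    "coverage_codes_by_year": policies.groupby(policies["inception_date"].dt.year)
                                      ["coverage_code"].nunique(),
}

for name, result in checks.items():
    print(name, "\n", result, "\n")
```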

External Data: Where the Pricing Edge Is Built

The differentiating data layer in modern risk-based pricing is external. Credit-based insurance scores, motor vehicle records, telematics, geospatial risk indicators, and third-party commercial data all carry risk signal that static policy data alone cannot capture. In auto insurance, more than 21 million US policyholders [IoT Insurance Observatory, 2024] now share telematics data with their insurer, enabling behaviour-based pricing that replaces blunt demographic proxies. The global insurance telematics market, valued at $6.8 billion in 2024, is growing at nearly 19% CAGR [GM Insights, 2025] — a signal of how rapidly this data layer is becoming mainstream rather than experimental.

  • Credit-based insurance scores — among the most powerful predictors of loss frequency in personal lines. Regulatory constraints vary significantly by state and line
  • Telematics and IoT signals — driving behaviour data and, in property insurance, smart home sensors enabling continuous risk monitoring and dynamic exposure assessment
  • Geospatial and climate risk data — property-level flood zone designations, wildfire exposure scores, hail frequency maps, and subsidence risk indicators that allow carriers to price at address level rather than postcode
  • Commercial financial and operational data — business credit scores, payment history, industry-specific loss indicators, and supply chain risk signals for commercial lines pricing
  • Motor vehicle and licensing records — driving history beyond what applicants self-report, surfacing violations that adjust the risk profile independent of applicant disclosure

Our Snowflake consulting and Talend consulting teams build the data integration pipelines that bring these internal and external data sources together into a unified pricing data mart — the foundation on which all downstream analytics depend. See our data integration platforms guide for the architecture principles we apply.

Data Quality as a Competitive Moat

Data quality — not just data volume — is the differentiating factor. A 2024 Willis Towers Watson study found that P&C insurers implementing predictive modelling with enriched external data experienced a 67% improvement in risk assessment accuracy and a 5.7% decrease in combined ratios [Willis Towers Watson / Decerto, 2024], with premium leakage reduced by approximately $14 million per billion dollars of written premium (roughly 1.4% of premium recovered). That improvement comes not from more data but from better data, applied consistently through a governed analytical process.

Perceptive Analytics View: Data investment decisions in pricing are not symmetric. A point of loss ratio improvement from better risk segmentation is worth far more than the technology cost of achieving it. The carriers treating data acquisition as a cost centre rather than a pricing capability are systematically undervaluing it.

3. Analytics Capabilities Needed to Operationalise Risk-Based Pricing

Collecting the right data is necessary but not sufficient. Translating data into a filed, compliant rate structure that differentiates risk at the level required for precision pricing demands a specific set of analytical capabilities — and a governance framework that allows those capabilities to be trusted, audited, and improved over time.

Generalised Linear Models (GLMs): The Industry Standard

GLMs remain the actuarial gold standard for insurance ratemaking and are the regulatory reference point for most state insurance departments. They model claim frequency and severity separately, apply a link function to establish the relationship between rating factors and the response variable, and produce multiplicative rate relativities that integrate cleanly into rating engine structures. Their strengths — interpretability, auditability, regulatory familiarity — are also their limitations: GLMs cannot capture complex interactions without manual feature engineering. For most carriers, GLMs are the right production model for filed rates, complemented by more advanced techniques in development and validation.
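The frequency/severity split described above can be sketched with standard tooling. The statsmodels example below is a minimal illustration, assuming a hypothetical policy-level dataset with the column names shown; it is not a filed-rate methodology.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical pricing data mart extract; column names are assumptions
df = pd.read_csv("pricing_data_mart.csv")

# Frequency: Poisson GLM with log link; log(exposure) enters as an
# offset so fitted relativities are per unit of earned exposure
freq = smf.glm(
    "claim_count ~ C(territory) + C(vehicle_class) + driver_age_band",
    data=df,
    family=sm.families.Poisson(link=sm.families.links.Log()),
    offset=np.log(df["earned_exposure"]),
).fit()

# Severity: Gamma GLM with log link, fitted on claim-bearing records only
sev = smf.glm(
    "avg_claim_cost ~ C(territory) + C(vehicle_class)",
    data=df[df["claim_count"] > 0],
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()

# Log links make the fitted factors multiplicative: exponentiating a
# coefficient gives the rate relativity for that factor level
print(np.exp(freq.params).round(3))
print(np.exp(sev.params).round(3))
```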

Machine Learning and Gradient Boosting Models

Gradient boosting models (XGBoost, LightGBM), random forests, and combined actuarial neural networks (CANNs) can capture non-linear interactions between rating variables that GLMs miss. Research comparing model architectures consistently finds that these approaches outperform GLMs on predictive accuracy. [Deloitte P&C Pricing in the Age of Machine Learning, 2024] A study across multiple pricing datasets found XGBoost to be particularly effective for high-dimensional pricing problems, with CANNs — which layer a neural network adjustment on top of a GLM base — offering a useful bridge between predictive performance and actuarial interpretability. Perceptive Analytics’ AI consulting and advanced analytics consulting teams implement and govern these model architectures for insurance pricing programmes.
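As a sketch of the challenger-model pattern, the snippet below fits a gradient-boosted Poisson frequency model on synthetic data, using the log of exposure as a base margin in place of the GLM offset. All data and hyperparameters are illustrative assumptions.

```python
import numpy as np
import xgboost as xgb

# Synthetic stand-in data; in practice X comes from the pricing data mart
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 6))            # stand-in rating features
exposure = rng.uniform(0.1, 1.0, 10_000)    # earned exposure in years
claim_count = rng.poisson(0.08 * exposure)  # synthetic claim counts

dtrain = xgb.DMatrix(X, label=claim_count)
dtrain.set_base_margin(np.log(exposure))    # log(exposure) plays the offset role

params = {
    "objective": "count:poisson",  # Poisson deviance, matching the GLM benchmark
    "max_depth": 4,                # shallow trees keep interactions inspectable
    "eta": 0.05,
    "subsample": 0.8,
}
booster = xgb.train(params, dtrain, num_boost_round=500)

# Gain-based importances flag candidate interactions to translate back
# into GLM-expressible rating factors
print(booster.get_score(importance_type="gain"))
```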

Price Elasticity and Demand Modelling

Understanding how customers respond to price changes — by segment, channel, tenure, and competitive context — is the analytical layer that prevents pricing sophistication from becoming a retention problem. Elasticity models quantify how much rate change a given risk segment will absorb before shopping or switching. Incorporating elasticity into pricing decisions allows carriers to optimise not just for loss ratio but for lifetime value: targeting retention of preferred risks through precisely calibrated price positions, rather than applying maximum achievable rate across the board. A worked sketch of the renewal case follows the list below.

  • New business price optimisation: Setting quote prices that balance hit ratio, risk quality, and portfolio composition objectives simultaneously
  • Renewal price management: Identifying the renewal price point for each customer that maximises long-term value — the intersection of risk cost, competitive market price, and churn probability
  • Competitive market intelligence: Monitoring competitor rate filings and market positioning to calibrate own pricing relative to the competitive set, not just internal loss costs
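A toy version of the renewal price management logic: pick the rate change that maximises expected one-year contribution, trading margin against churn probability. The logistic retention curve and all dollar figures are illustrative assumptions; in practice the elasticity parameters come from fitted segment-level models.

```python
import numpy as np

# All inputs are illustrative assumptions for a single risk
premium, expected_loss_cost, expenses = 1200.0, 780.0, 180.0

def retention_prob(rate_change, base=0.88, elasticity=6.0):
    """Logistic retention curve: renewal probability falls as the proposed
    rate change rises. Parameters are stand-ins for fitted values."""
    logit_base = np.log(base / (1 - base))
    return 1 / (1 + np.exp(-(logit_base - elasticity * rate_change)))

changes = np.linspace(-0.05, 0.25, 61)
expected_value = retention_prob(changes) * (
    premium * (1 + changes) - expected_loss_cost - expenses
)
best = changes[expected_value.argmax()]
print(f"value-maximising rate change: {best:+.1%}")
```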

Our price optimisation analytics and predicting customer churn case studies demonstrate how Perceptive Analytics operationalises these models in insurance and adjacent industries.

Segmentation and Risk Scoring

Micro-segmentation — the ability to differentiate risk within broad categories — is where pricing sophistication creates its most direct loss ratio benefit. A carrier that prices all sedan drivers in a given zip code at the same rate is leaving adverse selection risk unmanaged; a carrier that can differentiate by telematics score, credit tier, vehicle age, and driving history within that cell is pricing the portfolio correctly. The technical challenge is maintaining enough segment granularity to be meaningful while maintaining enough volume in each segment for frequency and severity estimates to be statistically credible. Our Power BI consulting and Tableau consulting teams embed these segmentation views directly into the pricing workflow — so underwriters and actuaries see the risk score at the point of decision, not in a separate analytics tool.
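The credibility constraint has a standard classical treatment. The sketch below blends a sparse micro-segment’s observed frequency with the book-level estimate using limited-fluctuation credibility; the 1,082-claim full-credibility standard is the textbook Poisson figure for 90% confidence within ±5%, and the segment numbers are illustrative.

```python
import math

# Classical limited-fluctuation credibility: blend a sparse segment's
# observed frequency with the broader book's estimate
FULL_CREDIBILITY_CLAIMS = 1082  # textbook Poisson standard (90% / +/-5%)

def credibility_weighted_frequency(seg_claims, seg_exposure, book_frequency):
    z = min(1.0, math.sqrt(seg_claims / FULL_CREDIBILITY_CLAIMS))
    seg_frequency = seg_claims / seg_exposure
    return z * seg_frequency + (1 - z) * book_frequency, z

# A micro-segment with 40 claims on 500 car-years vs a book frequency of 6%
blended, z = credibility_weighted_frequency(40, 500, 0.06)
print(f"Z = {z:.2f}, blended frequency = {blended:.3%}")
```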

Perceptive Analytics View: The transition from GLM to ML in pricing is not an either/or decision. The most effective pricing architectures use GLMs as the filed rating structure and ML models as the development and challenge layer — catching what the GLM misses, then translating insights back into GLM-expressible rating factors. Explainability is not optional; it is the condition on which regulatory approval depends.

4. Impact of Targeted Pricing on Customer Satisfaction and Retention

The relationship between pricing and retention is more nuanced than it appears. Rate increases drive shopping — that much is evident from the data. But the mechanism is not simply price sensitivity; it is perceived fairness. When customers understand why their premiums are increasing, they are typically far more satisfied, even in a rising-rate environment. [J.D. Power, January 2026] The information vacuum that broad, unexplained rate hikes create is what drives customers to competitors — not the price change itself.

The Customer Fairness Dynamic

Risk-based pricing, when communicated clearly, can actually improve customer satisfaction even when it results in higher premiums for some segments. A customer who receives a rate increase explained by specific, verifiable risk factors — a speeding violation, a claims history, an address-level climate exposure — is far more likely to understand and accept the change than one who receives a letter citing broad market conditions. This is the retention payoff from pricing precision: better-segmented rates are more defensible to the customer, not less.

Usage-based insurance (UBI) makes this dynamic explicit. In 2025, 17% of insurers offered UBI programmes to shoppers [J.D. Power, January 2026], with telematics-based pricing gaining renewed momentum. The customer proposition — premiums that reflect actual driving behaviour rather than demographic proxies — resonates strongly with low-risk drivers who know their risk profile is better than average. See our data-driven blueprint for insurance growth for the strategic framing of how UBI and risk-based pricing compound retention over time.

The Adverse Selection Dynamic

The retention risk from imprecise pricing runs in both directions. When preferred risks leave because broad rate increases pushed their premiums above the competitive market price, the remaining book’s average risk quality worsens. This adverse selection spiral — higher rates drive preferred-risk exits, preferred-risk exits raise average loss costs, and higher loss costs require further rate increases — is the structural failure mode of blunt rate management. Risk-based pricing breaks this cycle by decoupling the rate treatment of preferred and adverse risks. The toy simulation below illustrates the direction of the dynamic.
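In this sketch, differential lapse rates are assumed for preferred and adverse risks; the surviving book’s average loss cost then climbs every year, and so does the uniform premium required to hold a target loss ratio. All figures are illustrative assumptions.

```python
# Toy adverse-selection spiral: preferred risks lapse faster under blunt
# increases, so each year's required uniform rate is higher than the last
preferred, adverse = 80_000, 20_000          # policies in force
PREF_COST, ADV_COST = 600.0, 1400.0          # expected annual loss cost
TARGET_LR = 0.70                             # permissible loss ratio

for year in range(1, 6):
    avg_loss = (preferred * PREF_COST + adverse * ADV_COST) / (preferred + adverse)
    required_premium = avg_loss / TARGET_LR  # uniform rate needed to hit target
    print(f"year {year}: avg loss cost {avg_loss:,.0f}, "
          f"required uniform premium {required_premium:,.0f}")
    preferred = int(preferred * 0.85)        # preferred risks shop and leave
    adverse = int(adverse * 0.95)            # adverse risks mostly stay
```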

What the Data Shows on Retention and Pricing

Shopping among long-tenured customers — those with 10+ years of tenure — rose 35% year-over-year [LexisNexis Risk Solutions, 2025], with high-survivability shoppers reaching 40% of total shoppers by year-end 2024. These are precisely the customers whose retention risk should concern carriers most. They are high-value, multi-line, low-churn — and they are shopping because broad rate increases have made them feel mispriced relative to the market. Our insurance sales dashboard and our answering strategic questions through high-impact dashboards work give retention and pricing leaders the segment-level visibility to identify and act on these shopping signals before they become cancellations.

Perceptive Analytics View: The customers most damaged by broad rate increases — long-tenured, high-value, preferred-risk policyholders — are the same customers that precision pricing most effectively retains. The two imperatives (profitability and retention) are not in tension; they are solved by the same analytical investment.

5. Key Challenges and Risks in Transitioning to Risk-Based Pricing

The barriers to risk-based pricing are real, and underestimating them is one of the most common failure modes in pricing transformation programmes.

Data Quality and Legacy System Constraints

The most common blocker is not analytical capability — it is data. Rating engines built on legacy policy administration systems often cannot accept the volume and variety of data inputs that modern pricing models require. External data must be ingested, matched, and stored in ways that legacy systems were not designed to handle.

Mitigation: Implement a pricing data mart — a structured data environment purpose-built for pricing analytics — that sits alongside (not inside) the legacy policy admin system. Perceptive Analytics’ data engineering consulting and modern BI integration on AWS with Snowflake and Power BI practice implements exactly this decoupled architecture — accelerating the analytical development cycle without requiring a full core system replacement.

Model Governance and Actuarial Credibility

Predictive models used in pricing are subject to actuarial standards of practice and regulatory review. Models that cannot be explained — that lack documented factor selection rationale, validation results, and performance monitoring — are not deployable in filed rate structures regardless of their predictive accuracy. [Variance Journal / CAS, 2024]

Mitigation: Establish a Pricing Model Governance Committee with cross-functional representation from actuarial, data science, compliance, and legal. Define model risk tiers based on rate impact and regulatory sensitivity. Our choosing a trusted Tableau partner for data governance guide illustrates the governance infrastructure principles we apply across analytics programmes.

Change Management Across Pricing, Actuarial, and IT

Pricing transformation is a cross-functional programme, not a data science project. Actuaries need to develop fluency with ML model validation. IT teams face material capability gaps around real-time data pipelines and API integration. Product leaders must adapt to more frequent rating plan updates.

Mitigation: Define the target operating model for pricing analytics before selecting technology. The operating model — who owns which decisions, how models move from development to production, how performance is monitored — determines the technology requirements, not the reverse. Our CXO role in BI strategy and adoption article addresses exactly this leadership alignment challenge.

Competitive and Market Timing Risk

Transitioning to risk-based pricing during a period of market softening carries the risk of losing volume in segments where the carrier’s risk-based rate is above the market’s competitive price. This is not a failure of the approach — it is a feature of pricing correctly. But it requires executive alignment on the principle that short-term volume sacrifice for portfolio quality improvement is the correct trade.

Mitigation: Develop competitive market intelligence capability alongside risk-based pricing models. Our standardising KPIs in Tableau for modern executive dashboards work helps leadership teams maintain a consistent view of competitive position by segment throughout the transition.

6. Technologies and Platforms That Enable Risk-Based Pricing Models

Rating Engines

The rating engine is the production environment where pricing models are expressed as filed rates and applied to individual risks. Modern cloud-based rating engines are designed for API-first integration, rapid rate plan updates, and compatibility with external data inputs. The ability to update a rate plan and deploy it across distribution channels within days rather than quarters is an operational prerequisite for competitive pricing in a dynamic market.

Key capability requirements: real-time data API integration for external risk signals, version control for rate plan history, A/B testing capability for rate changes, and audit trail functionality for regulatory compliance.
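A minimal sketch of the rating-engine pattern these requirements imply: a versioned rate plan of multiplicative relativities applied to a risk, with every rated quote written to an audit trail. The plan version, factor names, and multipliers are all hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class RatePlan:
    version: str
    base_rate: float
    relativities: dict  # factor -> {level: multiplier}

# Hypothetical filed plan; versioning preserves rate plan history
PLAN_V2 = RatePlan(
    version="2025-07-filing",
    base_rate=500.0,
    relativities={
        "territory": {"urban": 1.25, "suburban": 1.00, "rural": 0.90},
        "telematics_tier": {"A": 0.85, "B": 1.00, "C": 1.20},
    },
)

audit_log: list = []

def rate(plan: RatePlan, risk: dict) -> float:
    premium = plan.base_rate
    applied = {}
    for factor, levels in plan.relativities.items():
        multiplier = levels[risk[factor]]
        premium *= multiplier
        applied[factor] = (risk[factor], multiplier)
    # Audit trail: which plan version and factors produced this price
    audit_log.append({
        "rated_at": datetime.now(timezone.utc).isoformat(),
        "plan_version": plan.version,
        "factors_applied": applied,
        "premium": round(premium, 2),
    })
    return round(premium, 2)

print(rate(PLAN_V2, {"territory": "urban", "telematics_tier": "A"}))
print(audit_log[-1])
```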

Pricing Analytics Platforms

Specialist pricing analytics platforms provide the analytical environment between raw data and filed rates — where models are built, validated, monitored, and translated into rating factor structures. Traditional pricing relied on annual rate reviews and quarterly loss development cycles [Insurance Thought Leadership, 2024] — modern platforms compress these cycles materially, enabling continuous pricing refinement rather than episodic overhauls. Perceptive Analytics’ Power BI development services and Tableau development services embed the monitoring and performance dashboards that keep these platforms operationally current.

Data Integration and Third-Party Data Access

The operational infrastructure for ingesting external data at policy lifecycle events — new business, renewal, endorsement, claims — is a critical enabler of risk-based pricing. This includes data APIs for credit, telematics, geospatial, and claims history sources; data quality validation and matching logic; and storage architecture that makes enriched risk data available at the point of decision without introducing latency. Our event-driven vs scheduled data pipelines guide and custom pipelines vs managed ELT brief cover the architectural decisions that govern this layer. The data integration layer is frequently the longest-lead-time component of a pricing transformation — and where Perceptive Analytics’ Talend consultants and Snowflake consultants deliver the most time-to-value.
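As a sketch of enrichment at a lifecycle event, the function below calls a hypothetical motor-vehicle-record vendor API at renewal, validates match confidence, and only then attaches the enriched fields. The endpoint, payload fields, and confidence threshold are all assumptions for illustration.

```python
import requests

def enrich_at_renewal(policy: dict) -> dict:
    """Fetch MVR data at renewal from a hypothetical vendor endpoint."""
    resp = requests.post(
        "https://vendor.example.com/v1/mvr",          # hypothetical endpoint
        json={"licence_no": policy["licence_no"],
              "dob": policy["date_of_birth"]},
        timeout=5,
    )
    resp.raise_for_status()
    record = resp.json()
    # Matching and validation happen before data reaches the rating engine
    if record.get("match_confidence", 0) < 0.95:      # threshold is assumed
        return {**policy, "mvr_status": "no_confident_match"}
    return {**policy,
            "violations_3yr": record["violations_3yr"],
            "mvr_status": "matched"}
```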

Explainable AI Tooling

As ML models become more widely used in production pricing, the tooling for explaining their outputs to actuaries, underwriters, regulators, and customers has become a technology category in its own right. SHAP values, partial dependence plots, and factor importance tools allow pricing professionals to understand why a model produces a particular rate for a particular risk — and to translate that into rate filing documentation that satisfies regulatory requirements. More than 70% of US insurers now use or plan to use AI/ML [Baker Tilly, 2025], and the explainability requirement is no longer aspirational — it is enforced. Perceptive Analytics’ AI consulting team builds explainability frameworks into every ML-based pricing engagement from the outset.
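A minimal SHAP sketch, assuming a gradient-boosted frequency model trained on synthetic data: decompose one risk’s prediction into per-factor contributions on the model’s margin scale. Feature names and data are illustrative.

```python
import numpy as np
import shap
import xgboost as xgb

# Synthetic data; feature names are illustrative assumptions
rng = np.random.default_rng(1)
feature_names = ["driver_age", "vehicle_age", "annual_mileage", "prior_claims"]
X = rng.normal(size=(5_000, 4))
y = rng.poisson(np.exp(-2.5 + 0.3 * X[:, 3]))  # prior claims drive frequency

model = xgb.XGBRegressor(objective="count:poisson", max_depth=3,
                         n_estimators=200, learning_rate=0.05)
model.fit(X, y)

# Per-factor contributions to the prediction for a single risk
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

for name, contribution in zip(feature_names, shap_values[0]):
    print(f"{name:>15}: {contribution:+.4f}")
```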

Perceptive Analytics View: Technology selection for pricing transformation should follow operating model design, not precede it. The most expensive mistake is deploying a sophisticated pricing platform into an organisation that lacks the data infrastructure, governance framework, and actuarial talent to use it. Platform capability and organisational capability must grow together.

7. Navigating Regulatory Requirements for Risk-Based Pricing

Pricing is the most heavily regulated function in insurance, and the regulatory environment governing the use of data and analytics in rate setting is tightening. This is not a reason to slow the transition to risk-based pricing — it is a design constraint that shapes how it must be built.

The Rate Regulatory Framework

Insurance pricing operates within a state-based regulatory structure where rate filings must demonstrate actuarial justification for rating factors and rate levels. States variously require prior approval of rates before use, file-and-use protocols, or use-and-file procedures. California’s prior approval regime for personal auto is among the most restrictive, while many commercial lines are subject to lighter-touch or deregulated frameworks. For carriers filing rates in multiple states, maintaining a rate plan that is simultaneously analytically sophisticated and navigable across different regulatory frameworks is a material operational challenge.

The AI and Big Data Regulatory Overlay

The NAIC’s Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, adopted in December 2023 and now adopted by 24 states as the de facto national standard [Baker Tilly, 2025], imposes governance, transparency, and accountability requirements on all AI systems used in regulated insurance decisions, including pricing models. Insurers must maintain a documented AI Systems (AIS) Programme, conduct bias testing across protected class proxies, and be prepared to demonstrate explainability for adverse pricing outcomes in regulatory examinations.

  • Fairness and non-discrimination: Pricing models must not produce outcomes that are unfairly discriminatory — including through facially neutral factors that function as proxies for protected characteristics. New York’s DFS Circular Letter 2024-7 specifically requires demonstration that external data used in pricing does not proxy for protected classes. [Buchanan Ingersoll & Rooney, 2025] A minimal sketch of such a proxy test follows this list
  • Colorado’s algorithmic fairness requirements: Colorado SB 24-205 applies broad consumer protections against algorithmic discrimination in consequential insurance decisions, extended to private passenger auto insurance effective October 2025. [Baker Tilly, 2025]
  • Rate filing documentation: State regulators increasingly require detailed documentation of data sources, model architecture, validation methodology, and bias testing results for any AI or ML model used in rate setting. ‘Black box’ justifications are no longer acceptable in most jurisdictions.
  • Third-party data and model accountability: The NAIC’s Third-Party Data and Models Working Group is developing a formal regulatory framework for oversight of third-party data vendors and model providers used in pricing. Carriers are responsible for the outputs of vendor models regardless of how they were built. [NAIC, 2024–2026]
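A sketch of what such testing can look like in practice, on synthetic data: a simple outcome-disparity ratio across a protected class held in a testing dataset, plus a check of whether a facially neutral external input can predict the protected class at all. Column names, the synthetic data, and any thresholds are assumptions, not regulatory standards.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic testing dataset; protected-class labels live only in the
# bias-testing environment, never in the rating engine
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "model_premium": rng.lognormal(mean=7.0, sigma=0.2, size=20_000),
    "external_score": rng.normal(size=20_000),
    "protected_class": rng.integers(0, 2, size=20_000),
})

# 1) Outcome disparity: ratio of mean modelled premium across groups
group_means = df.groupby("protected_class")["model_premium"].mean()
print("premium ratio across groups:",
      round(group_means.max() / group_means.min(), 3))

# 2) Proxy power: can the external input predict the protected class?
clf = LogisticRegression().fit(df[["external_score"]], df["protected_class"])
auc = roc_auc_score(df["protected_class"],
                    clf.predict_proba(df[["external_score"]])[:, 1])
print("proxy AUC (0.5 = no proxy signal):", round(auc, 3))
```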

Using Compliance as a Competitive Advantage

Carriers that invest in governance infrastructure — documented model development standards, fairness testing protocols, explainability tools, and regulatory exam readiness documentation — find that compliance becomes a pricing enabler, not a constraint. A well-documented model with a strong validation record gets filed faster, attracts less regulatory scrutiny, and can be updated more quickly than one built without governance in mind. A 2024 KPMG compliance survey found that insurers with formal model review processes experienced 75% fewer regulatory challenges [KPMG, 2024] compared to those without them.

Perceptive Analytics View: Every rating model that cannot explain itself in plain language is a regulatory liability waiting to materialise. In a market where 24 states have adopted the NAIC AI governance framework and state-level algorithmic fairness requirements are expanding, explainability is no longer a nice-to-have in pricing analytics — it is a filing requirement.

Bringing Data, Technology, and Governance Together for Sustainable Risk-Based Pricing

The window for treating broad rate increases as a sustainable pricing strategy is closing. The customer evidence is clear: 29% of insurance customers switched insurer in 2025 [J.D. Power, January 2026], with the sharpest attrition concentrated among exactly the customers carriers most need to retain. The competitive evidence is equally clear: carriers with more granular pricing capability are systematically attracting preferred risks from those without it. And the regulatory signal leaves little ambiguity — AI and data-driven pricing decisions are increasingly subject to governance, documentation, and fairness testing requirements that reward carriers who build analytically rigorous, explainable pricing models.

The transition from blunt rate management to precision risk-based pricing is a phased programme, not a point-in-time technology deployment. The right sequence is: assess the data estate and close the most material quality gaps; build analytical capability in a bounded pilot on the highest-loss-ratio line; establish the governance framework before expanding to production; integrate analytics outputs directly into the rating engine and workflow; and measure against pre-defined KPIs from the start. Perceptive Analytics structures this sequence through our data transformation maturity framework and data engineering consulting practice — with the analytical and BI layer delivered through Power BI implementation services and Tableau implementation services.

Carriers that execute this transition successfully achieve something that blunt rate management cannot: a pricing capability that improves profitability and retention simultaneously, that is defensible to regulators, and that compounds in value as data quality improves and models recalibrate. That is the sustainable competitive position. Getting there requires investment in data, analytics, technology, governance, and organisational capability — but the alternative, relying on broad rate actions to address structural risk selection problems, is increasingly neither commercially viable nor regulatorily acceptable.

Download our risk-based pricing readiness checklist to assess your current data and analytics capability against best-practice benchmarks — or schedule a pricing analytics assessment to identify the highest-value gaps and the fastest path to precision pricing in your priority lines.

Talk with our consultants today. Ready to move from broad rate hikes to risk-based pricing that retains your best customers and satisfies regulators? Perceptive Analytics is here to help. Book a session with our experts now.

Sources & References

[1] LexisNexis Risk Solutions (2025). 2025 U.S. Auto Insurance Trends Report.

[2] J.D. Power (January 2026). Rate Pressure, Customer Retention and Digital Engagement Top Insurance Industry Challenges for 2026.

[3] Insurance Business Magazine (January 2026). Price Shocks, Digital Shifts Put Insurers’ Loyalty Playbook to the Test.

[4] III Blog / Triple-I (August 2025). Auto Premium Growth Slows As Policyholders Shop Around, Study Says.

[5] Willis Towers Watson / Decerto (2024). Insurance Software with Predictive Analytics: A Competitive Edge.

[6] IoT Insurance Observatory / Carrier Management (February 2026). Telematics and Trust: How Usage-Based Insurance Is Transforming Auto Coverage.

[7] GM Insights (2025). Insurance Telematics Market Size, Growth Forecasts 2025–2034.

[8] Deloitte (January 2024). P&C Pricing in the Age of Machine Learning.

[9] Variance Journal / CAS (2024). Towards Explainability of Machine Learning Models in Insurance Pricing.

[10] Insurance Thought Leadership (December 2024). Dynamic Pricing Gives Insurers a Competitive Edge.

[11] Baker Tilly (August 2025). The Regulatory Implications of AI and ML for the Insurance Industry.

[12] NAIC (December 2023). Model Bulletin: Use of Artificial Intelligence Systems by Insurers.

[13] NAIC Third-Party Data and Models Working Group (2024–2026). Ongoing framework development for third-party AI data and model regulatory oversight.

[14] Buchanan Ingersoll & Rooney (October 2025). When Algorithms Underwrite: Insurance Regulators Demanding Explainable AI Systems.

[15] KPMG (2024). Compliance survey — insurers with formal model review processes experience 75% fewer regulatory challenges.

[16] AgentTech / NAIC AI Bulletin Analysis (September 2025). How the NAIC AI Bulletin Signals a New Era of Regulatory Accountability in Insurance.

[17] Capstone DC (January 2026). Insurance 2026 Preview — Colorado SB 24-205 algorithmic discrimination provisions extended to auto insurance, effective October 2025.

[18] Verisk / APCIA (2025). Strong 2025 Underwriting Income Masks Persistent P/C Insurance Pressures — combined ratio 92.9%.

[19] Capgemini World Property and Casualty Insurance Report. Referenced in Vantage Point, 2026.

[20] Consumer Intelligence (September 2025). Machine Learning and AI for Insurance Pricing.

