Why Your Underwriters Don’t Trust Analytics: How Data Engineering & Governance Turn Models Into Decisions
Insurance | April 28, 2026
I. Executive Summary: The Underwriting Analytics Challenge
In 1693, Edmond Halley built the first life tables from raw demographic data, creating the foundation of actuarial science as we know it today. Now, the insurance industry is on the threshold of a similar revolution, thanks to Artificial Intelligence (AI) and Machine Learning (ML).
At the same time, pressure from policyholders is intensifying. As inflation continues to strain household budgets, customers are demanding greater affordability, simplicity, and transparency from their insurers. However, many underwriting processes remain cumbersome. 42% of policyholders report that underwriting processes are complex and lengthy, and 27% have switched providers in the past two years, primarily in search of lower premiums (60%) and better coverage (53%). (Capgemini)
In response, insurers are investing heavily in predictive analytics. 83% of insurance executives believe predictive models are critical to the future of underwriting, yet only 27% of firms currently possess the advanced capabilities required to leverage them effectively.
The market for predictive analytics is set to expand at a compound annual growth rate of 24% between 2024 and 2029 (Capgemini), which means these models are not merely a performance upgrade but a competitive necessity. Despite this momentum, new challenges are emerging inside underwriting organisations: analytics capabilities are advancing rapidly, but trust in those analytics is lagging behind.
The trust gap is evident in AI adoption. The Roots State of AI Adoption in Insurance 2025 report shows that 82% of insurance executives have AI as a top strategic priority, while only 22% have actually implemented the solution in production environments, with 45% still in the exploration and vendor evaluation phases. (Roots)
In practice, adoption remains uneven. Only 43% of underwriters regularly accept automated recommendations generated by predictive analytics tools. This hesitation reflects practical concerns rather than resistance to innovation. Two-thirds of underwriters (67%) cite model complexity as a concern, while 59% worry about the integrity and reliability of the underlying data that informs the model. (Capgemini)
If underwriters cannot understand or confidently explain a recommendation to a broker, regulator, or policyholder, they are not likely to act on it. AI adoption in underwriting currently stands at roughly 14%, yet insurers expect it to reach nearly 70% within the next three years. (Hyperexponential) Whether that transformation succeeds will depend less on the sophistication of algorithms and more on the confidence professionals have in the systems supporting them.
The Strategic Imperative
Closing the underwriting analytics trust gap is necessary for insurers to bring the advantages of predictive analytics from the boardroom to the underwriter’s workflow. At Perceptive Analytics, we approach this challenge through a combination of data engineering, data governance, and Explainable AI (XAI), building the transparency, traceability, and data integrity that turn an opaque model into a trusted decision-support system. Our data-driven blueprint for insurance growth and advanced analytics consulting practice provide the strategic and technical foundation for this transformation.
Talk with our consultants today. Are your underwriters ignoring model recommendations because they don’t trust the data behind them? Perceptive Analytics builds the data engineering and governance foundation that closes that gap. Book a session with our experts now.
II. Barriers to AI in Underwriting: Trust in Data, Models, and Outputs
At Perceptive Analytics, we have observed that the reluctance of underwriters to rely on predictive analytics is often misinterpreted as an unwillingness to change. In reality, much of that reluctance is justified. Most of the challenges underwriters face stem not from the analytics itself, but from the way these systems are implemented: in an environment of fragmented data, unclear models, and poor governance. Until these underlying concerns are addressed, trust is unlikely to follow.
Data Fragmentation
At the core of the trust problem lies a persistent issue in insurance organisations: fragmented and inconsistent data. According to the Roots State of AI Adoption in Insurance 2025 report, 40% of insurers cite data challenges as a major barrier to adopting AI initiatives. AI models are only as effective as the data used to train and operate them, yet insurance data is rarely centralised or standardised.
Most insurers rely on a wide range of disconnected systems including policy administration systems (PAS), claims platforms, broker submissions, emails, and PDF documents. Critical risk information often resides in unstructured formats such as inspection reports, medical records, or loss run documents, making it difficult for analytics systems to process effectively.
For underwriters, the implications are immediate: recommendations produced from incomplete and inconsistent data are unreliable, and underwriters quickly learn this. Manual reconciliation becomes the default before any final decision is made. Our automated data quality monitoring and data observability as foundational infrastructure practices are designed to eliminate this manual reconciliation burden at the source.
The other challenge is growing concern over data privacy, security, and compliance, which has hindered the democratisation of data access within insurance companies. These concerns are valid; however, they have also prevented the creation of an environment essential for effective AI implementation. Governance must therefore be embedded in the software development lifecycle of digital transformation initiatives, ensuring that data policies are enforced and monitored in real time.
The Explainability Crisis
While the data may exist, many analytics tools are unable to offer the transparency underwriters need to trust their outputs. Industry reports indicate that 62% of insurance executives think AI and ML are improving underwriting quality and helping reduce fraud (Capgemini), yet this executive confidence is not fully shared by the professionals using the tools daily.
Only 43% of underwriters regularly accept automated recommendations generated by predictive analytics models. A major reason for this hesitation is complexity: 67% of underwriters report that analytics tools are too difficult to interpret or explain.
Underwriting decisions must often be justified to brokers, regulators, and customers. If an underwriter cannot clearly explain why a policy was declined or why a premium changed, relying on an algorithm is a professional risk. Many underwriters therefore default to their own judgement rather than tools they cannot fully interpret.
Model Volatility and Data Integrity
Trust in analytics also erodes when models behave unpredictably. Underwriting professionals encounter situations where risk scores shift suddenly without clear explanation as a result of changes in data pipelines, feature engineering, or third-party data provider updates. When outputs change overnight without transparency, underwriters begin to see the system as unpredictable rather than reliable.
This is compounded by ongoing concerns about data integrity: 59% of underwriters have concerns about the quality, consistency, and accuracy (Capgemini) of the data input into predictive models, especially when sourced from multiple external providers. Our why data integration strategy is critical for metadata and lineage article addresses precisely this challenge by making data provenance visible and auditable.
Skills and Resource Constraints
While the majority of organisations recognise the potential of AI, they may not have the internal capabilities to successfully deploy it. Industry research indicates that the single greatest barrier to AI adoption, cited by 52% of insurers, is the lack of skills and resources. (Roots)
A successful AI initiative requires a combination of data scientists, machine learning experts, governance specialists, and insurance domain experts. The absence of such capabilities keeps AI projects in experimental phases instead of evolving into production-grade capabilities that underwriters can rely on. This is where Perceptive Analytics’ AI consulting practice fills the gap, providing the cross-disciplinary team that most insurers cannot assemble internally.
III. The Trust Bridge — How Data Engineering Fixes the Gap
Closing the trust gap between analytics and underwriting does not start with more sophisticated algorithms; it starts with stronger data foundations. Confidence grows when underwriters stop viewing predictive models as opaque “black boxes” and instead see them as verifiable assistants built on transparent and reliable data systems. Modern data engineering provides the infrastructure that makes this possible.
Unified Ingestion: Creating a Single Source of Truth
For insurers, data is still fragmented across various policy admin systems, claims management tools, broker submissions, and unstructured documents. This fragmentation is one of the main reasons underwriters don’t trust analytics: inconsistent or incomplete data leads to inconsistent model outputs.
The solution is a unified data ingestion layer that combines various sources of data into a centralised location. When underwriters see the same complete data set being used in core systems and predictive models, a basis of trust is established. Perceptive Analytics builds these unified ingestion layers using our Snowflake consulting and Talend consulting practices — ensuring the data flowing into underwriting models is governed, validated, and consistent across every source system. See our modern BI integration on AWS with Snowflake and Power BI framework for how this architecture is deployed in production.
The operational impact is significant. Streamlined ingestion pipelines can reduce manual data entry by up to 80%, allowing underwriters to focus on risk evaluation rather than data validation. Insurers that implement unified data layers also report up to a twofold improvement in quoting outcomes and an 89% reduction in time-to-quote — a measurable competitive advantage.
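As a concrete illustration, the sketch below merges a policy admin record and a claims record into one validated view. It is a minimal example only: the field names, key mismatches, and validation rules are hypothetical, and a production ingestion layer would run on a platform such as Snowflake or Talend rather than in application code.

```python
# Hypothetical records from two disconnected source systems: note the
# inconsistent key names ("policy_id" vs "policy") typical of fragmented data.
pas_record = {"policy_id": "P-1001", "insured_name": "ACME LLC", "limit": 500_000}
claims_record = {"policy": "P-1001", "open_claims": 2, "total_incurred": 42_000.0}

REQUIRED_FIELDS = ("policy_id", "insured_name", "limit", "open_claims")

def unify(pas: dict, claims: dict) -> dict:
    """Merge PAS and claims views of a policy into one record,
    normalising the mismatched key names along the way."""
    record = {
        "policy_id": pas["policy_id"],
        "insured_name": pas["insured_name"].strip().upper(),
        "limit": pas["limit"],
        "open_claims": claims.get("open_claims", 0),
        "total_incurred": claims.get("total_incurred", 0.0),
    }
    # Validate before the record reaches any model: fail loudly here,
    # not silently inside a risk score.
    missing = [f for f in REQUIRED_FIELDS if record.get(f) is None]
    if missing:
        raise ValueError(f"Incomplete record for {record['policy_id']}: {missing}")
    return record

unified = unify(pas_record, claims_record)
```

The key design choice is that validation happens at the ingestion boundary, so every downstream model consumes the same complete, reconciled record.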
Data Lineage and Traceability: “Seeing Is Believing”
Trust deepens when underwriters can clearly see how a model arrived at its recommendation. Data lineage and traceability enable this transparency by linking each model output directly to its original data source, functioning like “AI citations.” Every risk signal can be traced back to the exact document or dataset that produced it, whether a medical report, inspection record, or building permit.
For underwriters, this visibility transforms how models are perceived. A risk score is no longer an unexplained output; it becomes a conclusion supported by verifiable evidence. Our data observability as foundational infrastructure practice and automated data quality monitoring case study demonstrate how this traceability layer is built and maintained in practice.
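One lightweight way to implement these “AI citations” is to carry lineage metadata alongside every risk signal. The sketch below is illustrative only; the document names, system names, and signal names are hypothetical, and a production system would capture this metadata automatically in the pipeline.

```python
import datetime

def make_signal(name, value, source_doc, source_system):
    """Build a risk signal that carries its own provenance."""
    return {
        "name": name,
        "value": value,
        "lineage": {
            "source_document": source_doc,
            "source_system": source_system,
            "extracted_at": datetime.datetime(2026, 4, 1, 9, 30).isoformat(),
        },
    }

signals = [
    make_signal("roof_age_years", 18, "inspection_report_2024-11.pdf", "DocIntake"),
    make_signal("prior_losses_5y", 2, "loss_run_2025.xlsx", "ClaimsDB"),
]

def explain(signal):
    """Render the 'AI citation' an underwriter would see next to a score."""
    l = signal["lineage"]
    return (f"{signal['name']} = {signal['value']} "
            f"(from {l['source_document']} via {l['source_system']})")
```

With this in place, every number in a risk score can be traced to the document that produced it.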
Governed Feature Stores: Ensuring Consistency
In many organisations, different teams define risk variables independently, leading to conflicting insights for the same risk. A governed feature store solves this by standardising key variables across all models: definitions of flood zones, driver risk levels, or building classifications are created once and reused consistently.
This ensures every model operates using the same definitions and data structures. When models “speak the same language” as underwriting processes, confidence in analytics follows naturally. Our data transformation maturity framework provides the governance roadmap that keeps these definitions consistent as the model landscape evolves.
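A minimal sketch of the “define once, reuse everywhere” principle is a shared registry of feature definitions. The feature names and thresholds below are illustrative assumptions, not actual underwriting rules; a production feature store (e.g. within a governed data platform) would add versioning, access control, and lineage on top.

```python
# A minimal governed feature registry: each risk variable is defined
# once, as a named function, and every model resolves it the same way.
FEATURE_REGISTRY = {}

def feature(name):
    """Register a canonical feature definition under a single name."""
    def register(fn):
        if name in FEATURE_REGISTRY:
            raise ValueError(f"Feature '{name}' already defined")
        FEATURE_REGISTRY[name] = fn
        return fn
    return register

@feature("flood_zone_high_risk")
def flood_zone_high_risk(policy):
    # One shared definition (illustrative): zones A and V count as high risk.
    return policy["flood_zone"] in {"A", "V"}

@feature("driver_risk_band")
def driver_risk_band(policy):
    age = policy["driver_age"]
    return "young" if age < 25 else "senior" if age >= 70 else "standard"

def compute_features(policy, names):
    """Every model calls this, so all models 'speak the same language'."""
    return {n: FEATURE_REGISTRY[n](policy) for n in names}
```

Because duplicate registration raises an error, two teams cannot silently ship conflicting definitions of the same variable.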
IV. Building Trust Through Explainable AI (XAI)
While the data foundation may be sound, the predictive model still needs to win the trust of the professionals who operate it. Analytics needs to go beyond prediction and provide insights that are easily understood and defended. Across regulated industries like insurance, Perceptive Analytics has implemented Explainable AI techniques that lift the lid off the black box, exposing the logic that underlies every AI-generated recommendation.
Feature Importance and Local Interpretations
Feature importance analysis highlights which variables most strongly influence a model’s decision. In underwriting contexts, this might include factors such as driver age, claims history, or property characteristics. For example, in fraud detection a model may show that claim amount contributed 45% of the overall risk score, while prior claim history contributed another significant share, clearly visualising what drives a prediction.
Techniques such as LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations) offer explanations for individual predictions. (ResearchGate) While global techniques explain how the model works as a whole, local explanations give the underwriter exactly what to tell the customer about a particular policy, which is critical for handling customer disputes and appeals.
Our AI consulting team implements SHAP and LIME explainability frameworks as a standard component of every ML-based underwriting engagement. See our insurance sales dashboard work for an example of how these model outputs are surfaced in operational dashboards that underwriters actually use.
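To make the idea concrete, here is a library-free sketch of the exact Shapley attribution that the SHAP library approximates at scale, applied to a toy fraud scorer. The scorer, its feature names, thresholds, and baseline values are all illustrative assumptions; in practice you would explain a trained model with the shap package rather than enumerate subsets by hand.

```python
from itertools import combinations
from math import factorial

# Toy fraud-risk scorer over three features (illustrative thresholds only).
def risk_score(claim_amount, prior_fraud, claim_delay_days):
    score = 0.0
    if claim_amount > 10_000:
        score += 45
    if prior_fraud:
        score += 35
    if claim_delay_days > 30:
        score += 20
    return score

# "Absent" features are replaced by a typical baseline applicant.
BASELINE = {"claim_amount": 2_000, "prior_fraud": 0, "claim_delay_days": 5}

def shapley(instance):
    """Exact Shapley values: each feature's average marginal contribution
    across all subsets of the other features, weighted per Shapley's formula."""
    names = list(instance)
    n = len(names)

    def score_with(present):
        args = {k: (instance[k] if k in present else BASELINE[k]) for k in names}
        return risk_score(**args)

    values = {}
    for f in names:
        others = [x for x in names if x != f]
        total = 0.0
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (score_with(set(subset) | {f})
                                   - score_with(set(subset)))
        values[f] = total
    return values

attributions = shapley({"claim_amount": 25_000, "prior_fraud": 1,
                        "claim_delay_days": 2})
```

Because this toy scorer is additive, the attributions recover each rule's contribution exactly (45 for the large claim amount, 35 for prior fraud, 0 for the unremarkable delay), which is the per-prediction breakdown an underwriter can quote to a broker.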
Rule Extraction
Rule extraction converts complex model outputs into human-readable decision rules that align with traditional underwriting logic. Rather than a statistical prediction, the model produces understandable rules such as:
“If claim amount > $10,000 and claimant has prior fraud history, then flag for review.”
This enables underwriters, actuaries, and claims experts to validate that model results are consistent with traditional risk theory.
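As a minimal sketch, the function below walks a tiny decision tree and emits one human-readable rule per leaf, reproducing the example rule above. The tree structure and decision labels are illustrative; in practice the tree would be a surrogate model fitted to the complex model's predictions.

```python
# A toy extracted tree as nested (condition, yes-branch, no-branch) nodes;
# string nodes are leaf decisions. Conditions and labels are illustrative.
TREE = ("claim_amount > 10000",
        ("prior_fraud_history == True", "flag for review", "standard review"),
        "auto-approve")

def extract_rules(node, path=()):
    """Walk the tree and emit one human-readable rule per leaf."""
    if isinstance(node, str):                      # leaf: a decision
        cond = " and ".join(path) if path else "always"
        return [f"If {cond}, then {node}."]
    test, yes, no = node
    return (extract_rules(yes, path + (test,))
            + extract_rules(no, path + (f"not ({test})",)))

rules = extract_rules(TREE)
```

The output is a complete, mutually exclusive rule list that actuaries and claims experts can review line by line against traditional risk theory.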
Counterfactual Explanations
Counterfactual explanations provide “what-if” insights showing the smallest change required to alter a model’s decision. For example, an underwriter could explain to a health insurance customer: “If your BMI had been lower and there was no smoking history, your premium would have been approximately 20% lower.” These explanations make AI-driven outcomes easier to communicate and justify, directly supporting the transparency customers increasingly demand.
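A simple way to generate such what-if statements is to search a set of plausible feature changes for the smallest one that flips the decision. The scorer, thresholds, and candidate changes below are illustrative assumptions only; real counterfactual methods search the feature space systematically and constrain changes to actionable ones.

```python
# Toy underwriting decision rule (illustrative thresholds, not real rates).
def decision(applicant):
    score = 0
    if applicant["bmi"] >= 30:
        score += 2
    if applicant["smoker"]:
        score += 3
    if applicant["age"] >= 60:
        score += 1
    return "decline" if score >= 4 else "accept"

# Candidate single-feature changes, ordered from smallest to largest.
CANDIDATE_CHANGES = [
    {"smoker": False},
    {"bmi": 27},
    {"age": 55},
]

def counterfactual(applicant):
    """Return the first (smallest) candidate change that flips the decision."""
    original = decision(applicant)
    for change in CANDIDATE_CHANGES:
        altered = {**applicant, **change}
        if decision(altered) != original:
            return change
    return None

applicant = {"bmi": 32, "smoker": True, "age": 45}
cf = counterfactual(applicant)  # the what-if an underwriter can quote
```

Here the search finds that removing the smoking factor alone would flip the outcome, which translates directly into the kind of explanation customers can act on.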
Human-in-the-Loop (HITL)
Explainability should be accompanied by human oversight. Human-in-the-loop systems ensure underwriters remain the ultimate decision-makers. This approach aligns with the top priority cited by 71% of insurers, maintaining decision accuracy (Roots), and allows AI to augment human judgement rather than replace it.
V. Governance as a Success Factor
Even the best analytics solutions may lose their credibility and effectiveness without proper governance. Effective strategic governance is the safety net that maintains trust, ensuring that models remain reliable, compliant, and consistent with underwriting practices.
Model Risk Management (MRM)
Model Risk Management frameworks provide an organised structure for the management of AI systems — explaining how models are identified, validated, and classified based on risk. For underwriters, this validation process builds confidence: when a model has been formally stress-tested and documented, its outputs carry greater credibility. Effective MRM frameworks can also reduce regulatory response times by up to 50% (KPMG), helping insurers demonstrate compliance more quickly when regulators request model transparency.
Our choosing a trusted Tableau partner for data governance guide outlines the governance principles we apply across analytics deployments, and our Power BI consulting and Tableau consulting practices embed these MRM-aligned governance standards into the dashboards underwriters and actuaries use daily.
Drift Monitoring and Observability
As data patterns change over time, models may begin to drift from their initial behaviour. Today’s observability solutions monitor data input and model output in real time, raising an alarm if unusual patterns are detected. This allows insurers to adjust models before inaccuracies affect decisions, keeping tools reliable and accurate. Our data observability infrastructure practice implements exactly this continuous monitoring layer treating model drift as an operational risk to be managed, not a technical curiosity.
Change Management
Governance of AI is not just technical; it is organisational. To successfully implement AI, the technology must be aligned with the processes of its actual operators: in insurance, that means the underwriter. Involving underwriters in model development ensures analytics tools reflect actual underwriting processes rather than pure statistical relationships. This improves both model quality and user acceptance, making analytics a supporting element of underwriting expertise rather than a competitive threat.
Our CXO role in BI strategy and adoption guide addresses the leadership alignment that makes this organisational governance sustainable, and our standardising KPIs in Tableau for modern executive dashboards work demonstrates how consistent definitions are embedded into the tools underwriters and executives share.
Conclusion: Trust as the Real Competitive Advantage
The future of underwriting will not be determined by the insurer that develops the most sophisticated algorithms. Predictive analytics, AI platforms, and advanced modelling capabilities are becoming widely accessible. What will increasingly differentiate leading insurers is something more fundamental: trust.
Trust must be embedded at every stage of the underwriting process. This requires strong data foundations, transparent analytical models, and robust governance frameworks. When underwriters can clearly understand the inputs, methodologies, and outputs behind the analytics they rely on, these tools evolve from experimental technology into trusted decision-support capabilities.
This foundation is particularly important in today’s environment of heightened risk volatility. Insurers are facing an unprecedented volume of data, increasingly complex exposures, and rapidly shifting market conditions. Leading insurers will succeed by finding the right balance between human expertise and machine intelligence where analytics enhances the judgement of underwriters rather than replacing it.
Bridging the gap between analytics capabilities and underwriting expertise is therefore not only an internal transformation. It is essential to delivering the affordability, transparency, and speed that insurance customers increasingly expect in a competitive market.
For insurers ready to move beyond experimentation, the next step is clear: building AI not as a standalone technology initiative, but as a trusted capability embedded within the underwriting function, with Perceptive Analytics as the partner providing the data engineering, explainability, and governance infrastructure to make it stick.
Talk with our consultants today. Ready to build underwriting analytics your underwriters will actually trust? Perceptive Analytics is here to help. Book a session with our experts now.
Sources & References
[2] Capgemini — Embracing the power of predictive analytics: are your underwriters ready for change?
[3] Roots — State of AI Adoption in Insurance 2025
[4] Hyperexponential — How AI and data technology are transforming the insurance industry
[5] ResearchGate — Explainable AI in Decision-Making: Building Trust in Insurance Algorithms