This post is very late coming, with the conference now almost six months ago, but I wanted to get my thoughts down as a lot of the content doesn’t go out of date too quickly, and better late than never. I had the opportunity to attend a three-day GIRO conference back in November. It was my first GIRO, and I was looking forward to hearing about a broad range of topics and meeting other actuaries. I wrote about the Insurtech Insights conference some time ago, and I was keen to compare the two and to see what a group predominantly comprising actuaries has been thinking about lately.
It was hosted in the ACC at the Liverpool docks, next to the M&S Bank Arena (where Peter Kay was apparently performing on the second evening; alas we were instead treated to a Beatles tribute band during the dinner).
Split over three floors, with a mixture of smaller rooms and a main hall, the presentations (plenaries, hot topics, workshops and other names for listening to people talk on panels) were a pick-and-choose situation, so seeing everything was not an option. The event was opened by Dave Fishwick of ‘Bank of Dave’ fame and closed by Dame Katherine Grainger, the five-time medal-winning Olympic rower. Both gave inspiring speeches about determination and perseverance, which set a nice tone for the start and end of the conference.

The food was mediocre buffet fare that didn’t compare to the Insurtech Insights offering, and there were definitely not enough snacks to keep me going between talks. The venue was a bit shabby and either too cold or too hot at times. Perhaps I was just grouchy after a five-hour train journey to get there. OK, that’s the complaints out of the way (and Liverpool is a stunning city, by the way). Here’s what I found notable.

I’ve digested the talks I attended into a few high-level themes and topics, which I’ll try to expand on below:
- AI and model governance
- Climate change and flood insurance (and electric vehicles, which I’ll tenuously link in here)
- Fair value and discrimination
- Actuarial career development
It was inevitable I’d have to write something about AI.
1. From best practice to standard practice – how commercial P&C underwriters and actuaries are applying AI from submission to decision
This was a good opener to the conference for me. A chance to see some survey results about the market’s use of AI and how it’s changed in the last year.
From my perspective, nothing unexpected. Collaboration between actuaries and underwriters is improving, but still viewed as weak. Manual rekeying and data entry have dropped sharply: in the previous year’s survey 82% of underwriters spent approx. 2 hours per day doing this. A year later, thankfully, it’s far less.
Are we still going to lose our jobs?
Fear of AI replacing us has fallen significantly (from around 68% worried in the previous year to around 48% worried now). This may simply be because we’ve seen how poor it can be at times. It’s always getting better though.
The concern now is being replaced by someone who uses AI better than we do, so we need to keep up with the latest use cases and learn how to get the most out of the tools to support our decision making and make us more effective.
How has the market been using AI tools?
Underwriters expect to consume AI tools, not build them. They expect to be able to use them to get early and easy access to data by querying databases directly, before the main suite of reports is available, to answer their specific questions, and without having to bother actuaries and developers to update a dashboard view.
Underwriters in commercial lines are also seeing benefits from using AI to generate datasets for classes where data is limited or comes in an unstructured form, ingesting, standardising and enriching this data without manual entry.
Automated document summarisation is pretty standard now, and much focus has turned to portfolio insights and specific concerns like accumulation hotspots that are hard to notice if the data isn’t available in a standardised form. And of course there have been impressive computer vision advances that make aerial imagery classification a valuable new underwriting resource.
This last one crosses over with the actuarial use cases too, where faster model builds, new enrichments, and coding assistance are more relevant benefits of AI. Coding and AI skills are in the top 3 priorities for actuaries to develop according to the survey, and interestingly (and reassuringly) emotional intelligence ranked highly too.
Why AI development in insurance is hard
The usual barriers to proper implementation remain: legacy systems, poor data, lack of trust in the models, explainability requirements, teams using the tools inconsistently, and security concerns in implementing models that have access to sensitive company data.
There’s also a practicality issue. It’s hard to turn clever AI demos into production-grade tools. It may be fast to get something 80% working, but painfully slow to reach 100%. Apps can now be built with “vibe coding” and little understanding of the nuances, but scaling still demands rigour and good design, not just a bolt-on to an existing process. This may be why, according to the panel, only around 25% of AI implementations achieve their expected ROI.
Ideas for the future
The panel suggested “Underwriting/Actuarial Agents”. These are conversational AI tools that answer nuanced questions and support deep research, able to connect several disparate systems and support knowledge-workers to be more effective.
Insurance products may of course emerge to cover risks created by AI systems themselves. AI systems are just tools and like any other tool can be misused and cause harm if used improperly.
How can we use AI and reduce the risk?
The surveys and feedback suggest everyone is saying it, and customers and employees pretty much demand it: human-in-the-loop review is essential. No-one should be trusting AI to make decisions for them.
There are several ways in which these risks can be mitigated in an actuarial context. We can get the AI model outputs to incorporate confidence intervals so we can communicate the uncertainty and risks better. We can also do better prompting and parameter tuning to reduce hallucinations.
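As a sketch of the first idea, a bootstrap is one model-agnostic way to attach an interval to a model's prediction so the uncertainty can be communicated alongside the point estimate. This is my own illustrative example, not something from the talk; `fit_line` is a stand-in for whatever model is actually in use.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_interval(model_fit, X, y, x_new, n_boot=200, alpha=0.05):
    """Bootstrap a (1 - alpha) interval for a model's prediction at x_new.
    `model_fit` is any function taking (X, y) and returning a predict
    callable -- a placeholder for the real modelling pipeline."""
    preds = []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)        # resample rows with replacement
        predict = model_fit(X[idx], y[idx])
        preds.append(predict(x_new))
    lo, hi = np.quantile(preds, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Toy example: a simple least-squares line as the "model"
X = np.linspace(0, 10, 50)
y = 2.0 * X + rng.normal(0, 1, 50)

def fit_line(Xb, yb):
    slope, intercept = np.polyfit(Xb, yb, 1)
    return lambda x: slope * x + intercept

lo, hi = bootstrap_interval(fit_line, X, y, x_new=5.0)
print(f"95% interval for prediction at x=5: [{lo:.2f}, {hi:.2f}]")
```

The same wrapper works for any refittable model, which is the point: the interval comes from the resampling, not from anything model-specific.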
Users need to be made aware of the limitations of these models and should be given training on, at the very least, how to ask the right questions to get a more reliable and useful response.
2. Smarter governance for smarter models – Reimagining pricing governance in a machine learning world
Despite potentially vying for the title of most boring subject of the conference (some of the reserving topics don’t sound thrilling either, though I can’t attest to whether that’s actually true), this was my opportunity to see how the rest of the industry does model governance. It would appear it generally does it badly.
Why should we care about model governance?
Strong governance is essential for safe, repeatable model deployment. It’s about improving consistency of approach, and sharing knowledge of best practice between teams. It helps us build better models, and get better buy-in from senior leaders and all stakeholders, which means the modelling project is more likely to succeed.
How do GBMs complicate model governance?
GBMs (Gradient Boosted Models) outperform GLMs (Generalised Linear Models) but are less forgiving of poor data and harder to adjust post-deployment.
The key risks of using GBMs are overfitting, model degradation over time, deployment errors, conflicting stacked models, non-compliance, and loss of trust among stakeholders.
What good governance looks like
The teams involved in model creation and sign-off should create technical and non-technical governance committees. These committees should focus on methodology, principles and best practice, not on getting consensus on the numbers.
There should be mandatory code reviews and full version control using something like Git. Ultimately the governance teams need to ensure the deployed rating tables match the modelling outputs.
Key validation and review checks
Many of the tests traditionally used to validate GLMs are still valid for machine learning models, but ML models introduce complexity: their features can be hard to interpret and explain. A different set of checks is needed, especially as the audience may not be familiar with GBMs. We have all become comfortable with the intuitive nature of GLMs, which have developed over many years and become more or less ubiquitous in pricing.
I won’t go into details here, but the presentation highlighted plenty of potential options: monotonicity checks, checking for unexpected groupings, out-of-sample tests, ALE/SHAP plots, policy scenario runs on a few thousand policies, price change distributions, understanding the average movement, producing granularity tables, and high-risk checks (e.g. on specific vehicles). This should be supplemented by providing the full model files for review by all members of the committee.
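To illustrate the first of these, a monotonicity check can be written model-agnostically: sweep one rating factor across a grid while holding the others fixed, and confirm the predictions only move one way. This is my own sketch, with `toy_predict` standing in for a fitted GBM's prediction function.

```python
import numpy as np

def check_monotonic(predict, base_row, feature_idx, grid, increasing=True):
    """Check predictions move monotonically as one rating factor is swept
    across `grid`, holding all other features at the values in `base_row`.
    `predict` is a placeholder for any fitted model's batch predictor."""
    rows = np.tile(base_row, (len(grid), 1))   # one row per grid point
    rows[:, feature_idx] = grid                # vary only the chosen factor
    preds = predict(rows)
    diffs = np.diff(preds)
    return bool(np.all(diffs >= 0) if increasing else np.all(diffs <= 0))

# Toy model: risk premium rises with feature 0 and ignores feature 1
def toy_predict(rows):
    return 100 + 5 * rows[:, 0]

base = np.array([0.0, 1.0])
grid = np.linspace(0, 10, 21)
print(check_monotonic(toy_predict, base, feature_idx=0, grid=grid))  # True
```

In practice you would repeat this over a sample of realistic base rows rather than one, since a GBM can be monotone at one point in feature space and not at another; some GBM libraries can also enforce monotone constraints at training time.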
For model builds to be effective we must involve underwriters early for proper feature engineering, and bring senior leaders along at each stage, explaining GBM principles clearly so the sign-off is likely to be much easier.

The next category I’ve identified from the presentations I attended is climate change, flood insurance, and insurability risk
3. Assessing climate risk – From the past to the present and into the future
The presenters started by defining the types of climate risk. These are:
- Physical risk – changes in frequency & severity of extreme weather and catastrophes.
- Transition risk – regulatory, economic and market impacts from shifting to a low-carbon economy.
- Liability risk – e.g. D&O exposure if firms fail to adapt or disclose climate risks adequately.
Next, they stressed the importance of understanding what aspects of the risk are involved and what can be modelled. Risk = Hazard × Exposure × Vulnerability, and insurance modelling must consider all three, not just hazard changes.
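As a toy illustration of that decomposition (the numbers are mine, not from the talk), all three terms scale an expected annual loss, so a climate scenario can change the answer substantially without the hazard moving at all:

```python
# Hedged sketch of the Risk = Hazard x Exposure x Vulnerability framing.
# All figures are illustrative.

def expected_annual_loss(hazard, exposure, vulnerability):
    """hazard: annual event probability; exposure: total insured value;
    vulnerability: expected damage ratio given an event occurs."""
    return hazard * exposure * vulnerability

baseline = expected_annual_loss(0.01, 500_000, 0.30)
# A scenario can move any of the three terms, not just the hazard:
scenario = expected_annual_loss(0.012, 550_000, 0.30)  # hazard +20%, exposure +10%
print(baseline, scenario)
```

Here the modelled loss rises by about a third, with only part of that coming from the hazard term, which is exactly the presenters' point about not conflating drivers.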
Natural variability and climate change can be easily conflated. To understand what’s driving the larger weather losses attributable to climate change, we need to be able to strip out any other effects. Using insights from ISO (Verisk), PCS loss data (USA), and AXA XL they concluded:
- Only ~1% of the 5–7% annual increase in nat-cat losses since ~1995 is attributed directly to climate change so far.
- Other drivers include 2.5-3.5% inflation, 1% due to increased number of structures, and 1% due to wealth effects.
Challenges with current models
Some cat models rely on outdated storm-surge baselines (e.g. some are from 2008). NASA’s sea-level-rise tools can adjust AALs to correct this.
There is a key difference between basin-wide storm frequency and landfall flood losses. Scientific models don’t map directly to insurance exposure, so further adjustments are needed.
Sensitivity analyses show AAL changes vary widely under different temperature pathways.
Emerging climate science to monitor
Research conducted at the University of Exeter on several global climate tipping points, and the serious effects they could have, may mean insurers need to change their long-term assumptions. There’s a lot of detail and complexity I’ll skip over here, but as one example, a weakening of the AMOC (Atlantic Meridional Overturning Circulation) could have major impacts on UK weather and insured portfolios that current assumptions would not allow for. Different temperature scenarios also shift both the severity and the time horizon of the risk.
Climate modelling resources
The presenters pointed to several good sources of data and things that would help insurers set and validate their assumptions:
- Climate Action Tracker – an aggregation of COP pledges and outcomes
- Knutson et al. – scientists reviewing the key climate science research
- Cambridge Centre for Risk Studies focusing on systemic-risk modelling
- Lambda Industry Consortium – cross-industry climate-risk work
Practical tools for insurers
- Use scenario-based frameworks to evaluate climate risk e.g. a 3×3 impact grid with low/medium/high impact × short/medium/long time horizon
- Retest portfolios under updated hazard assumptions
- Understand macro-links such as how GDP changes drive premium volumes, shifting exposure globally
The main takeaway was that climate risk isn’t only about worsening hazards. It’s about understanding how hazard, exposure and vulnerability evolve, and how emerging science should influence how insurers set their assumptions.
4. UK flood insurance – Shored up, treading water or under water?
The current state of Flood Re
The panel highlighted a few features of Flood Re which I think are interesting. Firstly, uptake on the Build Back Better initiative has been lower than expected, where the focus is on creating more resilient buildings with the aim to reduce future losses and disruption following flooding.
Frustratingly, 8% of new homes built in the past decade sit in a flood zone. Even though these properties are not covered by Flood Re, this seems short-sighted.
More policies are being ceded to Flood Re, so the average ceding premium and levy per policy is falling.
Key questions for the future of Flood Re
Flood Re is a time-limited scheme, designed to end in 2039, but its remit is evolving and reviews have suggested various ways the scheme could change to meet the needs of insurers and the public.
For example, should Flood Re expand to cover some commercial risks? Should customers be told they’ve been ceded to Flood Re? This would improve transparency but complicate communication. The members of the audience pointed out that customers should know their flood risk at the point of sale or purchase of their house, and at the point of buying insurance anyway, so the information is available.
We need to upgrade existing buildings and manage new developments better, incentivising households to invest in flood-resilience measures. There are only a few major flood events left before Flood Re ends in 2039, limiting the time to demonstrate the need and benefits. It was proposed that we could link the ceding premiums to resilience measures rather than council tax band alone, to encourage better resilience.
Practical problems with measuring flood resilience
It’s all very well saying we should link the price of Flood Re insurance to the level of resiliency measures put in place, but how do insurers verify resilience measures at point of quote? One idea is to create a Flood Performance Certificate (FPC) similar to EPCs for energy efficiency. Another issue is that surface-water flooding is harder to predict and difficult to price/cede effectively.
Funding and governance options
There are several options for a future iteration of Flood Re, aside from just keeping the current levy and ceding premium structure and letting the scheme end in 2039. Instead they could extend the scheme permanently, privatise it, make the government a formal stakeholder, and/or replace the levy with a tax-based system. Each has its merits.
It’s clear that Flood Re has helped to stabilise the residential flood insurance market, but long-term resilience, transparency, and funding reform are needed before the scheme ends in 2039.
5. Plugging into the future – Current challenges of electric vehicles
Government targets and market structure
The panel started by sharing a few (perhaps surprising) statistics on the EV market and the government’s targets. The UK is aiming for 80% EV new-car sales by 2030 and 100% by 2035. This will be achieved via a credit system where manufacturers can buy and sell credits based on the share of EVs they produce (e.g. if VW are producing a lot of EVs and are above their targets, they can sell credits to lagging manufacturers).
Much of EV growth is driven by fleet and leasing (this is tax-efficient, and often done via salary sacrifice). Retail take-up remains weaker.
Barriers to switching
At the time of the data shown at the conference, EV prices were higher than those of ICE (Internal Combustion Engine) vehicles, though they are falling. Charging issues continue to deter customers from switching: public infrastructure is insufficient despite seven-fold growth in rapid chargers (according to Osprey), and charging is still perceived as slow, inconvenient, and uncertain.
There is also the perceived unfairness of advertised fast-charging rates (e.g. a charger may deliver 400kW for 10 minutes then drop to 200kW or less to protect the battery). However, the LEVI scheme (on-street/lamp-post charging) is launching soon, and new tech including 1MW chargers is now possible; these should change the perception over time as more owners are exposed to the benefits.
Changing customer demographics
There is more EV uptake among under-30s and lower socioeconomic groups. Growth has been influenced by incentives and the new Carbon Credit Trading Scheme (CCTS).
Insurance experience and risk trends
EV premiums are moving back towards parity with ICE, having been higher since introduction. EVs do have some potential risk advantages, including that batteries may reduce BI severity as they offer some structural protection. Regenerative one-pedal braking can also mean safer driving behaviour. Norway’s EV data suggests they are seeing a lower claim frequency on a per-mile basis, but EV drivers often drive shorter, urban journeys, making comparisons difficult.
Claims and repair challenges
Repair inflation is currently similar for EVs vs comparable ICE models (according to Gecko data). But there are operational complexities including charging cables being trip hazards when trailed across public pavements (and it’s unclear whether a motor or home policy responds to these claims). Tesla and other EV brands are now showing lower repair hours and labour costs as expertise matures, however paint costs are rising sharply across the sector.
Market dynamics
The panel made a few observations about the market and how the situation may develop. They expect ICE vehicle values to increase post-2030 (and this is already happening). The uncertainty and variability of the price of charging is seen as unfair by customers.
Unfortunately recycling and end-of-life concerns for the vehicle and battery remain a low priority for insurers and consumers. Chinese EV manufacturing dominance and protectionism partly drove the UK’s policy I mentioned above. Insurance for EVs is still too expensive to fully support the transition.
EV adoption is accelerating, but pricing, infrastructure, charging fairness, and repairability remain major barriers and insurers must prepare for a very different risk landscape.

6. From proxy shadows to fair premiums – measuring indirect discrimination in general insurance pricing
Why this matters
Some rating factors are unintentional proxies for protected characteristics. For example, credit score can be used to predict immigrant status, and postcode correlates strongly with race in some countries, such as South Africa; similar but less visible issues may occur elsewhere.
Currently insurers may hope that “if we don’t include protected data, we won’t discriminate”. However this is naive.
The limits of the “unawareness approach”
Simply excluding sensitive variables (in the presentation they used a smoker/non-smoker analogy) does not eliminate discrimination. Without collecting and analysing the sensitive data, we cannot detect or correct for proxy effects.
Why we need data on protected characteristics
To measure discrimination, you need the sensitive characteristics. If unavailable, proxies can be imputed (e.g. in the US insurers can use race-imputation methods from proxy data) but direct data is better. We can still analyse fairness even with samples where protected data is available.
Measuring indirect discrimination
The key concepts the panel explained are Proxy Discrimination (PD) and Unfairness (UF). A UF of zero implies there is no predictive variance attributable to the protected group.
The panel suggested using sensitivity analysis, and curve comparison tools, which are described in their paper. The “Discrimination-free price” lies between group curves, but real models typically bias one way at low risk and the opposite at high risk.
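For intuition, a discrimination-free price of this kind is typically constructed by averaging the group-aware best estimates over the population distribution of the protected attribute, so the final price sits between the group curves and no longer depends on the protected characteristic. A minimal sketch with a made-up toy model follows; I'm assuming the standard construction from the fairness-in-pricing literature here, not reproducing the panel's paper.

```python
def discrimination_free_price(mu, x, d_values, d_probs):
    """mu(x, d): best-estimate price using rating factors x AND the
    protected attribute d. The fair price integrates d out using the
    *population* weights d_probs rather than the x-conditional ones,
    which is what removes the proxy channel."""
    return sum(p * mu(x, d) for d, p in zip(d_values, d_probs))

# Toy best-estimate model: price depends on a risk factor x and group d
def mu(x, d):
    return 100 + 20 * x + (15 if d == "A" else 0)

# Assumed population mix: 40% group A, 60% group B
price = discrimination_free_price(mu, x=2.0, d_values=["A", "B"], d_probs=[0.4, 0.6])
print(price)  # 100 + 40 + 0.4*15 = 146.0
```

Note the prerequisite this makes concrete: you cannot compute this price without first modelling with the protected attribute, which is exactly the panel's argument for collecting or inferring the sensitive data.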
Where discrimination enters
Discrimination can enter not just in technical pricing but also in Street pricing and optimisation, and conversion-based uplift/downlift models. If optimisation is correlated with protected groups, discrimination persists even if technical pricing is fair.
Professional and legal considerations
Ultimately, solutions need to stand up in court. The actuarial profession should lead on this issue; otherwise governments may impose restrictive price-control regulation. Parallels exist with with-profits life insurance fairness frameworks. There is a debate between professional responsibility (i.e. doing what’s right) vs legal responsibility (i.e. minimum compliance).
To eliminate unfair discrimination, insurers must measure it, and to measure it they must collect or infer sensitive data, then adjust models using robust fairness techniques.
7. Fair Value & the Elusive Price of Peace of Mind in Insurance
What “peace of mind” means in pricing
Historically, customer peace of mind has been used to justify low loss ratios and high margins on some products. In fact, fair value is a combination of tangible economic value and intangible peace of mind. But if a policy isn’t used, customers may feel buyer’s remorse, questioning whether peace of mind was worth the cost. Oddly, to some people, not having a claim can make a policy feel like a waste of money.
Regulatory perspective
The FCA halted the sale of products like GAP insurance because they offered poor value. The Consumer Duty regulation focuses on claims paid and tangible benefits, but peace of mind matters too. It’s difficult for customers to compare products (e.g. IPIDs aren’t well understood). Smaller insurers struggle with regulatory costs, affecting competitiveness.
How to judge fair value
The insurer’s loss ratio gives context but isn’t the full story. The key question is “would a reasonable, well-informed consumer buy this product?” It’s further complicated because peace of mind differs by product e.g. motor vs niche covers have very different uses and customers. Fair value at the distributor stage is also often overlooked by actuaries.
Behavioural and market insights
Peace of mind is sometimes a reverse-engineered justification for higher prices. Vulnerable customers may overpay more easily, and high-net-worth customers value peace of mind differently compared with other customers. NPS (Net Promoter Score) varies substantially by product line and may reflect perceived fairness.
Consumer psychology
Customers increasingly assume regulation protects them from bad products. They believe insurers “wouldn’t be allowed” to sell poor value products, reducing personal responsibility. However, past issues like PPI show how cross-subsidisation masked unfair value.
Brand and trust
Education about peace-of-mind value is key, and this includes brand trust, capital adequacy and the reliability of the claims service. These intangibles are part of value, but must be clear and defensible when justifying the price.
Fair value must balance real economic benefit with genuine peace of mind, not use peace of mind as a retrospective excuse for overpricing.
8. Rethinking the actuarial career path – Insights and new directions and Chief Actuaries Panel
The final day ended with a couple of discussions with senior actuaries about their careers, how they see the actuarial market changing, and the advice they would give to other actuaries. One panel comprised chief actuaries from three different companies and market sectors; the other was a dedicated discussion on career paths.
Future roles and AI
There was consensus that AI won’t replace actuaries, but will change the types of tasks we are involved in. Junior actuaries must still understand the underlying processes to validate AI outputs. AI presents an opportunity for actuaries to lead in AI development, orchestration, and training. And of course agentic AI systems will be orchestrated and validated by humans, and actuaries are ideal for this.
Career development and mobility
Career progression is now a lattice, not a ladder. The panel advised moving around, saying yes, getting involved in many areas. Early exposure to commercial perspectives helps with understanding portfolios and underserved areas. Hence we should learn about different business areas as much as possible (Finance, Underwriting, Reserving, Exposure Management, etc.) and be adaptable and curious. You never know how it might be relevant. Diverse experiences pay off later.
Core skills for the future
The combination of technical and domain knowledge gives actuaries an advantage over pure data scientists. Key skills in the age of AI will be storytelling and communicating insights clearly, resolving conflicts and influencing stakeholders, calm and consistent messaging, trend identification, and reading policies, accounts and P&L statements thoroughly. You can see that many of these are human and social skills built on top of a technical grounding.
On-the-job development
The panels advised getting hands-on with capital modelling, exposure management, and portfolio analysis. Potentially taking roles even if you lack prior experience. Learning comes through doing.
They stressed the importance of building credibility with underwriters and other stakeholders early; this helps influence during stress periods or soft markets. For people managers, being a good leader means ensuring your team grows and learns even when you are away.
The current landscape
In terms of the technical tools being used today there is a familiar split: Excel (19%), vendor tools (37%), SQL (11%), Python (11%), R (9%).
Soft markets shift focus to expenses, automation, and operational efficiency, and actuaries can add value here. Early curiosity and continuous learning are essential to identify trends and act on opportunities.
The actuarial career of the future is flexible, cross-functional, and AI-enabled. Success depends on curiosity, adaptability, strong communication, and combining technical skills with business insight.

GIRO offered up a huge amount of information and insight, plus lots of time networking with colleagues from my company, some old familiar faces from a few years back, people at the exhibitor stands and lots of impromptu and random conversations in the audience at each of the talks. Finding out what other people are working on, what they’re thinking about, and where the interesting topics and minds are located was a valuable experience and what it’s all about.
There was of course a lot more content on offer, not just from the talks I went to, but from the many others I couldn’t, but hopefully this gives an idea of what you might expect at another of these events. Thanks for reading. Hope you can join me for the next one.
