
March 23, 2020


Robert Anderson

Interesting article. I remember when I was working at an insurance company, there was a push to put automated underwriting into place. There was enormous pushback due to expected job loss, so for a multi-year period, algorithmic underwriting suggestions were reviewed by real underwriters to see whether human or machine underwriting choices were more accurate given the guidelines. The algorithms won by a significant margin. But three challenges remain.

1. The validity of the chosen data values, and the consistent accuracy of their predictions of future events, is typically reviewed only if the actuaries see significant economic deviation, making the predictions generally good enough for profitability and regulatory scrutiny, but not always "fair".

2. There is no repeat challenge of the machine predictions against human underwriters, on the presumption that the machines won.

3. Most importantly, I believe, there is no systematic effort to put into place algorithmic back tests of the form "if this algorithmic decision is correct, then these things should be true," especially for large quantities of decisions made over time, e.g. auto insurance underwriting decisions over 1-2 year periods. This would be used not so much for catching a mistake as for watching for drift in expected predictive accuracy, prompting a human review.

This would seem to be an area where the tension between correlation and causal statistical significance, and the potential misapplication of statistics, is significant.
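The back-test idea in point 3 can be sketched in a few lines. This is a minimal illustration under assumed conventions (not from the comment): each decision is a (prediction, outcome) pair, and "drift" means the rolling hit rate falls more than a tolerance below a baseline window's hit rate. The function and parameter names are hypothetical.

```python
# Minimal sketch of a drift back test for algorithmic decisions.
# Assumptions (illustrative, not from the source): decisions arrive as
# (prediction, outcome) pairs, and "drift" means a rolling window's
# accuracy falls more than `tolerance` below the first window's accuracy.

def hit_rate(pairs):
    """Fraction of decisions whose prediction matched the observed outcome."""
    return sum(p == o for p, o in pairs) / len(pairs)

def drift_alerts(decisions, window=100, tolerance=0.05):
    """Compare each rolling window's accuracy against the first window.

    Returns the start indices of windows whose hit rate dropped more than
    `tolerance` below the baseline -- a prompt for human review, not proof
    of a mistake.
    """
    baseline = hit_rate(decisions[:window])
    alerts = []
    for i in range(window, len(decisions) - window + 1, window):
        if hit_rate(decisions[i:i + window]) < baseline - tolerance:
            alerts.append(i)
    return alerts

# Synthetic example: two 95%-accurate windows, then an 80%-accurate one.
good = [(1, 1)] * 95 + [(1, 0)] * 5
bad = [(1, 1)] * 80 + [(1, 0)] * 20
print(drift_alerts(good + good + bad))  # -> [200]
```

The point of the sketch is the review-trigger design: the test does not judge any single decision, it only flags when aggregate predictive accuracy wanders outside an expected band, at which point humans re-examine the algorithm.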

