-
I haven't had a chance to look too deeply at this yet, and might not for a bit, but I suspect it has to do with the precision of the different solvers used by chainladder-python (scipy under the hood), the TFWP paper (an older version of Excel), and your sample. Since solvers only provide numerical approximations, I would expect different implementations to produce values that agree only to within rounding. I think the only way to validate this hypothesis is to compare each of the intermediate values of the overall Bondy algorithm between the two implementations.
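For example, a minimal sketch of that intermediate comparison, assuming the fitted estimator exposes the b_ and earliest_ldf_ attributes described in the chainladder docs (check dir(tail) on your installed version):

import numpy as np
import chainladder as cl

# Same fit as in the question below; then inspect the intermediate
# fitted quantities so they can be compared cell-by-cell with Excel.
# b_ and earliest_ldf_ are the attribute names given in the chainladder
# docs -- verify them against your installed version.
triangle = cl.load_sample('tail_sample')['paid']
dev = cl.Development(average='simple').fit_transform(triangle)
tail = cl.TailBondy(earliest_age=12).fit(dev)

np.set_printoptions(precision=12)   # show enough digits to see solver-level noise
print(tail.b_)             # fitted Bondy exponent
print(tail.earliest_ldf_)  # LDF at the selected earliest_age
print(tail.cdf_)           # resulting cumulative development factors

If the Bondy exponent and the earliest LDF already differ beyond rounding, the discrepancy is upstream of the tail extrapolation itself; if they match, it is the extrapolation step that diverges.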
-
I am trying to confirm/document the generalized Bondy tail. There seems to be a difference in precision between the calculations in Python and Excel, and I would like to reproduce the same CDFs in both. Any suggestions are appreciated. Thank you!
Python/CL code
import chainladder as cl

triangle = cl.load_sample('tail_sample')['paid']
dev = cl.Development(average='simple').fit_transform(triangle)
tail = cl.TailBondy(earliest_age=12).fit(dev)
tail.cdf_
Excel version
TFWP_Appendix_August2013_CL.xlsx
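For reference, this is roughly how I was planning to line the two up, assuming pandas (with openpyxl) can read the workbook; the sheet name and cell range below are placeholders for wherever the CDFs actually sit in the spreadsheet:

import pandas as pd
import chainladder as cl

# Pull the CDF row out of the workbook and difference it against the
# Python CDFs. Sheet name and column range are placeholders.
excel_cdf = pd.read_excel(
    'TFWP_Appendix_August2013_CL.xlsx',
    sheet_name='Bondy',       # placeholder sheet name
    usecols='B:H',            # placeholder columns holding the CDFs
    nrows=1,
    header=None,
).iloc[0].to_numpy()

triangle = cl.load_sample('tail_sample')['paid']
dev = cl.Development(average='simple').fit_transform(triangle)
tail = cl.TailBondy(earliest_age=12).fit(dev)
# to_frame() layout may vary by chainladder version; the CDF pattern
# is the same across origins, so the first row is enough.
python_cdf = tail.cdf_.to_frame().iloc[0].to_numpy()

print(python_cdf - excel_cdf)   # differences should be on the order of solver tolerance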