random_numbers

Any visitor to this website is likely to know enough about the subject that I need not explain the basic theory (let me know if I'm wrong; the “general fitting” page has a few points). What I do need to cover is what the variables are and how I fitted them. One point to make is that none of the variables is Normally distributed. Why “chi squared” is insufficient is explained here (see “fitting distributions to data” and request the document).
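
As an illustration only, not the fitting actually used here: the sketch below fits one candidate distribution by maximum likelihood and applies the Anderson-Darling test, which, unlike a binned chi-squared test, gives weight to the tails. The placeholder data, seed, and choice of a skew-normal candidate are all assumptions.

```python
# A minimal sketch of fitting a non-Normal candidate and checking fit
# with a tail-sensitive test. A binned chi-squared test discards tail
# detail, which is one common reason to consider it insufficient.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)              # placeholder data only
returns = stats.skewnorm.rvs(a=-4, loc=0.06, scale=0.15,
                             size=65, random_state=rng)

# Fit a skew-normal by maximum likelihood (one plausible "near best fit").
a, loc, scale = stats.skewnorm.fit(returns)
print("Skew-normal fit:", a, loc, scale)

# Anderson-Darling weights the tails, unlike a binned chi-squared test.
ad = stats.anderson(returns, dist="norm")         # test the Normal assumption
print("A-D statistic vs Normal:", ad.statistic)
print("Critical values:", ad.critical_values)
```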

While I could have generated the random numbers on the fly within the model, I had three reasons for not doing so: first, speed within Excel; second, I wanted the experiment to be repeatable; third, I wanted to control extreme values.
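
A minimal sketch of those three motivations, not the generator actually used: the numbers are produced once outside the model (fast to reuse in Excel), from a fixed seed (repeatable), and clipped at chosen quantiles (extremes controlled). The shapes, quantiles, and file name are illustrative assumptions.

```python
# Pre-generate a fixed table of random numbers rather than drawing
# them on the fly inside the model.
import numpy as np

rng = np.random.default_rng(seed=20191231)     # fixed seed => repeatable
raw = rng.standard_normal(size=(10_000, 65))   # scenarios x years

# Cap extremes at the 0.1% and 99.9% points so no single draw dominates.
lo, hi = np.quantile(raw, [0.001, 0.999])
capped = np.clip(raw, lo, hi)

# Hypothetical file name; the table would then be fed to the Excel model.
np.savetxt("random_numbers.csv", capped, delimiter=",")
```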

The results exclude correlations, because in earlier work the differences were not significant. I have used 10,000 scenarios so that a “1 in 200” result is based on 50 cases. Because it is very much faster to run, I have also used just the first 2,000 of the 10,000.
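
To make the “50 cases” point concrete, the sketch below reads the 0.5th percentile (the “1 in 200” point) from 10,000 illustrative outcomes, where it rests on 50 observations, and from the first 2,000, where it rests on only 10. The outcome values are placeholders.

```python
# Why 10,000 scenarios: the 1-in-200 tail estimate then rests on 50
# outcomes rather than a handful.
import numpy as np

rng = np.random.default_rng(seed=1)
outcomes = rng.normal(loc=0.0, scale=1.0, size=10_000)  # placeholder results

worst_50 = np.sort(outcomes)[:50]             # the 50 cases behind "1 in 200"
one_in_200 = np.percentile(outcomes, 0.5)     # the 0.5th percentile
print(one_in_200, worst_50.mean())

# The faster variant: the same statistic from the first 2,000 scenarios,
# where "1 in 200" rests on only 10 cases.
print(np.percentile(outcomes[:2_000], 0.5))
```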

Financial parameters are not best modelled as Normal or log-Normal, so “near best fits” have been used instead. It should be borne in mind that current financial conditions (early 2020, and for several prior years) are “lower than normal”, so that the random numbers used are atypical of “now”.
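
A minimal sketch of choosing a “near best fit”: fit several candidate distributions by maximum likelihood and compare them, here by AIC. The candidates and the placeholder data are assumptions, not the distributions actually used on this site.

```python
# Compare candidate distributions by maximum likelihood, penalising
# extra parameters via AIC; the lowest AIC is the "near best fit".
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)
data = stats.t.rvs(df=4, loc=0.05, scale=0.1, size=66, random_state=rng)

candidates = {
    "normal":    stats.norm,
    "student_t": stats.t,
    "logistic":  stats.logistic,
}
for name, dist in candidates.items():
    params = dist.fit(data)                       # maximum likelihood fit
    loglik = np.sum(dist.logpdf(data, *params))
    aic = 2 * len(params) - 2 * loglik            # penalise extra parameters
    print(f"{name:10s}  AIC = {aic:7.2f}")
```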

The financials have been modelled annually from end-1953 until end-2014 (“2014”, as used for the 2017 DWP submission) and until end-2018 (“2018a”). The 2014 random data were benchmarked to the whole experience.
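
One simple reading of “benchmarked” is moment matching: shift and scale the simulated numbers so their mean and standard deviation equal those of the historical experience. Whether moments or other statistics were matched is an assumption; the sketch below shows the moment version with placeholder data.

```python
# Benchmark simulated numbers to a historical series by matching its
# mean and standard deviation.
import numpy as np

def benchmark(simulated: np.ndarray, history: np.ndarray) -> np.ndarray:
    """Shift and scale `simulated` to the mean/sd of `history`."""
    z = (simulated - simulated.mean()) / simulated.std(ddof=1)
    return history.mean() + z * history.std(ddof=1)

rng = np.random.default_rng(seed=2014)
history = rng.normal(0.05, 0.12, size=61)       # placeholder annual history
simulated = rng.standard_normal(size=10_000)
print(benchmark(simulated, history).mean())     # matches the historical mean
```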

For 2018a, the data were split into “early” (before 1985) and “later” (1985 onwards), taking account of when index-linked gilts became a more mature market. The raw data were randomized across the two separate intervals and weighted 75% newer and 25% older, a simplified “twin regime” approach, and then benchmarked to the whole end-2018 experience.
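
A sketch of the simplified twin regime as described: each draw comes from the later (post-1984) history with probability 0.75 and from the earlier history otherwise. The placeholder series, sizes, and seed are assumptions.

```python
# Resample across two regimes with 75% weight on the newer interval
# and 25% on the older one.
import numpy as np

rng = np.random.default_rng(seed=2018)
early = rng.normal(0.08, 0.18, size=31)    # pre-1985 placeholder history
later = rng.normal(0.04, 0.10, size=34)    # 1985-2018 placeholder history

def twin_regime_draw(n: int) -> np.ndarray:
    """Resample n values, 75% from `later`, 25% from `early`."""
    use_later = rng.random(n) < 0.75
    return np.where(use_later,
                    rng.choice(later, size=n),
                    rng.choice(early, size=n))

sample = twin_regime_draw(10_000)
print(sample.mean())
```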

It seemed to me that it would be better to benchmark the raw data to each separate interval, and to apply the twin regime to the benchmarked random data. While I think this approach (“2018b”) should lead to far more robust results than 2018a, I ran out of time during 2019 to show the results with a clear understanding of the inferences to be drawn. I shall therefore revisit this when I look at data until end-2019.
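
For what the 2018b ordering might look like mechanically, an assumption on my part since the approach was not finalised: benchmark the simulated numbers to each interval’s own experience first, then mix the two benchmarked sets 75/25.

```python
# The proposed 2018b ordering: benchmark per interval first, then mix.
import numpy as np

rng = np.random.default_rng(seed=20181231)
early = rng.normal(0.08, 0.18, size=31)       # pre-1985 placeholder
later = rng.normal(0.04, 0.10, size=34)       # 1985+ placeholder

def to_moments(x: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Shift and scale `x` to the mean/sd of `target`."""
    z = (x - x.mean()) / x.std(ddof=1)
    return target.mean() + z * target.std(ddof=1)

sims = rng.standard_normal(size=10_000)
bench_early = to_moments(sims, early)         # benchmark each interval...
bench_later = to_moments(sims, later)
pick = rng.random(10_000) < 0.75              # ...then apply 75/25 mixing
mixed = np.where(pick, bench_later, bench_early)
print(mixed.mean(), mixed.std(ddof=1))
```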

The distributions finally used are listed here and the summary statistics are listed here.