
Visitors to this website are likely to know enough about the subject that I need not explain the basic theory, but do let me know if I am wrong; the “general fitting” page covers a few points. What I do need to cover is which variables I used and how I fitted them. One point to make is that none of the variables is Normally distributed. Why a “chi squared” test is insufficient is explained here (see “fitting distributions to data” and request the document).

While I could have generated the random numbers within the model on the fly, I had three reasons for not doing so: first, speed within Excel; second, I wanted the experiment to be repeatable; third, I wanted to control extreme values.
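The last two reasons can be sketched in a few lines. This is not the author's actual workbook process; it is a minimal illustration, assuming standard-Normal innovations, a fixed seed for repeatability, and a symmetric cap on extreme draws (the cap level of 4 standard deviations is my own illustrative choice):

```python
import numpy as np

def pregenerate_draws(n_scenarios=10_000, n_years=40, seed=2018, z_cap=4.0):
    """Draw standard-Normal innovations once, with a fixed seed so the
    experiment is repeatable, and clip at +/- z_cap to control extremes."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_scenarios, n_years))
    return np.clip(z, -z_cap, z_cap)

draws = pregenerate_draws()
print(draws.shape)               # (10000, 40)
print(abs(draws).max() <= 4.0)   # True
```

Re-running with the same seed reproduces the table exactly, which is what pre-generating and storing the numbers (rather than drawing them live in the model) buys you.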

The results exclude correlations because the differences were not significant in earlier work (I will use correlated numbers for 2019). I have used 10,000 scenarios so that the “1 in 200” point (no longer being shown) would be based on 50 cases. Because it is very much faster to run, I have also used just the first 2,000 of the 10,000 (which converges pretty well), and I shall be using 2,000 for 2019.
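The convergence claim can be checked directly: compare a tail percentile estimated from the first 2,000 scenarios with the same statistic from all 10,000. The outcome distribution below is a synthetic stand-in (a log-Normal), not the author's model output:

```python
import numpy as np

rng = np.random.default_rng(7)
outcomes = rng.lognormal(mean=0.0, sigma=0.2, size=10_000)  # stand-in results

# 0.5th percentile = the "1 in 200" point; at 10,000 scenarios it rests
# on roughly the 50 worst cases mentioned in the text.
p_full = np.percentile(outcomes, 0.5)
p_first = np.percentile(outcomes[:2_000], 0.5)
print(p_full, p_first)  # the two estimates should be close
```

If the two figures agree to within the tolerance you care about, the cheaper 2,000-scenario run is defensible for routine work.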

Financial parameters are mostly not best modelled as Normal or log-Normal, so “near best fits” were taken instead. It should be borne in mind that current financial conditions (early 2020 and for several prior years) are “lower than normal”, so the random numbers used are atypical of “now”. When I first published the 2018 results in January 2020, I knew nothing of Covid-19, and nobody knows where that will lead over the long term.
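One common way to see that a Normal fit is inadequate is to compare its log-likelihood against a heavier-tailed alternative on the same data. The sketch below uses a Student's t as an illustrative stand-in (the document does not say which distributions were actually chosen) and synthetic “returns” in place of the real series:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic fat-tailed yearly returns, standing in for the real data
returns = stats.t.rvs(df=4, loc=0.05, scale=0.1, size=65, random_state=rng)

norm_params = stats.norm.fit(returns)   # (loc, scale) by maximum likelihood
t_params = stats.t.fit(returns)         # (df, loc, scale)

ll_norm = stats.norm.logpdf(returns, *norm_params).sum()
ll_t = stats.t.logpdf(returns, *t_params).sum()
print(ll_norm, ll_t)  # the t fit typically scores higher on fat-tailed data
```

Comparing full log-likelihoods (or AIC) this way avoids the arbitrary binning that makes a chi-squared goodness-of-fit test unreliable on small annual samples.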

The financials have been modelled annually from end-1953 until end-2014 (“2014”, as used for the 2017 DWP submission) and until end-2018 (“2018a”). The 2014 random data were benchmarked to the whole experience.

For 2018a, the data were split into “early” (pre-1985) and “later” (post-1984) periods, taking account of when index-linked gilts became a more mature market. The raw data were randomised across the two separate intervals and weighted 75% newer and 25% older (a simplified “twin regime” approach), and then benchmarked to the whole end-2018 experience.
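The 2018a recipe can be sketched as: resample from the two blocks with 25%/75% weights, then rescale the pooled result so its mean and standard deviation match the whole end-2018 history. The series here are synthetic stand-ins, and matching the first two moments is my own simple reading of “benchmarked”:

```python
import numpy as np

def twin_regime_sample(early, later, n, rng, w_later=0.75):
    """Draw n observations, each taken from `later` with probability w_later."""
    pick_later = rng.random(n) < w_later
    return np.where(pick_later,
                    rng.choice(later, size=n),
                    rng.choice(early, size=n))

def benchmark(sample, target):
    """Shift and scale `sample` to the mean/sd of `target`."""
    return (sample - sample.mean()) / sample.std() * target.std() + target.mean()

rng = np.random.default_rng(2018)
early = rng.normal(0.08, 0.04, size=31)   # stand-in for 1954-1984 observations
later = rng.normal(0.04, 0.02, size=34)   # stand-in for 1985-2018 observations
whole = np.concatenate([early, later])

raw = twin_regime_sample(early, later, 10_000, rng)
scenarios = benchmark(raw, whole)         # benchmarked to the whole experience
```

After the final step the scenario set reproduces the whole-period mean and standard deviation exactly, while the 75/25 mixing still shapes the rest of the distribution.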

It seemed to me that it would be better to benchmark the raw data to each separate interval, and only then to apply the twin regime to the benchmarked random data. While I think this approach (“2018b”) should lead to far more robust results than 2018a, I ran out of time during 2019 to show the results with a clear understanding of the inferences to be drawn; I shall revisit this when I include 2019 data.
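The proposed 2018b reordering swaps the order of operations: benchmark each block to its own interval first, then apply the 75/25 mix to the already-benchmarked draws. Again a sketch with synthetic data and moment-matching as the assumed benchmarking step:

```python
import numpy as np

def standardise_to(sample, target):
    """Rescale `sample` to the mean/sd of `target` (simple benchmarking)."""
    return (sample - sample.mean()) / sample.std() * target.std() + target.mean()

rng = np.random.default_rng(42)
early = rng.normal(0.08, 0.04, size=31)   # stand-in pre-1985 observations
later = rng.normal(0.04, 0.02, size=34)   # stand-in post-1984 observations

# 2018b order: benchmark each block to its own interval, THEN mix 75/25
early_bm = standardise_to(rng.choice(early, size=10_000), early)
later_bm = standardise_to(rng.choice(later, size=10_000), later)
pick_later = rng.random(10_000) < 0.75
scenarios = np.where(pick_later, later_bm, early_bm)
```

Under this ordering each regime retains its own level and volatility, and only the mixture weights determine how the two regimes combine, which is plausibly why it should be the more robust variant.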

The distributions finally used are listed here and the summary statistics are listed here.