SWTCH by Pigment
Three days of predictions, insights, and advice from leaders in finance, sales, HR, supply chain and more
Register now here
By Steve Morlidge, Business Forecasting thought leader, author of "Future Ready: How to Master Business Forecasting" and "The Little Book of Beyond Budgeting"
The average level of MAPE for your forecast is 25%.
So what?
Is it good or bad? Difficult to say.
If it is bad, what should you do? Improve…obviously. But how?
The problem with simple measures of forecast accuracy is that it is often difficult to work out what they mean, and even trickier to work out what you need to do about them.
Bias, on the other hand, is a much easier thing to grasp.
Systematic under- or over-forecasting is straightforward to measure – it is simply the average of the errors, including the sign – and it is clearly a ‘bad thing’. Whether you are using your forecasts to place orders with suppliers or to steer a business, everyone understands that a biased forecast is bad news. It is also relatively easy to fix: find out what is causing you to consistently over- or underestimate, and stop doing it!
That is the theory, and it is right up to a point. In reality, however, what seems like a straightforward matter in principle can get surprisingly tricky in practice.
There is always a chance that you will get a sequence of forecast errors with the same sign, purely by chance.
It's like flipping a coin; there is a good chance that you will flip two heads in a row. It is always possible, purely by chance, that you will get three heads in a row, and four and, although it is much less probable, five in a row. In fact, you can never be 100% sure that your coin is biased; that it has two heads or two tails.
By the same token, you can never be 100% sure that your forecast process is biased. It is a question of probabilities and the balance of evidence.
So one approach to spotting bias is to choose a given probability level and not act until you have evidence that gives you that level of confidence.
While you don’t want to deal with too many false alarms, if you have a monthly forecast process you clearly don’t want to wait a year to find out that you have a problem. So how do you decide when to hit the alarm button?
The math for this is really simple.
Because the chance of flipping a head (or of over-forecasting purely by chance) is 50%, the chance of two heads in a row is 50% × 50% = 25%, three heads 12.5%, four heads 6.25%, and so on, halving each time. A run of the same outcome, whether heads or tails, is twice as likely, so the chances of a run are: two in a row 50%, three 25%, four 12.5%, five 6.25%.
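The halving above can be sketched in a few lines of Python (illustrative only; the function name is mine, not from any library):

```python
def same_sign_run_probability(n: int) -> float:
    """Probability that n consecutive forecast errors share the same sign
    purely by chance, assuming an unbiased process (50/50, like a fair coin).
    The first error fixes the sign; each later error matches it with p = 0.5,
    so a run of n of *either* sign has probability 2 * 0.5**n."""
    return 2 * 0.5 ** n

for n in range(2, 6):
    print(n, same_sign_run_probability(n))
# 2 -> 0.5, 3 -> 0.25, 4 -> 0.125, 5 -> 0.0625
```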
For a monthly forecast process, I would normally recommend sounding the bias alarm when you get four errors of the same sign in a row, giving you roughly 90% confidence that you have a problem. Put another way, you should expect an average of only about one false alarm a year – a level of risk that seems about right.
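A minimal sketch of this four-in-a-row rule, assuming a list of signed forecast errors (treating a zero error as breaking the run is my choice, not something the rule itself specifies):

```python
def bias_alarm(errors, run_length=4):
    """Return True if `run_length` consecutive errors share the same sign.
    A zero error is evidence of neither sign, so it breaks the run."""
    run = 0
    prev = 0
    for e in errors:
        sign = (e > 0) - (e < 0)  # +1, -1, or 0
        if sign != 0 and sign == prev:
            run += 1
        else:
            run = 1 if sign != 0 else 0
        prev = sign
        if run >= run_length:
            return True
    return False

print(bias_alarm([2.0, 1.5, 0.3, 4.0]))        # four over-forecasts: True
print(bias_alarm([1.0, -1.0, 1.0, -1.0, 1.0]))  # alternating signs: False
```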
This sounds reasonable, and it is a particularly effective way of quickly detecting changes in the pattern of bias. But what happens if you have, say, two errors in a row which are very large? Shouldn’t you sound the alarm sooner? And a small error of the opposite sign in the middle of a sequence of systematic errors doesn’t mean that a bias problem has gone away; we know that this kind of thing can happen purely by chance.
This demonstrates that to spot bias reliably and quickly, you need to take into account the size of errors, not just their pattern. But how should the alarm triggers be set? The level of variation in the forecast errors has a bearing on the statistical confidence you can have in a given level of average error, as does the size of the sample, i.e. the number of periods used to calculate the average.
So it is clear that the simple approach of setting an arbitrary target for forecast bias won’t work; it will miss many problems and trigger false alarms, which, at best, will lead to wasted effort and, at worst, will trigger inappropriate ‘corrective action’ that makes matters worse, not better. What is needed is a way of setting confidence levels for error that takes into account both the number of forecast errors and their level of variation.
Fortunately, this is the kind of problem that is meat and drink to statisticians, and there is a wide range of available solutions. My recommendation would be to use a simple tracking signal, as the approach has been around for many years and is relatively easy to understand and calculate.
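One common form of tracking signal divides the running sum of errors by their mean absolute deviation (MAD). This is a sketch of that form, with the caveat that formulations vary and the author does not prescribe a specific one here:

```python
def tracking_signal(errors):
    """Cumulative sum of signed errors divided by their mean absolute
    deviation. Persistent one-sided errors push the signal away from zero;
    unbiased errors keep it near zero."""
    if not errors:
        return 0.0
    mad = sum(abs(e) for e in errors) / len(errors)
    return sum(errors) / mad if mad else 0.0

# A run of same-sign errors drives the signal toward the run length:
print(tracking_signal([10.0, 10.0, 10.0, 10.0]))    # 4.0
print(tracking_signal([10.0, -10.0, 10.0, -10.0]))  # 0.0
```

In practice the signal is recomputed each period and the alarm sounded when it crosses a chosen threshold (values around ±4 are often quoted in forecasting texts); as with the run-of-signs rule, the right cutoff depends on how many false alarms you are willing to tolerate.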
Bias is bad, and it is usually the first thing that businesses looking to improve the quality of their forecasts choose to target, with good reason. But while most people intuitively believe that it is easy to spot bias, on closer examination, ‘common sense’ approaches based on targeting error are flawed because they cannot reliably distinguish between bias and chance variation.
What is needed is a probabilistic approach to spotting bias, and this post has outlined two ways of doing this:
- counting runs of forecast errors with the same sign, and sounding the alarm when a run is too long to be plausibly the result of chance; and
- a tracking signal, which weighs the cumulative size of the errors against their level of variation.
Using both methods in tandem minimises the chance of false positives – acting upon which will misdirect resources and degrade forecast quality – and false negatives, which perpetuate waste.
This approach is both robust and practical...at least when we are analysing individual forecast series. But this is only half the battle. Many demand managers are responsible for thousands of forecasts produced on a rapid cycle, and this brings with it a whole new set of challenges that will be addressed in the next post in this series.
Steve Morlidge is an accountant by background and has 25 years of practical experience in senior operational roles at Unilever, designing and running performance management systems. He also spent three years leading a global change project at Unilever.
He is a former Chairman of the European Beyond Budgeting Round Table and now works as an independent consultant for a range of major companies, specialising in helping them break out of traditional, top-down ‘command and control’ management practice.
He has recently published ‘Future Ready: How to Master Business Forecasting’ (John Wiley, 2010), and holds a PhD in Organisational Cybernetics from Hull Business School. He is also a co-founder of Catchbull, a supplier of forecasting performance management software.