The Financial Planning and Analysis (FP&A) Board of senior practitioners recently met in London, UK, to discuss the pros and cons of rolling forecasts, how best to introduce them, and to hear a case study from Maersk Group about how the shipping, transport and oil firm has benefitted. “We’ve abolished the annual budget completely and only use rolling forecasting (RF) now,” said Matthijs Schot, head of performance & analysis at AP Moller Maersk, as he shared his company’s implementation of an RF process four years ago and the lessons they’ve learnt.

What are the basic ingredients of advanced FP&A analytics? Getting the discussion underway, the Board’s founder and managing director, Larysa Melnychuk, suggested that it should be proactive, forward-looking, agile, available in real time, multidimensional and integrated – although these elements are no more than the basic essentials. Together they should be enough to provide the business with good-quality information that enables better business decision-making in a timely manner, while also providing unique insights that make a difference to that process.

Quality of Business Forecasting: How to Find the Needle in the Haystack

As we know, the seemingly simple matter of spotting bias – systematic under- or over-forecasting – can get surprisingly tricky in practice if our actions are to be guided by scientific standards of evidence, which they need to be if we are actually going to improve matters.

Reliably identifying systematic forecast error requires that we consider both the pattern and the magnitude of bias, using approaches that explicitly take account of probabilities.
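Here is a minimal sketch of what a probability-aware check might look like for a single forecast. It is only an illustration (the error values, the 5% threshold and the use of a simple two-sided sign test are assumptions of mine, not the method described in part 1 of this series): count how often the forecast came in above or below actuals, and flag bias only if the split would be unlikely to arise by chance.

```python
# A minimal, illustrative bias check (not the method from part 1 of this series):
# a two-sided sign test on the history of forecast errors for a single item.
from scipy.stats import binomtest

def bias_signal(errors, alpha=0.05):
    """Flag a forecast as biased if the split of over- vs under-forecast
    periods is too lopsided to be plausibly due to chance."""
    overs = sum(1 for e in errors if e > 0)    # periods over-forecast
    unders = sum(1 for e in errors if e < 0)   # periods under-forecast
    n = overs + unders                          # ignore exact hits
    if n == 0:
        return False
    return binomtest(overs, n, 0.5).pvalue < alpha

# Example: 10 of 12 periods over-forecast -> flagged as likely systematic bias
print(bias_signal([5, 3, 7, -2, 4, 6, 2, 8, -1, 3, 5, 4]))  # True
```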

How to find the needle in the haystack 

Let’s assume that you have a method for reliably detecting bias in a single forecast. How can this be deployed at scale in a large company where forecasts are mass produced? In these types of businesses, a single demand manager will typically be responsible for upwards of a thousand forecasts, every one of which might be reforecast on a weekly basis, and any one of which might unexpectedly fail at any time if the pattern of demand suddenly changes.

This kind of forecaster is not a master craftsman carefully selecting the right forecasting method and polishing the result until it is ‘perfect’. Instead, they are managers of a forecast factory churning out thousands of ‘items’ at a fast rate, none of which will be as perfect as those produced by a master craftsman, but all of which need to be fit for purpose; ‘good enough’.

Clearly it is important that the demand manager continuously reviews the performance of every forecast every period so that defective ‘products’ do not enter the supply chain. But with such a large portfolio, is this realistic?

Probably not. 

The ‘obvious’ solution to this complexity – the one most companies adopt – is to calculate bias at a high level in the hierarchy and investigate further only when there is evidence of a problem.

The flaw in this approach is that it is extremely unlikely that every forecast in a portfolio or a category is biased in the same way. And when they are not, the errors for items that are over forecast will offset the under-forecast errors to a greater or lesser degree, with the result that chronic bias at the low level is hidden. And it is the bias at this low level that matters, because the replenishment process is driven by these granular forecasts, not high-level aggregates.

The bottom line is that even if high-level bias measures are calculated in a statistically intelligent way (as described in part 1 of this series), they are a completely unreliable guide to bias at the level where it counts – the lowest level.

And the degree of the problem can be considerable; in practice, it is very common to find average errors calculated at a high level understating low-level bias by an order of magnitude or more. For example, it is quite common to find a product category with an average level of bias of, say, 2%, which most people would consider acceptable, that is the result of some SKUs being over forecast by an average of 20% and the rest being under forecast by 18%.
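To make the arithmetic concrete, here is a hypothetical sketch in which every SKU in a category is badly biased yet the aggregate looks almost unbiased. All numbers are illustrative; with this even mix the aggregate works out at about 1%, and a slightly different mix of SKUs produces the 2% quoted above.

```python
# Hypothetical category of 10 SKUs, each with actual demand of 100 units.
# Five are over-forecast by 20%, five under-forecast by 18% (illustrative figures).
actuals   = [100] * 10
forecasts = [120] * 5 + [82] * 5

category_bias = (sum(forecasts) - sum(actuals)) / sum(actuals)
sku_bias = [(f - a) / a for f, a in zip(forecasts, actuals)]

print(f"Category-level bias: {category_bias:.0%}")   # 1% - looks acceptable
print(" ".join(f"{b:+.0%}" for b in sku_bias))       # +20% or -18% everywhere
```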

This is one important reason why companies may experience customer service failure despite having high total inventory levels and apparently good forecast performance metrics.  

The solution 

So how do we reconcile the need to track forecast performance on a very frequent, highly granular level with the apparent impossibility of doing so? 

The answer is to measure low-level under and over forecasting separately, and to test these measures for evidence of statistically significant levels of bias in the manner described in the last post. These alerts, used alongside measures of the scale of the problem, will direct the attention of forecasters to the relatively small number of failing forecasts that matter.
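Here is a hypothetical sketch of what such an exception-based process could look like at portfolio scale. The SKU names, error histories and the simple sign test are illustrative assumptions, not the method from the earlier post: each item’s recent errors are checked for a significant imbalance between over- and under-forecasting, and the resulting alerts are ranked by cumulative error so that attention goes first to the failures that matter most.

```python
# Hypothetical exception report over a portfolio of SKU-level forecasts.
# errors_by_sku maps each SKU to its recent (forecast - actual) errors.
from scipy.stats import binomtest

def bias_exceptions(errors_by_sku, alpha=0.05):
    alerts = []
    for sku, errors in errors_by_sku.items():
        overs = sum(1 for e in errors if e > 0)
        unders = sum(1 for e in errors if e < 0)
        n = overs + unders
        if n == 0:
            continue
        # Probability-aware check: is the over/under split too lopsided for chance?
        if binomtest(overs, n, 0.5).pvalue < alpha:
            direction = "over" if overs > unders else "under"
            alerts.append((sku, direction, abs(sum(errors))))
    # Biggest cumulative error first: the few failing forecasts that matter
    return sorted(alerts, key=lambda a: a[2], reverse=True)

portfolio = {
    "SKU-001": [5, 4, 6, 7, 3, 5, 8, 6],          # chronic over-forecasting
    "SKU-002": [-2, 3, -1, 2, -3, 1, -2, 2],       # noisy but unbiased
    "SKU-003": [-6, -4, -7, -5, -6, -3, -8, -9],   # chronic under-forecasting
}
for sku, direction, size in bias_exceptions(portfolio):
    print(f"{sku}: significant {direction}-forecasting, cumulative error {size}")
```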

Bias is the most treatable symptom of a failing forecast process. Even if we cannot track subtle changes in the pattern of the demand signal, it should be possible to estimate its level reasonably easily. If we do start consistently under or over forecasting, it should be straightforward to detect, and correction usually requires no more than a simple recalibration of our models or our judgement.
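As a hypothetical illustration of what such a recalibration might amount to (the numbers are invented, and this is a sketch rather than a prescription): if a forecast has been running consistently high, subtracting the recent average error from the next forecast removes the chronic offset.

```python
# Illustrative recalibration: remove the average recent error from the next forecast.
recent_errors = [5, 4, 6, 7, 3, 5, 8, 10]      # forecast - actual, consistently positive
adjustment = -sum(recent_errors) / len(recent_errors)   # -6.0
next_forecast = 120                             # whatever the model or judgement produces
print(f"Recalibrated forecast: {next_forecast + adjustment:.0f}")  # 114
```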

But, like many things that are simple in theory, dealing with bias can become an intractable problem given the scale and pace at which forecasting is conducted in practice.

 So when it comes to driving out bias, size matters.

Steve Morlidge is an accountant by background and has 25 years of practical experience in senior operational roles at Unilever, designing and running performance management systems. He also spent 3 years leading a global change project at Unilever.

He is a former Chairman of the European Beyond Budgeting Round Table and now works as an independent consultant for a range of major companies, specialising in helping companies break out of traditional, top-down ‘command and control’ management practice.

He has recently published ‘Future Ready: How to Master Business Forecasting’ (John Wiley, 2010), and has a PhD in Organisational Cybernetics from Hull Business School. He is also a co-founder of Catchbull, a supplier of forecasting performance management software.

Is your Planning System a ‘Measurement system’ or a ‘Management system’?

How do you think performance is measured at the Olympic Games by the teams involved? The number of gold medals? The number of world records?

Now wind back a few years to when a team is preparing for the games. How is performance going to be managed? You can be sure it won’t be in terms of the number of medals they hope to win. Instead, the focus will be on the type of training being given, the diets being prepared, and the way in which equipment and facilities are being used. To ensure these activities can take place, budgets and other resources are allocated to the appropriate activities. In short, the focus is on the process of preparing the athlete and not on the outcomes they hope to achieve.

Budgets and Performance Management

Now compare this approach to the way in which organizations typically plan and budget performance. Most tend to focus on outcomes – the ‘gold medals’ of profit to be made, the total amount to be spent – while the actions required to produce the ‘gold’ are left as a note in an operational plan document, only to be forgotten when actual results are produced.

Performance measurement focuses on results and allows users to analyze those results through charts, grids and trends, and by drilling down to ever greater depths of detail. However, what it doesn’t reveal is the process that individual managers went through in setting the initial targets, the actions that were going to be required, the anticipated state of the business environment, whether or not the required actions were actually carried out, and whether they actually contributed to success. Performance management, by contrast, is all to do with the processes and actions that lead to strategic goals. This includes how management chooses a particular course of action in a given business environment, as well as how those actions relate to other departments and the overall achievement of company strategy.

Interestingly, the setting and monitoring of budgets or forecasts cannot ensure that these activities will be carried out, or tell us whether they are proving effective. And yet this is where most organizations spend their planning time. For the average company it can be as long as 6 months. Are we really saying that this is the most important planning activity, or have we embraced a ritual that doesn’t make much sense in this unpredictable day and age?

Attributes of a Performance Management System

Performance management systems encompass the setting up of an operational plan that is tied to strategic goals. They allow managers to collaborate with others on setting up initiatives to which resources can be allocated and which will eventually form part of the departmental budget.

Performance management systems allow these initiatives to be assessed in various combinations so that the best can be selected as part of the agreed plan. The system will then go on to track the implementation of agreed initiatives and warn users and appropriate managers if activities have not been completed or if they are not having the desired effect on forecast strategic goals. Where goals are not being met, a performance management system will allow managers to propose changes and try out alternative scenarios to put the plan back on course, which once agreed, will then be tracked by the system.

In short, performance management systems encompass performance measurement systems, but not the other way around. To achieve effective performance management, systems must possess sophisticated process control capabilities that constantly track organizational activities and invoke user involvement as required.

So do you have one? Is your planning system a ‘measurement system’ or a ‘management system’?


In this blog, the author offers his insights on some of the challenges faced by practitioners in making sense of data and communicating it within organisations, and his ideas on how these shortcomings can be overcome. A few decades ago everything seemed so straightforward: when it came to planning and controlling businesses, annual budgets were the only show in town. Things are very different now.
