How To Improve Company Performance With FP&A Business Partnering

By Anders Liu-Lindberg, Head of Global Finance PMO at Maersk Transport & Logistics 

Companies are constantly looking for ways to improve their performance, and there’s no doubt it’s easier when different functions work together on finding improvement ideas. It’s clear that certain functions execute the ideas while others support from the sidelines and manage the outcomes. FP&A is such a support function: it typically works at the corporate or HQ level with strategic concepts, ideally supporting senior leaders or even top management in driving their agenda. So, how can FP&A teams most effectively help their stakeholders improve performance? The answer is Business Partnering, so let us expand on that.

Business partners help drive company performance

As a business partner, the job of the FP&A team is to participate in strategy meetings where different options are discussed to drive the company forward. The FP&A team should help qualify each option so that the best choice can be made. The team should even give a recommendation based on what the analysis of the options shows. Once a decision is made, it filters down to the operational units for execution, and the FP&A team will keep track of the overall performance. To really drive performance, however, the FP&A team should stay in close dialogue with the operational units and help the line managers as well as the local finance teams better understand the strategic choices made and how to translate them into execution. In this case, Business Partnering goes both up and down the company hierarchy: first the FP&A team helps senior management make the right strategic choices, and later it assists the operational units in understanding the choices and translating them into local initiatives. Without this link, there’s a very high likelihood that the strategic choices are misinterpreted in translation and company performance suffers. To visualize how FP&A can use Business Partnering to improve company performance, we can look at the flowchart below.


This chain of actions can essentially be repeated over and over as a continuous improvement cycle of company performance. It “only” requires that the FP&A team is treated as a trusted partner by both senior management and the operational units.

How is this different from what FP&A has always done?

Surely, there are pockets of excellence here and there where an FP&A team has always been involved in making strategic decisions and translating them to the frontline. However, FP&A has typically focused on having strategic choices handed over to it, forecasting their effect, and preparing a budget that would show the total outcome. Then a message was sent to operational units to hit those budgets, and every time they fell short, the line manager would receive an e-mail (not even a phone call) asking what actions (s)he would take to improve performance. Not a lot of teamwork or collaboration is involved here, and often the FP&A team would fail to understand why the operational units didn’t just follow the plan! In simple terms, they never understood the plan (and you must wonder if the FP&A team did either), so it’s no wonder that performance didn’t follow.

One example could be that, due to a bad year, management decides to tell all units to cut 10% from their spending. No direction is given as to what actions should be taken. No patience is shown for non-performance. Perhaps an initiative to cut 10% is not a strategic action, but still, the FP&A team needs to act as a partner to management here and help them make some smarter choices. FP&A should know where there are improvement opportunities on both revenue and costs that would reach the same impact as a 10% cost cut across all units. At the same time, FP&A can easily help translate the strategic action to operational units because it has been deeply involved in the decision. Instead of a one-sentence order to cut 10%, FP&A can pass on suggestions for initiatives the operational units can take to reach the desired results. This will make it easier for everyone to deliver, and FP&A is at the center of it.

In conclusion, Business Partnering is what will make not only the FP&A team more successful but more importantly also improve the overall company performance. So, what’s stopping you from getting started right away?


This article was first published on the prevero Blog.

Data Driven Planning: The 7 Key FP&A Models

In this article, I want to make the case for data-driven planning by describing the 7 key FP&A models that every organisation needs in order to plan, resource and monitor business performance.

Seven Key FP&A Models

Key planning questions

From a planning and review perspective, there are 7 key things that management needs to know about its business processes, each of which can be assessed in a range of analytical models:

  • How efficient and effective are the organisation’s business processes? (Operational Activity Model)
  • What trends are ‘hidden’ in the detail? (Detailed History Model)
  • What long-range targets should be set given where the market is heading? (Target Setting Model)
  • Where is the organisation heading if it continues with its current business model? (Detailed Forecast Model)
  • What could be done differently to better meet long-range targets and how much would it cost? (Strategy Improvement Model)
  • What choices/risks do management face and what would be the impact on corporate goals? (Scenario / Optimization Model)
  • How much funding is required to implement the plan and where will it come from? (Cash / Funding Model)

The models that answer each of these questions have different content and structures, and are used by different people at different times.  However, none can be omitted or ignored, and all need to operate as a single, data-driven management system.
From a planning systems point of view, this network of models can be visualized as follows:

Schematic overview of the 7 key models required to manage performance

As can be seen, these models are fed with data from internal systems such as the general ledger, and from external sources such as market information, along with data supplied by end-users. Whether these models are separate entities will depend on the size and complexity of the organisation.  Some may be combined, while others may need additional supporting models to enable them to function effectively.  For now, we will consider them to be separate but linked models, which from a user view operate as a single system. 

1. Operational Activity Model

The Operational Activity Model (OAM) is central to organisational planning and, as the name suggests, has a departmental activity focus.  Its purpose is to monitor business processes with a range of measures that allows management to evaluate their efficiency and effectiveness.
In particular, it can be used to:

  • Compare and contrast resources, workload and outputs both now and in the past
  •  Assign budgets for ‘business as usual’, i.e. assuming that there are no strategic initiatives 

The model holds different versions of data, some of which flows from the other planning models.  These versions include:

  • Target – contains the high-level goals set during the strategic planning process
  • Budget – contains the allocation of resources for the current year/period.
  • Forecast – contains the latest ‘best estimate’ of future performance for the next couple of months
  • Actual – contains past results

The model is multi-dimensional in nature and uses ‘attributes’ that allow measures to be assigned to specific business process activities and categorize them as being:

  • An Objective – these define what the organisation is trying to achieve in the long term
  • A Business Process Goal – these measure the success of the organisation’s core business processes and support activities that directly lead to the achievement of objectives
  • An Assumption – these monitor key assumptions made about the prevailing and forecast business environment that relate to the values set for the Business Process Goals
  • A Work measure – these describe the volume (and sometimes the quality) of work performed by a particular department, e.g. the number of mailings sent out by the marketing department as part of its lead generation process
  • An Outcome measure – these measure what an activity should directly achieve, e.g. the number of people that respond to a mailing
  • A Resource measure – these track expenditure that flows out of the organisation
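To make the idea of versions and attributes concrete, here is a minimal sketch of an OAM-style measure store in plain Python. All department names, measure names and figures are invented for illustration; the point is only that tagging each measure with an attribute lets a report filter by work, outcome or resource type across versions:

```python
# Hypothetical OAM fragment: each fact is keyed by department, measure,
# version and period; each measure carries an attribute tag.
ATTRIBUTES = {
    "mailings_sent": "work",       # work measure
    "responses": "outcome",        # outcome measure
    "marketing_spend": "resource", # resource measure
}

facts = [
    # (department, measure, version, period, value)
    ("marketing", "mailings_sent",   "budget", "2017-01", 10000),
    ("marketing", "mailings_sent",   "actual", "2017-01",  9500),
    ("marketing", "responses",       "budget", "2017-01",   300),
    ("marketing", "responses",       "actual", "2017-01",   340),
    ("marketing", "marketing_spend", "budget", "2017-01", 25000),
    ("marketing", "marketing_spend", "actual", "2017-01", 23800),
]

def report(department, version, attribute):
    """Return {measure: value} for one department/version, filtered by attribute."""
    return {
        m: v
        for (d, m, ver, p, v) in facts
        if d == department and ver == version and ATTRIBUTES[m] == attribute
    }

print(report("marketing", "actual", "outcome"))  # {'responses': 340}
```

A real multi-dimensional planning tool would do this with dimensions and member properties rather than Python dictionaries, but the filtering logic a report applies is essentially the same.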

By using these attributes, the model is able to display data by department in relation to activity, outcomes and resources used.  Below is an example of the types of report that can be produced.

The first report shows the corporate objectives and business process goals for a selected period.  This contains a mixture of outcome, work and resource measures for both the budget and actual performance, as well as for last year.  Icon indicators are used to display whether results are getting ‘better’ or ‘worse’ than target. 

Sample report showing Corporate Objectives and how they are supported by business process goals.

The next report shows outcome, work and resource measures for a selected department.  As with the last report, actual performance is contrasted with the budget, while the end of year forecast is compared to the annual target.  From this management can assess the relationship between workload and outcomes to judge whether the focus is on the right activities.

Sample report showing department work, outcome and resource usage

By using attributes as a filter, the above report is able to automatically display the appropriate measures as they apply to the selected department.  

These two reports only scratch the surface of what can be displayed from the OAM.  Interestingly, most organisations already have much of this data, although it is typically shown in separate budgeting and scorecard/dashboard reports, without making a connection between them. When treated in this way, the data can’t be used to model organisational value, and so much of its worth is lost.

2. Cash / Funding Model (CFM)

Closely linked to the OAM is the cash/funding model (CFM) that is used to assess the organisation’s need for financial resources. Some of those resources will be used to support operating expenses, and others will be required for capital investment or strategic initiatives. This model is linked to the operational activity model (OAM) that contains budgets and forecasts in order to predict future cash flows.  It then goes on to help management assess the best source for any cash shortfalls.

Although it is true that most internal financial systems hold data relating to the flow of cash, what they do not allow is for management to model the data from a planning point of view.  For example:

  • to see a revised cash flow based on new supplier credit terms or a change to customer payment profiles; 
  • to consider the cost of funding an increase in production capacity to meet the projected demand for new products; or 
  • to assess the impact on resources by outsourcing a particular function.

Similarly, financial systems do not hold the key assumptions that affect cash flow. For instance, inflation has a major impact on cash resources, yet the underlying data supporting any inflation assumptions is not contained within those systems.

Modelling cash flows and balances requires different sets of information, as shown below:

                               Cash / Funding model content

The dotted line indicates information stored within the CFM, while the bold lines indicate the data flows from other models in the planning framework. Data held can be summarised as follows:

  • Customer and supplier payment terms. The CFM contains details about each major supplier and customer where the cash flow effect is to be calculated. Depending on how payment terms are defined (for example, in weeks or months), the time intervals in this model may be at a shorter increment than that of the OAM.
  • Cash supply. Cash is modelled for budgets and forecasts. The supply side of cash is taken from the OAM where the level of detail allows individual supplier or customer movements to be identified so they can match up with the appropriate customer details.
  • Cash demand. Similarly, the demand side for cash is also taken from the OAM and takes into account all operational expenses, which for a manufacturer would include the supply of raw materials and manufacturing costs. It also includes any cash flows that arise in relation to capital expenditure. As with supply, these outflows are at a level where they can be linked to the payment profiles held within the CFM. 
  • Net funding requirements. Rules within the CFM are used to ‘time-shift’ the imported cash supply and demand data into the time periods in which cash will flow in and out of the organisation’s treasury bank account(s).  To this, other cash consumers and income streams not covered are added. This may include items such as interest payments and dividend accruals. These details are entered directly into the CFM. By subtracting the demand for cash from the supply, management can review the financial resources required.
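The ‘time-shift’ rule described above can be sketched very simply: accrual amounts from the OAM are moved into the period in which cash actually moves, based on each counterparty’s payment terms. The counterparties, terms and amounts below are assumptions for illustration only:

```python
# Hypothetical CFM time-shift: months of credit per counterparty.
payment_terms_months = {"supplier_a": 1, "customer_x": 2}

# (counterparty, accrual period index, amount): +ve = cash in, -ve = cash out
accruals = [
    ("customer_x", 0,  1000),
    ("customer_x", 1,  1200),
    ("supplier_a", 0,  -400),
]

def time_shift(accruals, terms, n_periods):
    """Move each accrual into the period its cash actually flows."""
    cash = [0] * n_periods
    for party, period, amount in accruals:
        due = period + terms[party]
        if due < n_periods:  # flows beyond the horizon are dropped
            cash[due] += amount
    return cash

print(time_shift(accruals, payment_terms_months, 4))  # [0, -400, 1000, 1200]
```

Items entered directly into the CFM, such as interest payments or dividend accruals, would simply be added to the resulting cash profile before netting supply against demand.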

To address any cash shortfall, or to reduce the amount of borrowings, budget and forecast data within the OAM can be reassessed to see which activities could be changed. The model also allows management to gauge the impact of changing customer and supplier payment terms. Assuming this has been done, the model can now be used to assess how any cash shortfall should be financed with the two obvious financing sources being debt and equity.

When reporting actual results, much of the data within the cash flow model will be loaded directly from the underlying transaction systems, so there is little need for modelling other than to produce a comparison between budget and forecast versions.

3.  Detailed History Model (DHM)

The role of the Detailed History Model (DHM) is to back up the Operational Activity Model (OAM) described above when comparing actual results with budget.  The DHM holds a much more granular level of detail than the OAM and supports investigations into past performance. This will almost certainly include income and resources, as these will be made up of transactions held within the general ledger. Some of the workload and outcome measures may also have further detail, which can be used to analyse business process activity.

There is likely to be more than one DHM, with each focusing on a specific area of performance such as revenue or production costs.  However, it is not desirable to create DHMs for every measure, as this could distract management from what is important. Instead, DHMs should be created for those measures whose values play a significant part in either directly resourcing or monitoring a business process.

When defining a DHM, the question should be asked, ‘What information do I need in order to understand the actual results being presented in the OAM?’  The answer determines the level of detail, the analyses that are required, and the type of history model that will meet those needs.

DHMs can be of different model types, including:

Transaction data set. These are tables of data that can be queried and summarised. An example of this type could contain the general ledger transactions behind each account code. These would be loaded from the General Ledger (GL)  on a regular basis and could consist of date, department, account code, supplier, and amount. Capabilities within the DHM would summarise this data by department, month, and account codes that are then fed into the appropriate place within the OAM.

The DHM could then be used to support expense queries. For example, from a variance in the travel budget, a user would be able to drill down into the supporting DHM to see the transactions that made up the actual result. They could then issue another query that extracts transactions for a prior month to see if any expenses had been held over and hence had caused this month’s variance.
As with the other types of DHM, the ease of use and capabilities provided to an end user will depend on the technology solution being used. As a minimum, this type of DHM should support the following examples:

  • Filters.  E.g. list all transactions making up a particular account code.
  • Summaries. E.g. total all transactions for a particular account code and over a selected period.
  • Sorting and ranking. E.g. show the top 10 departments as ranked by travel expenditure.
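The three minimum query types above can be illustrated against an in-memory list of GL transactions. The field layout, suppliers and amounts are invented assumptions, not taken from any real ledger:

```python
from collections import defaultdict

transactions = [
    # (date, department, account, supplier, amount)
    ("2017-03-02", "sales",   "travel", "AirCo",   420.0),
    ("2017-03-09", "sales",   "travel", "RailCo",  110.0),
    ("2017-03-15", "finance", "travel", "AirCo",   260.0),
    ("2017-03-20", "sales",   "office", "PaperCo",  75.0),
]

# Filter: all transactions for one account code
travel = [t for t in transactions if t[2] == "travel"]

# Summary: total per account code
totals = defaultdict(float)
for _, _, account, _, amount in transactions:
    totals[account] += amount

# Sorting and ranking: departments by travel spend, highest first
by_dept = defaultdict(float)
for _, dept, account, _, amount in transactions:
    if account == "travel":
        by_dept[dept] += amount
ranked = sorted(by_dept.items(), key=lambda kv: kv[1], reverse=True)

print(len(travel), totals["travel"], ranked[0])  # 3 790.0 ('sales', 530.0)
```

In practice these queries would run as SQL or as drill-through in the planning tool, but the filter/summarise/rank pattern is the same.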

Multi-dimensional model. This type of DHM allows users to produce cross-tabular analyses. Data is stored and referred to by its business dimensions. Users then have free access to the way in which data is presented, which can incorporate charts, additional calculations, and colour-coded exceptions. Examples of this type of model include sales analyses that could include types of customers, products sold, discounts provided, returns, and shipping costs.
Unlike the transaction data set, a multi-dimensional model is able to provide the following:

  • Multiple views of the data. E.g. show sales revenue by product and customer, customer profitability, returns by product and location.
  • Trends. E.g. calculate a rolling 12-month average and show this by month for the current year versus last year.
  • Exceptions. E.g. show all customers whose year-on-year growth has been negative.
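Two of the capabilities listed above, trends and exceptions, can be sketched in a few lines. The sales figures and customer growth rates are purely illustrative assumptions:

```python
# Hypothetical monthly sales history for one product line.
monthly_sales = [100, 110, 90, 120, 130, 105]

def rolling_avg(series, window):
    """Rolling mean over the trailing `window` points (shorter at the start)."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Exception filter: customers whose year-on-year growth is negative.
growth = {"acme": 0.08, "globex": -0.03, "initech": 0.12}
exceptions = [c for c, g in growth.items() if g < 0]

print(rolling_avg(monthly_sales, 3)[-1], exceptions)
```

A multi-dimensional tool would compute the rolling average as a cube calculation and colour-code the exceptions, but the underlying arithmetic is this simple.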

Unstructured data model. This final type of model provides support for non-numeric data, such as notes, news reports, social media discussions, and competitor product videos. By linking these into the OAM, qualitative information can be provided that can make a substantial difference in the way results are perceived.
For all the DHM types shown above, a security system is required that will automatically filter out data that the user is not allowed to see.

4. Detailed Forecast Model (DFM)

Detailed Forecast Models (DFM), like the DHM, are typically used in conjunction with the OAM to collect forecasts that operational managers believe they will achieve in the short term. This can include a range of measures including workload, outcomes, and resources.

Although the OAM can collect forecasts at a summary level, there are measures that benefit from having this at a more detailed level. For example, revenue for a manufacturer can come from a range of customers and products, each of which has their individual profitability profile. As a result, the product mix can have huge implications on total revenue and costs. Therefore, to predict profitability with any degree of accuracy requires detailed knowledge of what is being sold, its volume, and to whom.

Similarly, sales of high value items or those that relate to a project are often dependent on timing. In these cases the sales process may be long and when the business is won, the resultant impact on costs and revenues in a particular time period can be significant. Without knowledge of the detail, it is easy to jump to the conclusion that an over or underperformance is exceptional rather than expected.  For this reason, collecting information concerning the sales order pipeline and using this to populate the sales forecast not only improves accuracy but also provides insight should any variances occur.

As with DHMs, different measures can have a wide range of supporting detail and so there are likely to be multiple forecast models where each has a focus on a particular measure.  Again, not every measure warrants its own forecast model. Ideally, they are only created for measures where the underlying mix of detailed transactions can have a large impact on results when compared to plan.

Providing detail behind a forecast so that informed assessments on accuracy can be made 

DFMs will typically hold just a forecast version of data, as actual results will be held in the detailed history model. (Remember, we are using the word model in a logical sense; the actual implementation may combine these into one physical model.)  For some measures, data may already exist in another system (for example, in the systems many companies use to collect sales information). If so, the DFM may simply be a place where the latest data is stored, then cleared out and repopulated each period. Alternatively, the DFM may be a system in its own right that is used to hold and track forecasts.

DFMs can hold a range of data, not all of which is numeric.  For example, sales forecasts may include the following fields:

  • Date that the sales forecast was entered
  • Region responsible for the sale and where any revenue will be credited
  • Sales executive involved
  • Company being sold to
  • Sales type (for example, whether the sale is to an existing customer or a new prospect)
  • Product(s) being sold
  • Value of the order
  • Date contract is due to be signed and revenues recognised in the P&L summary
  • Percent chance of the deal going ahead
  •  Any notes to describe the current situation

As with DHMs, data within a DFM can be sorted, summarised and reported; for example, show all sales due in the next three months ranked by the percentage chance of them being signed. This enables management to look in detail at a forecast, form their own opinion as to what could happen, and take remedial action should results fall short of what is expected.

As an option, a sales DFM could apply the per cent chance measure to the value of each sales situation to produce a modified forecast value within the OAM, or the OAM could contain two measures—one holding a value that assumes all sales opportunities will materialise as held, and the other using the per cent chance. This provides a range of values that could be used to assess future performance.
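The two OAM measures just described, an ‘as held’ total that assumes every deal closes and a probability-weighted total, reduce to one line of arithmetic each. The deals and percentages below are invented for illustration:

```python
# Hypothetical sales pipeline held in a DFM.
pipeline = [
    # (company, value, percent_chance)
    ("Deal A", 50000, 0.9),
    ("Deal B", 80000, 0.4),
    ("Deal C", 20000, 0.7),
]

# Measure 1: assumes every opportunity materialises as held.
as_held = sum(value for _, value, _ in pipeline)

# Measure 2: applies each deal's per-cent chance.
weighted = sum(value * chance for _, value, chance in pipeline)

print(as_held, weighted)  # 150000 91000.0
```

The gap between the two numbers (here 59,000) is itself useful: it shows management how much of the forecast rests on uncertain deals.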

It is also worth storing prior forecast versions so that over time, a picture can be built up on the reliability of forecasts. For example, which sales people are able to forecast with an accuracy of 5 per cent three months in advance? Which measures produce the most variability when viewed six months in advance? 

Knowing how trustworthy a forecast is can help determine which measures need regular inspection and the level of caution required when making decisions based on them.  Also, if managers are aware that forecasts are being monitored closely, then they are more likely to pay attention to the values they submit, which in turn are more likely to be trusted.

5. Target Setting Model (TSM)

The Target Setting Model (TSM) is a mathematical model that allows management to simulate different business environments as well as the way in which it conducts its business processes.  Its purpose is to generate target values that will challenge the organisation as to what its performance could be in the future.

The TSM typically relates the outcomes of organisational business processes (for example, products made, new customers acquired, and customers supported) to long-term objectives and resources. In many ways, it is similar to the Operational Activity Model (OAM) described earlier, except its rules are used to generate targets from a range of base data. This is also known as driver-based modelling. In effect, a few dependent variables, such as forecasted unit sales volume, are driven by independent variables (e.g. price, material unit cost), which in turn rest on assumptions about the business environment (e.g. market size and growth).

Measures for these models can be selected by taking long-range targets and determining what drives their value.  The answers to these are then subject to the same question and so on until a base ‘driver’ is encountered, i.e. a measure whose value determines the targets it supports.

More sophisticated models recognise constraints, such as production volumes, the impact of discounts, late delivery penalties, or that more staff will be needed at certain levels of sales. They also recognise that there is nearly always a time lag between the driver and the result it creates. 

It should also be noted that these models only work for those measures that can be directly related to drivers, such as costs and revenues. Other data, such as overheads, will still need to be included to produce a full P&L summary.
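A driver-based calculation of this kind can be sketched as follows. The driver names, figures and rollup rules are assumptions chosen for illustration; note how overheads, which have no driver, are added directly to complete the P&L summary, and how re-running the same rules with a changed driver produces a new scenario:

```python
def pnl(drivers):
    """Roll a set of base drivers up into a summary P&L."""
    units   = drivers["market_size"] * drivers["market_share"]
    revenue = units * drivers["price"]
    cogs    = units * drivers["unit_cost"]
    # Overheads cannot be tied to a driver, so they are included directly.
    profit  = revenue - cogs - drivers["overheads"]
    return {"units": units, "revenue": revenue, "profit": profit}

base = {"market_size": 100000, "market_share": 0.10,
        "price": 25.0, "unit_cost": 15.0, "overheads": 60000}

# Scenario: same rules, one driver changed (a gain in market share).
scenario = dict(base, market_share=0.12)

print(pnl(base)["profit"], pnl(scenario)["profit"])  # 40000.0 60000.0
```

Constraints and time lags, mentioned above, would appear as extra rules (e.g. capping `units` at production capacity, or applying a driver to a later period than the one in which it changes).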

Sample relationship map on what drives sales growth.  Measures on the right-hand side are drivers.

Because of their simplistic nature, driver-based models are not able to take into account unpredictable external influences, such as unexpected market growth or changes in government legislation that impact taxes. This is where versions come into play. To see the impact of uncontrollable influences, the TSM is set up to hold a variety of scenarios where management can re-run the calculations with different driver values that simulate changing assumptions. For example, the model can be run with different sales conversion rates or unit costs, each of which will generate a new version of the P&L summary. These can then be displayed side by side so management can see the impact of each change.

The aim in doing this is to allow a range of options to be evaluated concerning the future. These options will revolve around business drivers, which, if based on business process outcomes, will cause management to rethink how these are conducted and what could be improved. The end result of the TSM is a scenario that management believes will give them the best outcomes for the available resources. These values are then used to set top-down targets within the OAM that can be referenced by individual departments during the budget process.

6. Strategy Improvement Model (SIM)

The Strategy Improvement Model (SIM) is used to evaluate how the current performance of an organisation as forecast in the OAM (‘business as usual’) can be transformed into one that supports the targets set by the TSM. The model allows managers to propose initiatives that can then be assessed, approved or rejected for implementation. Initiatives could involve improvements to current operations, such as replacing old machinery, or something entirely new, such as developing a new range of services or entering new geographic markets. In both cases, initiatives typically represent a set of activities that are not part of current business processes.

From a logical point of view, the SIM consists of two sets of data linked to the OAM where ‘business as usual’ is kept.
Relationship between initiatives and ‘business as usual’

The first part of the model is where managers propose initiatives that are linked to business process goals, departmental structures, and resources. Here, initiatives can be reviewed, assessed, and approved.
When an approved initiative becomes ‘live’, its set of activities and associated data are transferred into the OAM, where it is kept separate from existing operational data. However, the OAM allows the accumulation of resources and other measures to give a total ‘business as usual’ plus ‘strategic initiatives’ position.

This is achieved by defining a new dimension in the OAM for strategy, which is made up of the following members:

  • Total strategy. This is a consolidation member that accumulates ‘business as usual’ data with ‘total initiatives’ data.
    - Business as usual. This member contains all of the data for current business processes, but without applying any strategic initiatives.
    - Total initiatives. This is a consolidation member that contains the accumulation of data from its members; that is, the individual initiatives.
      - Initiative 1. This contains the data for a selected initiative as transferred from the SIM.
      - Initiative 2. This contains the data for a second selected initiative, and so on.
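The consolidation behaviour of this strategy dimension is simple to sketch: ‘business as usual’ and each initiative are stored separately, and the consolidation members just sum their children. All figures are invented for illustration:

```python
# Hypothetical annual cost by strategy-dimension member.
cost = {
    "business_as_usual": 500000,
    "initiative_1": 40000,  # e.g. replacing old machinery
    "initiative_2": 25000,  # e.g. a new service line
}

# Consolidation members sum their children.
total_initiatives = cost["initiative_1"] + cost["initiative_2"]
total_strategy    = cost["business_as_usual"] + total_initiatives

print(total_initiatives, total_strategy)  # 65000 565000
```

Because each initiative remains a separate member, its plan-versus-actual cost and realised benefits can be monitored individually rather than disappearing into the baseline.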

Keeping initiatives separate allows them to be monitored individually so management can keep a watchful eye on their implementation and resource usage versus expected benefits. Too often, initiatives are assumed to be responsible for an improvement in performance when no attempt has ever been made to actually measure whether this was true or whether the costs involved were worthwhile.
Linking the SIM to the OAM helps organisations to:

  • Accurately define ‘business as usual’ (or baseline) performance of the current organisational business processes.
  • Capture plan versus actual cost of strategy implementation and the benefits being realised.
  • Provide a transparent way of assessing priorities in the areas where performance improvement is most needed.
  • Avoid vague claims or estimates for initiatives, as the SIM requires clarity.

As time passes, it should be possible to re-plan, suspend, delete, or select new initiatives as required. Should an initiative be suspended, it can be moved back to the SIM until required at a later date.

7. Scenario / Optimization Model (SOM)

The last model in the planning framework is associated with Risk Management and enables managers to assess the impact of unexpected change on corporate goals.  In the book ‘Best Practices in Planning and Performance Management’, author David Axson comments that “Planning is not about developing a singular view of the future: one of the most valuable elements of any planning activity is the ability to factor in the impact of risk on assumptions, initiatives and targeted results.”  He goes on to say, “A scenario is a story that describes a possible future.  It identifies significant events, the main actors and their motivations, and it conveys how the world functions.”

As with DHMs and DFMs, there may be more than one SOM.  For example, when balancing manufacturing costs with sales forecasts, some organisations employ sophisticated production models that determine which machines should produce which products and, as a result, what materials need to be ordered.

Similarly, when looking at the impact of a rise in commodity prices, it would be beneficial to assess a range of price values and to then compare the cost outcomes that these would generate.  From this management can then decide on how they would respond. For example, they may want to evaluate changing the current business structure or implement a new initiative.

The aim of the SOM is to allow management to ‘play around’ with different scenarios, each documented as to the assumptions made about the future business environment and the changes that could be made in response.  These are then presented back as a side-by-side comparison, from which the values set by the TSM can be evaluated, or from which management can decide what adjustments may be required to the current budget to keep the original plan on track.
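A side-by-side comparison of this kind can be sketched with the commodity-price example mentioned earlier: the same cost rules are run under several price scenarios and the outcomes listed together. Volumes, prices and the cost rule itself are assumptions for illustration:

```python
# Hypothetical production volume and cost rule.
volume = 10000  # units produced

def total_cost(commodity_price, other_cost_per_unit=5.0):
    """Total production cost under a given commodity price assumption."""
    return volume * (commodity_price + other_cost_per_unit)

# Three documented scenarios for the commodity price.
scenarios = {"base": 12.0, "+10%": 13.2, "+25%": 15.0}
comparison = {name: total_cost(price) for name, price in scenarios.items()}

for name, cost in comparison.items():
    print(f"{name}: {cost:,.0f}")
# base: 170,000 / +10%: 182,000 / +25%: 200,000
```

Seeing the three outcomes together is what lets management decide in advance how they would respond, whether by restructuring, a new initiative, or a budget adjustment.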

The 2020 CFO - From "Accounting" to "Accountability"

By Ruchi Gupta, Finance Advocate and Consultant at Oracle India Pvt. Ltd

In my role as a technology advisor for CPM (Corporate Performance Management) solutions over several years, I have met hundreds if not thousands of CFOs, from startups to mid-size companies to large enterprises. Some CFOs take the more traditional approach of preserving the assets of the organization by minimizing risk and getting the books right, while others cannot sit still, come to the office in plaid sleeve shirts, and talk about strategy, vision, market share and so on. There is no right or wrong between the conservative and the aggressive approach, but there are some common mindsets and traits that the new-age CFO has:

Jack of all Trades, Master of Some

The modern CFO takes care of everything and takes on issues from all departments, be it Sales, IT, HR or Finance. In an August 2016 Oracle survey of finance leaders in Europe, the Middle East, and Africa, 52% of respondents said their role is now predominantly focused on advising the business on how it can achieve growth goals, and 56% said they’re working with lines of business more closely than ever before. They strive to help their customers make money, not just care for their own bottom line.

The Tech Savvy and “SAAS”y CFO

The forward-looking CFO now wants to take ownership of the IT department. He is an expert in cost management and organizational efficiency, and fluent in cybersecurity, fraud prevention, business continuity planning and digitization, because he “gets” the role of technology in business advantage. Many new roles are techno-functional, with “IT-Finance” the new department being created. The modern CFO believes in investing in integrated decision support systems to “peek around the corners” and “see beyond the horizon”.
The 2020 CFOs make the technology decisions today and do not simply watch and wait. “Playing it safe” is no longer equivalent to “playing it smart”; the status quo itself is a risk. Visionary CFOs agree that the future of finance is in the cloud. They believe that their teams deserve the flexibility, configurability, and world-class security of running the business in the cloud. They don’t ask questions about security, local country data centers or data confidentiality; they probe us about speed, agility, interoperability, innovation and cloud analytics. They understand that cloud is more than just a delivery method or a cost saver. The tech-savvy CFO knows that cloud unleashes new levels of automation, collaboration and visibility.

Distributed Finance Function

The modern CFO does not believe in a centralized finance function. He places highly capable finance managers with the functional teams. We see that CFOs today are positioning Finance as a Centre of Excellence, emulating operational best practices.

High Performance Team across Finance and Operations

The modern CFO builds a terrific team who are not only capable accountants but great communicators, with superb analytical skills and superior EQ. They are go-getters by nature and like to mix with other teams rather than cubbyholing themselves churning out spreadsheets. He makes it critical for his team to understand business and sales, what matters most for the organization and how to adapt to changing strategy.

Delivering faster and smarter outcomes requires the skill sets of a vast range of talented people from across the organization. The fact is, the business of the future is not well served by outdated silos and independent specialization. Beyond traditional accounting or transactional environments that are finance-focused, modern CFOs are thinking about how to form cross-functional teams that link sales, distribution, marketing, finance, customer service, and other critical areas together in a very flexible and adaptable way.

Continuous and inclusive Financial Planning

Reality is chaotic. Annual planning is ordered and logical. The two don’t get along. Disruptive change is now the norm, and the modern CFO sees planning as a continuous, ever-evolving function driven by ever-emerging information and market events such as the launch of a new product, a competitor’s marketing campaign or pricing cuts. They don’t believe in the mundane Annual Operating Plan. In fact, I met the CFO of an FMCG MNC who was not happy with a daily forecast and wanted a “minutes old” forecast. The 2020 CFO doesn’t plan; he forecasts continuously and constantly, steering his organization based on current market conditions.
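The mechanics of continuous forecasting can be sketched simply: keep the forecast horizon fixed and re-forecast whenever new information arrives, rather than planning once a year. This is a toy sketch under assumed figures; the naive growth projection stands in for whatever forecasting model an organization actually uses.

```python
from collections import deque

HORIZON = 12  # a fixed 12-month rolling horizon (assumed)

def reforecast(recent_actuals, growth=0.01):
    """Naive illustrative forecast: project the latest actual forward
    at an assumed monthly growth rate, always HORIZON periods ahead."""
    last = recent_actuals[-1]
    return [round(last * (1 + growth) ** m, 2) for m in range(1, HORIZON + 1)]

actuals = deque([100.0], maxlen=24)   # history of monthly actuals (assumed)
forecast = reforecast(actuals)

# A market event arrives (e.g. a competitor price cut hits sales): record
# the new actual and immediately roll the 12-month window forward, instead
# of waiting for the next annual planning cycle.
actuals.append(95.0)
forecast = reforecast(actuals)
print(len(forecast), forecast[0])
```

The key property is that the horizon never shrinks toward a fiscal year-end: every re-forecast looks the same fixed distance ahead from the latest actual.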

Eliminates Redundant Processes

Traditionally, CFOs outsourced payroll, tax filings, internal audit and collections. As CFOs seek economies of scale and build human capital with varied skill sets, they are considering a wider array of potential outsourcing arrangements such as core accounting, financial reporting, accounts receivable and accounts payable. Their “ask” from their team is analysis of the numbers, the “hows”, “whys” and “what better”, instead of the “what” and “how much”.

“Millennial Ready”

Per a PricewaterhouseCoopers report, millennials already form 25 percent of the workforce in the US. By 2020, they will account for 50 percent of the global workforce. The 2020 CFO believes in millennials and sees them as future leaders.

For Oracle CEO Mark Hurd, this demographic change presents a huge opportunity. Hurd said that the new direction is to hire recent college graduates rather than recruiting employees that other companies don't even try to retain. It is motivational for Oracle to have these young people come into the company. “There is so much excitement in our company, new talent, new skills, and a different view of the world. And I think it's very good for us. It changes everything,” said Hurd at a conference.

The 2020 CFO sees this as a huge opportunity; they want Millennials to come in and change the way they think. Many modern CFOs communicate with their teams in WhatsApp groups. They believe in personal branding and are getting more active on LinkedIn and other professional networks, driving thought leadership and encouraging an “outside in” approach. They invest in smart tools that employees can use on their smartphones from just about anywhere on the planet.

In conclusion, we see that CFOs are now moving from accounting to accountability, and not just for financial performance, but also for customer centricity, cutting-edge technology, workforce excellence and fostering a high-performance culture. They are the numbers wizard, the generalist, the performance leader, and the growth champion. They use their talents, experience, and insights to guide major operational and strategic decisions within the company, even playing a role as the external face of the organization.

Drawing the Line Between Strategic and Tactical Planning

By Michael J. Huthwaite, Founder and CEO of FinanceSeer LLC 

Driver-based planning and rolling forecasts have been two of the hottest subjects in FP&A in recent years. Fueled by the need for directional insight and agility, most FP&A teams are now considering these modeling techniques to be must-haves for their organization.

Despite the fact that these concepts represent obvious choices for better predictive planning, many companies are still finding it difficult to implement them successfully. This is troubling, especially as the next frontier of prescriptive (AI) planning is already upon us. Falling too far behind could be a real problem.

So why are so many companies still struggling to master driver-based planning and rolling forecasts?

Dialing in the right level of calibration

A common problem many organizations encounter when implementing driver-based planning and rolling forecasts is establishing the right level of calibration.  Organizations often want enough sophistication to take action, yet at the same time they don’t want to take on too much complexity for fear it will reduce their ability to remain agile or run rapid what-if scenarios.
As a result, FP&A teams often fall into Goldilocks thinking, where the model shouldn’t be too complex yet not too high-level either.  Unfortunately, this can often be the worst place to be, as it is not strategic enough to grow the business yet not tactical enough to manage it moving forward.

Standing in the middle of the court

The best way for organizations to avoid standing in the middle of the court (a no-no in tennis) is by clearly drawing the line between Strategic vs Tactical planning.  
The better we are able to detect this line, the better we are able to avoid straddling it.  Of course, this doesn’t mean that we should focus on doing one approach over the other.  Nor does this mean that we can forgo strong integration between Strategic and Tactical Planning.  Rather, what is critical to realize is that trying to use a one-size-fits-all solution for two distinct planning processes is invariably going to limit your capability and efficiency over time.    
By segmenting out Strategic and Tactical planning, organizations can focus on addressing the people, processes and technology that are uniquely different between Strategic and Tactical planning, regardless of the fact that both approaches rely on driver-based planning and rolling forecast constructs. 

Two-model approach is the norm

For organizations thinking about implementing a new driver-based/rolling forecast solution, it’s important to realize that most companies continue to maintain separate tactical and strategic models regardless of the technology available to them.  Trying to move towards a single-model approach is simply not the norm, despite suggestions to the contrary from experts and software vendors.
Some software vendors will go as far as to suggest that organizations can add Strategic planning later, once they have successfully implemented Tactical planning.  However, in my experience these organizations rarely realize any meaningful benefit down the road.

The real differences between strategic and tactical planning

The number of time periods (duration) and the amount of line item detail are two obvious visible differences between strategic and tactical models.  However, it’s important to realize that these are merely the visible effects of two completely different processes.
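The two visible differences just mentioned, duration and line-item detail, can be made concrete as two model configurations. The shape of each model below (a long, coarse strategic model versus a shorter, granular tactical one) follows the article; the specific numbers are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class ModelConfig:
    horizon_periods: int   # how many periods the rolling window covers
    period_length: str     # granularity of each period
    line_items: int        # rough count of planned line items (assumed)

# Strategic model: long reach, few high-level drivers (figures assumed).
strategic = ModelConfig(horizon_periods=10, period_length="year", line_items=30)

# Tactical model: shorter horizon, far more operational detail (figures assumed).
tactical = ModelConfig(horizon_periods=18, period_length="month", line_items=500)

# The strategic model trades detail for reach; the tactical model the reverse.
print(strategic)
print(tactical)
```

Both are driver-based rolling models, which is exactly why the deeper process differences in the five signs below, rather than these surface settings, are what should draw the line.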
Here are five concrete signs to look out for when drawing the line between Strategic and Tactical planning.

1 – When you’re looking to grow value, not find efficiency

True strategy is all about exploiting new markets and opportunities by breaking down boundaries and creating barriers for others to follow.  This is what shareholder value creation is all about: establishing higher cash flow returns than your competitors and maintaining that advantage over longer periods of time.
On the other hand, Tactical planning is all about hitting or exceeding targets based on the limited resources available to you in the most cost-efficient way possible.  It’s about managing issues as they come up and executing on the strategy in an efficient manner.

2 – When you need to evaluate multiple outcomes simultaneously

Strategy is not about planning for a single outcome, rather it’s about opening the discussion up to understanding multiple opportunities and tradeoffs and then using that data to influence peers based on discussion and debate.  
On the other hand, Tactical planning may require multiple reforecasts over time, but the objective should remain singular in nature (hit the target with a high degree of certainty).

3 - When the underlying strategy or approach is highly unique

When you’re forced to copy another company’s strategy under the same constraints with no meaningful differentiators, then your only hope of winning is via efficiency.
This is by definition another form of tactical planning.  Rather than using your own strategic plan, you’re essentially borrowing the strategic plan of your competitor.  In this case, you don’t need to think strategically at all; you simply need to focus on execution, and there is a big difference between the two.  Unfortunately, too many companies rely on operational efficiencies to run their business.

4 - When the workflow rules are not pre-defined

Strategy is about influencing others based on your own perspectives and beliefs.  It’s about building consensus across peers, which requires free-flowing discussion and debate around alternative ideas and scenarios.  In order for this process to occur, users must be free to establish their own workflow rules in an ad-hoc and highly collaborative way.  The workflow pattern in Strategic planning often resembles a network map (pattern) where distinct peer teams interact with other peer teams in order to gather critical insight and achieve buy-in in order to establish a unified Strategic plan.
Tactical planning tends to leverage pre-defined workflow rules that often mirror hierarchical org charts within the organization.   Furthermore, Tactical planning is heavily reliant on the submission/approval process as part of an overall highly orchestrated and rigid communication process. 

5 – When you’re dealing with uncertainty, not probability

Strategic planning is full of uncertainty, yet literally anything is possible with the right level of investment, effort and time. This is quite different from “probability”, where resource limitations reduce the number of potential outcomes, enabling modelers to think in terms of either discrete or continuous distributions.
What makes planning under uncertainty so unique is that it is best evaluated based on an effective discussion and debate across a series of alternative outcome combinations.  This requires not only having the ability to quickly reconfigure alternative scenarios, but it also requires modelers to invite peers and subject matter experts to suggest alternatives and/or provide critical feedback to weed out risky or less advantageous options.  This is the cornerstone of a highly functioning strategic planning process.  
On the other hand, Tactical planning benefits greatly from assigning probabilities to short-term targets.  This helps manage short-term risk and enables companies to apply the proper short-term resources most efficiently.      


For organizations looking to get the most out of their driver-based planning and rolling forecasting initiatives, it is critical to realize that these techniques apply in both Strategic and Tactical planning.  Yet the people, processes and technology applied to these two domains are quite different.  Developing a one-size-fits-all answer to these distinct areas is where many FP&A organizations go wrong.
Creating an environment that incorporates both strategic and tactical planning as separate, yet highly integrated processes is how organizations get the most out of their financial planning efforts.


This article was first published on the prevero Blog.

Top Three Recommendations for Mastering FP&A Dashboards

By James Myers, Global Finance Executive and Finance Transformation Consultant

Dashboards have become a powerful tool for FP&A to share insight and gain respect. When designed correctly, they deliver a clear message on what’s working and what’s not, and the actions to take to fix the issue. Technology now enables us to create dashboards in minutes, allowing us to share information in ways we never could before; you must leverage this technology! The big question has moved from “How do we create dashboards?” to “How do we harness this powerful tool to drive business behaviour?” Do it right and you win the day for FP&A!

Why are dashboards so important?

  • They make complex things understandable
  • They are an excellent communication tool for upper management
  • They help with business alignment, driving behaviour in the desired direction
  • They can be interpreted at a glance – executives can consume volumes of insight quickly and don’t have to page through endless reports and PowerPoint presentations (which they never do)
  • They make data accessible to everyone (access dependent) in a user-friendly, on-demand way

Without a focused strategy, most dashboards have:

  • Too many metrics, making it difficult to decide what’s important
  • Too many messages in each metric, making it difficult to interpret
  • Conflicting metrics, which result in internal conflict
  • “Vanity metrics” that give “the rosiest picture possible” but do not accurately reflect the key drivers of the business
  • A poor user experience, making it hard to access the data
  • Low adoption – no one uses the dashboards, as they provide little or no value (my personal favourite example)

From my experience, the top three recommendations for improving the quality of your dashboards are:

1.      Simplify

2.      Measure adoption

3.      Actions


1. Simplify

“The ability to simplify means to eliminate the unnecessary so that the necessary may speak.” - Hans Hofmann
Usually there is so much noise created by dashboards that it’s impossible to figure out what’s important. Focusing on fewer metrics rather than more is the key to a successful dashboard. Reduce the number of metrics to about three or four to have an impact.

Determining the most effective metrics requires a deeper understanding of what drives the most value in an organization. This can be done by talking to key stakeholders: understand what decisions they need to make and the information they need. For a more encompassing solution, you will need to start with the strategy and work backwards: understanding the actions derived from the strategy, the drivers of these actions and, ultimately, the measures of success that will drive these actions.

2. Adoption 

"We must learn what customers really want, not what they say they want or what we think they should want." – Eric Ries
Adoption is the leading indicator of the value you are generating. Frequency and time of use are key to determining success. The biggest challenge is making sure that your dashboard solves a problem or adds value for a stakeholder. Using design thinking and empathizing with your users is the first step in improving the quality of your dashboards.

Take time out to put yourself in their shoes and understand the real business problems you are trying to solve. Once the dashboard has been developed, the next step is to continue the feedback loop and run an iterative improvement cycle. Keep measuring your adoption metrics and, for any unexpected changes, do a deeper dive – speak to stakeholders to determine why their frequency of use has increased or decreased after a new improvement.
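The adoption check described above can be automated in a few lines. This is a minimal sketch under assumed data: weekly view counts per user are tracked, and any user whose latest week deviates sharply from their prior average is flagged for a follow-up conversation. The 50% threshold and the usage figures are illustrative assumptions.

```python
from statistics import mean

def flag_adoption_changes(weekly_views, threshold=0.5):
    """Flag users whose latest week deviates more than `threshold`
    (as a fraction) from their prior average - a cue to go ask why."""
    flagged = []
    for user, views in weekly_views.items():
        prior, latest = views[:-1], views[-1]
        avg = mean(prior)
        if avg and abs(latest - avg) / avg > threshold:
            flagged.append(user)
    return flagged

# Hypothetical weekly dashboard view counts per stakeholder.
usage = {
    "cfo": [5, 6, 5, 1],          # sharp drop: worth a conversation
    "fp&a_lead": [10, 9, 11, 10], # steady use: no action needed
}
print(flag_adoption_changes(usage))  # → ['cfo']
```

In practice the threshold and cadence would be tuned per audience; the point is that the flag triggers a conversation with the stakeholder, not an automated conclusion.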

3. Actions

“Action is the foundational key to all success” – Pablo Picasso
Metrics that help drive actions are worth more than those "Vanity metrics" that just tell you where you are. Actionable metrics reflect the key drivers of the business and lead to informed business decisions and subsequent action. Actionable metrics need to be driver based and the more you understand the drivers of your business, the easier these will be to create. At all costs, avoid the trap of “Vanity metrics”. IT’s tempting to use measurements that give “the rosiest picture possible” (Eric Ries) but they do not accurately reflect the key drivers of a business. Things like the vaulted Revenue Performance – tell us where we are, but provides no clear guidelines on how to increase revenue or mitigate the risk of a decline!