20 Key Financial Modelling Definitions

By Rob Trippe, MBA, Financial Modelling Veteran

Financial model definitions can be tricky. Financial models are often dependent upon numerous functional areas and academic disciplines, such as accounting, finance and statistics. These disciplines may have differing uses of the same terminology. Model risk management has also drawn on numerous disciplines in its evolution. The result can be communicating at cross purposes.

No one academic discipline can lay claim to how a financial model’s terminology is defined. A financial model’s output is often either a corporate finance concept or an accounting concept, while a driving calculation process may be statistical. Therefore, terminology should be defined among developers, owners and users as early as possible. A data dictionary may be a required element of financial models which fall under regulatory scrutiny. Consider adding a dictionary of model terms as well. This effort can be spear-headed by the data manager. When financial model output will be compared to other figures, the definitions of both, numerical and non-numerical, should be identical. Non-narrative depiction of a definition can be extremely useful. Show builds and flows when possible. Here are twenty financial modelling definitions worth memorizing and employing:

1 Back-test 
Use of historical data to test the validity of model output.

2 Benchmark 
The comparison of model output to the output of an outside and independent source.

3 Emerging Risk
Risk that was not foreseeable at development and arises later in time or during model execution.

4 FAST
Set of rules for financial model design: Flexible, Appropriate, Structured and Transparent.

5 Impact Analysis
Assessment of the consequences of a change or risk event on a model's cost, timing, scope and quality.

6 In-Sample
Historical data used in model development.

7 Model
Quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates. Models provide an explanatory framework for real world observations.

8 Out-of-Sample
Historical data not used in model development.

9 Outcomes Analysis
The comparison of model output to actual outcomes. Back-testing is one example. 

10 Parameter
Numerical characteristic of a set or population of numbers.

11 Re-calibration
Adjustment of data and/or assumptions.

12 Residual Risk
Remaining risk after a risk mitigation action has been performed.

13 Risk Appetite
The largest degree of uncertainty an organization is willing to accept.

14 Scenario
Multiple changes to inputs to reflect a given set of circumstances.

15 Secondary Risk
Risk arising from a risk response.

16 Sensitivity Analysis
The change in model output relative to a change in a single input.

17 Stress Test
Assessment of model stability by employing hypothetical data inputs or drivers.

18 Threshold
A level of uncertainty or impact beyond which attention or action is required.

19 Tolerance
Degree of deviation within which a model still functions properly.

20 Validation
A set of processes and activities intended to verify that a model performs as intended and as expected.

Why Your Business Planning Process Needs More Edge

By Michael Huthwaite, Founder and CEO at FinanceSeer LLC


The long-standing narrative of Enterprise Performance Management (EPM/CPM) has been squarely focused on the effort to steer organizations away from spreadsheets by embracing Enterprise Performance Management suites (i.e. platforms).  

Yet, the dirty secret that is rarely spoken about is that most organizations continue to remain heavily reliant on spreadsheets even after spending huge sums of money on EPM solutions.  

So, why are so many organizations still deeply dependent on spreadsheets?  The answer to this question lies at the Edge.  

Understanding the Relationship Between Core and Edge

Any network, whether it be a social network or a computer network, has activity occurring at both a centralized (core) and a decentralized (edge) level.  This is because data lives in the Core, but it is often conceived at the Edge.

Information that is managed at the Core level tends to represent data that can be highly leveraged (i.e. reused or accessed by others).  Examples of Core data might be the latest forecast for an established product line or business unit.  

Conversely, Edge level data tends to represent high-growth opportunities, which are often the lifeblood of your organization's future.  An example of Edge level data might be the evaluation of a new investment/acquisition opportunity or the risk assessment of a big swing in the market (a proverbial black swan).

Despite the distinct differences between Core and Edge, you’ll never eliminate the symbiotic relationship that they share.  Therefore, in order to optimize the overall network, it’s best that organizations take a holistic view by addressing the need to harmonize both the Core and Edge.  

Real World Examples of Core and Edge

The concept of “core and edge” has been well documented for decades.  The notion exists in many technical areas, including, but not limited to Enterprise Performance Management.  

Recent proliferations of “core and edge” harmonization can be seen with the rise of Mobile, the Internet of Things (IoT) and Wearable technology.  In all three cases, it is the people and devices that operate at the Edge that create incremental value, while the centralized or Core infrastructure acts to leverage or unify an established ecosystem.  In my opinion, this helps explain why, as individuals, we are often hopelessly addicted to our devices and mobile apps.  Our brains pick up on the perceived incremental value that we create at the Edge level, while the Core level helps organize and maintain our lives over time through robust centralized infrastructure.

What are the Hallmarks of Core and Edge?

In general, the tell-tale signs of Core and Edge look like this:


At the Core level, technologies are often marketed as “platforms” and require a great deal of IT involvement and specialized Admin to operate.  Furthermore, these solutions tend to have a higher number of users, remain relatively static in their configuration and, to a large degree, focus on database storage (often associated with a “single version of the truth”).

Conversely, Edge level applications are just that, applications.  They are not platforms per se, but self-service solutions that focus on supporting small teams of individuals working together to drive new ideas. Edge level applications are typically installed locally (much like a smartphone app or a desktop application).  These applications are not web portals that enable users to enter and submit data, but rather highly focused solutions that enable end-users to evaluate or try alternative configurations in order to maximize value creation.

In addition to having full control of these Edge level applications, end-users must also have the ability to freely share their ideas and findings among their small team of peers in order to build consensus.  The speed and fluidity at which these business processes occur mean that IT and System Admins aren’t directly involved, but their ability to indirectly enforce governance must remain in place, if the ecosystem is to thrive.  

Edge level data differs from Core level data because the business processes occurring at the edge simply don’t require data to be centrally stored or accessed at such an early stage.   Rather, the speed and flexibility of the data is what enables end-users to think more creatively and begin the process of building credibility around that data.

Integration is quite literally the silent partner in the Core and Edge Paradigm.  Integration is often, but not always, managed by IT or System Admins and often requires stronger technical skills that are not necessary for end users.  A great deal of intelligence and validation must go into establishing a strong integration approach.  Integration includes data mapping, but should also include a broader range of communication between the Core and Edge.  
As the proliferation of Core and Edge increases, I think we’ll start to see more integration-level concepts flourish, such as Artificial Intelligence (AI) and Predictive Analytics, where information captured at the Core is suggested to devices at the Edge, which in turn enables end-users to take action.  For example, when Google Maps tells you that there is traffic on your current route, that is a great example of tight integration between Core and Edge.  Slow-moving traffic is captured on mobile devices and sent to Google’s Core servers and then passed back to the Edge devices of other users who are heading along the same route. These motorists can then elect to take a different route or ignore the suggestion altogether.

How Does Edge Work in Business Planning and Analytics?

The Core and Edge Paradigm is prevalent in all areas of business planning.


Edge Analytics
This is the one area of Business Planning and Analytics that has a strong record of embracing the Core and Edge Paradigm.  Self-service solutions that capture data and empower end-users to visualize it by running their own unique queries, applying their own custom filters and formatting in a way that uncovers the data so it can be fully discussed and debated are quite prevalent in the market today.

As a result, we should look to these solutions as examples of how businesses can begin to adopt better Edge level applications for business planning.  Sure, the technology won’t be the same across all areas of business planning, but there is a great deal that business planning can learn from Edge Analytics to push the needle forward.

Financial Planning at the Edge
I think it’s fair to say that everyone has a natural disdain for Budgeting.  It takes up too much time and is outdated the minute it is published.  So maybe we should not give Budgeting all the credit in the world, but does that mean we should completely abolish it?

This debate reminds me of my days as a student when I learned that the Income Statement is based on accrual accounting, not cash.  I was shocked, but does that mean that we should abolish the Income Statement?   Probably not, but we should recognize it for what it is and what it isn’t.

What is important is the need to set a unified plan (Core level functionality) while at the same time empowering individuals and small teams to recognize whether the ongoing market turbulence is significant enough to warrant a reforecast or not (Edge level functionality).  

Having the proper discussion and debate at the Edge level can help to regroup the individuals and small teams responsible for sounding the reforecasting alarm.  This saves the organization a ton of effort by enabling unaffected parts of the business to continue on without getting caught up in lengthy and continuous planning exercises.  

Some people may advocate for a Rolling Forecast approach that initially appears to eliminate the Core and Edge Paradigm.  However, based on my experience, I would argue that it probably doesn’t.  Rather this middle-ground approach is probably a significant reason why so many Rolling Forecast initiatives don’t achieve the level of success they were hoping for.

Operational Planning at the Edge
Spreadsheets are currently responsible for a lot of the analysis that end-users perform in the realm of Operational Planning. To be fair, a good deal of these spreadsheets should probably be replaced with Core level technologies. This will give organizations more “Connected Planning” capabilities that allow for better horizontal planning. But that will still never eliminate the need for Operational Planning at the Edge.

At the “task” level, for example, there will always be a need for individuals and small teams to tie their daily activities with their personal and team operational targets. This could be achieved with greater integration to numerous Edge level applications that would be managed and updated by end-users (not System Admins).

Strategic Planning at the Edge
Strategic Planning is an exercise that is largely performed at the Edge.  This is why most companies continue to rely so heavily on spreadsheets to perform Strategic Planning regardless of their investment in EPM.

The need to evaluate competing strategies (concurrent) or combine alternative scenario variations together (inclusive) is not the sweet spot of EPM.  These days, every EPM suite is quick to refer to their solutions as “models”, but does that mean that the end-users are experiencing greater “modeling” capabilities?  No.  Modeling is an Edge level activity that requires rapid reconfiguration of models in order to screen for potential business opportunities.   

Once the strategic plan is set, the need to execute strategy (using Strategy Management tools and methodologies) switches the conversation back to a Core level functionality.  

What are the Challenges of the Core and Edge Paradigm?


If the Core and Edge Paradigm were so simple to achieve, then I think the EPM market would already contain a wide variety of Core and Edge solutions to choose from (which it clearly does not).  So, what is holding us back?

I think the answer to this question cannot be pinned on one single reason, but rather it’s a shared problem that needs to be addressed across business users, IT and solution vendors.

Business Users
Business users waste an enormous amount of time trying to deal with Edge related business processes using inadequate tools.  Sure, spreadsheets, presentation tools and other personal productivity applications can enable business users to get the job done, but at what cost?  Business users need to recognize the advantage of Edge level solutions and start creating a greater demand for these alternatives.  
Like spreadsheets, Edge level solutions put end-users in control of their business processes, but they can do it in a way that provides more integrity and pre-built intelligence that will save end-users an immense amount of time and effort.  

IT
IT has numerous considerations to take into account when dealing with the purchase of new software technologies.  ROI, security and ongoing support are just a few of the most common considerations.

As a result, it’s natural for IT to want to limit the number of solutions. This makes perfect sense when dealing with “point solutions”.  Point Solutions are software applications that address specific problems without trying to address the concerns of any related or adjacent business processes. This creates disjointed processes that often increase both total cost of ownership and end-user inefficiencies.  

However, it’s important to realize that Edge level solutions are not point solutions. Rather, they are solutions that operate on different architectural levels and as a result, actually complement each other by furthering capabilities that cannot be optimally achieved by a single level architecture.  

Therefore, it’s important for IT to consider a multi-level architecture (Core and Edge) as part of their formal IT policy (sometimes referred to as Line of Business applications).  Failing to do so will only limit creativity and increase reliance on generic personal productivity tools.

Software Vendors
If there was going to be any blame placed on a lack of Edge level solutions, I would have to put the onus squarely on software vendors. Their natural tendency to focus on database driven technology and old Enterprise Sales models is skewing their view of the Core and Edge Paradigm. As a result, they are too heavily focused on defending their existing Core level solutions rather than embracing and actively developing Edge solutions and opening secure APIs to other 3rd party Edge applications.    

Traditional on-premises vendors often characterize Edge level requirements as ad hoc activities that can be addressed using Office add-ins.  However, Edge level applications are not one-time business processes.
Cloud-based solutions are often quick to point out their intuitive UI, making it easier for users to perform logic changes, yet their centralized architecture still addresses Core functionality.  Just because there is less of a barrier to making logic changes doesn’t mean that all end users are free to do so.

What Should Be Done to Address the Edge?

As I pointed out in the beginning, the result of not addressing Edge level architecture means that spreadsheets continue to have a strong foothold in Business Planning and Analytics.  
Yet, reliance on spreadsheets is not the optimal solution for end-users, IT nor software vendors.

End-users will still suffer from inefficient and error-prone spreadsheets, causing a tremendous amount of wasted time. 

IT will continue to lack the security and governance that spreadsheets cannot provide.

Software vendors will never fully realize the unanimous adoption of EPM/CPM that they have been striving for over the past 20+ years.

This problem will only truly be solved when end-users, IT and software vendors come together and establish a balanced response to dealing with both the Core and the Edge.  

Like most initiatives, it all starts with awareness. End-users need to do a better job articulating their pain points. IT needs to clearly establish their policy on “core and edge”, and software vendors need to challenge themselves to develop safe and secure technology that clearly addresses both the Core and Edge and stop relying on slick marketing messages to pull the wool over the eyes of their customers.
Until then, I’m afraid the spreadsheet will remain an integral and error-prone element of our Business Planning process.

Presenting FP&A Strategy’s Data Maturity Model to Accelerate Your Finance Transformation

By James Myers, Global Finance Executive and Finance Transformation Consultant

It’s the middle of the year and it’s time to take charge of your “data destiny” before the budgeting and planning season starts. What exactly is your data destiny? No, it’s not the new Netrunner card game where the objective is to control all the data in the Universe.

It’s understanding how you can leverage all your internal and external financial and operational data to give you a strategic advantage. Relevant data is the key to understanding it all, but too often it is dispersed throughout your organization.

In 2017, 39% of CFOs globally are pursuing a significant upgrade in their company’s information, data, and communications systems (1). This trend is real and I’m already seeing companies moving swiftly to take advantage of new BI trends and hire more data-centric talent into their Finance teams. More than a “trend”, it’s a “shift”.

Why is this Tectonic Shift in Data happening Now?

  • Data quality issues – Multiple data sources require manual aggregation and consolidation, which is prone to error, resulting in debate about the accuracy of the numbers rather than the insight
  • Timing to insight – Low frequency and the length of time it takes to receive the insight limit the ability of stakeholders to take early corrective actions
  • Data is offline – adjustments and data manipulation are done in offline Excel spreadsheets, usually by one person and no one else knows how it’s done
  • Data hoarders – Data is power and if no one else has it, it makes one important


Your Data Maturity Model

FP&A’s Data Maturity Model is a step-by-step process to help you move along the value/maturity curve until you achieve your ultimate goal of game-changing data analytics.

Step 1: Aggregation = Single source

Having all your data in one place is the first major step toward data nirvana. This often requires a lot of IT heavy lifting, but it doesn’t have to. There is new technology out there that will help, but my advice is to start small. Find the most critical data and focus on that – run through all the steps before onboarding the next set of data. This will enable you to prove out the solution and demonstrate the value quickly – once you start adding value it will be easier to find more investment.

Step 2: Accuracy = Strong executive support

This step is all about agreeing on terminology and rules. It is heavily reliant on all the stakeholders agreeing to a single naming convention, methodology and taxonomy – it’s all about standardization. In mature, large companies this can be one of the hardest tasks, as it often requires change management. To ensure this part succeeds you will need strong executive support. You will also need to validate your data and bring the offline logic into the system. Accuracy is key, as there is nothing worse than spending more time debating the numbers than their meaning. You will need everyone bought into the idea that your single source of data is the only source, otherwise you will fail in the next steps.

Step 3: Access = Mastering dashboards

This is typically the easiest step because of the development of powerful BI tools, but unless there is strong governance it can easily get out of hand (see Mastering Dashboards to Build Value). Good dashboards are designed to drive strategic initiatives into the company and to ensure users can identify issues quickly and know what corrective actions to take. Try to avoid dashboard designs that result in confusion, groups competing against each other or, more commonly, dashboards simply being ignored. Security is key at this stage – making sure that the right users have access to the right data at the right time.

Step 4: Actionable = Think!

Building dashboards that keep us informed is interesting but doesn’t add a whole lot of value. Take revenue for example – this is a typical metric you will see on any dashboard, e.g. how much have we booked to date, how are we performing against targets or year-over-year growth. What we really want is to drive actions into the organization. The best way to do that is to focus on the underlying drivers of revenue, e.g. what is your pipeline conversion rate, what pipeline coverage do you have or what is the impact of your marketing promotions.
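
As a hedged illustration of such drivers, here is a short Python sketch that computes pipeline coverage and pipeline conversion rate from invented figures; the function names and sample numbers are assumptions for illustration, not a prescribed metric set.

    # Illustrative driver-based revenue metrics (all figures hypothetical).
    # Coverage and conversion are the kind of underlying drivers that make a
    # revenue dashboard actionable rather than merely informative.

    def pipeline_coverage(open_pipeline: float, revenue_target: float) -> float:
        """Open pipeline divided by the remaining revenue target."""
        return open_pipeline / revenue_target

    def conversion_rate(closed_won: float, closed_total: float) -> float:
        """Share of closed pipeline that was actually won."""
        return closed_won / closed_total

    # Hypothetical quarter: $12m open pipeline against a $4m target,
    # with $3m won out of $9m of pipeline closed so far.
    coverage = pipeline_coverage(12_000_000, 4_000_000)   # 3.0x coverage
    rate = conversion_rate(3_000_000, 9_000_000)          # ~33% conversion
    implied_revenue = 12_000_000 * rate                   # ~$4m if the rate holds

    print(f"Coverage {coverage:.1f}x, conversion {rate:.0%}, implied revenue {implied_revenue:,.0f}")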

Step 5: Accountability = Support

The key to driving insight is starting with the business problem. Too often I see people bombarding their stakeholders with data, normally in the form of 70-page PowerPoint decks with detailed tables and 11pt type. That only alienates. Take a step back and take the time to understand your stakeholders and what’s really important to them. Spend time understanding the drivers behind this and create dashboards that give your stakeholders key insights that help them identify the outcome of their actions. You’ll know this support is working when they come back and ask for more.


This is the time for data leadership. Discerning data and communicating it in a meaningful way to the organization is the most important job you will ever do! Use FP&A’s Data Maturity Model to accelerate your FP&A transformation. The time to get ahead of this is right now.



1. THE CFO ALLIANCE - 2017 CFO SENTIMENT STUDY: 39% of respondents reported they will be pursuing a significant upgrade in their company’s information, data, and communications systems in 2017, up from 29% in 2016

Corporate Finance Budgeting: Bridging Past and Future

By Rob Trippe, MBA, Financial Modelling Veteran

Many sophisticated business environments use budgets as a bridge between actual and forecast. Within the corporate finance domain, budgets are used to allocate resources and provide a starting point for current and future period estimates. Budgets and revised budgets are often communicated to shareholders and the investor community in numerous ways such as earnings estimates and road show presentations in the form of a “current estimate”. The budget becomes a foundation for expectation.

Here are five key approaches to a sound budget model:

  1. Utilize Quantitative Forecast Techniques
  2. Integrate Statements
  3. Roll the Budget - Period to Period and Model to Model
  4. Distinguish Between Assets in Place and Growth
  5. Use Expected Values

1. Common Quantitative Forecast Techniques

Budgeting and forecasting methods can be divided into two broad categories: qualitative and quantitative. Listed below are common quantitative forecast techniques for financial statement modeling. 

  • Time series change (price, for example)
  • Causal relationships (cash and receivables as a % of sales)
  • Naïve - Beginning balances equaling prior period ending balances or flat-lined (fixed assets, debt)

Qualitative techniques may be used to adjust quantitative forecast output, based upon subject matter expertise.

Time Series
A time series is simply a set of observations measured at successive points in time or over successive periods of time. Time series uses past trends of a variable.

  1. Cyclical. Any recurring sequence of points above and below the trend line that lasts more than a year is considered to constitute the cyclical component of the time series. Example: while the trend line for gross domestic product (GDP) is upward sloping, output growth displays cyclical behavior around the trend line. 
  2. Seasonal. Seasonal components capture the regular pattern of variability in the time series within one-year periods. Many economic variables display seasonal patterns.  A seasonal component will require you to forecast by quarter and/or month (a minimal sketch follows this list).
  3. Calendar. Similar to seasonal, but follows a calendar path, such as weeks or quarters (Q1, Q2, Q3, Q4).
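
To make the seasonal component concrete, here is a minimal Python sketch using four years of purely illustrative quarterly revenue: average seasonal factors are estimated from history and applied to a naive growth trend for the coming year.

    # Minimal seasonal forecast sketch (illustrative quarterly revenue).
    history = {
        2021: [100, 120, 90, 140],
        2022: [110, 132, 99, 154],
        2023: [121, 145, 109, 169],
        2024: [133, 160, 120, 186],
    }

    years = sorted(history)
    annual_totals = [sum(history[y]) for y in years]

    # Seasonal factor per quarter = average share of the year falling in that quarter.
    seasonal = [
        sum(history[y][q] / sum(history[y]) for y in years) / len(years)
        for q in range(4)
    ]

    # Naive trend: average year-over-year growth applied to the latest year.
    growth = sum(annual_totals[i + 1] / annual_totals[i]
                 for i in range(len(years) - 1)) / (len(years) - 1)
    next_year_total = annual_totals[-1] * growth

    forecast = [round(next_year_total * f, 1) for f in seasonal]
    print("Quarterly forecast:", forecast)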

With causal relationships, the forecaster examines the cause-and-effect relationships of a variable with other relevant variables such as the level of consumer confidence, an income statement or balance sheet item. Below are examples of common causal relation calculations:

1.    Position calculations represent a company's financial position regarding earnings, cash flow, assets or capitalization. Calculations can be expressed as a dollar amount, a percentage, or a comparison. Position calculations are often referred to as “common-sized” when they are uniformly applied to a whole statement.

  • Cash % Assets
  • Debt / Equity

2.    Metric calculations assess financial position relative to a non-financial figure such as days, transactions or number of customers.

  • Transaction Figures (Units Sold Per Day, Transaction Days, etc.)
  • Utilization %

A widely-known causal method is regression analysis, a technique used to develop a mathematical model showing how a set of variables are related.  Regression analysis that employs a single independent variable and approximates its relationship with the dependent variable by a straight line is called simple linear regression. Regression analysis that uses two or more independent variables to forecast values of the dependent variable is called multiple regression analysis.
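
As a hedged illustration of the causal approach, the Python sketch below fits a simple linear regression of receivables against sales by ordinary least squares; all figures are invented for the example.

    # Simple linear regression sketch: receivables as a function of sales.
    sales       = [200, 220, 250, 270, 300, 320]   # independent variable
    receivables = [ 24,  27,  31,  33,  37,  40]   # dependent variable

    n = len(sales)
    mean_x = sum(sales) / n
    mean_y = sum(receivables) / n

    # Ordinary least squares slope and intercept.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(sales, receivables))
             / sum((x - mean_x) ** 2 for x in sales))
    intercept = mean_y - slope * mean_x

    budgeted_sales = 350
    forecast_receivables = intercept + slope * budgeted_sales
    print(f"Receivables = {intercept:.2f} + {slope:.3f} * Sales -> {forecast_receivables:.1f}")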

Smoothing methods are appropriate when a time series displays no significant trend, cyclical, or seasonal effects (often called a stable time series). In such a case, the goal is to smooth out the irregular component of the time series by using an averaging process. 
This relates closely to valuation’s use of “normalized” statements. The purpose of normalizing financial statements is to adjust the financial statements of a business to more closely reflect its true economic financial position and results of operations on a historical and current basis. 

The term “moving” refers to the way the averages are calculated: the forecaster moves up or down the time series, picking a fixed number of observations to average at each step.
In calculating moving averages to generate forecasts, the forecaster may experiment with moving averages of different lengths and choose the length that yields the highest accuracy for the forecasts generated. Weights may also be assigned to time periods.
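
Below is a minimal Python sketch of that idea, applied to an illustrative stable series: a few window lengths are compared and the one with the lowest mean absolute error is used for the next-period forecast.

    # Moving-average smoothing sketch (illustrative stable time series).
    series = [102, 98, 101, 99, 103, 100, 102, 101, 99, 104]

    def mean_abs_error(series, window):
        """Average absolute error of a moving-average forecast of the given length."""
        errors = [abs(series[t] - sum(series[t - window:t]) / window)
                  for t in range(window, len(series))]
        return sum(errors) / len(errors)

    best_window = min(range(2, 6), key=lambda w: mean_abs_error(series, w))
    next_forecast = sum(series[-best_window:]) / best_window
    print(f"Best window: {best_window}, next-period forecast: {next_forecast:.1f}")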

Stochastic Modeling
Stochastic modeling is a statistical process that incorporates a random element into a figure’s composition, based upon pre-determined parameter estimates. It can be used to determine revenue and expense items. This is one technique often labelled “predictive analytics”.
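
The following Monte Carlo sketch shows the idea in Python; the mean growth, volatility, starting revenue and horizon are assumed parameters, not recommendations.

    # Stochastic (Monte Carlo) revenue sketch with assumed parameters.
    import random
    import statistics

    random.seed(42)                 # reproducible illustration

    MEAN_GROWTH = 0.04              # assumed expected annual growth
    GROWTH_STDEV = 0.06             # assumed volatility of growth
    START_REVENUE = 1_000.0
    YEARS, TRIALS = 5, 10_000

    outcomes = []
    for _ in range(TRIALS):
        revenue = START_REVENUE
        for _ in range(YEARS):
            revenue *= 1 + random.gauss(MEAN_GROWTH, GROWTH_STDEV)
        outcomes.append(revenue)

    sorted_outcomes = sorted(outcomes)
    print(f"Expected year-{YEARS} revenue: {statistics.mean(outcomes):,.0f}")
    print(f"5th-95th percentile: {sorted_outcomes[500]:,.0f} - {sorted_outcomes[9500]:,.0f}")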

2.    Integrate Statements

Integrated statements are a fundamental requirement for a forecast DCF, and the budget plays a part in that development process. Both conceptually and mechanically, integrated statements, by acknowledging the balance sheet-income statement-cash flow interdependency, demonstrate a core understanding of corporate finance synthesis and provide a foundation for both accounting and modeling structure. This is why the investment banks and Big Four consulting firms go to great lengths to develop this skill in their incoming analysts. Without integrated statement forecasts, DCFs are unlikely to hold up to scrutiny. 
Integrated statements can create circular references such as this:

  • Interest expense on the income statement is a function of an average debt balance on the balance sheet.
  • Current net income is dependent upon interest expense.
  • Balance sheet equity is dependent upon current period net income.
  • Debt is dependent upon total capitalization.
  • Total capitalization is dependent upon equity.

Though circularity is conceptually correct, both modeling organizations, FAST and Best Practice Modeling, recommend avoiding it. The circularity can be resolved with a goal seek macro, for instance one that maintains a target debt-to-equity level through all model periods. However this is handled, the best integrating approach will reflect the actual management approach employed in the organization.
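
One way to picture resolving this circularity is a simple fixed-point loop, standing in for Excel's iterative calculation or a goal seek macro. The sketch below is illustrative only: the opening balances, rates and the 0.5 target debt-to-equity policy are assumptions.

    # Iterative resolution of the interest-debt-equity circularity (assumed figures).
    EBIT = 500.0
    TAX_RATE = 0.25
    INTEREST_RATE = 0.06
    OPENING_DEBT = 1_000.0
    OPENING_EQUITY = 2_000.0
    TARGET_DEBT_TO_EQUITY = 0.5          # assumed capital structure policy

    closing_debt = OPENING_DEBT          # initial guess
    for _ in range(100):
        interest = INTEREST_RATE * (OPENING_DEBT + closing_debt) / 2   # average balance
        net_income = (EBIT - interest) * (1 - TAX_RATE)
        closing_equity = OPENING_EQUITY + net_income                   # no dividends assumed
        new_debt = TARGET_DEBT_TO_EQUITY * closing_equity
        if abs(new_debt - closing_debt) < 1e-6:                        # converged
            break
        closing_debt = new_debt

    print(f"Debt {closing_debt:,.1f}, equity {closing_equity:,.1f}, interest {interest:.1f}")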

3.    Roll the Budget Downstream – Period to Period and Model to Model

The mechanics of the budget as the bridge between actual and future estimate are straightforward. 

  1. With the budget in place, as actual data becomes available it replaces former budget data for the same period (a minimal sketch of this roll-forward follows the list). The model’s number of periods (actual + forecast) does not change. Implied here is that the budget is developed from the same chart of accounts as actual. Revisions could be daily, weekly, monthly or quarterly.
  2. Future periods beyond the budget are impacted and a new, longer term plan is developed. This is because financial statements are impacted by time series movement and causal relation. The future is largely a result of past and current positions.
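
As a small illustration of the first point, the Python sketch below overlays closed actual periods onto the budget while the total number of periods stays constant; the monthly figures are assumptions.

    # Rolling the budget: actuals replace budget figures for closed periods.
    budget  = {"Jan": 100, "Feb": 105, "Mar": 110, "Apr": 115, "May": 120, "Jun": 125}
    actuals = {"Jan": 98,  "Feb": 107}          # periods closed so far

    current_estimate = {period: actuals.get(period, planned)
                        for period, planned in budget.items()}

    print(current_estimate)
    # {'Jan': 98, 'Feb': 107, 'Mar': 110, 'Apr': 115, 'May': 120, 'Jun': 125}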

Therefore, actual, budget, current estimate and longer-range plans are each dependent upon a respective upstream model. Any change to a prior period assumption or data may impact all future periods and models thereafter.

The granularity of the budget and of a resulting downstream longer-term plan is dependent upon need and ability. It is easier to begin by developing greater granularity in a model and rolling up into longer periods than it is to break a period apart into smaller segments, such as going from quarterly to monthly. Pay attention to this during development and attempt to anticipate future requirements.

4.    Distinguish Between Assets in Place and Growth Opportunities

“Return on Capital (ROC), Return on Invested Capital (ROIC) and Return on Equity (ROE): Measurement and Implications” by Aswath Damodaran, illustrates well the power behind distinguishing between assets in place and growth assets. Separating the two allows management to more accurately gauge the following (quoting Damodaran):

  1. How good are the firm’s existing investments? In other words, do they generate returns that exceed the cost of funding them?

  2. What do we expect the excess returns to look like on future investments?

This approach has numerous advantages. First, it eliminates noise and confusion. Second, it sets FP&A up to develop appropriate discount rates by project and growth opportunity, thus avoiding a one-size-fits-all weighted average cost of capital approach. And third, it acknowledges that future returns may not be the same as prior returns – discount rates change through time.
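
A minimal sketch of the calculation this separation enables is shown below: each pool of capital is measured against its own cost of capital. The invested capital, income and discount rates are purely illustrative assumptions.

    # Assets in place versus a growth project, each against its own cost of capital.
    assets_in_place = {"invested_capital": 5_000.0, "after_tax_operating_income": 550.0,
                       "cost_of_capital": 0.08}
    growth_project  = {"invested_capital": 1_000.0, "after_tax_operating_income": 140.0,
                       "cost_of_capital": 0.12}    # riskier, so a higher rate assumed

    for name, a in [("Assets in place", assets_in_place), ("Growth project", growth_project)]:
        roc = a["after_tax_operating_income"] / a["invested_capital"]
        excess = roc - a["cost_of_capital"]
        print(f"{name}: return on capital {roc:.1%}, excess return {excess:+.1%}")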

5.    Use Expected Values

Though asset capacity and elasticity constraints (utilization rates) usually prevent a symmetrical distribution of outcomes from occurring in real life, as a planning and forecast tool budgets most often assume a bell-shaped curve. In such a symmetrical distribution, the most likely outcome is equal to the probability-weighted outcome. This provides compatibility with most downstream models, such as valuation models, which are most often built upon the assumption of expected values.
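
The small sketch below illustrates the point with three assumed scenarios: when the distribution is symmetrical, the probability-weighted (expected) value equals the most likely value.

    # Expected value versus most likely outcome under a symmetrical set of scenarios.
    scenarios = [
        ("Downside", 0.25,  90.0),
        ("Base",     0.50, 100.0),
        ("Upside",   0.25, 110.0),
    ]

    expected_value = sum(prob * value for _, prob, value in scenarios)
    most_likely = max(scenarios, key=lambda s: s[1])[2]

    print(f"Expected value {expected_value:.1f}, most likely {most_likely:.1f}")
    # Both are 100.0 here; a skewed scenario set would pull the expected value away.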

There is much discussion centered on the development, use, and validity of budgets. Viewing the budget as a bridge from past to future should help eliminate doubts, provide commonality between FP&A models and synthesize FP&A responsibilities. 
It all works together.

Rob Trippe is a financial modelling veteran. With over fifteen years’ experience, Rob has developed corporate finance models for valuation, M&A, forecasting and performance monitoring. He is widely respected for his deep understanding of corporate finance theory, lectures at university and has worked with some of the world’s largest and most respected firms. His research while at the investment bank Houlihan Lokey Howard & Zukin has been published in the Wall Street Journal and USA Today. His cash flow model work has been published in SEC and quarterly press release filings. Rob was accredited in valuation in 2008 and holds an MBA in Finance from Boston College. 

SR 11-7 and Corporate Finance Modelling: Managing Risk and Promoting Success

By Rob Trippe, MBA, Financial Modelling Veteran

Through the financial crisis, the advent of drill-down database capabilities and direction from the Federal government, financial modeling has evolved into both a defined art and science. The idea that financial modeling simply means one knows Excel and can work with numbers has been superseded by a financial model governance framework which requires the proper employment of academic theory, collaborative development, identification and management of risks and controls, and verification of final model output through validation techniques and ongoing monitoring practices.

Since the Great Recession, the Federal Reserve has issued “SR 11-7”, which defines what a model is, how a model is developed and implemented and how a model should be proven accurate and effective. The audience for SR 11-7 is financial institutions’ model developers, owners and users. But much can be learned from the document by the far broader corporate modeling world, and it is invaluable in pinning down the ill-used and often undefined phrase “best practice” in everyday corporate life. 

What is a Model?

SR 11-7 begins by eliminating the idea that models can pretty much mean anything to anyone. By definition, a model employs:

  1. Quantitative theory such as statistical, financial or accounting theory
  2. Three components: inputs-calculation processes-outputs
  3. Transformation of data into useful information (a reporting component)
  4. Quantitative output
  5. Possibly qualitative model inputs (assumptions), calculations or outputs so long as the model remains quantitative in nature, thereby having some uncertainty in outputs
  6. Repetitive use
  7. Subject matter expertise

A few other model characteristics which may help model identification and rank of importance are:

  1. Risk potential of model use
  2. Reporting impact of changes resulting from model output

Development, Implementation and Use

SR 11-7 sets forth general guidelines to ensure the model development approach is disciplined, knowledge-based and properly implemented. 

Dedicated Resources

Banks employ a dedicated development team and resources for key models. With this approach, models operate under the concept of leverage. Just as with operating leverage, substantial upfront time and effort can lead to losses but also to extraordinary gain through the quality of output and confidence in that output. Consider taking the time and expense to develop your model upfront: research methodologies and calculation techniques, give design and structure top billing, and take the time to validate the model through back-testing and benchmarking. Model development utilizing lower leverage will see a more limited reward, but with reduced risk. Models developed and improved on an “as needed” basis over time can result in confusion, key person risk, extraneous bulk, and circuitous audit trails.

Form and Structure

Thought to form and structure is critical for all models. From experience, we know that such thought is simply not always given. All financial models follow the same logical order. They: 

  1. Compile data
  2. Adjust and conform data as required
  3. Transform data through calculation
  4. Present final model output such as a value, a table or a forecast

Use this commonality to develop discipline in how models are built and structured among and across users and functional areas. Create an intuitive and easy-to-follow workflow, such as using tabs left-to-right in Excel. Models in MATLAB code can leverage replicable building blocks. A minimal sketch of this compile-adjust-transform-present flow appears below.
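
The sketch is a toy Python illustration of that four-step order, not a template from any standard; the sample data, the one-off adjustment and the 30% margin are assumptions.

    # Compile data, adjust it, transform it through calculation, present the output.
    def compile_data():
        # In practice this would pull from source systems; here, hard-coded samples.
        return [{"period": "Q1", "revenue": 250.0}, {"period": "Q2", "revenue": 275.0}]

    def adjust(rows):
        # Conform data as required, e.g. strip an assumed 5.0 of one-off items.
        return [{**r, "revenue": r["revenue"] - 5.0} for r in rows]

    def transform(rows, margin=0.30):
        # Calculation layer: apply an assumed 30% operating margin.
        return [{**r, "operating_profit": r["revenue"] * margin} for r in rows]

    def present(rows):
        # Final model output: a simple table of revenue and profit.
        for r in rows:
            print(f'{r["period"]}: revenue {r["revenue"]:.0f}, profit {r["operating_profit"]:.1f}')

    present(transform(adjust(compile_data())))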

I am a fan of the FAST standards which can be found at the link below. The FAST standards view Excel financial models not simply as mathematical calculation tools, but communication tools. I find that powerful, and think any veteran financial modeler will ultimately gravitate towards models which possess well defined structures, clarity in logic, and brief audit trails. FAST views financial models as narratives; with sentences, paragraphs, and chapters. Here is the link to the FAST organization: www.fast-standard.org.

Flow charting financial statement builds provides the user a quick gauge of what data is at hand versus what is needed to complete a model. A flow chart will help identify the required builds and the environments from which data will be acquired. That is helpful in managing large projects. In M&A, a banker or seller’s rep will often provide information which does not align with a valuation model’s builds. With a flow chart, it is easier to identify and manage what is required versus what is available and to provide focus on the point where most models break down – the middle. Flowcharting will also help visualize which components are core across models and which are “plug ‘n play”, meaning unique to the particular model at hand. A good example of this would be a set of financial statements (balance sheet, income statement, and cash flow) in consolidated formats. These are applicable to a range of models – capital allocation, valuation, forecasting – with each model tailored to provide additional output.
Though your model output is needed urgently, things change, and what you solve for today might not be the same as what is required tomorrow. Solid component piece model building provides built-in flexibility and adaptation. Such models can shed one-time output with reduced risk and react to changing sources and systems more easily.

Development Collaboration 

In financial modelling, no one functional area reigns supreme. Example: a financial model solves for a balance sheet answer. That means keeping accountants involved in the model process, wing-to-wing. Statistical methodologies and techniques may be employed in data transformation, but stats are just part of a broader model framework. At every possible step, speak in terms of the language used in a model’s final output environment (finance, accounting, etc.). Functional areas will often use the same terminology to mean different things. Be aware of this and agree on definitions first. When a term is used for the first time, stop and define it for all involved. One company I worked with made investments in equity securities using a portion of equity. Imagine the potential for confusion as model output is passed from entity to entity to entity! “Corporate”, owners and business units do not always coordinate and come to terms on basic terminologies and builds, and they often work under differing constraints. 
Below is a link covering Excel design content and protocol (Wall Street Prep). Every workplace across functional areas should be having conversations on modeling topics such as these: www.wallstreetprep.com/knowledge/financial-modeling-best-practices-and-conventions/


Documentation

Good documentation serves a few powerful purposes: it allows one to communicate a model’s structure, design and output across functions and academic disciplines with confidence, it provides comfort that a model was accurately developed, and it forces the model developer to think longer and harder about both academic standard and quality of design. A common approach to model documentation would include the three primary components of SR 11-7:

  1. Development, Implementation, and Use
  2. Validation and Ongoing Monitoring
  3. Governance and Controls

SR 11-7 discusses documentation as a critical component of both development and validation. Documentation ensures smoother use of a model as owners and users change over time.
Flow chart documents are powerful tools. Flow charts can come in many forms and there is no one exact manner in which to flow a model. From my experience, two flow charts stand out as invaluable: 

  1. System flow chart “wing to wing” (input to output)
  2. Development flow chart

A system flow chart will show, left to right, a model’s data inputs and their IT/business unit environments, calculation processes, and model output and its IT/business unit environments. A development flow chart will visualize the exact mathematical methodologies and techniques employed to transform input data into useful business information and will generally focus on only one business environment. Flow charts will dramatically improve model buy-in and provide a path to solid structure. Dead ends, duplication and unmanageable audit trails become visual.
Here is a document link to PwC which shows the documentation of a cash flow model with integrated statements. 


Validation

Model validation is a set of processes and activities intended to verify that a model performs as intended and as expected. All components of a model (inputs, calculation processes, outputs) should be subject to this verification. Validation should be commensurate with the potential risk in the model’s use. Validation does not end once a model is implemented. The same validation tools which are used during development can be employed in an ongoing manner. SR 11-7 recommends establishing periodic review (seldom seen in the corporate finance world) and setting thresholds and tolerance levels for when model output deviates from expectation or actual results.

The tools SR 11-7 suggests for validation are back-testing and benchmarking (a sketch follows the list below). 

  • Back-testing utilizes historic data as a proof of model output. In development, building a model with actuals and observing known and reliable metrics as building blocks is a sound approach and aligns a model’s build to already established and accepted metrics. On a forward-looking, ongoing basis, once the forecast period becomes actual, back-testing is again in play.
  • Benchmarking is the comparison of model output to the output of an outside and independent source. One example would be reconciling a DCF’s output to a market multiples approach, taking care to explain variations between methodologies.
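
The sketch below shows both tools in miniature, in Python: a back-test of forecast against closed actuals and a benchmark against an independent figure, each checked against an assumed 5% tolerance. All numbers are illustrative.

    # Back-testing and benchmarking sketch with an assumed 5% tolerance.
    model_forecast = {"Q1": 100.0, "Q2": 108.0, "Q3": 115.0}
    actuals        = {"Q1": 97.0,  "Q2": 111.0}      # periods closed so far
    independent_benchmark = 118.0                    # e.g. an outside estimate for Q3
    TOLERANCE = 0.05

    # Back-test: compare model output to actual outcomes.
    for period, actual in actuals.items():
        deviation = abs(model_forecast[period] - actual) / actual
        flag = "OK" if deviation <= TOLERANCE else "INVESTIGATE"
        print(f"Back-test {period}: deviation {deviation:.1%} -> {flag}")

    # Benchmark: compare model output to an outside, independent source.
    bench_dev = abs(model_forecast["Q3"] - independent_benchmark) / independent_benchmark
    flag = "OK" if bench_dev <= TOLERANCE else "INVESTIGATE"
    print(f"Benchmark Q3: deviation {bench_dev:.1%} -> {flag}")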

Both require a degree of independence from the model developers and owners, though that would vary case by case in a non-regulatory environment. Seminars, workshops, and certification are available for model validation. 

The “Use Test” adds qualitative validity and is mentioned in Basel II. Model validation through sensitivity and scenario analysis rounds out the validation process. Sensitivity analysis tests the change in model output relative to a change in a single input. Scenario analysis involves multiple changes to inputs to reflect a given set of circumstances; both are sketched below.
The Global Association of Risk Professionals has numerous articles on model risk and validation. The link is https://www.garp.org.
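
Here is a minimal sketch of both techniques against a toy profit model; the inputs, the +10% shock and the downturn scenario are assumptions chosen only for illustration.

    # Sensitivity analysis (one input at a time) and scenario analysis (several together).
    def operating_profit(volume, price, unit_cost, fixed_cost):
        return volume * (price - unit_cost) - fixed_cost

    base = {"volume": 1_000, "price": 50.0, "unit_cost": 30.0, "fixed_cost": 12_000.0}
    base_profit = operating_profit(**base)

    # Sensitivity: shock one input by +10% and compare the change in output.
    for name in ("volume", "price", "unit_cost"):
        shocked = {**base, name: base[name] * 1.10}
        delta = operating_profit(**shocked) - base_profit
        print(f"+10% {name}: profit change {delta:+,.0f}")

    # Scenario: several inputs change together to reflect one set of circumstances.
    downturn = {**base, "volume": 850, "price": 47.0}
    print(f"Downturn scenario profit {operating_profit(**downturn):,.0f} (base {base_profit:,.0f})")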

Conceptual Soundness

For starters, know exactly what your model solves for. Define your final model output upfront and in painstaking detail. What is the answer conceptually and how will it be expressed? What environment will the final model output be in and will it affect downstream models? Example: if you are developing a net cash flow model, make sure your model solves for a net figure (all economic benefits less all economic detriments resulting from the implementation of the planned activity). Another example is fair value versus fair market value. Fair value is a recently conceived accounting measure for balance sheet reporting. Fair market value is an age-old valuation concept. Lawyers go to remarkable lengths to define financial statement items and their calculations for the purposes of debt covenants in securities documentation. Model developers should as well. It’s arduous, but an impressive example.

Acknowledging conceptual limitations is critical to model integrity. All theory is limited and so too are models which depend upon it. A classic example is CAPM. Simple, intuitive, widely adopted, uncanny in accuracy and flawed in its claim to measure expected returns. Another example is IRR. Solving for IRR is extremely useful when comparing similar projects and investments, but IRR also comes equipped with built-in pitfalls. If you know your model’s limitations upfront and have articulated these limitations comprehensively, you will have a far greater chance of model acceptance and adoption than if you are broadsided during the challenge and cross-examination. 

The Fed asks that model developers give thought to various theories and approaches (for example DCF versus normalized earnings, or market multiples versus an income approach). Documentation provides a platform to communicate conceptual soundness, and academic theory and empirical evidence should be cited, as should alternative approaches. Qualitative judgment should be challenged and put to the test to ensure that subjective adjustments to the model are not simply compensating for an equal but opposite model error. 

Alignment to Academic Standard

Once, I opened a weekly forecast model to find it lacking any standard professional structure and protocol. Fair enough, this had been someone’s individual work assignment and they knew it well. The harder challenge to the model was the conceptually incorrect methodology it employed in attempting to forecast EBITDA. The model extrapolated a small percentage of monthly revenue into a full month’s figure. The population base (i.e. the number of customers) of the revenue streams was small (about 40) and dissimilar. This is not right (just because it rained the first two days of the month does not mean it will rain all month). The adjustments required to reach an accurate estimate were cumbersome and undermined model credibility. So, seek advice from other business units and corporate on model approaches, if needed. Coordination from corporate to business units is not always as strong as one might hope. In the absence of methodological guidance, the business plan should be the starting position for any forecast in corporate finance. This will also serve a second use in keeping your business plan development in check.

Another example of aligning to academic standard is financial statement structure and terminology. Over time companies will have developed statements and statement terminology meaningful internally. Taking the time to document and align to common statement structure and terminology will dramatically improve a model’s adoption by future users and outside parties. EBIT, EBI and EBT for example, should be clearly shown as such. “Cash flow” is a generic term. So, define your cash flow as you would read from a textbook, such as “net debt free cash flow to equity holders”.

Ongoing Monitoring

Ongoing monitoring is highlighted in SR 11-7 to ensure a model continues to function as intended and to evaluate whether any external changes require model alteration. Ongoing monitoring will also ensure that changes by a model user separate from the developer do not affect the model’s intended output. These would include overrides, partial formulae, etc. 

Sensitivity analysis and benchmarking are specifically cited for monitoring purposes, as is outcomes analysis. Outcomes analysis is the comparison of model output to actual outcomes. Back-testing, which was previously mentioned, is a type of outcomes analysis and may serve as an excellent model development approach as well.

Governance and Controls

Policy and procedure formalize risk management activities for implementation. SR 11-7 recommends an emphasis be placed on testing and analysis with a key goal of promoting accuracy. These roles and responsibilities can be divided among ownership, control, and compliance. 

Strong governance coordinates processes and model output across functional areas and validates the final model output. It provides a venue for sharing ideas across areas and business units. Governance activities may include:

  • Challenge to model development and verification of validation testing
  • Developing and disseminating model standards which identify the critical elements of model development, implementation, use and monitoring and provide a road map of expectations for success.
  • Naming and ensuring secured models.
  • Model inventorying which should be taken on a periodic basis noting both a model’s rank of importance and where each model is in the development-use-monitoring lifecycle.
  • Implementation and use governance (use of links, data dumps, input identification). 
  • Shared control tools and techniques for users; watch windows, balancing checks, backups, etc.
  • Flow chart guidance which will visually point out risk and control points which may not be obvious to a user in a live environment. 
  • Qualitative risk analysis is a good starting point in identifying risk points and controls. Consider brainstorming, Nominal Group or Delphi Technique which includes both upstream and downstream model owners and users. 

Become an informed, insightful and invaluable employee by utilizing SR 11-7’s guidance, even in non-regulatory environments. Excellent executive decision-making demands excellent modelling and analysis.
SR 11-7: www.federalreserve.gov/bankinforeg/srletters/sr1107.htm

Rob Trippe is a financial modelling veteran. With over fifteen years’ experience, Rob has developed corporate finance models for valuation, M&A, forecasting and performance monitoring. He is widely respected for his deep understanding of corporate finance theory, lectures at university and has worked with some of the world’s largest and most respected firms. His research while at the investment bank Houlihan Lokey Howard & Zukin was published in the Wall Street Journal and USA Today. His cash flow model while at the Hertz Corp. was published in SEC and quarterly press release filings. Rob was accredited in valuation in 2008 and holds an MBA in Finance from Boston College.