Financial models – the tools used to forecast cash flowing into and out of a company, profit, loss, liquidity and risk – are important tools for managers. Unfortunately they sometimes go wrong.
Often they go wrong due to unforeseen events – problems in manufacturing, industrial action, problems with suppliers. These issues are difficult to model and require a lot of data that might not be forthcoming.
Sometimes even the best forecasts of a variable will be wrong. Many companies were left with excess inventory at the start of the financial crisis, a costly mistake given the fall in demand. Few people could see what was happening, and the extent of it, in time to build much lower demand for products into their models. Sometimes a more sophisticated statistical approach will yield better results, but there are often diminishing returns here.
Both of these factors are hard for a company to deal with. Larger companies may be able to throw more resources at the problem, or may be able to insure against some losses, but these remain challenging for almost any company.
Below I list three problems (or groups of problems) that I have seen cause financial models to go wrong – things that can be identified in advance rather than in hindsight, and things that an organisation wishing to improve its financial forecasts might look to fix.
1) Winner's curse, adverse selection, regression to the mean and similar problems
Basically this often comes down to not using some available information, resulting in selection bias. The classic example of the winner's curse is an auction. If a number of prospective purchasers go into an auction for an item with imperfect information then none of them will know the item's true value exactly. Some will think it is worth less than it is and some will think it is worth more; on average, however, we would expect a large number of purchasers to get the value roughly correct. The issue arises because only one purchaser's price is relevant: the highest. All other prices are not paid. If each bidder were to bid what they believed the item was worth, then the highest bid is much more likely to have overestimated the value than underestimated it, and as a consequence the winner will pay too much. The solution is for the bidder to bid less than they believe the value to be – their bid should reflect the most likely value of the item (plus margins for profit, covering costs etc.) given that their estimate turned out higher than all the other bidders'.
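The winner's curse is easy to demonstrate with a small simulation. In the sketch below (all figures are illustrative assumptions, not taken from any real auction) every bidder's estimate of the value is unbiased, yet the winning bid almost always exceeds the true value:

```python
import random

random.seed(0)

TRUE_VALUE = 100.0   # assumed true value of the item
NOISE_SD = 20.0      # each bidder's estimate is unbiased but noisy
N_BIDDERS = 10
N_AUCTIONS = 10_000

overpaid = 0
total_premium = 0.0
for _ in range(N_AUCTIONS):
    # each bidder naively bids their own unbiased estimate of the value
    bids = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(N_BIDDERS)]
    winning_bid = max(bids)
    if winning_bid > TRUE_VALUE:
        overpaid += 1
    total_premium += winning_bid - TRUE_VALUE

print(f"winner overpaid in {overpaid / N_AUCTIONS:.0%} of auctions")
print(f"average overpayment: {total_premium / N_AUCTIONS:.1f}")
```

Even though every individual estimate is correct on average, selecting the maximum of ten noisy estimates produces a systematic overpayment of well over one standard deviation's worth of noise.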
Whilst it can seem obscure, even a little esoteric, this is actually a common problem. Bidding for a contract brings similar issues: given that you put in the lowest price, are you likely to have underestimated the cost of undertaking the work or providing the service? This can be particularly pronounced when bidding against an incumbent; they have a much better estimate of the real cost of fulfilling a contract, and undercutting them can be a very expensive business. Whilst those putting together bids in larger organisations often appreciate these risks, financial forecasts built by aggregating forecast profits on contracts often miss these effects.
These effects are not limited to single projects or events. Aggregated across a larger number of activities there can be a pronounced regression to the mean effect. As an illustration, consider a public sector organisation that funds any project with a benefit/cost ratio of 2:1 and will not fund any project with lower expected benefits. Even assuming that all the people making forecasts are unbiased and have no interest in making prospective projects look more attractive, the overall benefits can be disappointing. All forecasts of benefits have some uncertainty. A project that in reality is just below the funding threshold, but where the forecast is favourable, will be funded (and will deliver its "real" return of less than 2:1); a project just above the threshold with an unfavourable forecast will appear less attractive than justified and will not be funded. A post hoc evaluation will generally only look at the projects undertaken, and thus will work on a sample disproportionately populated by projects where the initial forecast was optimistic. Organisations that have a strong focus on reviewing missed opportunities, and on reviewing decisions where the choice made was not to act, will have a better sense of any problems in their forecasts.
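A quick simulation makes this selection effect concrete. In the sketch below (the threshold, the spread of project quality and the forecast error are all invented for illustration) every forecast is unbiased, yet the projects funded because they cleared the 2:1 threshold deliver, on average, less than was forecast:

```python
import random

random.seed(1)

THRESHOLD = 2.0    # fund only projects forecast at 2:1 or better
NOISE_SD = 0.5     # forecast error, unbiased around the true ratio
N_PROJECTS = 100_000

funded_forecast, funded_true = [], []
for _ in range(N_PROJECTS):
    true_ratio = random.uniform(0.5, 3.5)              # real benefit/cost ratio
    forecast = true_ratio + random.gauss(0, NOISE_SD)  # unbiased forecast
    if forecast >= THRESHOLD:
        funded_forecast.append(forecast)
        funded_true.append(true_ratio)

avg_forecast = sum(funded_forecast) / len(funded_forecast)
avg_true = sum(funded_true) / len(funded_true)
print(f"funded projects: forecast {avg_forecast:.2f}, delivered {avg_true:.2f}")
```

Conditioning on "the forecast cleared the bar" selects for optimistic forecast errors, so the funded portfolio systematically underdelivers relative to its business cases even though no individual forecaster is biased.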
Finally, the phenomenon of adverse selection is worth a brief mention: where there is asymmetric information, customers may not behave as an overly simple model might suggest. Here the classic example is insurance. If you offer life insurance to a population at a given price, some will accept it and some will reject it. Those that buy life insurance do so because it appears good value – maybe they are ill, maybe they smoke or have an otherwise risky lifestyle. If an insurance company makes plans based on the prevalence of disease or longevity across the population as a whole, with limited reference to individual circumstances, then each customer, who knows more about their own lifestyle than the insurance company does, is more likely to buy insurance precisely when it is a bad deal for the insurer.
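A toy simulation of this mechanism (the risk levels and payout are made-up numbers) shows how pricing on the population average goes wrong: only the higher-risk half of the population buys, so the insurer expects to lose on every policy sold:

```python
import random

random.seed(2)

PAYOUT = 100_000.0
N_PEOPLE = 100_000

# each person's probability of a claim; uniform spread of risk for illustration
population = [random.uniform(0.0, 0.04) for _ in range(N_PEOPLE)]
avg_risk = sum(population) / N_PEOPLE
premium = avg_risk * PAYOUT  # priced on the population average, no loading

# each customer knows their own risk and buys only if the cover is good value
buyers = [p for p in population if p * PAYOUT > premium]
expected_claims = sum(p * PAYOUT for p in buyers)
collected = premium * len(buyers)

print(f"premium: {premium:.0f}, buyers: {len(buyers)}")
print(f"expected loss per policy: {(expected_claims - collected) / len(buyers):.0f}")
```

A model that applies the population-average claim rate to the book of policies actually sold will understate claims, because the book is self-selected from the riskier end of the population.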
In most markets the effects of this are weaker, and most companies have become much more sophisticated in their approach to adverse selection in recent years; nevertheless there are still some surprises. Travelcards on buses or trains that allow unlimited travel draw off the most lucrative customers; phone contracts that offer cheaper calls attract those that make the most calls. Sometimes companies offer new products or services that do not attract enough new customers to justify the losses made by drawing existing customers away from more expensive products or services.
2) Inappropriate use of averages
Consider something like the FMCG (fast-moving consumer goods) market and a company's financial model that helps it project profit in that market. It might contain production costs, markup, demand, production capacity and more.
A manager looking to maximise profit will aim to avoid producing more than they can sell, while also producing as much as they can sell. Forecasting sales revenue and expenses incurred must take account of fluctuations as well: even if the average demand over an extended period is known, that does not mean there will be no short-term fluctuations. If unsold stock exceeds storage capacity then the excess must be discounted or disposed of.
Likewise businesses, especially new businesses unfamiliar with financial modelling, often manage to make errors in estimating staffing: they match staffing levels to the average demand for labour. In some industries this is fine – some work can be put aside for quiet periods and staff can be used efficiently. In others, fluctuations between busy and quiet periods mean that staffing to the average demand for labour is not constructive.
Where markups on goods are high, the cost to a company of demand that goes unmet is high; where markups are low, goods produced but not sold can be very expensive. Appropriate demand models that take a more sophisticated approach can help to manage this uncertainty and reduce costs to the business.
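One standard way of going beyond the average is the classic newsvendor calculation, which sets the stock level at a quantile of the demand distribution determined by the relative costs of under- and over-stocking, rather than at the mean. A minimal sketch, with illustrative prices and a made-up demand distribution:

```python
import random

random.seed(3)

PRICE, UNIT_COST, SALVAGE = 10.0, 4.0, 1.0  # illustrative unit economics
# fluctuating demand around an average of 1000 units
demand = [max(0, int(random.gauss(1000, 200))) for _ in range(20_000)]

def expected_profit(q):
    """Average profit over the demand scenarios when producing q units."""
    total = 0.0
    for d in demand:
        sold = min(q, d)
        unsold = q - sold
        total += PRICE * sold + SALVAGE * unsold - UNIT_COST * q
    return total / len(demand)

avg_q = round(sum(demand) / len(demand))  # naive plan: produce to average demand
# newsvendor critical fractile: underage cost / (underage + overage cost)
fractile = (PRICE - UNIT_COST) / ((PRICE - UNIT_COST) + (UNIT_COST - SALVAGE))
best_q = sorted(demand)[int(fractile * len(demand))]  # empirical demand quantile

print(f"profit producing to average demand: {expected_profit(avg_q):.0f}")
print(f"profit at the critical fractile:    {expected_profit(best_q):.0f}")
```

With a high markup the critical fractile sits above 0.5, so the profit-maximising production level is above average demand; with a low markup it sits below. Producing to the average is only optimal in the knife-edge case where the two costs happen to balance.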
This is not to say that averages should never be used; for a headline figure, an average is often all that a senior manager needs to know on a day to day basis. However, that manager should ensure that any department providing the figures knows when they might be inappropriate or limited, and that the department will provide more in-depth figures as needed.
3) Not updating assumptions for changed circumstances
Building a model is hard. It takes a lot of work, an in-depth understanding of the thing being modelled, the collection of a lot of data and, ideally, some rigorous testing. It is no surprise that many models used on a day to day basis (financial, demand, supply or risk) are actually quite old and have been patched up somewhat to keep them working. Not only is this unsurprising, it is inevitable: given the expertise needed to build an appropriate model it can be an expensive task, and regular comprehensive rebuilding is not cost effective.
This being said, there are occasions where a model can rapidly cease to be fit for purpose. To give an example, consider a forecast of demand for commercial vehicles over the past 10 years. A simple statistical model might incorporate current fleet size, GDP growth, interest rates and running costs. Demand for vehicles comes from replacing old vehicles when it becomes more cost effective to buy new ones, and from purchasing additional vehicles to meet new demand in a growing economy. A manufacturer might estimate demand for future production from this relatively simple model, and appropriately calibrated it would provide years of service with good predictions.
In a recession or depression this model could very well break down. Leaving aside macroeconomic considerations about 0% interest rates, there is a very real question about how demand for new vehicles is influenced by a contracting economy. In the old model the assumption was that vehicles exited commercial circulation as they stopped being attractive relative to new vehicles; now companies that are downsizing release vehicles onto the market in a way the original model was never meant to capture. A model calibrated on data with purely positive GDP growth is likely to forecast poorly under radically different conditions.
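The point can be illustrated with a toy calibration exercise. Everything here is invented for illustration: an assumed data-generating process in which demand responds asymmetrically to contraction, and a straight line fitted by ordinary least squares to years of purely positive growth. Extrapolated into a contraction, the calibrated model badly overpredicts demand:

```python
import random

random.seed(4)

def true_demand(growth):
    # assumed process: in a contraction, downsizing firms release used
    # vehicles, so new-vehicle demand falls much faster than it rises
    base = 100.0
    if growth >= 0:
        return base + 400.0 * growth
    return base + 1200.0 * growth

# calibration sample: 40 years of purely positive GDP growth, with noise
history = [(g, true_demand(g) + random.gauss(0, 2))
           for g in [random.uniform(0.005, 0.04) for _ in range(40)]]
n = len(history)
mean_g = sum(g for g, _ in history) / n
mean_d = sum(d for _, d in history) / n
slope = (sum((g - mean_g) * (d - mean_d) for g, d in history)
         / sum((g - mean_g) ** 2 for g, _ in history))
intercept = mean_d - slope * mean_g

# the calibrated model extrapolated into a 3% contraction
predicted = intercept + slope * (-0.03)
actual = true_demand(-0.03)
print(f"model predicts {predicted:.0f}, actual demand {actual:.0f}")
```

The fit is excellent over the data it was calibrated on; the failure only appears in a regime the calibration data never contained, which is exactly why such models can serve well for years and then fail abruptly.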
These are obviously not the only problems associated with financial modelling or forecasting, but they highlight some common issues, especially within small or medium-sized businesses (or sections of large businesses unused to dealing with them). All of these issues involve a misuse of information: what is the cost of a project given that your bid has won? What is the cost of stock shortfalls given the observed variability in demand? What is the expected demand for a product given that there has been negative GDP growth?