Classifications of risks that can help NEDs to understand risks and events
Why do we have corporate governance regulation?
What drives corporate governance regulation? Is it media focus, political pressure, or a need to ’do something’?
Or is it sound analysis leading to thoughtful prescriptions? I suspect that pretty much everyone accepts that the answer lies somewhere in the first list. Why does it have to be like this?
Evidence-based medicine is a well-established movement in health care. Even the UK Government published a White Paper in 1999 ("Modernising Government"), admitting that it "must produce policies that really deal with problems, that are forward-looking and shaped by evidence rather than a response to short-term pressures; that tackle causes not symptoms". Sadly this went the way of many well-meaning political initiatives.
The UK Corporate Governance Code starts: ’The purpose of corporate governance is to facilitate effective, entrepreneurial and prudent management that can deliver the long-term success of the company.’ Fine words, but how do we know that it actually does this?
Corporate governance regulators have not yet woken up to the need for evidence, analysis and proof. There was an outcry about executive pay. They reacted by asking the great and the good what should be disclosed, and then mandating it. The result is at least 30 pages in every annual report listing every last detail of directors’ remuneration. Where is the evidence that this has remedied the problem of excessive pay and payment for failure? I can see that investor interest, and therefore pressure, has had an effect on boards, but such pressure was in fact the driver, not the result, of additional regulation.
Listed company directors now have to put themselves up for re-election every year. This was because people thought it would be a good idea. Where is the evidence that this would help, and where is the post-implementation review that shows it was effective in what it set out to do?
Politicians and the media are baying for more diversity on boards. The regulators duly oblige by setting targets for more women. This time there are also claims of a statistical relationship between the number of women on boards and good performance. Except that the statistics are in fact pretty dodgy, and fail to show company performance improving over time as a result of the presence of more women (if you think about it, such a relationship would be quite extraordinary given the complexity of company profitability). Where is the post-implementation analysis that shows company profitability in the UK has improved as the percentage of FTSE100 female directors has doubled? I’m not saying that there isn’t a moral or political case for female representation. But if regulation is just political, then let’s not dress it up as rules for improved performance.
Would it be so hard to develop evidence-based regulation? This is what it should look like:
The original events that led to the ’need for regulation’ are thoroughly analysed, and their causes identified;
The theory is tested as to why the regulation will be effective against those causes, and what the possible impacts of the regulation might be;
The counterfactual is tested: what would be likely to occur if the policy were not implemented;
The impact of the new regulation is measured;
Both the direct and indirect effects of the regulation are identified;
The uncertainties and other influences outside of the regulation that might have an effect on the outcome are identified;
The analysis and tests are capable of being tested and replicated by a third party.
The regulation is kept under review to identify whether it ever becomes unnecessary or develops unforeseen consequences.
This is a manifesto for good regulation. None of the current corporate governance rules would satisfy this standard. Yet, given the costs of implementing the governance rules, is it unreasonable for regulators to justify themselves with a bit of evidence?
Put simply, governance regulation should start with an analysis of what has gone wrong in companies, identify regulation to stop this recurring elsewhere, and then check that this is being successful. The analysis into what goes wrong at companies must be far-reaching and insightful, going beyond condemning individual directors and failures of risk management. It needs to look at culture and accept human fallibility.
We all need rules, but the regulators are perpetuating a lie in suggesting that rules improve performance. Football teams couldn’t play a match without a common set of rules. But you won’t improve Manchester United’s performance by adding new rules to the game. Teams improve with better tactics, advice, and encouragement. Boards are teams too.
This would be the regulators’ toughest challenge. How can they go beyond rules and compulsion, to encouragement, best practice and helping boards? They need to accept the discipline of evidence, the limitations of rules, and open their eyes to the importance of culture and how to foster the right one. And that probably requires culture change at the Regulators themselves.
The basic model of commercial aviation is a thin tube of highly pressurised metal being propelled at 600 miles per hour by inflammable fuel at 35,000 feet in all weathers at temperatures of -57 degrees. So have you ever wondered how aviation got to be one of the safest forms of transport, despite being inherently full of such potentially catastrophic risks?
On the other hand, the average boardroom, comfortably at 20 degrees, often going nowhere, in an executive suite just above the ground, continues to struggle with business risks, and major business incidents and errors show no signs of reducing.
There are of course many reasons why aviation risk management is more advanced than the corporate equivalent, but one of them is not that one group is cleverer or more dedicated than the other.
Most of the explanation lies in the imperative to get aviation safety right. The feedback loop on an aircraft is very fast and exceptionally forceful. If bad decisions can kill both you and hundreds of passengers at once, then you will tend to take risk management very seriously indeed. By contrast, poor board decisions usually take months, if not years, to become evident, and tend to result in financial losses that are often survivable for the executives concerned.
Business could learn a vast amount from the risk culture that aviation has developed as a result of its safety imperative. When something goes wrong in aviation, the major drive is to find out what went wrong, rather than finding culprits. The aim is to learn the lessons and then disseminate recommendations to manufacturers, pilots, operators, air traffic and anyone else, to ensure that this set of events cannot be repeated.
In business, by contrast, the focus is on naming and shaming the director held to be responsible, ensuring he doesn’t get a bonus and possibly gets fired. It is driven by bloodlust, not analysis.
Aviation puts the emphasis on process, procedures and systems, accepting that humans will make mistakes. Moreover, the higher the stress levels, the more the mistakes that will be made. In the corporate world, the conventional assumption is that executives are too highly paid to make errors, and so if they do, the key outcome is for them to be punished.
Many will argue that this is simplistic. There are of course many other features of aviation risk management. It is often the media and politicians who personalise corporate failures. However, I haven’t heard many corporate commentators argue that the recent problems at Tesco and Morrisons, for example, shouldn’t be blamed on the outgoing CEOs, but instead need to be understood in the broader context of the changing market and process failures in the individual companies.
In the end the question is: which of these two models produces a better understanding of what went wrong in a particular event? Which is therefore more likely to produce a lasting improvement in risk management?
Then ask yourself, how safe would you feel in an aircraft regulated by the standards of today’s corporate governance codes?
The popular image of a pilot is of a dashing hero, who pulls off amazing feats of skill to save his aircraft from imminent disaster. However, in reality, what airlines value most in pilots is keeping to procedures and operating according to checklists, acting with defined responses to various planned and unplanned events.
That’s not to deny that pilots possess considerable skills and knowledge. It’s just that these are best deployed in known routines and responses. You don’t really want a pilot inventing a new dashing way to land a jumbo jet full of passengers.
That makes a pilot different to a CEO. You want a CEO who does lead the business into new ventures and innovative ways of working. These sets of skills are rare and so we end up looking harder for the right CEO, and paying them a lot more money. We end up with the superstar CEO.
Whilst this may be the right strategy, it does have some undesirable consequences. Much of the responsibility for performance is placed solely with the CEO. If something goes wrong, the media, politicians, and often investors, are out calling for the CEO’s head. The CEO can’t complain about this, as it’s the flip side to demanding the superstar salary. Their remuneration and incentives are, after all, based on the idea that the CEO is making a massively disproportionate contribution to company performance.
But it’s also rather convenient for others. For the media, it’s a simple age-old people story; another Icarus melts his wings and falls to earth. For investors, it can be reassuring. Perhaps they didn’t invest behind the wrong business model, the chief just let them down. Even the other board directors can deflect difficult questions about their own role by taking decisive action on the boss.
The problem with this familiar narrative is that, whilst it will have some truth, it is never the whole story. It enables others to avoid the sort of soul searching that might produce more insights and more long-term solutions. It enables regulators to avoid proper forensic inquiries into company failures; investigations that might produce real explanations as to how all the corporate governance, rulebooks, regulation, overseers, auditors, independent non-executive directors, and well-informed investors have all failed to stop the egregious corporate failures as we have seen in the last few years.
How did the boards of Lehmans, Merrill Lynch, Royal Bank of Scotland, and others, fail to spot their looming problems? They were full of very intelligent, experienced directors, so there must have been a systemic problem. But there has been very little attempt to learn from these events, because the focus has always tended to be on blaming the individuals at the very top, not understanding the systemic issues.
I joined the board of Northern Rock in late 2007, just after it had suffered the first run on a British bank in 150 years. I then conducted an inquiry into what had happened, in order to understand if there was a case to sue either the previous management or the auditors (there wasn’t). However, I learnt a lot about why it had happened. The then UK banking regulator never once spoke to me about the inquiry, and, to my knowledge, never forensically investigated what really happened at the bank.
It’s so much easier just to shoot people, than it is to find out the real story. It’s also much less likely to expose your own failings.
The result is that we haven’t really learnt the lessons of the last few years. Well-meaning corporate governance rules have however proliferated. Regulators have begun to accept that behavioural factors are an issue. However they have responded by, for example, insisting now that companies list out their risk factors and opine about their ’risk appetites’. You somehow doubt that an expert in human behaviour was involved in devising that remedy.
No one links new governance rules to the precise reasons for previous failures. No regulator assures us that if only companies had been applying their new rule, that failure wouldn’t have happened. There is simply no linkage between corporate failures, analysis of their causes and new regulation. That’s because regulators go straight from reacting to the public outcry about corporate failures to drafting new regulation, missing out the analysis stage completely.
So you wouldn’t want a CEO piloting your airliner, but we could certainly do with business problems being focussed less on ’pilot error’ and more on really understanding what exactly happened and why. This must be primarily aimed at preventing failures recurring. Isn’t that more important than taking revenge on individual executives?
Why you might want a pilot running your Risk Committee
There is a saying in aviation: ’Never fly in the same cockpit with someone braver than you’. Risk management for a pilot is literally a matter of life and death. Have you ever asked yourself whether you would share a boardroom table with executives braver than you are?
Company risks are often presented as long shopping lists, each with a reassuring comment about how it is unlikely and, in any case, covered off. A typical audit committee, and now the whole board, will be faced by this list and asked to opine as to whether this is a fair summary of the risks facing the business and the mitigations. The board, or more likely the CFO, will then select a dozen of the juicier risks to list in the annual report.
Neither the non-executive director nor the annual report reader is likely ever to gain much nourishment from this exercise. However, the drive of the regulators to be seen to get action from boards on risk will be satiated for another year. Has this sort of exercise ever helped to prevent a financial failure?
It’s not a surprise to find that aviation has developed a more insightful way of looking at risk. As the great aviator Ernest K. Gann wrote: ’Rule books are paper - they will not cushion a sudden meeting of stone and metal.’ The director could well substitute ’annual report’ for ’rule book’.
Aviation has developed a Threat and Error Management model, which involves looking at risks by their type and then applying a three-stage management process (avoid, trap and mitigate), which can equally be applied to business risks.
Categorising the types of risks
There are three high level categories of risks and events; unexpected external, expected external, and internal risks. We are, of course, here in the realms of Rumsfeld’s known unknowns and unknown unknowns. Rumsfeld incidentally was a naval pilot himself. An event is when a risk actually becomes a reality.
Unexpected external risks are, by definition, the most difficult to foresee. To quote Gann again, ’The emergencies you train for almost never happen. It’s the one you can't train for that kills you.’ It was, for example, the failure of confidence in AAA securities that was one of the key problems causing the recent financial crisis, yet almost no one predicted this risk could happen.
An expected external risk might be a rise in inflation or interest rates. These are risks that might reasonably be expected to have a chance of happening. They are the most common type to appear on a risk register, as they are easy to imagine and therefore easier to plan for.
Internal risks are those that are under your control in the business and are the ones most looked at in traditional control systems. These tend not to be so prominent in external communication of a business’s risk, as to acknowledge them implies that the control systems are not fully reliable.
Threat and error management
Clearly the best outcome is to avoid a risk becoming an event. To achieve this, companies put processes and controls in place or take pre-emptive avoiding action. This is generally applicable only to expected external risks, as it is a tough task to avoid an unexpected external risk. Some expected external risks are also not avoidable. For example, a rise in general interest rates is not within a company’s control, but a campaign against, say, working practices could be avoided by pre-emptively maintaining high standards of care for employees.
Most internal risks are avoided by careful management, strong defined processes and robust control systems. For example, fraud can be deterred by visible deterrents and controls. Increasing visibility of such deterrents (eg cameras) is in fact a prime avoidance technique. However, in any company, internal risks will crystallise into events.
No matter how good the controls and avoidance techniques are, the assumption should be that there will be a breach. All humans make mistakes. No avoidance system is ever 100% foolproof. The next stage is therefore to try to trap the event. This is where information systems are crucial.
It is essential to know that the first defence (avoidance) has been breached, so there has to be an alert. Directors need to understand what systems there are to alert managers to any possible, upcoming or actual breaches.
When an event happens, management needs to (a) notice it and (b) interpret it as important. An unexpected external event is particularly tricky to pick up every time. It does not fit easily into a standard control system, as it may not even be monitored. The event may however cause a performance measure or a forecast to move, which may then trigger an alert.
Generally you hope that senior management has the ’helicopter vision’ to spot unexpected strategic events, but at working level, it may be any employee who spots an unexpected new risk; for example a sudden bout of arson in a local community. The person who initially notices the event may well not be the same as the one who spots its significance.
How good are the communication systems that link these different people together? The simplest example here would be an employee looking at a bank statement, and querying a suspect transaction. In this case there should be a system to flag and investigate unusual transactions, and an agreed procedure that follows. However, the information system on other less structured risks may be an informal network; for example a casual mention of something new to someone else over the coffee machine. It is easy to forget the informal information systems, but these can be very important.
In summary, the important features of trapping are noticing an event and then interpreting it as important. The methods for achieving this are both formal and informal information systems. This may also require preliminary investigation to understand the nature of the event, including cause, extent and implications. Trapping unexpected external events is particularly problematic, as, by definition, you do not know what you are looking for.
Having trapped the event, the task is now to mitigate its effects. This may well require in-depth investigation, in order to understand fully what happened, which controls failed and what can be done to minimise the ill effects of the breach.
The direct and indirect effects of the breach need to be identified. Indirect effects can often be missed. The event itself may be mitigated by, for example, removing an errant individual, but there may be a loss of confidence in that department that causes others to move work elsewhere or put in their own informal double checks, reducing efficiency.
The business may need to compensate and replan for the event. A rise in interest rates, for example, may cause the business to reduce costs elsewhere or conserve cash. A spate of arson is likely to cause the business both to review its insurance cover and improve fire suppressant systems.
Finally the business needs to learn from the breach to reframe processes and controls. In rare cases, it may decide that nothing could be done, particularly from unexpected external events. However generally there will be lessons and enhanced procedures that will either reduce the chances of a future breach, or will mitigate its effects. This tends to be, at least on internal risks, the province of the internal audit recommendations. This feedback is often the most important part of the response, as the company has learnt how better to handle the risk and to prevent future such events.
Applying this thorough framework could help companies and boards better understand and manage all aspects of risk. In particular, it focuses on the importance of informal and formal communication, the role of everyone in the business in spotting possible events, timely and comprehensive information systems, compensation, and feedback.
It also emphasises that every risk should be assumed eventually to become an event. This forces proper consideration of trapping and mitigating, which otherwise tend to be assumed not to need detailed thought. It’s likely, as Gann said, that it will be the risks that you never thought of, or never believed could happen, that will be the most painful. Just ask the people who used to believe in AAA bonds.
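The categories and the avoid/trap/mitigate lifecycle described above can even be written down as a simple state check. The sketch below is purely illustrative (the class and field names are invented for this example, not drawn from any real risk system), but it shows how a risk register entry might carry both its category and its current stage of handling:

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    EXPECTED_EXTERNAL = "expected external"
    UNEXPECTED_EXTERNAL = "unexpected external"
    INTERNAL = "internal"

@dataclass
class Risk:
    name: str
    category: Category
    avoided: bool = False    # pre-emptive controls held, no event occurred
    trapped: bool = False    # event noticed and interpreted as important
    mitigated: bool = False  # effects contained, lessons fed back

def manage(risk: Risk) -> str:
    """Report the stage at which handling of a risk currently stands,
    following the avoid -> trap -> mitigate sequence."""
    if risk.avoided:
        return "avoided: controls prevented the event"
    if not risk.trapped:
        return "untrapped: event may have occurred but was not noticed"
    if not risk.mitigated:
        return "trapped: noticed, awaiting investigation and mitigation"
    return "mitigated: effects contained, processes reframed"

# Example: an interest-rate rise (expected external) cannot be avoided,
# but it can be trapped by monitoring and mitigated by replanning.
rates = Risk("interest rate rise", Category.EXPECTED_EXTERNAL,
             trapped=True, mitigated=True)
print(manage(rates))  # mitigated: effects contained, processes reframed
```

The point of the sketch is the ordering: a board that only ever discusses avoidance never asks the trapping and mitigation questions at all.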
Think like a pilot: How a non-executive director can challenge business risks.
Have you, as a non-executive, ever sat there wondering how you can realistically review the long shopping list of corporate risks? What value can you add? How can you systematically challenge them? And when something goes wrong, the executives explain what happened, but how does a non-executive go deeper to understand if there are systematic issues, rather than one-off bad luck?
These are real questions that non-executives face all the time. There are, however, pointers from aviation, an industry that really understands risk management. In an earlier article I reviewed how aviation categorises and reviews risks, by sub-dividing them into expected external, unexpected external and internal risks.
For most non-executives, internal risks are the most commonly discussed ones, as they are, or should be, under the control of the business. Aviation has three levels of risk management; avoid, trap, mitigate.
Any organisation should have basic defences to avoid risks becoming events. The major defences are the control environment, systems and procedures. The control environment comprises culture and controls. Is the atmosphere lax or do people expect things to be done by the book? Are short-cuts or compromises allowed? Are whistle-blowers encouraged? Procedures include policies, methodologies and rules. Who can authorise what, and how does an action get approved? Systems should follow from the procedures (and not the other way round), as rules and methodologies are coded into systems to automate processes.
Defences can be overt or covert. Strong password disciplines on systems, for example, are not only a control, but also a visible deterrent, dissuading some people from trying to break into the systems. The first line of any defence is always deterrence. So a strong control environment must first be understood and acknowledged to be so. There may even be a value in bluffing, giving the impression that controls are even stronger than they are. The classic example of this is dummy cameras. Another is supermarkets that encourage local police to use their staff canteens. The sight of uniformed officers walking through the store always has a highly salutary effect on shoplifters.
Suppose a control has failed, and an internal risk has become an event. This should now be understood as either a process failure or a violation.
a. Process failures
These are unintentional control errors, for example, an invoice that gets paid twice because a clerk or a system fails to stop a duplicate. You can normally divide these into skill-based and knowledge-based:
1. Skill-based errors are often failures arising from human frailties, such as forgetting to check something, or even a physical slip or fall. They are a result of an operator being human. Every person makes mistakes, and so control systems have to be resilient to human error. A slip occurs when a person intends to do something, but inadvertently does something else, for example, pressing the wrong button or dialling the wrong number. A lapse occurs when someone forgets to do something. For example, an employee forgets to log off a computer or fails to check a security system.
2. Knowledge-based failures arise from someone’s lack of expertise or information. A buyer who is new to the job might not know that they need to check a new supplier is already on an approved list, or has certified that they apply certain ethical standards. A manager may simply not know how to deal with a situation. A person lacking the right skills is a knowledge failure, not a skill one.
b. Violations
These are deliberate breaches of controls and processes. They can be either routine or exceptional.
1. Routine breaches are fairly common ones. It may be ’accepted’ practice for employees to share a login code, possibly even one written on a post-it note on a terminal. Procedures may be too cumbersome or out-dated to be practical, and people get used to skimping on them. So routine breaches need to be reviewed to see if the problem is with the process, rather than the violators. The violators might be keeping a process going that otherwise simply would not work.
Routine breaches are most likely to be failures of the avoidance system, instigated neither out of malice nor incompetence, but by humans reacting to a control system that hinders what they perceive as their everyday roles.
2. Exceptional breaches come in many different guises. They may result from exceptional situations that were not envisaged in the standard process. An unexpected external event may have happened, and the staff on the spot then had to make a quick decision about how to react, without the time to consult a manual or a superior. By contrast, a violation may even occur out of boredom or overconfidence; the Chernobyl disaster, for example, began with operators deliberately violating procedures during a test. Corruption or criminal behaviour might cause an employee to deliberately violate procedures. Finally, a disaffected employee could violate in order to sabotage a process or company.
Exceptional breaches are more likely to represent a serious failure of the control system to envisage certain events or a person intent on abusing the system.
The difference between knowledge and skill failures is important.
A skill-based error may reflect the need to change the ergonomics (for example, if two options on a screen look similar, eventually someone will click the wrong one; the same goes for two very similar levers on a machine), or the need to ensure people are careful when long-standing processes are changed (as the operator is very likely to revert unintentionally to their previously learned behaviour). Skill-based errors should be easier to trap, as they may be predictable. For example, most computer input systems put extensive effort into trapping obvious mistakes. Google, for instance, will try to spot a spelling mistake in a query, and offer a ’Did you mean?’ alternative.
It is important to distinguish everyday slips from outright carelessness. The mitigation for the latter might be punishing an individual, but, for the former, it is a process solution, making it more difficult to commit common slips and providing better immediate trapping when they do occur.
Knowledge-based errors generally represent a failure to provide training. The knowledge usually exists, but hasn’t been disseminated to the right people. The mitigation may be to review training standards and provide remedial education.
Routine violations are likely to require a change in the process or system to make compliance more attractive than violation. It may of course be that sanctions also need to be taken against the violators if their actions are unreasonable.
Exceptional violations are more difficult to generalise. Ensuring that risk management has encompassed less likely scenarios will help, as will cultural measures that reduce boredom and identify employee disaffection.
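The four-way classification above (skill-based errors, knowledge-based errors, routine violations, exceptional violations) lends itself to a simple lookup of indicative first responses. This is only a sketch; the names are invented for illustration and the responses are paraphrased from the discussion above. Real cases need investigation, not a lookup table:

```python
from enum import Enum

class ErrorType(Enum):
    SKILL_SLIP = "skill-based slip or lapse"
    KNOWLEDGE_GAP = "knowledge-based failure"
    ROUTINE_VIOLATION = "routine violation"
    EXCEPTIONAL_VIOLATION = "exceptional violation"

# Indicative first responses, paraphrasing the article's argument:
# fix the system before blaming the person, except where intent is malign.
FIRST_RESPONSES = {
    ErrorType.SKILL_SLIP: "redesign ergonomics and trapping, not punishment",
    ErrorType.KNOWLEDGE_GAP: "review training; disseminate existing knowledge",
    ErrorType.ROUTINE_VIOLATION: "fix the process that invites the workaround",
    ErrorType.EXCEPTIONAL_VIOLATION: "widen scenario planning; examine motive and culture",
}

def first_response(error: ErrorType) -> str:
    """Return the indicative starting point for mitigation."""
    return FIRST_RESPONSES[error]

print(first_response(ErrorType.ROUTINE_VIOLATION))
# fix the process that invites the workaround
```

A non-executive could use exactly this mental table in the boardroom: before asking "who was at fault?", ask "which of the four boxes does this event sit in, and does our response match the box?".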
Non-executive directors need to challenge risks more, but they often don’t know how to. This classification can help them challenge executives about risks and events. It moves beyond knee-jerk reactions to errors, failures and violations, which often focus on condemnation of individual behaviour. It accepts human frailty, and points towards making systems and processes resilient to operator error. It forces managers to distinguish routine breaches, which serve to keep the organisation running, from malevolent violations that undermine controls.
Next time you fly, take comfort from the risk management that has made commercial aviation such a safe form of travel. Then get back to the office, and use it in your boardroom.