“Patrick Naim and Laurent Condamin articulate the most comprehensive quantitative and analytical framework that I have encountered for the identification, assessment and management of Operational Risk. I have employed it for five years and found it both usable and effective. I recommend this book as essential reading for senior risk managers.”
–C.S. Venkatakrishnan, CRO, Barclays
“I had the pleasure to work with Laurent and Patrick to implement the XOI approach across a large multinational insurer. The key benefits of the method are to provide an approach to understand, manage and quantify risks and, at the same time, to provide a robust framework for capital modelling. Thanks to this method, we have been able to demonstrate the business benefits of operational risk management. XOI is also well designed to support the Operational Resilience agenda in financial services, which is the new frontier for Op Risk Management.”
–Michael Sicsic, Head of Supervision, Financial Conduct Authority; Ex-Global Operational Risk Director, Aviva Plc
“The approach described in this book was a ‘Eureka!’ moment in my journey on operational risk. Coming from a market risk background, I had the impression that beyond the definition of operational risk, it was difficult to find a book that described a coherent framework for measuring and managing operational risk. Operational Risk Modeling in Financial Services is now filling this gap.”
–Olivier Vigneron, CRO EMEA, JPMorgan Chase & Co
“The XOI methodology provides a structured approach for the modelling of operational risk scenarios. The XOI methodology is robust, forward looking and easy to understand. This book will help you understand the XOI methodology by giving you practical guidance to show how risk managers, risk modellers and scenario owners can work together to model a range of operational risk scenarios using a consistent approach.”
–Michael Furnish, Head of Model Governance and Operational Risk, Aviva Plc
“The XOI approach is a simple framework that allows one to measure operational risk by identifying and quantifying the main loss drivers per risk. This facilitates business and management engagement, as the various drivers are defined in business terms and not in risk management jargon. Further, the XOI approach can be used for risk appetite setting and monitoring. I strongly believe that the XOI approach has the potential to become an industry standard for banks and regulators.”
–Emile Dunand, ORM Scenarios & Stress Testing, Credit Suisse
Founded in 1807, John Wiley & Sons is the oldest independent publishing company in the United States. With offices in North America, Europe, Australia and Asia, Wiley is globally committed to developing and marketing print and electronic products and services for our customers' professional and personal knowledge and understanding.
The Wiley Finance series contains books written specifically for finance and investment professionals as well as sophisticated individual investors and their financial advisors. Book topics range from portfolio management to e-commerce, risk management, financial engineering, valuation, and financial instrument analysis, as well as much more.
For a list of available titles, visit our website at www.WileyFinance.com.
This edition first published 2019.
© 2019 John Wiley & Sons Ltd.
Registered office:
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom
For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.
Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. It is sold on the understanding that the publisher is not engaged in rendering professional services and neither the publisher nor the author shall be liable for damages arising herefrom. If professional advice or other expert assistance is required, the services of a competent professional should be sought.
Library of Congress Cataloging-in-Publication Data
Names: Naim, Patrick, author. | Condamin, Laurent, author.
Title: Operational risk modeling in financial services : the exposure, occurrence, impact method / Patrick Naim, Laurent Condamin.
Description: Chichester, West Sussex, United Kingdom : John Wiley & Sons, [2019] | Includes index.
Identifiers: LCCN 2018058857 (print) | LCCN 2019001678 (ebook) | ISBN 9781119508540 (Adobe PDF) | ISBN 9781119508434 (ePub) | ISBN 9781119508502 (hardcover)
Subjects: LCSH: Financial services industry—Risk management. | Banks and banking—Risk management. | Financial risk management.
Classification: LCC HG173 (ebook) | LCC HG173 .N25 2019 (print) | DDC 332.1068/1—dc23
LC record available at https://lccn.loc.gov/2018058857
Cover Design: Wiley
Cover Images: © Verticalarray/Shutterstock, © vs148/Shutterstock, © monsitj/iStock.com, © vs148/Shutterstock
I met Patrick and Laurent at a conference on operational risk in 2014. This meeting was a “Eureka!” moment in my journey on operational risk, which had started a year earlier.
I had been asked to examine operational risk management from a quantitative perspective. Coming from a market risk background, my first impression was that, beyond the definition of operational risk, it was difficult to find a book that described a coherent framework for measuring and managing operational risk. Operational Risk Modeling in Financial Services now fills this gap. In the absence of such a book at the time, however, I became familiar with the basic elements of operational risk: the risk and control self-assessment process (RCSA), the concept of key risk indicators (KRIs), and the advanced measurement approach (AMA) for capital calculation under Basel II.
In examining the practices of the financial industry, I had the impression that these essential components existed in isolation from each other, without a unifying framework.
The typical RCSA is overwhelming because of the complexity and granularity of the risks it identifies. This makes individual risk assessment largely qualitative and any aggregation of risks problematic.
KRIs were presented as great tools to monitor and control the level of operational risks, but in current practice they appeared to come from heuristics rather than from risk analysis or a risk appetite statement.
Finally, at the extreme end of the quantitative spectrum, all major institutions were relying on risk calculation teams specialising in loss distribution approaches, extreme value theory, and other sophisticated mathematical tools. Financial institutions have fuelled a very sustained stream of research extrapolating the 99.9% annual quantile of loss distributions from sparse operational loss data.
As difficult as this capital calculation proved to be, it was generally useless for risk managers and failed the use test, which requires that the risk measurement used for capital also be useful for day-to-day risk management. This failure should not be attributed to the Basel II framework, as the AMA tried to combine qualitative and quantitative methods in an interesting way and introduced the important concept of operational risk scenarios!
In summary, I was confronted with an inconsistent operational risk management framework where the identification, control, and measurement of risks seemed to live on different planets. Each team was aware of the existence of the others, but they did not form a coordinated whole.
This inevitably raised the question of how to bridge the gap between risk management and risk measurement, which was precisely the title of Patrick's speech at the Oprisk Europe 2014 conference! Eureka! Never has a risk conference proven so timely.
The question is fundamental because it creates a bridge between an operational risk appetite statement and KRIs, and establishes a link between major risks, KRIs, and RCSA by leveraging the concept of operational risk scenarios.
The quantification of these risks (the risk measurement) can be compared to the stress testing frameworks used in other risk disciplines such as market risk. It can also be used to build a forward-looking economic capital model.
Once a quantitative risk appetite is formulated, once KRIs are put in place to monitor key risks, and once an economic capital consistent with this risk measure is established, better risk management decisions can be made. Cost-benefit analyses can be conducted to establish new controls to mitigate or prevent risk.
In other words, a useful risk management framework for the business has emerged!
I believe that Operational Risk Modeling in Financial Services is a book that will help at every level, from the seasoned operational risk professional to the new practitioner. For the former, it offers an innovative way to link known concepts into a coherent whole; for the latter, it serves as a clear and rigorous introduction to the discipline of operational risk management.
Olivier Vigneron
Managing Director | Chief Risk Officer, EMEA
JPMorgan Chase & Co.
Thank you for taking the time to read or flip through this book. You probably chose this book because you are working in the area of operational risk, or you will soon be taking a new job in this area. To be perfectly honest, this is not a subject that someone might spontaneously decide to research personally, as can be the case today for climate change, artificial intelligence, or blockchain technologies.
However, we quickly became passionate about this subject when we first started working on it over 10 years ago. The reason for this is certainly that it remains a playground where the need for modelling, that is, a simplified and stylized description of reality, is crucial. Risk modelling presents a particular difficulty because, as the Bank for International Settlements rightly points out in a 2013 discussion paper [1]: “Risk is of course unobservable”.
Risks are not observable, and yet everyone can talk about them and offer their own analysis. Risks are not observable, yet their consequences, such as the 2008 financial crisis, are plainly observable. It can be said that risks do not exist – only their perceptions and consequences do.
Risk modelling therefore had to follow one of two paths: modelling perceptions or modelling consequences. In the financial field, quantitative culture has prevailed, and consequence modelling has largely taken precedence over perception modelling. For a banking institution, the consequences of an operational risk are financial losses. The dominant approach has been based on the shortcut that, since losses are the manifestation of risks, it is sufficient to model losses.
As soon as we started working on the subject, we considered this approach wrong, because losses are the manifestation of past risks, not of the risks we face today. We have therefore worked on the alternative path: understanding the risks, and the mechanisms that can generate adverse events. This approach is difficult because the object of modelling is a set of people, trades, activities, and rules, which must be represented in a simple, useful way in order to consider – but not predict – future events, and at the same time to seek ways to mitigate them. This is harder than treating the object of modelling as a loss data file and using mathematical tools to represent it while, in a totally disconnected way, other people think about the risks and try to control or avoid them. This work on the mechanisms that can lead to major losses bridges the gap between risk quantification and risk management, and is more demanding for both, since modellers and business experts must find a common language.
It is only thanks to the many people who have trusted us over these 10 or 15 years that this work has gone beyond the scope of research and has been applied in some of the largest financial institutions in France, the United Kingdom, and the United States. We have worked closely, generally for several years, with the risk teams and business experts of these institutions, and several of them we have accompanied through to the validation of these approaches by the regulatory authorities.
This book is therefore both a look back over these years of practice, to draw a number of the lessons learned, and a presentation of the approach we propose for the analysis and modelling of operational risks in financial institutions. We believe, of course, that this approach can still be greatly improved in its field, and extended to related areas, particularly for enterprise risk management in nonfinancial companies.
This book is not a summary or catalogue of best practices in the area of operational risks, although there are some excellent ones. In any case, we would not be objective on this subject, since even though we have been privileged observers of the practices of the largest institutions and have learned a lot from each of them, we have also tried to transform their practices.
The first part of this book is both a brief presentation of the method we recommend and a summary of the lessons learned during our years of experience on topics familiar to those working in operational risks: RCSA, loss data, quantitative models, scenario workshops, risk correlation analysis, and model validation. In this section, we have adopted a deliberately anecdotal tone to share some of our concrete experiences.
The second part describes the problem, that is, operational risk modelling. We go back to the definition of operational risk and its growing importance for financial institutions. Then we discuss the need to measure it for regulatory requirements such as capital charge calculation, or stress tests, or nonregulatory requirements such as risk appetite and risk management. Finally, we discuss the specific challenges of operational risk measurement.
The third part discusses the three main tools used in operational risk analysis and modelling: RCSA, loss data models, and scenario analyses. We present here the usual methods used by financial institutions, with a critical eye when we think it is necessary. This part of the book is the closest to what could be considered as a best-practice analysis.
Finally, the fourth part presents the XOI method, for Exposure, Occurrence, and Impact. The main argument of our method is to consider that it is possible to define the exposed resource for each operational risk considered. Once the exposed resource is identified, but only under this condition, it becomes possible to describe the mechanism that can generate losses. Once this mechanism is described, it becomes possible to model and quantify it.
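To make the Exposure, Occurrence, Impact decomposition concrete, here is a minimal Monte Carlo sketch in Python. It is not taken from the book: the exposure size, occurrence probability, and severity parameters are all invented for illustration. Each exposed unit independently suffers an adverse event with some occurrence probability, and each event draws a random impact; the annual loss is the sum of the impacts.

```python
import random

def simulate_annual_loss(exposure, p_occurrence, impact_sampler, rng):
    """One Monte Carlo draw of annual loss under a toy X-O-I decomposition:
    each of `exposure` units independently suffers an event with probability
    `p_occurrence`; each event's severity is drawn from `impact_sampler`."""
    loss = 0.0
    for _ in range(exposure):
        if rng.random() < p_occurrence:          # occurrence
            loss += impact_sampler(rng)          # impact
    return loss

# Illustrative numbers only: 1,000 exposed transactions, a 0.5% chance of an
# error on each, and a lognormal severity with a median around 10,000.
rng = random.Random(42)
impact = lambda r: r.lognormvariate(9.2, 1.0)
losses = sorted(simulate_annual_loss(1000, 0.005, impact, rng)
                for _ in range(5000))
mean_loss = sum(losses) / len(losses)
p99 = losses[int(0.99 * len(losses))]
print(f"mean annual loss ~ {mean_loss:,.0f}, 99% quantile ~ {p99:,.0f}")
```

The point of the sketch is that once exposure, occurrence, and impact are each modelled explicitly, the loss distribution and its quantiles follow by simulation rather than by extrapolating from sparse historical losses.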
The method we present in this book uses Bayesian networks. To put it simply, a Bayesian network is a graph representing causal relationships between variables, with these relationships quantified by probabilities. Suppose you go to the doctor in winter with a fever and a strong cough. The doctor knows that these symptoms can be caused by many diseases, but that the season makes some more likely. To eliminate some serious viral infections from the diagnosis, the doctor asks you a few questions about your background, in particular your recent travels. The following graph can be used to represent the underlying knowledge.
Nodes are the variables of the model, and links represent causal relations quantified by conditional probabilities. The great advantage of Bayesian networks is that … they are Bayesian, that is, probabilities are interpreted as beliefs, not as objective data. Any probability is the expression of a belief. Even using an observed frequency as a probability expresses a belief in the stability of the observed phenomenon.
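As an illustration of how such a network updates beliefs, the doctor example can be sketched in Python using exhaustive enumeration: sum the joint probability over every unobserved variable, then normalise. The diseases, tables, and all probabilities below are invented for illustration; they are not taken from the book.

```python
# Toy Bayesian network for the doctor example:
# Season -> Disease <- Travel, Disease -> Fever, Disease -> Cough.
# All diseases and probabilities are illustrative only.

P_SEASON = {"winter": 0.25, "other": 0.75}
P_TRAVEL = {"yes": 0.10, "no": 0.90}

# P(disease | season, travel): flu is more likely in winter; the
# hypothetical tropical virus is much more likely after recent travel.
P_DISEASE = {
    ("winter", "yes"): {"flu": 0.30, "tropical_virus": 0.10, "none": 0.60},
    ("winter", "no"):  {"flu": 0.35, "tropical_virus": 0.01, "none": 0.64},
    ("other",  "yes"): {"flu": 0.05, "tropical_virus": 0.15, "none": 0.80},
    ("other",  "no"):  {"flu": 0.05, "tropical_virus": 0.01, "none": 0.94},
}

# P(symptom present | disease)
P_FEVER = {"flu": 0.90, "tropical_virus": 0.95, "none": 0.05}
P_COUGH = {"flu": 0.80, "tropical_virus": 0.40, "none": 0.10}

def posterior_disease(season=None, travel=None, fever=True, cough=True):
    """P(disease | evidence) by exhaustive enumeration: sum the joint
    probability over unobserved parents, then normalise."""
    scores = {}
    for d in ("flu", "tropical_virus", "none"):
        total = 0.0
        for s in P_SEASON:
            if season is not None and s != season:
                continue
            for t in P_TRAVEL:
                if travel is not None and t != travel:
                    continue
                joint = P_SEASON[s] * P_TRAVEL[t] * P_DISEASE[(s, t)][d]
                joint *= P_FEVER[d] if fever else (1 - P_FEVER[d])
                joint *= P_COUGH[d] if cough else (1 - P_COUGH[d])
                total += joint
        scores[d] = total
    z = sum(scores.values())
    return {d: v / z for d, v in scores.items()}

# A winter patient with fever and cough: flu dominates; learning that the
# patient travelled recently shifts belief towards the tropical virus.
print(posterior_disease(season="winter"))
print(posterior_disease(season="winter", travel="yes"))
```

The same mechanism works in both directions: observing symptoms updates the belief about the disease, and answers to the doctor's questions (season, travel) reshape that belief further, which is exactly the interpretation of probabilities as beliefs described above.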
Bayesian networks are considered to have been invented in the 1980s by Judea Pearl of UCLA [2] and Steffen Lauritzen [3] of the University of Oxford. Judea Pearl, recipient of the 2011 Turing Award, has written extensively on causality. His most recent publication is a non-specialist book called The Book of Why [4]. It is a plea for the understanding of phenomena in the era of big data: “Causal questions can never be answered by data alone. They require us to formulate a model of the process that generates the data.”
Pearl suggests that his book can be summarized in a simple sentence “You are smarter than your data”. We believe this applies to operational risk managers, too.