OCC 2000-16

Subject: Risk Modeling
Description: Model Validation
Date: May 30, 2000

TO: Chief Executive Officers and Compliance Officers of All National Banks, Department and Division Heads, and All Examining Personnel

PURPOSE

This bulletin provides guidance to help financial institutions mitigate the risks arising from reliance on computer-based financial models that are improperly validated or tested. The guidance outlines key model-validation principles and the Office of the Comptroller of the Currency's (OCC) expectations for a sound model-validation process. The expectations in this bulletin supplement previously issued model validation guidance, generally found in the subject-matter booklets of the Comptroller's Handbook or in OCC bulletins.

CONTENTS

Background
General Procedures for Model Validation
Elements of Sound Validation Policy
Validating the Model Inputs Component
Validating the Model-Processing Component
Model Reports (Management Information Systems)
Summary of Supervisory Expectations Regarding Model Validation

BACKGROUND

Computer models are abstract representations of the various relationships among events and values in the real world. They are used in banking to estimate risk exposure, analyze various business strategies, and estimate fair values of financial instruments and acquisitions. Because models can substantially enhance management information systems, and because the cost of computing power continues to fall, they are playing a progressively more important role in the banking industry. Models are now routinely used for credit scoring, asset-liability management, trading-risk management, and valuation estimates of financial instruments, such as securitization retained interests. Over the next decade, models will likely increasingly guide enterprise-wide risk management, economic and regulatory capital allocation, whole-bank credit risk management, fiduciary asset management, and internal profitability measurement. In light of this increasingly pervasive use, it is apparent that models can provide extremely useful information for bankers' decision making.

Model development is a complex and error-prone process. While many completed models work as planned, some contain fundamental errors. Moreover, the internal logic of most models is abstract and narrowly scoped, so considerable judgment and expertise are required to apply model results outside of the context in which they were derived. The OCC has observed several instances in which decision makers relied on erroneous price or exposure estimates, or on an overly broad interpretation of model results, with serious consequences for their bank's reputation and profitability. There are many more instances in which the incorrect use of models created the potential for large losses that were avoided only fortuitously. This problem is generally referred to as "model risk."

Fortunately, model risk can be considerably reduced. Sound model building includes rigorous procedures for "model validation." Model validation not only increases the reliability of a model, but also promotes improvements and a clearer understanding of a model's strengths and weaknesses among management and user groups.
A model consists of three components: an information input component, which delivers assumptions and data to the model; a processing component, which contains the theoretical model and transforms inputs into estimates via computer instructions (code); and a reporting component, which translates the estimates into useful business information. Because errors in any of these three components can render the model's information meaningless or misleading, an effective model-validation process must address all three. This document delineates principles and policies that guide effective model-validation procedures and offers some specific examples. In practice, however, model validation requires not only technical expertise but also considerable subjective business judgment. Decision makers should recognize that this subjectivity elevates the need for sound and comprehensive validation processes.

GENERAL PROCEDURES FOR MODEL VALIDATION

Three generic procedures apply when validating a model: (a) independent review of the logical and conceptual soundness, (b) comparison against other models, and (c) comparison of model predictions against subsequent real-world events. Depending on the circumstances, any or all of these procedures should be applied separately to each of a model's three components. Regardless of a bank's size, the OCC believes it is essential that banks develop formal policies ensuring that all of these principles are applied when circumstances warrant. The depth and extent of the validation should be consistent with the materiality and complexity of the risk being managed. Properly designed, formal validation policies give staff the necessary guidance as to the rigor desired by decision makers, who in turn can be confident that the bank's modeling information is reliable and useful within the given business context, and is delivered at reasonable cost.

Elements of Sound Validation Policy

A bank's validation policy should help ensure that its model-validation efforts are consistent with senior management's view of the proper trade-off between costs and benefits. To reflect that view, the policy should include the following elements.

Independent Review

The personnel performing model validation should be as independent as possible from the personnel who construct the model. At money-center banks with multiple modeling "shops," independent review is often readily available in house and can be complemented by external reviewers or internal audit. For smaller banks, the validation policy should provide for as independent a review as practicable. When comprehensive independence is not practicable, the policy should explicitly provide for an effective communication process between modelers and decision makers; technical complexity never relieves model builders of the responsibility to provide clear and informative descriptions of modeling assumptions and limitations to senior management.

Defined Responsibility

The responsibility for model validation should be formalized and defined, just as is the responsibility for model construction. Consistent with best practices, policies should specify that, before a model can enter production, (a) the independent model-validation unit or external reviewer must document the model-validation tests and the reasons for concluding that the model is valid, and (b) internal audit must verify that no models enter production without formal approval by the validation unit. At smaller banks that lack the resources for effective independent review, the policy should explicitly require senior management to formally approve all models used for pricing or risk-limit compliance. Management should approve both the conceptual approach and the key assumptions for such models, and verify that reasonable quality-control processes are in place.

Model Documentation

Model documentation creates a corporate memory in the event of the departure of key modeling personnel. At the corporate-wide level, a catalogue of models and their applications should be maintained. Policy should also require documentation for specific models that is adequate to facilitate independent review, training of new staff, and clear thinking by the model developer. The most rigorous policies require documentation sufficiently detailed to allow precise replication of the model being described. At a minimum, model documentation should provide summary overviews of the general procedures used and the reasons for choosing those procedures, describe model applications and limitations, identify key personnel and milestone dates in model construction, and describe validation procedures and results.

Ongoing Validation

Even after entering production, most models are frequently altered in response to changes in the environment or to incorporate improvements in modelers' understanding of the model's subject. However, model alterations can also be used to evade risk limits or disguise losses. For example, modest changes in the assumptions that quantify future interest-rate volatility can significantly reduce a model's estimate of a bank's interest-rate exposure, or increase the estimated value of a position in interest-rate derivatives. Such changes will generally be obscure to senior management, but can hide noncompliance with interest-rate-risk limits or trading losses. Best practices for validation policies require that all changes in the modeling process be documented and submitted for independent review. A useful practice is to allow model changes only periodically, and only after independent review and approval by the appropriate level of the bank's decision makers. It is also useful for a bank to store multiple copies of model code, both to facilitate disaster recovery and to monitor assumption changes. Models should be subject to change-control procedures, so that code cannot be altered except by approved parties.
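The bulletin prescribes no particular change-control mechanism. As a minimal illustrative sketch, assuming the bank records a manifest of approved code when the validation unit signs off, the following Python fragment compares cryptographic hashes of production model files against the approved digests; the file names and manifest format are hypothetical.

    import hashlib
    import json
    from pathlib import Path

    def file_hash(path):
        """Return the SHA-256 digest of a model source file."""
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def verify_model_code(manifest_path):
        """Compare production model files against the approved manifest.

        The manifest is a JSON mapping of file path -> approved SHA-256
        digest, recorded when the validation unit approved the model.
        Returns a list of files that are missing or have been altered.
        """
        manifest = json.loads(Path(manifest_path).read_text())
        exceptions = []
        for path, approved_digest in manifest.items():
            if not Path(path).exists():
                exceptions.append(f"{path}: file missing")
            elif file_hash(path) != approved_digest:
                exceptions.append(f"{path}: code changed without approval")
        return exceptions

    # Hypothetical manifest name; any exception would be escalated
    # for independent review and approval.
    for issue in verify_model_code("approved_models.json"):
        print(issue)

In practice, a check of this kind would run on a schedule, with exceptions reported to the validation unit and internal audit.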
Audit Oversight

While some large banks may locate model-validation units within their internal audit departments, model validation is often outside the scope of audit's responsibilities. Nevertheless, the formal policy should clearly specify that internal audit is responsible for ensuring that model-validation procedures and the model-validation unit adhere to the formal policy.

Validating the Model Inputs Component

Data

It is possible for data inputs to contain major errors while the other components of the model are error free. When this occurs, the model outputs become useless, but even an otherwise sound validation process will not necessarily reveal the errors. Hence, auditing of the data inputs is an indispensable and separate element of a sound model-validation process and should be explicitly included in the bank's policy.

Data come from both internal and external sources. For data arising from internal sources, the bank's audit functions should ensure that the information provided to the model agrees with the bank's general ledger data, the terms of outstanding contracts, and so on. Externally provided data can often be checked against multiple sources. In addition, automated filters and the inspection of inputs by experienced personnel are extremely effective and inexpensive procedures for spotting errors.

In some cases, particularly when models are relatively new, it is difficult for the responsible business units to ensure that the data inputs are always accurate. If a bank decides that a model provides useful information despite known data problems, the bank's policies should specify that audit, risk management, and modeling personnel are independently responsible for apprising senior management of those problems. This alerts decision makers both that the model results may not be completely reliable and that more resources may need to be devoted to providing quality data.
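The bulletin leaves the form of such filters to the bank. As one hypothetical illustration, the Python sketch below applies simple completeness and range checks to loan records and reconciles their total balance against a general-ledger control figure; the field names, plausible-rate bounds, and tolerance are illustrative assumptions only.

    def filter_loan_inputs(records, gl_balance, tolerance=0.001):
        """Flag suspect model inputs before they reach the model.

        records    : list of dicts with 'id', 'balance', and 'rate' fields
        gl_balance : control total from the bank's general ledger
        tolerance  : maximum acceptable relative reconciliation gap
        """
        exceptions = []
        for rec in records:
            if rec.get("balance") is None or rec["balance"] < 0:
                exceptions.append((rec["id"], "missing or negative balance"))
            if not (0.0 <= rec.get("rate", -1.0) <= 0.25):
                exceptions.append((rec["id"], "rate outside plausible range"))
        total = sum(r["balance"] for r in records if r.get("balance") is not None)
        if gl_balance and abs(total - gl_balance) / gl_balance > tolerance:
            exceptions.append(("ALL", f"total {total:,.2f} fails GL reconciliation"))
        return exceptions

    # Hypothetical usage with a single record.
    issues = filter_loan_inputs(
        records=[{"id": "L-1", "balance": 250000.0, "rate": 0.0675}],
        gl_balance=250000.0,
    )
    print(issues)  # an empty list means the inputs passed the basic filters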
Assumptions

Besides raw data, computer models require an array of assumptions. Prime examples include prepayment functions for loan-valuation models, "market-implied" interest-rate volatilities for derivative pricing models, and core-deposit decay assumptions for asset-liability models. These assumptions are generally determined by a separate model, which itself has inputs, processing, and outputs that should be validated using the principles elucidated here.

Many assumptions are available in general form from publicly available sources at relatively low cost. For example, many banks use the market-implied volatilities and mortgage prepayment rates available from various vendors. On the other hand, a bank may conclude that it is better to derive its assumptions by studying its own customer base than by using general information about national or regional populations. Similarly, a bank may believe that it has special insight into market behavior and that its assumptions about markets are superior to publicly available assumptions. Modelers should be able to provide a clear rationale for their choice between public and private assumptions.

Whether drawn from public sources or from the bank's own research, important behavioral assumptions should be routinely compared with actual portfolio behavior. For example, prepayment functions project prepayment rates for "all possible" changes in interest rates. These projections should be compared, on a monthly basis, with the actual prepayment behavior the bank experiences on its residential mortgage loan and security portfolios. As interest rates change, the bank's actual prepayment rates will change. If, over a period of several months, actual prepayments are consistently more pronounced than projected, then the prepayment function is systematically overoptimistic (and the converse holds as well). As a best practice, some banks routinely include these comparisons in their reports to senior management.
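A minimal sketch of such a monthly comparison, with illustrative numbers only, might look like the following Python fragment, which measures the average gap between projected and actual prepayment rates and flags the consistently one-sided bias described above.

    def backtest_prepayments(projected, actual, bias_threshold=0.9):
        """Compare projected vs. actual monthly prepayment rates (e.g., CPR).

        projected, actual : equal-length lists of monthly rates
        Flags the assumption set if nearly all errors fall on one side,
        which suggests a systematically optimistic or pessimistic function.
        """
        errors = [a - p for p, a in zip(projected, actual)]
        mean_error = sum(errors) / len(errors)
        share_above = sum(e > 0 for e in errors) / len(errors)
        one_sided = share_above >= bias_threshold or share_above <= 1 - bias_threshold
        return {"mean_error": mean_error, "one_sided_bias": one_sided}

    # Illustrative data: twelve months of projected vs. realized rates.
    result = backtest_prepayments(
        projected=[0.08, 0.09, 0.10, 0.12, 0.15, 0.18, 0.20, 0.19, 0.16, 0.13, 0.11, 0.09],
        actual=[0.10, 0.11, 0.13, 0.15, 0.19, 0.23, 0.26, 0.24, 0.20, 0.16, 0.13, 0.11],
    )
    print(result)  # a persistent positive mean error suggests projections run too low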
Validating the Model-Processing Component

Model processing consists of the computer code and the theoretical models that the code implements. The choice of theory is at least partly a matter of art rather than science: all theories are greatly simplified representations of reality, and judgment comes into play in deciding which simplifications are acceptable. Apart from the choice of theory, a bank's validation policies for the processing component of its models should ensure that the mathematics and computer code are error free.

Code and Mathematics

A number of procedures exist for testing code. Many models, such as those that operate in spreadsheets, have relatively simple code and equations that can be cheaply tested through the independent construction of an identical model. If the results of the two models agree precisely, the code is very likely sound, because it is highly unlikely that two independently constructed models would contain precisely identical errors.

For more complex models, independent construction of an identical model may be too costly. These situations require alternative practices, including the following:

(1) Assign a modeling professional the task of proofreading the code line by line. This practice may uncover most errors, but it is not foolproof.

(2) Where possible, compare the model's results with the results of a second, well-validated "benchmark" model. This procedure is most useful when the validator can ensure that the inputs and theory in the second model are identical to those of the first, at least for a trial run. In most cases, however, the inputs and theory will differ at least slightly between the two models, so there will be at least slight discrepancies between the model outputs. Unless the discrepancies are glaring, the validator must render a subjective judgment as to whether the output differences result from input differences or from a processing error in the model under construction.

Even if a bank uses a vendor model, it should seek assurances that the model is defensible and works as promised. Vendor models present banks with a trade-off between convenience and transparency. Within the limits of vendors' proprietary information, bank users of vendor models should require vendors to provide information on how they built and validated the model. As professional modelers, vendors should themselves follow good model-validation practices and demonstrate as much to client banks. One common misconception is that validation of the computer processing is not necessary for vendor models because these models have "met the market test." In fact, banks that apply good validation procedures to vendor models often find material processing errors. These experiences illustrate that the same validation principles should be applied whether a model is purchased from a vendor or developed in house. When evaluating vendor models, banks should also consider the ease with which processing and other software errors, once identified, can be corrected.
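As an illustrative sketch of the independent-construction and benchmark-comparison procedures described above, the Python fragment below runs identical trial inputs through a model under validation and an independently built benchmark and reports any discrepancies beyond a tolerance; the two pricing functions are hypothetical placeholders that implement the same calculation in two different ways.

    def compare_models(model_a, model_b, trial_inputs, tolerance=1e-6):
        """Run identical inputs through two independently built models.

        model_a, model_b : functions mapping one input case to an estimate
        trial_inputs     : shared test cases (identical inputs and assumptions)
        Returns the worst absolute discrepancy and the cases exceeding tolerance.
        """
        discrepancies = []
        for case in trial_inputs:
            diff = abs(model_a(case) - model_b(case))
            if diff > tolerance:
                discrepancies.append((case, diff))
        worst = max((d for _, d in discrepancies), default=0.0)
        return worst, discrepancies

    # Hypothetical usage: value a simple zero-coupon cash flow two ways.
    production_model = lambda c: c["face"] / (1 + c["yield"]) ** c["years"]
    benchmark_model = lambda c: c["face"] * (1 + c["yield"]) ** -c["years"]
    cases = [{"face": 100.0, "yield": 0.05, "years": t} for t in range(1, 11)]
    print(compare_models(production_model, benchmark_model, cases))

Precise agreement across the trial cases supports the conclusion that the code is sound; material discrepancies require the validator's judgment as to their source.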
Theory

Implementing a computer model usually requires the modeler to resolve several questions of statistical and economic theory. Generally, the answers to those theoretical questions are a matter of judgment, though the theoretical implementation is also prone to conceptual and logical error. An obvious safeguard against this source of model error is to ensure that the theorist has the training and experience necessary to perform the work. One of the largest sources of model error is the use of theoretical tools, most often statistical methods, by untrained modelers.

Regardless of the qualifications of the model developers, an essential element of model validation is independent review of the theory that the bank uses. In many circumstances, internal review will be quite effective. In other circumstances, effective internal review is difficult to obtain. In those situations, senior management should expect modelers to (a) provide clear descriptions, in nontechnical terms, of the theory underlying the models, and (b) show that the theory underlying the model has received recognition and support in professional journals or other forums.

Comparison with other models is often very useful for uncovering theoretical errors. In this context, other models include pre-existing or similar models already in use at the bank, market prices (which represent the "true model"), and publicly available model results. When a new model is developed, comparing its results with these other sources of information will confirm the modeler's expectations, reveal a model error, or lead to an enhanced understanding of the phenomenon under scrutiny.

Model Reports (Management Information Systems)

After processing the inputs, the model produces price or exposure estimates, or decision indices, that will be used by decision makers. Obviously, the model-validation process should assess the validity of those estimates. It is equally important, however, that the reports distilled from model output be clear and that decision makers understand the context in which the model results are generated.

Validating Model Results

Many of the procedures used to validate the input and processing components of a model are also useful for validating the model results. When a model begins to produce outputs, model developers and validators should compare its results against those of comparable models, market prices, or other available benchmarks. Once a model is in use, its estimates should be continually compared with actual results, a procedure referred to as "back testing," "out-of-sample testing," and similar terms. Many models, asset-liability models in particular, deliver projections that are "conditional" upon the economic environment that actually materializes; over time, such conditional projections can also be validated against actual outcomes.

Validating the Context of Reports

The business decision maker and the modeler often have quite different backgrounds. Even with apparently clear pricing and risk reports, the modeler and the decision maker may interpret the information in quite different ways. For example, decision makers often mistake a model's risk estimates for the "worst-case scenario" for their banks, even though there are inevitably plausible scenarios and assumptions under which the bank could lose more than estimated. A bank's model-documentation policy should therefore require an executive summary that is made available to senior management. In strict logical terms, the questions that models answer are invariably quite narrow, so a clear statement of a model's purpose helps senior decision makers understand its limitations. The summary should also state the major assumptions, further illuminating the model's limitations. Independent review of a model's underlying theory should always extend to the reports that transmit information from the modeler to the decision maker. An essential element of designing a model's reports is ensuring that the results are communicated clearly and accessibly.

In addition to the estimates used to assess fair values or risk, best-practice model reports also contain sensitivity analyses, or so-called "what if" scenarios. These provide alternative estimates based on reasonable alternatives for the major assumptions. Sensitivity analysis serves not only to provide a range of estimates, but also to communicate to decision makers the robustness or fragility of the model's outputs.
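As a minimal hypothetical sketch of such a sensitivity analysis, the Python fragment below revalues a position under reasonable alternatives for one key assumption and reports the resulting range alongside the base estimate; the valuation function and assumption values are placeholders, not a prescribed methodology.

    def sensitivity_report(value_fn, base_assumptions, alternatives):
        """Re-run a valuation under alternative values for key assumptions.

        value_fn         : model valuation as a function of an assumptions dict
        base_assumptions : the assumptions behind the primary estimate
        alternatives     : dict mapping assumption name -> alternative values
        """
        report = {"base_estimate": value_fn(base_assumptions)}
        for name, values in alternatives.items():
            estimates = []
            for v in values:
                scenario = dict(base_assumptions, **{name: v})
                estimates.append(value_fn(scenario))
            report[name] = {"low": min(estimates), "high": max(estimates)}
        return report

    # Hypothetical usage: sensitivity of a simple perpetuity value
    # to the assumed discount rate.
    value = lambda a: a["cash_flow"] / a["rate"]
    print(sensitivity_report(
        value,
        base_assumptions={"cash_flow": 5.0, "rate": 0.08},
        alternatives={"rate": [0.06, 0.08, 0.10]},
    ))

A wide low-to-high spread signals to decision makers that the base estimate is fragile with respect to that assumption.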
SUMMARY OF SUPERVISORY EXPECTATIONS REGARDING MODEL VALIDATION

Model validation can be costly, particularly for smaller banks. On the other hand, using unvalidated models to manage the bank's risks is potentially an unsafe and unsound practice. Even where the risk is not particularly material, reliance on unvalidated models can be a poor business practice. Supervisors believe that the assessment of the costs and benefits of model validation is subjective and context-driven, and is the responsibility of senior management. To promote a sound process, the OCC expects formal policies to ensure that the following goals are met:

(a) Decision makers understand the meaning and limitations of a model's results. Where the models are too abstract for nonspecialists to understand the underlying theory, the bank must have a model-reporting system in place that transforms the models' outputs into useful decision-making information without disguising the models' inevitable limitations.

(b) Particularly when a model has been in use for a reasonable period of time, its results are tested against actual outcomes.

(c) The bank demonstrates a reasonable effort to audit the information inputs to the model, and input errors are addressed in a timely fashion.

(d) The seniority of the management overseeing the modeling process is commensurate with the materiality of the risk in the line of business being modeled.

(e) To the extent feasible, model validation is independent from model construction.

(f) Responsibilities for the various elements of the model-validation process are clearly defined.

(g) Modeling software is subject to change-control procedures, so that developers and users cannot change code without review and approval by an independent party.

Computer models are increasingly used in banking to estimate risk exposure, analyze business strategies, and estimate fair values of financial instruments and acquisitions. As models play an increasingly important role in decision-making processes, it is critical that bank management reduce the likelihood of erroneous model output or incorrect interpretation of model results. The best defense against such "model risk" is the implementation of a sound model-validation framework that includes a robust validation policy and appropriate independent review.

If you have any questions about the contents of this bulletin, please contact the Risk Analysis Division at (202) 874-5250 or the Treasury and Market Risk Division at (202) 874-5670.

Jeffrey A. Brown
Director, Risk Analysis Division

Kathryn E. Dick
Director, Treasury and Market Risk Division