HHS IRM Guidelines for Capital Planning and Investment Control
January 8, 2001
HHS-IRM-2000-0001-GD

TABLE OF CONTENTS
A. Guideline A - Model Process
A.1 Requirements Management
A.1.1 Business Need Identification
A.1.2 Project Formulation
A.1.3 Performance Measures
A.1.4 Cost-Benefit and Alternatives Analysis
A.1.5 Return on Investments
A.1.6 Economic Sensitivity Analysis
A.1.7 Gap Analysis
A.1.8 Risk Assessment
A.2 Investment Decision Management
A.2.1 Concept Review Based on Mission Impact
A.3 Systems Development
A.3.1 Systems Life Cycle
A.3.2 Project Management
A.3.3 Technical Strategy and Plan
A.3.4 Configuration Management
A.3.5 Measuring Performance
A.4 Operations and Maintenance
A.5 Project Performance Evaluation
B. Guideline B: The Raines’ Rules
C. Guideline C: HHS IT Capital Planning and Investment Control/Budget Formulation Time Line
D. Guideline D: Financial Criteria
E. Guideline E: Monte Carlo Analysis
F. Guideline F: Earned Value
G. Guideline G: The Capability Maturity Model

Guideline A - Model Process

This model process is to be used as guidance to assist the OPDIVs in managing their CPIC processes and procedures. OPDIVs may adopt their own specific process. Some of the outputs specified below are required to be completed and available for Departmental ITIRB review.

The IEEE/EIA U.S. implementation of the ISO/IEC 12207 Standard for Information Technology – Software Life Cycle Processes defines a framework for software life-cycle processes from concept through retirement. It is especially suited for acquisitions because it recognizes the distinct roles of acquirer and supplier. ISO/IEC 12207 describes five primary processes: acquisition, supply, development, maintenance, and operation; eight supporting life cycle processes: documentation, configuration management, quality assurance, verification, validation, joint review, audit, and problem resolution; and four organizational life cycle processes: management, infrastructure, improvement, and training. The full standard is available from IEEE - http://www.ieee.org. DoD has adopted 12207 in place of its earlier software development life cycle specifications (e.g., DOD-STD-2167, DOD-STD-2167A, MIL-STD-498).
A similar approach will be used in capital planning for all projects subject to HHS IRM Policy # HHS-IRM-2000-001, "Capital Planning and Investment Control." OPDIVs should adopt this or similar processes for projects under their direct control. At each of the first four stages, a business case is developed, refined, and updated. The business case consists of documents outlining the requirements, return on investment (ROI), cost, scope, schedule, and risk associated with the project. After the project is initially approved by the ITIRB, the business case is generally updated annually. This may be required more frequently for specific projects. The output from each of the first three stages provides the information necessary for the Department and/or the OPDIV’s ITIRB to make the decision to move forward with the project. Figure A, "ITIRB Review Process with Required Products of HHS IRM Policy for Capital Planning and Investment Control," illustrates the stages and outputs expected from the OPDIVs during the ITIRB process. The following sections provide detail on the recommended content and documentation for ITIRB review.

A.1 Requirements Management

In order to identify, and therefore manage, the requirements of any project, a particular sequence of steps is usually followed. A business need is identified, and an initial business case for the project, including cost, scope, schedule, and risk, is developed.

A.1.1 Business Need Identification

All business needs flow from the mission of the OPDIV. As the mission changes, additional projects reflecting new or changed emphasis on IT are identified. IT is often used as a tool to implement the solution to the changes. The first step in developing an IT investment project is to identify a specific business need and preliminary, high-level system requirements. A high-level, rough order of magnitude (ROM) determination of resource and schedule requirements is conducted.
In addition, pre-coordination with stakeholders and other interested parties (e.g., executive management, budget staff) can assist by providing necessary information to each party. As the system concept is developed, the general scope of the project is established.

A.1.2 Project Formulation

As a part of the Federal budget cycle, a formal call for the submission of proposed IT projects is generally issued by the OPDIV CIO in September, when the organization is developing the operating plan for the upcoming fiscal year. The IT budget, at its highest level, is simply the aggregate of all of the approved IT projects for a given year, with possible exceptions for emergency situations (e.g., new legislation, an unexpected system failure). To ensure that the HHS CIO is working with up-to-date information regarding the Department’s IT investments and expenditures during the annual Budget Review Board sessions, OPDIVs must update and submit an OMB Exhibit 53, "Agency IT Investment Portfolio," and any necessary OMB Circular A-11 forms 300B, "Capital Asset Plan and Justification." HHS IT projects include new investments, ongoing investments, and ongoing operations and maintenance systems, as well as grant programs with significant IT components. See the timeline in Guideline C, "HHS IT Capital Planning and Investment Control/Budget Formulation Time Line." The project manager prepares a spending/procurement plan that lists the specific purchases, including operations and maintenance costs, anticipated for the coming year. One purpose of the plan is to ensure timely, efficient procurement of planned IT acquisitions throughout the year. Another purpose is to assure availability of funds in the appropriate quarter of the fiscal year. A business case containing a complete description of the project must be created and justified for the project.
The business case includes:
A business needs statement that defines the specific need(s) this system will fulfill and clearly demonstrates how the project supports the OPDIV and HHS strategic plans, business functions, mission-critical functions, and the HHS IT architecture.
A project plan and schedule that list the major milestones of the project.
A work breakdown structure that lists the hierarchical arrangement of the work to be performed.
The paragraphs below cover some of the required input to complete the eight documents delineated above.

A.1.3 Performance Measures

Performance measures shall be established to demonstrate a successful outcome by designating accountability for the results and by striving for concrete results. All systems developed and deployed since 1996 require the establishment and tracking of appropriate performance measures. Performance measures are established to track the success of the project. Each performance measure shall be reported and monitored, and should demonstrate no more than a ten (10) percent deviation from the target. Example measures include:
(1) Amount of contract award: Is the amount of the initial contract award within the budget? Is the overall projected contract value, at the time of award, within the budget?
(2) Baseline or current performance or level of service: How will the current environment or performance be improved?
(3) Implementation status: Describe the service level achieved compared to the level planned. Will the project produce the anticipated return on investment? Are deliverables for each quarter being received on time?
(4) Performance of hardware/software or other purchases: Are hardware and software packages working well for the agency? Are the hardware and software contributing to project success? For example, is downtime excessive?
(5) Schedule: How does the current schedule compare with the project schedule?
(6) Risk: How does projected risk compare to actual risk?
(7) Definition of benefits and subsequent measures: Are the definitions and measures appropriate?
(8) Benefits: Are the benefits being achieved? For example, are they producing the anticipated cost savings and cost avoidance?
(9) Meeting business need: Has the project met the identified business need?
The OPDIV should add any other performance measures that are appropriate or necessary.

A.1.4 Cost-Benefit and Alternatives Analysis

The cost/benefit analysis should identify at least three alternatives considered to implement the proposed development or change request, and identify the option with the greatest benefit to the agency. For each alternative, the project manager should show how the project’s functional requirements would or would not be met, and provide estimates of the project’s life cycle costs and the anticipated benefits or return, including the corresponding analyses that were conducted to develop the estimates. A complete cost/benefit analysis should include the following for each alternative considered:
(1) Identify and quantify costs, both recurring and non-recurring, for investment and for operations and maintenance.
(2) Identify assumptions and constraints that were used when developing the figures.
(3) Identify both quantitative and qualitative benefits.
(4) Evaluate alternatives using net present value techniques, including a discount factor, ROI, and cost-benefit ratio.
(5) Conduct a gap analysis, which identifies needs that are not being fulfilled.
Cost and benefit risk elements include, but are not limited to:

A.1.5 Return on Investments

The ROI of the project shows how much the project benefits the organization, in comparison to its cost, through savings or cost avoidance. Anticipated return can be the basis for measuring project performance. Prototypes and pilots can help quantify anticipated gains in performance or reductions in cost by testing expectations on a small scale.
All enhancements are subject to an ROI justification. In general, the industry convention is that if the annual cost of enhancing an application exceeds 20 percent of its replacement cost, or the enhancement modifies more than 20 percent of the code, there may be less risk and less cost in replacing the system. Present value and net present value (NPV) analyses are used to compare investment alternatives that occur over multiple years. See Guideline D, "Financial Criteria."

A.1.6 Economic Sensitivity Analysis

A sensitivity analysis refers to the relative magnitude of change in one or more elements of an economic analysis. Due to the uncertainties in the analysis, it is necessary to know more than the results using one set of conditions, especially if a recommendation would change if one or more of the input variables to the cost-benefit analysis changed. Sensitivity analysis expresses cash flows in terms of unknown variables and calculates the consequences of wrongly estimating those variables. These variables may be cost estimates, implementation schedule length, requirements, configuration of equipment or software, conversion costs, or other types of assumptions. The "Monte Carlo" method of simulation is a useful example; the method was originally developed for computer simulations of nuclear fission, which were used to test whether an atom bomb was feasible. Many products can be used to conduct a sensitivity analysis, depending on the OPDIV needs and the scope and scale of the project purchase. Refer to Guideline E, "Monte Carlo Analysis."

A.1.7 Gap Analysis

When an IT project is initiated, a gap may exist between the current state and the desired state. A gap analysis compares and examines the current system functionality against the proposed system and requirements. If the proposed system meets the current system functionality or new requirements, there is a fit. If it does not, then there is a gap.
If the proposed system exceeds the requirements or current system functionality, then there is "value added." Once a list of unique functionality has been developed for each alternative, the analysis can begin to identify benefit categories. Each alternative will not necessarily provide the same benefits, nor will each necessarily provide a given benefit in equal amounts.

A.1.8 Risk Assessment

Risk assessment and risk management are ongoing activities throughout the life of the project. The project owner is responsible for recognizing risks and for developing risk mitigation strategies. Again, prototypes and pilots can minimize risk by substantiating assumptions and testing technical solutions on a small scale. Typical categories of risk include:
Project risks - the size of the investment, the project size and longevity, scalability, project interdependencies, or the experience of the project team in managing similar projects.
Organizational risks - the impact on agency programs if the project does not proceed, management commitment to the project, or political expectations or legislative requirements to complete the project.
Technical risks - risks associated with maintaining skilled staff, hardware and software dependencies, application software, other infrastructure needs, and security vulnerabilities and safeguards.
To deal effectively with potential risks, the project owner must develop a risk assessment plan that includes:
Risk identification - documenting events that could jeopardize the successful completion of the project within the quality, cost, and schedule defined for the project.
Risk analysis - estimating the impact of risk on the quality, cost, and project schedule and seeking ways to reduce the impact of a risk event prior to its occurrence.
Risk monitoring and reassessment - reporting on the progress of the project, deleting risks as necessary to reflect the current situation, identifying new risks, and beginning mitigation action to reduce risk.
Deviation by more than 10 percent from the performance measures requires risk mitigation.
Risk progress reporting - developing complete documentation on identified risk events, mitigation actions, and the success or failure of those actions.
Finally, if the OPDIV is considering more than one investment, all candidate systems must be ranked and prioritized by the strength of the business needs.

A.2 Investment Decision Management

All projects must be reviewed and prioritized for funding, and proceed based on mission and cross-functional impact.

A.2.1 Concept Review Based on Mission Impact

The highest priority projects will be those that have the greatest impact on the OPDIV mission or on other dependent agencies. A project may be important to the OPDIV without the OPDIV considering it critical. In such cases, the OPDIV may "wait-list" the project in favor of another, more important project, or until the OPDIV makes funding available, or until the OPDIV can assure that an appropriately skilled project team is available. The OPDIV’s ITIRB must evaluate each project concept based on the project’s impact and the probable benefit to the OPDIV and/or other OPDIVs, or HHS as a whole. The review will result in a rating for each system. The rating will then be used to form a priority ranking of the candidate systems. The budget should allow for a mix of projects, including proposed projects, projects under development, and operational systems. The portfolio should ideally be a mix of mission-related systems, administrative systems, and infrastructure. The OPDIV ITIRB process for ranking and funding projects shall build in project management costs, IV&V funds, and independent testing funds up front for the largest and most critical projects. Actual costs for operations and maintenance shall be calculated for five years, and shall include hardware and software, technical refreshment, and training funds, at a minimum.
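The rating-and-ranking step described above can be sketched as a simple weighted-score computation. The criteria, weights, and project scores below are illustrative assumptions only; this guideline does not prescribe specific criteria or weights.

```python
# Illustrative weighted scoring for ranking candidate IT projects.
# Criteria, weights, and scores are hypothetical examples, not HHS policy.
WEIGHTS = {"mission_impact": 0.4, "cross_opdiv_benefit": 0.3, "risk": 0.3}

projects = {
    "Project A": {"mission_impact": 9, "cross_opdiv_benefit": 6, "risk": 7},
    "Project B": {"mission_impact": 7, "cross_opdiv_benefit": 9, "risk": 5},
}

def rating(scores):
    """Weighted sum of criterion scores (0-10 scale)."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

# Form the priority ranking from the ratings, highest first.
ranked = sorted(projects, key=lambda p: rating(projects[p]), reverse=True)
for p in ranked:
    print(f"{p}: {rating(projects[p]):.1f}")
```

In practice an ITIRB would use its own scoring criteria; the point is only that each rating yields a single number from which the priority ranking follows mechanically.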
If full funding is not received, the project owner may need to reduce the level of service or activities proposed to reflect the new, lower amount. If the project is not approved at all, the project owner may wait and resubmit the project at a later date, or refine the implementation strategies based on guidance provided by the board. The ITIRB may find that the project is under-funded and cannot achieve success without more money. In any case, the business alternatives analysis and/or cost-benefit analysis shall be updated and resubmitted if the project is intended to proceed.

A.3 Systems Development

Each project shall use system life cycle and development planning. Systems development shall include planning for any training, configuration management, quality assurance, risk management, and any other collateral tasks associated with the project. A technical strategy and the associated plan shall be developed, since the project is mature enough at this point for detailed design. All systems design, development, and maintenance efforts must follow a minimal set of activities, documentation, and review requirements.

A.3.1 Systems Life Cycle

"System Life Cycle (SLC)" is defined as a structured development approach with defined activities, phases, products, and reviews that provide a standard to support the development of systems from identification through implementation, operation, maintenance, and eventual retirement. The systems life cycle process is a basic requirement for systems development. There are a variety of life cycle models, such as waterfall, spiral, evolutionary, decomposition (or stepwise refinement), and formal transformation. The choice among the models is made based on the specific project. Once a model is chosen, any change in the method being employed must be justified to the ITIRB. To improve information management to support these business operations, the SLC shall guide technology investments and process improvements.
The SLC incorporates objectives, formal documentation, and key reviews throughout its major activities and phases that allow managers to make key decisions about systems development and information technology control. Planning, oversight, and approval processes within the SLC support the development of high-quality systems, delivered on time and within budget.

A.3.2 Project Management

The project manager is responsible for tracking and assessing actual performance against the established goals and projections. Reporting on project performance should be done throughout the life of the project and is the basis for monitoring the progress of the project against projected costs, schedules, performance, and deliverables. If a project is projected to be, or actually is, late or over cost, a corrective action plan must be developed and adjustments and improvements made for the project to continue. Early identification and correction of any problems that arise are critical. Project managers will report performance using earned value techniques (see Guideline F, "Earned Value"). The project manager is responsible for:
(1) Application of the specific measures included in the project plan to track and compare estimated versus actual values for costs, schedules, benefits, risks, and project success.
(2) Documentation of any changes in the nature and scope of the requirements between the original requirements and the current business conditions, and changes in the business assumptions.
(3) Preparing "lessons learned" and identifying process improvements that can be applied to later phases of the project or to similar projects.
Project managers are responsible for staffing, how staff will be used, and how long the project will take. A complete project schedule should include:
Project managers should continuously compare the actual cost and benefit data for the project to the initial figures submitted in the project plan.
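The continuous comparison of actual against planned figures described above can be sketched as a simple deviation check against the ten-percent threshold this guideline sets for performance measures. The planned and actual figures below are hypothetical examples.

```python
# Flag project measures that deviate more than 10% from their targets.
# Planned/actual figures are hypothetical examples.
THRESHOLD = 0.10  # the ten-percent deviation trigger from this guideline

measures = {
    "cost_to_date":   {"planned": 500_000, "actual": 560_000},
    "milestones_met": {"planned": 8,       "actual": 8},
}

def deviation(planned, actual):
    """Relative deviation of the actual value from the planned value."""
    return abs(actual - planned) / planned

for name, m in measures.items():
    d = deviation(m["planned"], m["actual"])
    status = "NEEDS MITIGATION" if d > THRESHOLD else "on track"
    print(f"{name}: {d:.1%} deviation ({status})")
```

Here the hypothetical cost figure deviates by 12 percent and would trigger the risk mitigation this guideline requires, while the milestone count is on track.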
If there have been approved changes in the project as a result of changing environments, budgets, or priorities, the HHS IT repository should be revised to reflect these changes, with appropriate documentation and rationale.

A.3.3 Technical Strategy and Plan

OPDIV program officials/project managers must work closely with technical experts to ensure that all system requirements are fully developed. The technical plan must address the technical parameters of the project in detail, including the platform chosen and the expected impact on the infrastructure. The assumptions regarding the level of data processing resources, the amount of staffing support anticipated, and who will provide the support (contractor staff or OPDIV staff) must be included. In addition, the amount of operations and maintenance support must be considered. The plan must address the options considered, and provide a justification of why the specific models or types of hardware, software, network, and telecommunications purchases noted in the spending plan were selected.

A.3.4 Configuration Management

All HHS system proposals constitute formal change requests. The change requests will be designated critical, routine, or administrative. Any proposed new system development or upgrade must be documented with a change request and, eventually, a business case. A program-level change management process is critical to control and document changes to systems. The OPDIV shall create and operate a change control board. In most cases, change control will exist to manage changes concerning a related family of systems. Change management occurs throughout all phases of the system life cycle. Changes during maintenance of existing systems cycle forward into new business needs and the need to modify existing systems or develop new systems.
All changes must be tracked and monitored for adherence to the HHS ITA, interaction with other systems and organizations (crosscutting systems only), HHS and OMB policies, the CCA, and other pertinent policies and legislation. Substantive changes that meet ITIRB thresholds need to go through the ITIRB process. Specifically, the OPDIV shall:

A.3.5 Measuring Performance

OPDIVs must establish a system to measure the performance and cost of an operational asset against the baseline established in the planning phase. This information will allow agency managers to optimize the performance of capital assets and identify the need for new investments. The cost of asset ownership is defined as the total of all costs incurred by the owners and users to obtain the benefits of a given acquisition. Ownership costs, such as operations and maintenance, service contracts, and disposition, can easily consume up to 80 percent of total life-cycle costs.

A.4 Operations and Maintenance

Proper maintenance is essential to the life of an asset. If an asset is not properly maintained, its useful life can be shortened dramatically, reducing the return on the taxpayers’ investment. Day-to-day operation and maintenance of any asset must be carefully planned. In addition, the projected maintenance cost of the asset must be factored into the procurement of that asset, to make a best-value determination when selecting between competing proposals, and tracked throughout its life cycle. Factors affecting the successful implementation of an operations and maintenance plan include:
The OPDIV should have a close working relationship with the vendor's maintenance representative; the vendor is required, by contract, to keep the equipment in good operating condition. The maintenance provisions of the contract are the basis for negotiating a maintenance agreement with the vendor(s).
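The cost-of-ownership point above, that operations, maintenance, and disposition can dominate life-cycle cost, can be illustrated with a small total-cost calculation. All dollar figures below are hypothetical examples, not HHS benchmarks.

```python
# Illustrative total-cost-of-ownership breakdown; all figures are hypothetical.
acquisition = 1_000_000            # one-time purchase and implementation cost
annual_ownership = {
    "operations_and_maintenance": 250_000,
    "service_contracts":          120_000,
    "training_and_refresh":        80_000,
}
years = 5                          # the five-year O&M horizon this guideline uses
disposition = 50_000               # end-of-life disposal cost

ownership = years * sum(annual_ownership.values()) + disposition
total = acquisition + ownership
print(f"Ownership share of life-cycle cost: {ownership / total:.0%}")
```

Even with these modest made-up figures, ownership costs approach 70 percent of the life-cycle total, which is why the guideline requires them to be estimated up front and tracked.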
A.5 Project Performance Evaluation

The evaluation phase occurs once the project has been completed, and consists of an assessment of the project's success or failure at each major phase. The Contracting Officer may seek past performance data on an annual basis during the contract performance period. The IT project owners should make sure that their input to such data is consistent with the past performance evaluation data that they collect in the HHS IT repository. Past performance data about the project shall be collected in the HHS IT repository and analyzed to compare expected results against actual benefits and returns. The past performance evaluation will be based on:
The project’s overall effectiveness in meeting the original objectives.
The identification of the specific benefits that have been achieved, including whether these benefits match the projections and the reasons for any discrepancy.
A comparison of actual costs incurred against the projected costs.
A determination of how well the project met time schedules, and a determination of management and user perspectives on the project, if necessary.
An evaluation of issues that require further attention.
A summary of "lessons learned" for use in future project management.

Guideline B: The Raines’ Rules

Guideline C: HHS IT Capital Planning and Investment Control/Budget Formulation Time Line

April/May - HHS ITIRB meetings held and recommendations to the CIO
completed.
June/July - HHS Budget Review Board meetings held and funding decisions made (CY+2 or Budget Year +1).
August - HHS ITIRB meeting.
September - Final Secretary’s initiative papers due.
October - Beginning of the new fiscal year.
November - HHS ITIRB meeting; OMB passback received in the Office of Budget.
December - Department appeals of OMB passback.
January - Final budget all-purpose table due; includes all years.
February - HHS ITIRB meeting.
March - OPDIV investment reviews completed and documentation submitted, in
accordance with the OMB passback instructions.

Guideline D: Financial Criteria

Financial criteria are required for mission-related and administrative IT projects (proposed, ongoing, and existing). Two major but relatively simple financial analyses are used to evaluate projects: Net Present Value (NPV) and Return on Investment (ROI). Present value (PV) and net present value analyses are used to compare investment alternatives that occur over multiple years. PV requires that all quantifiable benefits and costs be brought back to current-day dollar values; i.e., "present" value. By defining all costs and benefits in current dollar amounts, various alternatives can be compared directly. An analysis becomes a net present value when the analyst subtracts the project’s PV costs from the forecasted PV benefits. When estimating costs over time, a person initially estimates dollar amounts in current-day dollars. This is known as using constant base-year dollars to estimate costs. However, this method does not account for the time value of money. To account for this timing, or opportunity, difference, an analyst multiplies each yearly cost and benefit by a yearly discount factor. Once each dollar amount is multiplied by the discount factor, the amount is considered to be in "present value" dollars. The equation used to discount the cost/benefit amounts is P = F x 1/(1 + i)^n, where P = present value, F = future amount, i = discount rate, and n = year of project (base year = 0). PV is needed in addition to Return on Investment because Return on Investment does not account for the magnitude of the savings, just the relative savings ratio. Return on Investment is simply the rate of return for a given investment. Considering financial criteria only, and given a number of investment opportunities, the goal of capital planning is to construct an investment portfolio with the highest overall Return on Investment.
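The discounting formula P = F x 1/(1 + i)^n can be applied year by year to compute net present value and the discounted ROI. The cash flows and the seven-percent discount rate below are illustrative assumptions, not prescribed values.

```python
# Net present value from yearly costs and benefits, per P = F / (1 + i)**n.
# Cash flows and discount rate are hypothetical examples.
def present_value(future, rate, year):
    """Discount a future amount back to base-year (year 0) dollars."""
    return future / (1 + rate) ** year

def npv(costs, benefits, rate):
    """Return (TDB - TDC, TDC): total discounted benefits minus costs, and costs."""
    tdb = sum(present_value(b, rate, n) for n, b in enumerate(benefits))
    tdc = sum(present_value(c, rate, n) for n, c in enumerate(costs))
    return tdb - tdc, tdc

costs    = [400_000, 100_000, 100_000]   # yearly costs, years 0-2
benefits = [0, 300_000, 400_000]         # yearly benefits, years 0-2
value, tdc = npv(costs, benefits, rate=0.07)
print(f"NPV: ${value:,.0f}  ROI: {value / tdc:.1%}")  # ROI = (TDB - TDC) / TDC
```

Because every amount is discounted to base-year dollars before summing, alternatives with different cash-flow timing can be compared directly, which is the point of the PV requirement above.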
For these purposes, Return on Investment is defined as NPV divided by present value costs, expressed as a percent: ROI = (TDB - TDC)/TDC, where TDB = total discounted benefits and TDC = total discounted costs.

Guideline E: Monte Carlo Analysis

A sensitivity analysis refers to the relative magnitude of change in one or more elements of an economic analysis. Due to the uncertainties in the analysis, it is necessary to know more than the results using one set of conditions, especially if a recommendation would change if one or more of the input variables to the cost-benefit analysis changed. Sensitivity analysis expresses cash flows in terms of unknown variables and calculates the consequences of wrongly estimating those variables. These variables may be cost estimates, implementation schedule length, requirements, configuration of equipment or software, conversion costs, or other types of assumptions. The "Monte Carlo" method of simulation is one example. The method takes its name from the fact that the random numbers used to generate the trials are independent from trial to trial, like spins of a roulette wheel. It was originally developed for computer simulations of nuclear fission, which were used to test whether an atom bomb was feasible. Monte Carlo simulation is a mathematical technique for numerically solving differential equations, and it is extensively used in finance for such tasks as pricing derivatives or estimating the value at risk of a portfolio. Because many financial problems are complex, the method is frequently applied to them. A typical Monte Carlo simulation is used to solve problems that require one or more statistics of a probability distribution to be calculated. One example is pricing a complex option. Suppose the option’s value depends on two underliers: a stock index and an exchange rate.
A Monte Carlo simulation might be used to price such an option. It can randomly generate 10,000 scenarios for the values of the two underliers on the option’s expiration date, doing so in a manner consistent with an assumed joint probability distribution of the two variables. It can then determine the option’s expiration value under each scenario and form a histogram of the results, generating an approximation of the probability distribution of the option’s expiration value. The discounted mean of the histogram is the estimated option price. More information on the Monte Carlo method may be found at http://contingencyanalysis.com.

Guideline F: Earned Value

In 1967, DoD established the earned value concept. There are two major objectives of an earned value system: (1) to encourage contractors and government staff to use effective internal cost and schedule management control systems; and (2) to permit the government to rely on timely data produced by those systems for determining product-oriented contract status. The essence of earned value management is that, at some level of detail appropriate for the degree of technical, schedule, and cost risk or uncertainty associated with the program, a target value (e.g., budget) is established for each scheduled element of work. The project owner must ensure that both contractor and government plans, budgets, and scheduled work are established in increments that constitute a performance measurement baseline. Milestones defined as deliverables are concrete and measurable and can demonstrate work accomplished. These are the "planned value." The sum of work completed and the cost for direct and indirect effort is the "earned value." An understanding of several terms is necessary to understand earned value calculations. These are "budgeted cost of work scheduled," or BCWS, and "budgeted cost of work performed," or BCWP.
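These quantities can be illustrated with a short calculation. The figures below follow the four-week, $4 million illustration used in this guideline (after three weeks, $2 million spent, 25 percent complete); ACWP (actual cost of work performed) and the derived indices are standard earned value terms rather than anything specific to HHS.

```python
# Earned value indicators for the guideline's example: a $4M, 4-week project
# that after 3 weeks has spent $2M but is only 25% complete.
budget_at_completion = 4_000_000        # total planned cost (BAC)
bcws = 3_000_000                        # planned value of work scheduled to date
acwp = 2_000_000                        # actual cost of work performed to date
bcwp = 0.25 * budget_at_completion      # earned value: 25% complete -> $1M

cost_variance = bcwp - acwp             # negative -> over cost
schedule_variance = bcwp - bcws         # negative -> behind schedule
cpi = bcwp / acwp                       # cost performance index
estimate_at_completion = budget_at_completion / cpi

print(f"CV: ${cost_variance:,.0f}  SV: ${schedule_variance:,.0f}")
print(f"CPI: {cpi:.2f}  EAC: ${estimate_at_completion:,.0f}")
```

A planned-versus-actual comparison alone would show the project $1 million under its three-week spending plan; the earned value indicators instead show a CPI of 0.5 and an $8 million estimate at completion, twice the budget.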
As an example, if BCWS = BCWP = ACWP (the "actual cost of work performed"), the work scheduled equals the work performed at the planned cost, and there is no deviation. If a deviation occurs, the cost variance is BCWP - ACWP. Indices may also be used to measure deviations; a spending deviation, for example, is noted as ACWP/BCWS. Other helpful terms include Earned Value (EV, equivalent to BCWP) and Actual Cost (AC, equivalent to ACWP). For example, EV/AC yields a cost performance index, and EV minus the budgeted (planned) value yields the schedule variance. To illustrate, assume that a contract calls for an IT system to be developed in four weeks at a cost of $4 million. After three weeks of work, only $2 million has been spent. By comparing planned versus actual expenditures alone, the project appears to be under-running its estimated costs. However, an earned value analysis reveals that the project is in trouble: even though only $2 million has been spent, the system is only 25 percent complete. On the basis of work completed, the project will cost $8 million ($6 million more to complete the remaining 75 percent of the work), and the work will ultimately take a total of 12 weeks. To learn more about earned value, please refer to the ANSI/EIA-748 standard
entitled "Earned Value Management Systems." You may also wish to participate in the Office of Grants and Acquisition Management "Early Warning Project Management System Workshop."

Guideline G: The Capability Maturity Model

To fully implement the Clinger-Cohen Act, the Department urges the OPDIVs to strive, at a minimum, to meet level 2, the repeatable level, of the Software Engineering Institute (SEI) five-level Capability Maturity Model (CMM). In level 2, basic project management processes are established. Cost, schedule, and functionality are tracked and reported. The necessary process discipline is in place to repeat earlier successes on projects with similar applications. The SEI CMM is important because each maturity level provides a layer in the foundation for continuous process improvement. Achieving each level of the maturity model institutionalizes software processes, resulting in an overall increase in the process capability of the organization. As the Department’s abilities and processes continue to mature, OIRM will ask the OPDIVs to meet additional levels of the CMM. CMM assists organizations in maturing their people, processes, and technology assets to improve long-term business performance. The SEI has developed CMMs for software, people, and software acquisition, and assisted in the development of CMMs for systems engineering and integrated product development. For more information, see http://www.sei.cmu.edu.
Last revised: August 29, 2001