Is DAWIA Worth It? An Approach to Analyzing the Impacts

By Dean (Dusty) Rhoads

The Defense Acquisition Workforce Improvement Act (DAWIA) has brought change, but is it worth it? This article provides information on DAWIA and suggests an approach for conducting a study of the impacts of implementing DAWIA.

INTRODUCTION

More than a year has passed since the last mandatory provision of the Defense Acquisition Workforce Improvement Act (DAWIA) became effective in October 1993. Results are beginning to surface, although the full effects of DAWIA implementation will not be known until the ramifications of a more highly qualified acquisition workforce have worked their way through the system. It is time to begin asking what effects DAWIA implementation has had on the DoD acquisition system and workforce, and what costs are associated with implementation. As with any new program initiative, structures and mechanisms are needed to collect the necessary data and to identify emerging trends. This article identifies an approach for conducting an evaluation of DAWIA impacts and costs, and for interpreting the results based on proven analytical techniques.

The first part of a DAWIA effects study would be a performance evaluation; that is, a structured assessment of the Act's actual or potential impacts on the acquisition system, its processes, people, organizations, and products. The primary goal would be to assess how well the objectives of DAWIA are being realized. The evaluation would be based on identifying criteria for success by establishing suitable measures of effectiveness (MOEs), determining which MOEs are most applicable to the problem (i.e., have the highest cause-and-effect correlation), and differentiating multiple effects from multiple causes.

The second part of the study would be a resource analysis keyed to the measured effect of practical constraints (such as money, other resources, and time) on expected outcomes and achievable capabilities. In combining these two parts, the study resembles several other types of analyses, including tradeoff, risk-return, cost-benefit, and return-on-investment analyses.

STUDY APPROACH

A carefully selected team of analysts should be assembled for a study of this scope. The team should possess a broad mix of education, training, skills, and experience relevant to DAWIA, defense acquisition, and the analysis techniques involved. It needs to build synergy and carefully consider all aspects of the problem to minimize surprises and to maintain objectivity. At every step in the process, close and frequent contact with the DAWIA stakeholders should be maintained to ensure that the analysis remains on track and achieves its objectives.

The analysis starts with the problem statement as the premise for the study and follows these steps:

1. Define study objective(s);
2. Define problem domain and boundaries;
3. Identify MOEs;
4. Develop model;
5. Identify data to be collected and sources of data;
6. Collect data;
7. Analyze and interpret data; and
8. Report.

STUDY OUTLINE

Problem Statement

Since full DAWIA implementation was mandated to occur by October 1993, a detailed analysis of its impact can be undertaken now to develop the methodology and models and to collect baseline results. Follow-up studies can then be conducted annually and the results compared to the baseline data to identify trends in the impacts of DAWIA implementation.
Annual studies can be accomplished after all Services and agencies have submitted their October 1994 DAWIA reports to the Defense Manpower Data Center (DMDC) and the data are available for analysis. The basic process for the initial study is as follows.

o Define study objective(s)

The purpose of analyzing the impacts of DAWIA is to determine empirically whether its objectives are being achieved. This requires tracing and analyzing the Act's legislative, statutory, and regulatory history to identify the underlying expectations. Study questions can then be formulated. For example, what is the effect of DAWIA implementation on the DoD acquisition process? What return or benefit is anticipated from implementing DAWIA? The answers to these questions will provide decision makers with pertinent information to support informed budgeting decisions for DAWIA.

The performance evaluation in this case would be accomplished in two phases: implementation and effects. In the implementation phase, the study would evaluate how successfully DAWIA requirements (e.g., the requirement that critical acquisition positions be filled by Defense Acquisition Corps members) have been implemented across all DoD Services and agencies. In the effects phase, the study would attempt to quantify the impacts of DAWIA implementation on the DoD acquisition process (e.g., are Defense Acquisition Corps members better program managers than pre-DAWIA program managers?).

To illustrate the methodology, we begin from the premise that an objective of DAWIA is to raise the qualifications of the defense acquisition workforce, since DAWIA requires acquisition personnel to have more education, experience, and acquisition training than was previously required. An implied assumption is that a more qualified workforce would have positive impacts on DoD acquisition programs and processes. One of the first questions to answer is this: have the qualifications of the defense acquisition workforce improved since DAWIA? (In statistical analysis terms, this is an "activity" question.) The second, more difficult, question is whether the changes in the defense acquisition workforce have had any impact on acquisition programs and/or processes (an "outcome" question). This second analysis can be concluded only after determining that there have, indeed, been changes in the qualifications of the defense acquisition workforce that can be attributed to DAWIA implementation.

A resource analysis would follow each of the performance evaluation phases to identify the costs associated with bringing about changes in the qualifications of the defense acquisition workforce, as well as costs saved and/or avoided in acquisition programs and processes as a result of DAWIA.
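To make the activity and outcome questions concrete, each can be stated as a testable hypothesis. The sketch below is purely illustrative and uses two placeholder measures that are not defined by DAWIA or by this outline: a composite workforce qualification index Q (built from the education, acquisition training, and experience data) and a program outcome measure P (for example, a cost or schedule variance).

$$
\text{Activity:}\quad H_0:\ \mu_{Q,\text{post}} \le \mu_{Q,\text{pre}} \quad\text{vs.}\quad H_1:\ \mu_{Q,\text{post}} > \mu_{Q,\text{pre}}
$$

$$
\text{Outcome:}\quad P = \beta_0 + \beta_1 Q + \varepsilon, \qquad H_0:\ \beta_1 = 0 \quad\text{vs.}\quad H_1:\ \beta_1 \neq 0
$$

As noted above, the outcome hypothesis would be examined only after the activity hypothesis has been supported, since an effect on programs cannot be attributed to DAWIA unless workforce qualifications themselves have demonstrably changed.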
o Define problem domain and boundaries

The problem's domain and boundaries are implicitly defined by the objectives of the analysis. This second step ensures that we explicitly understand what is, and what is not, part of the problem. Here we would determine whether to answer questions such as these: what is the nature of DAWIA's impact on the DoD acquisition process? Has it been effective? Beneficial? Worth the cost? Boundaries must be identified for both the implementation and effects phases of the performance evaluation and resource analysis.

Objectives must be structured to avoid defining problems so broadly that they cannot be solved. For example, one broad objective of this analysis is to determine whether DAWIA has had positive effects on the management of acquisition programs. To be serviceable, this objective must be broken down into multiple, well-defined, measurable questions that can be answered with some degree of certainty. For example, does ACQ 201 - Intermediate Systems Acquisition, a required course for Level II certification in the career fields of program management and communications-computers, provide effective training on cost control measures? If the questions are not appropriately structured and bounded, there is no way of assessing whether other outside influences are also affecting the observed results, and the questions become impossible to answer.

The first phase, implementation, would be easier to delimit than the less well-defined effects phase. Many complex variables affect the outcome of acquisition programs, some of which are beyond the control of the acquisition workforce. For example, if an acquisition program is behind schedule and over cost, is that due to problems with its acquisition workforce, to funding perturbations on Capitol Hill, or to both? These kinds of considerations would require time to sort out and could hinder the ability to assess DAWIA effects on DoD acquisition. A further examination of this analysis of DAWIA outcomes may reveal that finding definitive answers would require greater investment than the potential benefits warrant. It may be more beneficial to identify a series of indicators of acquisition program success rather than focus efforts on unachievable results.

o Identify measures of effectiveness (MOEs)

Which workforce qualifications or performance objectives need to be evaluated? The MOEs would be different for each phase of the study. For the implementation phase, the MOEs would be measures of workforce performance and qualifications, such as the number of critical positions identified, the number of critical positions filled by Corps-qualified personnel, and the number of waivers requested and/or approved. Education, experience, and acquisition training data would be analyzed to identify trends before and after DAWIA.

Identifying MOEs for the effects phase requires more study and analysis than warranted by this brief outline. A key question in the effects phase is whether a post-DAWIA workforce is accomplishing the acquisition business of DoD more effectively. The MOEs in this phase would be much more difficult to collect and analyze. They could include the number of people required to accomplish various acquisition functions, the size of organizations, and the length and complexity of acquisition training courses.

From a resource analysis perspective, the study would measure the investment cost in the implementation phase and the return on investment in the effects phase. The MOEs for the implementation phase could include the cost of acquisition training, the cost of reporting, and the cost of maintaining the DAWIA-required data. For the effects phase, MOEs could include reduced personnel costs; avoidance of fraud, waste, and abuse costs; and cost savings through improved performance. All MOEs would be weighted by some form of dollar and/or time factor (i.e., before-and-after DAWIA comparisons of schedules, inspection discrepancies, resources consumed, etc.).
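As a concrete illustration of how weighted MOEs and the resource analysis could be combined, the following is a minimal sketch in Python. The MOE names, weights, and dollar figures are invented for illustration only; actual values would come from the DMDC and DoD financial data discussed later in this outline.

```python
# Illustrative only: compares hypothetical pre- and post-DAWIA MOE values,
# converts each change to dollars with an assumed weight, and computes a
# simple return on investment against an assumed implementation cost.

# (moe_name, pre_value, post_value, dollars_per_unit_of_improvement)
moes = [
    ("inspection discrepancies per program",    40, 32,  25_000),  # fewer is better
    ("months of schedule slip per program",      9,  7, 150_000),  # fewer is better
    ("staff required per acquisition function", 12, 11,  90_000),  # fewer is better
]

implementation_cost = 400_000  # assumed annual cost of training, reporting, data upkeep

total_benefit = 0.0
for name, pre, post, weight in moes:
    improvement = pre - post        # positive if the MOE improved
    benefit = improvement * weight  # dollar-weighted effect
    total_benefit += benefit
    print(f"{name}: change {improvement:+}, weighted benefit ${benefit:,.0f}")

roi = (total_benefit - implementation_cost) / implementation_cost
print(f"Total weighted benefit: ${total_benefit:,.0f}")
print(f"Return on investment:   {roi:.1%}")
```

The same structure applies whether the weights are dollar factors or time factors; the essential point is that every MOE is reduced to a common unit before it is compared with the investment.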
o Develop model

Models of the implementation phase of the performance evaluation would be developed to assess whether or not DAWIA has affected the "activity" side of the house (i.e., changes in workforce qualifications). Models to measure the effect of positive activity results on acquisition processes and programs (the outcomes) would also be developed.

The focus would be on simple models that illustrate major trends rather than on complex models, which tend to lose visible results in too much detail. All models would be amenable to sensitivity analysis and "what if" exercises. The models would also identify outside factors that might influence outcomes. Examples of such factors include acquisition reform, acquisition streamlining, force downsizing, new regulations, and budgetary constraints. Technology and tools could also be mitigating factors, especially the application of information technologies that increase acquisition process efficiency and effectiveness.

This methodology involves identifying dependent and independent variables and their relationships, activities, and outcomes. For example, which independent variables affect the qualifications and performance of the acquisition workforce (the dependent variables)? If education, acquisition training, and experience are three independent variables, what is the relationship between them and the dependent variables? For the effects phase of the performance evaluation, the dependent variables from the implementation phase (workforce qualifications) would become the independent variables whose effect on the dependent variables (acquisition processes and programs) would be identified. Sensitivity analyses could then be performed by varying the levels of workforce qualifications (i.e., the mix of education, acquisition training, and experience) to identify the effects. For example, from a return-on-investment perspective, what is the minimum level of investment (in the independent variables) needed to realize any effect, what level of investment provides the greatest return, and what level of investment yields the greatest percentage return?

o Identify data to be collected and sources of data

The MOEs, the activity and outcome measures, and the models would be the determinants in deciding what data to collect and analyze. We have activity measures and outcome measures, both of which have MOEs that are estimated by models. In the case of workforce qualifications, the data collected would include educational degrees, acquisition course completions, and experience. No new data reporting would be required; data would be derived from the information presently collected on the workforce and reported to DMDC. The data collected would cover both pre- and post-DAWIA implementation for comparison and analysis. The DMDC data would be compared with data from the Defense Acquisition University (DAU) to identify the number of graduates of various acquisition training courses and the number of them who are now working in acquisition positions.

o Collect data

The data would be collected and used to validate the hypothesized models. Data already being reported by the Services and agencies and collected by DMDC would be used to the fullest extent; no new reporting requirements are envisioned. Most, if not all, data needed to evaluate changes in workforce qualifications should be available from existing personnel systems and can be collected through programmed queries to the databases. Cost data would be collected from selected financial accounting databases in DoD.

o Analyze and interpret data

Various statistical analysis tools, depending on the models chosen, would be employed. By using the tools and analyzing the results, we would identify changes in workforce qualifications attributable to DAWIA, associate causes and effects, and assess the value of effects in relation to the investment.
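To indicate the kind of statistical treatment envisioned for the two phases, the following is a minimal sketch, assuming the personnel and program data have already been extracted into simple arrays. The variable names and numbers are hypothetical stand-ins for the DMDC, DAU, and financial data described above.

```python
# Illustrative sketch of the two analysis phases.
# Phase 1 (activity): did a workforce qualification index change after DAWIA?
# Phase 2 (outcome): is that index associated with a program outcome measure?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)  # simulated data for illustration only

# Hypothetical qualification index (e.g., a 0-100 composite of education,
# training, and experience) for samples of pre- and post-DAWIA personnel.
pre_dawia_index = rng.normal(62, 10, size=200)
post_dawia_index = rng.normal(68, 10, size=200)

# Activity question: compare the two groups (Welch's t-test).
t_stat, p_value = stats.ttest_ind(post_dawia_index, pre_dawia_index, equal_var=False)
print(f"Activity phase: mean change = "
      f"{post_dawia_index.mean() - pre_dawia_index.mean():.1f}, p = {p_value:.4f}")

# Outcome question: regress a hypothetical program outcome (e.g., percent
# schedule slip, lower is better) on the qualification index of its managers.
program_index = rng.normal(65, 10, size=80)
schedule_slip = 30 - 0.2 * program_index + rng.normal(0, 4, size=80)
result = stats.linregress(program_index, schedule_slip)
print(f"Outcome phase: slope = {result.slope:.2f}, "
      f"r^2 = {result.rvalue**2:.2f}, p = {result.pvalue:.4f}")
```

A negative and statistically significant slope in the second step would be consistent with the premise that higher workforce qualifications are associated with better program outcomes; in practice the model would also need to control for the outside factors listed above (acquisition reform, downsizing, budgetary constraints, and the like).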
o Report

Management-level reports summarizing the process and methodology and interpreting the results would be provided at the conclusion of the analysis.

SUMMARY

DAWIA implementation is driving major changes in the way acquisition careers are managed and the way acquisition professionals are selected for assignments, promotions, and advancement. It is imperative that DoD decision makers fully understand the impacts of DAWIA implementation, not only on acquisition processes and programs but also on the people involved. A performance evaluation and resource analysis would help DoD ensure that DAWIA implementation is beneficial to both its people and its processes and is in the best interests of the Government.

This article outlines a methodology to provide both the qualitative and quantitative feedback that DoD executives need to make informed decisions regarding DAWIA. The study outline is a first step that does not answer all the questions, but it confronts some of the difficulties involved in finding answers. The effort needs to be undertaken to generate unbiased, accurate, in-depth, and pertinent information concerning the merits of implementing DAWIA in the DoD.