New Mexico Human Services Department, DAB No. 1224 (1991)

Department of Health and Human Services

DEPARTMENTAL APPEALS BOARD

Appellate Division

SUBJECT: New Mexico Human Services Department
Docket No. 88-252
Decision No. 1224

DATE: February 6, 1991

DECISION

The New Mexico Human Services Department (State) appealed a funding
reduction imposed under section 403(h) of the Social Security Act (Act)
by the Office of Child Support Enforcement (OCSE).  Based on audits of
the State's child support enforcement and paternity establishment
program, OCSE determined that the State did not substantially comply
with requirements of the Act.  OCSE imposed a one percent reduction
(approximately $469,000) of the amount otherwise payable to the State
for Aid to Families with Dependent Children (AFDC) during the period
April 1, 1987 through March 31, 1988.

The State challenged the regulations that governed this disallowance,
the statistical methodology used by OCSE in calculating whether the
State had substantially complied with program requirements, the
auditors' inclusion of certain classes of cases in the audit sample, and
OCSE's conclusions concerning specific cases.

For the reasons stated below, we uphold OCSE's decision to reduce by one
percent the State's AFDC funding for the one-year period beginning April
1, 1987.  Specifically, we conclude that--

o       OCSE properly applied its interpretation of the statutory term
"substantial compliance" to the time periods here;

o       OCSE reasonably interpreted the statutory requirement for
"substantial compliance" to mean that a state must be taking action to
provide basic child support services (required under the Act) in at
least 75 percent of the cases requiring those services;

o  OCSE was not required to promulgate its statistical sampling
methodologies as a rule under the Administrative Procedure Act;

o  the statistical sampling evidence submitted here reliably shows that
the State failed to achieve "substantial compliance;"

o       OCSE's selection of cases for the sample was based on proper
interpretations of the statute and OCSE's implementing regulations; and

o       while we agree with the State that OCSE erred in not crediting
it for one action under OCSE's audit criteria for paternity and for
location, the increase in the number of action cases is not enough to
change our conclusion that the State did not meet the 75 percent
standard under either criterion.

Statutory and regulatory provisions

Each state that operates an AFDC program under Title IV-A of the Act is
required to have a child support enforcement and paternity establishment
program under Title IV-D of the Act.  Section 402(a)(27) of the Act.
The Title IV-D program has been in existence since July 1975.  OCSE has
the responsibility for auditing state Title IV-D programs, pursuant to
section 452(a)(4) of the Act, and evaluating whether the actual
operation of such programs conforms to statutory and regulatory
requirements.  Following adoption of Title IV-D, the participating
states were given 18 months by Congress -- until December 31, 1976 -- to
establish and begin operating their programs before compliance audits
actually began.  Under the applicable statute, a state was subject to a
five percent reduction of its Title IV-A funds if the audit found that
the state was not in compliance.  Congress, however, continuously
extended the initial moratorium on imposition of any penalty under these
provisions, so that no state was ever penalized during the first eight
years of the program's operation, although OCSE did continue its annual
audits.

On August 16, 1984, Congress adopted the Child Support Enforcement
Amendments of 1984, section 9 of Public Law 98-378 (the 1984
Amendments).  As amended, section 403(h)(1) of the Act provides that--

 if a State's program operated under Part D is found as a result
 of a review conducted under section 452(a)(4) not to have
 complied substantially with the requirements of such part for
 any quarter beginning after September 30, 1983, and the
 Secretary determines that the State's program is not complying
 substantially with such requirements . . ., the amounts
 otherwise payable to the State under this part [A] for such
 quarter and each subsequent quarter, prior to the first quarter
 throughout which the State program is found to be in substantial
 compliance with such requirements, shall be reduced . . . .

(emphasis added).

The amended section then provides for graduated reductions, starting
with a reduction of "not less than one nor more than two percent" and
increasing to a maximum of five percent with each consecutive finding
that a state is not complying substantially with Title IV-D
requirements.

The 1984 Amendments provided for the continuation of compliance audits,
which could in appropriate cases be scheduled as infrequently as once
every three years.  Rather than directing immediate imposition of a
penalty on a state that failed an audit, the Amendments provided that a
penalty could be suspended while the state was given an opportunity to
bring itself into compliance through a corrective action plan approved
by OCSE.  Section 403(h)(2)(A)-(C) of the Act, as amended.  If a
follow-up review of a state's performance during a corrective action
period showed that the state still did not achieve substantial
compliance, a reduction of one to two percent would be imposed.  Section
403(h)(1)(A) of the Act.  Continuing noncompliance would bring higher
penalties as time went on, up to a limit of five percent.  Section
403(h)(1)(B)-(C) of the Act. 1/

Section 9(c) of the 1984 Amendments provides that they "shall be
effective on and after October 1, 1983."

OCSE proposed regulations implementing the Amendments on October 5,
1984, 49 Fed. Reg. 39488 (1984), and issued final regulations on October
1, 1985, 50 Fed. Reg. 40120 (1985).  (We refer to these regulations as
the "1985 regulations.")  The 1985 regulations amended parts, but not
all, of the audit regulations at 45 C.F.R. Part 305.  Section 305.20(a),
as amended by the 1985 regulations, provided that, for the fiscal year
(FY) 1984 audit period, certain listed audit criteria (related primarily
to administrative or fiscal matters) "must be met."  This section also
provided that the procedures required by nine audit criteria "must be
used in 75 percent of the cases reviewed for each criterion . . . ."
These criteria relate to performance of basic services provided under a
IV-D state plan and are the criteria at issue in this appeal.  All the
service-related audit criteria are based on sections of 45 C.F.R. Part
305 which (with minor exceptions not relevant here) were originally
published in 1976, with minor amendments in 1982.  (We refer to these
provisions, as amended in 1982, as the "existing regulations" since they
were in effect during FY 1984.)

Thus, under the 1985 regulations, substantial compliance for FY 1984
audits was measured by audit criteria from the existing regulations, but
a state had to be providing the required services in 75 percent of the
cases requiring them.  In follow-up reviews after a corrective action
period, OCSE would examine only the audit criteria that the state had
previously failed or had complied with only marginally (that is, in 75
to 80 percent of the cases reviewed for that criterion).  45 C.F.R.
305.10(b) and 305.99, as amended. 2/


Background

OCSE's program results audit for FY 1984 (October 1, 1983 through
September 30, 1984) resulted in a February 24, 1987 notice to the State
that it had been found to have failed to comply "substantially with the
requirements of Title IV-D of the Act" in two areas, location of missing
parents and establishment of paternity. 3/  OCSE found that the State
took action in 78 of 153 cases requiring location of missing parents, or
52 percent of sampled cases, and that it took action in 102 of 207 cases
requiring establishment of paternity efforts, or 49 percent of sampled
cases.  OCSE Ex. 1.  4/

Rather than appealing OCSE's findings, the State opted to propose a
corrective action plan that was accepted by OCSE on May 12, 1987.  The
follow-up audit by OCSE of the State's performance for FY 1987 (October
1, 1986 through September 30, 1987) resulted in the December 2, 1988
notice of substantial noncompliance that is the subject of this appeal.
OCSE found appropriate action taken in location cases in only 22 of 50
cases, or 44 percent of sampled cases, and in paternity cases in 12 of
28 cases, or 43 percent of sampled cases.

State's Arguments

The State contended that the disallowance should be overturned because
the regulations upon which it was based -- those specifying the 75
percent standard for determining whether a state's performance
substantially complied with program requirements -- were invalid.  The
State maintained that, under Bowen v. Georgetown University Hospital,
488 U.S. 204 (1988) (hereafter Georgetown), OCSE needed specific
statutory authorization to adopt these regulations with a retroactive
effective date and apply them retroactively.  The State also argued that
the 75 percent standard could not validly be applied to it because
OCSE's adoption by regulation of that particular percentage was
arbitrary and capricious.  The State also maintained that the
regulations were invalid because they did not contain any definition of
the term "noncompliance of a technical nature," and the State claimed
that its alleged violations were in fact of a technical nature.  After
the record had closed, the State sought and was granted permission to
add to the record the argument that the statistical methodology used by
OCSE in the audits was invalid because it had not been promulgated as a
regulation under the Administrative Procedure Act (APA), 5 U.S.C. 553 et
seq.

As for the conduct of the follow-up review on which the penalty is
based, the State contended that the auditors erred by including in the
sample cases involving absent parents who were Native American, and thus
not subject to State jurisdiction; cases the State called "not cost
effective;" and cases opened at any time during the audit period.  State
Appeal Br., p. 40.  The State also argued that some cases were
erroneously determined to be "no action" where action had been taken.
The State submitted various State and federal audit records for these
disputed cases.  The State generally maintained that if all appropriate
cases were properly excluded from the sample universe, and credit were
given for "action" in the "no action" cases that it contested, the
result would be a finding that the State had met the 75 percent standard
for both criteria for fiscal year 1987, so that no reduction would be
imposed. 5/

Finally, during the regular briefing process the State produced an
affidavit of an expert in statistics that challenged the validity of the
statistical sampling techniques and statistical decision rule used by
OCSE's auditors.  The State contended that this evidence established
that the audit lacked sufficient validity to be used as the basis for
finding the State to be out of compliance.  Subsequently, OCSE notified
the Board and the State that it had identified errors in its
computations for the program results audit, and it provided a
recomputation methodology that was challenged in subsequent State
submissions and affidavits by a second State expert as being too
inaccurate to establish that the State did not achieve substantial
compliance.

Analysis

I.  The State's challenges to the 1985 regulations are without merit.

The State challenged the 1985 regulations that OCSE used in concluding
that the State was not in substantial compliance.  Specifically, the
State argued that--

o  the regulations are impermissibly retroactive under Georgetown, since
OCSE lacked express statutory authorization to apply these regulations
retroactively;

o       the regulations have retroactive effect in violation of the APA,
which defines a "rule" as having "future effect" (see 5 U.S.C. 551(4)
and Georgetown (Scalia, J., concurring));

o       the 75 percent standard in the regulations had no empirical
basis and therefore was established in an arbitrary and capricious
manner under Maryland v. Mathews, 415 F. Supp. 1206 (D.D.C. 1976); and

o       the regulations were invalid because they did not include a
definition of "violations of a technical nature," based on section
403(h)(3), as amended.

OCSE disputed the State's position, but also pointed out that the Board
is bound by applicable laws and regulations under 45 C.F.R. 16.14.  The
regulations at issue were "effective" on the date of final publication
(October 1, 1985).  However, section 305.20(a), which sets out the 75
percent standard for service-related audit criteria, states that it is
to be applied "[f]or the fiscal year 1984 audit period."  The preamble
to the regulations confirmed that OCSE intended to apply this section to
FY 1984 audits, based on the October 1, 1983 effective date of the 1984
Amendments.  50 Fed. Reg. at 40126, 40131-2, and 40138.

We are, of course, bound by the Department's regulations, even if
invalid under a constitutional analysis, if those regulations are
applicable.  While some of the issues here clearly would be controlled
by 45 C.F.R. 16.14, the State's arguments also raise interrelated
questions of applicability.  We do not need to sort out these issues
precisely, however, since we conclude that all of the State's arguments
concerning the regulations are completely without merit. 6/  Our reasons
are:

o       Section 403(h)(1) of the Act, as amended, requires reductions
for states not found to be in substantial compliance in audits "for any
quarter beginning after September 30, 1983," and Congress explicitly
made the 1984 Amendments effective on October 1, 1983.  The
circumstances here are therefore distinguishable from those in
Georgetown, where the agency published cost-limit rules for Medicare
providers in 1984 and attempted to apply the rules to 1981 costs, in the
absence of any statutory authority to do so.  Here, the statute
expressly made the change in the standard retroactive. 7/

o  The effect of the 1985 regulations here is also significantly
different from the effect of the cost-limit rules considered in
Georgetown.  There, Medicare providers were entitled to a specific level
of reimbursement under the regulations in effect in 1981, and the 1984
rules would have retroactively reduced that level.  Here, the AFDC
funding reduction applies to periods after the 1985 regulations were
published.

o       The audit criteria at issue here were in the existing
regulations, had been in effect without substantial change since 1976,
and were based on IV-D state plan requirements.  The 75 percent standard
is more lenient than the standard in the existing regulations, which
provided that the State must "meet" the criteria.  Even if the State is
correct that OCSE could not reasonably have implemented this by
requiring action in 100 percent of the cases, the existing regulations
clearly contemplated a compliance level greater than 75 percent. 8/

o       More important, the 1985 regulations afforded the State a
corrective action period.  The State here had notice of the 75 percent
standard prior to this period, and at least a year to adjust its
administrative practices before the follow-up review period.

o  The regulations here merely interpret the statutory term "substantial
compliance."  Obviously, the range of compliance levels OCSE could adopt
is limited by this term, particularly when it is read together with
section 403(h)(3) of the Act (which permits a finding of substantial
compliance only when any noncompliance is of a technical nature).  A
level lower than 75 percent would have been subject to challenge as
inconsistent with statutory intent.

o  Even in the absence of the 1985 regulations, we would reject the
State's position that it should be found to meet the substantial
compliance standard.  The record here supports a finding that the State
did not achieve substantial compliance under any reasonable reading of
that term.  This Department clearly may retroactively adjudicate a
state's entitlement to AFDC funds under the applicable statutory
standard, without violating the APA (even as interpreted in the
concurring opinion in Georgetown).

o       Since the 75 percent standard reasonably interprets the
statutory term "substantial compliance," the circumstances here are
distinguishable from those considered in Maryland, where the court found
that regulations setting "tolerance levels" for AFDC eligibility
determination errors were not reasonably related to the purposes of the
statute.  Moreover, unlike the "tolerance levels" in Maryland, the 75
percent standard here had an empirical basis in past performance levels
measured through OCSE's audits.  While audit results from FYs 1980 and
1981 showed that some states were not yet achieving 75 percent levels,
other states were achieving 100 percent levels at that time.  See OCSE
Exhibit (Ex.) 8.  OCSE could reasonably expect all states to be
achieving 75 percent levels by FY 1984.  9/

o       Finally, we reject the State's arguments based on section
403(h)(3) of the Act.  That section permits OCSE to find substantial
compliance only where any noncompliance is "of a technical nature not
adversely affecting the performance of the child support program."  OCSE
implemented this provision through its regulations, determining that
failure to meet the critical service-related audit criteria in its
regulations is not simply technical since the required activities are
essential to an effective program.  50 Fed. Reg. at 40130.  We find that
interpretation to be reasonable as applied here since the State's
failures under a service-related criterion would adversely affect
program performance; the State took no action whatsoever to provide
basic child support services in a significant number of cases. 10/

Thus, we conclude that application of the 1985 regulations here was
clearly proper, and that those regulations are consistent with the 1984
Amendments.

II.  The State's statistical sampling arguments are without merit.

We next turn to the State's arguments about OCSE's statistical sampling
methodology, because if we accepted the State's position that the
disallowance should be reversed on these grounds, we would not need to
reach the State's arguments about OCSE's audit policies or
characterization of individual cases.


Background

In both the program results audit for FY 1984 and the follow-up review,
OCSE used statistical sampling techniques to determine whether the State
met the 75 percent standard for the applicable service-related audit
criteria.  OCSE drew a systematic random sample of the State's Title
IV-D cases that were open during each relevant time period. 11/  Each
sample was "stratified" i.e., samples were drawn from case groups
(strata) consisting of cases from one or more political subdivisions
(counties) of the State.  OCSE first examined each sample case to
determine what action, if any, was required in the case (in other words,
what audit criteria applied).  For example, if paternity for a
beneficiary was not legally established by marriage, acknowledgment, or
adjudication, the case would be classified as a "paternity" case,
requiring review to see if the State took any action to establish
paternity, as required by 45 C.F.R. 305.24(g).  OCSE then examined the
case files and other records to determine whether the State had, in
fact, taken any required action during the relevant time period, finding
either "action" or "no action" for each sample case reviewed.  For
example, in the follow-up review, OCSE found that the State took action
in only 12 of the 28 cases which required action to establish paternity.
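
To illustrate the general technique, the following short Python sketch
shows how a systematic random sample of the kind described above can be
drawn.  The caseload size and sample size here are hypothetical, not
figures from the audit; this is a minimal sketch of the sampling step
only, not the auditors' actual procedure.

     import random

     def systematic_sample(cases, n):
         # Sampling interval: one case is drawn from each block of k cases.
         k = len(cases) // n
         start = random.randrange(k)   # random start within the first block
         return [cases[i] for i in range(start, len(cases), k)][:n]

     # Hypothetical illustration: a 5,000-case IV-D list, arranged in
     # the order the cases entered the system, from which 50 are drawn.
     caseload = ["case-%d" % i for i in range(5000)]
     sample = systematic_sample(caseload, 50)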

OCSE then used the sample findings to calculate an "efficiency rate" and
an "efficiency range" for each criterion.  The "efficiency rate" is the
single most likely estimate of the percentage of cases requiring review
under an audit criterion which were "action" cases.  The "efficiency
range" was to be equivalent to what is called the "95 percent confidence
interval."  A confidence interval is a statistician's calculation of the
range of values within which the statistician can say with a specified
degree of certainty (here, 95 percent) the true value occurs.

Under OCSE's audit procedures, a criterion was considered "unmet" if the
"high range" of the "efficiency range" (also called the "upper limit" of
the confidence interval) was less than 75 percent, and only "marginally
met" if the "high range" was 75 to 80 percent.  It is undisputed that,
to determine the high range (upper limit) of the 95 percent confidence
interval, you first calculate the "standard error" associated with a
particular sample, then multiply that amount by 1.96, and then add the
product to the efficiency rate.  By using the "high range" figure, OCSE
was essentially assuming the risk associated with potential sampling
error.  See Ohio record:  OCSE Supplemental Appeal File at 113, 438,
510-513.

In other words, not only could OCSE say with at least 95 percent
certainty whether a state was meeting each criterion, it could also say
that its approach erred on the side of passing a state where a complete
review might well have identified a failure.

In the program results audit, OCSE examined all audit criteria listed in
section 305.20(a) of the 1985 regulations.  In the follow-up review,
OCSE examined only those audit criteria which were "unmet" in the
State's program results audit, the establishment of paternity and locate
criteria.

This Board has recognized that sampling can produce a valid result only
if done "in accordance with the general rules and conventions
statisticians have developed . . . ."  California Dept. of Social
Services, DAB No. 816 (1986), pp. 4-5.  We discuss the State's arguments
under three separate headings:  first, arguments relating to
promulgation of the sampling methodologies as a rule under the APA;
second, challenges to various aspects of the statistical sampling
methodology in general; and third, arguments about OCSE's calculation of
results from the sample in the program results audit.

    A.  Arguments relating to promulgation of the sampling methodologies
    under the APA

After the record had closed in this case, the State (with OCSE's
consent) requested and received permission from the Board to supplement
the record with an additional argument about the validity of OCSE's
sampling methodologies.  In this supplement, the State submitted for
this record an argument made by the State of Arizona in Board Docket No.
89-230 that the sampling methodologies were invalid because they had not
been promulgated as a rule under the APA.  Invalidation of the sampling
methodologies would render both the program results and follow-up
reviews invalid, and, consequently, no reduction could be based upon
them.

It is undisputed that the 75 percent standard for substantial compliance
was issued under the APA rulemaking procedures. 12/  The State
maintained that OCSE, "recognizing that the regulations describing the
audit procedure (sections 305.10 through 305.13) and the 75 percent
review standard (section 305.20(a)(2) and (b) (2)) were substantive or
legislative rules, complied with the [section] 553 of the APA by
publishing the proposed regulations in the Federal Register.  49 Fed.
Reg. 39488 (October 4, 1988)."  State Supplemental Submission,
attachment p. 10.  According to the State, since the audit methodology
was as much an integral part of the audit process as the audit criteria
and the 75 percent standard, OCSE was required to publish it too.  In
support of its arguments, the State cited Batterton v. Marshall, 648
F.2d 694 (D.C.Cir. 1980), and Estate of Smith v. Heckler, 747 F.2d 583
(10th Cir. 1984), on remand, Estate of Smith v. Bowen, 656 F. Supp. 1093
(D.Colo. March 24, 1987) (Smith I), enforced, Estate of Smith v. Bowen,
675 F. Supp. 586 (D.Colo. Dec. 18, 1987) (Smith II).

OCSE responded that the 75 percent standard was an interpretive, not a
legislative rule, that was exempt from publication under the APA.  OCSE
also contended that its audit methodologies were exempt from the APA's
requirements because they were not an "inflexible statutory formula" (as
was the case in Batterton) for sampling child support cases.  Instead,
OCSE maintained, the audit methodologies were complex procedures which
integrate published governmental and professional standards as well as
program rules and the auditors' own sound judgment.  OCSE argued that
Batterton and the Smith cases were distinguishable from the case here,
and that cases such as Guardian Federal S&L v. Federal S&L Ins. Corp.,
589 F.2d 658 (D.C.Cir. 1978), were much more closely analogous.

The APA requires generally that federal agencies publish for comment
rules of general or particular applicability but exempts "interpretative
rules, general statements of policy, or rules of agency organization,
procedure, or practice."  5 U.S.C. section 553(b)(A).  We have already
found above that the 75 percent standard was interpretive.  We now
reject the State's contention that publication of the associated audit
methodology as a legislative rule was required.  After careful
consideration of the use and application of these audit methodologies in
this case, we conclude that they are general statements of policy or
rules of agency procedure or practice, not legislative rules.

As we stated in Ohio Dept. of Human Services, DAB No. 1202 (1990) at
12-15, we consider OCSE's audit methodologies to be a means for
gathering evidence about whether a state has achieved substantial
compliance, not as an inflexible standard that must be applied.  OCSE
developed its sampling methodologies for use by its auditors and does
not consider its methodologies binding. 13/  OCSE response to supplement
at 11.  In fact, in the present case, OCSE substantially revised its
program results calculations when errors in them were indicated in the
Ohio proceeding.  See section II.C. below discussing this change.
Moreover, the State was free to propose alternative methodologies for
establishing that it met the substantial compliance standard. 14/
Consequently, this case is clearly distinguishable from Batterton, where
the federal agency, directed by statute to develop a method of
measurement, adopted an inflexible system that was not subject to
adjustment through adjudication or other method.  Similarly, the Smith
cases are not analogous because, unlike those cases, the audit
methodologies here do not establish substantive rights or
responsibilities, define a standard of conduct or level of care, or
create or alter any basic program compliance responsibilities.

We agree with OCSE that Guardian Federal, which concerned an agency's
unpublished audit policy guidelines, provides the best guidance for
treatment of the audit methodologies at issue here.  In that case the
court determined that the federal agency's guidelines as to what
constituted a satisfactory audit of a regulated entity were a general
statement of policy, since the agency retained discretion "to accept a
non-conforming audit report, or for that matter to prescribe additional
requirements in a particular case."  Guardian Federal, 589 F.2d 658,
666.  As in that case, we find that the federal agency must be allowed
to maintain its ability to make an individualized determination, to
respond with flexibility in assessing the wide variety of structures and
procedures used by the states to administer their programs and, finally,
to use its audit resources efficiently.

Consequently, since we conclude that the audit methodologies used here
were general statements of policy, or rules of agency procedure or
practice, rather than a legislative rule, we reject the State's
contention that they may not be applied in this case.

    B.  General Challenges to the Sampling Methodology

With its initial briefs, the State produced an affidavit of an expert in
statistics that raised questions concerning two aspects of the
statistical sampling methodology used by OCSE's auditors.  State Ex. 48.
(Since the State employed two such experts during the course of this
proceeding, we shall refer to this expert as the State's first expert.)
The first expert alleged that systematic bias may exist in the sample of
cases selected for audit because OCSE used a systematic random sampling
technique.  Id. at 2.  He also alleged that the sampling method used for
samples fewer than 50 cases (a method based upon Student t random
variables) could lead to faulty determinations.  Id. at 4.  The State
contended, based on these allegations, that it had shown that the audit
lacked sufficient validity to be used as the basis for finding the State
to be out of compliance.

In addition, the first expert provided decision tables which he alleged
showed the proper threshold for determining when the State had achieved
substantial compliance at the 75 percent level.  The State pointed out
that, if all of its arguments about excluding certain cases from the
sample and about whether it did take action in some cases were accepted,
then these tables would support a conclusion that it achieved 75 percent
compliance for both criteria.  (Since audit policy required at least 95
percent confidence that the State had not complied with the 75 percent
standard for a state to be found out of compliance, the federal standard
would be met if the probability of the State's non-compliance with the
standard were less than 95 percent.)

In response, the OCSE expert affirmed the validity of the systematic
sampling technique generally and asserted that the rare instance in
which systematic sampling could yield a biased sample of cases is where
the sampling frame is arranged in groups of approximately equal size and
the cases in each group are then further arranged in the same specific
order.  He asserted that the auditors specifically found that no such
arrangement occurred in this instance and that therefore "no credence"
could be given to the contention that it was possible that the auditors'
sample was biased.  OCSE Ex. 16.  OCSE's expert noted that "many
sampling applications, particularly in auditing, rely heavily on
systematic sampling."  Id. at 2.  Moreover, he noted that systematic
sampling had an advantage over random sampling in that it ensured
inclusion of cases from the beginning, middle, and end of the universe,
which was in this case arranged in chronological order.

OCSE's expert also responded to the State's first expert's concern
relating to the sampling method used for samples of fewer than 50
cases, and concluded that the method "yields very close approximations,
if not exactly the same results, as those obtained by the State's
consultant's method (which is not identified as to the theorem upon
which it is based)."  Id. at 7.

The State did not reply to the affidavit of the OCSE expert during the
initial briefing in this case.  In his report on OCSE's revised
calculation methodologies, the State's second expert again criticized
OCSE's use of systematic random sampling, stating that --

 In such a serious situation as the one under consideration here,
 where a penalty is to be imposed on a state, the auditor should
 remove all possible doubt as to whether a bias may be present.

November 9, 1990 Report at 5 (emphasis in original).

We conclude that the State's concerns about OCSE's statistical sampling
methodology are without merit.  Although the State subsequently
addressed several arguments made by OCSE, the State never challenged
OCSE's assertions concerning the lack of any grouping or ordering of the
sampling frame.  Since the cases sampled were the State's, the State
would have been in the best position to identify some specific biasing
factor, but did not do so.  The record indicates that the list from
which the sample was drawn was grouped in the order of the date that the
cases were entered into the State's IV-D system.  We therefore conclude
that there is absolutely no evidence that the sample was biased so that
the audit results were unreliable.

As to the State's argument that the sampling method used could lead to
faulty determinations, the State did not contest OCSE's assertion that
its decision table for samples of less than 50 cases differed, if at
all, in results from the State expert's table only at the fourth decimal
place.  See OCSE Ex. 16 at 7.  Such a difference is clearly not material
here.  Thus, we reject the State's argument concerning the sampling
method used for samples of less than 50 cases.

    C.  The Calculation Arguments for the Program Results Audit

The State also raised issues regarding how to determine valid efficiency
rates and ranges for the program results audit.  We provide a history of
how the issues developed and what the remaining issues are, before
explaining our analysis.

   1.      How the issues developed

During the regular briefing process (see 45 C.F.R. Part 16), the State
raised only the statistical methodology arguments discussed above.
After the close of briefing, but before the Board could issue a
decision, OCSE asked the Board to stay the case because OCSE was
reviewing its calculations for all of the FY 1984 program results audits
as a result of questions raised in the Ohio proceeding.  OCSE Letter
dated March 1, 1990.  In that proceeding, Ohio's expert had asserted
that OCSE auditors had erred by using a method for determining the
efficiency rates and ranges which is appropriate for simple random
sampling, but not appropriate for the more complicated sampling used
there, i.e., two-stage stratified sampling.  See Ohio at 11.  The State
did not object to the requested stay, and it was granted.

In its subsequent submission, OCSE recalculated the efficiency rates and
ranges for the program results audit applying a two-stage sampling
technique using two formulas (OCSE Methodology #1 and OCSE Methodology
#2), which OCSE's statistical sampling expert said were based on the
treatise cited by Ohio's expert.  OCSE's Expert's May 31, 1990
Declaration at 4.  (The follow-up audit was not affected because it had
been based on a revised methodology; the State did not challenge this
methodology.)

In reply, the State submitted an affidavit and report from a newly hired
expert, who was the statistician who had initially identified the
problems with OCSE's calculations in the Ohio proceeding.  He attested
that OCSE had not, in fact, properly applied the correct formulas.  He
stated his opinion that Methodology #2 was more appropriate because it
recognized the principle that "the universe, or population, of interest
is the total number of cases applicable for review for a given
criterion" and that, if cases are not in the target population, they
should not be included in the sample on which the efficiency rate and
the corresponding standard error are based.  August 16, 1990 Report at 2
(emphasis in original).  He further asserted that OCSE Methodology #2
uses certain values which are inconsistent with this principle.  He
explained how these values should be replaced, offered alternative
formulas for calculating the efficiency rates for the State, and
provided the results of some of his calculations.

In its subsequent submission, OCSE presented another affidavit and
report from its expert.  See OCSE's Expert's September 28, 1990
Declaration.  OCSE's expert defended OCSE Methodology #2 on the ground
that the formula used sufficiently recognizes the relevant universe.  He
stated:

     The fact that OCSE, upon my advice, weighted the political
     subdivisions and strata according to the total number of cases
     rather than the number of cases representing the various criteria
     is not inconsistent with that principle because this decision rests
     on the reasonable assumption that there is a high positive
     correlation between the criteria weights and total weights.  In
     other words, it is logical to assume -- as confirmed by the
     experience of the OCSE auditors -- that the larger the total number
     of cases, the larger the number of cases for the criteria covered,
     and vice versa.

Id. at 2-3.

OCSE's expert acknowledged that the alternative suggested by the State's
expert -- estimating the number of cases applicable for review for each
audit criterion per subdivision and stratum -- is also an appropriate
statistical approach.  OCSE's expert expressed the opinion that
recalculations were not necessary since the State's expert had not
presented any evidence that OCSE's assumption was unreasonable, nor had
he done a complete analysis of his recommended approach.  OCSE's expert
said this was especially true since the State's expert never claimed
that adjustments to the calculations to correct alleged deficiencies
would make a difference in the ultimate finding that the State was not
in substantial compliance.  Id. at 3-4.

In spite of this position, OCSE nonetheless again recalculated the
relevant efficiency rates and efficiency ranges.  OCSE's expert
explained that the procedure used for this was called a "ratio
estimation" procedure.  He suggested that OCSE apply this procedure in
three ways:  (1) substituting the estimated criteria weights in
Methodology #2 to demonstrate the reasonableness of the assumption upon
which OCSE relied (OCSE Methodology #2A); (2) preparing scatter diagrams
showing the relationship between the estimated criteria weights and the
corresponding total-case weights (correlation analysis); 15/ and (3)
using the ratio estimation technique as an independent methodology for
calculating the efficiency rates and standard errors (OCSE Methodology
#3).  Id. at 5.
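
In general terms, a ratio estimation procedure estimates a rate as the
ratio of two sample totals.  The sketch below shows only that basic
idea for a single stratum, using hypothetical counts; OCSE's actual
procedure, which also involves stratum weights, variances, and
covariances, is not reproduced here.

     # Hypothetical per-subdivision sample counts within one stratum:
     # (cases requiring review for the criterion, cases with action taken)
     samples = [(28, 12), (22, 14), (31, 20)]

     total_required = sum(required for required, action in samples)
     total_action = sum(action for required, action in samples)
     efficiency_rate = total_action / total_required   # basic ratio estimate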

The high range figures produced by both OCSE Methodology #2A and OCSE
Methodology #3 are substantially less than 75 percent for the two
criteria ("establishing paternity" and "state parent locator service")
which OCSE found the State failed to meet in the program results audit.
Thus, under each of OCSE's methodologies, OCSE found that the State
failed to meet both criteria.

The State asked for and received an opportunity to respond.  The State
then submitted another affidavit from its second expert, together with
his more detailed report.  See November 9, 1990 Affidavit and Report.
This affidavit acknowledged that OCSE's new methodologies partially
corrected for the previous errors alleged by the State's expert.  The
State's expert asserted, however, that there were still problems which
had not been corrected.  His basic points were that--

o    OCSE Methodology #2 is based on certain assumptions that are
discredited by OCSE's own correlation analysis; 16/

o    OCSE Methodology #2A correctly calculates the efficiency rate, but
is incorrect because the formula used to calculate the efficiency ranges
fails to take into account the fact that the actual total number of
cases requiring review for each criterion is unknown and that a random
estimate is being used instead;

o       While OCSE Methodology #3 employs a ratio estimator found in
most books on sampling, this ratio estimator is not directly applicable
to the situation here;

o       Existing ratio estimators could be modified to account for the
particular problem here (the randomness of the estimates); and

o       Taking into account the randomness of the estimates of the
number of cases requiring review for a particular criterion would
increase the standard error of the efficiency rate and therefore widen
the efficiency range (confidence interval).

The State's expert explained that it was not an easy problem to
determine the proper ratio estimators, and that he had examined all
ratio estimators he could find without successfully identifying one that
seemed directly applicable to the situation at hand.  He stated that his
own efforts aimed at attempting to modify the existing ratio estimators
to fit this situation were "stymied due to lack of time."  Id. at 3.
Thus, he presented no calculations to show that use of ratio estimators
consistent with his analysis would result in the State being found to
have met the 75 percent standard for any particular criterion.

The State asserted that OCSE had failed to meet its burden to establish,
at the 95 percent confidence level, that the State had failed to achieve
substantial compliance with the service-related audit criteria at issue.
The State based this assertion on its expert's opinion that--

     Until the correct analysis is put forward, . . . no one can
     conclude that New Mexico is not in compliance.  One may perform
     1,000 incorrect analyses that arrive at a common conclusion,
     but that conclusion is not valid until a correct analysis is
     performed.

Id. at 5 (emphasis in original).

   2.  Analysis

As we stated in Ohio, the issue here is properly viewed as an
evidentiary question:  whether the sample findings are reliable evidence
that the State did not meet the 75 percent standard for either of the
two criteria at issue.  Ohio at 15.  To evaluate this evidence, we must
determine what inferences can validly be drawn from the sample case
findings, in accordance with principles of statistical sampling.

The State here focused on the method used for calculating the 95 percent
confidence interval, since OCSE had chosen to adopt that degree of
certainty for its findings.

Based on our examination of the record as a whole, we conclude that OCSE
has shown, with the requisite degree of certainty, that the State did
not meet either the paternity or the locate audit criterion.  First, we
find that OCSE Methodologies #2 and #3 are valid methods, of a type
which would ordinarily be relied on by statisticians, and that the
assumptions underlying them are generally sound.  More important, as we
discuss in detail later, we find that the limited modifications
ultimately proposed by the State's expert would not result in a finding
of substantial compliance. 17/

OCSE's expert was well-qualified and persuasively attested to the
validity of the methods OCSE used, providing supporting analyses.  While
the State's expert was also well-qualified, we find his affidavit to be
inadequate to rebut OCSE's expert's opinion.

The State's expert described the assumptions underlying OCSE Methodology
#2 as requiring a constant relationship between total number of cases
requiring review for each criterion and the total caseload (which he
called a "deterministic" relationship). 18/  See note 16 above.  OCSE's
expert, however, had said the underlying assumption which rendered the
method valid was merely that there was a high positive correlation
between the two numbers; here, unlike in Ohio, he did not provide
calculations for these correlations.  In Ohio we found that, although the
correlation coefficients did not show a deterministic relationship
between total caseload and cases requiring review for a particular
criterion, they ranged from .92 to .98 out of 1.0, which we concluded
showed a high positive correlation.  Ohio at 18.  In the present case,
the graph for the paternity criterion was essentially a straight
line.  (No calculations were presented, but visually the correlation
appeared much higher than that of any of the criteria in Ohio.)  This
establishes that OCSE's assumption that the larger the total number of
cases, the larger the number of cases requiring review for each
criterion, was valid for the paternity criterion for New Mexico. 19/
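
For reference, a correlation coefficient of the kind discussed in Ohio
can be computed as follows.  The paired county counts here are
hypothetical, since no such calculations were presented in this record:

     # Requires Python 3.10+ for statistics.correlation.
     import statistics

     total_cases     = [400, 950, 1300, 2100, 3000]  # hypothetical county caseloads
     paternity_cases = [110, 260, 370, 600, 840]     # hypothetical cases needing review
     r = statistics.correlation(total_cases, paternity_cases)
     # A value of r near 1.0 reflects the high positive correlation OCSE assumed.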

Moreover, the State's expert's affidavit is insufficient to rebut OCSE's
expert's opinion on the validity of OCSE Methodology #3 for the
following reasons:

o  The State's expert focused on only one part of the variance formula
in alleging that OCSE Methodology #3 failed to take into account the
sampling error associated with the fact that the denominator of the
ratio (the number of cases requiring review for a particular criterion)
was an estimate.  He completely ignored the fact that other parts of the
formula for calculating the variance of a stratum specifically recognize
that the number of cases requiring review for a criterion (the
denominator of the ratio in question) is an estimate.  Methodology #3
includes formulas for calculating a relative variance for a stratum for
both the numerator and denominator of the ratio, as well as a relative
covariance.  See OCSE's Expert's September 28, 1990 Declaration,
Appendix #2 at 3.  Moreover, an examination of OCSE's calculations shows
that OCSE also calculated the variance for each of the denominators in
the ratios for the political subdivisions.  See OCSE's Expert's
September 28, 1990 Declaration, Methodology #3, lines 97-111 for both
paternity and locate.

o  The State's expert asserted that there were assumptions underlying
use of OCSE Methodology #3, but again failed to provide any reference or
analysis to support his assertions.  Indeed, his description in his
affidavit of the underlying assumption is inconsistent with his
description in his report of the underlying assumption.

o  In his affidavit, the State's expert described the underlying
assumption as being that the proportion of actions taken is the same for
all political subdivisions and said this is untrue since they "differ by
a factor of two."  He said this invalidates the formula's use of the
factor M/m (the total number of political subdivisions in a stratum
divided by the number of political subdivisions sampled in that stratum)
in calculating the variance for a stratum.  See November 9, 1990
Affidavit at 4.  The factor M/m is relevant in the variance calculations
only for the Combined Strata #2 and #3.  Strata #1 consisted of only one
political subdivision (which comprised over one third of the State's
entire caseload).  The proportion of actions taken does not vary by a
factor of two for the two political subdivisions examined in the
Combined Strata.  The State's expert did not specifically address
whether the differences between those two political subdivisions were
statistically significant, and it appears to us that they are not.

Moreover, even if we accepted the State's expert's affidavit as
sufficient to establish that some further refinements to OCSE's
Methodology #3 would lead to a more precise calculation, we would find
that any consequent widening of the confidence interval would be
immaterial.

While the State's expert contended that OCSE Methodology #3 was not
precise enough, he did not contend that further refinements would
significantly alter the confidence interval.  Based on the sample
results of the preliminary review, OCSE's calculations showed upper
limits of 63.6 percent for the cases requiring review for establishing
paternity, and 58.4 percent for the locate cases.  Ex. 3 to OCSE
Expert's October 1 Report.  This means that, for the State to have
achieved the 75 percent
standard for both of these criteria in the program results audit, so
that the initial notice of failure to achieve substantial compliance
would not have been issued, the confidence interval would have to
increase from the amounts calculated using OCSE Methodology #3 by 11.4
percentage points for establishing paternity, and by 16.6 percentage
points for locating absent parents.

The increase in the standard error required to widen the confidence
interval sufficiently would be substantially greater than the standard
error OCSE calculated using its Methodology #3 (which the State's expert
admitted was based on a commonly used ratio estimation technique).  The
standard error for establishing paternity using Methodology #3 is .0299
and would have to increase by .0564 to widen the confidence interval
enough to make a difference here.  (As explained above, to determine the
upper limit you add to the efficiency rate -- here 57.7 percent -- 1.96
times the standard error.)  OCSE's October 1 Report at Appendix 2, p. 4.
Similarly, the standard error for locate would have to more than triple
-- increasing from .0306 to .1153 -- to achieve an upper limit of 75.
Since the State's expert acknowledged that OCSE had moved in the right
direction in its Methodology #3, it is logical to conclude that the
further modifications to the ratio estimator the State's expert said
were necessary would not result in increasing a standard error by nearly
three times the amount of the standard error calculated using
Methodology #3.
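
The locate figures can be checked directly against the formula stated
earlier.  The following sketch, using the rounded values reported
above, reproduces the arithmetic:

     # Working backward from the reported locate upper limit of 58.4
     # percent and the Methodology #3 standard error of .0306:
     se = 0.0306
     upper = 0.584
     rate = upper - 1.96 * se             # efficiency rate: about 52.4 percent
     required_se = (0.75 - rate) / 1.96   # about .1153 -- more than triple .0306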

Finally, the State's expert had an opportunity to review our analysis of
this statistical problem in the Ohio decision and, while he has offered
an affidavit in support of a reconsideration of that decision in which
he contends that our analysis there was faulty, he did not address here
the problems with his analysis that we identified in Ohio.
Consequently, we draw the reasonable inference that he cannot prove that
further refinements or a different methodology would show that this
State substantially complied with the subject criteria.

As we noted previously, the State's second expert alleged that "[u]ntil
the correct analysis is put forward, . . . no one can conclude that New
Mexico is not in compliance."  State's Expert's November 9 Report at 5.
In complex estimates such as the one at issue here, however, further
analysis and fine-tuning of the computation may always be possible.  The
question we must ask is whether further refinements would make any
difference in the ultimate outcome, that is, whether the refinements
might enable a failing state to pass.

In this case, we simply have no reason to think that the further
modifications the State expert proposed would show that the State was in
substantial compliance.  The State's expert did not himself perform any
calculations using what he called the "correct analysis."  Thus, his
affidavit at most establishes that such an analysis would "widen" the
confidence interval for each criterion; he did not express the opinion,
however, that any such widening would be substantial or would have a
reasonable possibility of resulting in findings that the 75 percent
standard was met for either criterion, nor can we infer from his
affidavit that he held this opinion.

The State's second expert attempted to justify his failure to perform
the "correct analysis" by asserting, as he had before in Ohio, that he
lacked sufficient time.  However, the original problem had been
presented to him at least eleven months before his final affidavit.
Moreover, the State's expert had by that time participated in several
Board proceedings where the same question arose.  The State's expert did
not even go so far as to express what the likelihood would be that a
correct analysis would make any difference with respect to this
criterion.  OCSE's expert had expressed his professional opinion that a
statistician has an obligation, when he raises a question about a
technique, to address the question of whether any alleged incorrect
calculation is material to the decision to be based on it.  OCSE
Expert's October 1 Report at 3-4.  In our view, the State at the very
least had the burden to show that recalculations were required and might
make a difference in the ultimate conclusion, especially given the raw
data here.

Accordingly, we conclude that the record supports a finding, with the 95
percent degree of confidence, that the State did not meet the 75 percent
standard, for either the paternity or the locate criterion, in both the
program results and the follow-up reviews.

III.  The auditors were not required to exclude any Native American
cases, cases opened during the audit period, or cases considered "not
cost effective."

    A.  Native American cases

The State essentially took the position here that the auditors should
have excluded from the sample in the follow-up review each case
containing any indication that the absent parent was a Native American
reported to be living on a reservation. 20/  The State contended that
its child support program had a longstanding written policy (of which,
it alleged, OCSE was aware) of deference to Native American Tribal
sovereignty.  The State alleged that based on that policy, it did not
routinely attempt to enforce support obligations against Native
Americans residing on a reservation.  See State Ex. 14.  The State
argued that OCSE had never issued any regulation or action requiring
enforcement against Native Americans.

OCSE's audit evaluation guide provided that states were encouraged to
enter into written agreements with Native American Tribes to provide
IV-D services, and that, where no agreement exists (as is the situation
here), cases should be excluded from the audit if the State's records
show that the absent parent lives and works on a reservation. 21/  Even
if we accepted the State's alleged standard requiring only living on the
reservation (rather than OCSE's standard requiring living and working on
the reservation), the State must still demonstrate that this standard
was properly invoked in individual cases.  In other words, the State
must demonstrate in each instance, as a threshold requirement, that the
absent parents were in fact living on a reservation for the period in
question.  Without adequate documentation to this effect, it clearly
would be unreasonable for the State to forego any action on these cases.

When we look at the State's documentation to determine whether the
absent parent was living on a reservation during the audit period, we
find that no case should be excluded; there is no evidence that the
State ascertained that any of these absent parents were living on the
reservation at the relevant time. 22/

The State challenged the inclusion by the auditors of eight cases in the
sample, arguing that documentation for these cases, which it provided
with its appeal brief, showed that the missing parent was living on a
reservation.  At best, the State documented in individual cases that the
absent parent was reported by the custodial parent as possibly living on
a reservation at some time prior to the audit period.  In each of the
cases even this information was at least one year old; in one case, the
last reported address was from 1981.  See State Ex. 20.

It is clear from our review that the cases here were simply shelved
indefinitely as soon as the custodial parent stated that the absent
parent might be living on a reservation.  Since the State has not shown
that it made any effort to verify that the absent parents in these cases
actually resided on a reservation (and indeed since the State lacked
even a current allegation from the custodial parent that the absent
parent might be living on an Indian reservation, we find no basis for
excluding these cases from the sample as Native American cases.

Moreover, the State argued only that its Native American policy would
preclude it from taking enforcement actions, not from attempting to
locate the absent parent.  All but one of the cases the State sought to
exclude were cases involving locating an absent parent.  This action
would not appear to be barred by tribal sovereignty.  Thus, this factor
is an additional basis for not excluding the locate cases at issue here.

    B.  Cases Opened During the Audit Period

The State contended that the auditors should have excluded without
further examination all randomly selected cases that were opened during
the review period covered by the follow-up review (the period between
October 1, 1986 through September 30, 1987).  (Although this argument
could have applied to the review period for the program results audit as
well, the State did not challenge any individual case findings covered
by that audit.)  The State claimed that, based on its reading of 45
C.F.R. 305.20, only cases that were open before an audit period began
were subject to review for compliance with State IV-D procedures. 23/
The State also argued that the audit was faulty because, although OCSE
articulated a policy regarding cases opened during the audit period in
the preamble to its final regulations, it never issued any regulation or
instruction to guide its auditors.  The State maintained that the
auditor in this case seemed uncertain about whether to include such
cases; according to the State, he noted on his worksheets when cases
were opened during the audit period, but ultimately did not exclude any
such cases on this basis.

The State did not cite to specific language in 45 C.F.R. 305.20 as
support for its position, and we can find no support in the provision
for the proposition that all cases opened during an audit period should
automatically be excluded from the sample.  In fact, the State admitted
that OCSE had publicly stated a contrary position.  OCSE stated in the
preamble to the final regulations, in its response to comments on the
proposed regulations --

 A case will be reviewed . . . unless the case . . . was opened
 near the end of the audit period such that time was insufficient
 to take an action . . . .

50 Fed. Reg. 40132.

While the State faulted OCSE for having set forth that policy only in
the preamble to its regulations, and not in a separate regulation or
audit guideline, OCSE's policy authorizes leniency not otherwise
provided by the regulations.  In referring to cases reviewed for
particular audit periods, 45 C.F.R. 305.20 reasonably covers all of the
cases opened prior to or during the audit period.  OCSE's preamble
exception allows a state to demonstrate that a case was opened so close
to the end of the period that there was insufficient time to take an
action otherwise required by the regulations.  The policy is therefore a
reasonable interpretation of the regulatory requirements and is
appropriately placed in the preamble.

Although the State argued that OCSE should have adopted a much broader
policy excluding all cases opened at any time during the audit period,
such a policy would clearly conflict with the basic purposes and
requirements of the statute.  If all new cases were excluded, as the
State urged here, perhaps the most significant aspect of the State's
performance during the corrective action period would not be evaluated.
In addition, adoption of the State's position would put a premium on
working on older cases first and would effectively doom applicants for
IV-D services to inaction until the fiscal year following their
application.  This is clearly contrary to the spirit of the 1984
Amendments.  We note that the State did not dispute that the IV-D
program demands and values prompt performance in newly opened cases.  The
State also did not argue that it was physically impossible to take
action; in fact, it provided evidence that it was able to initiate a
location action in as few as eight days.  See State Ex. 32.  Thus, we
reject the State's contentions concerning the validity of OCSE's policy
for newly opened cases. 24/

The State provided documentation for a few cases for which it received
credit for action, as well as for all "no action" cases that it wanted
excluded, stipulating that fairness required that, if its position on
recently opened cases were accepted, all cases opened during the audit
period should be excluded from the sample.  Having rejected the State's
broader position that all cases opened in the audit period should be
excluded, we reviewed the State's documentation for the "no action"
cases to see whether the auditor properly applied OCSE's policy that
cases should be excluded when they were opened so near to the end of the
audit period that insufficient time remained to take an action.

The audit period sampled ran from October 1, 1986 through September 30,
1987.  All but two of the cases briefed by the State were opened during
the first seven months of the audit period. 25/  In the documentation it
provided for some "action" cases, the State showed that occasionally it
was able, within this time, to at least send a letter to the custodial
parent, seeking information about the location of the absent parent
(State Ex. 41), or to send a letter to the postmaster for the absent
parent's address (State Ex. 32).  The auditor's notes indicate, however,
that often these cases were not even assigned to a caseworker during the
audit period, see, e.g., State Ex. 31, or that the State did not even
send a letter seeking basic information to the custodial parent, whose
address the State certainly knew, see, e.g., State Ex. 26.  The State
did not offer any argument that it was unable to take the minimal action
on these cases during the audit period.  We therefore conclude that
these cases were properly included in the sample by the auditor. 26/

There were two cases (one locate and one paternity) that we identified
as having been opened more than seven months into the audit period.  The
locate case, State Ex. 34, was opened on September 15, 1987, 15 days
before the end of the audit period.  No action was taken even though the
absent parent's post office address, social security number and date of
birth were all known.  The State here made no effort to demonstrate why
it lacked sufficient time to take action within the remaining audit
period.  As we noted above, the State's records showed that it was
capable of sending an initial contact letter within eight days of case
opening.  See State Ex. 32 (case opened 1/26/87, postmaster letter sent
on 2/3/87).  Consequently, in view of this evidence of the State's
ability to take action and the absence of any demonstration of
insufficient time for this particular case from the State, we conclude
that the auditor's judgment was correct.

As for the only paternity case (State Ex. 39) that we identified as
having been opened more than seven months into the audit period, we find
that it should have been excluded, but not because it was a newly opened
case.  Although the State identified the case as "opened during the
audit period" (State Appeal Br., p. 50), it actually seems not to have
been opened until 1988, according to the documents in the record before
us.  Since the auditor gave the State credit for action in this case,
however, and since the exclusion of this case from the sample would only
favor OCSE and not the State, we do not require a recomputation on that
basis.

    C.  Cases Deemed by the State to Be "Not Cost Effective"

Although the State did not specifically explain its rationale for
excluding this category of cases, at various points in its submissions
it identified particular cases as being properly excluded from the
sample because they allegedly should have been closed before the audit
period as "not cost effective." 27/  Notes on these cases made by State
employees subsequent to the review period indicated that by "not cost
effective," the State apparently meant that no action reasonably needed
to be taken because it was difficult to pursue the matter or the State
was unlikely to pursue the matter or because the client received AFDC
funds only for a short time.

The State did not identify any authority as support for its contention
that specific cases should have been closed before the audit period.
The Board supplied the State with documents about OCSE's case closing
policy that had been submitted in another case, and gave the State an
opportunity to comment on whether that policy was applicable to any of
the contested cases.  The State declined to file any such submission.
Moreover, the State did not even allege that these cases should have
been closed under its own policy or explain why they had not been closed
under the State policy.

Finally, we examined all three cases that allegedly should have been
closed before the audit period as "not cost effective."

As for the single paternity case, the only documentation provided by the
State for this case was the OCSE auditor's notes.  See State Ex. 29.
Those notes specify that the AFDC case was "certified" on November 1,
1984, the enforcement case was opened on January 21, 1985, and it was
still open on October 1, 1986.  The auditor noted that no activity had
been reported on the case since opening.  Id.

There are no documents for this case from the State's files, unlike many
of the other disputed cases.  We have only the State's assertion in its
appeal brief that--

  This was an AFDC case listed as no action.  This case
  had been closed in July, 1984 for being non-cost
  effective.  The case had properly been closed in July
  1984 and should have been excluded from the audit sample
  for that reason.

State Appeal Br., p. 47.

Obviously, the State's allegation could not be true if the case was not
opened until January 1985, as the audit notes indicate.  Since the
State's allegation conflicts with the only evidence it introduced into
the record about this case, we reject the State's position.

As for the two locate cases, the record indicates that these involved
clients who had been AFDC recipients for less than a year prior to the
audit period.  See State Ex. 23 (8 months); Ex. 24 (2 months).  Despite
the State's allegation that these cases should have been closed prior to
the audit period, the State did not close State Ex. 23 until October
1988, over one year after the audit period, and State Ex. 24 was closed
in March 1988, about six months after the audit period.  We therefore
reject the State's allegation about these cases since the allegation is
not based on any identified OCSE or State policy and conflicts with the
State's own subsequent treatment of these cases.

IV.  Only two "no action" cases were incorrectly determined.

The State argued that the auditor erred in finding that appropriate
action had not been taken in one paternity case, State Ex. 43, and in
one locate case, State Ex. 25.  We examined the documents provided by
the State in this appeal (although OCSE argued that they were offered
too late in the process; see note 22 above).  It is not clear whether
these documents were available to the auditors.  We agree with the State
that it did take action in both of these cases.  The State provided in
State Ex. 43 an acknowledgment of paternity dated July 6, 1987.
Moreover, documents provided with State Ex. 25 show that there was a
location effort made on this case during the audit period.
Consequently, we conclude that the State should be given credit for one
additional paternity action and one additional locate action.

In spite of these findings, however, we are not requiring OCSE to
recalculate the State's efficiency range for either criterion.  The
change from 12 to 13 actions out of 28 cases requiring paternity
services amounts to a change in the efficiency rate from 42.9 to 46.4
percent.  The change from 22 to 23 actions out of 50 cases requiring
locate services amounts to a change in the efficiency rate for that
criterion from 44.0 to 46.0 percent.  Even under the most favorable
calculation
methodology, this would not change the ultimate conclusion that the
State failed to meet the 75 percent standard for either criterion.
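
Purely for illustration, the arithmetic behind these rates can be
checked with a short calculation (a minimal sketch in Python; the
function and variable names are ours, not part of the record):

  # Efficiency rate: actions credited divided by cases requiring the
  # service, expressed as a percentage rounded to one decimal place.
  def efficiency_rate(actions, cases):
      return round(100.0 * actions / cases, 1)

  print(efficiency_rate(12, 28))  # 42.9 -- paternity, before credit
  print(efficiency_rate(13, 28))  # 46.4 -- paternity, with one more action
  print(efficiency_rate(22, 50))  # 44.0 -- locate, before credit
  print(efficiency_rate(23, 50))  # 46.0 -- locate, with one more action
  # All four rates remain well below the 75 percent standard.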


Conclusion

For the reasons discussed above, we uphold OCSE's decision to reduce by
one percent the State's AFDC funding for the one-year period beginning
April 1, 1987.

 

 

 Judith A. Ballard

 Alexander G. Teitz

 Donald F. Garrett
 Presiding Board Member

1.   In its notice of appeal, the State asserted that OCSE erred in
assessing penalties for two quarters not covered by the audit.  The
State did not, however, make any arguments in support of this assertion
in its later briefs, and did not dispute OCSE's assertion that the State
had apparently abandoned this position.  The statute calls for
imposition of the penalty beginning in the first quarter ending after
the State's corrective action period, which closed on June 23, 1987.
See section 403(h)(1)(A) and (h)(2)(C)(iii).  OCSE regulations limit the
penalty period to one year.  45 C.F.R. 305.100(a)(1).  If the State
passes the FY 1988 compliance review, OCSE will rescind the part of this
penalty that includes the first two quarters of that fiscal year, i.e.,
for the period October 1, 1987 through March 31, 1988.  See OCSE
Response Brief (Br.), pp. 11-12.

2.   The 1985 regulations also provided an expanded list of
service-related audit criteria for subsequent audit periods, and added
new performance-related indicators for use beginning with the FY 1988
audit period.

3.   This was not the State's first notice of problems with its program.
OCSE began compliance audits in 1977, and the record contains evidence
that the State's IV-D program was audited each year from 1977 through
1983.  OCSE Ex. 6.  Although these audits revealed serious problems with
the State's IV-D program, no penalty was ever imposed due to
Congressional moratoria.  The State was apprised of OCSE's findings each
time and promised efforts to improve its performance.

4.   We note that the auditors examined whether any efforts were made by
the State in these cases to attempt to locate absent parents and
establish paternity.  The success of these efforts, while noted for
statistical purposes, was not determinative as to whether the State was
found to be in substantial compliance; the State received credit for
"action" so long as it took some action consistent with the State's
written procedures.  See OCSE Response Brief, p. 29, n. 11; 50 Fed. Reg.
40132 (1985).

5.   In addition to the usual briefing provided by the Board's
procedural regulations, the Board provided an opportunity for oral
argument (see Transcript of August 22, 1989 Oral Argument) and scheduled
an evidentiary hearing to be held in Santa Fe, New Mexico, at the
State's request.  The State subsequently withdrew its request, and the
Board gave the State permission to provide affidavits from State
officials who would have testified at an evidentiary hearing.  The State
never submitted any such affidavits.

6.   Our conclusion here closely parallels our analysis of several
virtually identical arguments made by the parties in the Board's recent
decision, Ohio Department of Human Services, DAB No. 1202 (1990).  A
copy of that decision was furnished to the parties in this case for
comment on any issues that were applicable.  Neither party chose to
comment.

7.   In spite of the statutory language, the State argued that
legislative history of the 1984 Amendments shows that Congress intended
that OCSE's implementing regulations would have prospective effect only.
The legislative history on which the State relied, however, does not
refer to OCSE's implementation of the substantial compliance standard;
instead, it refers to the expectation by Congress that OCSE would issue
new regulations focusing on whether states were effectively attaining
program objectives (in addition to meeting the existing state plan
requirements).  S.REP. No. 378, 98th Cong., 2d Sess. 32-33 (1984).

8.   The existing regulations required the states to have and be
utilizing written procedures detailing step-by-step actions to be taken.
45 C.F.R. 305.1, 305.24(a), 305.25(a), 305.33; 45 C.F.R. Part 303
(1983).  Although no reduction had actually been imposed based on the
existing audit criteria, this was due to the moratoria.  The states had
no guarantee that Congress would continue to delay imposition of the
reductions.

9.   We note that the percentages given in OCSE Ex. 8 (a draft analysis
by OCSE of 1980 and 1981 audit results) are derived simply by dividing
the number of complying sample cases by the total number reviewed.  If
OCSE had instead used the same method for estimating compliance levels
it used in the 1984 and 1985 audits for all states (see our discussion
below), the compliance percentages shown on Exhibit 8 for the earlier
years would have been higher.  Moreover, contrary to what the State
argued, the report on audits for FYs 1984 and 1985 (State's Ex. 16)
does not show that the 75 percent standard was not attainable.  That
report shows that 21 states or territories met all the criteria
initially and at least 15 others met them after a corrective action
period.  (Some states had not yet had a follow-up review, and some have
appealed the results of their follow-up reviews to this Board.)  See
also OCSE Br., pp. 27-28.  In Maryland, the Secretary had acknowledged
that some errors in making eligibility determinations were unavoidable
due to the complex nature of the requirements.  Here, the State did not
argue that the service-related requirements were complex or that there
was any barrier to meeting those requirements which could not be
overcome.

10.   Moreover, although the State asserted that its failings were
technical, State Br., p. 32, it never justified its position in the
context of its overall performance or in the context of individual case
findings.

11.   For a systematic random sample, the auditor first selects a case
at random and then selects every nth case thereafter to achieve the
desired sample size.  For example, in a universe of 6,000 cases where a
sample size of 100 was desired, the auditor would select every sixtieth
case after the first randomly selected one.
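
Purely for illustration, the selection procedure described in this
footnote might be sketched as follows (a minimal sketch in Python using
the footnote's example of a 6,000-case universe and a 100-case sample;
the function name is ours):

  import random

  # Systematic random sample: choose a random starting case within the
  # first interval, then take every nth case thereafter, where
  # n = universe size divided by sample size (here 6000 // 100 = 60).
  def systematic_sample(universe_size, sample_size):
      interval = universe_size // sample_size
      start = random.randrange(interval)
      return [start + i * interval for i in range(sample_size)]

  sampled = systematic_sample(6000, 100)  # every sixtieth case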

12.   As we discuss below, however, OCSE maintained that these
regulations were interpretive rather than legislative, so that its use
of notice and comment procedures in promulgating them was optional, not
mandatory.

13.   Thus, the rules do not change what the states are required to do
in administering their programs, and if anything, lessen the burden of
an audit on states by permitting sampling rather than 100 percent
review.

14.   For example, in a case involving the District of Columbia, the
District is contending that it should be credited with substantial
compliance with the locate criterion based on its submission of 21,000
names to the Federal Parent Locator Service during its corrective action
period.  See Board Docket No. 89-229 (appeal brief).

15.   Although OCSE's expert's affidavit stated that scatter diagrams
were prepared, there were none attached to his October 1 Report.  It was
the State's second expert who supplied the scatter diagrams at our
request.  See State's December 11, 1990 Letter with Attachments.

16.   The State's expert described the following two assumptions (which
he called "deterministic linear assumptions"):

 (1)  the proportion of cases requiring action in a particular
 subdivision relative to the total number of cases in that
 subdivision is constant for all political subdivisions within
 strata (homogeneity of ratios of cases requiring action relative
 to total case load for all political subdivisions within strata)
 and

 (2)  the proportion of cases requiring action in a stratum
 relative to the total number of cases in that stratum is
 constant between strata (homogeneity of ratios of cases
 requiring action relative to the total caseload between strata).

November 19, 1990 Report at 2.  Stated differently, this means that the
total number of cases requiring review for a particular criterion could
always be determined by multiplying the total caseload by a constant, so
that a graph plotting the two numbers against each other would show a
straight line.
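
Expressed as a formula (our notation, not the expert's), the assumption
is that for every political subdivision i,

  \[ \frac{a_i}{t_i} = c \quad\text{for all } i,
     \qquad\text{hence}\qquad a_i = c\,t_i , \]

where a_i is the number of cases requiring action in subdivision i, t_i
is that subdivision's total caseload, and c is a constant.  Plotted on a
graph, the points (t_i, a_i) would then fall on a straight line through
the origin.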

17.   Since OCSE's expert said that Methodology #2A was advanced solely
in support of Methodology #2, we do not discuss Methodology #2A
separately.

18.   He offered the graphs from the three criteria involved in the Ohio
case, as well as similar graphs for the two criteria at issue in the
present case to show that this deterministic relationship did not exist.

19.   On the other hand, the evidence is inconclusive as to whether
OCSE's assumption of a high positive correlation is valid for the locate
criterion, and we do not rely on Methodology #2 as evidence that the
State failed that criterion.  The State's expert provided a graph
showing a curve for the three points on the scatter diagram for the
locate criterion, and may thereby have drawn into question OCSE's
finding of a high positive correlation for this criterion.  OCSE's
expert never provided actual calculations of the relationship and never
discussed the implications of the State's scatter diagram.
Consequently, the record is insufficient for us to make a finding
concerning the validity of Methodology #2 for this criterion.

20.   In its general arguments on this subject, the State indicated that
one paternity and five locate "no action" cases fit this category for
the follow-up audit period.  State Appeal Br., p. 38.  In the subsequent
case-by-case discussion, however, the State identified eight cases as
being excludable on this basis.  State Appeal Br., pp. 41-53.  Those
cases were documented in State Exs. 17, 18, 19, 20, 21, 44, 45, and 47.
The State did not provide supporting records to contest any cases from
the program results audit on this or any other basis.

21.   This is consistent with the State's Deputy Attorney General's 1982
memorandum (introduced as OCSE Ex. 11) that states that there is no
legal impediment to the State's establishment of paternity and
enforcement of child support orders where the defendant lives or works
off the reservation.

22.   OCSE argued that the State should not be permitted to challenge
sampled cases here or to provide further documentation, since it did not
make these arguments in response to the draft audit report.  We do not
agree.  There was no notice to the State that it might be waiving
arguments not made immediately in response to the draft audit report.
Moreover, Board regulations contemplate a de novo proceeding, and this
Board has routinely permitted both parties to raise new arguments during
an appeal before it, so long as the new matters are raised early enough
in the proceedings to avoid undue prejudice to the other party or undue
delay in the process.  Furthermore, this policy operated to OCSE's
benefit when it became apparent that corrections to the statistical
calculations were needed.  We therefore consider all of the State's
arguments in connection with specific cases that it briefed.

23.   The State admitted that in order for it to be consistent on this
position, "action" cases fitting into this category would also have to
be excluded from the sample; it stated that it was willing to so
stipulate and provided documentation for some examples.  (It did not do
the same for Native American cases, however, as the Agency pointed out.)

24.   Moreover, contrary to the State's assertions, the auditor's
worksheets do not show confusion as to whether newly opened cases should
be included.  While the auditor did use a space following "Reason
excluded" to place a footnote indicating that the case was opened during
the audit period, he obviously decided to include these cases.

25.   There were a number of discrepancies between the State's
characterization of particular cases in its brief and the documents
which it produced in support of its argument.  For example, the State
contended that State Ex. 47 was opened on November 15, 1987; a close
examination of the documents shows that the correct date is January 15,
1987.  We examined all of the State's documents carefully and accepted
the dates on them as the best evidence as to when the case was opened.
Thus, we used the opening dates indicated on the documents provided for
State Exs. 26, 35, 39, 42, and 47, as opposed to the dates alleged by
the State.

26.   This ruling covers State Exs. 22, 26, 27, 28, 30, 31, 33, 35, 36,
38, 42, 46, and 47.

27.   These cases were State Exs. 23, 24, and 29.