Quality Assurance Systems in Agency Adjudication

Type: Recommendation
Publication Date: December 31, 2021

Executive Summary

This Recommendation provides guidance for agencies in developing and implementing quality assurance (QA) systems to proactively identify problems in case adjudication, including misapplied legal standards, inconsistent applications of the law by different adjudicators, procedural violations, and systemic barriers to participation in adjudicatory proceedings. Identifying and correcting such problems promotes fairness, the perception of fairness, accuracy, inter-decisional consistency, timeliness, efficiency, and other relevant goals. Among other things, it recommends:

  • Agencies should systematically review the work of adjudicators and all related personnel who have important roles in case adjudication.
  • Agencies should ensure that QA personnel are, and are perceived as, impartial, and that they have sufficient expertise and time to perform their functions.
  • Agencies should consider at what points in the adjudication process QA review should occur. 
  • Agencies should consider a layered approach to QA that employs more than one methodology, such as peer review, random sampling, and targeted review. 
  • Agencies should consider whether to use data analytics and artificial intelligence tools to help identify quality issues.
  • Agencies should use data and other information obtained from QA systems to provide individualized feedback for adjudicators and other personnel, as well as to identify systemic recurring or emerging problems for policymakers and other personnel, as appropriate.
  • Agencies should periodically assess whether their QA systems achieve the goals they are intended to accomplish.

See also: Recommendation 2020-3, Agency Appellate Systems; Recommendation 2018-3, Electronic Case Management in Federal Administrative Adjudication; Recommendation 73-3, Quality Assurance Systems in the Adjudication of Claims of Entitlement to Benefits or Compensation.


Note: This summary is prepared by staff within the Office of the Chairman to help readers understand the Recommendation adopted by the Assembly, which appears in full below. Only the Assembly is authorized to make recommendations to administrative agencies, the President, Congress, or the Judicial Conference of the United States (5 U.S.C. § 595(a)(1)).

Recommendation of the ACUS Assembly

A quality assurance system is an internal review mechanism that agencies use to detect and remedy both problems in individual adjudications and systemic problems in agency adjudicative programs. Through well-designed and well-implemented quality assurance systems, agencies can proactively identify both problems in individual cases and systemic problems, including misapplied legal standards, inconsistent applications of the law by different adjudicators, procedural violations, and systemic barriers to participation in adjudicatory proceedings (such as denials of reasonable accommodation). Identifying such problems enables agencies to ensure adherence to their own policies and improve the fairness (and perception of fairness), accuracy, inter-decisional consistency, timeliness, and efficiency of their adjudicative programs.[1]

In 1973, the Administrative Conference recommended the use of quality assurance systems to evaluate the accuracy, timeliness, and fairness of adjudication of claims for public benefits or compensation.[2] Since then, many agencies, including those that adjudicate other types of matters, have implemented or considered implementing quality assurance systems, often to supplement other internal review mechanisms such as agency appellate systems.[3] Unlike agencies’ appellate systems, quality assurance systems are not primarily concerned with error correction in individual cases, and they may assess numerous adjudicatory characteristics that are not typically subject to appellate review, such as effective case management. Nor are they avenues for collateral attack on individual adjudicatory dispositions. Quality assurance systems are also distinct from agencies’ procedures for handling allegations of judicial misconduct. This Recommendation accounts for these developments and provides further guidance for agencies that may wish to implement new quality assurance systems or improve existing ones.

How agencies structure their quality assurance systems can have important consequences for their success. For example, quality assurance systems that overemphasize timeliness as a measure of quality may overlook problems of decisional accuracy. Quality assurance personnel must have the expertise and judgment necessary to accurately and impartially perform their responsibilities. Quality assurance personnel must use methods for selecting and reviewing cases that allow them to effectively identify case-specific and systemic problems. Agencies must determine how they will use information collected through quality assurance systems to correct problems that threaten the fairness (and perception of fairness), accuracy, inter-decisional consistency, timeliness, and efficiency of their adjudicative programs. Agencies also must design quality assurance systems to comply with all applicable requirements, such as the statutory prohibition against rating the job performance of or granting any monetary or honorary award to an administrative law judge.[4]

There are many methods of quality review that agencies can use, independently or in combination, depending upon the needs and goals of their adjudicative programs. For example, agencies can adopt a peer review process by which adjudicators review other adjudicators’ decisions and provide feedback before decisions are issued. Agencies can prepare and circulate regular reports for internal use that describe systemic trends identified by quality assurance personnel. Agencies can also use information from quality assurance systems to identify training needs and clarify or improve policies. 

Agencies, particularly those with large caseloads, may also benefit from using data captured in electronic case management systems. Through advanced data analytics and artificial intelligence techniques (e.g., machine-learning algorithms), agencies can use such data to rapidly and efficiently identify anomalies and systemic trends.[5]
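
For illustration only, the following minimal sketch shows one way such an analysis might look, assuming a case-management export with hypothetical fields adjudicator_id and granted (1 if relief was granted). It flags adjudicators whose grant rates deviate sharply from the program-wide average so that quality assurance personnel can take a closer look; it is a sketch under those assumptions, not a description of any agency’s system.

    import pandas as pd

    def flag_outlier_adjudicators(cases: pd.DataFrame, z_threshold: float = 2.5) -> pd.DataFrame:
        # Grant rate and decided-case count per adjudicator (hypothetical field names).
        rates = cases.groupby("adjudicator_id")["granted"].agg(["mean", "count"])
        overall_mean = rates["mean"].mean()
        overall_std = rates["mean"].std()
        rates["z_score"] = (rates["mean"] - overall_mean) / overall_std
        # Require a minimum caseload so small samples do not dominate the flags,
        # and return only adjudicators whose rates are statistical outliers.
        return rates[(rates["count"] >= 50) & (rates["z_score"].abs() >= z_threshold)]

Any case flagged this way would simply be queued for human review by quality assurance personnel; an unusual grant rate is a prompt for examination, not evidence of error.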

This Recommendation recognizes that agencies have different quality assurance needs and available resources. What works best for one agency may not work for another. The quality assurance techniques agencies may use may also be constrained by law. Agencies must take into account their own unique circumstances when implementing the best practices that follow.

RECOMMENDATION

Review and Development of Quality Assurance Standards

1. Agencies with adjudicative programs that do not have quality assurance systems—that is, practices for assessing and improving the quality of decisions in adjudicative programs—should consider developing such systems to promote fairness, the perception of fairness, accuracy, inter-decisional consistency, timeliness, efficiency, and other goals relevant to their adjudicative programs.

2. Agencies with adjudicative programs that have quality assurance systems should review them in light of the recommendations below.

3. Agencies’ quality assurance systems should assess whether decisions and decision-making processes: 

a. Promote fairness and the appearance of fairness;

b. Accurately determine the facts of the individual matters;

c. Correctly apply the law to the facts of the individual matters;

d. Comply with all applicable requirements;

e. Are completed in a timely and efficient manner; and

f. Are consistent across all adjudications of the same type. 

4. Agencies should consider both reviews that address how decisions would likely fare before reviewing tribunals and reviews of adjudicators’ decisional reasoning, which address policy compliance, consistency, and fairness.

5. A quality assurance system should review the work of adjudicators and all related personnel who have important roles in the adjudication of cases, such as attorneys who assist in drafting decisions, interpreters who assist in hearings, and staff who assist in developing evidence.

6. Analyzing decisions of agency appellate and judicial review bodies may help quality assurance personnel assess whether the adjudicatory process is meeting the goals outlined in Paragraph 3. But agencies should not rely solely on such decisions to set and assess standards of quality because appealed cases may not be representative of all adjudications.

Quality Assurance Personnel

7. Agencies should ensure that quality assurance personnel can perform their functions in a manner that is, and is perceived as, impartial, including being able to perform such functions without pressure, interference, or expectation of employment consequences from the personnel whose work they review.

8. Agencies should ensure that quality assurance personnel understand all applicable substantive and procedural requirements and have the expertise necessary to review the work of all personnel who have important roles in adjudicating cases.

9. Agencies should ensure that quality assurance personnel have sufficient time to fully and fairly perform their assigned functions.

10. Agencies should consider whether quality assurance systems should be staffed by permanent or temporary personnel, or some combination of the two. Personnel who perform quality assurance functions on a permanent basis may gain more experience and institutional knowledge over time than personnel who perform them on a temporary basis. Personnel who perform quality assurance on a temporary basis, however, may be more likely to contribute different experiences and new perspectives.

Timing of and Process for Quality Assurance Review

11. Agencies should consider at what points in the adjudication process quality assurance review should occur. In some cases, review that occurs before adjudicators issue their decisions, or during a period when agency appellate review is available, could allow errors to be corrected before decisions take effect. However, agencies should take care that pre-disposition review does not interfere with adjudicators’ qualified decisional independence and comports with applicable restrictions governing ex parte communications, internal separation of decisional and adversarial personnel, and decision making based on an exclusive record.

12. Agencies should consider implementing peer review programs in which adjudicators can provide feedback to other adjudicators.

13. Agencies should consider a layered approach to quality assurance that employs more than one methodology. As resources allow, this may include formal quality assessments and informal peer review on an individual basis, sampling and targeted case selection on a systemic basis, and case management systems with automated adjudication support tools. 

14. In selecting cases for quality assurance review, agencies should consider the following methods (a brief sampling sketch follows this list):

a. Review of every case, which may be useful for agencies that adjudicate a small number of cases but impractical for agencies that adjudicate a high volume of cases;

b. Random sampling, which can be more efficient for agencies that decide a high volume of cases but may cause quality assurance personnel to spend too much time reviewing cases that are unlikely to present issues of concern;

c. Stratified random sampling, a type of random sampling that over-samples cases based on chosen characteristics, which may help quality assurance personnel focus on specific legal issues or factual circumstances associated with known problems, but may systematically miss certain types of problems; and

d. Targeted selection of cases, which allows agencies to directly select decisions that contain specific case characteristics and may help agencies study known problems but may miss identifying other possible problems.
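
For illustration only, the sketch below shows how the random and stratified sampling approaches in Paragraphs 14(b) and 14(c) might be drawn from a docket export; the field name case_type and the sample sizes are hypothetical, and an agency would substitute whatever characteristics it actually tracks.

    import pandas as pd

    def simple_random_sample(cases: pd.DataFrame, n: int, seed: int = 0) -> pd.DataFrame:
        # Draw n cases uniformly at random from the docket (Paragraph 14(b)).
        return cases.sample(n=n, random_state=seed)

    def stratified_sample(cases: pd.DataFrame, per_stratum: int,
                          stratum_col: str = "case_type", seed: int = 0) -> pd.DataFrame:
        # Draw up to per_stratum cases from each stratum (Paragraph 14(c)),
        # which over-samples categories that are rare on the overall docket.
        return (cases.groupby(stratum_col, group_keys=False)
                     .apply(lambda g: g.sample(n=min(per_stratum, len(g)), random_state=seed)))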

Data Collection and Analysis

15. Agencies, particularly those with large caseloads, should consider what data would be useful and how data could be used for quality assurance purposes. Agencies should ensure that, for each case, an electronic case management or other system includes the following information (an illustrative record layout follows this list):

a. The identities of adjudicators and any personnel who assisted in evaluating evidence, writing decisions, or performing other case-processing tasks;

b. The procedural history of the case, including any actions and outcomes on administrative or judicial review;

c. The issues presented in the case and how they were resolved; and

d. Any other data the agency determines to be helpful.
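
For illustration only, the sketch below shows one possible way to represent the fields listed in Paragraph 15 as a structured record; the field names are hypothetical and are not drawn from any particular agency system.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class CaseRecord:
        case_id: str
        adjudicator_id: str                                           # presiding adjudicator
        support_staff_ids: list[str] = field(default_factory=list)   # decision writers, evidence developers, interpreters
        procedural_history: list[str] = field(default_factory=list)  # filings, hearings, appellate or judicial actions and outcomes
        issues: dict[str, str] = field(default_factory=dict)         # issue presented -> how it was resolved
        appellate_outcome: Optional[str] = None                      # e.g., "affirmed", "remanded", "reversed"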

16. Agencies should regularly evaluate their electronic case management or other systems to ensure they are collecting the data necessary to assess and improve the quality of decisions in their programs.

17. Agencies, particularly those with large caseloads, should consider whether to use data analytics and artificial intelligence (AI) tools to help quality assurance personnel identify potential errors or other quality issues. Agencies should ensure that they have the technical capacity, expertise, and data infrastructure necessary to build and deploy such tools; that any data analytics or AI tools the agencies use support, but do not displace, evaluation and judgment by quality assurance personnel; and that such systems comply with legal requirements for privacy and security and do not create or exacerbate harmful biases.
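
For illustration only, the sketch below shows one way such a tool could support, rather than displace, human judgment: an off-the-shelf anomaly detector (here, scikit-learn’s IsolationForest) marks statistically unusual cases for a quality assurance review queue, and no disposition is changed automatically. The feature names are hypothetical.

    import pandas as pd
    from sklearn.ensemble import IsolationForest

    def build_review_queue(cases: pd.DataFrame,
                           feature_cols=("days_to_decision", "evidence_items", "hearing_minutes"),
                           contamination: float = 0.02) -> pd.DataFrame:
        # Fit an unsupervised anomaly detector on numeric case features and return
        # the flagged cases. The flags only prioritize human QA review; they do not
        # determine whether any decision was correct.
        model = IsolationForest(contamination=contamination, random_state=0)
        flags = model.fit_predict(cases[list(feature_cols)])
        return cases[flags == -1]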

Use of Quality Assurance Data and Findings

18. Agencies should not use information gathered through quality assurance systems in ways that could improperly influence decision making or personnel matters.

19. Agencies should, consistent with Paragraph 11, provide individualized feedback, within a reasonable amount of time, to adjudicators and other personnel who assist in evaluating evidence, writing decisions, or performing other case-processing tasks, and should include any relevant positive and negative feedback.

20. Agencies should establish regular communication mechanisms to facilitate the dissemination of various types of quality assurance information within the agency. Agencies should:

a. Communicate information about systemic recurring or emerging problems identified by quality assurance systems to all personnel who participate in the decision-making process and to training personnel; 

b. Communicate, as appropriate, with agency rule-writers and operations support personnel to allow them to consider whether recurring problems identified by quality assurance systems should be addressed or clarified by rules, operational guidance, or decision support tools; and

c. Consider whether to communicate information to appellate adjudicators or other agency officials who are authorized to remedy problems identified by quality assurance systems in issued decisions.

Public Disclosure and Transparency

21. Agencies should provide access on their websites to all rules and any associated explanatory materials that apply to quality assurance systems, including standards for evaluating the quality of agency decisions and decision-making processes.

22. Agencies should consider whether to publicly disclose data in case management systems in a de-identified form (i.e., with all personally identifiable information removed) to enable continued research by individuals outside of the agency.

Assessment and Oversight

23. Agencies with quality assurance systems should assess periodically whether those systems achieve the goals they were intended to accomplish, including by affirmatively soliciting feedback from the public, adjudicators, and other agency personnel concerning the functioning of their quality assurance systems.


[1] Daniel E. Ho, David Marcus & Gerald K. Ray, Quality Assurance Systems in Agency Adjudication (Nov. 30, 2021) (report to the Admin. Conf. of the U.S.).

[2] Admin. Conf. of the U.S., Recommendation 73-3, Quality Assurance Systems in the Adjudication of Claims of Entitlement to Benefits or Compensation, 38 Fed. Reg. 16840 (June 27, 1973).

[3] Admin. Conf. of the U.S., Recommendation 2020-3, Agency Appellate Systems, 86 Fed. Reg. 6618 (Jan. 22, 2021).

[4] See, e.g., 5 U.S.C. § 4301; 5 C.F.R. § 930.206.

[5] Admin. Conf. of the U.S., Statement #20, Agency Use of Artificial Intelligence, 86 Fed. Reg. 6616 (Jan. 22, 2021); Admin. Conf. of the U.S., Recommendation 2018-3, Electronic Case Management in Federal Administrative Adjudication, 83 Fed. Reg. 30686 (June 29, 2018).

Recommended Citation: Admin. Conf. of the U.S., Recommendation 2021-10, Quality Assurance Systems in Agency Adjudication, 87 Fed. Reg. 1722 (Jan. 12, 2022).