Managing Mass, Computer-Generated, and Falsely Attributed Comments

Type: Recommendation
Publication Date: June 30, 2021

Executive Summary

With some exceptions, the Administrative Procedure Act requires agencies to give interested persons an opportunity to participate in rulemakings through submission of written data, views, or arguments. Agencies must consider all relevant submissions and make them available in a public docket. This Recommendation identifies strategies for handling three types of comments that present distinctive management challenges: mass, computer-generated, and falsely attributed comments. Among other things, it recommends:

  • Agencies should manage mass comments by using tools that allow them to de-duplicate comments, encouraging the inclusion of multiple signatures on a single comment, and considering various ways to display comments, such as posting a single representative sample of nearly identical comments received.
  • Agencies should manage computer-generated comments by flagging comments they have identified as computer-generated, displaying and storing them separately from other comments, and using reCAPTCHA or similar tools to ensure comments are submitted by humans.
  • Agencies should manage falsely attributed comments by giving people the opportunity to flag for the agency that they have been falsely attributed in a comment and to request removal of the comment. 
  • Agencies should promote transparency in the comment process by making their policies pertaining to mass, computer-generated, and falsely attributed comments publicly available.

See also: Recommendation 2018-7, Public Engagement in Rulemaking; Recommendation 2013-5, Social Media in Rulemaking; Recommendation 2011-8, Agency Innovations in eRulemaking; Recommendation 2011-2, Rulemaking Comments

This summary is prepared by the Office of the Chair to help readers understand the Recommendation adopted by the Assembly, which appears in full below.


Recommendation of the ACUS Assembly

Under the Administrative Procedure Act (APA), agencies must give members of the public notice of proposed rules and the opportunity to offer their “data, views, or arguments” for the agencies’ consideration.[1] For each proposed rule subject to these notice-and-comment procedures, agencies create and maintain an online public rulemaking docket in which they collect and publish the comments they receive along with other publicly available information about the proposed rule.[2] Agencies must then process, read, and analyze the comments received. The APA requires agencies to consider the “relevant matter presented” in the comments received and to provide a “concise general statement of [the rule’s] basis and purpose.”[3] When a rule is challenged on judicial review, courts have required agencies to demonstrate that they have considered and responded to any comment that raises a significant issue.[4] The notice-and-comment process is an important opportunity for the public to provide input on a proposed rule and for the agency to “avoid errors and make a more informed decision” on its rulemaking.[5]

Technological advances have expanded the public’s access to agencies’ online rulemaking dockets and made it easier for the public to comment on proposed rules in ways that the Administrative Conference has encouraged.[6] At the same time, in recent high-profile rulemakings, members of the public have submitted comments in new ways or in numbers that can challenge agencies’ current approaches to processing these comments or managing their online rulemaking dockets.

Agencies have confronted three types of comments that present distinctive management challenges: (1) mass comments, (2) computer-generated comments, and (3) falsely attributed comments. For the purposes of this Recommendation, mass comments are comments submitted in large volumes by members of the public, including the organized submission of identical or substantively identical comments. Computer-generated comments are comments whose substantive content has been generated by computer software rather than by humans.[7] Falsely attributed comments are comments attributed to people who did not submit them.

These three types of comments, which have been the subject of recent reports by both federal[8] and state[9] authorities, can raise challenges for agencies in processing, reading, and analyzing the comments they receive in some rulemakings. If not managed well, the processing of these comments can contribute to rulemaking delays or can raise other practical or legal concerns for agencies to consider.

In addressing the three types of comments in a single recommendation, the Conference does not mean to suggest that agencies should treat these comments in the same way. Rather, the Conference is addressing these comments in the same Recommendation because, despite their differences, they can present similar or even overlapping management concerns during the rulemaking process. In some cases, agencies may also confront all three types of comments in the same rulemaking.

The challenges presented by these three types of comments are by no means identical. With mass comments, agencies may encounter processing or cataloging challenges simply as a result of the volume as well as the identical or substantively identical content of some comments they receive. Without the requisite tools, agencies may also find it difficult or time-consuming to digest or analyze the overall content of all comments they receive.

In contrast with mass comments, computer-generated comments and falsely attributed comments may mislead an agency or raise issues under the APA and other statutes. One particular problem that agencies may encounter is distinguishing computer-generated comments from comments written by humans. Computer-generated comments may also raise potential issues for agencies as a result of the APA’s provision for the submission of comments by “interested persons.”[10] Falsely attributed comments can harm people whose identities are appropriated and may create the possibility of prosecution under state or federal criminal law. False attribution may also deceive agencies or diminish the informational value of a comment, especially when the commenter claims to have situational knowledge or the identity of the commenter is otherwise relevant. The informational value that both of these types of comments provide to agencies is likely to be limited or at least different from comments that have been neither computer-generated nor falsely attributed.

This Recommendation is limited to how agencies can better manage the processing challenges associated with mass, computer-generated, and falsely attributed comments.[11] By addressing these processing challenges, the Recommendation is not intended to imply that widespread participation in the rulemaking process, including via mass comments, is problematic. Indeed, the Conference has explicitly endorsed widespread public participation on multiple occasions,[12] and this Recommendation should help agencies cast a wide net when seeking input from all individuals and groups affected by a rule. The Recommendation aims to enhance agencies’ ability to process comments they receive in the most efficient way possible and to ensure that the rulemaking process is transparent to prospective commenters and the public more broadly.

Agencies can advance the goals of public participation by being transparent about their comment policies or practices and by providing educational information about public involvement in the rulemaking process.[13] Agencies’ ability to process comments can also be enhanced by digital technologies. As part of its eRulemaking Program, for example, the General Services Administration (GSA) has implemented technologies on the Regulations.gov platform that make it easier for agencies to verify that a commenter is a human being.[14] GSA’s Regulations.gov platform also includes an application programming interface (API)—a feature of a computer system that enables different systems to communicate with it—to facilitate mass comment submission.[15] This technology platform allows partner agencies to better manage comments from identifiable entities that submit large volumes of comments. Some federal agencies also use a tool, sometimes referred to as de-duplication software, to identify and group identical or substantively identical comments.
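
The “de-duplication software” described above can be understood as grouping comments whose text is identical once trivial formatting differences are removed. The sketch below is a minimal illustration of that idea, not GSA’s tool or any agency’s actual software; production tools typically add near-duplicate detection (for example, text shingling or cosine similarity) so that form letters with small edits are still grouped together, and the docket excerpt shown is hypothetical.

```python
import hashlib
import re
from collections import defaultdict

def normalize(text: str) -> str:
    """Lower-case the comment, strip punctuation, and collapse whitespace so that
    trivially different copies of the same form letter compare as equal."""
    text = re.sub(r"[^\w\s]", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def group_duplicates(comments: dict) -> dict:
    """Group comment IDs by a hash of their normalized text; any group with more
    than one ID is a set of identical (post-normalization) comments."""
    groups = defaultdict(list)
    for comment_id, text in comments.items():
        digest = hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()
        groups[digest].append(comment_id)
    return dict(groups)

# Hypothetical docket excerpt: two copies of a form letter and one unique comment.
docket = {
    "C-0001": "I support the proposed rule.",
    "C-0002": "I SUPPORT the proposed rule!!",
    "C-0003": "The cost estimate omits compliance data from small firms.",
}
for ids in group_duplicates(docket).values():
    print(ids)  # prints ['C-0001', 'C-0002'] and then ['C-0003']
```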

New software and technologies for managing public comments will likely emerge, and agencies will need to stay apprised of developments that could enhance both comment processing and public participation in rulemaking. Agencies might also consider adopting alternative methods for encouraging public participation that augment the notice-and-comment process, particularly to the extent that doing so ameliorates some of the management challenges described above.[16]

Not all agencies will encounter mass, computer-generated, or falsely attributed comments. But some agencies have confronted all three, sometimes in the same rulemaking. In offering the best practices that follow, the Conference recognizes that agency needs and resources will vary. For this reason, agencies should tailor the best practices in this Recommendation to their particular rulemaking programs and the types of comments they receive or expect to receive.

RECOMMENDATION

Managing Mass Comments

1. The General Services Administration’s (GSA) eRulemaking Program should provide a common de-duplication tool for agencies to use, although GSA should allow agencies to modify the de-duplication tool to fit their needs or to use another tool, as appropriate. When agencies find it helpful to use other software tools to perform de-duplication or to extract information from a large number of comments, they should use reliable and appropriate software. Such software should provide agencies with enhanced search options for identifying the unique content of comments, comparable to the technologies used by commercial legal databases such as Westlaw and LexisNexis.

2. To enable easier public navigation of online rulemaking dockets, agencies may welcome any person or entity organizing mass comments to submit a single comment bearing multiple signatures rather than separate identical or substantively identical comments.

3. Agencies may wish to consider alternative approaches to managing the display of comments online, such as posting only a single representative example of identical comments in the online rulemaking docket or breaking out and posting only non-identical content in the docket, taking into consideration the importance to members of the public of being able to verify that their comments were received and placed in the agency record. When agencies decide not to display all identical comments online, they should provide publicly available explanations of their actions, of how members of the public can verify the receipt of individual comments or locate identical comments in the docket, and of the criteria for deciding which comments to display.

4. When an agency decides not to include all identical or substantively identical comments in its online rulemaking docket to improve the navigability of the docket, it should ensure that any reported total number of comments (such as in Regulations.gov or in the preambles to final rules) includes the number of identical or substantively identical comments. If resources permit, agencies should separately report the total number of identical or substantively identical comments they receive. Agencies should also consider providing an opportunity for interested members of the public to obtain or access all comments received.

Managing Computer-Generated Comments

5. To the extent feasible, agencies should flag any comments they have identified as computer-generated or display or store them separately from other comments. If an agency flags a comment as computer-generated, or displays or stores it separately from the online rulemaking docket, the agency should note its action in the docket. The agency may also choose to notify the submitter directly if doing so does not violate any relevant policy prohibiting direct contact with senders of “spam” or similar communications.

6. Agencies that operate their own commenting platforms should consider using technology that verifies that a commenter is a human being, such as reCAPTCHA or a similar identity-proofing tool (an illustrative sketch of such a check follows this subsection). The eRulemaking Program should retain this functionality.

7. When publishing a final rule, agencies should note any comments on which they rely that they know are computer-generated and state whether they removed from the docket any comments they identified as computer-generated.
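
As an illustration of the kind of human-verification check contemplated in paragraph 6, the sketch below shows a common server-side pattern for validating a reCAPTCHA token. It is a minimal sketch only: the intake-handler names in the trailing comment are hypothetical, and agencies that use a different identity-proofing service would substitute that service’s verification call.

```python
import requests  # third-party HTTP client

RECAPTCHA_VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def commenter_is_human(captcha_token: str, secret_key: str) -> bool:
    """Ask the reCAPTCHA verification service whether the token returned by the
    commenting form corresponds to a successfully completed challenge."""
    resp = requests.post(
        RECAPTCHA_VERIFY_URL,
        data={"secret": secret_key, "response": captcha_token},
        timeout=10,
    )
    resp.raise_for_status()
    return bool(resp.json().get("success", False))

# Hypothetical use inside a comment-intake handler:
# if not commenter_is_human(form["g-recaptcha-response"], settings.RECAPTCHA_SECRET):
#     flag_submission_for_review(comment)  # flag rather than silently drop (see paragraph 5)
```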

Managing Falsely Attributed Comments

8. Agencies should provide opportunities (including after the comment deadline) for individuals whose names or identifying information have been attached to comments they did not submit to identify such comments and to request that they be anonymized or removed from the online rulemaking docket.

9. If an agency flags a comment as falsely attributed or removes such a comment from the online rulemaking docket, it should note its action in the docket. Agencies may also choose to notify the purported submitter directly if doing so does not violate any agency policy.

10. If an agency relies on a comment it knows is falsely attributed, it should include an anonymized version of that comment in its online rulemaking docket. When publishing a final rule, agencies should note any comments on which they rely that are falsely attributed and should state whether they removed from the docket any falsely attributed comments.

Enhancing Agency Transparency in the Comment Process

11. Agencies should inform the public about their policies concerning the posting and use of mass, computer-generated, and falsely attributed comments. These policies should take into account the meaningfulness of the public’s opportunity to participate in the rulemaking process and should balance goals such as user-friendliness, transparency, and informational completeness. In their policies, agencies may provide for exceptions in appropriate circumstances.

12. Agencies and relevant coordinating bodies (such as GSA’s eRulemaking Program, the Office of Information and Regulatory Affairs, and any other governmental bodies that address common rulemaking issues) should consider providing publicly available materials that explain to prospective commenters what types of responses they anticipate would be most useful, while also welcoming any other comments that members of the public wish to submit and remaining open to learning from them. These materials could be presented in various formats—such as videos or FAQs—to reach different audiences. They may also include statements, whether within the notice of proposed rulemaking for a given rule or on agencies’ websites, that explain the purpose of the comment process and make clear that agencies seriously consider any relevant public comment from a person or organization.

13. To encourage the most relevant submissions, agencies that have specific questions or are aware of specific information that may be useful should identify those questions or such information in their notices of proposed rulemaking.

Additional Opportunities for Public Participation

14. Agencies and relevant coordinating bodies should stay abreast of new technologies for facilitating informative public participation in rulemakings. These technologies may help agencies to process mass comments or identify and process computer-generated and falsely attributed comments. In addition, new technologies may offer new opportunities to engage the public, both as part of or as a supplement to the notice-and-comment process. Such opportunities may help ensure that agencies receive input from communities that may not otherwise have an opportunity to participate in the conventional comment process.

Coordination and Training

15. Agencies should work closely with relevant coordinating bodies to improve existing technologies and develop new technologies to address issues associated with mass, computer-generated, and falsely attributed comments. Agencies and relevant coordinating bodies should share best practices and relevant innovations for addressing challenges related to these comments.

16. Agencies should develop and offer opportunities for ongoing training and staff development to respond to the rapidly evolving nature of technologies related to mass, computer-generated, and falsely attributed comments and to public participation more generally.

17. As authorized by 5 U.S.C. § 594(2), the Conference’s Office of the Chairman should provide for the “interchange among administrative agencies of information potentially useful in improving” agency comment processing systems. The subjects of interchange might include technological and procedural innovations, common management challenges, and legal concerns under the Administrative Procedure Act and other relevant statutes.


[1] 5 U.S.C. § 553. This requirement is subject to a number of exceptions. See id.

[2] See E-Government Act § 206, 44 U.S.C. § 3501 note (establishing the eRulemaking Program to create an online system for conducting the notice-and-comment process); see also Admin. Conf. of the U.S., Recommendation 2013‑4, Administrative Record in Informal Rulemaking, 78 Fed. Reg. 41358 (July 10, 2013) (distinguishing between “the administrative record for judicial review,” “rulemaking record,” and the “public rulemaking docket”).

[3] 5 U.S.C. § 553.

[4] Perez v. Mortg. Bankers Ass’n, 575 U.S. 92, 96 (2015) (“An agency must consider and respond to significant comments received during the period for public comment.”).

[5] Azar v. Allina Health Services, 139 S. Ct. 1804, 1816 (2019).

[6] See Admin. Conf. of the U.S., Recommendation 2018-7, Public Engagement in Rulemaking, 84 Fed. Reg. 2146 (Feb. 6, 2019); Admin. Conf. of the U.S., Recommendation 2013-5, Social Media in Rulemaking, 78 Fed. Reg. 76269 (Dec. 17, 2013); Admin. Conf. of the U.S., Recommendation 2011-8, Agency Innovations in eRulemaking, 77 Fed. Reg. 2264 (Jan. 17, 2012); Admin. Conf. of the U.S., Recommendation 2011-2, Rulemaking Comments, 76 Fed. Reg. 48791 (Aug. 9, 2011).

[7] The ability to automate the generation of comment content may also remove human interaction with the agency and facilitate the submission of large volumes of comments in cases in which software can repeatedly submit comments via Regulations.gov.

[8] See Permanent Subcommittee on Investigations, U.S. Senate Comm. on Homeland Security and Gov’t Affairs, Staff Report, Abuses of the Federal Notice-and-Comment Rulemaking Process (2019); U.S. Gov’t Accountability Off., GAO-20-413T, Selected Agencies Should Clearly Communicate How They Post Public Comments and Associated Identity Information (2020); U.S. Gov’t Accountability Off., GAO-19-483, Selected Agencies Should Clearly Communicate Practices Associated with Identity Information in the Public Comment Process (2019).

[9] N.Y. State Off. of the Att’y Gen., Fake Comments: How U.S. Companies & Partisans Hack Democracy to Undermine Your Voice (2021).

[10] 5 U.S.C. § 553.

[11] This Recommendation does not address what role particular types of comments should play in agency decision making or what consideration, if any, agencies should give to the number of comments in support of a particular position.

[12] See Recommendation 2018-7, supra note 6; Admin. Conf. of the U.S., Recommendation 2017-3, Plain Language in Regulatory Drafting, 82 Fed. Reg. 61728 (Dec. 29, 2017); Admin. Conf. of the U.S., Recommendation 2017-2, Negotiated Rulemaking and Other Options for Public Engagement, 82 Fed. Reg. 31040 (July 5, 2017); Admin. Conf. of the U.S., Recommendation 2014-6, Petitions for Rulemaking, 79 Fed. Reg. 75117 (Dec. 17, 2014); Recommendation 2013-5, supra note 6; Recommendation 2011-8, supra note 6; Admin. Conf. of the U.S., Recommendation 2011-7, Federal Advisory Committee Act: Issues and Proposed Reforms, 77 Fed. Reg. 2261 (Jan. 17, 2012); Recommendation 2011-2, supra note 6.

[13] For an example of educational information on rulemaking participation, see the “Commenter’s Checklist” that the eRulemaking Program currently displays in a pop-up window for every rulemaking webpage that offers the public the opportunity to comment. See Commenter’s Checklist, Gen. Servs. Admin., https://www.Regulations.gov (last visited May 24, 2021) (navigate to any rulemaking with an open comment period; click comment button; then click “Commenter’s Checklist”). In addition, the text of this checklist appears on the project page for this Recommendation on the ACUS website.

[14] This software is distinct from identity validation technologies that force commenters to prove their identities.

[15] See Regulations.gov API, Gen. Servs. Admin., https://open.gsa.gov/api/regulationsgov/ (last visited May 24, 2021).

[16] See Steve Balla, Reeve Bull, Bridget Dooling, Emily Hammond, Michael Herz, Michael Livermore, & Beth Simone Noveck, Mass, Computer-Generated, and Fraudulent Comments 43–48 (June 1, 2021) (report to the Admin. Conf. of the U.S.).

Separate Statement for Administrative Conference Recommendation 2021-1 by Senior Fellow Randolph J. May  

Filed June 18, 2021

I attended several of the Committee meetings that considered the preparation of this Recommendation. So, I have a good sense of the hard work that went into the preparation of the Recommendation by the Consultants, the Rulemaking Committee Chair Cary Coglianese, the Committee members, and the ACUS staff, and I am grateful for their dedication.

I support adoption of the Recommendation in the context of the express limitation of the scope of the project as stated: “This Recommendation does not address what role particular types of comments should play in agency decision making or what consideration, if any, agencies should give to the number of comments in support of a particular position.”

I wish to associate myself generally with the Comment of Senior Fellow Richard Pierce, dated May 25, 2021, especially his concern that the ACUS Recommendation not be misconstrued to foster “the widespread but mistaken public belief that notice and comment rulemaking can and should be considered a plebiscite in which the number of comments filed for or against a proposed rule is an accurate measure of public opinion that should influence the agency’s decision whether to adopt the proposed rule.”

I have submitted comments and/or reply comments in every “net neutrality” proceeding, however denominated, the Federal Communications Commission has conducted over the last fifteen years – and, yes, the back-and-forth battle over various “net neutrality” proposals has been going on that long and there have been at least a dozen comment cycles. However, especially in the last two “net neutrality” rulemaking cycles, in 2014 – 2015 and 2017, there has been a major escalation – you could call it exercising the “nuclear option” – in the effort, by both opposing sides, to generate as many mass, computer-generated form comments as possible. By “form comments” I mean comments that concededly contain little or no information beyond cursorily stating a “pro” or “con” position.

The startling results of going nuclear, in terms of the sheer number of mass, computer-generated form comments generated in the latest “net neutrality” round, are now well-known. The phenomenon has been the subject of the federal and state studies cited in the Recommendation’s Preamble, with some of the most significant details cited in Professor Pierce’s separate statement. Aside from any other concerns, I can personally testify that the deluge of approximately 22 million mass, computer-generated form comments often overwhelmed the FCC’s ability to keep its electronic filing system operating properly and often made it well-nigh impossible to search for comments that might contain relevant data and information.

And, of course, the costs expended by private parties in the effort that led to the submission of approximately 22 million mass, computer-generated form comments (including the 18 million “fake” comments) were enormous, not to mention the direct and indirect costs imposed on the government merely to compile, process, and review the comments.

It is blinking reality not to recognize that the pro- and con- net neutrality interests responsible for generating 22 million comments assumed, in some significant way, that the outcome of the rulemaking would be impacted by which side “won” the comment battle. In other words, it must have been assumed that, in some meaningful sense, the rulemaking would be decided on the basis of a plebiscite, “counting comments,” not on the basis of the quality of the data, evidence, and arguments submitted.

So, while I accept the constraints imposed by the parameters of this Recommendation – which, on its own terms, contains useful guidance to assist agencies – I hope that, going forward, ACUS will initiate a project that considers the appropriateness of curbing the submission of mass, computer-generated form comments, and, if so, how best to accomplish this. Certainly public education, including by government officials, and especially the pertinent agency officials, regarding the objectives of the rulemaking process in general, and specific rulemakings in particular, can play an important role.

I wish to make clear that I recognize the value of widespread participation by “interested persons,” as the Administrative Procedure Act puts it, in the rulemaking process, not only because of the value of the evidence put on the record through such participation, but because of the instrumental value bestowed upon interested persons by the opportunity to participate in government decision-making processes that affect them.

With due deliberation, with recognition of the need to exercise care in drawing relevant distinctions among various types of rulemaking proceedings and their objectives, there ought to be a proper way to discourage the type of “comment war” that occurred in the two most recent FCC net neutrality proceedings, while, at the same time, encouraging the type of widespread public participation that is most helpful to agencies in promulgating sound public policies. 

Separate Statement for Administrative Conference Recommendation 2021-1 by Senior Fellow Nina A. Mendelson

Filed June 27, 2021 (This is an abbreviated version of a statement that is available on the ACUS website.)

This Recommendation, the product of much hard work, will help guide agencies managing mass comments and addressing falsely attributed and computer-generated comments. But these rulemaking-related challenges raise very different concerns. Comments from ordinary individuals, whatever their volume, and whether they supply situated knowledge or views, can be relevant, useful, and even important to many rulemakings. The Recommendation correctly does not imply otherwise. The Conference should address the proper agency response to such comments separately, and soon.

First, public comment’s function encompasses more than the purely “technical,” whether that is supplying data or critiquing an agency’s economic analysis. For some statutory issues, certainly, public comments transmitting views are less relevant. Under the Endangered Species Act, for example, an agency determining whether an animal is endangered must assess its habitat and likelihood of continued existence. Public affection for a species is not directly relevant.

But agencies address numerous issues that, by statute, extend far beyond technocratic questions, encompassing value-laden issues. An agency deciding what best serves public-regarding statutory goals must balance all such considerations.

Nonexclusive examples relevant to agency statutory mandates include:

  • The importance of nearby accessible bathrooms to the dignity of wheelchair users, at issue in a 2010 Americans with Disabilities Act regulation.
  • Weighing potential public resource uses. For multiple-use public lands, the Bureau of Land Management must, by regulation, balance recreation and “scenic, scientific and historical values” with resource extraction uses, including timbering and mining.
  • Potential public resistance to an action, such as the Coast Guard’s ultimately abandoned decision in the early 2000s to create live-fire zones in the Great Lakes for weapons practice. Had the agency seriously sought out public comment, it would have detected substantial public resistance to this action, which, without the benefit of participation, the agency had considered justified and minimally risky.
  • Public resistance to a possible mandate as unduly paternalistic, burdensome, or exclusionary, whether an ignition interlock or a vaccine passport requirement. Justice Rehnquist identified this issue in Motor Vehicle Mfrs. Ass’n v. State Farm Mut. Auto. Ins. Co., 463 U.S. 29 (1983). Though Justice Rehnquist’s dissent linked the issue to presidential elections, he underscored its relevance to rulemaking.
  • Environmental justice/quality of life matters. In a 2020 rule implementing the National Environmental Policy Act, the Council on Environmental Quality decided that an agency need no longer assess a proposed action’s cumulative impacts in its environmental impact analysis. This decision will especially impact low-income communities and communities of color, including Southwest Detroit, where multiple polluting sources adjoin residential neighborhoods. Whether to require cumulative impacts analysis is not a technical issue. It is a policy decision whether community environmental and quality of life concerns are important enough to justify lengthier environmental analyses. The comment process enables communities to express directly the importance of these issues.

Rulemaking is certainly not a plebiscite. Besides representativeness concerns, that is mainly because statutes typically require agencies to consider multiple factors, not only public views. But ordinary people’s views and preferences are nonetheless relevant and thus appropriately communicated to the agency. The text of 5 U.S.C. § 553(c) is express here: “interested persons” are entitled to submit “data, views, or arguments.”

Second, the identity of individual commenters may provide critical context. That a comment on a proposed ADA regulation’s importance is from a wheelchair user should matter.  The same is true for religious group members describing potential interference with their practices, residents near a pipeline addressing safety or public notice requirements, or Native American tribal members speaking to spiritual values and historical significance of public lands.

Third, a meaningfully open comment process supports broader public engagement by otherwise underrepresented individuals and communities, whether because of race, ethnicity, gender identity, or something else. Studies consistently show that industry groups and regulated entities, with disproportionate resources, access to agency meetings, and ability to exert political pressure, punch above their weight in the comment process. Suggesting that agencies can appropriately ignore comments from individuals would simply reinforce this disparate influence. It would also undercut the Conference’s position in Recommendation 2018-7, Public Engagement in Rulemaking, that agencies should act to broaden and enhance public participation.

Moreover, while groups can support participation, agencies should not assume that group action sufficiently conveys individual views. Many individual interests—even important ones—are underrepresented. With respect to employees such as truck drivers, for example, unions represent only 10% of U.S. wage workers.

Where groups do support individual comment submission, their involvement should not be understood to taint participation. Well-funded regulated entities typically hire attorneys to draft their comments. We nonetheless attribute those views to the commenters. We should treat individual comments similarly even if they incorporate group-suggested language.

Fourth, although mass comments in certain rulemakings may have encouraged computer-generated and falsely attributed comments, agencies should directly tackle these latter problems. And while comments from individuals vary in usefulness and sophistication, that is true of all comments. In short, agencies should respond to large volumes of individual comments not by attempting to deter them but instead, following Recommendation paragraphs 11-13, by providing clear, visible public information on how to draft a valuable comment.

Finally, the most difficult issue is how, exactly, agencies should respond to individual comments that convey views as well as, or instead of, specific information regarding a rule’s need or impacts. Large comment volumes, most pragmatically, may signal an agency regarding the rule’s political context, including potential congressional concern. Further, large comment quantities can alert agencies to underappreciated or undercommunicated issues or reveal potential public resistance. Such comments might constitute a yellow flag for an agency to investigate, including by reaching out to particular communities to assess the basis and intensity of their views.

At a minimum, an agency should acknowledge and answer such comments, even briefly. The agency might judge that particular public views are outweighed by other considerations. But an answer will communicate, importantly, that individuals have been heard. The Federal Communications Commission’s responses to large comment volumes in recent net neutrality proceedings are reasonable examples.

I urge the Conference to consider these issues soon and provide guidance to rulemaking agencies.

Separate Statement for Administrative Conference Recommendation 2021-1 by Senior Fellow Richard J. Pierce, Jr.

Filed June 29, 2021 (This is an abbreviated version of a statement that is available on the ACUS website.)

These three phenomena and the many problems that they create have only one source—the widespread but mistaken public belief that notice and comment rulemaking can and should be considered a plebiscite in which the number of comments filed for or against a proposed rule is an accurate measure of public opinion that should influence the agency’s decision whether to adopt the proposed rule. I believe that ACUS can and should assist agencies in explaining to the public why the notice and comment process is not, and cannot be, a plebiscite, and why the number of comments filed in support of, or in opposition to, a proposed rule should not, and cannot, be a factor in an agency’s decision making process.

1. The Notice and Comment Process Allows Agencies to Issue Rules that Are Based on Evidence

The notice and comment process is an extraordinarily valuable tool that allows agencies to issue rules that are based on evidence. It begins with the issuance of a notice of proposed rulemaking in which an agency describes a problem and proposes one or more ways in which the agency can address the problem by issuing a rule.

The agency then solicits comments from interested members of the public. The comments that assist the agency in evaluating its proposed rule are rich in data and analysis. Some support the agency’s views with additional evidence, while others purport to undermine the evidentiary basis for the proposed rule. The agency then makes a decision whether to adopt the proposed rule or some variant of the proposed rule in light of its evaluation of all of the evidence in the record, including both the studies that the agency relied on in its notice and the data and analysis in the comments submitted in response to the notice. Courts require agencies to address all of the issues that were raised in all well-supported substantive comments and to explain adequately why the agency issued, or declined to issue, the rule it proposed or some variation of that rule in light of all of the evidence the agency had before it. If the agency fails to fulfill that duty, the court rejects the rule as arbitrary and capricious.

ACUS has long supported efforts to assist the intended beneficiaries of rules in overcoming obstacles to effective participation in rulemakings. ACUS should continue to help members of the public file comments that assist an agency in crafting a rule that addresses a problem effectively.

2. Mass Comments Are Not Helpful to Agency Decision Making and Create Major Problems

Sometimes the companies and advocacy organizations that support or oppose a proposed rule organize campaigns in which they induce members of the public to file purely conclusory comments in which they merely state their support for or opposition to a proposed rule. The proponents or opponents then argue that the large number of such comments proves that there is strong public support for the position taken in those comments. Comments of that type have no value in an agency’s decision-making process. Every scholar who has studied the issue has concluded that the number of comments filed for or against a proposed rule is not, and cannot be, a reliable measure of the public’s views with respect to the proposed rule.

Mass comment campaigns create major problems in the notice and comment process. Many of those problems were evident in the 2017 net neutrality rulemaking. The New York Attorney General documented the results of the well-orchestrated mass comment campaign in that rulemaking in the report that she issued on May 6, 2021. She labeled as “fake” 18 million of the 22 million comments that were filed in the docket. The number of “fake” comments filed in support of net neutrality was approximately equal to the number of “fake” comments filed by the opponents of net neutrality. One college student filed 7.7 million comments in support of net neutrality, while ISPs paid consulting firms $8.2 million to generate comments against net neutrality.

Two things are easy to predict if the public continues to believe that the number of comments for or against a proposed rule is an important factor in an agency’s decision-making process. First, the next net neutrality rulemaking will elicit even more millions of comments as the warring parties on both sides escalate their efforts to maximize the “vote” on each side of the issue. Second, the firms that have a lot of money at stake in other rulemakings will begin to replicate the behavior of the firms that are on each side of the net neutrality debate. The results will be massive, unmanageable dockets in which the “noise” created by the mass comments will make it increasingly difficult for agencies and reviewing courts to focus their attention on the substantive comments that provide the evidence that should be the basis for the agency’s decision.   

3. ACUS Should Initiate Another Project to Address Mass Comments in Rulemakings

I think that ACUS should initiate a new project in which it decides whether to discourage mass comments, computer-generated comments and fraudulent comments and, if so, how best to accomplish that. I believe that ACUS can and should discourage these practices by, for instance, encouraging agencies to assist in educating the public about the types of comments that can assist agencies in making evidence-based decisions and the types of comments that are not helpful to agencies and that instead create a variety of problems in managing the notice and comment process.       

Recommended Citation: Admin. Conf. of the U.S., Recommendation 2021-1, Managing Mass, Computer-Generated, and Falsely Attributed Comments, 86 Fed. Reg. 36,075 (July 8, 2021).