
The proliferation of algorithmic harms has become one of the most urgent civil and human rights challenges in the United States. Emerging technologies such as artificial intelligence, machine learning, and similar algorithm-driven technologies are used to regulate access to essential goods and services, including housing, employment, healthcare, public benefits, financial services, and education.[1] These technologies are also routinely used to influence judicial decision-making, expand police surveillance, determine parental rights, and facilitate immigration enforcement, among other rights-impacting activities.[2] In each of these domains, the introduction of algorithmic technologies has facilitated systemic civil and human rights abuses that are often focused on Black communities and other racially marginalized groups.[3] It is increasingly clear that the concentrated impact of algorithmic harms on racially marginalized communities threatens to erode essential democratic values and norms by reinforcing historic patterns of racial hierarchy through systemic civil and human rights violations.[4]

In recent years, algorithmic harms have captured the attention of lawmakers in the United States interested in developing comprehensive legal protections to address these challenges. Nascent reform efforts have largely coalesced around a legislative framework that centers algorithmic auditing – a periodic technical evaluation of algorithmic systems by private entities to determine whether the technology facilitates adverse impact.[5] However, algorithmic auditing is a relatively new and largely unregulated field that relies on technical methods which often fail to holistically assess the full spectrum of racial justice implications of emerging technologies. In fact, in some instances, algorithmic auditing can reinforce racialized harms through the privatization of civil and human rights law enforcement.

More specifically, audit-centric legislative reforms can be a poor fit to address a challenge that this paper will refer to as “black coding.” Black coding is a term I use to refer to any sociotechnical practice where algorithmic technologies are used to evade, displace, or erode existing civil and human rights protections in ways that structure racial hierarchy and social inequality. Black coding is present when public and private entities rely on algorithmic decision-making systems to determine the legal rights, status, and freedoms of marginalized communities in ways that contravene existing civil and human rights norms. In this sense, black coding fits within a broader historical pattern where the absence of rigorous public oversight and democratic governance facilitates the erosion of fundamental rights for Black people and other racially marginalized communities in the United States. Left unchecked, the cumulative effect of black coding can result in a collection of practices that effectively return racially marginalized communities to second-class citizenship in American life. 

This article proceeds in three parts. Part I provides a general overview of nascent legislative proposals designed by U.S. lawmakers to address algorithmic discrimination. Part II explores black coding in more detail by comparing these practices to the nineteenth-century Black Codes to demonstrate the insufficiencies of audit-centric legislative proposals. Part III concludes by outlining three guiding principles to aid policymakers in crafting solutions that address black coding, algorithmic discrimination, and other racialized algorithmic harms. First, emerging technologies used in the context of fundamental rights should be subject to agency preauthorization requirements. A preauthorization regime would require that covered entities receive public authorization prior to deployment of any algorithmic technology used in the context of fundamental rights and freedoms. Second, proposals should adopt global standards and best practices related to algorithmic fairness and antidiscrimination. Third, reparative justice and transformative justice should be integrated into legislative remedial schemes to address the systemic impact of racialized algorithmic harms.

Nascent Legislative Responses to Algorithmic Discrimination in the United States

In recent years, policymakers and advocates have led nascent efforts to develop legislative solutions to address algorithmic discrimination in the United States.[6] At the state level, lawmakers in California, Maryland, New Jersey, and Washington state have introduced legislation aimed at prohibiting certain discriminatory uses of algorithmic technologies.[7] Similarly, local lawmakers in New York City and Washington, D.C. have introduced ordinances addressing these issues – New York’s representing one of the first enacted algorithmic anti-discrimination ordinances in the country.[8] At the federal level, Congressional lawmakers have introduced measures such as the Algorithmic Accountability Act and the Consumer Online Privacy Act, which also contain algorithmic antidiscrimination provisions.[9] The centerpiece of nearly every algorithmic antidiscrimination legislative proposal is the requirement for covered entities to conduct algorithmic audits to assess their technologies for disparate demographic impact, among other factors.[10] Algorithmic audits build upon existing social science auditing techniques designed to examine commercial practices in order to eliminate discrimination and other social harms. For example, civil rights audits have historically been used as part of larger legal and regulatory schemes to identify discriminatory commercial practices and promote fair housing and fair lending.[11]

The legislative gaps in audit-centric proposals effectively delegate the responsibility of governing rights-affecting technologies to the actors and entities that design and profit from them.

The auditing requirements of existing legislative proposals are a laudable attempt to address algorithmic discrimination. Indeed, there are examples where algorithmic audits have been effective in identifying bias in certain digital platforms and technologies.[12] Unfortunately, the algorithmic auditing provisions of many legislative proposals raise at least two critical concerns. First, these provisions often fail to either define “algorithmic auditing” or articulate the required elements of a legally satisfactory audit. Second, these proposals fail to meaningfully respond to the growing use of algorithmic technologies to corrode democratic values and undermine fundamental rights. Audits are designed to respond to the technical dimensions of algorithmic systems to ensure that models produce fair, accurate, and unbiased outcomes. However, algorithmic audits are not equipped to resolve the legal, political, and ideological choices necessary to govern algorithmic systems, especially in the context of fundamental rights. In other words, an audit can be instructive in explaining how an algorithmic system operates, but not necessarily whether the system should be permitted for use in the first instance. The legislative gaps in audit-centric proposals effectively delegate the responsibility of governing rights-affecting technologies to the actors and entities that design and profit from them. As a result, bad actors routinely and flagrantly defy existing anti-discrimination laws and constitutional protections because existing systems of accountability lack the necessary legal tools to effectively govern the use of these technologies. Without urgent legislative intervention, these are the precise conditions that have historically facilitated the degradation of fundamental rights for Black communities and other marginalized groups.

Black Coding 

The first Black Codes emerged in states across the former Confederacy in the years immediately following the Civil War. Black Codes were state laws designed to preserve the social relations and economic conditions of chattel slavery despite the formal abolition of slavery by the Thirteenth Amendment to the United States Constitution.[13] The development of Black Codes despite the constitutional abolition of enslavement is instructive in understanding contemporary practices of algorithmic discrimination. As previously noted, “black coding” is a term I use to refer to any practice where algorithmic technologies are used to evade, displace, or erode existing civil and human rights protections in ways that structure racial hierarchy and social inequality.[14] Historic Black Codes and contemporary black coding share at least three critical features. First, both practices reflect political strategies that preserve existing patterns of racial disadvantage despite clear statutory and constitutional prohibitions – especially in the interest of preserving the economic benefits of racialized wealth extraction. Second, both practices leverage fast-paced innovation to protect existing systems of structural racial disadvantage. The original Black Codes came into effect almost immediately after the end of the Civil War.[15] Southern lawmakers enacted Black Codes at a pace that federal lawmakers could not match, which provided space for these statutes to flourish. Relatedly, contemporary black coding relies on innovative technologies that are rapidly deployed in rights-impacting contexts to replace or augment human decision-making with automated digital platforms. The shift towards automated decision-making raises novel legal, regulatory, and juridical questions that policymakers have been unable to address in real-time.
The temporal gap between the advent of new discriminatory technologies and regulatory responses has allowed discriminatory systems to similarly proliferate throughout communities nationwide. Third, both systems give legal effect to private customs that protect discriminatory societal norms. Black Codes were designed to preserve the sociolegal customs, norms, and hierarchies developed in the antebellum era to uphold systems of white supremacy. Black Codes were understood as a necessary legal intervention to sustain these customs despite the revolutionary shift in the legal and economic status of Black people in the wake of Emancipation. Similarly, many algorithmic technologies protect existing patterns of structural racial disadvantage by relying on technical methods which analyze historic datasets to make projections and predictions on future outcomes.[16] These technical methods constrain upward mobility and social progress for Black communities and other groups seeking to overcome historic and contemporary structural disadvantages.

Black coding functions as both a practice and as a system. For example, a social media company’s use of automated content moderation platforms that disproportionately censor Black activists is an instance of black coding as a practice.[17] Another example is a police department that secretly piloted an aerial surveillance system in the wake of the police-involved death of a Black person, without democratic authorization or public notice.[18] In each instance, these actors evade democratic accountability by using algorithmic technologies to displace traditional democratic functions (e.g., speech regulation and democratic oversight of law enforcement) in constitutionally protected domains.[19] These evasive practices facilitate rights abuses that disproportionately burden Black communities, and they represent a popular variety of techno-illiberalism that has infected nearly every social, political, and economic institution in American life.

In essence, the advent of emerging technologies has effectively produced a “law-free zone” where civil and human rights legal protections are severely limited, if not abrogated altogether.

The collective impact of black coding practices shapes systemic outcomes. The overwhelming majority of people in the United States rely on digital technologies to secure housing, employment, credit, and educational opportunities.[20] This is a major paradigm shift from the means through which opportunities were secured when many landmark civil rights protections were enacted. Today, algorithmic technologies that facilitate access to economic opportunities and political rights are largely unregulated and technologically inscrutable – both of which are features that impair public awareness of the risks and harms of these systems. The absence of clear legal standards and public oversight leaves nearly every person exposed to the possibility of discriminatory treatment with no clear legal remedy. In essence, the advent of emerging technologies has effectively produced a “law-free zone” where civil and human rights legal protections are severely limited, if not abrogated altogether. The lawless and illiberal nature of contemporary black coding is perhaps the most alarming characteristic shared with historic Black Codes.

While there are clear differences between Black Codes and contemporary black coding practices, the historic resonance between both phenomena is helpful in identifying the legal, political and ideological functions of algorithmic technologies in relation to civil and human rights. Black coding offers a useful conceptual framework that enables policymakers and advocates to scrutinize the relationship between emerging technologies, fundamental rights, democratic values, and racial justice. 

Guiding Principles to Combat Black Coding and Algorithmic Discrimination 

The abolition of black coding requires policy intervention. Unfortunately, existing audit-centric legislative proposals will likely fail to meaningfully address black coding in several ways. First, audit-centric proposals enable the production of discriminatory customary law by assigning private actors the responsibility of defining, measuring, and evaluating algorithmic systems for civil and human rights violations. In the absence of public oversight and enforcement, critical questions related to how algorithmic practices map onto existing civil and human rights legal obligations will be determined on an ad hoc basis by private actors – including actors with longstanding histories of civil and human rights abuses.[21] Second, audit-centric proposals are primarily focused on the conduct of individual actors rather than the interoperability of these technologies and the challenges that those interactions present to vulnerable communities. Third, many audit-centric proposals fail to articulate clear standards for determining the conditions that give rise to impermissible use or unlawful discriminatory conduct. And finally, audit-centric proposals often fail to articulate appropriate remedies that respond to algorithmic harms.

In designing comprehensive legislative proposals to address algorithmic discrimination, black coding, and related harms, lawmakers should embrace the following three guiding principles. 

Democratic Preauthorization

Algorithmic technologies affecting fundamental rights should only be deployed after such technologies are explicitly authorized by a public enforcement agency. The enforcement agency should determine whether the design or deployment of the technology impermissibly burdens civil and human rights, among other factors. Democratic preauthorization has been an effective legislative approach in other contexts in the United States, especially in uprooting discriminatory voting practices in the American South.[22] To be clear, democratic preauthorization is compatible with legislative frameworks that impose algorithmic audit requirements. Relatedly, it is critical to note that public oversight standing alone is not sufficient to address the full range of challenges raised by black coding. For example, democratic preauthorization may raise additional challenges in jurisdictions hostile to civil and human rights enforcement.

Democratic preauthorization better positions public actors to keep pace with the innovative nature of algorithmic technologies and the harms that flow from them. Democratic preauthorization also enables the enforcement agency to examine the interoperability of algorithmic systems, as well as their collective impact on marginalized groups. Most importantly, democratic preauthorization mitigates the ability of bad actors to use algorithmic technologies to evade existing civil and human rights standards by imposing an ex ante enforcement regime, rather than exclusively relying on post-hoc remedies. Each of these features responds to critical gaps in audit-centric legislative proposals.

Policy Alignment with Global Standards and Best Practices 

Policymakers in the United States are not developing algorithmic antidiscrimination legal standards in a vacuum. The development of legislative frameworks addressing algorithmic harms should build upon existing global efforts and incorporate best practices into domestic legal standards. This is particularly true for lawmakers at the state and local levels. Building upon the standards and principles articulated through frameworks such as the UNESCO Recommendation on AI Ethics (2021) and the OECD AI Principles (2019) among others is essential to securing the most robust legal protections against algorithmic discrimination.[23]

Adequate Remedies for Systemic, Racialized Harms

Finally, policymakers should incorporate principles of reparative and transformative justice into algorithmic antidiscrimination legislative proposals. These remedial frameworks can be helpful in addressing the societal implications of racialized algorithmic harms. Existing anti-discrimination laws are largely designed to remedy harms to the individuals, groups and class members directly injured by the discriminatory conduct, policy, or practice of an identifiable defendant. These laws largely exclude the possibility for remedies to parties who are indirectly harmed by the systemic consequences of discriminatory practices. A growing number of researchers, scholars and technologists are beginning to imagine incorporating reparative and transformative justice into algorithmic design and governance. Reparative justice is the process where state actors work to remedy historic atrocities and related harms directed at historically marginalized groups.[24] While reparations are generally understood as a state-led endeavor, recent efforts by private industry and civil society suggest a broader societal interest in these approaches.[25] Some advocates have sought to incorporate those ideas into “reparative algorithms” – an approach to algorithmic governance and design that would impose an obligation for these systems to actively ameliorate existing patterns of structural disadvantage. Similarly, transformative justice is a remedial approach that seeks to identify the root causes of social harms and design solutions that not only address direct harms but underlying conditions as well.[26] Both reparative and transformative justice frameworks hold promise in combatting the historic and systemic dimensions of black coding.

Concluding Remarks

The implications of black coding extend beyond the United States. Societies with similar histories of human rights atrocities directed at racial and ethnic minorities will also likely be susceptible to these challenges. Further, the broader trend of rights privatization through digital platforms will likely emerge across national and regional boundaries. Black coding is a direct threat to civil and human rights principles that safeguard human dignity and uphold democratic governance. It is critical that lawmakers in the United States and elsewhere design effective policy interventions to address this urgent challenge. 

[1] See e.g., Julia Angwin et al., “Machine Bias,” ProPublica (23 May 2016); Manish Raghavan & Solon Barocas, “Challenges for mitigating bias in algorithmic hiring,” Brookings (6 December 2019); Kate Crawford et al., “AI Now 2019 Report,” AI Now Institute (December 2019).

[2] See e.g., Kesha Moore, “Pretrial Justice Without Money Bail or Risk Assessments,” Thurgood Marshall Institute (January 2022).

[3] See e.g., “Data Capitalism and Algorithmic Racism,” Demos (17 May 2021); “Automating Banishment: The Surveillance and Policing of Looted Land,” STOP LAPD Spying Coalition (November 2021); “#NoTechForICE,” NoTechForICE (last accessed 1 March 2022).

[4] See generally, Safiya Umoja Noble, Algorithms of Oppression (NYU Press 2018); Ruha Benjamin, Race After Technology (Polity Press 2019); Rashida Richardson, Jason Schultz, and Kate Crawford, “Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice,” New York University Law Review, Vol. 94, No. 15 (May 2019).

[5] Mona Sloane, “The Algorithmic Auditing Trap,” OneZero (17 March 2021); Inioluwa Deborah Raji et al., “Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing,” arXiv:2001.00973 [cs.CY] (3 January 2020).

[6] Todd Feathers, “Why It’s So Hard to Regulate Algorithms,” The Markup (4 January 2022); Liz O’Sullivan, “2022 promises to bring massive changes to AI regulation,” Fast Company (3 January 2022).

[7] Id. See also, Automated Decision Systems Accountability Act of 2020, Assem. Bill 2269, 2019-2020 Reg. Sess. (Cal. 2020); New Jersey Algorithmic Accountability Act, Assem. Bill 5430, 2018-2019 Reg. Sess. (N.J. 2019); House Bill 1323, 2021 Reg. Sess. (Md. 2021); Sen. Bill 5116, 2021 Reg. Sess. (Wash. 2021).

[8] New York City Council Int. 1894-2020 (2021) (enacted); “AG Racine Introduces Legislation to Stop Discrimination in Automated Decision-Making Tools That Impact Individuals’ Daily Lives,” Office of the Attorney General for the District of Columbia (9 December 2021).

[9] “Wyden, Booker and Clarke Introduce Algorithmic Accountability Act of 2022 To Require New Transparency And Accountability For Automated Decision Systems,” accessed on 1 March 2022; “Klobuchar, Kennedy, Manchin, Burr Introduce Bipartisan Legislation to Protect Privacy of Consumers’ Online Data,” U.S. Senator Amy Klobuchar, accessed on 1 March 2022.

[10] See e.g., New York City Admin. Code § 20-871(a) (“It shall be unlawful for an employer or an employment agency to use an automated employment decision tool to screen a candidate for an employment decision unless [s]uch tool has been the subject of a bias audit conducted no more than one year prior to the use of such tool.”).

[11] Briana Vecchione, Karen Levy, and Solon Barocas, “Algorithmic Auditing and Social Justice: Lessons from the History of Audit Studies,” in Equity and Access in Algorithms, Mechanisms, and Optimization, Vol. 19 (New York, NY, USA: Association for Computing Machinery, 2021): pp. 1–9.

[12] Inioluwa Deborah Raji et al., “Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing,” in Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* ’20 (New York, NY, USA: Association for Computing Machinery, 2020): pp. 33–44; “Civil-Rights-Audit-Final-Report.Pdf,” accessed on 1 March 2022.

[13] “Black Codes,” South Carolina Encyclopedia (blog), accessed on 1 March 2022; “Black Codes,” Mississippi Encyclopedia, accessed on 1 March 2022.

[14] Black coding builds upon theoretical concepts in critical digital studies that explore the relationships between race, technology, and systems of inequality, such as technological redlining, dirty data, and the new Jim Code. See generally, Safiya Umoja Noble, Algorithms of Oppression (NYU Press 2018); Ruha Benjamin, Race After Technology (Polity Press 2019); Rashida Richardson, Jason Schultz, and Kate Crawford, “Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice,” New York University Law Review, Vol. 94, No. 15 (May 2019).

[15] Supra note 13.

[16] Rashida Richardson, Jason Schultz, and Kate Crawford, “Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice,” New York University Law Review, Vol. 94, No. 15 (May 2019).

[17] Ángel Díaz & Laura Hecht-Felella, “Double Standards in Social Media Content Moderation,” Brennan Center for Justice (4 August 2021).

[18] J. Cavanaugh Simpson & Ron Cassie, “Under Watch: The police spy plane experiment is over, but the growing surveillance of Baltimore continues,” Baltimore Magazine (March 2021).

[19] Tim Prudente, “Federal Appeals Court Rebukes Baltimore Spy Plane Program,” Washington Post, accessed on 1 March 2022.

[20] Kaveh Waddell, “How Tenant Screening Reports Make It Hard for People to Bounce Back From Tough Times,” Consumer Reports (11 March 2021); “Real Estate in a Digital Age,” accessed on 15 February 2022; Rey Mashayekhi, “Nearly 90 percent of Americans Now Use Fintech—with Boomers the Fastest-Growing Demo,” Fortune (12 October 2021).

[21] Rick Jones, “The Siren Song of Objectivity: Risk Assessment Tools and Racial Disparity,” NACDL (blog), 21 August 2018.

[22] German Lopez, “How the Voting Rights Act Transformed Black Voting Rights in the South, in One Chart,” Vox, 6 March 2015.

[23] “Recommendation on the ethics of artificial intelligence,” UNESCO (24 November 2021); “OECD AI Principles overview,” OECD (May 2019).

[24] Jenny L. Davis, Apryl Williams, and Michael W. Yang, “Algorithmic Reparation,” Big Data & Society, Vol. 8, No. 2 (4 October 2021); Rashida Richardson, “Racial Segregation and the Data-Driven Society: How Our Failure to Reckon with Root Causes Perpetuates Separate and Unequal Realities,” Berkeley Technology Law Journal, Vol. 36, No. 3 (May 2021).

[25] See e.g., Will Wright, “Seminary Built on Slavery and Jim Crow Labor Has Begun Paying Reparations,” The New York Times (31 May 2021); Rachel Treisman, “In Likely First, Chicago Suburb Of Evanston Approves Reparations For Black Residents,” NPR (23 March 2021).

[26] Richardson, supra note 24.

Clarence Okoh

Clarence Okoh is a civil rights attorney and Equal Justice Works Fellow at the NAACP Legal Defense & Educational Fund, Inc., where he leads a multidisciplinary advocacy project to challenge the use of emerging technologies that violate the civil and human rights of Black communities.
