The Keys Communities Hold and the Ones AI Keeps
A Community Advocacy Brief on AI, Law, and Equity
Artificial Intelligence and Unequal Exposure
Artificial intelligence affects our lives in ways many people do not notice. It can decide who gets a job interview, guide doctors' decisions, affect students' futures, and even determine which neighborhoods are more closely watched. Why are Black and Brown communities in underserved areas often the first to deal with these systems, yet given so little say in how they are made or controlled? These communities are frequently targeted for early deployment because institutions view them as “high‑risk” environments, while structural inequities limit their access to the policymaking spaces where AI decisions are made.
The United Nations report Governing AI for Humanity warns that AI will deepen inequality if governments fail to anticipate potential harms and include marginalized groups in the process (United Nations, 2023). The NIST AI Risk Management Framework echoes this concern, stressing the need for early risk detection, transparency, and genuine community involvement (National Institute of Standards and Technology, 2023). Together, these warnings make one question unavoidable: Who truly gains from the way AI is managed today, and who is left vulnerable?
Plain Language Overview of AI Law
AI law may seem complex, and one reason is that the United States has no single federal AI law. Instead, governance is shaped by a patchwork of state regulations, sector‑specific federal rules, and voluntary frameworks like the NIST AI Risk Management Framework. Together, these efforts aim to keep AI systems safe, fair, and accessible, even as the legal landscape remains fragmented.
Core Functions of AI Law
• Protect people from harm
• Make automated decisions understandable
• Limit how much personal data can be collected
• Hold someone accountable when AI causes harm
• Prevent discrimination and unfair treatment
The United Nations report treats transparency and accountability as foundational principles, grounded in human rights, that should guide how we use AI (United Nations, 2023). The NIST framework adds that fairness and community voice should be part of AI from the start, not added later (National Institute of Standards and Technology, 2023).
Equity as a Foundational Requirement
AI learns from the world as it is, picking up the same racial and economic divides we have. Instead of simply reproducing these patterns, AI can intensify them when trained on biased or incomplete data (National Institute of Standards and Technology, 2023). This leads us to consider what it would mean for AI to be designed by those who have historically been marginalized. In practice, this involves datasets built with community consent, data that reflects their lived experiences, and labeling processes shaped by their cultural knowledge. It also requires community control over how the data is governed: who can access it, how it is used, and how harms are identified and corrected. Imagining an alternative dataset built by and for marginalized groups shows that neutrality in AI is a choice, not a default, and it raises a pressing question: Can AI ever be truly neutral if the world it learns from is already unequal?
Examples of Inequity in AI Systems
• Training data that underrepresents Black and Brown communities, a pattern documented in Buolamwini and Gebru’s research showing how major AI datasets skew toward lighter‑skinned, male, and Western populations
• Systems built without meaningful community participation
• Automated decisions that repeat historical discrimination
• Surveillance tools that target neighborhoods of color
The NIST framework says we should check AI systems for systemic and historical bias, but these checks are often incomplete or overlooked.
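That recommendation can be made concrete. A disaggregated audit compares error rates across demographic groups instead of reporting a single overall accuracy number, which is how hidden disparities surface. Below is a minimal Python sketch of such a check; the group labels and audit records are hypothetical and stand in for real evaluation data.

    # Minimal sketch of a disaggregated bias audit, in the spirit of the
    # NIST AI RMF's call to test for systemic bias. All records here are
    # hypothetical: (group, true_label, predicted_label) with 0/1 labels.
    from collections import defaultdict

    def error_rates_by_group(records):
        counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
        for group, truth, pred in records:
            c = counts[group]
            if truth == 1:
                c["pos"] += 1
                c["fn"] += pred == 0   # missed a true positive
            else:
                c["neg"] += 1
                c["fp"] += pred == 1   # flagged a true negative
        return {
            g: {
                "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else 0.0,
                "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else 0.0,
            }
            for g, c in counts.items()
        }

    audit = [("Group A", 1, 1), ("Group A", 0, 0), ("Group A", 1, 1),
             ("Group B", 1, 0), ("Group B", 0, 1), ("Group B", 1, 0)]
    print(error_rates_by_group(audit))

In this toy data, one group absorbs every error the model makes, a pattern a single aggregate metric can easily hide.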
Impact of AI on Disadvantaged Communities
The harms from AI are not equal. Its most serious effects often hit hardest in communities that already face inequality.
Economic Impact
• Job displacement in industries where Black and Brown workers are concentrated, such as transportation, warehousing, retail, and food service. Research from McKinsey estimates that automation could affect up to 4.5 million Black workers by 2030, with the highest exposure in transportation, production, and service‑sector jobs.
• Hiring algorithms that quietly filter out applicants based on name, inferred race, or neighborhood (LeRoy, 2025); see the sketch after this list. Studies show that résumé‑screening tools can downgrade applicants with ethnic‑sounding names or ZIP codes associated with communities of color, replicating long‑standing hiring discrimination in automated form.
• Limited access to AI-powered opportunities due to a lack of broadband and training
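The hiring bullet above describes proxy discrimination: even when race is removed from a model's inputs, a correlated feature such as ZIP code can carry it back in. The minimal sketch below, using invented ZIP codes and hiring records, shows how a screener that "learns" from biased historical decisions scores identically qualified applicants differently by neighborhood; it is not a model of any actual screening product.

    # Minimal sketch of proxy discrimination in résumé screening.
    # The screener never sees race, but historical hiring decisions were
    # biased by neighborhood, so ZIP code smuggles the bias back in.
    # All ZIP codes and records are hypothetical.
    from collections import defaultdict

    # Past outcomes: (zip_code, hired), biased against ZIP 60620.
    history = [("60614", 1), ("60614", 1), ("60614", 0),
               ("60620", 0), ("60620", 0), ("60620", 1)]

    totals, hires = defaultdict(int), defaultdict(int)
    for zip_code, hired in history:
        totals[zip_code] += 1
        hires[zip_code] += hired

    # A naive screener scores applicants by their ZIP's past hire rate.
    zip_score = {z: hires[z] / totals[z] for z in totals}

    # Two applicants with identical qualifications, different addresses:
    for zip_code in ("60614", "60620"):
        print(zip_code, "screening score:", round(zip_score[zip_code], 2))
    # Output: 0.67 vs. 0.33, purely because of where each applicant lives.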
Educational Impact
• AI tools reach well-funded districts long before underfunded schools, because wealthier districts have the budget, broadband infrastructure, and trained staff to adopt new AI-powered learning platforms early. Underfunded schools, often serving Black and Brown students, face outdated devices, limited internet access, and fewer technology specialists, delaying their ability to use the same tools.
• Student profiling systems that limit academic opportunities (The Potential Impact of Artificial Intelligence on Equity and Inclusion in Education, n.d.)
Health Impact
• Medical algorithms that misread symptoms or underestimate risk for Black and Brown patients, as shown in Obermeyer et al. (2019), where a widely used hospital algorithm systematically underrated the health needs of Black patients because it relied on health‑care spending as a proxy for illness; a worked sketch follows this list
• Telehealth systems that require internet access, which many families lack
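The Obermeyer et al. (2019) failure noted above reduces to a proxy-selection error that a few lines of arithmetic make visible. In the hypothetical sketch below, two patients carry the same illness burden, but one has had less access to care and therefore lower spending, so a spending-based risk score underrates them. Every name and number is invented for illustration.

    # Worked sketch of the proxy failure in Obermeyer et al. (2019):
    # a risk score built on health-care spending underrates patients
    # who are equally sick but had less access to care.
    patients = [
        # (name, chronic_conditions, annual_spending_usd)
        ("Patient A", 4, 12_000),  # same illness burden, more access to care
        ("Patient B", 4, 6_000),   # same illness burden, less access to care
    ]

    SPENDING_PER_RISK_POINT = 3_000  # invented calibration constant

    for name, conditions, spending in patients:
        proxy_risk = spending / SPENDING_PER_RISK_POINT  # what the algorithm used
        true_need = conditions                           # what it should capture
        print(f"{name}: true need={true_need}, spending-based risk={proxy_risk:.1f}")
    # Patient B's risk score is half of Patient A's despite identical health
    # needs, so Patient B is less likely to be referred to a care program.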
The United Nations Department of Economic and Social Affairs warns that AI-generated misinformation also harms marginalized communities by distorting public discourse, weakening trust, and reinforcing existing barriers such as economic exclusion, educational gaps, and health inequities (United Nations, 2023). For example, AI‑generated deepfake videos have been used to spread false information about immigrant communities, fueling public fear and policy backlash. Misinformation campaigns have also circulated fabricated crime statistics about Black neighborhoods, shaping public opinion and influencing local policing decisions. In health contexts, AI‑generated false claims about vaccines have disproportionately targeted communities of color, worsening existing health disparities.
The NIST framework adds that once harmful systems are deployed, fixing the damage becomes significantly harder (National Institute of Standards and Technology, 2023). A clear example is when automated content‑moderation systems incorrectly flagged posts from Indigenous activists as “misinformation,” suppressing their visibility for months before the error was discovered. Another example is when biased predictive‑policing tools were deployed in several cities, leading to over‑policing of Black neighborhoods. Even after the tools were removed, their effects lingered in the form of elevated arrests, continued surveillance, and lasting mistrust.
Policy Resolutions and Advocacy Priorities
Communities most affected by AI should play a central role in designing its rules. Participatory budgeting for AI projects makes these communities co-creators, ensuring their insights and needs are prioritized rather than consulted after the fact. Every voice is vital, genuine inclusion is essential, and fairness must guide all future AI decisions.
Recommended Policy Actions
• Create community-based AI learning centers. Programs in Detroit and Chattanooga show this works, though many communities lack stable funding. When residents document harms through these centers, their findings inform local digital equity policies.
• Require equity impact assessments before public-sector AI deployment. New York City’s hiring audit law shows these assessments can surface bias early (see the audit sketch after this list), but agencies often lack technical capacity. Community testimony has directly shaped how these assessments are written and enforced.
• Establish community oversight boards with real authority. Seattle’s model demonstrates that community review can slow or stop harmful tools. The barrier is that many boards elsewhere lack enforcement power. Resident participation pushes cities to adopt stronger procurement and transparency rules.
• Invest in broadband, device access, and technical support. The Affordable Connectivity Program expanded access, but gaps remain. Community coalitions mapping local broadband deserts have successfully secured state and federal funding.
• Require companies to explain how their AI systems function and allow independent audits. The EU’s AI Act shows this is feasible, while U.S. companies often resist. Civil rights groups have pressured firms to release model cards and bias reports.
• Prohibit high-risk AI systems that disproportionately harm marginalized groups. Cities like Boston and San Francisco banned police facial recognition after community organizing documented harms. State preemption remains a barrier, but local action shows how community pressure drives policy change.
• Fund HBCUs and community colleges to lead equitable AI research. Howard and Morgan State have launched fairness-focused AI centers, though HBCUs remain underfunded. Community partnerships help shape research agendas and strengthen advocacy for federal investment.
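Several of the actions above, especially the equity impact assessments, begin with a simple and checkable statistic. New York City's hiring audit law, for example, requires computing selection-rate impact ratios across demographic categories. The sketch below performs that calculation on hypothetical applicant counts and flags any group falling below the EEOC's traditional four-fifths rule of thumb.

    # Minimal sketch of the impact-ratio calculation used in hiring bias
    # audits such as those required by New York City's Local Law 144.
    # Applicant and selection counts are hypothetical.
    selected = {"Group A": 50, "Group B": 20}
    applied = {"Group A": 100, "Group B": 80}

    rates = {g: selected[g] / applied[g] for g in applied}
    best = max(rates.values())  # rate of the most-selected group

    for group, rate in rates.items():
        ratio = rate / best
        flag = "below four-fifths threshold" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate={rate:.2f}, impact ratio={ratio:.2f} ({flag})")
    # Group B's ratio of 0.50 fails the four-fifths rule of thumb, the kind
    # of early warning an equity impact assessment is designed to surface.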
AI’s impact depends on the choices society makes now. Community involvement is not symbolic—it is the mechanism through which harmful systems are challenged, resources are secured, and equitable policy becomes possible.
References
LeRoy, M. (2025). Algorithmic bias in hiring: Amending Title VII. Journal of Legislation. https://scholarship.law.nd.edu/cgi/viewcontent.cgi?article=1791&context=jleg
National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). U.S. Department of Commerce. https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf
United Nations. (2023). Governing AI for humanity. https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdf