Making Artificial Intelligence Work for the People
We are at a critical juncture when it comes to artificial intelligence (“AI”).
People deserve to have control over their lives and futures. And increasingly, that means having a say in how AI will be used to shape the world around us.
We need to make sure companies and policymakers ask and answer the right questions about where, when, and how AI is used – including whether it should be used at all. If they don’t, AI will inevitably be used in ways that violate people’s rights, threaten their freedoms, and automate discrimination in areas that are crucial to our well-being, from housing and employment to medicine and policing.
AI doesn’t have to be a runaway train. It can be guided by fair, transparent, and humane laws that center people’s needs.
The ACLU of Northern California is working in the courts, in the California legislature, with companies, and with communities to steer AI towards justice.
To that end, we’ve created resources for policymakers, companies, and people who want to ensure AI is used the right way.
- Policymakers: Learn more about AI and how to make sure that technology protects your constituents and their civil rights.
- People: Speak up for laws and policies that ensure that AI is used by the government and private companies in ways that are open, equitable, and just.
- Companies: Find out how you can bake in privacy, free speech, and other civil rights protections from the beginning to avoid problems down the line.
Read on for a snapshot of our current work, and for an overview of how AI poses risks to our civil rights and liberties.
What is Artificial Intelligence?
Artificial intelligence refers to computer models, or algorithms, that mimic the cognitive functions of the human mind, such as language, learning, and problem-solving. To build these systems, AI companies train machine learning models on vast amounts of information. In many cases, model makers obtain that information by scraping the internet and other sources. Whether you know it or not, your personal information has likely been used to help develop this technology. AI processes this information to learn to recognize patterns, map relationships between data points, and – depending on the implementation – generate predictions or responses in text or other formats.
AI is not new. For years, companies and government agencies have trained algorithms on large amounts of information and then used those algorithms to make predictions. The algorithm that curates your social media feed is one example.
The latest wave of AI is driven by generative AI (“GenAI”) systems such as Large Language Models (“LLMs”). These include services that can generate text or images in response to prompts. But AI is not just about chatbots and image generators – it is also increasingly used for automated decision making, analyzing staggering amounts of information, finding correlations, and attempting to predict future outcomes. Many of these predictions and decisions can impact our civil rights and liberties, especially in the hands of government agencies or corporations.
AI and Our Civil Rights
AI must be developed and used in ways that protect against discrimination and don’t violate people’s rights, including privacy, freedom of speech, due process, and more.
Surveillance
AI can supercharge invasive surveillance by state and private actors. In the hands of police and other government entities, AI has fueled biometric tracking systems such as facial recognition technology, which is unreliable and biased, and has frequently misidentified people of color and led to wrongful arrests. Even when accurate, these biometric tracking systems massively expand the government’s power to infringe on our right to privacy.
Corporations also use AI to analyze vast quantities of our personal information to track us throughout our daily lives and subject us to targeted advertisements and invasive pseudoscientific practices such as emotion recognition.
No matter who controls it, AI-powered surveillance has no place in a democracy.
Harvesting Personal Information & Privacy
AI systems require vast amounts of training information. In recent years, companies have harvested information from the internet to build AI training sets – often without people’s knowledge or consent. Internet scraping and information harvesting can have a drastic impact on people’s privacy – a right that is guaranteed by the California Constitution.
Lack of Transparency
AI companies are often secretive about how they build their models, how they update their models, and what information their models use and collect to make decisions. This secrecy prevents independent public scrutiny and a transparent understanding of whether AI systems are fair, unbiased, and reliable.
When AI is relied on to make decisions about people’s lives, the stakes can be high. AI is already being used to make decisions about housing, employment, healthcare, and policing. Without transparency into how these models work, it is very difficult to investigate these systems, root out bias, and challenge flawed decisions and errors.
Errors and Reliability
Some government entities already use AI to automate key state functions, such as deciding who receives public benefits, where police patrols are deployed, and how social services are administered. Similarly, companies may use AI to generate advertisements and set prices for their products, including rents, healthcare costs, and car insurance rates.
Current AI models are known to make serious mistakes. For example, AI has invented legal cases out of thin air and then doubled down on their validity. It has even misidentified people as criminal suspects. Such mistakes can lead to professional nightmares, loss of public benefits, unfair insurance rates, or even criminal prosecution. Discussions of AI should acknowledge the limits of these systems so that we can take steps to prevent foreseeable harm.
Bias and Discrimination
Often, AI-based systems are built in ecosystems and with data that already contain discriminatory patterns. For example, AI trained on policing data may reflect the racially targeted enforcement of the War on Drugs. Attempting to predict outcomes from this data can create feedback loops that reinforce systemic discrimination.
President Biden’s White House Blueprint for an AI Bill of Rights explains that “[a]lgorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on” protected classes such as race, ethnicity, sex, religion, or age.
AI discrimination can manifest in housing decisions that disfavor people of color, content moderation decisions that censor LGBTQ content creators, health decisions made by algorithms that worsen medical racism, immigration detention programs that track and target migrants, and even the wrongful, violent arrests of misidentified Black people.