Here's what we must learn to do to make AI fair.

Beginning in 2013, the Dutch government used an algorithm that wreaked havoc in the lives of 25,000 parents. The software was meant to predict which claims for childcare benefits were most likely to be fraudulent, but the government did not wait for proof before penalizing families and demanding that they repay years of allowances. Families were flagged on the basis of 'risk factors' such as low income or dual citizenship. Tens of thousands were needlessly impoverished, and more than 1,000 children were placed in foster care.

From New York City to California and across the European Union, many regulations on artificial intelligence (AI) are now being developed. They are intended to promote equity, accountability, and transparency, and to avoid tragedies like the Dutch childcare-benefits scandal.

But these regulations will not be enough to make AI fair. There must also be practical knowledge of how to build AI that does not exacerbate social inequality. That means establishing clear ways for social scientists, developers, and the communities affected by AI to work together.

Right now, AI developers work in a different sphere from the social scientists who can anticipate what might go wrong. As a sociologist who studies inequality and technology, I rarely have a productive conversation with technologists, or with my fellow social scientists, that goes beyond highlighting problems. I see the same thing when I read conference proceedings: very few projects combine social needs with engineering innovation.

Well-designed mandates and approaches are key to fostering productive collaborations. Here are three principles that technologists, social scientists, and affected communities can apply together to create AI applications that are less likely to warp society.

Include lived experience. Vague calls for broader participation in AI systems miss the point. Nearly everyone interacting online, whether using Zoom or clicking reCAPTCHA boxes, is already contributing to AI training data. The goal should be to gather input from the most relevant participants.

Otherwise, we risk participation-washing: superficial engagement that perpetuates inequality and exclusion. One example is the EU AI Alliance, an open forum that anyone can join, intended to provide democratic feedback to the European Commission's appointed expert group on AI. When I joined it in 2018, it was an unmoderated echo chamber of mostly men exchanging opinions, not representative of the EU's population, the AI industry, or other relevant experts.

By contrast, social-work researcher Desmond Patton at Columbia University in New York City has developed a machine-learning algorithm to help identify tweets related to gang violence. It relies on the expertise and experience of Black people in Chicago, Illinois, who review the algorithm and correct its errors. Patton calls his approach Contextual Analysis of Social Media (see go.nature.com/3vnkdq7).

Shift power. AI technologies are typically built at the request of those in power, such as employers, governments, and commerce brokers, which leaves job applicants, parolees, customers, and other users vulnerable. This must change. AI should not be something done to people without consultation; instead, those who use and are affected by AI should choose the problems it is meant to solve and guide the process.

Disability activists have pioneered this type of equitable innovation. Their mantra "Nothing about us without us" means that the people affected play a central role in creating technology, regulating it, and implementing it. For example, the transcription app Thisten was created by the activist Liz Jackson, who saw her community's need for real-time captions at the SXSW film festival.

Check the assumptions built into AI. Regulations such as New York City's December 2021 law, which regulates the sale of AI used in hiring, increasingly demand that AI passes audits meant to flag bias. But some of these guidelines are so broad that audits could end up validating oppression.

For example, pymetrics, a company based in New York City, uses neuroscience-based games to evaluate job applicants by measuring their cognitive, social, and behavioral characteristics. An audit found that the firm did not violate US anti-discrimination law. But it did not examine whether such games are a reasonable way to assess suitability for a job, or what other dynamics of inequity they might introduce. Audits like this are not what we need to make AI fairer.
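To see how narrow such a check can be, consider the 'four-fifths' adverse-impact guideline that US employment-bias audits commonly lean on: each group's selection rate should be at least 80% of the highest group's rate. The minimal sketch below uses hypothetical group labels and numbers (it is not the pymetrics audit itself), and it makes the limitation plain: the check only compares pass rates, and can never ask whether the game measures anything relevant to the job.

```python
# Hypothetical illustration of a narrow bias audit: the US "four-fifths"
# adverse-impact guideline. Group names and counts are invented for the example.
selections = {
    "group_a": {"passed": 48, "total": 100},
    "group_b": {"passed": 40, "total": 100},
}

# Selection rate per group.
rates = {group: c["passed"] / c["total"] for group, c in selections.items()}
highest = max(rates.values())

# Four-fifths rule: flag any group whose rate is below 80% of the highest rate.
for group, rate in rates.items():
    ratio = rate / highest
    verdict = "OK" if ratio >= 0.8 else "adverse impact flagged"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {verdict}")
```

Note that nothing in this calculation questions the assumptions behind the assessment; a tool can pass it while measuring something irrelevant or harmful.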

What we need are audits that root out harmful tech. With two colleagues, I have developed a framework in which qualitative work examines the assumptions on which an AI is built, and these then shape the technical part of the audit. This framework has been used to assess Crystal and Humantic AI, two AI-driven personality-profiling tools used in hiring.

Each of these principles can be applied intuitively, and they will become self-reinforcing as social scientists, technologists, and the general public learn to use them. Vague mandates won't work, but with clear frameworks, we can eliminate AI that perpetuates discrimination and build AI that improves society.
