Clara Greo

10 equity and justice prompts for AI

We need to be better at having conversations about the equity and justice implications of our use of AI.

I'm working on a set of 10 equity and justice reflection prompts for teams who are using or considering AI. The aim is to help teams think through and talk about the consequences of this technology through an equity and justice lens.

Like consequence scanning, this can help teams mitigate possible harms, consider hidden effects and check in against organisational values.

You could use these 10 prompts as a set in a workshop context or individually in discussions. You could use them at the beginning of an AI journey, while you're in the process of implementing AI in your product or service, or with a live product. You could check in against the prompts just once or regularly throughout a development lifecycle.

Download these 10 equity and justice prompts for AI as a PDF

1. Who will benefit most?
Which people, users, groups will benefit from this use of AI?
Who benefits financially, culturally, socially?
For whom does this make work and life easier?

2. Who will be harmed?
Which people, users, groups might be harmed by this use of AI?
What negative impacts might there be, financially, culturally or socially?
For whom does this make work and life worse?

3. Which biases will be amplified?
What biases, prejudices and paradigms are in the data the AI is trained on?
How will these biases manifest in people’s real lives through your product or service?
How will this change over time?

4. What systemic inequalities might this AI mask?
AI not only amplifies the structures it's trained on; it can also mask and legitimise them.
What inequalities could be hidden so they can’t be seen, interrogated or traced?
Who is protected and who is harmed?

5. What will the environmental impact be?
How will this use of AI impact the environment?
How will the impacts change or compound over time and with scale?
Who will feel these impacts first?
Who will feel them most?

6. Whose work is being used?
What is the AI trained on?
Who owns that work and whose labour created it?
Do they consent?
Can they change their consent?
Are they being paid?
Are they being credited?

7. What impact will a lack of new ideas have?
AI cannot create new, original ideas - it recycles and remixes a subset of what already exists. It has a normalising tendency.
What impact will this have?
What will this do over time?

8. Who has power?
Who does it empower?
Who does not have power?
Who understands how the AI works, and can change or update it?
Who created the algorithms and who owns them?

9. What politics will it manifest?
Tech is not neutral. AI is not apolitical.
What politics will your use of AI have and embody?
How is this likely to change over time?

10. Does it limit or build ownership?
Does this use of AI create dependencies in users and communities?
Does it allow communities to adapt and change the technology to work for them?


I'd love to hear your feedback on the prompts and whether this is useful to you.


These prompts have been inspired mostly by: