Clara Greo

12 equity and justice questions about AI

Updated 7 Aug 2025 to add prompts 11 & 12

We need to be better at having conversations about the equity and justice implications of our use of AI.

I'm working on a set of 12 equity and justice reflection questions for teams who are using or considering AI. The aim is to help teams think through and talk about the consequences of this technology through an equity and justice lens.

Like consequence scanning, this can help teams mitigate possible harms, consider hidden effects and check in against organisational values.

You could use these 12 reflections as a set in a workshop context or individually in discussions. You could use them at the beginning of an AI journey, while you're in the process of implementing AI in your product or service, or with a live product. You could check in against the prompts just once or regularly throughout a development lifecycle.

Download these 12 equity and justice questions about AI as a PDF

1. Who will benefit most?
Which people, users, groups will benefit from this use of AI?
Who benefits financially, culturally, socially?
For whom does this make work and life easier?

2. Who will be harmed?
Which people, users, groups might be harmed by this use of AI?
What negative impacts might there be, financially, culturally or socially?
How will you monitor the impact?
For whom does this make work and life worse?

3. Which biases will be amplified?
What biases, prejudices and paradigms are in the data the AI is trained on?
How will these biases manifest in people’s real lives through your product or service?
How might this change over time?

4. What systemic inequalities might this AI mask?
AI not only amplifies the structures it's trained on; it can also mask and legitimise them.
What inequalities could be hidden so they can’t be seen, interrogated or traced?
Who is protected and who is harmed?

5. What will the environmental impact be?
How will this use of AI impact the environment?
How will the impacts change or compound over time and with scale?
Who will feel these impacts first?
Who will feel them most?

6. Whose work is being used?
What is the AI trained on?
Who owns that work and whose labour created it?
Do they consent?
Can they change their consent?
Are they being paid?
Are they being credited?

7. What impact will a lack of new ideas have?
AI cannot create new, original ideas - it recycles and remixes a subset of what already is. It has a normalising tendency.
What impact will this have?
What will this do over time?

8. Who has power?
Who does this use of AI empower?
Who does not have power?
Who understands how the AI works, and can change or update it?
Who created the algorithms and who owns them?

9. What politics will it manifest?
Tech is not neutral. AI is not apolitical.
What politics will your use of AI have and embody?
How is this likely to change over time?

10. Does it limit or build ownership?
Does this use of AI create dependencies in users and communities?
Does it allow communities to adapt and change the technology to work for them?

11. Who is accountable?
What happens when something goes wrong?
Who can intervene, appeal, or correct AI-generated decisions?
What redress is available to those impacted?
Is there a feedback loop, and who controls it?

12. How is data used, shared and protected?
What data might users share?
How is it captured and stored?
How might it be re-used or released in future?
How is privacy protected?


I'd love to hear your feedback on the questions and whether this is useful to you.


These prompts have been inspired mostly by: