modified: 2021-11-16
- Prepared For
- Prepared By
- Date
- Version
- 1. Scope
- 2. Users
- 3. Training Data
- 4. Algorithms & Source Code
- 5. Decision Space
- 6. Key Stakeholders
- 7. Values & Interests
- 8. Personal Data Processing
- 9. Components & Subprocessing
- 10. Failure Modes
- 11. Explainability
- 12. Human in the Loop (HITL)
- 13. Model Performance Metrics
- 14. Decision Feedback & Objection
- 15. Impact Assessment
- 16. Regulatory Landscape
- 17. Mitigation
- 18. Changes in Behavior
- 19. Group Interactions
- 20. Comments
- What is this product designed for?
- What context does it operate in?
- What type of users does this product have?
- What are their roles?
- How was the training data collected?
- How do you ensure its representativeness?
- Does your training dataset contain personal data?
- Who annotates the data, and how is quality controlled?
- What is the data labeling process that you employ?
- Do you use open or proprietary sources? Which?
- Who in the team is setting the heuristics/rules that influence the output?
- How do you ensure the quality of used third-party codebases?
- What is your process of making the key architectural choices?
- What exactly does the product do?
- Can you provide the list of all possible outputs?
- How are incorrectly supplied inputs spotted?
- Is there anomaly detection in place?
- Who are the key stakeholders?
- What influence do they have over the product?
- How do stakeholders interact with each other?
- How is power distributed among them?
- What values do stakeholders/users have?
- Where can these values clash or create tensions?
- What is known at the moment, and how are assumptions tested?
- How can you align your technology with the values you want to support and that people desire?
- Which personal data is collected by the product?
- What is the purpose of collecting personal data?
- How is this data processed? Used? Stored? Deleted?
- Which third parties are engaged by the product?
- How do you evaluate the potential impacts of third-party APIs on the quality of your product’s output?
- How do you check the reliability of your data processing contractors?
- How are failures detected and monitored?
- What are the possible failures of the product?
- What happens when the product fails?
- How is interpretability defined for the system?
- What interpretability methods are used?
- What metrics are used in result interpretation?
- How are interpretations of the output communicated?
- What is the role of a human agent in the validation/verification of the outputs?
- What is the role of a human agent in refining the model performance?
- What is the decision-making power assigned to human agents responsible for the quality of output?
- Which metrics are used to evaluate the product performance?
- Which measures are used to re-evaluate Accuracy, Recall, Precision, and F1-score?
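For reference, the four metrics named above can all be derived from a binary confusion matrix. The sketch below is illustrative only; the counts are placeholder values, not product data.

```python
def classification_metrics(tp, fp, fn, tn):
    """Return accuracy, precision, recall, and F1-score
    from binary confusion-matrix counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    # Guard against division by zero when a class is never predicted/present.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1

# Placeholder counts for illustration:
acc, prec, rec, f1 = classification_metrics(tp=80, fp=20, fn=10, tn=90)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")
```

Re-evaluating these metrics periodically on fresh, representative data (rather than only on the original test set) helps detect drift in product performance.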
- How does the product allow for structured feedback?
- How can the user challenge the application output?
- Which are the third parties involved in resolving claims and objections?
- What potential harms can your product cause (e.g., loss of opportunity, discrimination, economic loss, social stigma, detriment, emotional distress)?
- What are the risks of the product’s failure?
- What impact can the product cause when deployed at scale?
- How is the product influencing the existing markets?
- What is the regulatory context in which the product operates?
- Is the model portable to other market verticals?
- What regulatory risks are involved?
- How do you test for bias and fairness? What fairness definitions do you employ and why?
- Does your team reflect a diversity of opinions, backgrounds, and thoughts?
- Do you have a process for redress if people are harmed by the outputs?
- How fast can you shut down your product in production if it behaves badly?
- Who should be informed, and how?
- Do the automated decisions have significant legal or similar effects on the users/stakeholders?
- How might users change their behavior after use?
- What are the potentials for power imbalance?
- What group interactions can you anticipate?
- What are potential changes in group behavior?
- How is the product addressing group interests?
- What new groups could emerge from the product’s deployment at scale?
The Open Ethics Canvas v1.0 © 2021 by Open Ethics contributors. Designed by Nikita Lukianets, Alice Pavaloiu, Vlad Nekrutenko. Licensed under Attribution-ShareAlike 4.0 International. https://openethics.ai/canvas