Algorithmic Impact Assessment Structure
Revision as of 02:13, 11 April 2025
This document provides a comprehensive framework for assessing algorithmic systems across their lifecycle. Its primary aim is to document and communicate key information about the AI system. The structure of this assessment supports the systematic collection of detailed, standardized information that benefits both internal stakeholders and external evaluators. The resulting document can accompany the AI system’s source code or technical documentation, serving as a transparent reference that reflects the system’s design rationale, implementation scope, and ethical considerations. In addition, the framework incorporates readiness metrics, offering visual and analytical cues to assess maturity levels across different operational categories.
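The readiness metrics mentioned above could be computed along these lines. This is a minimal sketch under assumptions not specified in the framework: each category collects answers scored 0 (absent), 1 (partial), or 2 (fully implemented), and category maturity is the mean score normalized to a 0–100 scale. The category names and scoring scheme are illustrative.

```python
def category_maturity(answers):
    """Return a 0-100 maturity score for one category of scored answers.

    Assumes each answer is scored 0 (absent), 1 (partial), or 2 (fully
    implemented); an empty category scores 0.
    """
    if not answers:
        return 0.0
    return 100.0 * sum(answers) / (2 * len(answers))

def readiness_report(assessment):
    """Map each assessment category to its maturity score."""
    return {cat: round(category_maturity(ans), 1)
            for cat, ans in assessment.items()}

# Illustrative answers for three of the framework's categories.
assessment = {
    "Strategy": [2, 1, 2, 1],
    "Infrastructure and Security": [1, 1, 0, 2],
    "Data Practices": [2, 2, 1, 1],
}
print(readiness_report(assessment))
# {'Strategy': 75.0, 'Infrastructure and Security': 50.0, 'Data Practices': 75.0}
```

Scores like these can then drive the visual cues (e.g. maturity bars or radar charts) that the framework refers to.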
1. General Information
This initial stage focuses on gathering essential background information regarding:
a) the organization or individuals responsible for the design and development of the AI system,
b) the team or person completing the assessment questionnaire,
c) the AI system itself, including abstract information about its role, design intent, and operational context.
The scope of this section is to clearly state the identity of the creator(s) of the system, provide essential contact details for follow-up or accountability purposes, and specify who is responsible for the information disclosed in this report. Furthermore, it aims to briefly introduce the broader AI paradigm under which the system operates and clarify the extent to which this paradigm influences the system's design and functionality.
1.1 Organization Details
Captures the type of organization (e.g., provider, developer, distributor, importer), its name, description, website, mission statement, and contact details.
1.2 Assessment Contributors
Clarifies whether the assessment was completed by an individual or a team, their relationship to the system, and the sources of their information.
1.3 System Details
Requires the system's name, project phase, scope criteria, risk classification under the AI Act, purpose, intended use, and any supplementary references.
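The fields this subsection asks for can be thought of as a structured record. The sketch below is a hypothetical schema: the field names and example values are illustrative assumptions, not a normative format prescribed by the framework or the AI Act.

```python
from dataclasses import dataclass, field

@dataclass
class SystemDetails:
    """Illustrative record for Section 1.3; field names are assumptions."""
    name: str
    project_phase: str    # e.g. "design", "development", "operation"
    risk_class: str       # risk classification under the AI Act
    purpose: str
    intended_use: str
    references: list = field(default_factory=list)  # supplementary references

# Hypothetical example entry.
record = SystemDetails(
    name="LoanScreen",
    project_phase="development",
    risk_class="high-risk",
    purpose="Credit-risk pre-screening",
    intended_use="Decision support for loan officers",
)
print(record.name, "/", record.risk_class)
```

Keeping these answers in a machine-readable structure makes it easier to attach the assessment to the system's technical documentation, as the introduction suggests.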
2. Design Phase: Organizational Governance
This phase assesses the AI strategy, infrastructure, security, data governance, workforce, and organizational culture. It is intended to evaluate the overarching organizational governance framework, focusing on how ethical principles are implemented in practice across the company, regardless of the specific AI project in question. Reference models of AI readiness, such as the one developed by Cisco [1] and comparable institutional frameworks, have been employed to structure this evaluation, ensuring alignment with industry standards and best practices. These elements are closely tied to the overall operational philosophy and procedural norms of the organization responsible for producing AI systems. Importantly, this assessment is not limited to the current system under review but reflects the organization's general approach to software and AI system development.
2.1 Strategy
Assesses AI strategy, leadership, impact evaluation processes, financial planning, and budget allocation.
2.2 Infrastructure and Security
Evaluates scalability, GPU resources, network performance, cybersecurity awareness, and data protection measures.
2.3 Data Practices
Reviews data acquisition, preprocessing, accessibility, integration with analytical tools, and staff competence in handling these tools.
2.4 Governance
Checks for the existence of defined values, AI ethics boards, and mechanisms for bias identification and remediation.
2.5 Personnel
Assesses staffing levels, AI expertise, training programs, and accessibility measures for employees with disabilities.
2.6 Culture
Examines organizational readiness to adopt AI, leadership receptiveness, employee engagement, and change management strategies.
3. Design Phase: Use Case Definition
The Design Phase constitutes a foundational component of the AI system lifecycle, emphasizing the systematic analysis of ethical, legal, and technical dimensions during the initial modeling process. This phase is essential for ensuring that the AI system is aligned with the organization’s core values, complies with relevant regulatory frameworks, and proactively addresses risks related to problem misdefinition or ethical oversights. Its scope encompasses the compilation of the system’s purpose, high-level objectives, and performance metrics, while also integrating governance structures and ethical principles into its conceptual design. Additionally, relevant software design patterns and development best practices may be documented to inform the system’s architectural choices and overall implementation strategy.
The structure can be divided into three key subcategories:
3.1 Application Objectives
Explores the rationale for creating the AI application, the problem it aims to address, affected populations, and performance criteria.
3.2 Automation Justification
Evaluates whether the decision to use AI was informed and whether non-automated alternatives were considered.
3.3 Ethical Risk Evaluation
Using the taxonomy proposed in "A Collaborative, Human-Centred Taxonomy of AI, Algorithmic, and Automation Harms" [2], this section evaluates societal and environmental risks.
4. Development Phase: Data
The Development Phase is a critical stage in the AI lifecycle, focused on data acquisition, preparation, and validation so that the data remain intact and suitable for model training. Throughout this stage, particular emphasis is placed on compliance with ethical and legal frameworks, such as the GDPR, while also addressing critical technical challenges, including data bias, incompleteness, provenance, and representativeness. Organizations must document their data sources, assess data quality with respect to attributes such as accuracy, completeness, and timeliness, and describe the preprocessing techniques applied to address missing values or class imbalance. They must also state how protected attributes were identified in order to prevent discrimination, and verify that the data do not rely on proxy variables, which can introduce bias.
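Two of the checks named above, missing-value rates and class imbalance, can be sketched concretely. This is a pure-Python illustration; real pipelines would typically use pandas or similar tooling, and the column names and records are invented for the example.

```python
from collections import Counter

# Hypothetical training records; None marks a missing value.
rows = [
    {"age": 34,   "income": 52000, "label": "approved"},
    {"age": None, "income": 48000, "label": "approved"},
    {"age": 29,   "income": None,  "label": "rejected"},
    {"age": 41,   "income": 61000, "label": "approved"},
]

def missing_rates(rows, columns):
    """Fraction of missing (None) values per column."""
    n = len(rows)
    return {c: sum(r[c] is None for r in rows) / n for c in columns}

def imbalance_ratio(rows, label_col):
    """Majority-to-minority class ratio; 1.0 means perfectly balanced."""
    counts = Counter(r[label_col] for r in rows)
    return max(counts.values()) / min(counts.values())

print(missing_rates(rows, ["age", "income"]))  # {'age': 0.25, 'income': 0.25}
print(imbalance_ratio(rows, "label"))          # 3.0
```

Results like these are exactly what the assessment asks organizations to document, alongside the remediation applied (imputation, resampling, and so on).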
4.1 Source and Documentation
Assesses data types, selection processes, sources, and whether data were reused or newly collected.
4.2 Privacy and Legality
Focuses on personal data use, anonymization/pseudonymization, GDPR compliance, and Data Protection Impact Assessments [3].
4.3 Data Quality and Bias
Evaluates collection methods, bias detection, completeness, accuracy, timeliness, consistency, relevance, and representativeness.
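Representativeness, the last attribute in this list, can be probed by comparing group shares in the dataset against a reference population. The groups, shares, and the 0.10 flagging threshold below are illustrative assumptions, not values prescribed by the framework.

```python
def representation_gaps(sample_shares, population_shares):
    """Absolute difference between sample and population share per group."""
    return {g: round(abs(sample_shares.get(g, 0.0) - p), 3)
            for g, p in population_shares.items()}

# Hypothetical age distribution: reference population vs. training sample.
population = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample     = {"18-34": 0.50, "35-54": 0.35, "55+": 0.15}

gaps = representation_gaps(sample, population)
flagged = [g for g, gap in gaps.items() if gap > 0.10]
print(gaps)     # {'18-34': 0.2, '35-54': 0.0, '55+': 0.2}
print(flagged)  # ['18-34', '55+']
```

A flagged group signals over- or under-representation that the assessment would expect the organization to explain or correct.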
5. Development Phase: Model
Divided into six critical subthemes:
5.1 Algorithm Description
Details the technology used, algorithmic implementation, and level of automation.
5.2 Fairness
Evaluates applied fairness principles, tools used, and accessibility for users with disabilities [4][5][6][7].
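One of the group-fairness principles this subsection asks about, demographic parity, can be checked with a few lines. The records and the 0.1 tolerance below are illustrative; in practice the metric and threshold should follow the organization's documented fairness policy.

```python
def positive_rate(records, group):
    """Share of positive outcomes within one demographic group."""
    selected = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in selected) / len(selected)

# Hypothetical decisions for two demographic groups.
records = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

# Demographic parity: positive rates should be (nearly) equal across groups.
gap = abs(positive_rate(records, "A") - positive_rate(records, "B"))
print(round(gap, 2))   # 0.5
print(gap <= 0.1)      # False: demographic parity is violated here
```

Dedicated toolkits compute many more such metrics, but the assessment's point is that whichever metrics are applied should be named and their results documented.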
5.3 Explainability and Transparency
Assesses openness of code, explanation mechanisms, and user-level adaptability [8].
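A common model-agnostic explanation mechanism of the kind this subsection covers is permutation importance: shuffle one feature and measure how much model accuracy drops. The toy model and data below are assumptions for illustration; libraries such as scikit-learn provide production implementations.

```python
import random

def model(x):
    # Toy classifier: approve when the income-to-debt signal is positive.
    return 1 if x["income"] - 2 * x["debt"] > 0 else 0

# Hypothetical labelled examples: (features, true label).
data = [({"income": 60, "debt": 10}, 1), ({"income": 30, "debt": 20}, 0),
        ({"income": 80, "debt": 35}, 1), ({"income": 25, "debt": 30}, 0)]

def accuracy(data):
    return sum(model(x) == y for x, y in data) / len(data)

def permutation_importance(data, feature, seed=0):
    """Accuracy drop after shuffling one feature across the dataset."""
    rng = random.Random(seed)
    values = [x[feature] for x, _ in data]
    rng.shuffle(values)
    shuffled = [({**x, feature: v}, y) for (x, y), v in zip(data, values)]
    return accuracy(data) - accuracy(shuffled)

for feat in ("income", "debt"):
    print(feat, permutation_importance(data, feat))
```

A large drop marks a feature the model relies on heavily, which is the kind of user-facing explanation the assessment asks systems to be able to produce.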
5.4 Traceability
Reviews change logs, decision traceability, and overall system auditability.
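Decision traceability can be made tamper-evident by hash-chaining log entries, so any retroactive edit breaks the chain. This is one possible sketch, not a mandated mechanism, and the record fields are assumptions.

```python
import hashlib
import json

def append_entry(log, record):
    """Append a record whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    log.append({"record": record, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify_chain(log):
    """Recompute every hash; False if any entry was altered or reordered."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps({"record": entry["record"], "prev": prev},
                             sort_keys=True)
        if entry["prev"] != prev or \
           hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"decision": "approve", "model_version": "1.2.0"})
append_entry(log, {"decision": "reject", "model_version": "1.2.0"})
print(verify_chain(log))                  # True
log[0]["record"]["decision"] = "reject"   # simulate tampering
print(verify_chain(log))                  # False
```

An auditor can then verify the full decision history without trusting the operator, which supports the auditability this subsection reviews.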
5.5 Robustness and Testing
Evaluates robustness tests performed and their methodologies [9][10][11][12].
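A simple robustness probe of the kind this subsection asks about is to perturb each input with small noise and measure how often the prediction flips. The toy threshold model, noise scale, and inputs are illustrative assumptions.

```python
import random

def model(x):
    # Toy threshold classifier on a single scalar feature.
    return 1 if x >= 0.5 else 0

def stability(inputs, noise=0.05, trials=200, seed=42):
    """Fraction of perturbed predictions that match the unperturbed one."""
    rng = random.Random(seed)
    stable = 0
    for x in inputs:
        base = model(x)
        for _ in range(trials):
            stable += model(x + rng.uniform(-noise, noise)) == base
    return stable / (len(inputs) * trials)

# Inputs near the 0.5 decision boundary score lower than inputs far from it.
print(stability([0.1, 0.9]))
print(stability([0.49]))
```

Documented results from probes like this (or from stronger adversarial test suites) are what the assessment expects under "robustness tests performed and their methodologies".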
5.6 Ethical Considerations
Assesses best/worst case scenarios, societal and environmental impacts, and risks such as autonomy infringement, physical/psychological harm, reputational damage, economic harm, human rights violations, sociocultural and political disruption. Risk dimensions are adapted from the Ada Lovelace Institute and Interpol [13][14][15].
6. Evaluation Phase
Questions are divided into four key subthemes:
6.1 Testing
Assesses testing strategies, model performance documentation, outlier tests, and failure pattern recognition.
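The outlier tests mentioned here can be as simple as the interquartile-range (IQR) rule applied to a performance metric collected across test runs. The 1.5 multiplier is the conventional choice, and the scores below are invented for the example.

```python
import statistics

def iqr_outliers(values, k=1.5):
    """Return values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

# Hypothetical per-run accuracy scores; one run is suspiciously high.
scores = [0.71, 0.69, 0.73, 0.70, 0.72, 0.68, 0.99]
print(iqr_outliers(scores))  # [0.99]
```

An outlying run like this would trigger the failure-pattern investigation the subsection asks to be documented.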
6.2 Deployment
Examines deployment strategies, risk mitigation procedures, and service management.
6.3 Compliance Controls
Evaluates alignment with IEEE and ISO standards, data protection impact assessments, and adherence to ethical codes and best practices [16].
6.4 Certification and Audits
Assesses auditability, third-party review procedures, and legal framework alignment.
7. Operation Phase
Consists of two primary subthemes:
7.1 Continuous Monitoring and Feedback
Assesses information dissemination, regular ethical reviews, user risk communication, and vulnerability reporting mechanisms.
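Continuous monitoring often includes a drift check such as the Population Stability Index (PSI) between a baseline and a live feature distribution. The bin counts below are invented, and the 0.2 alert threshold is a common convention rather than a requirement of this framework.

```python
import math

def psi(baseline_counts, live_counts):
    """PSI over matched histogram bins; higher values mean more drift."""
    b_total, l_total = sum(baseline_counts), sum(live_counts)
    total = 0.0
    for b, l in zip(baseline_counts, live_counts):
        b_pct = max(b / b_total, 1e-6)  # guard against log(0) on empty bins
        l_pct = max(l / l_total, 1e-6)
        total += (l_pct - b_pct) * math.log(l_pct / b_pct)
    return total

# Hypothetical histogram of one input feature at training time vs. in production.
baseline = [100, 300, 400, 200]
live     = [250, 300, 300, 150]

score = psi(baseline, live)
print(round(score, 3))
print("drift alert" if score > 0.2 else "stable")
```

Logging such a metric on a schedule, and acting on alerts, is one concrete way to satisfy the regular-review expectation of this subsection.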
7.2 Security and Adversarial Threats
Evaluates the complexity of the development environment, cybersecurity measures, backup systems, adversarial threat detection, and data poisoning prevention [17][18][19][20][21][22].
8. Retirement Phase
Focuses on the decommissioning of the AI system, including risk assessment, stakeholder participation, data handling, and environmental evaluation [23][24].
8.1 Risk Assessment
Determines whether a formal risk assessment was conducted prior to system retirement.
8.2 Stakeholder Engagement
Assesses the degree of stakeholder involvement during the decommissioning process.
8.3 Data Management
Evaluates the handling of personal and sensitive data during decommissioning.
8.4 Environmental Considerations
Examines organizational efforts to reduce environmental impacts during system retirement.
8.5 Documentation and Transparency
Assesses the extent of documentation and transparency in the decommissioning process.
- ↑ Cisco 2024 AI Readiness Index https://www.cisco.com/c/m/en_us/solutions/ai/readiness-index.html
- ↑ Abercrombie, G., Benbouzid, D., Giudici, P., Golpayegani, D., Hernandez, J., Noro, P., ... & Waltersdorfer, L. (2024). A collaborative, human-centred taxonomy of AI, algorithmic, and automation harms. arXiv preprint arXiv:2407.01294.
- ↑ Art. 9 GDPR Processing of special categories of personal data https://gdpr-info.eu/art-9-gdpr/
- ↑ Fairness Metrics in AI: Your Step-by-Step Guide to Equitable Systems https://shelf.io/blog/fairness-metrics-in-ai/
- ↑ Binns, R. (2020, January). On the apparent conflict between individual and group fairness. In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 514-524).
- ↑ Luo, L., Nakao, Y., Chollet, M., Inakoshi, H., & Stumpf, S. (2024). EARN Fairness: Explaining, Asking, Reviewing and Negotiating Artificial Intelligence Fairness Metrics Among Stakeholders. arXiv preprint arXiv:2407.11442.
- ↑ Foulds, J. R., Islam, R., Keya, K. N., & Pan, S. (2020, April). An intersectional definition of fairness. In 2020 IEEE 36th international conference on data engineering (ICDE) (pp. 1918-1921). IEEE.
- ↑ Foulds, J. R., Islam, R., Keya, K. N., & Pan, S. (2020, April). An intersectional definition of fairness. In 2020 IEEE 36th international conference on data engineering (ICDE) (pp. 1918-1921). IEEE.
- ↑ Tocchetti, A., Corti, L., Balayn, A., Yurrita, M., Lippmann, P., Brambilla, M., & Yang, J. (2025). AI robustness: A human-centered perspective on technological challenges and opportunities. ACM Computing Surveys, 57(6), 1-38.
- ↑ Jie M. Zhang, Mark Harman, Lei Ma, and Yang Liu. 2020. Machine learning testing: Survey, landscapes and horizons. IEEE Transactions on Software Engineering 48, 1 (2020), 1–36.
- ↑ Chander, B., John, C., Warrier, L., & Gopalakrishnan, K. (2025). Toward trustworthy artificial intelligence (TAI) in the context of explainability and robustness. ACM Computing Surveys, 57(6), 1-49.
- ↑ Ji, J., Qiu, T., Chen, B., Zhang, B., Lou, H., Wang, K., ... & Gao, W. (2023). AI alignment: A comprehensive survey. arXiv preprint arXiv:2310.19852.
- ↑ Algorithmic impact assessment: a case study in healthcare https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/
- ↑ INTERPOL and UNICRI, “Risk Assessment Questionnaire.” (Revised version February 2024) https://www.interpol.int/es/content/download/20929/file/Risk%20Assesment%20Questionnaire.pdf
- ↑ Abercrombie, G., Benbouzid, D., Giudici, P., Golpayegani, D., Hernandez, J., Noro, P., ... & Waltersdorfer, L. (2024). A collaborative, human-centred taxonomy of AI, algorithmic, and automation harms. arXiv preprint arXiv:2407.01294.
- ↑ G. Mentzas, M. Fikardos, K. Lepenioti, and D. Apostolou, “Exploring the landscape of trustworthy artificial intelligence: Status and challenges,” Intelligent Decision Technologies, vol. 18, no. 2, pp. 837–854, Jun. 2024, doi: 10.3233/idt-240366.
- ↑ Ministry of Digital Governance, “Cybersecurity Self-Assessment Tool for Organizations.” https://mindigital.gr/wp-content/uploads/2022/03/cybersecurity-self-assessment.xlsm
- ↑ Cyber Security Evaluation Tool (CSET) https://www.cisa.gov/resources-tools/services/cyber-security-evaluation-tool-cset
- ↑ Rahman, M. M., Kshetri, N., Sayeed, S. A., & Rana, M. M. (2024). AssessITS: Integrating procedural guidelines and practical evaluation metrics for organizational IT and Cybersecurity risk assessment. arXiv preprint arXiv:2410.01750.
- ↑ Xiong, W., Legrand, E., Åberg, O., & Lagerström, R. (2022). Cyber security threat modeling based on the MITRE Enterprise ATT&CK Matrix. Software and Systems Modeling, 21(1), 157-177.
- ↑ Ibitoye, O., Abou-Khamis, R., Shehaby, M. E., Matrawy, A., & Shafiq, M. O. (2019). The Threat of Adversarial Attacks on Machine Learning in Network Security--A Survey. arXiv preprint arXiv:1911.02621.
- ↑ Hu, Y., Kuang, W., Qin, Z., Li, K., Zhang, J., Gao, Y., ... & Li, K. (2021). Artificial intelligence security: Threats and countermeasures. ACM Computing Surveys (CSUR), 55(1), 1-36.
- ↑ Mäntymäki, M., Minkkinen, M., Birkstedt, T., & Viljanen, M. (2022). Putting AI ethics into practice: The hourglass model of organizational AI governance. arXiv preprint arXiv:2206.00335.
- ↑ Cath, C., & Jansen, F. (2021). Dutch Comfort: The limits of AI governance through municipal registers. arXiv preprint arXiv:2109.02944.