= Algorithmic Impact Assessment =
This document provides a comprehensive framework for assessing algorithmic systems across their lifecycle. Its primary aim is to document and communicate key information about the AI system. The structure of this assessment supports the systematic collection of detailed, standardized information that benefits both internal stakeholders and external evaluators. The resulting document can accompany the AI system’s source code or technical documentation, serving as a transparent reference that reflects the system’s design rationale, implementation scope, and ethical considerations. In addition, the framework incorporates readiness metrics, offering visual and analytical cues to assess maturity levels across different operational categories.
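
To make the readiness metrics concrete, the sketch below shows one way the questionnaire's answers could be aggregated into per-category maturity scores; the 0–4 answer scale, the category names, and the percentage normalization are assumptions made for illustration, not part of the framework itself.

<syntaxhighlight lang="python">
# Hypothetical aggregation of questionnaire answers into readiness scores.
from collections import defaultdict

def readiness_scores(answers):
    """answers: iterable of (category, score) pairs, each score in 0..4."""
    by_category = defaultdict(list)
    for category, score in answers:
        if not 0 <= score <= 4:
            raise ValueError(f"score out of range: {score}")
        by_category[category].append(score)
    # Normalize each category's mean answer to a 0-100 maturity percentage.
    return {cat: 100 * sum(s) / (4 * len(s)) for cat, s in by_category.items()}

example = [("Strategy", 3), ("Strategy", 4), ("Infrastructure", 2),
           ("Data Practices", 1), ("Data Practices", 2)]
print(readiness_scores(example))
# {'Strategy': 87.5, 'Infrastructure': 50.0, 'Data Practices': 37.5}
</syntaxhighlight>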


== General Information ==
This initial stage focuses on gathering essential background information regarding:

a) the organization or individuals responsible for the design and development of the AI system,

b) the team or person completing the assessment questionnaire,

c) the AI system itself, including abstract information about its role, design intent, and operational context.


The scope of this section is to clearly state the identity of the creator(s) of the system, provide essential contact details for follow-up or accountability purposes, and specify who is responsible for the information disclosed in this report. Furthermore, it aims to briefly introduce the broader AI paradigm under which the system operates and clarify the extent to which this paradigm influences the system's design and functionality.


=== Organization Details ===
Captures the type of organization (e.g., provider, developer, distributor, importer), its name, description, website, mission statement, and contact details.


=== Assessment Contributors ===
Clarifies whether the assessment was completed by an individual or a team, their relationship to the system, and the sources of their information.


=== System Details ===
Requires the system's name, project phase, scope criteria, risk classification under the AI Act, purpose, intended use, and any supplementary references.


== Design Phase: Organizational Governance ==
This phase assesses the AI strategy, infrastructure, security, data governance, workforce, and organizational culture. It is intended to evaluate the overarching organizational governance framework, specifically focusing on how ethical principles are implemented in practice across the company, regardless of the specific AI project in question. Reference models of AI readiness—such as those developed by Cisco<ref>Cisco 2024 AI Readiness Index https://www.cisco.com/c/m/en_us/solutions/ai/readiness-index.html</ref> and comparable institutional frameworks—have been employed to structure this evaluation, ensuring alignment with industry standards and best practices. These elements are closely tied to the overall operational philosophy and procedural norms of the organization responsible for producing AI systems. Importantly, this assessment is not limited to the current system under review but reflects the organization's general approach to software and AI system development.


=== Strategy ===
Assesses AI strategy, leadership, impact evaluation processes, financial planning, and budget allocation.


=== Infrastructure and Security ===
Evaluates scalability, GPU resources, network performance, cybersecurity awareness, and data protection measures.


=== Data Practices ===
Reviews data acquisition, preprocessing, accessibility, integration with analytical tools, and staff competence in handling these tools.


=== Governance ===
Checks for the existence of defined values, AI ethics boards, and mechanisms for bias identification and remediation.


=== Personnel ===
Assesses staffing levels, AI expertise, training programs, and accessibility measures for employees with disabilities.


=== Culture ===
Examines organizational readiness to adopt AI, leadership receptiveness, employee engagement, and change management strategies.


== Design Phase: Use Case Definition ==
The Design Phase constitutes a foundational component of the AI system lifecycle, emphasizing the systematic analysis of ethical, legal, and technical dimensions during the initial modeling process. This phase is essential for ensuring that the AI system is aligned with the organization’s core values, complies with relevant regulatory frameworks, and proactively addresses risks related to problem misdefinition or ethical oversights. Its scope encompasses the compilation of the system’s purpose, high-level objectives, and performance metrics, while also integrating governance structures and ethical principles into its conceptual design. Additionally, relevant software design patterns and development best practices may be documented to inform the system’s architectural choices and overall implementation strategy.


The structure can be divided into three key subcategories:


=== Application Objectives ===
Explores the rationale for creating the AI application, the problem it aims to address, affected populations, and performance criteria.


=== Automation Justification ===
Evaluates whether the decision to use AI was informed and whether non-automated alternatives were considered.


=== Ethical Risk Evaluation ===
Using the taxonomy proposed in "A Collaborative, Human-Centred Taxonomy of AI, Algorithmic, and Automation Harms" <ref>Abercrombie, G., Benbouzid, D., Giudici, P., Golpayegani, D., Hernandez, J., Noro, P., ... & Waltersdorfer, L. (2024). A collaborative, human-centred taxonomy of ai, algorithmic, and automation harms. ''arXiv preprint arXiv:2407.01294''.</ref>, this section evaluates societal and environmental risks.


== Development Phase: Data ==
The Development Phase is a critical stage in the AI lifecycle, focusing on data acquisition, preparation, and validation so that data remain intact and suitable for AI model training. Throughout this stage, particular emphasis is placed on ensuring compliance with ethical and legal frameworks—such as the GDPR—while also addressing critical technical challenges, including data bias, incompleteness, provenance, and representativeness. Organizations must document the origin of their data sources, assess their quality with respect to attributes such as accuracy, completeness, and timeliness, and describe how preprocessing techniques were applied to address missing values or class imbalance. Organizations must also state how protected attributes were identified in order to prevent discrimination, and ensure that the data do not rely on proxy variables, which can introduce bias.


=== Source and Documentation ===
Assesses data types, selection processes, sources, and whether data were reused or newly collected.
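
As an illustration only, a minimal machine-readable provenance record covering these questions might look like the following; the field names are hypothetical, not the framework's official schema.

<syntaxhighlight lang="python">
# Hypothetical dataset provenance record for this subsection's questions.
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    name: str
    data_types: list[str]        # e.g. ["tabular", "free text"]
    source: str                  # where the data came from
    selection_process: str       # how records were chosen
    reused: bool                 # reused data vs. newly collected
    licenses: list[str] = field(default_factory=list)

record = DatasetRecord(
    name="triage-notes-2024",
    data_types=["free text"],
    source="hospital EHR export",
    selection_process="random sample of 2023 admissions",
    reused=False,
)
print(record)
</syntaxhighlight>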


=== Privacy and Legality ===
Focuses on personal data use, anonymization/pseudonymization, GDPR compliance, and Data Protection Impact Assessments <ref>Art. 9 GDPR Processing of special categories of personal data https://gdpr-info.eu/art-9-gdpr/</ref>.
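
As a minimal sketch of one pseudonymization technique mentioned here, the example below replaces a direct identifier with a keyed hash; note that under the GDPR pseudonymized data remain personal data, and the key must be stored separately under access control. The record fields are assumptions for the example.

<syntaxhighlight lang="python">
# Sketch: salted-hash (HMAC) pseudonymization of a direct identifier.
import hashlib
import hmac
import os

# In practice the key comes from a secrets manager, never from source code.
KEY = os.environ.get("PSEUDONYM_KEY", "change-me").encode()

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable, keyed pseudonym."""
    return hmac.new(KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "GR-12345", "age": 47, "diagnosis": "J45"}
record["patient_id"] = pseudonymize(record["patient_id"])
print(record)
</syntaxhighlight>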


=== Data Quality and Bias ===
Evaluates collection methods, bias detection, completeness, accuracy, timeliness, consistency, relevance, and representativeness.
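
A sketch of how some of these checks might be automated is shown below; the dataframe, the protected attribute, and the 5% representation threshold are toy assumptions.

<syntaxhighlight lang="python">
# Sketch: per-column completeness and group-representation checks.
import pandas as pd

def quality_report(df: pd.DataFrame, protected: str, min_share: float = 0.05):
    completeness = 1 - df.isna().mean()                  # non-null share per column
    shares = df[protected].value_counts(normalize=True)  # group shares
    underrepresented = shares[shares < min_share].index.tolist()
    return completeness, shares, underrepresented

df = pd.DataFrame({"sex": ["F", "M", "M", "M", "M"],
                   "income": [30_000, None, 52_000, 41_000, 38_000]})
completeness, shares, under = quality_report(df, "sex")
print(completeness, shares, under, sep="\n")
</syntaxhighlight>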


== Development Phase: Model ==
At this point in the assessment, questions are organized into six interrelated thematic subcategories, each addressing a distinct aspect of algorithmic assessment. This structure allows for a thorough, multidisciplinary evaluation of the system under review. The subcategories are:


=== Algorithm Description ===
Details the technology used, algorithmic implementation, and level of automation.


=== Fairness ===
Evaluates applied fairness principles, tools used, and accessibility for users with disabilities <ref>Fairness Metrics in AI: Your Step-by-Step Guide to Equitable Systems https://shelf.io/blog/fairness-metrics-in-ai/</ref><ref>Binns, R. (2020, January). On the apparent conflict between individual and group fairness. In ''Proceedings of the 2020 conference on fairness, accountability, and transparency'' (pp. 514-524).</ref><ref>Luo, L., Nakao, Y., Chollet, M., Inakoshi, H., & Stumpf, S. (2024). EARN Fairness: Explaining, Asking, Reviewing and Negotiating Artificial Intelligence Fairness Metrics Among Stakeholders. ''arXiv preprint arXiv:2407.11442''.</ref><ref>Foulds, J. R., Islam, R., Keya, K. N., & Pan, S. (2020, April). An intersectional definition of fairness. In ''2020 IEEE 36th international conference on data engineering (ICDE)'' (pp. 1918-1921). IEEE.</ref>.
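
For illustration, two widely used group-fairness metrics can be computed directly from model outputs, as sketched below with toy data; which metric is appropriate depends on the context and is itself one of this subsection's questions.

<syntaxhighlight lang="python">
# Sketch: demographic parity and equal opportunity gaps on toy data.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Gap in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, group):
    """Gap in true-positive rates across groups."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean()
            for g in np.unique(group)]
    return max(tprs) - min(tprs)

y_true = np.array([1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1, 1])
group = np.array(["a", "a", "a", "b", "b", "b"])
print(demographic_parity_gap(y_pred, group))         # 1.0 - 1/3 = 0.667
print(equal_opportunity_gap(y_true, y_pred, group))  # 1.0 - 0.5 = 0.5
</syntaxhighlight>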


=== Explainability and Transparency ===
Assesses openness of code, explanation mechanisms, and user-level adaptability <ref>Foulds, J. R., Islam, R., Keya, K. N., & Pan, S. (2020, April). An intersectional definition of fairness. In ''2020 IEEE 36th international conference on data engineering (ICDE)'' (pp. 1918-1921). IEEE.</ref>.
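
One model-agnostic explanation mechanism of the kind this subsection asks about is permutation importance, sketched below on a public dataset; the model and dataset are stand-ins chosen for the example.

<syntaxhighlight lang="python">
# Sketch: global feature importance via permutation on held-out data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:  # top five features
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
</syntaxhighlight>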


=== Traceability ===
Reviews change logs, decision traceability, and overall system auditability.
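
As a sketch of what decision traceability could mean technically, the example below keeps an append-only, hash-chained log of decisions so that later tampering is detectable; the record fields are hypothetical.

<syntaxhighlight lang="python">
# Sketch: tamper-evident decision log via hash chaining.
import hashlib
import json
import time

class DecisionLog:
    def __init__(self):
        self.entries, self._prev = [], "0" * 64

    def append(self, record: dict):
        entry = {"ts": time.time(), "record": record, "prev": self._prev}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        self._prev = entry["hash"]

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "record", "prev")}
            ok = (e["prev"] == prev and e["hash"] == hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest())
            if not ok:
                return False
            prev = e["hash"]
        return True

log = DecisionLog()
log.append({"input_id": "case-001", "decision": "approve", "model": "v1.3"})
print(log.verify())  # True; editing any stored entry would make this False
</syntaxhighlight>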


=== Robustness and Testing ===
Evaluates robustness tests performed and their methodologies <ref>Tocchetti, A., Corti, L., Balayn, A., Yurrita, M., Lippmann, P., Brambilla, M., & Yang, J. (2025). Ai robustness: a human-centered perspective on technological challenges and opportunities. ''ACM Computing Surveys'', ''57''(6), 1-38.</ref><ref>Jie M. Zhang, Mark Harman, Lei Ma, and Yang Liu. 2020. Machine learning testing: Survey, landscapes and horizons. IEEE Transactions on Software Engineering 48, 1 (2020), 1–36.</ref><ref>Chander, B., John, C., Warrier, L., & Gopalakrishnan, K. (2025). Toward trustworthy artificial intelligence (TAI) in the context of explainability and robustness. ''ACM Computing Surveys'', ''57''(6), 1-49.</ref><ref>Ji, J., Qiu, T., Chen, B., Zhang, B., Lou, H., Wang, K., ... & Gao, W. (2023). Ai alignment: A comprehensive survey. ''arXiv preprint arXiv:2310.19852''.</ref>.
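
A minimal example of one such test, measuring accuracy degradation under Gaussian input noise, is sketched below; the noise levels, model, and dataset are toy assumptions.

<syntaxhighlight lang="python">
# Sketch: input-perturbation robustness test.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
for sigma in (0.0, 0.1, 0.5, 1.0):
    noisy = X_te + rng.normal(0.0, sigma, X_te.shape)
    print(f"sigma={sigma}: accuracy={model.score(noisy, y_te):.3f}")
</syntaxhighlight>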


=== Ethical Considerations ===
Assesses best/worst case scenarios, societal and environmental impacts, and risks such as autonomy infringement, physical/psychological harm, reputational damage, economic harm, human rights violations, and sociocultural and political disruption. Risk dimensions are adapted from the Ada Lovelace Institute and INTERPOL <ref>Algorithmic impact assessment: a case study in healthcare https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/</ref><ref>INTERPOL and UNICRI, “Risk Assessment Questionnaire” (Revised version February 2024). https://www.interpol.int/es/content/download/20929/file/Risk%20Assesment%20Questionnaire.pdf</ref><ref>Abercrombie, G., Benbouzid, D., Giudici, P., Golpayegani, D., Hernandez, J., Noro, P., ... & Waltersdorfer, L. (2024). A collaborative, human-centred taxonomy of ai, algorithmic, and automation harms. ''arXiv preprint arXiv:2407.01294''.</ref>.


== Evaluation Phase ==
The Evaluation Phase is an essential stage in the lifecycle of AI systems, frameworks, and platforms: it is where the system's design assumptions, functional capacity, and overall implications are scrutinized. More than a technical checkpoint, it is a multi-faceted evaluation process that seeks to establish the reliability, fairness, transparency, and legality of the AI system under live or contextualized conditions. This phase is methodologically designed to promote accountability and trust by identifying risks, testing robustness, and uncovering unintended consequences before the system is fully launched into service. The method draws on interdisciplinary evaluation principles and international best practices, promoting a balanced combination of quantitative testing and qualitative judgment. Evaluating risk, performance across diverse user groups, and traceability processes is emphasized throughout this phase.


=== Testing ===
Assesses testing strategies, model performance documentation, outlier tests, and failure pattern recognition.
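
As an illustration of an outlier test, the sketch below screens evaluation inputs with an isolation forest before trusting aggregate performance numbers; the contamination rate is an assumption for the example.

<syntaxhighlight lang="python">
# Sketch: flagging out-of-distribution evaluation inputs for review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_eval = np.vstack([rng.normal(0, 1, (200, 4)),   # in-distribution inputs
                    rng.normal(8, 1, (5, 4))])    # injected outliers

detector = IsolationForest(contamination=0.05, random_state=0).fit(X_eval)
flags = detector.predict(X_eval)                  # -1 = outlier, 1 = inlier
print(f"{(flags == -1).sum()} of {len(X_eval)} inputs flagged for review")
</syntaxhighlight>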


=== Deployment ===
Examines deployment strategies, risk mitigation procedures, and service management.


=== Compliance Controls ===
Evaluates alignment with IEEE and ISO standards, data protection impact assessments, and adherence to ethical codes and best practices<ref>G. Mentzas, M. Fikardos, K. Lepenioti, and D. Apostolou, “Exploring the landscape of trustworthy artificial intelligence: Status and challenges,” Intelligent Decision Technologies, vol. 18, no. 2, pp. 837–854, Jun. 2024, doi: 10.3233/idt-240366.</ref>.


=== Certification and Audits ===
Assesses auditability, third-party review procedures, and legal framework alignment.


== Operation Phase ==
The Operation Phase focuses on the real-world deployment and continuous oversight of AI systems. At this phase, the evaluation concentrates on mechanisms for monitoring, feedback, cybersecurity, and resilience—ensuring that systems remain ethically aligned, secure, and responsive to emerging risks throughout their lifecycle.

Consists of two primary subthemes:


=== Continuous Monitoring and Feedback ===
Assesses information dissemination, regular ethical reviews, user risk communication, and vulnerability reporting mechanisms.
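
One concrete monitoring signal of the kind this subsection covers is distribution drift; the sketch below computes the Population Stability Index (PSI) for a single feature, with the common 0.2 alert threshold assumed for the example.

<syntaxhighlight lang="python">
# Sketch: feature-drift monitoring with the Population Stability Index.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI between a training-time sample and a live sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) for empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_sample = rng.normal(0.0, 1.0, 10_000)
live_sample = rng.normal(0.4, 1.0, 10_000)   # shifted live distribution
score = psi(train_sample, live_sample)
print(f"PSI={score:.3f}", "-> drift alert" if score > 0.2 else "-> stable")
</syntaxhighlight>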


=== Security and Adversarial Threats ===
Evaluates the complexity of the development environment, cybersecurity measures, backup systems, adversarial threat detection, and data poisoning prevention <ref>Ministry of Digital Governance, “Cybersecurity Self-Assessment Tool for Organizations.” https://mindigital.gr/wp-content/uploads/2022/03/cybersecurity-self-assessment.xlsm</ref><ref>Cyber Security Evaluation Tool (CSET) https://www.cisa.gov/resources-tools/services/cyber-security-evaluation-tool-cset</ref><ref>Rahman, M. M., Kshetri, N., Sayeed, S. A., & Rana, M. M. (2024). AssessITS: Integrating procedural guidelines and practical evaluation metrics for organizational IT and Cybersecurity risk assessment. ''arXiv preprint arXiv:2410.01750''.</ref><ref>Xiong, W., Legrand, E., Åberg, O., & Lagerström, R. (2022). Cyber security threat modeling based on the MITRE Enterprise ATT&CK Matrix. ''Software and Systems Modeling'', ''21''(1), 157-177.</ref><ref>Ibitoye, O., Abou-Khamis, R., Shehaby, M. E., Matrawy, A., & Shafiq, M. O. (2019). The Threat of Adversarial Attacks on Machine Learning in Network Security--A Survey. ''arXiv preprint arXiv:1911.02621''.</ref><ref>Hu, Y., Kuang, W., Qin, Z., Li, K., Zhang, J., Gao, Y., ... & Li, K. (2021). Artificial intelligence security: Threats and countermeasures. ''ACM Computing Surveys (CSUR)'', ''55''(1), 1-36.</ref>.
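
To make the adversarial-threat questions concrete, the sketch below crafts a fast-gradient-sign (FGSM) perturbation against a linear classifier, the simplest kind of attack an operator might test for; the weights, input, and epsilon are toy assumptions.

<syntaxhighlight lang="python">
# Sketch: FGSM adversarial example against a logistic-regression model.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # trained weights (toy values)
b = 0.1
x = np.array([0.2, -0.4, 1.0])   # an input correctly scored as class 1

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))

# For true label 1, the loss gradient w.r.t. x is -(1 - p) * w, so the
# FGSM step x + eps * sign(grad) reduces to x - eps * sign(w).
eps = 0.5
x_adv = x - eps * np.sign(w)
print(f"clean score: {predict(x):.3f}")            # ~0.85 -> class 1
print(f"adversarial score: {predict(x_adv):.3f}")  # ~0.43 -> flipped to class 0
</syntaxhighlight>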


== Retirement Phase ==
The Retirement Phase is the final stage of the AI lifecycle: the formal process of responsibly decommissioning an AI system and addressing any residual risks and consequences. This phase is important for protecting the stakeholders involved and for preserving transparency and accountability even after the system is inactive. It focuses on decommissioning the AI system and includes risk assessment, stakeholder engagement, data handling, and environmental evaluation <ref>Mäntymäki, M., Minkkinen, M., Birkstedt, T., & Viljanen, M. (2022). Putting AI ethics into practice: The hourglass model of organizational AI governance. ''arXiv preprint arXiv:2206.00335''.</ref><ref>Cath, C., & Jansen, F. (2021). Dutch Comfort: The limits of AI governance through municipal registers. ''arXiv preprint arXiv:2109.02944''.</ref>.


=== Risk Assessment ===
Determines whether a formal risk assessment was conducted prior to system retirement.


=== Stakeholder Engagement ===
Assesses the degree of stakeholder involvement during the decommissioning process.


=== Data Management ===
Evaluates the handling of personal and sensitive data during decommissioning.


=== Environmental Considerations ===
Examines organizational efforts to reduce environmental impacts during system retirement.


=== Documentation and Transparency ===
Assesses the extent of documentation and transparency in the decommissioning process.

== References ==
<references />