= AI Lifecycle Impact Assessment Framework =

== 1. General Information ==
This initial stage focuses on gathering essential background information regarding the organization responsible for the AI system, the respondent(s) completing the questionnaire, and the AI system itself.

=== 1.1 Organization Details ===
Captures the type of organization (e.g., provider, developer, distributor, importer), its name, description, website, mission statement, and contact details.

=== 1.2 Questionnaire Respondents ===
Clarifies whether the questionnaire was completed by an individual or a team, their relationship to the system, and the sources of their information.

=== 1.3 System Details ===
Requires the system's name, project phase, scope criteria, risk classification under the AI Act, purpose, intended use, and any supplementary references.

== 2. Design Phase: Organizational Governance ==
This phase draws on the Cisco AI Readiness Model, assessing AI strategy, infrastructure, data governance, workforce, and organizational culture.

=== 2.1 Strategy ===
Assesses AI strategy, leadership, impact evaluation processes, financial planning, and budget allocation.

=== 2.2 Infrastructure and Security ===
Evaluates scalability, GPU resources, network performance, cybersecurity awareness, and data protection measures.

=== 2.3 Data Practices ===
Reviews data acquisition, preprocessing, accessibility, integration with analytical tools, and staff competence in handling these tools.

=== 2.4 Governance ===
Checks for the existence of defined values, AI ethics boards, and mechanisms for bias identification and remediation.

=== 2.5 Personnel ===
Assesses staffing levels, AI expertise, training programs, and accessibility measures for employees with disabilities.

=== 2.6 Culture ===
Examines organizational readiness to adopt AI, leadership receptiveness, employee engagement, and change management strategies.

== 3. Design Phase: Use Case Definition ==
Questions are divided into three key subcategories:

=== 3.1 Application Objectives ===
Explores the rationale for creating the AI application, the problem it aims to address, affected populations, and performance criteria.
 
=== 3.2 Automation Justification ===
Evaluates whether the decision to use AI was informed and whether non-automated alternatives were considered.
 
=== 3.3 Ethical Risk Evaluation ===
Using the taxonomy proposed in "A Collaborative, Human-Centred Taxonomy of AI, Algorithmic, and Automation Harms" [38], this section evaluates societal and environmental risks.
 
== 4. Development Phase: Data ==
Questions are grouped into three subthemes:
 
=== 4.1 Source and Documentation ===
Assesses data types, selection processes, sources, and whether data were reused or newly collected.
 
=== 4.2 Privacy and Legality ===
Focuses on personal data use, anonymization/pseudonymization, GDPR compliance, and Data Protection Impact Assessments [39].
 
=== 4.3 Data Quality and Bias ===
Evaluates collection methods, bias detection, completeness, accuracy, timeliness, consistency, relevance, and representativeness.
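The quality criteria listed here lend themselves to simple automated evidence. The following is a minimal, illustrative Python sketch of completeness and representativeness checks of the kind an assessor might request; it is not part of the questionnaire, and the record layout, field names, and reference proportions are assumptions invented for the example.

```python
# Illustrative only: simple evidence for subtheme 4.3 (completeness and
# representativeness). Dataset layout and reference shares are assumed.
from collections import Counter

def completeness(records, fields):
    """Fraction of non-missing values per field."""
    n = len(records)
    return {f: sum(1 for r in records if r.get(f) is not None) / n
            for f in fields}

def representativeness(records, field, reference):
    """Absolute gap between observed group shares and reference shares."""
    counts = Counter(r[field] for r in records if r.get(field) is not None)
    total = sum(counts.values())
    return {g: abs(counts.get(g, 0) / total - share)
            for g, share in reference.items()}

# Hypothetical records; one income value is missing.
records = [
    {"age_group": "18-34", "income": 30000},
    {"age_group": "18-34", "income": 42000},
    {"age_group": "35-54", "income": None},
    {"age_group": "55+",   "income": 51000},
]
print(completeness(records, ["age_group", "income"]))
# Gap of each group's share from assumed population proportions;
# large gaps would be flagged for review.
print(representativeness(records, "age_group",
                         {"18-34": 0.3, "35-54": 0.4, "55+": 0.3}))
```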
 
== 5. Development Phase: Model ==
Divided into six critical subthemes:
 
=== 5.1 Algorithm Description ===
Details the technology used, algorithmic implementation, and level of automation.
 
=== 5.2 Fairness ===
Evaluates applied fairness principles, tools used, and accessibility for users with disabilities [40][41][42][43].
 
=== 5.3 Explainability and Transparency ===
Assesses openness of code, explanation mechanisms, and user-level adaptability [44].
 
=== 5.4 Traceability ===
Reviews change logs, decision traceability, and overall system auditability.
 
=== 5.5 Robustness and Testing ===
Evaluates robustness tests performed and their methodologies [45][46][47].
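One basic robustness probe of the kind this subtheme asks about can be sketched as follows: perturb inputs slightly and measure how often predictions flip. The toy classifier, noise scale, and trial count are assumptions for illustration, not a methodology prescribed by the framework.

```python
# Illustrative only: a perturbation-stability probe for subtheme 5.5.
# The classifier and noise parameters are stand-ins for the example.
import random

def predict(x):
    """Toy stand-in classifier: label 1 when the feature sum exceeds 1.0."""
    return 1 if sum(x) > 1.0 else 0

def flip_rate(model, inputs, noise=0.05, trials=100, seed=0):
    """Share of predictions that change under small random perturbations
    (seeded RNG, so the result is reproducible)."""
    rng = random.Random(seed)
    flips = 0
    total = 0
    for x in inputs:
        base = model(x)
        for _ in range(trials):
            noisy = [v + rng.uniform(-noise, noise) for v in x]
            flips += (model(noisy) != base)
            total += 1
    return flips / total

inputs = [[0.2, 0.3], [0.55, 0.5], [0.9, 0.8]]
print(f"prediction flip rate under ±0.05 noise: {flip_rate(predict, inputs):.2%}")
```

Inputs far from the decision boundary never flip; the second input sits near it, so its prediction occasionally changes, which is exactly what such a probe is meant to surface.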
 
=== 5.6 Ethical Considerations ===
Assesses best/worst case scenarios, societal and environmental impacts, and risks such as autonomy infringement, physical/psychological harm, reputational damage, economic harm, human rights violations, and sociocultural and political disruption. Risk dimensions are adapted from the Ada Lovelace Institute and Interpol [38][28][32].
 
== 6. Evaluation Phase ==
Questions are divided into four key subthemes:
 
=== 6.1 Testing ===
Assesses testing strategies, model performance documentation, outlier tests, and failure pattern recognition.
 
=== 6.2 Deployment ===
Examines deployment strategies, risk mitigation procedures, and service management.
 
=== 6.3 Compliance Controls ===
Evaluates alignment with IEEE and ISO standards, data protection impact assessments, and adherence to ethical codes and best practices [48].
 
=== 6.4 Certification and Audits ===
Assesses auditability, third-party review procedures, and legal framework alignment.
 
== 7. Operation Phase ==
Consists of two primary subthemes:
 
=== 7.1 Continuous Monitoring and Feedback ===
Assesses information dissemination, regular ethical reviews, user risk communication, and vulnerability reporting mechanisms.
 
=== 7.2 Security and Adversarial Threats ===
Evaluates the complexity of the development environment, cybersecurity measures, backup systems, adversarial threat detection, and data poisoning prevention [49].
 
== 8. Retirement Phase ==
Focuses on the decommissioning of the AI system, including risk assessment, stakeholder participation, data handling, and environmental evaluation [50].
 
=== 8.1 Risk Assessment ===
Determines whether a formal risk assessment was conducted prior to system retirement.
 
=== 8.2 Stakeholder Engagement ===
Assesses the degree of stakeholder involvement during the decommissioning process.
 
=== 8.3 Data Management ===
Evaluates the handling of personal and sensitive data during decommissioning.
 
=== 8.4 Environmental Considerations ===
Examines organizational efforts to reduce environmental impacts during system retirement.
 
=== 8.5 Documentation and Transparency ===
Assesses the extent of documentation and transparency in the decommissioning process.
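The eight-phase outline above can also be expressed as plain data, for example to drive a questionnaire or reporting tool. Below is a minimal Python sketch; the representation is an assumption for illustration, as the framework itself prescribes no machine-readable format.

```python
# The framework's phase/subtheme outline encoded as data (illustrative).
PHASES = {
    "1. General Information": [
        "1.1 Organization Details", "1.2 Questionnaire Respondents",
        "1.3 System Details",
    ],
    "2. Design Phase: Organizational Governance": [
        "2.1 Strategy", "2.2 Infrastructure and Security",
        "2.3 Data Practices", "2.4 Governance",
        "2.5 Personnel", "2.6 Culture",
    ],
    "3. Design Phase: Use Case Definition": [
        "3.1 Application Objectives", "3.2 Automation Justification",
        "3.3 Ethical Risk Evaluation",
    ],
    "4. Development Phase: Data": [
        "4.1 Source and Documentation", "4.2 Privacy and Legality",
        "4.3 Data Quality and Bias",
    ],
    "5. Development Phase: Model": [
        "5.1 Algorithm Description", "5.2 Fairness",
        "5.3 Explainability and Transparency", "5.4 Traceability",
        "5.5 Robustness and Testing", "5.6 Ethical Considerations",
    ],
    "6. Evaluation Phase": [
        "6.1 Testing", "6.2 Deployment",
        "6.3 Compliance Controls", "6.4 Certification and Audits",
    ],
    "7. Operation Phase": [
        "7.1 Continuous Monitoring and Feedback",
        "7.2 Security and Adversarial Threats",
    ],
    "8. Retirement Phase": [
        "8.1 Risk Assessment", "8.2 Stakeholder Engagement",
        "8.3 Data Management", "8.4 Environmental Considerations",
        "8.5 Documentation and Transparency",
    ],
}

# e.g. a blank answer sheet keyed by subtheme
answer_sheet = {sub: None for subs in PHASES.values() for sub in subs}
print(f"{len(PHASES)} phases, {len(answer_sheet)} subthemes")
```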

Revision as of 23:54, 10 April 2025
