Algorithmic Impact Assessment
1. General Information
This initial stage focuses on gathering essential background information regarding the organization responsible for the AI system, the respondent(s) completing the questionnaire, and the AI system itself.
1.1 Organization Details
Captures the type of organization (e.g., provider, developer, distributor, importer), its name, description, website, mission statement, and contact details.
1.2 Questionnaire Respondents
Clarifies whether the questionnaire was completed by an individual or a team, their relationship to the system, and the sources of their information.
1.3 System Details
Requires the system's name, project phase, scope criteria, risk classification under the AI Act, purpose, intended use, and any supplementary references.
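As a minimal sketch of how the fields in this section could be captured programmatically, the following Python dataclass is one hypothetical way to record system details. All field names, the enum values, and the example entry are illustrative assumptions, not prescribed by the questionnaire.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskClass(Enum):
    """Risk tiers broadly following the EU AI Act's classification."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class SystemDetails:
    """Illustrative record for Section 1.3; all field names are hypothetical."""
    name: str
    project_phase: str              # e.g. "design", "development", "operation"
    risk_classification: RiskClass
    purpose: str
    intended_use: str
    references: list[str] = field(default_factory=list)

# Hypothetical example entry
details = SystemDetails(
    name="LoanScreen",
    project_phase="development",
    risk_classification=RiskClass.HIGH,
    purpose="credit risk scoring",
    intended_use="decision support for loan officers",
)
```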
2. Design Phase: Organizational Governance
This phase draws on the Cisco AI Readiness Model, assessing AI strategy, infrastructure, data governance, workforce, and organizational culture.
2.1 Strategy
Assesses AI strategy, leadership, impact evaluation processes, financial planning, and budget allocation.
2.2 Infrastructure and Security
Evaluates scalability, GPU resources, network performance, cybersecurity awareness, and data protection measures.
2.3 Data Practices
Reviews data acquisition, preprocessing, accessibility, integration with analytical tools, and staff competence in handling these tools.
2.4 Governance
Checks for the existence of defined values, AI ethics boards, and mechanisms for bias identification and remediation.
2.5 Personnel
Assesses staffing levels, AI expertise, training programs, and accessibility measures for employees with disabilities.
2.6 Culture
Examines organizational readiness to adopt AI, leadership receptiveness, employee engagement, and change management strategies.
3. Design Phase: Use Case Definition
Questions are divided into three key subcategories:
3.1 Application Objectives
Explores the rationale for creating the AI application, the problem it aims to address, affected populations, and performance criteria.
3.2 Automation Justification
Evaluates whether the decision to use AI was informed and whether non-automated alternatives were considered.
3.3 Ethical Risk Evaluation
Evaluates societal and environmental risks using the taxonomy proposed in "A Collaborative, Human-Centred Taxonomy of AI, Algorithmic, and Automation Harms" [38].
4. Development Phase: Data
Questions are grouped into three subthemes:
4.1 Source and Documentation
Assesses data types, selection processes, sources, and whether data were reused or newly collected.
4.2 Privacy and Legality
Focuses on personal data use, anonymization/pseudonymization, GDPR compliance, and Data Protection Impact Assessments [39].
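To make pseudonymization concrete, the snippet below sketches one common approach: keyed hashing of direct identifiers so that records stay linkable without exposing the identifier. The key handling and column names are assumptions, and a technique like this on its own does not establish GDPR compliance.

```python
import hashlib
import hmac

# Assumption: in practice the key would live in a managed secret store, not in code
SECRET_KEY = b"replace-with-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same input always maps to the same token, preserving linkability
    across records, but the token cannot be reversed without the key.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"customer_id": "C-10492", "balance": 1520.0}
record["customer_id"] = pseudonymize(record["customer_id"])
```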
4.3 Data Quality and Bias
Evaluates collection methods, bias detection, completeness, accuracy, timeliness, consistency, relevance, and representativeness.
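A minimal sketch of how a couple of these dimensions might be quantified, assuming tabular data in a pandas DataFrame; the metrics shown (completeness and uniqueness proxies) and the sample data are illustrative.

```python
import pandas as pd

def quality_report(df: pd.DataFrame) -> dict:
    """Compute simple, illustrative proxies for two quality dimensions."""
    return {
        # Completeness: share of non-missing cells across the whole table
        "completeness": float(df.notna().mean().mean()),
        # Uniqueness: share of rows that are not exact duplicates
        "uniqueness": float(1.0 - df.duplicated().mean()),
    }

df = pd.DataFrame({"age": [34, None, 29], "income": [52000, 48000, 48000]})
print(quality_report(df))
```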
5. Development Phase: Model
Divided into six critical subthemes:
5.1 Algorithm Description
Details the technology used, algorithmic implementation, and level of automation.
5.2 Fairness
Evaluates applied fairness principles, tools used, and accessibility for users with disabilities [40][41][42][43].
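As one hedged illustration of the kind of check such tools perform, the function below computes the demographic parity difference between two groups; the data and group encoding are hypothetical, and real assessments would typically draw on dedicated fairness toolkits.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1); group: binary group membership (0/1).
    A value near 0 suggests similar selection rates across the groups.
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

y_pred = np.array([1, 0, 1, 1, 0, 0])
group = np.array([0, 0, 0, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 2/3 - 1/3 = 0.33...
```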
5.3 Explainability and Transparency
Assesses openness of code, explanation mechanisms, and user-level adaptability [44].
5.4 Traceability
Reviews change logs, decision traceability, and overall system auditability.
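A minimal sketch of decision traceability, assuming an append-only log where each entry is chained to its predecessor by a hash, which is one common way to make decision records tamper-evident; the entry fields are illustrative.

```python
import hashlib
import json
import time

log: list[dict] = []

def append_decision(record: dict) -> None:
    """Append a decision record, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"timestamp": time.time(), "record": record, "prev_hash": prev_hash}
    # Hash over a canonical serialization so any later edit breaks the chain
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

append_decision({"input_id": "A-17", "output": "approve", "model_version": "1.2.0"})
```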
5.5 Robustness and Testing
Evaluates robustness tests performed and their methodologies [45][46][47].
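One simple robustness probe, sketched below under the assumption of a classifier exposing a scikit-learn style predict interface: add small Gaussian perturbations to the inputs and measure how much accuracy degrades. The noise scale is an illustrative parameter.

```python
import numpy as np

def noise_robustness(model, X: np.ndarray, y: np.ndarray, sigma: float = 0.1) -> float:
    """Accuracy drop under Gaussian input noise (illustrative robustness probe).

    Assumes `model` has a scikit-learn style `predict`; `sigma` is the
    noise scale in the units of the (ideally standardized) features.
    """
    base_acc = (model.predict(X) == y).mean()
    X_noisy = X + np.random.default_rng(0).normal(0.0, sigma, X.shape)
    noisy_acc = (model.predict(X_noisy) == y).mean()
    return float(base_acc - noisy_acc)
```

A large drop for small sigma would be flagged in the assessment as a robustness concern.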
5.6 Ethical Considerations
Assesses best- and worst-case scenarios, societal and environmental impacts, and risks such as autonomy infringement, physical and psychological harm, reputational damage, economic harm, human rights violations, and sociocultural and political disruption. Risk dimensions are adapted from the Ada Lovelace Institute and Interpol [38][28][32].
6. Evaluation Phase
Questions are divided into four key subthemes:
6.1 Testing
Assesses testing strategies, model performance documentation, outlier tests, and failure pattern recognition.
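As a hedged example of an outlier test, the snippet below flags new inputs whose features fall more than three standard deviations from the training mean; the z-threshold is an assumption, and other detectors would serve equally well.

```python
import numpy as np

def flag_outliers(X_train: np.ndarray, X_new: np.ndarray, z: float = 3.0) -> np.ndarray:
    """Boolean mask of rows in X_new with any feature beyond z standard deviations."""
    mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
    sigma = np.where(sigma == 0, 1.0, sigma)  # guard against constant features
    scores = np.abs((X_new - mu) / sigma)
    return (scores > z).any(axis=1)
```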
6.2 Deployment
Examines deployment strategies, risk mitigation procedures, and service management.
6.3 Compliance Controls
Evaluates alignment with IEEE and ISO standards, data protection impact assessments, and adherence to ethical codes and best practices [48].
6.4 Certification and Audits
Assesses auditability, third-party review procedures, and legal framework alignment.
7. Operation Phase
Consists of two primary subthemes:
7.1 Continuous Monitoring and Feedback
Assesses information dissemination, regular ethical reviews, user risk communication, and vulnerability reporting mechanisms.
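A minimal sketch of one continuous-monitoring technique, assuming prediction-score distributions are compared between a reference window and a live window using the population stability index (PSI); the bin count and the alert threshold in the comment are illustrative conventions, not requirements of the questionnaire.

```python
import numpy as np

def psi(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population stability index between reference and live score distributions."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    eps = 1e-6  # avoid log(0) and division by zero in empty bins
    ref_pct, live_pct = ref_pct + eps, live_pct + eps
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

# Common rule of thumb (assumption): PSI above roughly 0.2 triggers a review
```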
7.2 Security and Adversarial Threats
Evaluates the complexity of the development environment, cybersecurity measures, backup systems, adversarial threat detection, and data poisoning prevention [49].
8. Retirement Phase
Focuses on the decommissioning of the AI system, including risk assessment, stakeholder participation, data handling, and environmental evaluation [50].
8.1 Risk Assessment
Determines whether a formal risk assessment was conducted prior to system retirement.
8.2 Stakeholder Engagement
Assesses the degree of stakeholder involvement during the decommissioning process.
8.3 Data Management
Evaluates the handling of personal and sensitive data during decommissioning.
8.4 Environmental Considerations
Examines organizational efforts to reduce environmental impacts during system retirement.
8.5 Documentation and Transparency
Assesses the extent of documentation and transparency in the decommissioning process.