Algorithmic Impact Assessment Structure

Revision as of 23:40, 10 April 2025 by Vrettos (talk | contribs)

AI Lifecycle Impact Assessment: Phase Structure

1. Design Phase

The design phase represents the early stage in the development of an AI system, where the fundamental idea is defined, the problem to be addressed is articulated, and the feasibility of the intended goals is examined. The assessment questions in this phase are grouped into two subcategories:

1.1. Governance

This section includes questions related to the organizational governance of the AI development process. Specifically, it focuses on whether ethical principles and values have been effectively communicated and agreed upon by relevant stakeholders and project personnel. It also examines the existence of accountability structures in cases of non-compliance.

1.2. Use Case Definition

This section concerns questions about the use case requirements, including the project's goals, their alignment with ethical values, and potential performance criteria that may be used in later evaluation stages.

2. Development Phase

In this phase, developers use a variety of data sources to train the AI model and ultimately produce the final AI product. The assessment is divided into two subcategories:

2.1. Data

This section focuses on ensuring that the data used for training is properly documented, compliant with relevant legal frameworks, and of high quality. Key aspects include the absence of bias and errors, representativeness of the data to the problem domain, completeness, and traceability of the data sources.
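For illustration only, some of these data-quality properties can be checked programmatically. The sketch below assumes a hypothetical record format, required-field list, and label column; it is not part of the assessment questionnaire itself:

```python
from collections import Counter

def completeness(records, required_fields):
    """Fraction of records in which every required field is present and non-empty."""
    if not records:
        return 0.0
    complete = sum(
        1 for r in records
        if all(r.get(f) not in (None, "") for f in required_fields)
    )
    return complete / len(records)

def class_balance(labels):
    """Ratio of the rarest to the most frequent class; 1.0 means perfectly balanced."""
    counts = Counter(labels)
    return min(counts.values()) / max(counts.values())

# Hypothetical training records, not from the assessment document.
records = [
    {"age": 34, "income": 52000, "label": "approve"},
    {"age": 51, "income": None,  "label": "deny"},
    {"age": 29, "income": 61000, "label": "approve"},
    {"age": 62, "income": 48000, "label": "approve"},
]
print(completeness(records, ["age", "income"]))      # 0.75 — one record lacks income
print(class_balance([r["label"] for r in records]))  # ~0.33 — "deny" is underrepresented
```

What counts as acceptable completeness or balance is a judgment the assessment questions are meant to elicit; the code only measures, it does not decide.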

2.2. Model

This section revolves around ensuring that the AI model is trained to produce reliable outputs. It includes questions related to source transparency, fairness, explainability, robustness, potential risks, and the behavior of the model during training.
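As one hedged example of what a fairness question might probe, the sketch below computes the demographic parity difference: the gap in positive-prediction rates between groups defined by a sensitive attribute. The predictions and group labels are hypothetical, and this is only one of many fairness criteria:

```python
def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction rate across groups.

    `predictions` are 0/1 model outputs; `groups` gives each example's
    (hypothetical) sensitive-attribute value. 0.0 means identical rates.
    """
    rates = {}
    for g in set(groups):
        members = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(members) / len(members)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Hypothetical predictions for two groups of four examples each.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5 (group a: 0.75, group b: 0.25)
```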

3. Evaluation Phase

This phase ensures that the AI system's performance has been thoroughly tested and that it is considered sufficiently safe for market deployment. It consists of two stages:

3.1. Testing

This part includes questions regarding the testing strategies applied to the AI model and the model's performance under each strategy.
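One common testing strategy is evaluation on a held-out set against a release threshold. A minimal sketch, assuming a hypothetical accuracy threshold of 0.9 (the document itself does not fix any performance criteria):

```python
def accuracy(y_true, y_pred):
    """Fraction of held-out examples the model predicts correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def release_gate(y_true, y_pred, threshold=0.9):
    """Pass only if held-out accuracy meets the (hypothetical) release threshold."""
    return accuracy(y_true, y_pred) >= threshold

# Hypothetical held-out labels and model predictions.
y_true = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]
print(accuracy(y_true, y_pred))      # 0.9 — one of ten predictions is wrong
print(release_gate(y_true, y_pred))  # True
```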

3.2. Deployment Trial

This part assesses the technical process of deploying the AI system in a trial production environment and evaluates the results of this deployment.
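A trial deployment is sometimes run in "shadow mode", where the candidate system processes production traffic alongside the incumbent and their outputs are compared before the candidate is allowed to serve users. A minimal sketch with hypothetical outputs (the assessment does not prescribe this technique):

```python
def shadow_agreement(incumbent_outputs, candidate_outputs):
    """Fraction of trial requests on which candidate and incumbent agree."""
    pairs = list(zip(incumbent_outputs, candidate_outputs))
    return sum(a == b for a, b in pairs) / len(pairs)

# Hypothetical decisions logged during a trial run.
incumbent = ["ok", "ok", "flag", "ok", "flag"]
candidate = ["ok", "flag", "flag", "ok", "flag"]
print(shadow_agreement(incumbent, candidate))  # 0.8 — disagreement on one request
```

Low agreement does not by itself mean the candidate is worse; each disagreement would be reviewed as part of evaluating the trial results.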

4. Operation Phase

This phase includes questions aimed at ensuring the AI system continues to function as intended over time and that measures are in place to prevent performance degradation. It consists of two subphases:

4.1. Functionality Preservation

This subphase covers the activities needed to keep the system operating as intended, such as monitoring for performance degradation.
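One way to detect degradation before it affects users is to monitor for distribution drift in the system's inputs, for example with the Population Stability Index (PSI). A sketch with hypothetical baseline and live samples; the bin count and any alert threshold would be project-specific:

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between a baseline sample and a live sample.

    0.0 means identical binned distributions; larger values mean more drift.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def dist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Floor each proportion to avoid log(0) for empty bins.
        return [max(c / len(xs), 1e-6) for c in counts]

    p, q = dist(expected), dist(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical feature values: baseline at deployment vs. live traffic.
baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
live     = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
print(psi(baseline, live))  # 0.0 — identical samples show no drift
```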

4.2. Maintenance

This subphase ensures that regular updates are performed to preserve and improve the quality and relevance of the system's output.

5. Retirement Phase

The final phase concerns the proper decommissioning of the AI system. It includes the evaluation of retirement-related risks and the identification of appropriate mitigation strategies to ensure a secure and responsible withdrawal from use.