Maintenance Team

From Algorithmic Impact Assessment
Revision as of 06:27, 14 April 2025 by Vrettos

This initiative extends the research and infrastructure development efforts of the Distributed Systems Laboratory (DS Lab) at the National Technical University of Athens (NTUA), working in tandem with the Data & Cloud Research Group (DAC) at the University of Piraeus (UoP). The collaboration combines expertise in distributed systems, cloud computing, AI governance, and regulatory compliance.

A dedicated group of engineers, researchers, and students from both institutions is responsible for the maintenance and continued enhancement of the UNIFAI framework, supported by outside counsel with specific expertise in digital regulation, data protection, and algorithmic transparency. Together, they underpin and advance the framework, ensuring that it is rooted in contemporary academic research while remaining practically applicable within legal requirements and technical constraints.

The primary goal of the UNIFAI initiative is to support organizations, public sector bodies, and enterprises in producing reliable documentation and specifications for their AI systems. The framework improves documentation quality by guiding users through structured assessments covering ethical, legal, and technical factors, thereby advancing system accountability, supporting regulatory compliance, and fostering societal confidence.

In line with its open and collaborative spirit, the project adopts a permissive licensing model: all documentation and content are released under the Creative Commons Zero (CC0) license, while the technical implementation is distributed under the MIT License. This approach was intentionally selected to promote maximum reuse, interoperability, and dissemination. It removes legal and technical barriers, enabling stakeholders to freely integrate, adapt, and build upon the framework according to their specific needs—whether in research, policy, or operational deployment. Ultimately, these choices reflect a commitment to openness, transparency, and the democratization of responsible AI practices.