AI Trust & Safety Assurance Registry

The AI Trust & Safety Assurance Registry is a publicly accessible registry documenting the governance, risk, and compliance controls that organizations implement to make their AI systems safe, trustworthy, and compliant.

As a licensed distributor of ISO/IEC standards and the first software solution vendor to pilot ISO/IEC 42001 (the world's first AI management system standard), Fairly AI enables organizations to accelerate their adoption of standards and best practices and prepares them for AI compliance certifications and audits worldwide.

FAIRLY LEVEL ONE

Self-assessment completed for AI Trust & Safety Assurance

FAIRLY LEVEL TWO

Third-party attestation completed for AI Trust & Safety Assurance

FAIRLY LEVEL THREE

Third-party validation completed for AI Trust & Safety Assurance

ISO/IEC 42001

The world's first AI management system standard

ISO/IEC TR 24027

Technical report on bias in AI systems and AI-aided decision making

Featured Registry Listings

GuruLink is a Toronto-based employment agency that developed the PathPilot product, an innovative AI Career Companion.
FAIRLY LEVEL THREE: THIRD-PARTY VALIDATION


Suno Wellness is an AI-powered mental health companion that's revolutionizing therapy support.
FAIRLY LEVEL THREE: THIRD-PARTY VALIDATION

recruitRyte is a cutting-edge AI-driven recruitment platform designed to revolutionize the way companies source and hire talent.
FAIRLY LEVEL TWO: THIRD-PARTY ATTESTATION

AI Governance is the process of creating policies and controls to ensure organizational accountability for the risk and compliance of AI systems and models.

Organizations in our registry have adopted AI Governance frameworks that help them achieve their governance goals; a list of these frameworks is available in the Fairly Responsible AI Tracker.
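
To make the definition above concrete, here is a minimal sketch of how a registry entry might map controls to policies and accountable owners. It is illustrative only: the class names, control IDs, and fields are hypothetical assumptions, not a Fairly AI schema.

    from dataclasses import dataclass, field

    @dataclass
    class Control:
        """A single governance control and the evidence backing it."""
        control_id: str                    # hypothetical numbering, e.g. "GOV-001"
        policy: str                        # the policy this control enforces
        owner: str                         # accountable role or team
        evidence: list[str] = field(default_factory=list)

    @dataclass
    class AISystemRecord:
        """One registry entry: an AI system and its governance controls."""
        system_name: str
        risk_tier: str                     # e.g. "low" | "medium" | "high"
        controls: list[Control] = field(default_factory=list)

        def unowned_controls(self) -> list[Control]:
            """Controls with no accountable owner: an accountability gap."""
            return [c for c in self.controls if not c.owner]

    record = AISystemRecord(
        system_name="example-chatbot",
        risk_tier="medium",
        controls=[
            Control("GOV-001", "Model documentation policy", "AI Trust & Safety",
                    evidence=["model_card.pdf"]),
            Control("GOV-002", "Human oversight policy", ""),  # gap: no owner yet
        ],
    )
    print([c.control_id for c in record.unowned_controls()])   # ['GOV-002']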

AI Risk Management is the process of identifying, assessing, mitigating, and monitoring risks associated with the development, deployment, and use of AI systems and models.

Organizations in our registry have adopted the Three Lines of Defense Model Risk Management framework or the NIST AI Risk Management Framework as their AI Risk Management framework. Responsibilities are divided across the three lines as follows (a code sketch of this review sequence appears after the list):

  • 1st Line: The Development Team regularly assesses model performance and updates the risk status based on current data.
  • 2nd Line: The AI Trust & Safety team reviews and approves the risk status, ensuring that higher-risk models undergo additional checks using the Fairly AI Risk and Oversight platform.
  • 3rd Line: Internal Audit (in conjunction with the third-party attestation service from Fairly AI) evaluates the accuracy and appropriateness of the risk status assigned to each model and reports any inconsistencies.
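
As a rough illustration of that review sequence (a sketch only: the class and stage names are hypothetical, and this is not the Fairly AI platform's API), the three lines can be modeled as ordered sign-offs, where no line can approve before the lines ahead of it have signed off:

    from enum import Enum

    class ReviewStage(Enum):
        FIRST_LINE = "development team assessment"
        SECOND_LINE = "AI Trust & Safety approval"
        THIRD_LINE = "internal audit validation"

    class ModelRiskRecord:
        """Tracks a model's risk status as it moves through the three lines."""

        def __init__(self, model_name: str, risk_status: str):
            self.model_name = model_name
            self.risk_status = risk_status  # e.g. "low" | "medium" | "high"
            self.sign_offs = {stage: False for stage in ReviewStage}

        def sign_off(self, stage: ReviewStage) -> None:
            # Enforce the review order: each line builds on the one before it.
            order = list(ReviewStage)
            earlier = order[: order.index(stage)]
            if not all(self.sign_offs[s] for s in earlier):
                raise ValueError(f"{stage.name} requires earlier sign-offs first")
            self.sign_offs[stage] = True

    record = ModelRiskRecord("resume-screener-v2", risk_status="high")
    record.sign_off(ReviewStage.FIRST_LINE)    # 1st line: development team
    record.sign_off(ReviewStage.SECOND_LINE)   # 2nd line: extra checks for high risk
    record.sign_off(ReviewStage.THIRD_LINE)    # 3rd line: audit validates the status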

AI Compliance ensures that AI systems and their development, deployment, and usage adhere to relevant legal, regulatory, ethical, and organizational standards and policies.

Organizations in our registry have adopted, at a minimum, the ISO/IEC 42001 standard:

  • ISO/IEC 42001 is the world’s first AI management system standard. It specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems (see the sketch below).
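
As one way to picture what establishing and continually improving an AIMS involves, the sketch below tracks conformance against the standard's harmonized-structure clauses (4 through 10). The clause names follow the common ISO management system structure; the readiness helper and status values are illustrative assumptions, not part of ISO/IEC 42001.

    # Clause names from the ISO harmonized structure (clauses 4-10).
    AIMS_CLAUSES = [
        "Context of the organization",   # clause 4
        "Leadership",                    # clause 5
        "Planning",                      # clause 6
        "Support",                       # clause 7
        "Operation",                     # clause 8
        "Performance evaluation",        # clause 9
        "Improvement",                   # clause 10
    ]

    def readiness(status: dict[str, bool]) -> float:
        """Fraction of AIMS clauses with evidence of conformance."""
        return sum(status.get(c, False) for c in AIMS_CLAUSES) / len(AIMS_CLAUSES)

    status = {clause: True for clause in AIMS_CLAUSES}
    status["Improvement"] = False        # e.g. no continual-improvement records yet
    print(f"AIMS readiness: {readiness(status):.0%}")   # AIMS readiness: 86%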

They may also have adopted additional compliance standards such as:

  • ISO/IEC TR 24027: This technical report addresses bias in relation to AI systems, especially with regard to AI-aided decision making. Measurement techniques and methods for assessing bias are described, with the aim of addressing and treating bias-related vulnerabilities. All AI system lifecycle phases are in scope, including but not limited to data collection, training, continual learning, design, testing, evaluation, and use.

They may also be subject to regulatory compliance requirements such as the ones below (view more in our Global AI Regulatory Map):

  • EU AI Act: The Artificial Intelligence Act is a European Union regulation concerning artificial intelligence. It establishes a common regulatory and legal framework for AI within the European Union. It came into force on 1 August 2024, with provisions coming into operation gradually over the following 6 to 36 months.
  • New York City Local Law 144: NYC Local Law 144 prohibits NYC employers and employment agencies from using an automated employment decision tool (AEDT) unless the tool has been subject to a bias audit within one year of the use of the tool, information about the bias audit is publicly available, and certain notices have been provided to employees or job candidates (a worked example of the audit's core impact-ratio metric follows this list).
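
For selection-type AEDTs, the core metric in an LL 144 bias audit is the impact ratio: each category's selection rate divided by the selection rate of the most-selected category. Below is a minimal sketch with hypothetical numbers (category names and counts are invented for illustration); this same kind of disparity measurement is among the techniques ISO/IEC TR 24027 describes.

    def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
        """Selection rate per category: selected / total assessed."""
        return {cat: sel / total for cat, (sel, total) in outcomes.items()}

    def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
        """Impact ratio per category: its selection rate divided by the
        highest selection rate across categories."""
        rates = selection_rates(outcomes)
        best = max(rates.values())
        return {cat: rate / best for cat, rate in rates.items()}

    # Hypothetical audit data: category -> (candidates selected, candidates assessed)
    data = {"category_a": (40, 100), "category_b": (25, 100)}
    print(impact_ratios(data))   # {'category_a': 1.0, 'category_b': 0.625}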

AI Incident Reporting

If you have any questions or concerns about AI systems or models listed in the registry, please contact the third-party AI Incident Reporting Center powered by Fairly AI.
