
Algorithmic Fairness and Explainability Assessment of Anti-Money Laundering Detection AI models using Fairly AI’s Compliance-in-a-Box Simple Start Plan

September 6, 2023

Executive Summary

Use Case:

Algorithmic Fairness and Explainability Assessment of Anti-Money Laundering Detection AI models using Fairly AI’s Compliance-in-a-Box Simple Start Plan

Purpose:

To assess and monitor the fairness of AI model outputs against compliance policies.

Details:

  • Conduct data quality and data drift analyses on model training data.
  • Run fairness tests to assess algorithmic bias in selected models.
  • Use Fairly’s explainability testing suite to ensure traceability of model outputs.
  • Provide a comprehensive report detailing identified issues.
  • Set up policies and controls to continuously monitor and enhance model fairness.

Outcome:

Identify data issues, biased outputs, and fairness concerns, and assess explainability, in order to mitigate the regulatory, reputational, and financial risks posed by financial crime models.

Overview

Anti-Money Laundering (AML) model vendors use AI models to screen financial transactions for fraudulent and anomalous activities. The question then remains: who tests these AI models for performance, fairness, explainability, and data quality? Fairly does.

A major culprit in model anomalies is poor-quality training data. Bad data can lead to false positives in suspicious activity reports, resulting in reputational and financial harm to clients. Because AML is often seen as a cost center, demonstrating that a model testing suite can save an organization time and money by enhancing performance is key to the wide adoption of AI-based AML solutions.

To proactively mitigate these risks, Fairly's platform tests to ensure:

  • Data quality issues are caught early.
  • Model results are explainable to mitigate regulatory risk.
  • Model inferences are fair and unbiased.

Fairly’s platform compiles the results of these tests into a comprehensive report detailing identified issues, which aids in setting up policies and controls to monitor and enhance model fairness and mitigate bias. As a result, Fairly saves both time and money by providing thorough testing, data quality analysis, and well-integrated policies for model fairness and explainability.

Key Performance Indicators

1. Data Quality Analysis: Fairly provides two types of data validation that allow the model development pipeline to be fully documented and potential sources of bias to be identified before any models are trained. First, the headers and data types are verified against a configuration file generated from a reference dataset. Second, Fairly scans datasets for protected features and applies bias identification techniques such as disparate impact measurement to assess the risk of bias (see the sketch below).
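
Fairly’s internal implementation is not public, but a minimal sketch of both validation steps might look like the following (the function names, configuration format, and the 0.8 threshold from the common "four-fifths rule" are illustrative assumptions, not Fairly’s API):

    import pandas as pd

    def schema_matches(df: pd.DataFrame, expected: dict) -> bool:
        """First check: verify column names and dtypes against a reference config."""
        return {col: str(dtype) for col, dtype in df.dtypes.items()} == expected

    def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str,
                         protected, reference) -> float:
        """Second check: ratio of favorable-outcome rates, protected vs. reference group."""
        rate = lambda g: df.loc[df[group_col] == g, outcome_col].mean()
        return rate(protected) / rate(reference)

    # A ratio below 0.8 (the "four-fifths rule") is a common flag for potential bias, e.g.:
    # disparate_impact(df, "customer_region", "not_flagged", "region_B", "region_A") < 0.8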

2. Fairness Metrics: Fairly’s fairness metrics enable a series of bias and fairness tests that conduct advanced statistical analyses on population groups in your data. This KPI checks whether all groups have the same true positive, true negative, false positive, and false negative rates in order to screen for potential bias (see the sketch below). Fairly uses industry standards such as ISO/IEC TR 24027:2021 (Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making) to govern how it conducts quantitative fairness analysis.
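
The specific tests Fairly runs are not public; as an illustration of the group-rate comparison described above, the four confusion-matrix rates can be computed per group and their gaps inspected (all names here are illustrative):

    import numpy as np

    def group_rates(y_true, y_pred, groups):
        """Per-group TPR/FPR/FNR/TNR; large gaps between groups suggest potential bias."""
        out = {}
        for g in np.unique(groups):
            m = groups == g
            t, p = y_true[m], y_pred[m]
            tp = int(np.sum((t == 1) & (p == 1)))
            fn = int(np.sum((t == 1) & (p == 0)))
            fp = int(np.sum((t == 0) & (p == 1)))
            tn = int(np.sum((t == 0) & (p == 0)))
            out[g] = {"TPR": tp / max(tp + fn, 1), "FPR": fp / max(fp + tn, 1),
                      "FNR": fn / max(tp + fn, 1), "TNR": tn / max(fp + tn, 1)}
        return out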

3. Explainability Testing: Fairly’s explainability testing employs methodologies that dissect the influence of each contributing factor, such as transaction amount or originating country, on the final decision. Additionally, it utilizes approaches that generate simplified, localized models to approximate the decision-making process for individual flagged cases. An improvement in this KPI indicates a more transparent and accountable decision-making process in your Anti-Money Laundering operations.
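
The techniques described map onto widely used methods: dissecting per-factor influence resembles feature attribution (e.g., SHAP values), and simplified localized models resemble LIME-style surrogates. Below is a minimal sketch of the latter, assuming a numeric feature vector and a black-box scoring function; it is not Fairly’s implementation, and every name is illustrative:

    import numpy as np
    from sklearn.linear_model import Ridge

    def local_surrogate(predict_fn, x, feature_names, n_samples=500, scale=0.1, seed=0):
        """Fit a proximity-weighted linear model around one flagged case."""
        rng = np.random.default_rng(seed)
        # Perturb the case with Gaussian noise and query the black-box AML model.
        X = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
        y = predict_fn(X)  # e.g., probability a transaction is suspicious
        # Weight perturbed samples by proximity to the original case.
        w = np.exp(-np.linalg.norm(X - x, axis=1) ** 2)
        surrogate = Ridge(alpha=1.0).fit(X, y, sample_weight=w)
        # Coefficients approximate each factor's local influence on the decision.
        return dict(zip(feature_names, surrogate.coef_))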

Process

1. Organizational Analysis: This is the initial step where an expert team from Fairly is tasked with analyzing the organization that develops, deploys, or operates an AI model to ensure the institutional infrastructure is in place to operate the model responsibly. This is then followed by an analysis of individual controls that make up Fairly’s fairness and explainability policies.

2. Analysis using Fairly’s Platform: The platform analyzes the outputs generated by the model for selected transactions, checking them against established policies and tracking any violations, bias, or explainability anomalies.
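
Fairly’s policy format is proprietary; conceptually, each control pairs a monitored metric with an acceptable range, and model outputs are checked against it. A hypothetical sketch (metric names and limits are invented for illustration):

    # Hypothetical policy: each control is a metric name, a comparison, and a limit.
    POLICY = {
        "disparate_impact": (">=", 0.8),
        "fpr_gap_across_groups": ("<=", 0.05),
    }

    def check_policy(metrics: dict) -> list:
        """Return the names of controls that the current model outputs violate."""
        ops = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b}
        return [name for name, (op, limit) in POLICY.items()
                if name in metrics and not ops[op](metrics[name], limit)]

    # check_policy({"disparate_impact": 0.72, "fpr_gap_across_groups": 0.03})
    # -> ["disparate_impact"]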

3. Reporting and Policy Setup: After the analysis, a comprehensive report is prepared detailing identified bias incidents, policy violations, and anomalous explanations. This report is used as a base to set up new policies and controls, or to refine existing ones, with the aim of continuously monitoring and enhancing model fairness and transparency.

4. Continuous Monitoring and Improvement: With the implemented policies and controls, continuous monitoring is performed on the model’s outputs to identify any instances of bias or policy violations. Regular analysis is performed to ensure the ongoing improvement of model fairness.
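
One common way to implement such monitoring (not necessarily Fairly’s method) is the population stability index (PSI), which measures drift between the training distribution and live model inputs or scores; values above roughly 0.2 are often treated as significant drift. A minimal sketch, assuming a continuous feature with distinct quantile edges:

    import numpy as np

    def psi(expected, actual, bins=10):
        """Population Stability Index between a reference sample and live data."""
        # Bin edges come from quantiles of the reference (training) sample.
        edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
        e, _ = np.histogram(expected, bins=edges)
        a, _ = np.histogram(actual, bins=edges)
        e = np.clip(e / e.sum(), 1e-6, None)  # avoid log(0) for empty bins
        a = np.clip(a / a.sum(), 1e-6, None)
        return float(np.sum((a - e) * np.log(a / e)))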

Outcome

In this use case, the organization aims to identify and reduce bias while enhancing explainability in a model’s outputs, thereby preventing operational risks and reputational harm. The continuous monitoring and improvement process ensures that the model evolves over time to produce outputs that are fairer, less biased, and compliant with the organization's policies.

Target Users

1. AI and ML Teams: This includes data scientists, machine learning engineers, and AI practitioners who are involved in the design, development, and deployment of AI models. They will use the platform to assess, monitor, and mitigate biases in AI models.

2. Product Development Teams: Teams responsible for developing and refining products (that may incorporate AI models) will use the platform to ensure that their offerings align with ethical standards and policy guidelines.

3. Compliance and Governance Teams: These teams are responsible for ensuring that all processes and outputs are in compliance with relevant policies, standards, and regulations. They will use the platform to verify and document AI model compliance.

4. Risk Management Teams: These teams work on identifying, assessing, and mitigating potential risks, including reputational and operational risks arising from AI models. They will use the platform to ensure risk reduction related to algorithmic biases.

Beneficiaries

1. Financial Institutions: As the buyers of AML technology, financial institutions benefit from reduced risk, improved compliance, and enhanced reputation of the AI-based products they purchase.

2. Model Vendors: Thoroughly tested and vetted AML-detection models inspire confidence in the model vendors who sell them to financial institutions. A proactive approach to AI compliance reduces legal and operational risks.

3. Customers: As the end users of financial services, customers benefit from the reduction of bias that could otherwise cause them to lose access to their assets and experience financial downtime.

4. Society at large: The wider public benefits from the responsible use of AI, which considers individual rights and promotes fairness and non-discrimination.

Conclusion

By following Fairly AI’s Compliance-in-a-Box Simple Start Plan, these users and beneficiaries can ensure that AI and ML deployments are handled in a responsible and ethical manner, thereby mitigating potential harm caused by bias, increasing transparency, and improving overall trust in AI.

