
AI TRiSM: AI Trust, Risk & Security Management

May 21, 2022

The world of Artificial Intelligence is growing quickly as more people realize how powerful a tool it can be. As the technology expands, new professions, use cases, and businesses emerge alongside it. Gartner Inc., a well-respected technology research and consulting company, recently grouped emerging AI-related technologies under the banner of "AI TRiSM" in an attempt to provide a better understanding of the emerging AI ecosystem.

AI TRiSM is shorthand for AI (T)rust, (Ri)sk, & (S)ecurity (M)anagement. In the words of Gartner, AI TRiSM “ensures AI model governance, trustworthiness, fairness, reliability, efficacy, security and data protection. This includes solutions and techniques for model interpretability and explainability, AI data protection, model operations, and adversarial attack resistance.”

This concept is the very reason FAIRLY came into being, and it is an idea we consider to be of the utmost importance. We see AI as the future of technology, and AI TRiSM as essential to helping us reach that future sooner.

The importance of AI TRiSM

AI is a powerful, versatile tool used to solve countless problems in the world today. The recommended songs on Spotify? AI. The recommended routes on Google Maps? AI. Fraud analysis, billions in traded securities, self-driving cars, accepting or rejecting loan and job applications? You guessed it, AI. While clearly powerful, we must recognize that the benefit a tool provides when it succeeds is often matched by the damage it causes when it fails. In the case of Artificial Intelligence, failure could mean millions in reputational, legal, or financial losses.

This is why AI TRiSM is important – to prevent such errors.

How FAIRLY steps into the role of AI TRiSM

FAIRLY specializes in model risk management, governance, and compliance for artificial intelligence. Organizations that use our platform will find that their AI models are better organized, their team members are better coordinated, and their potential risk is minimized. We achieve AI TRiSM through a number of strategies:

  1. Guided documentation: As AI grows in complexity, so do external pressures and expectations. Internal risk policies and external legal guidelines provide a long list of variables that must be managed and tested against, leading to long and tedious documentation processes. This step is crucial but far too difficult: information must travel through long chains of staff, and somewhere along the line it is often found to be missing or incomplete, forcing the cycle to restart from the beginning. With such vast quantities of data it is no surprise that things go wrong, but these errors can be minimized. FAIRLY offers guided documentation through guided checklists, document templates, and an automated report builder that pulls test results and formats them correctly within the documentation. Artifacts from the codebase are analyzed, and a flag is raised if any of them are missing from the documentation (see the first sketch after this list). By making report building consistent and intuitive, more time is freed for developers to develop and strengthen the model.
  2. Automated risk and bias checks: Bias occurs when patterns are found where they should not be, often due to an insufficient data set. When this happens, the AI model may begin making decisions based on unwanted parameters: name length, race, or gender, to name a few. When decisions are made on arbitrary parameters such as these, models become prone to error. Worse, they may become prone to discrimination. This is why it is important to monitor AI bias and risk: to catch these errors before they are too deeply ingrained in a model's behavior. FAIRLY offers both behavior and bias checks within our platform that can be run with the click of a button, making risk analysis easier than ever before (see the second sketch after this list). In addition, these features hook into the automated report builder, allowing users to upload results through the API or directly into the documentation.
  3. Transparency: One big issue today is a lack of trust in AI models, stemming from a lack of understanding. Many consumers are uncomfortable interacting with machines rather than the people they are used to dealing with, and the issue is amplified when the decision-making inside the black box is difficult or impossible to explain, leaving consumers without answers or comfort. By providing audit and comment trails, our platform surfaces a timeline of the discussions and decisions centered around a model, allowing a thorough and clear understanding that knowledgeable staff can then pass on to interested consumers.
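
To make the artifact check described in point 1 a little more concrete, here is a minimal sketch of what flagging missing documentation artifacts could look like. The artifact names and file path below are hypothetical illustrations we made up for this post, not FAIRLY's actual implementation.

    # Illustrative sketch only: a naive check that every artifact a model's
    # codebase is expected to produce is at least mentioned in its documentation.
    # Artifact names and the documentation path are hypothetical.
    from pathlib import Path

    REQUIRED_ARTIFACTS = [
        "training_data_summary",
        "validation_results",
        "bias_test_report",
        "model_assumptions",
    ]

    def find_missing_artifacts(doc_path: str, required=REQUIRED_ARTIFACTS) -> list[str]:
        """Return the artifacts that are never mentioned in the documentation file."""
        doc_text = Path(doc_path).read_text(encoding="utf-8").lower()
        return [name for name in required if name.lower() not in doc_text]

    if __name__ == "__main__":
        missing = find_missing_artifacts("model_documentation.md")
        if missing:
            print("Documentation is missing references to:", ", ".join(missing))
        else:
            print("All required artifacts are referenced in the documentation.")

A real report builder does far more than string matching, of course; the point is simply that completeness checks can be automated instead of left to manual review.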
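
And to illustrate the kind of bias test described in point 2, below is a simple demographic-parity check on toy loan-approval data. The column names, threshold, and data are invented for the example; real bias monitoring involves many more tests and context-specific thresholds.

    # Illustrative sketch only: a minimal demographic-parity check, one of many
    # possible bias tests. Data, columns, and threshold are hypothetical.
    import pandas as pd

    def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
        """Largest difference in positive-outcome rate between any two groups."""
        rates = df.groupby(group_col)[outcome_col].mean()
        return float(rates.max() - rates.min())

    # Toy data: loan decisions (1 = approved) broken down by a protected attribute.
    decisions = pd.DataFrame({
        "group": ["A", "A", "A", "B", "B", "B"],
        "approved": [1, 1, 0, 1, 0, 0],
    })

    gap = demographic_parity_gap(decisions, "group", "approved")
    print(f"Approval-rate gap between groups: {gap:.2f}")
    if gap > 0.10:  # example threshold; real policies vary by use case and jurisdiction
        print("Potential disparity flagged for review.")

Checks like this are cheap to run on every model version, which is exactly why wiring them into an automated pipeline pays off.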

See our AI TRiSM diagram.

About FAIRLY

FAIRLY’s mission is to support the broad use of fair and responsible AI by helping organizations accelerate safer and better models to market. We offer an award-winning AI Governance, Risk and Compliance SaaS solution for automating Model Risk Management. Our enterprise-scale, API-driven productivity and risk management tools feature automated and transparent reporting, bias detection, and continuous explainable model risk monitoring, streamlining AI development, validation, audit, and documentation processes across the three lines of defense for financial institutions around the world. Visit us at https://www.fairly.ai or follow us on Twitter and LinkedIn.


