AI Framework Tracker

December 11, 2024

As AI technology matures, the number of frameworks is likely to grow. This page collects frameworks and international standards that address AI directly or in closely related areas, with the aim of making it easier for you to choose the frameworks that fit your particular use case.

Want to learn more about AI regulation in a particular jurisdiction? Take a look at our free Global AI Regulations Map.

One question you might have is how these frameworks fit into the work Fairly AI does. They are the kinds of documents that provide the foundation from which we build policies. Each policy is made up of individual controls, which act as questions about a particular issue in AI or a connected area. When you use our system, you choose an answer to each control and provide evidence to support it. We then categorize our controls according to the AI lifecycle.

This means that for a given area, such as development or validation, we can see where gaps exist across a number of different policies. That in turn tells us where the source documents behind those policies (e.g. regulations, standards, and frameworks) have gaps. Because we can detect those gaps, we are better equipped to help you choose a set of policies that fits your use case.

We update the list below as we come across new frameworks and standards in the AI space. You can make suggestions on our GitHub, and please don't forget to give us a star if you find our work useful!

General-use Responsible AI and Risk Management Frameworks

Concept-based Frameworks

Accountability
Benchmarking and Performance
Agentic Systems
Content Provenance
Cybersecurity and Safety
Data
Definitions, Terminology, and Classification
Design, UI, and UX
Environmental Impact
Explainability
Fairness and Bias
High Impact Risk
Human-Computer Interaction
Impact Assessments
Licensing
Robustness
System Management
Transparency
Trustworthiness
Validation

Industry-based Frameworks

Cognitive Technology
Education
Energy
Healthcare and Pharmaceuticals
Intelligence
Legal Services
Media
Public sector/Government

Role-based Frameworks

Investors
Boards
Startup Founders
Leadership and Executives
Information Technology
Procurement
Human Resources
Marketing & Advertising

Looking to operationalize a responsible AI framework? Work with Fairly to build your policy control center!

Changelog

  • November 2024: Databricks AI Security Framework (DASF), LCO Human Rights AI Impact Assessment
  • October 2024: RIL's "Responsible AI Framework v2", Australia's "Policy for the Responsible use of AI in Government v.1.1", PAI's "CPAIS Human-AI Collaboration Framework", PAI's "Guidelines for AI and Shared Prosperity", PAI's "Making AI Inclusive - Four Guiding Principles for Ethical Engagement", PAI's "Framework for Promoting Workforce Well-being in the AI-Integrated Workplace", China's "AI Safety and Governance Framework", ISO's "ISO/IEC 38507:2022 Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations", German-French "recommendations for the use of AI programming assistants", Japan AI Safety Institute's "Guide to Red Teaming Methodology on AI Safety", UK Information Commissioner's Office "Artificial Intelligence Toolkit", Ofcom's "Red Teaming for GenAI Harms - Revealing the Risks and Rewards for Online Safety", CSA Singapore's "Guidelines and Companion Guide on Securing AI Systems", World Economic Forum's "Advancing Data Equity: An Action-Oriented Framework"
  • September 2024: Samsung AI Ethics Principles
  • August 2024: ANA "Ethics Code of Marketing Best Practices - Section 6: Digital Innovation", OpenAI's "Practices for Governing Agentic AI Systems", Chan et al.'s "Visibility into AI Agents", UK Government "AI Cybersecurity Code of Practice", UNESCO [Draft] "Guidelines for the Use of AI Systems in Courts and Tribunals"
  • July 2024: PAI 'Data Enrichment Sourcing Guidelines', Bloomberg 'Best Practices for Managing Data Annotation Projects', Singapore PDPC - 'Privacy Enhancing Technology (PET): Proposed Guide on Synthetic Data Generation'
  • April 2024: The Coalition for Content Provenance and Authenticity (C2PA), ISO/IEC DIS 42005, IEEE P3157, IEEE P7008, IEEE P7009, IEEE P7014, IEEE P2840, ETSI GR SAI 006 v1.1.1, IEEE P3129, IEEE P3123, IEEE P3128, IEEE P2976, IEEE P2863, IEEE P2802, IEEE P2247.4, ISO/IEC TR 24029-1:2021, ISO/IEC 5338:2023, ISO/IEC TS 25058:2024, ETSI GR SAI 001 v1.1.1, ISO/IEC 25012:2008, ISO/IEC 25024:2015, ISO 8000-2:2022, ANSI/CTA 2089, BS 30440:2023, ISO/IEC DTS 12791, ISO/IEC DIS 12792, ISO/IEC AWI TS 29119-11, ISO/IEC TR 24028:2020, ISO/IEC NP TS 12831, IEEE 7001-2021, ANSI/CTA 2096, XP Z77-101, NWIP TR N256, CAN/CIOSC 101:2019 (R2021), IEC SRD 63416:2023 ED1, ITU-T H.862.5 (06/2021), ISO/IEC TS 38501:2015, ISO/IEC TS 27022:2021
  • March 2024: Anekanta Responsible AI Governance Framework for Boards, Design Principles for Generative AI Applications
  • February 2024: Australia's Responsible AI Framework, OpenAI's Preparedness Framework (Beta), UNESCO's Guidance for Generative AI in Education and Research, Radical Ventures' Responsible AI for Startups (RAIS) framework, Responsible Innovation Labs Responsible AI Policy.

DISCLAIMER. The insights and information contained in this article are intended for informational purposes only, and we at Fairly AI do not intend for them to constitute legal advice, opinions, or recommendations that you, the reader, or others can rely on. For any legal issues linked to the information and insights contained in this article, please contact a lawyer/attorney to obtain legal advice.

Fairly provides links to other websites beyond our control, and we are not responsible for and do not give any warranties or make any representations regarding such websites. We are not responsible or liable in relation to the content or security of these sites (other than to the extent required by law).

Fairly makes reasonable endeavors to ensure the reliability of the information presented in the app and on the website, but makes no warranties or representations whatsoever as to its accuracy or reliability. Similarly, due to the nature of the internet, we do not guarantee that Fairly will be available on an uninterrupted and error-free basis.

Want to get started with safe & compliant AI adoption?

Schedule a call with one of our experts to see how Fairly AI can help