AI Trust & Safety Assurance Registry Listing for GuruLink
Third-party validation completed for AI Trust & Safety Assurance
The world's first AI management system standard
AI Governance is the process of creating policies and controls to hold an organization accountable for the risk and compliance of its AI systems and models.
GuruLink has adopted the NIST AI Risk Management Framework (RMF) 1.0, which includes actions, references, and related guidance to achieve the outcomes of the four functions in the AI RMF: Govern, Map, Measure, and Manage.
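For illustration only, the sketch below shows one way an organization might track outcomes against the four AI RMF functions in code. The class names, control identifier, and status field are assumptions made for this example; they are not part of the NIST AI RMF text and do not describe GuruLink's actual implementation.

```python
from dataclasses import dataclass, field
from enum import Enum


class RMFFunction(Enum):
    """The four core functions of the NIST AI RMF 1.0."""
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"


@dataclass
class RMFOutcome:
    """A single outcome tracked under one AI RMF function (illustrative fields)."""
    function: RMFFunction
    identifier: str          # hypothetical internal control ID, e.g. "MAP-1"
    description: str
    achieved: bool = False


@dataclass
class RMFRegister:
    """Minimal register of outcomes grouped by AI RMF function."""
    outcomes: list[RMFOutcome] = field(default_factory=list)

    def add(self, outcome: RMFOutcome) -> None:
        self.outcomes.append(outcome)

    def progress(self, function: RMFFunction) -> str:
        """Report achieved/total outcomes for one function."""
        scoped = [o for o in self.outcomes if o.function is function]
        done = sum(o.achieved for o in scoped)
        return f"{function.value}: {done}/{len(scoped)} outcomes achieved"


if __name__ == "__main__":
    register = RMFRegister()
    # Hypothetical example outcome under the Map function.
    register.add(RMFOutcome(
        function=RMFFunction.MAP,
        identifier="MAP-1",
        description="Context and intended uses of the AI system are documented",
        achieved=True,
    ))
    print(register.progress(RMFFunction.MAP))
```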
AI Risk Management is the process of identifying, assessing, mitigating, and monitoring risks associated with the development, deployment, and use of AI systems and models.
GuruLink has adopted the NIST AI Risk Management Framework 1.0 as its AI risk management framework:
AI Compliance ensures that AI systems and their development, deployment, and usage adhere to relevant legal, regulatory, ethical, and organizational standards and policies.
GuruLink has adopted the ISO/IEC 42001 standard:
If you have any questions or concerns about GuruLink's AI systems or models, please contact the third-party AI Incident Reporting Center powered by Fairly AI at incidents@fairly.ai.
GuruLink is a Toronto-based employment agency that developed the PathPilot product, an innovative AI Career Companion. This AI-powered platform provides personalized career guidance and customized learning journeys, handling sensitive user data with a strong commitment to fairness and transparency, core principles of effective AI governance.
The Fairly AI Trust & Safety Assurance Registry is a publicly accessible registry that documents the governance, risk, and compliance controls implemented by organizations to make their AI systems safe, trustworthy, and compliant.