Balancing Act: Gaining a Competitive Edge Without Compromising on Responsible Data Sourcing

November 21, 2023
By Matthew McMullen, Senior Vice President and Head of Corporate Development at Cogito Tech, and Hassan Patel, Director of Global AI Policy Compliance Engineering at Fairly AI and a practicing attorney specializing in technology law.

Nearly a year after the release of ChatGPT, the Biden-Harris administration marked a potentially historic moment in technology policy by unveiling an executive order on “Safe, Secure, and Trustworthy Artificial Intelligence.” The order spans a broad spectrum, addressing AI safety, bias, data privacy, and the critical importance of content curation during development. Its attention to these nuanced aspects of AI development represents a significant commitment by the administration.

Several actions within the AI Executive Order impact the AI data and model development space:

  • Developers of foundation models must disclose safety test outcomes and essential data to the US government, emphasizing transparency in the model's training data and biases.
  • The Department of Commerce is tasked with developing guidelines for content authentication and watermarking, ensuring that AI-generated content is clearly labeled and curated.
  • The order calls for expanding privacy protections, emphasizing the need for transparent and ethical data collection and usage, a vital aspect of AI training.

Deep Rastogi, Product Manager at Business Wire, emphasizes, “For product managers, transparency in AI decision-making is not only a moral duty, but also a strategic advantage. By being transparent about how and why our AI systems make decisions, we can build trust with our customers, partners and regulators, and demonstrate our commitment to ethical and responsible AI. Transparency not only enables us to learn from feedback; it also enables us to anticipate whether actual results will align with desired outcomes of our tools. Transparency is not a trade-off, but a win-win situation for everyone.”

At Cogito Tech, we recognize the significance of quality, unbiased, and ethically sourced data for training responsible AI. Our DataSum certification gives regulators and developers alike insight into the model training phase. The DataSum label allows stakeholders to see the best practices embedded in a dataset, verify adherence to fair-use policy, and trace the data throughout the AI model's development.

For instance, the order's stipulation requiring companies to notify the federal government during the training phase of potentially threatening foundation models prompts several questions. Who establishes what constitutes a "serious risk"? How is it defined? Will there be public access to safety test outcomes? Furthermore, as AI technologies advance, what are the best practices for ensuring ethical integrity in datasets? How do we safeguard the well-being of the workforce involved in data collection and processing? What measures guarantee the transparency and accuracy of the technological tools used in data management? How can we assure consistent quality in data labeling and processing? And finally, what strategies are endorsed for efficient data processing without compromising ethical and quality standards? The challenge lies in ensuring companies adhere to these stipulations rigorously, akin to the stringent FDA approval processes, which prioritize demonstrable benefits over potential risks.

Addressing the plethora of questions raised by the order's stipulations, from defining 'serious risk' to ensuring ethical integrity in datasets, requires a robust infrastructure for AI safety. To bridge this gap, Cogito Tech has partnered with Fairly AI to operationalize these protocols. Fairly AI provides a platform where companies can import and build policies that are evidenced by documentation or automated testing, aligning with the Executive Order's emphasis on repeatable testing. Such a platform lets companies execute tests in a robust, repeatable manner. As companies instill a culture of benchmarking and testing their AI systems, they can also meet another directive of the Executive Order: establishing benchmarks for cybersecurity and biosecurity. Foundational to all of this, however, is a testing and policy-compliance infrastructure that can support red-teaming efforts.
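To make the idea of "policies evidenced by documentation or automated testing" concrete, here is a minimal, illustrative sketch in Python. It is not Fairly AI's actual API; the class names, evidence fields, and checks are hypothetical, and it only shows the pattern of encoding a policy as repeatable checks that always run against the same evidence and produce an auditable result.

```python
# Illustrative sketch only (not Fairly AI's API): a policy expressed as a set
# of automated, repeatable checks over documented evidence about a model.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class PolicyCheck:
    name: str
    test: Callable[[dict], bool]  # takes an evidence record, returns pass/fail

@dataclass
class Policy:
    title: str
    checks: list[PolicyCheck] = field(default_factory=list)

    def evaluate(self, evidence: dict) -> dict:
        """Run every check against the same evidence so results are repeatable."""
        results = {c.name: c.test(evidence) for c in self.checks}
        return {"policy": self.title, "results": results,
                "passed": all(results.values())}

# Hypothetical policy reflecting the order's transparency themes.
policy = Policy("Foundation model disclosure", [
    PolicyCheck("safety tests documented",
                lambda e: bool(e.get("safety_test_report"))),
    PolicyCheck("training data sources listed",
                lambda e: len(e.get("data_sources", [])) > 0),
])

report = policy.evaluate({
    "safety_test_report": "redteam-2023-q4.pdf",
    "data_sources": ["licensed-corpus-a"],
})
print(report["passed"])  # True for this evidence record
```

Because each check is a pure function of the evidence, the same inputs always yield the same verdict, which is what makes the testing repeatable and suitable for audit trails.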

While the executive order sheds light on the vast complexities and challenges of the AI ecosystem, actionable guidance is essential to navigate this evolving landscape. In alignment with the globally recognized principles of transparency, safety, and ethical integrity, the following strategic steps are inspired by Cogito Tech’s DataSum certifications and Fairly AI’s policy library and AI compliance agent, Asenion. These can serve as a roadmap for the industry:

1. Ethical AI Algorithms: Implement AI algorithms infused with ethical considerations to guarantee that data-driven decisions uphold safety, privacy, and regulatory compliance. Making ethical AI a foundational pillar will shape our future technological trajectories.

2. Robust Data Security Measures: Amplify data security measures to protect sensitive information. Employ encryption, strict access controls, and frequent security audits as mandatory components to ensure data integrity and protect user privacy.

3. Continuous Training and Oversight: Offer exhaustive training on AI systems and consistent oversight by competent professionals. Regular updates and validations of AI algorithms are paramount to mitigate risks and optimize system performance.

4. Transparent Decision-Making: Cultivate transparency in AI-powered decision-making processes. This not only builds trust among users, practitioners, and stakeholders but also promotes a culture of clarity and accountability, vital in today's rapidly evolving tech landscape.

5. Comprehensive Content Authentication: Implement rigorous protocols for verifying and watermarking AI-generated content. By ensuring traceability and authenticity of content, we can combat misinformation, safeguard intellectual property, and further build public trust in AI outputs.
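The content-authentication step above can be sketched in a few lines of Python. This is a deliberately minimal illustration, not a production scheme: real content-credential systems (C2PA-style manifests, cryptographic signatures) are far richer, and the field names here are hypothetical. It shows only the core idea of binding a content hash to provenance metadata so that any later edit is detectable.

```python
# Minimal illustrative sketch: bind AI-generated content to a provenance
# record via a hash, so downstream consumers can detect tampering.
# Real content-authentication schemes add signatures and richer metadata.
import hashlib

def make_provenance_record(content: bytes, generator: str, model_version: str) -> dict:
    """Create a record tying a content hash to labeling metadata."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,          # hypothetical field
        "model_version": model_version,  # hypothetical field
        "ai_generated": True,
    }

def verify(content: bytes, record: dict) -> bool:
    """Recompute the hash; any edit to the content breaks the match."""
    return hashlib.sha256(content).hexdigest() == record["sha256"]

article = b"Model-written summary..."
record = make_provenance_record(article, "example-llm", "v1")
print(verify(article, record))         # True: content untouched
print(verify(article + b"!", record))  # False: content was modified
```

A hash alone proves integrity, not origin; in practice the record would also be cryptographically signed by the generating party so the "AI-generated" label itself can be trusted.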

Vadiraj Hosur, Principal Engineer at Oracle AI & Health, emphasizes the critical importance of data handling in the healthcare AI sector: “The imperatives of data transparency and traceability are not just regulatory checkboxes but the bedrock of patient trust and safety. In healthcare AI, every piece of data holds the potential to alter a diagnosis or influence a treatment path. Ensuring the highest standards of data integrity and traceability is paramount, not just for regulatory compliance but for the sanctity of patient-centric care itself.”

While the executive order establishes a high bar for AI development, its efficacy will hinge on robust implementation. As AI's landscape continues to evolve, a proactive approach to its governance, particularly concerning training data, bias, data privacy, and content curation, becomes imperative. The journey to a transparent and responsible AI ecosystem is in its infancy, and collective efforts are needed to ensure its maturation benefits society at large.

More about Cogito Tech: Cogito Tech offers Enterprise Data Labeling Services (EDLS), specializing in data curation and labeling. Quality and ethical integrity are verified by the proprietary DataSum certification, which ensures transparency, ethical sourcing, and the highest data-governance standards. With over a decade in the industry, Cogito Tech serves sectors ranging from healthcare to finance, always ensuring domain relevance. For Large Language Models (LLMs) and Generative AI, it provides specialized services such as Reinforcement Learning from Human Feedback (RLHF), fine-tuning, red teaming, prompt engineering, multimodal data processing, data structuring, and intricate data curation.

More about Fairly AI: Fairly AI is a leader in AI governance and risk management. Fairly AI’s Asenion, an AI compliance agent designed to expedite the adoption of safe AI, ensures AI systems meet stringent compliance and safety standards. Developed through comprehensive trials with leading financial institutions and standardization bodies, Asenion offers a tailored solution to the growing complexity of AI compliance. The platform empowers teams to configure and deploy a custom AI agent that aligns with their specific engineering, compliance, and business requirements. With the ability to analyze datasets and models against both internal policies and external regulations, Asenion acts as a vigilant guardian in the cloud, integrating seamlessly into CI/CD pipelines for continuous assurance of AI system robustness and integrity.
