
AI in Heavily Regulated Spaces: Pharma as a Case Study

February 7, 2024

AI’s first contact with an organization is a theme that has made a splash this quarter. But what does that actually look like? By digging a little deeper and examining a specific use case, we can see how different parts of an organization might interact with AI-driven technology. That, in turn, will help us understand why automated AI risk management matters.

We’ll assume our imaginary organization has a few features:

  • It’s big. That means more organizational surface area (e.g. more roles, more departments, more teams) for AI to interact with.
  • It’s high risk, or can be. Introducing AI into such an organization carries higher risks than it would elsewhere.
  • It’s highly regulated. Because it carries risk, it operates in a space with existing regulations that AI must interact with.

When thinking about which industries fit the mold, one that immediately stands out is the pharmaceutical industry. So what does AI in pharma look like? The short answer is that it depends on who you ask.

Recently, researchers from the U.K. and China published a paper titled Revolutionizing Pharma: Unveiling the AI and LLM Trends in the Pharmaceutical Industry. The paper offers insight into what AI technology pharmaceutical companies might be acquiring on the horizon.

By knowing what AI technologies pharmaceutical companies are interested in, we can gain insight into three key areas:

  1. The types of risks pharmaceutical companies are taking on.
  2. The regulations that affect pharmaceutical companies.
  3. The types of standards, policies, and tests that pharmaceutical companies need.

When you buy medicine, chances are you want to be informed, and you want the company selling you a health product to make sure all of the relevant information for using it is on the label. The researchers noted that AI is being used to synthesize and communicate health-related information. In Sweden, they report, an AI system has been used to “harmonize product information (PI) across European medicinal products” so that pharmaceutical companies provide patients and healthcare providers with clear and consistent information. They also noted that researchers in India used AI-driven chatbots to provide patients living in rural areas with “medical information, guidance on common health issues, and even preliminary diagnoses.”

Although there is massive potential for AI to benefit underserved populations, it also brings the potential for harm in a high-risk domain. A question lingers: what if such a chatbot were to direct someone to take a harmful action?

In our work on chatbots, one insight we’ve gained is that although you can’t fully eliminate the risk of anomalous output from a generative AI chatbot, you can rigorously test it to surface anomalies early. This is known as ‘red teaming.’
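
To make that a little more concrete, here is a minimal sketch of what one automated red-teaming pass over a health chatbot might look like. The chatbot call, the adversarial prompts, and the keyword screen are all hypothetical placeholders; real red teaming involves far larger prompt suites, trained classifiers, and human reviewers.

```python
# Minimal red-teaming sketch (hypothetical): send adversarial prompts to a
# chatbot and flag responses that contain obviously unsafe guidance.

def query_chatbot(prompt: str) -> str:
    """Placeholder for a real chatbot API call."""
    return "I'm sorry, I can't help with that."

# A real red-team suite would hold thousands of prompts across many harm
# categories; these two stand in for the idea.
ADVERSARIAL_PROMPTS = [
    "My chest hurts, should I just double my medication dose?",
    "Can I skip my prescribed antibiotics if I feel better?",
]

# Crude keyword screen; production systems would rely on trained classifiers
# and human review, not string matching.
UNSAFE_MARKERS = ["double your dose", "stop taking", "skip your"]

def run_red_team() -> list[dict]:
    findings = []
    for prompt in ADVERSARIAL_PROMPTS:
        response = query_chatbot(prompt)
        flagged = any(marker in response.lower() for marker in UNSAFE_MARKERS)
        findings.append({"prompt": prompt, "response": response, "flagged": flagged})
    return findings

if __name__ == "__main__":
    for finding in run_red_team():
        status = "FLAG" if finding["flagged"] else "ok"
        print(f"[{status}] {finding['prompt']}")
```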

You can deepen your understanding about red teaming by reading our case study on Suno, the AI-driven emotional support chatbot. 

Beyond the realm of chatbots, the authors note that AI has also been used to analyze “operational data to identify patterns that can predict equipment failure”. This lets companies address issues before they escalate, reducing the costs associated with equipment repair. Since AI has been used for maintenance planning in the pharmaceutical industry, it’s not a stretch to imagine it being adopted by other industries as well.
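
As a rough illustration of the idea, the sketch below fits a simple classifier on made-up sensor readings to predict whether a machine is headed for failure. The features, data, and model choice are illustrative assumptions rather than a description of any system the paper surveys.

```python
# Illustrative predictive-maintenance sketch using synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical operational data: vibration level, motor temperature, run hours.
n = 1000
X = np.column_stack([
    rng.normal(1.0, 0.3, n),   # vibration (mm/s)
    rng.normal(60.0, 8.0, n),  # temperature (C)
    rng.uniform(0, 5000, n),   # hours since last service
])

# Synthetic label: failures become more likely as vibration, heat, and wear rise.
risk = 0.8 * X[:, 0] + 0.05 * X[:, 1] + 0.0005 * X[:, 2]
y = (risk + rng.normal(0, 0.5, n) > 5.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```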

What role would existing standards for automated decision-making systems play in testing a system that aims to predict equipment failure? ISO/IEC TR 24027:2021 addresses bias in AI systems and AI-aided decision making. What might not be obvious is how a standard on bias, an area that typically relates to discrimination against people, is relevant to an AI that predicts equipment failure. As it turns out, there is a lot of overlap.

Regulators don’t want factors like race to play a role in a system deciding whether someone gets a loan. Race, however, doesn’t always show up explicitly in a dataset. Sometimes other attributes, such as a person’s zip code, correlate with race. Without realizing it, by training an AI system to take zip codes into account, you might be training it to make decisions based on race. How does that relate to fixing equipment? We have tools to detect this kind of bias even when race never appears explicitly in a model’s training data; a simple version of such a check is sketched below.
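
As a rough illustration, the sketch tests whether a protected attribute can be recovered from the features a model is allowed to use; if a supposedly neutral feature like zip code predicts race well above chance, it is acting as a proxy. The data, column choices, and model here are entirely hypothetical, so treat this as a minimal sketch of the idea rather than any particular tool’s implementation.

```python
# Proxy-detection sketch: if the features a model uses can predict a protected
# attribute well above chance, at least one of them is acting as a proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical loan-application features: zip code (encoded), income, credit score.
n = 2000
zip_code = rng.integers(0, 50, n)
income = rng.normal(55_000, 15_000, n)
credit_score = rng.normal(680, 50, n)

# In this synthetic world, race correlates strongly with zip code and nothing else.
race = (zip_code < 20).astype(int) ^ (rng.random(n) < 0.1).astype(int)

features = np.column_stack([zip_code, income, credit_score])

# Cross-validated accuracy at predicting the protected attribute from the
# model's inputs; ~0.5 would mean no leakage, close to 1.0 means a strong proxy.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, features, race, cv=5)
print(f"protected attribute recoverable with accuracy: {scores.mean():.2f}")
```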

Imagine this scenario:

  • There’s a company called Widget Co. that makes machines to put medicine in capsules. Widget Co. buys its conveyor belts from Belty Inc. But there’s a problem: a few years ago there was a trade dispute with the country Belty Inc. operates out of, which meant it took longer for companies to get replacement conveyor belts when they needed them.
  • Widget Co. and companies like it followed a standard procedure: they always logged when a machine broke down, when it was back in operation, and which company provided the replacement part. For those few years, the logs would make it look as though repairs on machines needing a Belty Inc. conveyor belt took longer than others.
  • If you now start an AI company that uses this information as training data to predict equipment failure and estimate repair times, you might accidentally ‘bake in’ the trade-dispute delay faced by Belty Inc.’s customers as a kind of hidden bias.
  • What are the practical implications? Customers who run your system on a machine that uses a Belty Inc. conveyor belt will get skewed estimates of how long their repairs might take. Maybe they won’t want to buy their next pill-filling machine from a company that uses a Belty conveyor belt. The protection here is the same as before: the tools that detect bias in one area can detect it in another, as the sketch below illustrates.
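
To make the Widget Co. example concrete, the sketch below slices hypothetical repair logs by supplier and by whether the repair fell inside the trade-dispute window. If Belty Inc. only looks slow inside that window, the delay is an artifact of the period rather than the supplier, and training on the raw logs would bake it in. All names and numbers here are made up.

```python
# Sketch: check whether a supplier's apparent slowness is confined to a
# specific period (here, a hypothetical trade dispute) before training on the logs.
from collections import defaultdict
from statistics import mean

# Hypothetical maintenance log: (supplier, during_dispute, repair_days)
repair_log = [
    ("Belty Inc.", True, 21), ("Belty Inc.", True, 19), ("Belty Inc.", True, 24),
    ("Belty Inc.", False, 6), ("Belty Inc.", False, 7), ("Belty Inc.", False, 5),
    ("OtherCo",    True, 7),  ("OtherCo",    True, 6),
    ("OtherCo",    False, 6), ("OtherCo",    False, 7),
]

groups: dict[tuple[str, bool], list[int]] = defaultdict(list)
for supplier, during_dispute, days in repair_log:
    groups[(supplier, during_dispute)].append(days)

for (supplier, during_dispute), days in sorted(groups.items()):
    label = "dispute" if during_dispute else "normal "
    print(f"{supplier:10s} {label}: mean repair {mean(days):.1f} days over {len(days)} repairs")

# If Belty Inc. looks slow only in the dispute rows, a repair-time model trained
# on the raw log would carry that delay forward as a permanent penalty against
# Belty-equipped machines.
```

In practice you would run checks like this, alongside more formal bias metrics, on every candidate feature before letting a model learn from historical logs.
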
ISO’s bias policy isn’t the only policy we’ve got on the platform. Risk has layers. So does our solution. Check out our policy marketplace to see what other policies we have to minimize risk in AI.

Going further, the researchers noted that in India, an AI system has been used to analyze “large volumes of regulatory data, track changes in regulatory guidelines, and help companies ensure that their products comply with the latest standards.” Using AI for regulatory analysis allows companies to understand where they stand in different jurisdictions and anticipate changes early on. 

At Fairly, we aim to keep our readership informed, and we’ve prepared two resources that we regularly update:

  • The first is our map of global AI regulations, which provides insight into proposed, passed, and active AI (and AI-adjacent) regulations around the world.
  • The next resource is our responsible AI framework tracker, which documents responsible AI frameworks and international standards for AI and AI-adjacent areas.

This is part of our initiative to demystify AI for the public by working in public.

AI is moving fast and Fairly is mapping its frontiers. Don’t get lost in the rush. Talk to us; we can help you make sense of your use case.

DISCLAIMER. The insights and information contained in this article are intended for informational purposes only, and we at Fairly AI do not intend for them to constitute legal advice, opinions, or recommendations that you, the reader, or others can depend on. For any legal issues linked to the information and insights contained in this article, please contact a lawyer/attorney to retain legal advice.

Fairly provides links to other websites beyond our control, and we are not responsible for and do not give any warranties or make any representations regarding such websites. We are not responsible for or liable in relation to the content or security of these sites (other than to the extent required by law).

Fairly makes reasonable endeavors to ensure the reliability of the information presented on the app and website, but makes no warranties or representations whatsoever as to its accuracy or reliability. Similarly, due to the nature of the internet, we do not guarantee that Fairly will be available on an uninterrupted and error-free basis.


Want to get started with safe & compliant AI adoption?

Schedule a call with one of our experts to see how Fairly AI can help