AI’s first contact with an organization. It's a theme that’s made a splash this Q1. But what does that actually look like? By digging a little deeper and choosing a use case to examine, we can see how different parts of an organization might interact with AI-driven technology. This will then help us understand why automated AI risk management matters.
We’ll assume our imaginary organization has a few features:
When thinking about which industries fit the mold, one that immediately stands out is the pharmaceutical industry. So how does AI in pharma look? The short answer is that it depends who you ask.
Recently, researchers from the U.K. and China published a paper titled Revolutionizing Pharma: Unveiling the AI and LLM Trends in the Pharmaceutical Industry.This article provided insights into what the horizon for what AI technology pharmaceuticals might be acquiring.
By knowing what AI technologies pharmaceutical companies are interested in, we can achieve insights into three key areas:
When you look at medicine, chances are that you want to be informed but also you want the company selling you a health product to make sure all of the relevant information for using that product is on the label. Researchers noted that AI is being used to synthesize and communicate health-related information. The authors mentioned that in Sweden, an AI system has been used to “harmonize product information (PI) across European medicinal products” so that pharmaceutical companies provide patients and healthcare providers with clear and consistent information. In terms of providing health information, they also noted that researchers in India used AI-driven chatbots to provide patients living in rural areas with “medical information, guidance on common health issues, and even preliminary diagnoses.” We can see that although there is a massive potential for AI to benefit underserved populations, there also comes with it the potential for harm in a high-risk domain. There lingers a question: ‘what if this chatbot were to direct someone to take a harmful action?’
In our work on chatbots, one insight we’ve gained is that although you can’t fully eliminate risk of anomalous output when using a generative AI chatbot, what you can do is rigorously test it to pull out anomalies early. This is known as ‘red teaming.’
You can deepen your understanding about red teaming by reading our case study on Suno, the AI-driven emotional support chatbot.
Beyond the realm of chatbots, AI has also been used to analyze “operational data to identify patterns that can predict equipment failure”, the authors note. This would allow companies to address issues before they escalate in order to reduce costs associated with equipment repair. Since AI has been used for maintenance planning in the pharmaceutical industry, it’s not a stretch to imagine that it could be adopted by other industries as well.
What role would existing standards for automated decision making systems play in testing a system that aims to predict equipment failure? ISO/IEC TR 24027:2021 is a standard to address bias in AI-driven decision making systems. What might not be clear is how relevant is a standard on bias, an area that typically relates to discrimination against humans, to an equipment failure prediction AI? There is a lot of overlap.
Regulators don’t want factors like race to play a role in a system deciding whether someone gets a loan or not. Race, however, doesn’t always show up in the same way in a dataset. Sometimes, other things like someone’s zip code correlate to race. Unknown to you, by training an AI system to take zip codes into account, you might be training an AI system to make decisions based on race. How does that relate to fixing equipment? We have certain tools to detect bias, even if race isn’t explicitly shown in a model’s training data.
Imagine this scenario:
ISO’s bias policy isn’t the only policy we’ve got on the platform. Risk has layers. So does our solution. Check out our policy marketplace to see what other policies we have to minimize risk in AI.
Going further, the researchers noted that in India, an AI system has been used to analyze “large volumes of regulatory data, track changes in regulatory guidelines, and help companies ensure that their products comply with the latest standards.” Using AI for regulatory analysis allows companies to understand where they stand in different jurisdictions and anticipate changes early on.
At Fairly, we aim to keep our readership informed, and we’ve prepared two resources that we regularly update:
This is part of our initiative to demystify AI for the public by working in public
AI is moving fast and Fairly is mapping its frontiers. Don’t get lost in the rush. Talk to us, we can help you make sense of your use case.
DISCLAIMER. The insights and information contained in this article is intended for informational purposes only and we at Fairly AI do not intend for it to constitute legal advice, opinion, or recommendations that you, the reader, or others can depend on. For any legal issues linked to the information and insights contained in this article, please contact a lawyer/attorney to retain legal advice.
Fairly provides links to other websites beyond our control and we are not responsible for and do not give any warranties or make any representations regarding such websites. We are not responsible for or liable in relation to the content or security of these sites (other than to the extent required by law.)
Fairly makes reasonable endeavors to ensure reliability of the information presented on the app and website, but makes no warranties or representations whatsoever as to its accuracy or reliability. Similarly, due to the nature of the internet, we do not guarantee that the Fairly will be available on an uninterrupted and error free basis.