Use of AI in healthcare continues to establish itself as a valuable practice for the benefit of patients and healthcare providers. Even a cursory search of the subject turns up a number of existing examples of AI intervention in healthcare, such as cancer detection. In some cases, such intervention exceeds the capabilities of even the most experienced practitioners.

However, each superhuman capability has its kryptonite, and AI is no exception. Due to limited datasets, biased training data, or ineffective algorithms, AI has the potential to miss signs that would otherwise have been caught or queried further. Late last year, for example, a study found that, in its current state, AI is less effective at identifying skin cancer in persons of colour. Creators of AI and practitioners must therefore move forward with caution and ensure that an over-reliance on the technology does not emerge.

In an effort to address these risks, a number of initiatives have emerged around the world. In the EU, the proposed EU AI Regulation is set to enact a number of provisions aimed at protecting individuals who come into contact with AI. Failure to adhere to the standards and protections set out in the Regulation could result in substantial fines and, in some cases, criminal sanctions. In the UK, the NHS and UK Government recently commenced a pilot implementing algorithmic impact assessments as a method of keeping AI applied to healthcare and treatment in check.

In a recent initiative, the International Federation of Pharmaceutical Manufacturers and Associations (“IFPMA”) have released their own contribution to shaping the use of AI in healthcare in the form of a set of principles (the “Principles”), focusing on the ethical use of AI in its general application.

Principles on Ethical Guidance

The IFPMA highlight that the Principles aim to provide those within healthcare with a set of guardrails that they should consider, adapt, and, where possible, operationalise in their organisations. The Principles have been designed to align with the data ethics principles released by the IFPMA in May 2021, working in harmony with those existing principles to establish a safe and ethical environment for data use (particularly in the case of algorithmic decision-making).

The Principles are split into six distinct considerations:

Empowering Humans

AI systems should be designed and used with respect for the rights and dignity of persons at all times. When developing AI, members of the IFPMA should consider the impact on the individuals themselves as well as the overarching social benefit the technology may have. Where possible, AI should also be used as a means by which those impacted by its use can control their own healthcare based on their evolving requirements.

Accountability

IFPMA members should at all times take accountability for the use of AI throughout its lifecycle, whether or not it is developed by a third party. This should include establishing a proper governance protocol, implementing appropriate risk-based controls, and developing strategies to limit any unintended negative consequences, such as feedback loops.

Human Control

AI should be deployed with an appropriate level of human oversight, based on its potential risk to individuals. Where there is potential for substantial harm, AI should never be given complete autonomy in decision-making; a simple illustration of such a safeguard is sketched below.
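
By way of illustration only (the Principles prescribe no particular mechanism, and the labels, threshold, and function below are hypothetical), a risk-based oversight rule of this kind is often implemented as a simple routing gate: outputs in high-risk contexts, or below a confidence threshold, are referred to a human reviewer rather than acted on automatically.

    from dataclasses import dataclass

    @dataclass
    class Prediction:
        label: str         # e.g. "malignant" / "benign"
        confidence: float  # model confidence in [0, 1]

    # Hypothetical policy values; in practice these would be set and
    # documented through a risk-based governance protocol.
    HIGH_RISK_LABELS = {"malignant"}
    REVIEW_THRESHOLD = 0.90

    def route(pred: Prediction) -> str:
        """Act automatically only on low-risk, high-confidence outputs;
        otherwise refer the decision to a clinician."""
        if pred.label in HIGH_RISK_LABELS:
            return "human_review"  # substantial-harm cases are never autonomous
        if pred.confidence < REVIEW_THRESHOLD:
            return "human_review"  # low-confidence outputs are double-checked
        return "automated"

    print(route(Prediction("benign", 0.97)))     # automated
    print(route(Prediction("malignant", 0.99)))  # human_review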

Fairness and Minimisation of Bias

Developers and owners should seek to limit bias and maximise fairness in their use of AI. Any development should include procedures for reviewing the datasets used in training, and the assumptions made in the design of the algorithms implemented, to determine whether bias can be further reduced. Members should continue to monitor and adapt their AI to correct for errors and bias throughout the lifecycle of its use; one simple form such monitoring might take is sketched below. Diversity among the designers and developers of AI within their organisations should also be sought.
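
As a minimal sketch of what such lifecycle monitoring could look like in code (the Principles do not mandate any particular metric, and the subgroup labels and data below are invented for illustration), one common check compares a model's true positive rate across patient subgroups, echoing the skin-tone disparity noted earlier.

    import numpy as np

    def true_positive_rate(y_true, y_pred):
        """Fraction of actual positive cases the model correctly flags."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        positives = y_true == 1
        return float((y_pred[positives] == 1).mean())

    # Invented example data: (true labels, model predictions) per subgroup.
    groups = {
        "lighter_skin": ([1, 1, 0, 1, 0, 1], [1, 1, 0, 1, 0, 1]),
        "darker_skin":  ([1, 1, 0, 1, 0, 1], [1, 0, 0, 0, 0, 1]),
    }

    rates = {name: true_positive_rate(yt, yp) for name, (yt, yp) in groups.items()}
    gap = max(rates.values()) - min(rates.values())
    print(rates)
    print(f"TPR gap between subgroups: {gap:.2f}")
    # A material gap would trigger a review of training data and design
    # assumptions, consistent with the Principles.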

Privacy, Security, and Safety by Design

As part of the design process of any AI that is used, privacy and security should be treated as primary development considerations. Members should also implement appropriate measures to mitigate risks to the privacy, security, and safety of individuals and their data.

Transparency, Explainability, and Ethical Use

When deploying AI, members of the IFPMA should publicise, to the extent possible and as appropriate, when and how AI is used in their organisation. This should include how personal data is used, the goal of its use, the limitations of the data used, and any limitations of the system more generally. Where non-explainable AI is used and has the potential for significant impacts on individuals and their rights, members should ensure additional measures are taken, with a focus on transparency, human oversight, and the limitation of bias.

What comes next?

The IFPMA are keen to highlight that the Principles should act as a starting point for each of their members to consider how their internal processes, controls, operations, and policies could be adapted to incorporate AI in an ethical and responsible manner.

Ethical implementation of AI, however, is an ongoing process, and the IFPMA expect their members to continue their efforts and commitments beyond an initial review of the Principles.

Protocols and policies should therefore continue to be updated as AI becomes more pervasive throughout the healthcare and pharmaceutical industries.

Find out more

For more information on AI and the emerging legal and regulatory standards, contact the authors or your usual DLA Piper contact, or find out more at DLA Piper’s focus page on AI.

You can find a more detailed guide on the AI Regulation and what’s in store for AI in Europe in DLA Piper’s AI Regulation Handbook.

To assess your organisation’s maturity on its AI journey (and check where you stand against sector peers), you can use DLA Piper’s AI Scorebox tool.

You can find more on AI, technology and the law at Technology’s Legal Edge, DLA Piper’s tech sector blog.

DLA Piper continues to monitor updates and developments in AI and its impacts on industry in the UK and abroad. For further information, or if you have any questions, please contact the authors or your usual DLA Piper contact.