AI is set to transform the healthcare and life science sectors, with a huge impact on the healthcare experience of patients around the world: saving lives, improving outcomes and reducing harm. Already, we are seeing real-world data channelled through AI to optimise clinical trials, diagnose patient conditions and analyse the effectiveness of treatments.

In the pharmaceutical industry, AI is driving major advances in drug discovery and development, as well as supply chain and manufacturing efficiencies across the value chain. These are fundamental and deeply important gains. Nevertheless, the use of AI and machine learning also creates difficult choices for those involved in aspects such as data ethics and privacy. No one wants to block progress for non-essential reasons. However, it is essential to safeguard the key principles of data ethics so that data is not misused (eroding trust), corrupted (so that wrong decisions are made on the back of it) or shared incorrectly (so that the wrong people get sight of it, again leading to a loss of trust).

It's clearly essential that we have strong standards and rules around the ethics of AI, including legal compliance, data privacy and patient consent. But with developments moving at such lightning speed, this can become a challenge.

This is not to say that it is the ‘wild west’ out there. Healthcare, life science and pharmaceutical organisations take their responsibilities extremely seriously and have a framework of ethics committees, governance bodies and established policies and procedures in place. However, due to the sheer pace of change and the level of innovation we’re seeing, the question arises as to whether the established controls are sufficient.

Trust and compliance

The relevance of these questions was clear at a fascinating panel discussion at the Digital Ethics Summit 2023. One of the key takeaways arising from this was the overriding importance of trust. Keeping the trust of patients, the public at large and other stakeholders including regulators is simply essential if AI is to be able to transform medicines and healthcare in the positive ways everyone is hoping for.

How can that trust be established and maintained? At a foundational level, it comes down to companies rigorously complying with the laws and regulations that are in place. Companies developing AI within life sciences and healthcare must ensure that data protection responsibilities and human/consumer rights are respected at every stage of the process. There are a number of key principles that they must follow, including purpose limitation (only using data and personal information for the purpose it was collected for), data minimisation (only holding as much personal data as is actually needed), data anonymisation, and the right of individuals to know how their data is being used, with transparency over how it is processed.
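To make these principles concrete, here is a minimal sketch in Python of data minimisation and pseudonymisation applied to a single record. Everything here is illustrative: the field names, salt handling and purpose list are assumptions for the example rather than taken from any real system, and salted hashing is pseudonymisation rather than true anonymisation (which requires removing any realistic route back to the individual).

```python
import hashlib

# Hypothetical raw record; field names are illustrative only.
raw_record = {
    "name": "Jane Doe",
    "nhs_number": "943-476-5919",
    "postcode": "SW1A 1AA",
    "age": 54,
    "diagnosis_code": "E11",  # ICD-10 code for type 2 diabetes
    "shoe_size": 7,           # irrelevant to the stated purpose
}

# Purpose limitation / data minimisation: keep only the fields the
# stated analysis purpose actually requires.
FIELDS_NEEDED_FOR_PURPOSE = {"nhs_number", "age", "diagnosis_code"}

def minimise(record: dict) -> dict:
    return {k: v for k, v in record.items() if k in FIELDS_NEEDED_FOR_PURPOSE}

# Pseudonymisation: replace the direct identifier with a salted hash so
# analysts cannot read it, while the data controller can still link records.
SALT = b"example-salt"  # in practice, a secret managed in a key vault

def pseudonymise(record: dict) -> dict:
    record = dict(record)
    token = hashlib.sha256(SALT + record.pop("nhs_number").encode()).hexdigest()
    record["patient_token"] = token
    return record

prepared = pseudonymise(minimise(raw_record))
print(prepared)  # {'age': 54, 'diagnosis_code': 'E11', 'patient_token': '...'}
```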

With rules and regulations likely to proliferate as the AI revolution continues, it is clearly essential that players in the industry stay closely attuned to developments to ensure they remain compliant. Strong regulatory intelligence mechanisms, and sound legal advice where needed, will be vital. At the same time, however, whilst regulatory change will undoubtedly come, regulation is not moving at the same speed as technology. This may create a dilemma for some organisations: they need and want to act responsibly and ethically, but the regulations have not yet caught up.

Engagement with people

In any case, it is not simply a matter of ‘following the rules’. Another theme that emerged through the conference discussion was that individuals must be given genuine choice and options if they are really to engage. The developer of an AI-based personal medical device, for example, emphasised that it is when patients have some decision-making latitude of their own in how and when to use the device that they most fully embrace the technology.

Quite simply, if AI is ‘done’ to patients and consumers, they are more likely to be resistant or even hostile to it. When the AI works with the patient, they are more likely to embrace and value it. They must have an element of control. In the words of one panellist, the patient must retain a ‘kill switch’. This is also why it is crucial that patient groups are involved and consulted in the development and trials of new treatments and methods of care.

It is also important to remember that some medical data, such as genetic data, is unique in that it is connected to individuals and their families, and outlasts a person's lifetime. That data relates not only to who they are, but to who their children may be as well. It could also describe medical characteristics that an individual may not want shared more widely, such as genetically inherited conditions. This is why personal medical data must be treated so carefully, particularly when it is fed into AI.

Preventing bias

Another key issue is preventing bias. This is of fundamental importance across AI as a whole, not just healthcare. But it's an especially powerful and emotive question here. As far as is humanly possible, medicines and care pathways must be for everyone, not just specific groups or privileged cohorts. Bias needs to be prevented from one end of the industry to the other, from how clinical trials are designed and run (ensuring diverse and representative patient groups unless a trial is deliberately designed for a specific cohort or demographic) through to the actual delivery of care.

Rigorous and repeated testing of algorithms and the outcomes they are producing is needed: Are these as expected? Are they equitable? Are they, as far as possible, reflective of the diverse groups that make up our communities?
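As a minimal sketch of what such testing can look like in practice, one basic check is to compare an algorithm's positive-outcome rates across demographic groups and flag large gaps for human review. The group labels, data and tolerance below are hypothetical, and a real equity audit would use multiple fairness metrics agreed with clinicians and governance bodies, not this single demographic-parity-style test.

```python
from collections import defaultdict

# Hypothetical evaluation outputs: (demographic_group, model_recommended_treatment).
# In practice these would come from a held-out evaluation set.
results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def positive_rate_by_group(results):
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, positive in results:
        counts[group][0] += int(positive)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

rates = positive_rate_by_group(results)
print(rates)  # e.g. {'group_a': 0.66..., 'group_b': 0.33...}

# Flag the model for review if the gap between groups exceeds a tolerance
# agreed with the governance body (the value here is purely illustrative).
TOLERANCE = 0.1
if max(rates.values()) - min(rates.values()) > TOLERANCE:
    print("Warning: outcome rates differ across groups; investigate for bias.")
```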

Countering bias in algorithms comes down, in part, to the integrity of the data that is fed into them. For this, it is crucial that we find ways of unlocking data from the silos that it still so often sits in (within public healthcare systems, for example) and connecting it up across networks and pathways.
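As an illustrative continuation of the earlier sketch (the silo names, tokens and fields are hypothetical), connecting siloed records can be as simple as joining datasets on a shared pseudonymous key, giving models a fuller and more representative picture without exposing direct identifiers:

```python
# Two hypothetical data silos, each keyed by the same pseudonymous token.
hospital_records = {
    "tok1": {"age": 54, "diagnosis_code": "E11"},
    "tok2": {"age": 61, "diagnosis_code": "I10"},
}
community_records = {
    "tok1": {"prescriptions": ["metformin"]},
    "tok3": {"prescriptions": ["amlodipine"]},
}

# Connect the silos: merge records that share a token, keeping the rest,
# so downstream analysis reflects the broadest available population.
linked = {
    token: {**hospital_records.get(token, {}), **community_records.get(token, {})}
    for token in hospital_records.keys() | community_records.keys()
}

print(linked["tok1"])
# {'age': 54, 'diagnosis_code': 'E11', 'prescriptions': ['metformin']}
```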

Senior accountability and ethical leadership

Suffice it to say that, while there are multiple challenges, one fundamental precept was agreed upon in the conversation: senior accountability.

Management teams within health and pharma organisations must take ownership of the ethical issues surrounding AI and ensure that ownership flows down through the organisation via clear and robust governance structures. These will be needed more than ever. The speed of the AI journey is only going to accelerate. Responsible leadership, ethical clarity and strong governance will be needed to stay on course and make AI the force for good it should be.