AI is set to transform the healthcare and life science sectors, with a profound impact on the healthcare experience of patients around the world: saving lives, improving outcomes and reducing harm. Already, we are seeing real-world data channelled through AI to optimise clinical trials, diagnose patient conditions and analyse the effectiveness of treatments.
In the pharmaceutical industry, AI is having a huge impact on drug discovery and development, as well as on supply chain and manufacturing efficiencies across the value chain. These are fundamental and deeply important advances. Nevertheless, the use of AI and machine learning also creates difficult choices in areas such as data ethics and privacy. No one wants to block progress for non-essential reasons. However, it is essential to safeguard the key principles of data ethics so that data is not misused (eroding trust), corrupted (so that wrong decisions are made on the back of it) or shared incorrectly (so that the wrong people get sight of it, again leading to a loss of trust).
It is clearly essential that we have strong standards and rules around the ethics of AI, including legal compliance, data privacy and patient consent. But with developments moving at such lightning speed, keeping those standards current is a real challenge.
This is not to say that it is the 'wild west' out there. Healthcare, life science and pharmaceutical organisations take their responsibilities extremely seriously, with ethics committees, governance bodies and established policies and procedures in place. However, given the sheer pace of change and the level of innovation we are seeing, the question arises as to whether these established controls are sufficient.