Ethics, Insurance and Artificial Intelligence

More knowledge, more ethics?

More information should help us become more ethical. Ethics is (at least partially) a matter of recognising and predicting how our actions affect others, and then pursuing those actions that result in the least harm or the greatest good. With more information and better prediction, our ethical decision-making should improve. At least, this would be the case if one assumes, as Socrates did, that “knowledge is virtue”. Socrates believed that when we truly grasp what is good, and why it is good, such knowledge is compelling – it effectively guides behaviour. The inverse, then, is that bad behaviour is ultimately a form of ignorance.

Artificial Intelligence (AI) for good

It stands to reason, then, that the advent of Big Data and the predictive power of AI opens new moral opportunities. And there are uses of Big Data and AI that do exactly that. Consider, for instance, the personal behavioural improvements made possible by the myriad of tracking apps available to individuals. Receiving alerts and summaries about the driving “mistakes” you make allows you to consciously adjust your driving, learn new habits, and consequently reduce the risk of accident, injury or death. In a Socratic sense, these applications help the moral agent to “know thyself”1.

On a bigger scale, organisations like “AI for Good” are pursuing moral good through the use of AI. One example is rAInbow (or “Bo” for short) – an AI-powered conversational bot that provides a safe and nonjudgemental space to identify and prevent abuse and gender-based violence2. Another example, from the field of psychiatry, is the use of machine learning to identify the risk of suicide from an analysis of social media posts, creating algorithms that will aid clinical decision-making in future3. Through technologies like these, new avenues open up for the effective pursuit of health, mental and physical wellbeing, community and social justice.