Trust in Artificial Intelligence

It could also become increasingly difficult to independently verify the outputs or decisions of an AI system if the supporting data and calculation methods used at the time of the decision are not captured: for example, because capturing such ‘evidence’ continuously would require vast data storage.
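By way of illustration only, the sketch below shows what even a lightweight form of such evidence capture might involve, assuming a model object with a scikit-learn-style predict method; the function name, log format and use of a model fingerprint are our own assumptions, not a prescribed control. Even this minimal variant hints at why capturing full supporting data for every decision quickly becomes a storage problem.

```python
import hashlib
import json
import pickle
import time

def capture_decision_evidence(model, features, log_path="decision_log.jsonl"):
    """Record one decision together with the evidence needed to re-verify it later."""
    # A hash of the serialised model stands in for its full weights; storing the
    # complete model state for every decision is what makes continuous capture costly.
    model_fingerprint = hashlib.sha256(pickle.dumps(model)).hexdigest()

    output = model.predict([features])[0]

    record = {
        "timestamp": time.time(),
        "inputs": features,
        "model_fingerprint": model_fingerprint,
        "output": output,
    }
    # Append-only log: each line is one decision plus its supporting evidence.
    with open(log_path, "a") as log_file:
        log_file.write(json.dumps(record, default=str) + "\n")
    return output
```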

Mastering the change process

Controls over system change have been a staple of IT control environments for decades. Without effective controls, the likelihood of erroneous code, system outages and successful cyber-attacks is high, as numerous examples demonstrate.

That said, many of the controls entities use to manage change to their technology systems will prove obsolete for changes driven by the continuous self-learning capabilities of AI. That would be the case, for example, where the AI itself makes changes, such as adjusting the weightings within the model that determine the answers it produces, without seeking or obtaining approval for those changes.
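To make the point concrete, the toy model below (entirely illustrative and not drawn from any particular production system) shows why there may be nothing for a traditional change-approval step to intercept: the weights that determine the model's answer move with every observation it processes, with no discrete release event in between.

```python
import numpy as np

class SelfLearningScorer:
    """Toy online model: its weights drift with every observation it sees."""

    def __init__(self, n_features, learning_rate=0.01):
        self.weights = np.zeros(n_features)
        self.learning_rate = learning_rate

    def predict(self, x):
        # The current weights determine the answer the model produces.
        return float(self.weights @ np.asarray(x))

    def learn(self, x, target):
        # Each call silently adjusts the weights (a simple gradient step); this is
        # the 'change' that a conventional approval workflow never gets to see.
        x = np.asarray(x)
        error = target - self.predict(x)
        self.weights += self.learning_rate * error * x
```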

Whilst we can envisage alternatives, such as the AI suspending a ‘code release’ until testing and approval have taken place, it may not be possible to prove that no change has taken place without approval, and it may be virtually impossible to conduct parallel running in an AI system. And how would you implement the traditional segregation of duties (SoD) between the development and live systems if the AI solution can deliver it all with no human intervention? Or would you no longer need SoD for non-human processing?
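One way to picture the ‘suspend release until tested and approved’ alternative is sketched below; the class and field names are hypothetical, and the sketch simply quarantines newly learned weights until testing and approval evidence have been recorded.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class PendingModelChange:
    """A learned change quarantined from the live system until it is evidenced."""
    description: str
    new_weights: List[float]
    test_passed: bool = False
    approved_by: Optional[str] = None

def release_change(change, audit_log):
    """Promote a pending change only when testing and approval evidence exist,
    recording who approved it and when, as a conventional change control would."""
    if not change.test_passed:
        raise PermissionError("Release blocked: no evidence of testing")
    if change.approved_by is None:
        raise PermissionError("Release blocked: no approver recorded")
    audit_log.append({
        "released_at": datetime.now(timezone.utc).isoformat(),
        "approved_by": change.approved_by,
        "description": change.description,
    })
    return list(change.new_weights)  # these become the live weights
```

The gap highlighted above remains, however: a gate like this only provides assurance if it can also be shown that the AI cannot update the live weights by any other route.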
