The Ukraine war impacts us all. It has caused the greatest humanitarian crisis in Europe since WWII, an energy crisis, and alarming cascading effects throughout the economy. In this blog we take a moment to reflect on how the Ukraine war affects Artificial Intelligence and the risks associated with its use.
The first impact of the war on the Artificial Intelligence governance, risk, and compliance world is probably quite obvious. European defense spending has increased by hundreds of billions, and the defense industry is scaling up its production of smart weapons. The reader will probably think of drones, which have been used to great effect in an offensive role in Ukraine.
War between high-tech powers is likely to put drone AI and similar technologies into fast forward. Although society at large agrees that there should be a man in the loop whenever a drone acquires a target and attacks [1], one factor pushes developers toward applying AI in drones whenever the envisioned adversary is more technologically capable than the Taliban.
Remote control depends on wireless data connections, and those connections can be interfered with. To avoid losing a drone altogether when the link drops, the drone needs a certain level of autonomy. The big question is: what is the drone supposed to do while it is not under control? The right answers depend on the application and the operator, and every armed force will have to answer them for itself.
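To make that question concrete, here is a minimal, purely illustrative sketch of a link-loss failsafe policy in Python. The states, timeout, and ordering are hypothetical; in practice they would follow from the operator's doctrine and rules of engagement.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Failsafe(Enum):
    """Hypothetical fallback behaviors for a drone that loses its link."""
    HOLD_POSITION = auto()   # loiter and keep trying to reconnect
    RETURN_TO_BASE = auto()  # fly a preplanned route home
    ABORT_MISSION = auto()   # safe the payload and break off


@dataclass
class LinkState:
    seconds_since_contact: float
    payload_armed: bool


def choose_failsafe(link: LinkState, reconnect_window: float = 30.0) -> Failsafe:
    """Pick a fallback behavior for a drone that is out of contact."""
    if link.payload_armed:
        # An out-of-contact drone never attacks on its own in this sketch:
        # disarming is the first thing that happens.
        return Failsafe.ABORT_MISSION
    if link.seconds_since_contact < reconnect_window:
        return Failsafe.HOLD_POSITION
    return Failsafe.RETURN_TO_BASE
```

Even this toy version shows where the hard choices hide: the reconnect window, the disarm rule, and the return route all encode policy, not just engineering.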
This is an interesting topic for an essay by itself, but not one we will address any further in this blog.
A second big impact is on critical infrastructure, specifically the smart grid [2]. The resilience of the prediction models that balance our electricity and natural gas grids is being tested: both their ability to function properly under genuinely exceptional circumstances, and their ability to withstand deliberate outside interference meant to disrupt them.
There are three basic instruments for balancing the grid in a situation of real scarcity: real-time pricing, bringing generation capacity on- and offline, and connecting or disconnecting consumers. Historical data offers no clear answers for this winter, nor for the risk of intentional interference with this balancing operation.
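To sketch what such a balancing decision could look like, here is a deliberately simplified, merit-order style dispatch step. The instrument ordering, function name, and numbers are hypothetical and serve only to illustrate the three instruments.

```python
def balance_step(demand_mw: float, supply_mw: float,
                 spare_generation_mw: float,
                 sheddable_load_mw: float) -> list[str]:
    """Illustrative scarcity-balancing step using the three instruments:
    real-time pricing, generation dispatch, and load disconnection."""
    actions = []
    shortfall = demand_mw - supply_mw
    if shortfall <= 0:
        return ["grid balanced, no action"]

    # Instrument 1: raise the real-time price to suppress demand.
    actions.append(f"raise real-time price (shortfall {shortfall:.0f} MW)")

    # Instrument 2: bring spare generation capacity online.
    dispatched = min(shortfall, spare_generation_mw)
    if dispatched > 0:
        actions.append(f"dispatch {dispatched:.0f} MW of reserve generation")
        shortfall -= dispatched

    # Instrument 3: as a last resort, disconnect interruptible consumers.
    if shortfall > 0:
        shed = min(shortfall, sheddable_load_mw)
        actions.append(f"shed {shed:.0f} MW of interruptible load")

    return actions
```

For example, a 200 MW shortfall against 150 MW of reserves and 100 MW of interruptible load yields a price signal, a full reserve dispatch, and 50 MW of load shedding. The real difficulty, as noted above, is that the prediction models feeding the inputs to such a step were trained on data from calmer times.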
The third impact is the least obvious, but the most widespread. Financial risk, and the financial resources to set aside to cover that risk, have become harder to predict accurately. This obviously holds for energy companies dealing with unpredictable prices and for households that may not be able to pay their bills. It also holds for public sector organizations dealing with income assistance and displaced people, and for the manufacturing sector, insurance companies, banks, and SMEs such as greenhouse farmers, bakeries, restaurants, and caterers, whose viability may critically depend on the cost of energy.
This impact affects almost everyone. Right?
Many organizations will find themselves hastily adapting prediction or decision-making models, or considering new applications of such models, to deal with new circumstances and new problems. But those adaptations are based on fresh analyses of old data. If these models are brought into operation in a hurry, good governance, risk, and control capabilities become even more essential to keep your AI in Control.
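One concrete control in such a toolkit is a pre-deployment check on whether the old training data still resembles today's reality. The sketch below uses the population stability index (PSI), a common drift measure; the energy price figures are made up, and the 0.2 threshold is a conventional rule of thumb rather than a standard.

```python
import numpy as np


def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between the training-era distribution of a feature and the
    distribution observed today; larger values mean more drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_frac, _ = np.histogram(expected, bins=edges)
    act_frac, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions, avoiding zero divisions.
    exp_frac = np.clip(exp_frac / exp_frac.sum(), 1e-6, None)
    act_frac = np.clip(act_frac / act_frac.sum(), 1e-6, None)
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))


# Illustrative gate: energy prices drawn from a pre-crisis and a crisis regime.
rng = np.random.default_rng(0)
train_prices = rng.normal(50, 10, 5_000)    # historical prices, €/MWh
today_prices = rng.normal(180, 60, 5_000)   # current prices, €/MWh
psi = population_stability_index(train_prices, today_prices)
if psi > 0.2:  # conventional "significant shift" rule of thumb
    print(f"PSI={psi:.2f}: retrain and revalidate before deployment")
```

A gate like this does not make a hastily adapted model safe, but it forces the drift between old data and new circumstances to be measured and signed off rather than ignored.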
In our AI in Control methodology [3], we distinguish five pillars of good AI practice: integrity, resilience, explainability, fairness, and accountability. In recent years, the most important business case for AI in Control was built on an interest in fairness and explainability. In the near future, the more traditional pillars of resilience and integrity may turn out to be the ones that matter most to our clients.