Artificial Intelligence can assimilate highly complex data in a way humans cannot, and for that very reason it can deliver unique value when fed data from many different sources. This requires new forms of collaboration between parties in ecosystems: forms in which parties work together in a structured way to tackle societal issues, without the collaboration feeling like a straitjacket.
Data as 'food' for Artificial Intelligence
A single data source often has limited value in data analysis; it is precisely the combination of data sources that creates value. Think of how the care pathway of an individual patient can be greatly improved by seamlessly using data from the different organisations involved around that patient, or how container flows in seaports can be made more efficient; numerous other domains offer similar opportunities. The true value of AI, which needs data as ‘food’ to demonstrate its worth, will therefore often only come into its own in settings where multiple data sources can be analysed together. This is precisely why working together in ecosystems offers great potential to cut costs, reduce risk, or create value: what matters is that multiple parties address challenges together rather than individually.
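A minimal sketch of this point, with invented record contents and patient IDs: neither source alone reveals the gap that their combination exposes.

```python
# Hypothetical illustration: combining two data sources yields an insight
# that neither source provides on its own. All records are invented.

hospital_records = {
    "patient-17": {"diagnosis": "type 2 diabetes"},
    "patient-42": {"diagnosis": "hypertension"},
}

pharmacy_records = {
    "patient-17": {"dispensed": ["metformin"]},
    "patient-42": {"dispensed": []},  # prescription never picked up
}

def combine(hospital, pharmacy):
    """Join the two sources on patient ID into one view."""
    return {
        pid: {**record, **pharmacy.get(pid, {})}
        for pid, record in hospital.items()
    }

# Only the combined view shows that patient-42 was diagnosed but never
# received medication -- a possible signal for follow-up care.
view = combine(hospital_records, pharmacy_records)
flagged = [pid for pid, r in view.items()
           if r.get("diagnosis") and not r.get("dispensed")]
print(flagged)  # ['patient-42']
```

The same pattern scales to any domain where sources are keyed on a shared identifier, which is exactly what makes cross-organisation analysis valuable.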
Technically, much is already possible in this area. Advanced AI models (such as ‘digital twins’) can generate the necessary insights in such complex environments. Practice shows that interest is growing across the board, particularly because there are ever more possibilities to make data available responsibly for the necessary analyses.
Collaboration in ecosystems
This does, however, involve a number of challenges and preconditions.
One is scalability. Experience shows that simply bringing all data from different sources to a single point or party does not work. At the same time, much of AI developed from the assumption that all data is located in one place. A new, federated model is therefore needed, in which agreements are made on how to collaborate. This requires a degree of freedom: each participant in an ecosystem can implement a component in multiple ways. The ecosystem's reference architecture describes a possible implementation but leaves room for variation, as long as parties adhere to the defined protocols and standards. The same concept got the internet off the ground.
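The federated idea can be sketched in a few lines. This is a hedged, minimal illustration, not a production design: each participant analyses its own data locally and shares only an aggregate, so raw data never moves to a central point. The class and function names, and the numbers, are invented here.

```python
# Minimal federated-analysis sketch: raw data stays with each party;
# only small aggregates cross organisational boundaries.

from statistics import mean

class Participant:
    """One ecosystem party. Internals may differ per party, as long as
    the agreed 'protocol' (report a local mean and count) is honoured."""

    def __init__(self, local_data):
        self._data = local_data  # stays on-premise, never shared

    def local_summary(self):
        return {"mean": mean(self._data), "n": len(self._data)}

def federated_mean(participants):
    """Combine local summaries into one global statistic."""
    summaries = [p.local_summary() for p in participants]
    total = sum(s["mean"] * s["n"] for s in summaries)
    count = sum(s["n"] for s in summaries)
    return total / count

parties = [Participant([10, 20, 30]), Participant([40, 50])]
print(federated_mean(parties))  # 30.0
```

The point of the sketch is the division of responsibilities: the reference architecture would fix only the summary "protocol", while each party remains free in how it computes its local summary.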
Second, modularity is important. A well-known pitfall is a business case for an ecosystem that only works if everyone invests and joins at the same time. In practice that is often unfeasible, so the initiative dies before it ever gets off the ground. An ecosystem should therefore be able to grow organically and expand over time, with the initial phase already delivering value to its first participants.
Third, there is a need for oversight of the collaborative agreements. The concept of policy-as-code (PaC) offers a way forward here. In essence, it means programming supervision into software. This creates complete control: breaking a rule is inevitably flagged and, where possible, prevented in the first place. Until now, PaC was only feasible for relatively simple, hard rules, for example whether certain data may be used by a certain person. In practice, however, oversight often involves norms that require interpreting a complex set of factors, and that is exactly what researchers are now focusing on: recent developments in AI make it possible to build such interpretation into software.
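A policy-as-code rule of the simple, "hard" kind mentioned above can be expressed as an executable check. The roles, data categories, and policy below are invented for illustration; a real deployment would use a dedicated policy engine rather than a dictionary.

```python
# Hedged policy-as-code sketch: an access rule expressed as executable
# code, so a violation is detected automatically and can be blocked
# outright rather than merely logged. Policy contents are invented.

POLICY = {
    # role -> data categories that role may access
    "physician": {"diagnosis", "medication"},
    "researcher": {"diagnosis"},  # e.g. pseudonymised research access only
}

def check_access(role, category):
    """Return True if the policy permits this access. A caller can
    refuse the action when False (prevention, not just detection)."""
    return category in POLICY.get(role, set())

assert check_access("physician", "medication")
assert not check_access("researcher", "medication")  # flagged and blocked
assert not check_access("visitor", "diagnosis")      # unknown role: deny
```

The harder case the text points to, norms requiring interpretation of many factors, would replace the simple set lookup with a learned or rule-combining model, while keeping the same enforce-at-the-boundary structure.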
In short, federated ecosystems – with some autonomy for participants but crystal-clear agreements on cooperation – are likely to have a strong future. Parties should therefore develop a strategic vision now of the role they want to play in such an ecosystem.