Executive Summary
This article continues to explore the challenges in the payments industry and how cloud technologies can provide solutions. It introduces KPMG's payments reference architecture, which decomposes the logical architecture into three levels for a modular structure. The article emphasises the importance of a sound component placement strategy for cloud adoption, considering factors such as performance, data privacy, disaster recovery, cost, and integration requirements.
The article then focuses on message processing and orchestration, highlighting the benefits of using cloud-based infrastructure for these components. It discusses the need for scalability, distributed computing, reduced latency, and integration with various stakeholders. The article also emphasises the importance of high availability, reliability, and SRE principles for orchestration systems.
Overall, the article provides valuable insights into cloud adoption and component placement strategies for payment processing systems, particularly in the areas of message processing and orchestration.
The first article in this series investigated some of the challenges faced by the payments industry and how cloud technologies can be used to alleviate them. In this follow-up article, we look at some of the work KPMG has undertaken to help with this transformational journey.
Introducing the payments reference architecture from KPMG
In the context of payment processing systems, Level 1-3 architectures refer to the tiers of the processing stack and can be summarised as follows:
- Level 1: The Network Level
The network level, the lowest level of the stack, manages the physical transfer of payment data. This layer covers the physical infrastructure and procedures for transmitting data over the internet or other networks.
- Level 2: The Processing Level
The processing level sits above the network level and is responsible for processing and routing payment transactions. This layer includes the many switches and payment gateways that move data between the various players in the payment processing ecosystem.
- Level 3: The Business Level
The business level, the top layer of the stack, is responsible for the system's overall business processes. This layer consists of elements including reporting and analytics systems, transaction management systems, and merchant management systems.
Each layer of the stack has its own set of responsibilities, which together ensure the quick and secure processing of payment transactions.
KPMG has decomposed the logical architecture into three levels to support a modular structure. This modular approach simplifies future change management: parts of a component can be worked on without taking the entire system offline, and new functionality can be added through small changes to the disaggregated components. The levels of KPMG’s reference architecture are:
- Level 1 of the logical architecture comprises the high-level domains of Network/Security, Portal/Connectivity, Inbound/Outbound Processing, Orchestration, and Core Clearing and Settlement. These domains are supplemented by Supporting Services.
- Level 2 defines the modules within the domains that provide structure to the architecture. The Payments Hub comprises 11 key modules:
- Network
- Identity Management
- Security
- Portals
- Core Message Gateways
- Message Ingress
- Message Egress
- Message Processing Business Services
- Orchestration
- Clearing
- Settlements
- Level 3 elaborates further on the components within each module necessary to support a robust and resilient solution.
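To make this decomposition concrete, the sketch below shows one hypothetical way the domains and modules could be captured as data so that each module can be changed and versioned independently. The grouping of modules under domains, and the Level 3 detail shown, are assumptions for illustration only, not KPMG's actual artefact.

```python
# Illustrative sketch of a modular decomposition: Level 1 domains map to
# Level 2 modules, and Level 3 detail is elaborated per module.
# The groupings below are assumptions, not the reference architecture itself.
payments_hub = {
    "Network/Security": ["Network", "Identity Management", "Security"],
    "Portal/Connectivity": ["Portals", "Core Message Gateways"],
    "Inbound/Outbound Processing": ["Message Ingress", "Message Egress",
                                    "Message Processing Business Services"],
    "Orchestration": ["Orchestration"],
    "Core Clearing and Settlement": ["Clearing", "Settlements"],
}

# Level 3 components, shown here for a single module only (hypothetical names).
level_3_components = {
    "Message Ingress": ["schema validation", "debulking", "inbound queueing"],
}

print(sum(len(modules) for modules in payments_hub.values()), "modules")
```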
Component placement strategy
A sound component placement strategy for cloud adoption rests on carefully evaluating the needs of each component of a payment processing system and choosing the most appropriate deployment model for it.
Through our experience working with banks around the world, KPMG has identified the common themes that are critical for consideration:
- Performance requirements:
Some components of a payment processing system demand high performance and low latency, while others do not. Latency-sensitive components should be placed closer to end users; less performance-critical components can be placed further away.
- Data privacy and compliance standards:
Some parts of a payment processing system handle sensitive data that must be safeguarded in accordance with regulatory standards such as PCI DSS. These components should be deployed in a secure environment that meets the required standards.
- Requirements for disaster recovery and business continuity:
To guarantee service continuity in the event of a disaster or outage, payment processing systems must be highly available and robust. A sound component placement strategy deploys critical components across different availability zones or regions to provide redundancy and failover.
- Cost factors:
Some cloud deployment models, such as serverless computing, can reduce costs by removing the need for infrastructure administration. However, these models may not be appropriate for every part of a payment processing system.
- Integration requirements:
The components of a payment processing system must be able to communicate and share information with one another. A sound component placement strategy deploys components so that this interaction remains seamless and reliable.
Regardless of where your organisation sits along this cloud transformation journey, a sound component placement strategy for cloud adoption entails carefully assessing the needs of each component of the payment processing system. Each organisation must then choose the most suitable deployment model based on performance, data privacy and compliance, disaster recovery and business continuity, cost, and integration needs; a simple illustration of how these considerations might be weighed follows.
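As an illustration only, the sketch below encodes these five considerations as a simple scoring rule for a hypothetical component. The weights, component attributes, and deployment options are assumptions, not a KPMG methodology; in practice each organisation would weigh these factors against its own regulatory and commercial context.

```python
# Hypothetical placement helper: scores deployment options against the five
# considerations discussed above. All weights and attributes are illustrative.
def choose_deployment(component: dict) -> str:
    options = ["on-premises", "cloud region (local)", "serverless"]
    scores = {opt: 0 for opt in options}

    if component["latency_sensitive"]:
        scores["cloud region (local)"] += 2   # performance: keep close to users
    if component["handles_cardholder_data"]:
        scores["on-premises"] += 1            # compliance: control of PCI DSS scope
        scores["cloud region (local)"] += 1   # or a compliant cloud environment
    if component["needs_multi_region_failover"]:
        scores["cloud region (local)"] += 2   # DR: multi-AZ/region redundancy
        scores["serverless"] += 1
    if component["bursty_workload"]:
        scores["serverless"] += 2             # cost: pay only for what runs
    if component["tight_coupling_to_core"]:
        scores["on-premises"] += 2            # integration: proximity to legacy core

    return max(scores, key=scores.get)

print(choose_deployment({
    "latency_sensitive": True,
    "handles_cardholder_data": False,
    "needs_multi_region_failover": True,
    "bursty_workload": True,
    "tight_coupling_to_core": False,
}))
```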
Considerations for cloud adoption and component placement - Message Processing and Orchestration
By using the cloud for payment message orchestration, service providers can achieve faster processing, greater agility, scalability, and operational efficiency. Cloud-based orchestration enables payment message integration, automation, and monitoring, ensuring dependable and effective processing across the whole payment ecosystem.
Considering this, as well as the message processing and orchestration components of KPMG's reference payment architecture, it becomes clear that a contemporary payments architecture should be modular, with certain components being hosted in the cloud.
Let's consider the requirements for controlling message ingress and egress. The essential elements in KPMG's reference design include the validation and formatting of messages and files, bulking and debulking services, and inbound and outbound message queuing. Message validation on ingress and egress means verifying and maintaining the accuracy of payment-related data as it enters or leaves the system; maintaining the accuracy and security of financial transactions depends on this validation. By their nature, these services must be able to scale resources up or down in response to demand, since systems may see spikes in transaction volumes at certain times of day or at specific times of the year. Cloud scaling allows resources to be assigned dynamically, so the system can grow or shrink as necessary, and payment service providers can simply scale up their compute, storage, and network capacity to manage additional traffic. This ensures that resource constraints do not impede the translation, validation, and processing of payment messages.
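As a minimal sketch of inbound validation, the example below checks a simplified credit-transfer message with hypothetical field names. A production gateway would validate against the full scheme schema (for example ISO 20022) rather than a hand-rolled check like this.

```python
from decimal import Decimal, InvalidOperation

# Hypothetical, simplified inbound validation: checks that a payment message
# carries the fields needed before it is queued onward for processing.
REQUIRED_FIELDS = {"message_id", "debtor_account", "creditor_account", "currency", "amount"}

def validate_inbound(message: dict) -> list[str]:
    errors = []
    missing = REQUIRED_FIELDS - message.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    try:
        if Decimal(str(message.get("amount", "0"))) <= 0:
            errors.append("amount must be positive")
    except InvalidOperation:
        errors.append("amount is not a valid number")
    if len(str(message.get("currency", ""))) != 3:
        errors.append("currency must be a 3-letter ISO code")
    return errors  # an empty list means the message can be queued onward

errors = validate_inbound({"message_id": "M1", "debtor_account": "DE89...",
                           "creditor_account": "GB29...", "currency": "EUR",
                           "amount": "150.00"})
print(errors or "valid")
```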
Additionally, cloud platforms support distributed computing, enabling payment messages to be processed simultaneously across several servers or instances. This parallel processing can greatly reduce translation, validation, and processing times: by handling many messages at once, payment systems shorten end-to-end processing and boost throughput.
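For illustration, the sketch below fans validation work out across worker processes using Python's standard library. In a cloud deployment the same pattern would typically be realised with multiple instances or containers behind a queue rather than a single host; the message fields are assumptions.

```python
from concurrent.futures import ProcessPoolExecutor

# Hypothetical parallel pipeline: each payment message is handled independently,
# so a batch can be spread across several workers (or, in the cloud, across
# several instances) and processed simultaneously.
def process_message(message: dict) -> dict:
    message["status"] = "validated"   # stand-in for translate/validate/process
    return message

def process_batch(messages: list[dict]) -> list[dict]:
    with ProcessPoolExecutor() as pool:
        return list(pool.map(process_message, messages))

if __name__ == "__main__":
    batch = [{"message_id": f"M{i}", "amount": "10.00"} for i in range(1000)]
    print(len(process_batch(batch)), "messages processed")
```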
Cloud services are typically available across many regions worldwide. This geographic spread lets payment service providers deploy their systems closer to their clients or business partners, lowering network latency and improving overall performance. It is a common design pattern in a hybrid-cloud architecture, typically driven by an edge computing strategy, to place edge components as close as possible to end users and thereby reduce data transfer costs. Cloud providers' cost models are usually based on data egress from specific regions and/or networks, so limiting cross-region data transit reduces cost as well as latency. For instance, having cloud (or data centre) based infrastructure in a particular country or region reduces data transmission delays when a payment message needs to be translated, verified, and processed in that country or region.
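As a rough illustration of this edge-placement point, the sketch below picks the processing region with the lowest combined latency and egress cost for a given counterparty. The region names, latency figures, and prices are made up for the example.

```python
# Illustrative only: choose where to place (or route to) an edge component so
# that cross-region data transfer, and hence latency and egress cost, is minimised.
REGIONS = {
    # region: (round-trip latency in ms to the counterparty, egress $/GB) - made-up figures
    "eu-west":  (12, 0.09),
    "us-east":  (85, 0.09),
    "ap-south": (140, 0.11),
}

def best_region(monthly_gb: float, latency_weight: float = 1.0) -> str:
    def cost(region: str) -> float:
        latency_ms, egress_per_gb = REGIONS[region]
        return latency_weight * latency_ms + monthly_gb * egress_per_gb
    return min(REGIONS, key=cost)

print(best_region(monthly_gb=500))  # -> "eu-west" for a European counterparty
```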
Message processing should be deployed in a distributed and scalable manner, but what about orchestration, which is the heart of the payment hub? How can consistent message state be guaranteed to ensure fast and accurate settlement?
As with message processing, the load on message orchestration tends to be variable and driven by demand. However, orchestration generally requires a more centralised approach than processing, so a distributed structure may not be ideal; even so, cloud may still be the best option.
Payment orchestration frequently has to handle large transaction volumes while meeting tight performance standards. The cloud infrastructure should offer scalability options to support peak workloads, together with the computing, storage, and network capabilities needed to manage the anticipated load. Given how uneven computational demand can be, should serverless computing be considered, or perhaps a fully asynchronous system based upon an event-driven architecture?
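To make the event-driven option concrete, here is a minimal sketch of an orchestration state machine that advances a payment through its lifecycle as events arrive. The states, events, and in-memory store are assumptions for illustration; a real hub would persist state durably and consume events from a managed queue or serverless trigger.

```python
# Hypothetical event-driven orchestration: each incoming event moves a payment
# to its next state. Restricting the allowed transitions keeps message state
# consistent even when events arrive asynchronously.
TRANSITIONS = {
    ("received", "validated"): "ready_for_clearing",
    ("ready_for_clearing", "cleared"): "awaiting_settlement",
    ("awaiting_settlement", "settled"): "complete",
}

payment_state: dict[str, str] = {}   # stand-in for a durable state store

def handle_event(payment_id: str, event: str) -> str:
    current = payment_state.get(payment_id, "received")
    next_state = TRANSITIONS.get((current, event))
    if next_state is None:
        raise ValueError(f"event '{event}' not valid in state '{current}'")
    payment_state[payment_id] = next_state
    return next_state

for event in ("validated", "cleared", "settled"):
    print(handle_event("PAY-001", event))
```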
The message orchestration system must integrate smoothly with the numerous payment systems, financial institutions, payment gateways, and other stakeholders in the payment ecosystem. The cloud service provider must offer strong integration tools, such as message queues, APIs, and other middleware options, and it is crucial to assess how well the cloud platform and the payment message orchestration system can be integrated.
Orchestration systems need high availability and reliability to guarantee continuous operation. An SRE-based approach creates a systematic and proactive mindset for ensuring the dependability, availability, and performance of the orchestration components of the payment architecture. By applying SRE principles, monitoring, automation, and incident management, organisations can build orchestration systems that are resilient, scalable, and efficient, leading to smoother and more dependable payment processing. In addition, the cloud service provider must have a solid reputation for dependability, supported by redundant systems, strong infrastructure, and disaster recovery plans. Service-level agreements (SLAs) should be in place to guarantee uptime and define response and resolution timeframes in the event of any disruption.
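As a small illustration of the SRE mindset, the calculation below converts an assumed availability objective into a monthly error budget, which is the kind of figure an SLA and incident-response process would be built around. The 99.95% target is an assumption, not a recommended figure.

```python
# Illustrative error-budget calculation for an assumed availability objective.
SLO = 0.9995                       # assumed target: 99.95% monthly availability
minutes_per_month = 30 * 24 * 60

error_budget_minutes = (1 - SLO) * minutes_per_month
print(f"Allowed downtime per month: {error_budget_minutes:.1f} minutes")
# -> roughly 21.6 minutes; exceeding it would trigger the agreed incident
#    response and a pause on risky change until reliability recovers.
```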
These are some of the discussion points KPMG’s cloud and payment architects would weigh when considering component placement and design. Given the number of requirements that would have to be captured, and the number of design decisions to be made, the complexity of cloud adoption for even a single component of a payment architecture becomes apparent.