Delivering into the Cloud - Phi defines a best-practice architecture as the foundation for the GCP migration of a Tier 1 bank's Credit Risk system.
Credit Risk architecture overhaul for consolidation and cloud-enablement

A DAG-based, micro-service workflow design allows the client's Credit Risk batches to be consolidated and Cloud-enabled.

Situation

Our client, a major Tier 1 bank, had a monolithic Credit Risk batch process involving a series of applications to calculate Credit Risk exposure for its cross-asset global derivatives portfolio.

The architecture was unable to support the additional calculation burden of ever-increasing trade volumes, portfolio complexity and regulatory change.

Requirements

Phi were chosen to deliver a complete review of the current architecture and propose a new design that met three major criteria:

  • Event-driven architecture
  • Cloud-compatible
  • Immediate intra-day capability, with compatibility for future real-time delivery of results

In-scope

  • End-of-day stressed/unstressed exposure
  • Exposure / model stress testing
  • Back Testing
  • What-If processing

 

Solution

Phi provided a team of industry-leading specialists for six weeks to undertake an intensive review of the existing batch processes and deliver a clear roadmap to the implementation of a modern, DAG-based, event-driven framework that would:

  • simplify the management of the applications currently supported
  • optimise the use of compute resources
  • provide a pathway to migration to Cloud infrastructure and associated scalability
  • provide a framework for delivery of intra-day and real-time credit exposures

Results

  • We proposed a DAG-based, event-driven model that allows each process to extract data from its own data sources, perform the necessary transformations and then deliver to a common compute grid (see the workflow sketch after this list).
  • We identified the need to create a set of micro-services to perform the steps in the batch process, thereby ensuring idempotency and recoverability from error.
  • We provided a set of service definitions – for both framework services and functional services – that would be required to support the use cases. The service definitions included a specification of the functional requirements, manifest context, dependencies, events, inputs/outputs and data persistence, together with the required APIs.
  • We provided a template for the manifest describing variations on the tasks (source of data, simulation methodology, trade population, data sink, etc.); a manifest sketch also follows this list.
  • The use of a single compute grid, together with a process of prioritisation, would allow better use of the available resources and translate into an optimal design for future use of cloud resources (a prioritisation sketch follows below).
  • While the approach proposed for the workflow was clearly capable of meeting a batched intra-day requirement, the team identified an alternative approach that would deliver a near real-time capability with only minor changes to the proposed architecture.
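To make the DAG-based, event-driven model concrete, here is a minimal Python sketch. The task names, the `run_dag` helper and the inline `print` standing in for an event bus are all hypothetical illustrations, not the client's actual framework; the `completed` store shows how persisted results make replays idempotent and recoverable.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Task:
    name: str
    run: Callable[[dict], dict]          # extract / transform / deliver step
    depends_on: list[str] = field(default_factory=list)

def topological_order(tasks: dict[str, Task]) -> list[Task]:
    """Order tasks so every dependency runs before its dependents."""
    ordered: list[Task] = []
    seen: set[str] = set()
    def visit(name: str) -> None:
        if name in seen:
            return
        seen.add(name)
        for dep in tasks[name].depends_on:
            visit(dep)
        ordered.append(tasks[name])
    for name in tasks:
        visit(name)
    return ordered

def run_dag(tasks: dict[str, Task], completed: dict | None = None) -> dict:
    """Run tasks in dependency order. Passing a persisted `completed`
    store back in after a failure skips finished steps, making the
    workflow idempotent and recoverable from error."""
    results = completed if completed is not None else {}
    for task in topological_order(tasks):
        if task.name in results:          # already done on a previous run
            continue
        inputs = {d: results[d] for d in task.depends_on}
        results[task.name] = task.run(inputs)
        print(f"event: {task.name} completed")   # stand-in for an event bus
    return results

# Hypothetical micro-service steps: extract, transform, deliver to the grid.
tasks = {
    "extract_trades": Task("extract_trades", lambda _: {"trades": ["t1", "t2"]}),
    "transform": Task("transform",
                      lambda i: {"batch": i["extract_trades"]["trades"]},
                      depends_on=["extract_trades"]),
    "submit_to_grid": Task("submit_to_grid", lambda i: {"job_id": 42},
                           depends_on=["transform"]),
}
run_dag(tasks)
```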
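The manifest template mentioned above can be pictured as follows. This is a sketch only: the field names and example values are hypothetical, not the client's schema, but they show how one functional service can be re-used across variations simply by changing manifest values.

```python
from dataclasses import dataclass

@dataclass
class TaskManifest:
    task: str                          # which step in the batch this instance runs
    data_source: str                   # where input data is extracted from
    simulation_methodology: str        # e.g. standard vs. stressed simulation
    trade_population: str              # filter defining the portfolio slice
    data_sink: str                     # where results are delivered
    depends_on: tuple[str, ...] = ()   # upstream tasks that must finish first

# Two variations of the same exposure task, differing only in manifest values.
eod_exposure = TaskManifest(
    task="exposure_calc",
    data_source="trades_db.eod_snapshot",
    simulation_methodology="monte_carlo",
    trade_population="global_derivatives",
    data_sink="risk_store.eod",
    depends_on=("extract_trades", "extract_market_data"),
)
stressed_exposure = TaskManifest(
    task="exposure_calc",
    data_source="trades_db.eod_snapshot",
    simulation_methodology="monte_carlo_stressed",
    trade_population="global_derivatives",
    data_sink="risk_store.stressed",
    depends_on=("extract_trades", "extract_market_data", "apply_stress_scenario"),
)
```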
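Finally, a simple illustration of prioritisation on a shared compute grid: the sketch below pops jobs from a priority queue so that urgent intra-day requests run before routine end-of-day work. The job names and priority values are invented for illustration.

```python
import heapq

def run_prioritised(jobs: list[tuple[int, str]]) -> list[str]:
    """Pop jobs in priority order (lower number = more urgent), so
    time-critical intra-day requests run ahead of routine end-of-day
    work on the same shared grid."""
    heap = list(jobs)
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)             # a real grid would dispatch the job here
    return order

# An intra-day what-if request jumps ahead of the overnight batch.
print(run_prioritised([
    (9, "eod_backtest"),
    (5, "eod_exposure"),
    (1, "intraday_what_if"),
]))
# -> ['intraday_what_if', 'eod_exposure', 'eod_backtest']
```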

 

As a bonus, we shared the benefit of our DevOps expertise, providing specific advice on re-organising the client's delivery teams and governance processes so that multiple groups could cooperate in developing and enhancing the proposed architecture without compromising on quality.
