In production, networked application stacks need to be able to locate their core components, and the services those components provide, within the network.
Traditionally, this has meant analysing the network and service architecture in advance and writing the appropriate static configuration, so that the network is correctly ‘wired up’ when all of the components are started. It can also mean that components must be started in a particular order to avoid loss of connectivity.
The introduction of Discovery into the Caplin stack means that components and their locations no longer need to be fully defined in static configuration; they are discovered and connected to at runtime. Additional components can easily be added to the stack and existing components removed, and these changes are automatically ‘discovered’, with the affected components and their services added to, or removed from, the deployed network.
Traditional style configuration
Traditional Caplin Platform configuration requires that services and their addresses are clearly defined on both sides of a connection: on the Adapter side, and on the Liberator/Transformer side. This leads to verbose configuration files that must match at crucial points; if they do not, the service will not be set up correctly and will not be available. Even in a relatively simple deployment, several configuration files on multiple machines are required to set up a single service.
Consider the example below of two adapters and a Liberator. The adapters, FIDataSource1 and FIDataSource2, are configured to connect to the Liberator on host liberator, port 25000. Correspondingly, the Liberator is configured to expect a connection from FIDataSource1 and FIDataSource2.
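As a sketch, the two sides of this connection might be configured as follows in DataSource-style configuration. The hostname, port, and labels are taken from the example above; the exact configuration items for a given component should be checked against the Caplin DataSource configuration reference.

```
# FIDataSource1 (FIDataSource2 is configured identically, with its own label)
datasrc-local-label FIDataSource1

# Connect out to the Liberator
add-peer
    remote-addr liberator
    remote-port 25000
end-peer
```

And on the Liberator side:

```
# Liberator: listen for DataSource connections on port 25000
datasrc-port 25000

# Expect connections from both adapters, identified by label
add-peer
    remote-label FIDataSource1
end-peer

add-peer
    remote-label FIDataSource2
end-peer
```

Note that the adapter labels and the Liberator's address and port appear on both sides, which is exactly the duplication that must be kept in sync across machines.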
Discovery style configuration
Once the Discovery orchestration application is used to anchor and co-ordinate the Caplin Platform network, this configuration can be dramatically simplified.
At its simplest, configuration can be reduced to a single line: discovery-addr.
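In a component's configuration file, that might look like the single line below. The hostname is illustrative, and whether a port must be supplied alongside the address should be checked in the Discovery documentation.

```
# Connect to the Discovery server; peers and services are then
# negotiated at runtime rather than configured statically
discovery-addr my-discovery-host
```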
This relies on all data being supplied via the ‘default’ Liberator data service, with requests matched against the data that each adapter supplies.
The recommended way to configure data services with Discovery is to specify each one by name, rather than using the single default service. A named service (e.g. fi-pricing) is easier to track than a regex describing the data that the service supplies, which is what the single default service would require. Additionally, monitoring cannot be enabled unless named services are used.
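A named-service setup might be sketched as below. This is purely illustrative: the add-data-service and service-name items follow traditional DataSource syntax, and the assumption that no source group or peer labels are needed (because Discovery resolves the providers at runtime) should be verified against the Caplin Discovery documentation.

```
# Liberator: connect to Discovery...
discovery-addr my-discovery-host

# ...and declare the named service it requires; which adapters
# provide fi-pricing is resolved at runtime via Discovery
add-data-service
    service-name fi-pricing
end-data-service
```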
Even with named services, the configuration remains minimal compared to the traditional approach.
The simplified configuration that Discovery allows streamlines every Platform deployment, whether for development on a single machine or for deployment into the cloud. Adding and removing components is quick and easy: a new component needs only to be told where the Discovery server is located. Once connected to Discovery, it is seamlessly added to the network stack and its services are made available to the network.
Named-service configuration means that you retain full control of services for production deployments, while the simplified dynamic configuration makes a development setup much easier to create and to grow from a simple into a more complete network of components. This provides the flexibility to address use cases ranging from individual developers up to fully nuanced production deployments.
The configurations are both shorter and simpler, making them easier and quicker to debug as well as to deploy, so time is saved both during development and in production. Anyone new to Platform deployments can be brought up to speed more quickly, easily creating a Discovery-based local or remote stack and efficiently hooking their IDE-based Adapter into it.
A key bottom-line impact of these benefits is that fixed capital expenditure can be converted to flexible operational expenditure when migrating infrastructure to the cloud: large, up-front costs become expenditure spread out over time, which, when carefully managed, can reduce overall costs.
We will explore scaling components, and how this can help control costs, in a future blog post.