There is a lot more to a Single-Dealer Platform (SDP) than just a Comet server, but it is a key component, and often one that is bought in rather than developed in house. At Caplin we have dedicated much of our time to building what we believe is the Comet server with the best performance and feature set for building an SDP. Hopefully this post will explain why we believe this, and where you might have more work to do with the other products.
A Comet server allows an SDP to deliver prices and manage trading between a backend system and a browser client – although many will support other clients too. These days there are a number of choices for a Comet server, and more of them are realising that financial data is a great use case and targeting effort in that direction.
If you are choosing a Comet server for an SDP there are a number of questions you should be asking that will have a big impact on the amount of integration and other work you will have to do.
- What is the structure of the messages or objects?
- Does it support bidirectional messaging?
- Can it throttle or conflate fast updating data?
- What languages are client APIs available in?
- What languages are backend integration APIs available in?
- How does it perform with high numbers of users and update rates?
- How does it perform in terms of message latency?
- How does it integrate with permissions systems and Single Sign On (SSO) ?
- How much bandwidth does it use?
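To make one of these questions concrete: conflation means that when updates arrive faster than they can be delivered, only the latest value of each field is kept between flushes, so a burst of ticks collapses into a single message. A minimal Python sketch of the idea (the topic and field names are illustrative, not taken from any of the products below):

```python
class Conflater:
    """Keeps only the latest value per field between flushes."""

    def __init__(self):
        self.pending = {}  # topic -> {field: latest value}

    def update(self, topic, fields):
        # Merge the new tick over any pending one; stale values are dropped.
        self.pending.setdefault(topic, {}).update(fields)

    def flush(self):
        # Called at the throttle interval; returns one message per topic.
        out, self.pending = self.pending, {}
        return out

c = Conflater()
c.update("/FX/EURUSD", {"bid": 1.1000, "ask": 1.1002})
c.update("/FX/EURUSD", {"bid": 1.1001})  # overwrites the pending bid
print(c.flush())  # {'/FX/EURUSD': {'bid': 1.1001, 'ask': 1.1002}}
```

A client on a slow link then receives one up-to-date price per interval instead of every intermediate tick; note this only works if the server understands the field structure of the message, which is why opaque-message servers cannot conflate.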
Cometdaily covers some Comet servers in its Comet Maturity Guide, focusing on the low level technical details specific to Comet. This is important, but it doesn’t really help differentiate between the offerings for an SDP, and it doesn’t answer all of the questions posed above.
I have looked at a number of Comet servers over the last few years, and the following information is based on my experience of trying them out. Where I have been unable to obtain information directly, I have taken it from documentation and performance results published by the providers.
Liberator is of course a Caplin product, and many, if not all, of the other servers may be perceived as competitors in some way. I have tried with best endeavours to be impartial in my assessment, but I invite comments from any of the mentioned vendors if they believe there are inaccuracies about their server, or if they feel I have misrepresented them.
Mature C based product focused on finance and trading (Integrates with higher level APIs in Caplin Platform)
- **Messages/Objects:** Field/Value based record objects, plus higher level types such as lists of objects
- **Server APIs:** Good – Java, .Net, C/C++
- **Latency:** Excellent [1] (Published)
- **Permissions/SSO:** Good – API based / SSO integration built in
Mature Java based product and finance aware.
- **Bidirectional:** Yes – in version 3.6
- **Server APIs:** Average – Java, .Net
- **Permissions/SSO:** Good – API based / SSO integration possible
Limited to Flex clients. Not really focused on finance. Integrates well in areas outside of Comet.
- **Messages/Objects:** Object serialisation based
- **Client APIs:** Limited – Flex
- **Server APIs:** Limited – Java
- **Performance:** Average – better in Version 3 (Blog)
- **Permissions/SSO:** Simple API based
Messaging server with lots of API support. Probably suits people with the enterprise/message queue/JMS mindset.
- **Messages/Objects:** Opaque and Field/Value based
- **Conflation:** No – opaque messages
- **Server APIs:** Good – Java, .Net, C++, VBA
- **Performance:** “many thousands” (Unpublished)
- **Latency:** “ultra fast low latency” (Unpublished)
- **Permissions/SSO:** Good – API / SSO integration
Kaazing Gateway is a WebSocket style server. It is intended to be used to bridge a client to an existing socket based server. This means it is missing many of the features of the other servers, but could be attractive to someone wanting a minimal server to build on top of.
- **Messages/Objects:** WebSocket API only
- **Conflation:** No – opaque socket
- **Server APIs:** Socket interface
- **Performance:** “millions of messages per second” (Unpublished)
- **Latency:** “near zero latency” (Unpublished)
- **Bandwidth:** Dependent on application
Relatively new product that scales to very high numbers of users. Background in RMDS, so finance focused.
- **Bidirectional:** No evidence of bidirectional API
- **Server APIs:** Average – Java, PHP
- **Bandwidth:** Good – but still sends full topic and field names
The free alternative. There are plenty of other free Comet servers, but I chose to include Jetty/Cometd because it is the main server behind the Bayeux protocol.
- **Messages/Objects:** JSON message structures
- **Server APIs:** No specific server API
- **Latency:** Long Polling only – not so good for fast updating data
- **Permissions/SSO:** Simple API / SSO Integration
- **Bandwidth:** Good – but still sends full topic and field names
One area not covered above is subscription semantics. Most of the Comet servers here implement some kind of publish/subscribe model, though it is implemented in a number of ways. Pure pub/sub, as I think of it, completely de-couples producers and consumers, and when a consumer subscribes it only receives something the next time that item updates. In financial applications this can be quite limiting.
The first aspect of that, the de-coupling, is inefficient. You don’t want producers of data sending everything on to the next hop in the chain if no one is subscribed to it – it is better if the producer is told when to start and stop publishing for a specified topic. In a fixed income application there could be tens of thousands of instruments, but not all of them will be subscribed to at any one time. In pure pub/sub systems this extra layer of subscription handling is often implemented on top of the core functionality at integration time.
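The start/stop pattern described above amounts to reference-counting subscribers per topic: the producer is asked to publish only while someone is listening. A Python sketch, where the start/stop callbacks stand in for whatever the real backend integration API provides:

```python
class SubscriptionManager:
    """Tells the producer to publish a topic only while it has subscribers."""

    def __init__(self, start_publishing, stop_publishing):
        self.refcounts = {}  # topic -> number of current subscribers
        self.start = start_publishing
        self.stop = stop_publishing

    def subscribe(self, topic):
        self.refcounts[topic] = self.refcounts.get(topic, 0) + 1
        if self.refcounts[topic] == 1:
            self.start(topic)  # first subscriber: ask the producer to begin

    def unsubscribe(self, topic):
        self.refcounts[topic] -= 1
        if self.refcounts[topic] == 0:
            del self.refcounts[topic]
            self.stop(topic)   # last subscriber gone: producer can go quiet

active = []  # topics the producer is currently publishing
mgr = SubscriptionManager(active.append, active.remove)
mgr.subscribe("/BOND/XS0001")
mgr.subscribe("/BOND/XS0001")    # second subscriber: no extra start call
mgr.unsubscribe("/BOND/XS0001")
mgr.unsubscribe("/BOND/XS0001")  # last one gone: producer told to stop
```

With tens of thousands of instruments, this is the difference between the backend streaming everything all the time and streaming only the handful of instruments actually on screen.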
The second aspect is also often implemented as an afterthought on top of a messaging system. When a subscription is made to a financial instrument, most of the time you want to see the current values (the image) and then receive subsequent updates to those values. Systems that don’t implement this for you leave you open to nasty race conditions if you try to add a cache yourself.
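A minimal sketch of image-plus-updates semantics, with hypothetical topic and field names: the key point is that the image is delivered and the subscriber registered in a single step, so no update can slip through the gap between the two – which is exactly the race condition a bolted-on cache tends to suffer from.

```python
class ImageCache:
    """Serves the current image on subscribe, then streams updates."""

    def __init__(self):
        self.images = {}       # topic -> current field values
        self.subscribers = {}  # topic -> list of callbacks

    def publish(self, topic, fields):
        # Update the cached image, then fan the delta out to subscribers.
        self.images.setdefault(topic, {}).update(fields)
        for cb in self.subscribers.get(topic, []):
            cb(("update", dict(fields)))

    def subscribe(self, topic, cb):
        # Image sent and callback registered in the same step: no update
        # can arrive between the snapshot and the start of the stream.
        cb(("image", dict(self.images.get(topic, {}))))
        self.subscribers.setdefault(topic, []).append(cb)

cache = ImageCache()
cache.publish("/FX/GBPUSD", {"bid": 1.2500, "ask": 1.2503})
seen = []
cache.subscribe("/FX/GBPUSD", seen.append)
cache.publish("/FX/GBPUSD", {"bid": 1.2501})
# seen now holds the full image followed by the incremental update
```

In a real multi-threaded server the publish/subscribe pair would need a lock around the image and subscriber list, but the atomicity requirement is the same.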
Performance and latency are difficult to compare directly. Even if a head-to-head bake-off is set up, there are numerous variables involved in these kinds of tests, and some servers may fare better in particular scenarios but worse in others. For example, Migratory have clearly put some effort into supporting very high numbers of users, something we haven’t had the demand for at Caplin, so our tests have only ever gone up to 30,000 users (which is still a high number for many financial use cases). In some scenarios squeezing out every last drop of latency is the priority, whereas others need lots of users and high update rates and can accept a slightly higher, but perhaps more predictable, latency.
For a lot of applications predictable latency can be more important than the latency itself. When a price is quoted with a known latency to the recipients of that price, the latency can be taken into account to cover the risk involved. However, if the latency is unpredictable, then the highest likely latency has to be assumed when a price is quoted. Alternatively, a threshold can be introduced that blocks trading on more latent prices, which doesn’t lead to a good user experience.
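The threshold approach comes down to a simple staleness check on the quote a trade request is hitting. A Python sketch, where the 200 ms limit is purely an illustrative assumption: the more predictable the delivery latency, the tighter this limit can be set without rejecting legitimate trades.

```python
MAX_QUOTE_AGE_MS = 200  # illustrative threshold, not a recommendation

def can_trade(quote_sent_ms, now_ms, max_age_ms=MAX_QUOTE_AGE_MS):
    """Accept a trade only if the quote it references is fresh enough."""
    return (now_ms - quote_sent_ms) <= max_age_ms

print(can_trade(quote_sent_ms=1000, now_ms=1150))  # True: quote is 150 ms old
print(can_trade(quote_sent_ms=1000, now_ms=1400))  # False: quote is 400 ms old
```

With an unpredictable transport the threshold has to be wide enough to cover the worst case, which either widens the dealer's risk or rejects trades on prices the user can still see – hence the poor user experience mentioned above.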
It is possible to build an SDP with any of these products, but there are two high-level reasons why some will be better than others: features and performance. In most cases the features mean there will be a lot less work in building your SDP, although you could probably still do it given enough time and money. Performance, however, may be more of a stumbling block. You may be able to solve it by throwing more hardware at the problem, but often the nature of the problem is not perfectly scalable and you will still hit issues.
There are other aspects of Comet servers that can help with developing SDPs; I have concentrated on some key aspects that are comparable in some way, and hopefully provided some useful information.
[1] The published Liberator benchmarks only show tests where batching was configured. This gives more headroom at the expense of latency. I have recently carried out some tests with zero or much lower batching to get the lowest possible latency, with excellent results and without impacting the numbers of users/updates too much. These will be published along with a whole new set of benchmarks in a few weeks’ time.
[2] LCDS messages can be sent in a number of formats. One is XML based, which means message sizes up to 10 times the size of the best here. Using AMF cuts out the XML tags, but the messages still contain a lot of data, e.g. field names, and are still up to 5 times larger than the best here.