Transporting a JSON object has always been possible by storing the JSON literal in a record field; however, this approach is inefficient for large JSON objects. Liberator can stream change-deltas for a record, but not for a field. If only a small part of a JSON literal stored in a field changes, then Liberator has to send the whole field again.
In a forthcoming release of the Caplin Platform, JSON objects will be available as a first-class Caplin Platform data type. This will allow delivery of true JSON messages, minimise bandwidth consumption, and simplify integration of JSON data into UI frameworks such as React and Redux.
Optimising update messages
One of Liberator's core optimisation strategies is to send only the data the client requires. For example, we developed container windowing to reduce the load on web UIs when showing trade history blotters.
To optimise the sending of an update to a JSON object, Liberator will have the option of describing the update as an RFC 6902 JSON Patch. If the patch is smaller than the whole JSON object, then Liberator will send the update as a patch. This can result in considerable bandwidth savings.
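To illustrate the idea, here is a minimal sketch of applying an RFC 6902 patch and comparing its size against re-sending the whole object. This is not Liberator code; the `apply_patch` helper and the `quote` object are illustrative, and the sketch supports only the "replace" and "add" operations (a real RFC 6902 implementation also handles "remove", "move", "copy", "test", and escaped JSON Pointer tokens).

```python
import json

def apply_patch(doc, patch):
    """Apply a minimal subset of RFC 6902 operations ("replace" and "add")
    to a JSON-like dict. A sketch only, not a full implementation."""
    for op in patch:
        # Split the JSON Pointer (e.g. "/price/bid") into path segments.
        *parents, leaf = op["path"].lstrip("/").split("/")
        target = doc
        for segment in parents:
            target = target[segment]
        if op["op"] in ("replace", "add"):
            target[leaf] = op["value"]
        else:
            raise NotImplementedError(op["op"])
    return doc

# A large-ish JSON object in which only one field changes.
quote = {"symbol": "GBPUSD", "depth": [[1.27, 5e6]] * 20,
         "price": {"bid": 1.2701, "ask": 1.2703}}
patch = [{"op": "replace", "path": "/price/bid", "value": 1.2702}]

apply_patch(quote, patch)

# The patch is far smaller than re-sending the whole object.
print(len(json.dumps(patch)), "<", len(json.dumps(quote)))
```

The bandwidth saving grows with the ratio of object size to change size, which is why patches pay off most for large objects with small, frequent updates.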
Optimising the sending of large messages
JSON is often used to model large data structures. To ensure that a client remains responsive when receiving a large JSON object, Liberator will send the JSON object in chunks and interleave other messages for the client between the chunks. This ensures that receiving a single large JSON object does not delay the delivery of other messages, such as price updates.
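The chunk-and-interleave approach can be sketched as follows. This is an assumption-laden illustration, not Liberator's wire protocol: the chunk size, the `interleave` scheduler, and the reassembly step are all hypothetical, chosen only to show why interleaving keeps small updates flowing while a big payload is in flight.

```python
import json
from itertools import zip_longest

CHUNK_SIZE = 64  # bytes per chunk; an illustrative value, not Liberator's

def chunk_message(payload: bytes, chunk_size: int = CHUNK_SIZE):
    """Split one serialised JSON payload into ordered chunks."""
    return [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]

def interleave(chunks, other_messages):
    """Yield chunks of the large object interleaved with other pending
    messages, so a big payload never blocks smaller updates."""
    for chunk, other in zip_longest(chunks, other_messages):
        if chunk is not None:
            yield ("chunk", chunk)
        if other is not None:
            yield ("msg", other)

big = json.dumps({"rows": list(range(100))}).encode()
prices = [b"GBPUSD=1.2702", b"EURUSD=1.0841"]

stream = list(interleave(chunk_message(big), prices))
# Reassembly on the client: concatenate the chunks in order.
reassembled = b"".join(body for kind, body in stream if kind == "chunk")
assert json.loads(reassembled) == {"rows": list(range(100))}
```

Because the chunks arrive in order, the client can reassemble the object incrementally, while price updates slot in between chunks rather than queueing behind the whole payload.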
Features that we are looking at implementing in future include:
- Fixed-period throttling: send an object every n seconds, but only if it has changed
- StreamLink subscriptions to part of a JSON object
- Optional DataSource APIs that consume a materialised object and handle serialisation and delta calculation
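Of these, fixed-period throttling is simple enough to sketch: on each tick of a fixed-interval timer, publish the object only if it has changed since the last send. The `ThrottledPublisher` class below is hypothetical (none of these names come from the Caplin APIs), and the change check here uses a digest of a canonical serialisation; an actual implementation might compare deltas directly.

```python
import hashlib
import json

class ThrottledPublisher:
    """Fixed-period throttling sketch: on each tick (every n seconds in a
    real scheduler), publish the object only if it changed since last send."""

    def __init__(self, publish):
        self.publish = publish
        self._last_digest = None

    def tick(self, obj):
        # Canonical serialisation so key order can't cause false positives.
        digest = hashlib.sha256(
            json.dumps(obj, sort_keys=True).encode()).hexdigest()
        if digest != self._last_digest:
            self._last_digest = digest
            self.publish(obj)

sent = []
pub = ThrottledPublisher(sent.append)
pub.tick({"bid": 1.27})   # changed -> published
pub.tick({"bid": 1.27})   # unchanged -> suppressed
pub.tick({"bid": 1.28})   # changed -> published
print(len(sent))  # → 2
```

The digest comparison trades a small CPU cost per tick for never sending an unchanged object, which matters most when the publishing interval is short relative to the object's update rate.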