Limiting real time data

When you are loading a website, or even most fairly advanced RIAs, you are requesting data and getting a response: an HTML page, an SWF file, some JSON or XML data, and so on. How quickly that data is returned isn't generally critical; it mostly affects how responsive the application feels to the user.

Real time data is different. Something external determines when and how much data is sent to you. Yes, the user or the application is probably making some kind of subscription for that data, but it is returned asynchronously, at a rate the user may not control.

So what happens when this data starts to exceed the available bandwidth, or in some cases the CPU limits of the client PC? You are going to run into problems unless something is in place to handle these scenarios. But what can you do?


It is difficult to have a one-size-fits-all solution; how best to cope with these scenarios depends on what the data is. Take, for example, an application that subscribes to a number of financial instruments that update frequently. For an individual instrument a new price replaces the current price, so if updates arrive too fast you can add logic that doesn't send every one; if a price updates 20 times a second, a user probably doesn't need to see every tick. This is a common feature for financial data, often called conflation or throttling.
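The idea can be sketched in a few lines: keep only the latest price per instrument and flush a batch at most once per interval. This is a minimal illustration, not any particular vendor's implementation; the class and method names are made up for this example.

```python
import time

class Conflater:
    """Sketch of conflation: only the latest update per instrument
    is kept, and a batch is flushed at most once per interval."""

    def __init__(self, interval_seconds=0.25):
        self.interval = interval_seconds
        self.pending = {}          # instrument -> latest price
        self.last_flush = time.monotonic()

    def on_update(self, instrument, price):
        # A newer price simply replaces the pending one, so an
        # instrument ticking 20 times a second still costs only
        # one send per flush interval.
        self.pending[instrument] = price

    def flush_due(self):
        return time.monotonic() - self.last_flush >= self.interval

    def flush(self):
        # Hand the pending batch to the sender and start afresh.
        batch, self.pending = self.pending, {}
        self.last_flush = time.monotonic()
        return batch
```

Note that intermediate prices are discarded by design; this is only acceptable when, as for a price display, the latest value supersedes earlier ones.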

However, what if the update rate per instrument isn't actually that high, and the problem is that too many instruments are subscribed to? Conflation works on a per-instrument basis, so it becomes less useful for instruments that aren't updating frequently. If you are subscribed to a large number of slowly updating instruments, conflation will have little effect unless you conflate to an undesirable level.

In the above scenario, the number of subscriptions is the problem. If each update has to be processed by the client, or conflation is not appropriate, there are things you can do at the application level that the infrastructure cannot do for you. Perhaps the application can unsubscribe from data that isn't in view on the screen. Perhaps conflation can be applied to less important data, leaving the most important data updating as needed.
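One way to sketch that application-level logic: track what the viewport shows, unsubscribe from anything that scrolls out of view, and request conflation for the visible-but-less-important rows. The `transport` object and its `subscribe`/`unsubscribe` calls here are hypothetical placeholders for whatever messaging API the application uses.

```python
class SubscriptionManager:
    """Sketch of viewport-driven subscription management.
    The transport API is an assumption, not a real library."""

    def __init__(self, transport):
        self.transport = transport
        self.subscribed = set()

    def on_viewport_change(self, visible, important):
        # Drop anything that has scrolled out of view...
        for instrument in self.subscribed - visible:
            self.transport.unsubscribe(instrument)
        # ...and subscribe to newly visible rows, asking the server
        # to conflate the less important ones so critical prices
        # keep updating at full rate.
        for instrument in visible - self.subscribed:
            conflate = instrument not in important
            self.transport.subscribe(instrument, conflate=conflate)
        self.subscribed = set(visible)
```

The point is that only the application knows what is on screen and what matters to the user, so this decision cannot live in the infrastructure.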

Don’t just drop it!

Whatever you do, for important data you do not want to prevent data that the client is expecting to see from being sent. Crude bandwidth limiting that simply drops data is no good for most trading applications. Another problem is how such limiting is triggered. A server that tries to detect a slow-consuming client needs to measure latency; if it waits for the network stack to push back (network buffers filling up, and so on), it may already be too late to act. Latency feedback from the client is a far better trigger for limiting data or adjusting the application's behaviour.
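A feedback loop of that kind can be very simple: the client reports its measured round-trip latency, and the server widens the conflation interval when the client is falling behind, then slowly tightens it again when the client keeps up. The thresholds and factors below are illustrative, not recommendations.

```python
def adjust_interval(current_interval, measured_latency_ms,
                    target_ms=200, min_s=0.1, max_s=2.0):
    """Sketch of client-latency feedback for conflation.
    All thresholds are illustrative assumptions."""
    if measured_latency_ms > target_ms:
        # Client is falling behind: back off aggressively by
        # doubling the conflation interval, up to a ceiling.
        return min(max_s, current_interval * 2)
    # Client is keeping up: recover gradually so a brief good
    # reading does not immediately flood the client again.
    return max(min_s, current_interval * 0.9)
```

The asymmetry (back off fast, recover slowly) is deliberate: sending too much to a struggling client makes its latency worse, while recovering cautiously costs only a little freshness.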

Consider other types of application. It would probably be fine for an application showing a live Twitter stream to drop messages, as it is unlikely that seeing every tweet really matters.

So what does all this tell us? The right way to deal with too much real time data depends on the application and how it is being used. Many applications do not need to worry about this problem, or rather it isn't worth the effort to come up with a solution, but for other applications it could be critical.
