Real world benchmarking scenarios
on Oct 01, 2010 in QA, Real-time web by Martin Tyler
I have blogged about benchmarking before, describing the process. Most of the benchmarking I have done on Liberator and other products has used fairly simple scenarios, testing the core capabilities of the server to pass messages to clients as fast as possible. The real world is different though, so it can be useful to look at other scenarios too.
The standard scenarios I run go something like this:
10 Add clients that subscribe to X subjects.
20 Once subscriptions are done, measure the latency of all messages for 30 seconds.
30 GOTO 10
This means we are measuring the steady state of updates, while no other activity is going on.
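In code, that loop looks something like the sketch below. The class name, harness and percentile helper are all illustrative, not part of any real test tool; the point is simply that each update carries a send timestamp, latency is receive time minus send time, and results are collected over the 30-second window.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative sketch of the benchmark loop: add clients in steps, wait for
// subscriptions to settle, then measure steady-state latency for 30 seconds.
public class BenchmarkLoop {
    static final long MEASURE_MILLIS = 30_000; // 30-second measurement window

    public static void main(String[] args) {
        List<Long> latenciesMicros = new ArrayList<>();
        // ... add a step of clients and wait until all subscriptions are done ...
        // Each update carries the sender's timestamp; latency is the difference
        // between receive time and send time on a common clock.
        long sentAt = System.nanoTime();
        long receivedAt = sentAt + 2_500_000; // simulated 2.5 ms delivery
        latenciesMicros.add((receivedAt - sentAt) / 1_000);
        System.out.println("p50 latency (us): " + percentile(latenciesMicros, 50));
        // ... GOTO 10: repeat with more clients ...
    }

    // Nearest-rank percentile over the collected latencies.
    static long percentile(List<Long> values, int p) {
        List<Long> sorted = new ArrayList<>(values);
        Collections.sort(sorted);
        int idx = (int) Math.ceil(p / 100.0 * sorted.size()) - 1;
        return sorted.get(Math.max(0, Math.min(idx, sorted.size() - 1)));
    }
}
```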
We have soak tests that do a whole lot more than that, but we’re talking about benchmarks here.
It is easy to come up with more real world scenarios, for example clients unsubscribing and subscribing to other data, clients sending messages into the server, and using different features of the platform. The purpose of a benchmark is to have measurable attributes; for the standard benchmarks these are clear: latency, CPU, number of users and so on. It is also easy to understand: 'Liberator can support 100,000 clients receiving 1 message/sec at under 10ms latency' is something you can compare to other products (with a few more details, like message size, clarified).
The problem with the real world scenarios is that they are less comparable and less meaningful: 'Liberator can support X clients subscribing and unsubscribing every minute, while sending messages to the server every 30 seconds…' becomes fairly arbitrary. The standard benchmarks are the kind of numbers you can ask a customer for: how many clients will you have? How fast does your data update? With the real world examples, try asking questions and see what you get: how often will people switch views (change subscriptions)? How often will users trade? You will get answers that are vague at best.
Nevertheless, I have been doing some benchmarks along these lines to see how various commonly used features impact the performance of Liberator and the platform.
Transformer is a server that sits behind Liberator and hosts modules that transform the data coming from sources. So I added Transformer into the chain in the benchmark environment. The tests cover each of the three module types: C, Java and Lua. The modules themselves do very little apart from passing each message through, so the tests really only measure the bridge between languages. Results are good, with very little latency added at all.
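As a sketch of what such a pass-through module does, here is a minimal Java version. The TransformerModule interface and Message type are assumptions made up for illustration, not the real Caplin API; the point is that the module body does no work, so any latency the test observes comes from the bridge itself.

```java
// Illustrative pass-through Transformer module. The interfaces below are
// assumptions for the sketch, not the real Caplin Transformer API.
interface Message {
    String subject();
    String fields();
}

interface TransformerModule {
    Message transform(Message in);
}

// The benchmark modules do almost nothing: each message is handed straight
// back, so the test isolates the cost of the language bridge itself.
class PassThroughModule implements TransformerModule {
    @Override
    public Message transform(Message in) {
        return in; // no transformation: any latency added is bridge overhead
    }
}
```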
Trading. Clients have to send messages to Liberator to trade, so I added scenarios that include client-to-server messages, which are echoed back so that round-trip latencies can be recorded, while still measuring the usual update latency of other data.
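The round-trip measurement can be sketched like this; the payload format and helper are invented for illustration. Stamping the message on the client and reading the echo back on the same clock avoids any clock-synchronisation problem between client and server.

```java
// Illustrative sketch of round-trip latency measurement for trade messages.
public class RoundTrip {
    // Parse the send timestamp out of an echoed payload of the form "trade|<nanos>".
    static long roundTripNanos(String echoedPayload, long receivedAtNanos) {
        long sentAtNanos = Long.parseLong(echoedPayload.split("\\|")[1]);
        return receivedAtNanos - sentAtNanos;
    }

    public static void main(String[] args) {
        long sentAt = System.nanoTime();
        String payload = "trade|" + sentAt; // client stamps the outgoing message
        String echoed = payload;            // the server echoes it back unchanged
        long rtt = roundTripNanos(echoed, System.nanoTime());
        System.out.println("round trip (us): " + rtt / 1_000);
    }
}
```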
Containers. Liberator has the concept of a container: a meta object that can be subscribed to with a single command. A container is a list of references to other data items, which the server subscribes you to. This is very convenient for long lists of data in grids, such as a list of bonds. It is also possible to subscribe to only a portion of a container, so the client receives only what is visible on screen and nothing scrolled off screen. Items can be added to and removed from containers, with the clients' subscriptions updated accordingly. This is how many clients subscribe to the majority of what is on their screen, so scenarios where subscriptions are made this way are quite important.
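A minimal sketch of the windowing idea, with an illustrative Container class (not the real Liberator API): the server holds the full list of references, and the client's subscription covers only the rows currently on screen.

```java
import java.util.List;

// Illustrative container with a windowed subscription. The Container class is
// made up for this sketch; the real Liberator container is a server-side object.
class Container {
    private final List<String> subjects; // references to other data items

    Container(List<String> subjects) {
        this.subjects = subjects;
    }

    // The subjects the server would subscribe a client to for the rows
    // currently visible on screen; anything scrolled off screen is excluded.
    List<String> window(int firstVisible, int rows) {
        int from = Math.max(0, firstVisible);
        int to = Math.min(subjects.size(), from + rows);
        return subjects.subList(from, to);
    }
}
```

Scrolling the grid then just moves the window, and adding or removing an item from the container shifts the client's subscriptions without the client having to resubscribe to everything.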
I will probably publish some actual graphs soon, but the final test runs are still in progress.