Real world benchmarking scenarios

I have blogged about benchmarking before, describing the process. Most of the benchmarking I have done on Liberator and other products has used fairly simple scenarios, testing the core capability of the server: passing messages to clients as fast as possible. The real world is different, though, so it can be useful to look at other scenarios too.

The standard scenarios I run go something like this:

10 Add clients that subscribe to X subjects.
20 Measure the latency of all messages for 30 seconds.
30 GOTO 10

This means we are measuring the steady state of updates, while no other activity is going on.
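As a sketch, that loop can be expressed as something like the following. This is a simulation only – the real harness drives actual client connections and each message carries a send timestamp, so latency is receive time minus send time; here the latencies are simply drawn from a plausible distribution so the shape of the loop is clear, and all numbers are illustrative:

```python
import random
import statistics

def measure_window(num_clients, updates_per_sec=1, window_secs=30):
    """Simulate one 30-second measurement window at steady state."""
    samples_ms = [
        # Illustrative latency draw: a base cost plus a tail that
        # grows with client count. Not real Liberator figures.
        1.0 + random.expovariate(1.0) * num_clients / 1000.0
        for _ in range(num_clients * updates_per_sec * window_secs)
    ]
    return {
        "clients": num_clients,
        "mean_ms": statistics.mean(samples_ms),
        "p99_ms": sorted(samples_ms)[int(len(samples_ms) * 0.99)],
    }

# "30 GOTO 10": add more clients and measure the steady state again.
results = [measure_window(n) for n in (1000, 2000, 3000)]
for r in results:
    print(r["clients"], round(r["mean_ms"], 2), round(r["p99_ms"], 2))
```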

We have soak tests that do a whole lot more than that, but we’re talking about benchmarks here.

It is easy to come up with more real-world scenarios: for example, clients unsubscribing and subscribing to other data, clients sending messages into the server, and clients using different features of the platform. The purpose of a benchmark is to have measurable attributes. For the standard benchmarks these are clear – latency, CPU, number of users and so on – and easy to understand: ‘Liberator can support 100,000 clients receiving 1 message/sec at under 10ms latency’ is something you can compare to other products (once a few more details, like message size, are clarified).
The problem with the real-world scenarios is that they are less comparable and less meaningful – ‘Liberator can support X clients subscribing and unsubscribing every minute, while sending messages to the server every 30 seconds…’ – it becomes fairly arbitrary. The standard benchmarks produce the kind of numbers you can ask a customer for: how many clients will you have? How fast does your data update? With the real-world examples, try asking the equivalent questions and see what you get – how often will people switch views (change subscriptions)? How often will users trade? The answers will be vague at best.

Nevertheless, I have been doing some benchmarks along these lines to see how various commonly used features impact the performance of Liberator and the platform.

Transformer is a server that sits behind Liberator and allows modules to be written to transform the data coming from sources. So I added Transformer into the chain in the benchmark environment. The tests include using each of the three types of module: C, Java and Lua. The modules themselves do very little apart from passing messages through, so they really only test the bridge between languages. Results are good, with very little latency added at all.
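This is not Transformer's actual module API – just a generic illustration of why a pass-through stage mostly measures dispatch overhead rather than any transformation work:

```python
import time

def passthrough_module(msg):
    # A pass-through module does no transformation work, so timing it
    # measures only the per-message cost of the call/bridge itself.
    return msg

def per_message_overhead_ns(n=200_000):
    # Illustrative message shape; field names are made up.
    msg = {"subject": "/FX/GBPUSD", "bid": 1.27, "ask": 1.28}
    start = time.perf_counter_ns()
    for _ in range(n):
        passthrough_module(msg)
    return (time.perf_counter_ns() - start) / n

print(f"{per_message_overhead_ns():.0f} ns per message")
```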

Trading. Clients have to send messages to Liberator to trade, so I added some scenarios that include client-to-server messages, which are echoed back so that round-trip latencies can be recorded, while also watching the usual update latency of other data.
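A minimal way to measure that kind of round trip looks like this, with a plain socket pair standing in for the client–Liberator connection (the message contents and sizes are made up for illustration):

```python
import socket
import statistics
import threading
import time

def echo_server(conn, n):
    # Stand-in for the server side: echo each trade message straight back.
    for _ in range(n):
        conn.sendall(conn.recv(64))

def measure_round_trips(n=200):
    client, server = socket.socketpair()
    t = threading.Thread(target=echo_server, args=(server, n))
    t.start()
    samples_ms = []
    for i in range(n):
        start = time.perf_counter()
        client.sendall(b"trade-%04d" % i)  # illustrative trade message
        client.recv(64)                    # wait for the echo
        samples_ms.append((time.perf_counter() - start) * 1000.0)
    t.join()
    client.close()
    server.close()
    return samples_ms

samples = measure_round_trips()
print(f"median round trip: {statistics.median(samples):.3f} ms")
```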

Containers. Liberator has the concept of a container: a meta-object that can be subscribed to with a single command. A container is a list of references to other data items, which the server subscribes you to. This is very convenient for long lists of data in grids, such as a list of bonds. It is also possible to subscribe to only a portion of a container, allowing the client to receive just what is visible on screen and nothing scrolled out of view. Items can be added to and removed from containers, with clients' subscriptions updated accordingly. This is how many clients subscribe to the majority of what is on their screen, so scenarios where subscriptions are made this way are quite important.
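The windowing idea can be sketched like this – the names and paths are made up, and this shows the concept rather than Liberator's actual container protocol:

```python
def window_delta(container, old_window, new_window):
    """Given a container (an ordered list of item references) and two
    (start, size) windows, work out which items to unsubscribe the
    client from and which to subscribe it to when the window scrolls."""
    old = set(container[old_window[0]:old_window[0] + old_window[1]])
    new = set(container[new_window[0]:new_window[0] + new_window[1]])
    return sorted(old - new), sorted(new - old)

# A grid showing 10 rows of a 100-bond container, scrolled down 5 rows:
bonds = [f"/bonds/bond{i:03d}" for i in range(100)]
unsubscribe, subscribe = window_delta(bonds, (0, 10), (5, 10))
print(unsubscribe)  # the 5 rows that scrolled off screen
print(subscribe)    # the 5 rows that scrolled into view
```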

I will probably publish some actual graphs soon, but final results are still being run.

2 thoughts on “Real world benchmarking scenarios”

  1. Hi Martin,
    I agree with you that sophisticated benchmark use-cases are more difficult to perform, and harder to explain to and be understood by customers.
    However, some simple metrics such as the rate with which a push server accepts new clients and the rate with which a push server accepts subscriptions could be comparable metrics – in the same way as latency and bandwidth usage are.
    One of our customers is using Migratory Push Server in a very large deployment where tens of thousands of users connect and disconnect every second. The users visit the web portal to check some live results and most of them then disconnect. Such customers will find it useful to have some benchmark results for the above metrics before starting to evaluate a push server.
    I expect we will include such metrics in our future benchmarks for Migratory Push Server. BTW, as I already announced in another comment, we just released the new benchmark results for Migratory Push Server version 3.5. You can find them at:

  2. Hi Mihai,
    I’ve done some tests like that in the past. My scenario was to simulate a setup with two servers where one goes down, so half your users have to switch over to the remaining server.
    Well done on your new benchmarks. I’m going to be getting hold of some new hardware soon to push Liberator past the 100k client mark.
