StreamLink for Browsers (SL4B) is the JavaScript library that we use to stream data from our server-side Liberator to our client-side applications. It provides a simple API for client applications to interact with the financial (or other) data that they are interested in.
StreamLink for Browsers is one of the most important libraries in our architecture, and, as with all our products, quality testing is a priority. However, in the past, acceptance testing SL4B has been a significant problem. When we rewrote SL4B from scratch as StreamLink JS, we took a new direction in acceptance testing.
The Problems with SL4B Acceptance Tests
Integration Tests != Acceptance Tests
When I first started at Caplin over two years ago, most of our acceptance tests were simply integration tests masquerading as ATs. Instead of testing whether SL4B itself behaved correctly, it was stuck into a large vertical stack and tests were written from one end to the other.
This system works. And it is not a terrible way to test your software. But it’s not great either.
Firstly, it provides a large number of failure vectors. Just one product running slowly, or a little bit of lag in the network, could make SL4B act subtly differently and fail the test. While this is important information to gather (we need to be sure that our software works correctly when combined), it is not conducive to testing StreamLink itself at an acceptance level; if an acceptance test fails, it must always be due to a bug in the product itself.
Secondly, having multiple products (6 or 7 in some of our legacy tests) interoperating for a single AT means that a failed test could be due to any number of reasons in any number of products, only one of which is the product you are actually trying to test. This is annoying because it requires the QA to know every product deeply to determine the root cause of the problem. When you have as big a stack as we do, it isn’t really plausible to expect every QA to know every product that well.
Finally, it binds the ATs to particular versions of your software. If the API of one of the products changes, then you need to spend time updating all the ATs to the new version’s API. In reality, there is no time to do the upgrade, so you keep using the old version, and after six months you quietly decide that your tests will simply stay on it. This can be absolutely crippling to development when all your ATs are bound to software that was deprecated two years ago.
Selenium is a sword with no handle
Many testing teams have fallen into the trap of writing a Selenium test, seeing how well it works 99% of the time, and then writing more and more tests. Next thing you know you have 300+ tests (seriously) and the chance of the build failing due to one “random” failure is nearly 100%. WebDriver is better, though still not perfect. And even though it is more reliable, it is seriously slow. Running through your entire suite of tests soon becomes something you can only do overnight, and that is completely useless in modern development practices.
The SL4B test suite was implemented entirely in Selenium, and while significant amounts of developer time were sunk into it over the years, we only ever succeeded in making it tolerable. This was no way to test software.
I am not saying not to use Selenium/WebDriver. It is a really good tool. But it is double-edged and must be handled with great care. Write your tests very defensively, taking into account all the ways that latency or DOM delay could hamstring them, and don’t write any more tests than you have to. Write the majority of your ATs in another technology (see below), and use WebDriver as the final verification.
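To give a concrete idea of what “defensively” means here, the sketch below uses the selenium-webdriver bindings for JavaScript. The page URL and element id are hypothetical; the important part is the pattern of explicit, bounded waits rather than assumptions about what the page has already done.

const { Builder, By, until } = require('selenium-webdriver');

async function objectValueEventuallyAppears() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('http://localhost:8080/test-page.html'); // hypothetical test page
    // Never assume the element already exists; wait for it with a bounded timeout.
    const valueCell = await driver.wait(
      until.elementLocated(By.id('object-value')), 10000); // hypothetical id
    // Never assume the data has arrived; wait for the text to become non-blank.
    await driver.wait(until.elementTextMatches(valueCell, /\S+/), 10000);
  } finally {
    await driver.quit();
  }
}

Every step that could race against the network or the DOM gets its own explicit wait with its own timeout, so a slow run degrades gracefully instead of failing “randomly”.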
Testing by Contract to the rescue!
StreamLink has two interfaces: RTTP and Client
RTTP Interface
RTTP is a well-defined protocol that applications use to communicate with the Liberator. StreamLink has an RTTP interface, to which you can attach a Liberator, be it real or a mock for testing, and with which it will communicate.
Client Interface
On the other side, we provide an interface that exposes all this pub/sub data to the web client in a structured, detailed and predictable manner. This interface would normally be consumed by a client such as Caplin Trader, but it can of course equally be used by a test mock.
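As a rough illustration of the client side, a subscription through StreamLink JS looks something like the sketch below; the factory, configuration and listener names here are approximate and should be treated as illustrative rather than as the definitive API.

// Illustrative only: configuration and listener names are approximate.
var streamLink = caplin.streamlink.StreamLinkFactory.create({
  liberator_urls: "rttp://liberator.example.com:80"
});

streamLink.subscribe("/FX/GBPUSD", {
  onRecordUpdate: function(subscription, event) {
    // Pub/sub updates arrive here in a structured, predictable form.
    console.log(event.getFields());
  }
});

streamLink.connect();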
Testing the Interfaces
In SL4B, we tested this architecture by connecting to a Liberator and writing a basic HTML front end that represented the underlying object model, which we then tested with Selenium. For StreamLink JS we mocked out both the web client and the Liberator, and then defined a StreamLink Contract to test against. This allows us to run the tests headlessly, completely bypassing Selenium and exercising the JavaScript directly.
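The wiring for a headless test is simple in outline. In the sketch below, MockLiberator and MockClient are hypothetical stand-ins for our real fixtures:

var liberator = new MockLiberator(); // sits behind the RTTP interface
var client = new MockClient();       // sits in front of the client interface

var streamLink = createStreamLink({ connection: liberator });
streamLink.subscribe("/FX/GBPUSD", client);
// Each mock records the messages that cross its interface, so a test can
// drive one side and assert on the other, entirely in plain JavaScript.

No browser, no server, no network: just three objects talking to each other in a JavaScript runtime.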
Software Contract
Consider the most basic pub/sub story: “The user should be able to subscribe to an object and get its value”.
In SL4B, this test was written in Selenium, with a lot of infrastructure toiling away in the background, and looked like this (sketched in code after the steps):
- Click Subscribe To Object Button
- Check Object Value String
- Wait for up to 30 seconds for the Object Value String to not be blank
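In modern selenium-webdriver terms, those three steps boil down to something like this (identifiers hypothetical, reusing driver, By and until from the earlier sketch):

await driver.findElement(By.id('subscribe-button')).click();
const value = await driver.findElement(By.id('object-value'));
// Poll for up to 30 seconds, hoping the value string stops being blank.
await driver.wait(until.elementTextMatches(value, /\S+/), 30000);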
This test is almost useless. It does not describe how the software works, nor would its failure leave any indication as to where an issue might be.
In contrast, for the StreamLink JS acceptance tests we wrote a StreamLink Contract in a BDD style. While we did not explicitly use a Given/When/Then syntax in the code itself, we used the same concepts: establishing a certain software state, invoking an event on the StreamLink, and then ensuring that the correct message was received.
For example:
Given "that the StreamLink is connected"
When "the user requests an object"
Then "send a request object message"
This is one of the most basic StreamLink Contracts. When the user requests an object, the StreamLink should request it from the back end. There is of course a counter-contract:
Given "that the StreamLink has requested an object"
When "the Liberator returns that object"
Then "callback the user with the object they requested"
And with this second contract, we have completed the happy path of the story “The user should be able to subscribe to objects and get their current values”.
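Expressed as headless tests, the two contracts come out roughly like the Jasmine-flavoured sketch below. We did not use these exact fixture names; MockLiberator, MockClient and createConnectedStreamLink are the same hypothetical stand-ins as before.

describe("object subscription contract", function() {
  var liberator, client, streamLink;

  beforeEach(function() {
    liberator = new MockLiberator();
    client = new MockClient();
    streamLink = createConnectedStreamLink(liberator); // Given: connected
  });

  it("sends a request message when the user requests an object", function() {
    streamLink.subscribe("/FX/GBPUSD", client);                       // When
    expect(liberator.lastMessage()).toContain("REQUEST /FX/GBPUSD");  // Then
  });

  it("calls the user back when the Liberator returns the object", function() {
    streamLink.subscribe("/FX/GBPUSD", client);                    // Given: requested
    liberator.sendObjectResponse("/FX/GBPUSD", { bid: "1.57" });   // When
    expect(client.lastUpdate("/FX/GBPUSD")).toEqual({ bid: "1.57" }); // Then
  });
});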
In contrast to the SL4B version, this test is incredibly useful:
- It explains how the StreamLink behaves: requests send a message to the Liberator, and responses from the Liberator are provided to the requester.
- It provides information about the failure. If the first test fails, you know the StreamLink isn’t requesting the data; if the second test fails, it is not handling the response correctly.
- The fixtures themselves (if well written) will clearly document what the interfaces look like.
There are a couple of “drawbacks” to this way of testing:
Firstly, it requires whoever is writing the tests and fixtures to have an in-depth technical knowledge of both interfaces, which is not always possible on every team. Many QAs are only comfortable testing the “front end” interface via something like WebDriver, and you are deluding yourself if you think you are performing Test by Contract like that.
Secondly, this does not actually ensure that the product works, only that it honors the contract as interpreted by the person writing the tests. Not that this is unique to Test by Contract. Nonetheless, it is important that these acceptance tests be tempered with judiciously written integration and smoke tests, to ensure that the product’s contract matches up with those of the products with which it will interact.
Conclusion
Test by Contract provided a massive improvement to the way we tested StreamLink, and I believe it can bring the same quality improvement to any project.