Caplin: A hands-free SDP and the testing challenge

One of the recent challenges our QA team has been facing is testing our upcoming release of a hands-free single-dealer trading platform.

Here’s a brief explanation for those of you who are unfamiliar with the concept: in an increasingly mobile working environment, one of the things our customers have asked for is the ability to trade hands-free, using technologies such as voice recognition (VR).

This is extremely useful when driving (using our iPad application, for example), or whenever your hands are otherwise occupied.

Instead, the trader simply reads out his or her trade request, receives an automated vocal offer (configurable as a male or female voice, based on personal preference) and then confirms it, again by VR.

Blink once for an RFQ, twice for a “one-click” trade

Alternatively, the user can use eye gestures (EyeG): this requires a camera that monitors the user’s eyes. The user stares at a ticket for at least 1.4 seconds (why 1.4? More about that in my next post), then blinks once for an RFQ or twice for a “one-click” trade.

If the system fails to identify the command or gesture, a message plays: “sorry, we cannot make out what you’re saying/looking at; you are being transferred to a human being.”
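
As a rough illustration of the mapping (and not the actual product code), a gesture interpreter along these lines might look like the following JavaScript sketch; the names, threshold constant and returned actions are assumptions made for the example.

    // Illustrative sketch only: names and structure are assumptions, not the real product API.
    var GAZE_DWELL_MS = 1400; // minimum stare time before blinks are counted

    function interpretGesture(gazeDurationMs, blinkCount) {
      if (gazeDurationMs < GAZE_DWELL_MS) {
        return { action: 'ignore' };          // the user was just glancing
      }
      if (blinkCount === 1) {
        return { action: 'rfq' };             // one blink: request for quote
      }
      if (blinkCount === 2) {
        return { action: 'one-click-trade' }; // two blinks: trade immediately
      }
      // Anything else triggers the apology message and a transfer to a human.
      return { action: 'transfer-to-human' };
    }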

Testing a hands-free single-dealer platform

Testing the eye gesture (EyeG) functionality was a challenge. To be as close to reality as possible, we used full-size mannequins with controllable eyelids.

These tests run in our continuous integration environment, but we had to bring in Jasmine (on top of js-testdriver) to handle the asynchronous calls required by the interface to the mannequin API.
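
To give a flavour of what this looks like, here is a minimal sketch of an asynchronous Jasmine spec using Jasmine’s done callback. The mannequin API shown is a hypothetical stand-in, stubbed trivially so the example is self-contained, rather than our actual interface.

    // Hypothetical stand-in for the mannequin interface, stubbed so the sketch
    // is self-contained; the real interface is asynchronous in the same way.
    var mannequinApi = {
      stareAt: function (ticketId, ms, cb) { setTimeout(cb, ms); },
      blink:   function (times, cb) { setTimeout(function () { cb('one-click-trade'); }, 50); }
    };

    describe('EyeG one-click trading', function () {
      it('fires a one-click trade after a 1.4 second stare and two blinks', function (done) {
        mannequinApi.stareAt('EURUSD-ticket', 1400, function () {
          mannequinApi.blink(2, function (resultingAction) {
            expect(resultingAction).toBe('one-click-trade');
            done(); // tell Jasmine the asynchronous chain has completed
          });
        });
      });
    });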

The mannequin-based approach has proved more useful than the VR testing described below, except that the tests must be run with the lights on, which meant deactivating the automatic night-time power-off of the office lights so that we could run our nightly cycles.

Challenges in testing voice recognition

So how do you run automated tests against such interfaces?

For the VR functionality, our test team had to develop an automated engine that plays pre-recorded voice commands, in a variety of voices and accents, against the application. To build the library of recordings we asked people from all teams to contribute their voices (a good opportunity to get everyone involved, no doubt).

The tests are run using js-testdriver and a third-party command-line voice activation framework that we have worked to enhance. Each testing machine is equipped with a microphone, and the testing framework has a loudspeaker facing it, so each test requires a pair of machines (although we are looking into combining these into a single box).
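
As a rough illustration, a spec in this rig might look something like the sketch below. The playRecording and fetchLastTradeRequest helpers, the recording file names and the expected trades are all made-up stand-ins for the real loudspeaker rig and application under test, stubbed here so the example is self-contained.

    // Hypothetical helpers, stubbed so the sketch is self-contained. In the lab these
    // would drive the loudspeaker machine and query the application under test.
    function playRecording(file, cb) { setTimeout(cb, 100); }
    function fetchLastTradeRequest(cb) { cb({ side: 'BUY', amount: 1000000, pair: 'EURUSD' }); }

    // Example recordings: the same command spoken by different contributors.
    var recordings = [
      { file: 'buy-1m-eurusd-english.wav',  expected: { side: 'BUY', amount: 1000000, pair: 'EURUSD' } },
      { file: 'buy-1m-eurusd-scottish.wav', expected: { side: 'BUY', amount: 1000000, pair: 'EURUSD' } }
    ];

    describe('VR trade capture', function () {
      recordings.forEach(function (rec) {
        it('recognises ' + rec.file, function (done) {
          playRecording(rec.file, function () {        // clip plays through the loudspeaker
            fetchLastTradeRequest(function (trade) {   // check what the application heard
              expect(trade.side).toBe(rec.expected.side);
              expect(trade.amount).toBe(rec.expected.amount);
              expect(trade.pair).toBe(rec.expected.pair);
              done();
            });
          });
        });
      });
    });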

The main challenge now is isolating background noise, which occasionally causes our tests to fail (on one occasion a loud phone ring resulted in a series of 1417 trades!). We may need to move our automation lab into a sound-proofed environment to solve this.

Hands-free applications: The way of the future

There is still much to be done on our voice-recognition testing framework, but we believe the investment will be well worth it as hands-free applications become the technology of choice.

More about VR and EyeG technologies can be found here.
