Agile Testing and BDD Exchange 2011, on Dec 05, 2011, in QA by Mike Salsbury
Summary of all talks
There were seven talks with 15-minute intervals between each. The event was sold out, with around 125 attendees. In general the first five talks seemed the most relevant for us, in particular the first and last of the morning and the first talk of the afternoon. The podcasts for all the sessions are available at http://skillsmatter.com/event/agile-testing/agile-testing-bdd-exchange-2011 . A brief overview:
- Driving requirements from business value – Chris Matts
- BDD as it’s meant to be done – Matt Wynne
- Evolving the big picture: maps of living documentation
- On what testers and developers can learn from each other – David Evans
- Where exploration and automation meet: getting the most from automated functional tests – Andy Kemp
- Specifying user interaction – Lasse Koskela
Great talk, with more questions than answers. It emphasized that our current favourite metrics, defect counts and code coverage, are by no means the best indicators of quality in our code, or even of where our testing efforts should be focused. Story and effect maps were put forward as a good starting point for alternative visualizations of quality: mapping risk against features within stories can give a better indication of where to focus testing effort. See visualizingquality.org and gojko.net. Also, on mind maps of the important stuff, see http://bit.ly/accMatrix for an ACC (Attribute, Component, Capability) map. Also consider SpecFlow.
This was a more business-analysis-focused talk about what kind of project we are involved in. In particular, does our project increase revenue, protect revenue, reduce costs, or avoid costs? A project can also be characterised on a quartered graph (the purpose alignment model, Niel Nickolaisen) as a Partner; Invest and Excel; Who Cares?; or Good Enough project. The value is in the output of projects, so look at the outputs and analyse them to know what features need to be pulled out. This is an example of Feature Injection (as in a Dan North slide): hunt the value; inject the features; break the model. The outputs equal the requirements, and the inputs are the dependencies that enable those requirements. Hence the teabag analogy ("What do you want to drink?" "A teabag"): the teabag is an input, not the output you actually want. Strategy is about what you are not doing now.
Using a Gherkin specification and Cucumber to hunt for your domain model. Once the domain model is found and documented via Cucumber tests, those tests can be made to pass by creating an application driver layer. This layer can then be used with or without a GUI to test the underlying application. The session was a brief introduction and book plug (The Cucumber Book, @mattwynne) followed by a pair-programming session (with the audience as the pair), finishing with an explanation of what we had just done. This included creating a quick web GUI with Capybara and Sinatra to demonstrate that the Cucumber tests, written against the application driver layer, could easily switch in a web GUI and still test the whole application without changing any of the underlying tests. All Ruby based.
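The driver-layer idea above can be sketched in a few lines of Ruby. This is a minimal illustration, not code from the talk: the class and method names (`DirectAccountDriver`, `deposit`, `balance_of`) are invented for the example. Cucumber step definitions would talk only to the driver interface, so the same scenarios could run against the domain model directly or, by swapping in a Capybara-backed driver, through the web GUI.

```ruby
# Hypothetical driver layer: step definitions call only this interface,
# so scenarios stay unchanged when the driver implementation is swapped.

# Direct, in-process driver exercising the domain model.
class DirectAccountDriver
  def initialize
    @balances = Hash.new(0)  # default balance of 0 per account
  end

  def deposit(account, amount)
    @balances[account] += amount
  end

  def balance_of(account)
    @balances[account]
  end
end

# A web driver would expose the same methods but drive the GUI instead,
# e.g. with Capybara (sketch only):
#
# class WebAccountDriver
#   def deposit(account, amount)
#     visit "/accounts/#{account}"
#     fill_in "amount", with: amount
#     click_button "Deposit"
#   end
# end
#
# A Cucumber step definition would then be driver-agnostic:
#
# When(/^I deposit (\d+) into "(.*)"$/) do |amount, account|
#   driver.deposit(account, amount.to_i)
# end

driver = DirectAccountDriver.new
driver.deposit("savings", 100)
driver.deposit("savings", 50)
puts driver.balance_of("savings")  # => 150
```

The point of the pattern is that only the driver knows how the application is reached; everything above it (Gherkin features, step definitions) is reusable across GUIs.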
A good and entertaining talk on how testers and developers interact, and what they should remember when talking to each other: they are looking at the same things from different perspectives. Code that isn't tested doesn't work; don't prioritise what not to test. Develop and test features in that order. A bug report is only an opinion. The value of a test is in the speed with which it allows us to take action. Elisabeth Hendrickson: "Trying to test the depths of the code thru the UI is like peering through the shower head to examine pipes in the basement." (http://agilesoftwarequalities.blogspot.com/2009/08/quotes-from-twitter.html ). ATDD slows down development in the same way that passengers slow down a bus.
A good premise for a talk, but the technology and examples did not help the execution. The premise was that you can use elements of existing automated tests as the starting point for further exploratory testing. In particular, ThoughtWorks Studios' Twist was used to show how steps from existing automated tests can be dragged and dropped to create a new ad hoc testing sequence. A manual step can be added so that the automation pauses at that point and manual exploratory testing can proceed from there; if required, a comment can be added to record what exploratory testing took place, which can be reviewed later in the logs. The problem was that the example used was too short and, as someone in the audience pointed out, the same exploratory testing could have been done much more quickly without the automated steps. A better example would have been to automate the setup of a complex scenario, with checks along the way, and then begin exploratory testing from that state: the automated steps do the work of getting the application into a particular state, and exploration starts from there. Talking through such a setup and running some exploratory examples would have better supported the premise that re-using existing automated functional tests can inform and facilitate further exploratory testing.
The importance of having UX specialists, and how their input can be managed within an Agile process. This was mainly an account of their experiences introducing UX specialists and combining those specialists' more waterfall-style processes with the development team's Agile processes. Jeff Patton's story maps again. Do UX for one part of the screen at a time rather than the whole thing all at once. Remember how long even re-skinning an existing application can take, and allow suitable time for it. Four points:
- Agree on how important UX is to you
- If UX matters, start that work early
- Include the whole team in UX work
- Keep an eye on the next game
Overall a great day full of informative speakers across a range of stimulating topics relevant to Agile Testing.