The final day of the conference came with another variety of great talks and also some practical sessions. I was unable to attend ALL of the sessions as I had a flight to catch back to London! But here are the write-ups for those I was able to attend.
Ola Ellenstam – Fast Feedback Teams
Feedback is important
Ola used the metaphor of a hotel shower to illustrate the problem. When we go to a hotel and want to take a shower, we often need to calibrate the temperature settings so that it’s ‘just right’. For us to do this we need a receptor, something to give us feedback and let us know how we should adapt and change the settings. So we place our hand under the running water, we get feedback as to whether we need to turn up the heat or lower it and we simply adjust the settings until we’re happy with the result – perfect warm water!
Frequency of feedback
Lots of people use GPS systems in their cars when travelling long distances. If our GPS only updated, say, once every minute, what impact would that have? We’d miss turns, possibly end up having to go back the way we came, and reach our destination a lot later than we set out to. Our estimated time of arrival would be in constant flux.
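As a toy illustration of the point (my own sketch, not from the talk), you can model how the average off-course error grows with the update interval: error accumulates between corrections and resets at each update, so on average you sit half an interval’s worth of drift off course.

```python
def average_error(update_interval_s, drift_per_s=0.5):
    """Average off-course error (in metres) for a vehicle that drifts
    at a constant rate and is corrected at every GPS update.

    Error grows linearly between updates and resets to zero at each
    one, so on average it is half the error just before an update.
    The drift rate of 0.5 m/s is an arbitrary illustrative figure.
    """
    return drift_per_s * update_interval_s / 2

# An update every second keeps us ~0.25 m off course on average;
# one update per minute leaves us ~15 m off.
print(average_error(1))   # 0.25
print(average_error(60))  # 15.0
```

The same shape applies to test feedback: the longer the gap between commits and test runs, the further you can drift before anyone notices.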
Relating this back to testing, we need to tighten our feedback loops so that we can make fixes more quickly and fix bugs at a smaller cost. This is where continuous integration fits in: your tests run on each individual code commit.
In software development and particularly in testing, we need to “remove stuff that slows you down”. This of course, is easier said than done. Sometimes it’s even the automated tests themselves that slow you down!
I found myself agreeing with a lot of the content of this talk; it did a great job of raising awareness of the problem and the need to resolve it. However, for me, it didn’t offer any big insights or techniques into how to specifically address it. But maybe that’s OK, not everything has a silver bullet – if it were that easy then it wouldn’t be such a big problem, right?
Each team needs to be able to analyse and understand what’s slowing them down and work together to come up with solutions.
Focus on the WHAT and the WHY, not the HOW
We were presented with the importance of separation between the behaviour we want to test (‘what’ and ‘why’) from the implementation (the how). Understanding the requirements of your system given specific test conditions and/or inputs will allow you to better design and architect your solution. A tester providing a set of given, when, then examples for a proposed story to a developer can make a big difference.
One of the best advantages of adopting BDD is that it helps you understand the purpose of what your application should do without struggling to find the motivation for it. We use BDD extensively at Caplin with Verifier; the implicit clarity gained from defining tests in a given, when, then format also helps when communicating test cases with business analysts. It also helps with the readability of tests, which is very important. Kittens cry when a test does not articulate its purpose.
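To give a flavour of the style, here is a given, when, then test sketched in plain Python rather than Verifier’s own syntax; the `ShoppingCart` class is a hypothetical example of mine, not something from the talk.

```python
class ShoppingCart:
    """A minimal cart used only to give the test something to exercise."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


def test_adding_an_item_updates_the_total():
    # Given an empty shopping cart
    cart = ShoppingCart()
    # When a book costing 10 is added
    cart.add("book", 10)
    # Then the total reflects the new item
    assert cart.total() == 10
```

Notice how the test name and the three comments describe *what* the behaviour should be and *why* it matters, while the implementation details stay out of the picture.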
A lot of what was said has also been echoed around the Caplin offices for a long while now; before you come up with a testing strategy for any story or project, you should always understand what exactly you are testing, why you are testing it and at what level it should be tested.
This talk followed on smoothly from the previous one by Carlos and Iván and was a lot more technical with live coding and testing. I did note that he preferred IntelliJ over Eclipse as his IDE of choice.
We went through a worked example of TDD with a Q&A game web app, which presented the user with a question and an input box to answer in. If you answered correctly, you were shown a tick and could proceed to the next generated question.
The goal of the session was to target the common problems of testing the server side and DOM manipulation, as well as the interactions between the two, using given, when, then tests.
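To give a flavour of the server-side half, here is a rough sketch of the kind of logic under test; the `Quiz` class and its method names are my guesses at the shape of the example, not the speaker’s actual code.

```python
class Quiz:
    """Hypothetical server-side model for the Q&A game."""

    def __init__(self, questions):
        # questions: list of (question_text, correct_answer) pairs
        self.questions = questions
        self.index = 0

    def current_question(self):
        return self.questions[self.index][0]

    def answer(self, attempt):
        """Return True and advance to the next question if correct.

        A correct answer on the final question does not advance,
        so current_question() stays valid.
        """
        correct = attempt == self.questions[self.index][1]
        if correct and self.index + 1 < len(self.questions):
            self.index += 1
        return correct


def test_correct_answer_advances_to_next_question():
    # Given a quiz with two questions
    quiz = Quiz([("2+2?", "4"), ("3+3?", "6")])
    # When the user answers the first question correctly
    assert quiz.answer("4") is True
    # Then the next question is presented
    assert quiz.current_question() == "3+3?"
```

In a TDD session like this one, a test of this shape would be written first and the `Quiz` implementation grown until it passes.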
The code and presentation slides are available on GitHub.
Scott Barber – The ongoing evolution of testing in agile development
Scott is viewed by many as a prominent thought leader in the area of software system performance testing and presented a brilliant keynote. I thoroughly enjoyed his presentation style: he was very energetic, pacing up and down the entire room, with brilliant wit, and his passion was there for all to see.
Scott began by poking fun at Scrum and the Agile Testing Manifesto by recalling his reaction to it as ‘DUHH! This isn’t anything new!’ – it was all common sense.
“It’s all R&D really, how can you write software which is not R&D?”
Scott then reflected on how we did development in the past and how it differs to how the industry is moving today.
Back in the day
Gerald Weinberg coded in the 1960s where:
- Measurement of performance was in nanoseconds, not seconds or minutes (or using LoadRunner)
- Teams didn’t really have deadlines because they didn’t know whether what they were working on would even be possible
- This way of working was easier for them, and faster
- Everyone took collective responsibility
- They didn’t do code reviews because a book told them to or because someone wanted to reinforce their new shiny agile certification
They were able to put people in space and come back down, alive. We put websites up, which fall over on Black Friday. What does this say about software development today?
We created our own problems
Scott discussed how a lot of the problems we have today are of our own making, from when we split testing from development:
- We separated the teams (dev and testing/QA)
- Created walls in the offices (less communication, ‘us and them’)
- We then added supervision (managers)
- Managers then needed to report back, so we introduced progress reports and burdened the teams with them
- We then introduced a dev manager and a product process manager to split the roles of the already existing managers
This was Scott’s take on the above (paraphrasing) – “We did all this for what? So that the product process manager could arrange a meeting with the dev manager to talk about a bug? Only to then have him find the developer who knows about that area of the code, get him and some more managers to sit in a room together for a few hours, and have the developer go back to his computer, test it out and tell them that it’s not reproducible. I can’t make this stuff up, I’m not that creative.”
Keeping up with the market
Scott hailed the approach taken by Facebook and said that “they are the model for world class performance”. They are “consistent, disciplined and their software is completely free. Not a single person in that company is hired as a tester. They don’t care what your title is, you come to work, you get features out and get bugs fixed”.
That’s not to say that they don’t do testing, they do. But perhaps not in the ‘traditional’ sense of what people think ‘testing’ means. They make their changes, merge their code, and if their tests and acceptance tests pass they integrate to production – we’re talking minutes, not hours.
If something goes wrong, some boss somewhere may get annoyed but they say that’s OK, because they can fix it, re-test and deploy within 10 minutes, not hours or days.
Tidbit: Facebook’s definition of a ‘stress test’ is Justin Bieber getting a new haircut – that’s what they test for, it’s caused the website to go down in the past.
Scott’s advice – how to be a ‘good tester’
- Don’t be one of those arrogant testers who believe their bug is more important than the business goals, if you stand to miss out on large revenue because of X amount of bugs then you’re doing it all wrong
- Help your team produce business valuable systems, faster and cheaper
- Be a testing expert and project ‘jack of all trades’, be able to adapt and help the team wherever necessary – not just testing