In traditional waterfall projects, one of the main roles of Quality Assurance was to give "QA Signoff": to decide whether the product could go to production. This expectation reflects the waterfall approach to quality, that quality is something added late in the project and "tested in."
Agile takes a different view. Quality must be maintained from the beginning of an effort. Hence, QA in Agile doesn’t have that gatekeeper role — the question is transformed from whether quality is high enough for a release to whether functionality is high enough for a release.
This is startling to many QA professionals and managers when we ask them to give up the gate keys. There comes a day when we no longer look to the QA department for a go/no-go decision based on adherence to the original specification. And QA asks, "What am I doing here?" "So is it just up to the developers to test?" "Is there a role for testing at all?"
This is the question that I hear over and over again in my Agile Testing classes. The answer is yes (obviously, since there’s a class for it), but the major change is of mindset more than toolset.
The primary purpose of testing in an agile development environment is to provide fast feedback on two questions: Did we break it? Are we done yet?
The power of the first is straightforward: the faster we know whether something is broken, the fewer the changes that need to be inspected. It’s the difference between “something that the team did this month broke this feature” and “something that Sarah did this morning broke this feature.” One is dramatically easier to deal with than the other. This leads us to value automated testing, as it is typically the fastest and most frequent of all testing tools.
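As a minimal sketch of what that "Did we break it?" feedback looks like in practice, here is a hypothetical automated check. The `search()` function stands in for the feature under test (it is not from the post); the value is that checks like this run on every commit, so a failure points at this morning's change rather than this month's.

```python
def search(records, term):
    """Hypothetical stand-in for the search feature under test."""
    return [r for r in records if term.lower() in r.lower()]

def test_search_finds_known_record():
    # A known record should come back for a case-insensitive match.
    records = ["Apple pie", "Banana bread", "Cherry tart"]
    assert search(records, "banana") == ["Banana bread"]

def test_search_handles_no_match():
    # No match should yield an empty result, not an error.
    assert search(["Apple pie"], "zebra") == []
```

The specific assertions matter less than the turnaround time: the suite runs in seconds, unattended, after every change.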
The second question, "Are we done yet?" addresses the core personality failing of most programmers: They like their job. Give them a problem to solve, and they'll happily go write code! Unfortunately, if you give them a slightly open-ended problem to solve, and maybe add some neat new technology to it, they'll go write code, and write code, and write code, and keep writing code until the food and water run out. There is no natural prompt to stop in the software itself.
Let’s look at how to use a skill from traditional testing, Risk Assessment, to create these prompts. Here’s an example of a conversation that might happen during Sprint Planning between Tony the Tester and Pauline the Programmer:
Tony: Based on the story grooming conversations we’ve already had, I know that this new search feature is both vital to the customer and has some new technology. Is that right?
Pauline: Yeah, we’re going to use the inline-semantic queries to use this vendor’s built-in approach to word-sense disamjargulation. I think that this will really jargon the jargon’s jargon with the McJargonjargonjargonjargon…. jargon.
Tony: Right. So, I’ve added a task to test the results of the search. I’ll start working up specific scenarios for that tomorrow, since it has a lot of risk. How soon can we run those tests and show it to our Product Owner?
Pauline: Well, it’ll probably be a day for the search, a day for authentication, and a half day for the results layout. We’ll be ready to demo on Thursday or so.
Tony: Ouch, Thursday’s a long time. Is there anything new about the authentication or layout stuff?
Pauline: Nah, just the standard stuff. Takes a while, you know with the other jargon jargon, jargon. And layout is always finicky.
Tony: Can you give me an early version that just displays something simple, like maybe the record IDs?
Pauline: Oh yeah. I’ll create a task that says “display basic results info, no layout” and another that says “full results layout.”
Tony: What about the authentication piece? Could that be stubbed out initially, and just implemented the next day or so?
Pauline: Well, I could dummy it up and just give you a user/pass that will work until we implement that part. I’ll create one task for “dummy login”, and a task for “remote repo” that we’ll do later.
Tony: Sounds good! I’ll add a task to test the actual login that we’ll do later. Can you just stack the dependent tasks on top of my testing one, so I know which ones to watch for at standup?
See what happened here? Tony the Tester communicated to the team what was important, and asked for early versions of the software to be built around those important features. Tony also knows that as soon as the tasks that Pauline described (and wrote on cards) are done, testing might be able to start*. And, as a result of this collaboration, the first feedback on the search results was moved up almost two days!
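The "dummy login" Pauline offers can be sketched in a few lines. This assumes a simple `authenticate(user, password)` interface; the names (`authenticate`, `DUMMY_CREDENTIALS`) are illustrative, not from the post.

```python
# Temporary stub: one hard-coded user/pass that works until the real
# authentication piece is implemented in a later task.
DUMMY_CREDENTIALS = ("tester", "letmein")

def authenticate(user, password):
    """Stub authentication so search testing can start on day one.
    The real implementation later replaces this body; call sites
    and tests against the interface don't have to change."""
    return (user, password) == DUMMY_CREDENTIALS
```

The point of the stub is sequencing, not security: it unblocks Tony's high-risk search tests immediately, while the real login lands under its own task later in the sprint.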
From story creation to sprint planning to daily standups, the best teams are constantly asking, “What should we show next? What testing can we enable next?” And that’s Test Driven Development writ large across the whole project.
*This team could adopt Acceptance Test Driven Development, turning Tony’s test tasks into BDD-style automated tests, but that’s another post. :)