I’ve always been intrigued by how and why testers think and do the things they (we) do. Each day I get the opportunity to see this tester uniqueness played out, and it is always interesting and often thought-provoking.
Lately I’ve been thinking about the differences between new or novice testers and more experienced testers. This month I got an opportunity to witness some of those differences firsthand. A couple of new testers had seen a bug but couldn’t reliably reproduce it. To further complicate things, they could only get the issue to appear once in an eight-hour span, which drastically slowed their isolation efforts. By the time I got involved they had already spent days working on it and were mentally lost: they had gotten to the point of blindly searching with little to no direction, and their motivation was fading. While this wasn’t a project I was working on, I decided to jump in to see what another set of eyes could do. I also wanted to provide some guidance, motivation, and support.
I started by asking lots of questions about what had been done, what they were doing when the bug occurred, and what they thought might be the root cause. I’ve found that in situations like these, disciplined thought and structure can help clarify the problem and provide new direction. I also started brainstorming tests that could be run to exercise each possible root cause. Just as we finished our discussion, one of the testers announced he had reproduced the problem; the best part was that he could reproduce it at will. The group was excited that they had found repro steps, and that’s where they stopped.
But that’s where I started. While it was great to have reliable steps, they didn’t tell me what the problem was. In the end, what the group had seen was just a symptom; they had not isolated or understood the underlying failure. I began thinking through the repro steps and comparing them to what I knew about the different parts of the application. I then headed over to the developers to discuss my thoughts and ideas. I asked a number of questions about what was happening during that span of time, and I confirmed what I thought I knew about the application and how it worked. I did this to better understand the how and why of the observed symptom, with the intention of exposing not only the root cause but, more importantly, additional areas that needed to be tested.
So what did I learn from this experience? I learned that I’m not satisfied with exposing a symptom; I need to understand how it works behind the scenes. I don’t trust what’s directly in front of me, because sometimes what I’m seeing is a smokescreen or a byproduct of a different problem. As I test, I need to understand what’s happening and how it happens, with the intention of better understanding the application as a system. To do this I use multiple oracles and a number of different information-gathering methods, including strategic testing, asking good questions, and thoughtful observation.
I’ve found that having a system-level understanding of the application you are testing opens up a whole new world of testing possibilities. Knowing how it works lets me dig deeper and find important problems early.
Do you understand [figure out] how it works?