Understand [figure out] how it works

Posted June 2, 2010 by dynamoben
Categories: Software Testing

I’ve always been intrigued by how and why testers think and do the things they (we) do. Each day I get the opportunity to see this tester uniqueness played out, and it is always interesting and often thought-provoking.

Lately I’ve been thinking about the differences between new or novice testers and more experienced testers. This month I got an opportunity to witness some of those differences first-hand. A couple of new testers had seen a bug but couldn’t reliably reproduce it. To further complicate things, they could only get the issue to appear once in an eight-hour span, drastically slowing their isolation efforts. When I got involved they had already spent days working on it and were mentally lost. They had gotten to the point where they were blindly searching with little to no direction, and their motivation was gone. While this wasn’t a project I was working on, I decided to jump in to see what another set of eyes could do. I also wanted to provide some guidance, motivation, and support.

I started by asking lots of questions about what had been done, what they were doing when the bug occurred, and what they thought might be the root cause. I’ve found that in situations like these, disciplined thought and structure can help clarify the problem and provide new direction. I also started brainstorming tests that could be done to exercise each possible root cause. Just as we finished our discussion, one of the testers announced he had reproduced the problem; the best part was that he was able to reproduce it at will. The group was excited that they had found repro steps, and that’s where they stopped.

But that’s where I started. While it was great to have reliable steps, they didn’t tell me what the problem was. In the end, what the group had seen was just a symptom; they had not isolated or understood the underlying failure. I began thinking through the repro steps and comparing them to what I knew about the different parts of the application. I then headed over to the developers to discuss my thoughts and ideas. I asked a number of questions about what was happening during that span of time, and I confirmed what I thought I knew about the application and how it worked. I did this to better understand the how and why of the observed symptom, with the intention of not only exposing the root cause but, more importantly, uncovering additional areas that needed to be tested.

So what did I learn from this experience? I learned that I’m not satisfied with exposing a symptom; I need to understand how it works behind the scenes. I don’t trust what’s directly in front of me, because sometimes what I am seeing is a smokescreen or a byproduct of a different problem. As I test I need to understand what’s happening and how it happens; my intention is to better understand the application as a system. To do this I use multiple oracles and a number of different information-gathering methods, including strategic testing, asking good questions, and thoughtful observation.
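
To make the “multiple oracles” idea a bit more concrete, here is a minimal sketch of what checking a single observation against several independent oracles might look like. Everything in it is hypothetical: the names, the values, and the stand-in “legacy” implementation are invented for illustration, not taken from any real project.

    # A minimal sketch of consulting multiple oracles (all names hypothetical).

    def legacy_total(line_items):
        # Stand-in for an older, trusted implementation of the same calculation.
        return sum(line_items)

    def consult_oracles(observed_total, line_items):
        # Compare one observation against several independent sources of truth.
        return {
            # Oracle 1: internal consistency -- the total should equal its parts.
            "consistency": observed_total == sum(line_items),
            # Oracle 2: agreement with a reference implementation.
            "reference": observed_total == legacy_total(line_items),
            # Oracle 3: a plausibility bound drawn from domain knowledge.
            "plausible": 0 <= observed_total <= 1_000_000,
        }

    # Amounts are in cents to avoid floating-point noise. A disagreement
    # between oracles is a prompt to investigate, not a final verdict.
    print(consult_oracles(14117, [10000, 4117]))

The code itself isn’t the point; the point is that no single oracle gets to be “the truth,” and a disagreement between oracles is exactly the kind of symptom worth digging into.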

I’ve found that having a system-level understanding of the application you are testing opens up a whole new world of testing possibilities. Knowing how it works lets me dig deeper and find important problems early.

Do you understand [figure out] how it works?

Compatibility isn’t a defect?!?

Posted June 1, 2010 by dynamoben
Categories: Software Testing

When did compatibility become something other than a bug (defect)? Especially when it’s a mission-critical system with older devices in the field.

US Military GPS compatibility issue

Casino Bug

Posted May 29, 2010 by dynamoben
Categories: Software Testing

As testers we talk a lot about bugs and how they can (and do) reduce value in our products and projects. Well, here is an example of a bug that incorrectly increased a monetary value, which in turn decreased the software’s value.

Woman Wins $20.18 instead of $42 million thanks to a software bug

Automation is like Duct tape…

Posted May 19, 2010 by dynamoben
Categories: Software Testing

It seems to me that an awful lot of people treat automation like duct tape.

Duct tape tends to be the miracle cure for all kinds of problems (there are websites dedicated to this premise). If the bumper on your car falls off, “duct it”; if your plumbing leaks, “duct it”; if you need a prom dress, “duct it” (no joke, people make prom dresses with this stuff). The problem is that in most of these cases there is a better way to accomplish the task, and the “duct it” method just leaves a sticky residue without really fixing the problem.

That seems to be the way people talk about (or use) automation. If testing takes too long, “automate it”; if testing costs too much, “automate it”; if we want zero defects, “automate it” (really, there are people who believe that zero defects is possible). Here again, automation is not a cure-all, and in many of these situations there might be a better way. Be wary of the knee-jerk response “automate it”; you may just end up with a sticky residue that doesn’t really fix the problem.

Software Test == High Paying and Low Stress

Posted May 16, 2010 by dynamoben
Categories: Software Testing

Raise your hand if you think software test (and development for that matter) is a low-stress and high-paying career. This is a touch comical considering software testing is an infinite process; combine that with “you have 3 days to test this app to make sure it’s perfect, then it ships.” I see no stress here, do you? 😉

Apparently this writer thinks that what we do is a low-pressure, high-paying job.

Search and Rescue vs. Leading/Organizing a Test Effort

Posted May 7, 2010 by dynamoben
Categories: Software Testing

I recently said goodbye to one project and hello to another. With this change my mind shifts to how I organize and lead this new project.

For some time now I’ve been trying to find my “style” when it comes to leading and organizing the testing projects for which I’m responsible. I work in a very small team; some of that team is inexperienced and often contracted/short-term. Further, it’s not unusual for me to play the dual role of test lead and senior tester. These things alone make organizing a project a challenge, but when you mix in complex software and short timelines, things can get downright overwhelming.

For past projects I have always started strong with what I thought was a clear vision and focus, but after the twists and turns I began to lose my way and my focus. I found myself and the team testing for testing’s sake. While we were providing information for our stakeholders, it didn’t feel like we were moving forward. This has become a point of frustration for me.

So during a long drive last weekend to visit relatives, I started thinking about what I could do differently to maintain focus and purpose even through the twists and turns. I thought about the things I had read by Bach, Bolton, and Kaner. I also reflected on what Scott Barber had drawn on the back of a notepad one evening about how he plans. All of this was good information, but it didn’t give me the “ah ha” I was looking for, nor was it “my style.” I then started thinking about other industries, trades, and fields that might have good models I could adapt for this task.

Then it hit me: when I was younger I was involved in Civil Air Patrol. One of the primary roles of that organization is Search and Rescue (SAR). I spent many weekends in the woods on practice missions, and had the opportunity to be a part of real searches on a number of occasions. To be involved and take on leadership roles, I had to learn how to read maps, lay out search grids, participate in ground and air searches, and coordinate communications.

While this works for SAR, I wondered if it could be adapted for software testing. Then I remembered something that Jon Bach mentioned during a class on Session-Based Exploratory Testing. He shared that someone had come to him and said that session-based ET was a lot like Search and Rescue. So if it can work at the micro level in an exploratory session, why can’t it work to organize and lead an entire project?

So in the coming months I hope to leverage my search and rescue skills (which are rusty) to lead this project. I’ve already begun comparing SAR concepts to my test project, and many are lining up: search grids vs. coverage areas, ground team members vs. software testers, search types (air, ground) vs. breadth/depth. I’m excited to see where this leads.

Analog

Posted February 7, 2010 by dynamoben
Categories: Software Testing

As a tester I feel that I play many roles, two in particular being engineer and artist. My artist, or creative, side encourages me to experiment, create, and appreciate the world around me. My engineer side gives me the tools and know-how to make those things come to life.

This week my creative side has been out in full force, so I’ve been “feeding” it. A while back I came across a free ebook made up of short pieces about “What Matters.” I decided yesterday to give it a longer look, and I happened upon a section called “Analog,” and this part of it really got me thinking:

“Digital computing can answer (almost) any question
that can be stated precisely in language that a
computer can understand. This leaves a vast range of
real-world problems—especially ambiguous ones—
in the analog domain. In an age of all things digital,
who dares mention analog by name?”

— George Dyson

As testers, aren’t we really working in the analog domain? And isn’t it often the case that we are expected to act like we are in the digital domain? We struggle to help others understand that in our analog world problems are complex and ambiguous, and that it takes an analog device (a human) to understand these things and try to make sense of them. Further, there is no digital means that can replace the mastery of a human when it comes to interpreting the analog.

I’m starting to understand that while my work may be described as digital, my job is analog. After all, isn’t the software we work with taking an analog problem, converting it to digital for a solution, and then spitting out an analog representation of the answer? What could go wrong? 😉

The author ends with:

“Analog is back, and here to stay.”

— George Dyson

I would amend this to:

“Analog never left, just ask a tester.”

— DynamoBen

Apple’s Snow Leopard wipes user data – Part II

Posted October 15, 2009 by dynamoben
Categories: Software Testing

This was in a report today about the issue found in Snow Leopard:

“Apple’s Snow Leopard technology is apparently the root cause of the error, King said, and questions should be raised about how such a fundamental error could have slipped through the product testing process. However, he added, the primary responsibility for backing up consumer data remains where it always has been — in consumers’ all-too-often uncertain, unwilling or unready hands.”

Is this really a testing process problem? Wouldn’t this be a development process problem? After all, we testers don’t create the “error”; we discover it. Beyond that, why must blame be attached to any one group? Isn’t the quality of a software product the responsibility of everyone involved, from the inception of the idea to its release?

Had I been asked to address this matter, I would have said: “We intend to understand why we were unable to discover this error before it reached our customers. We intend to investigate ways we can prevent something like this from reaching our customers in the future, but in the near term we are working on a fix.”

I don’t think this is an issue of bad testing, nor is it an issue of bad development. All involved (managers, testers, developers, and customers) play a part in high-quality software, and all share in the responsibility when things go wrong. Each person on the project team had a hand in this “error” and needs to use it as an opportunity to learn, not to blame.

Apple’s Snow Leopard wipes user data

Posted October 12, 2009 by dynamoben
Categories: Software Testing

Apple Snow Leopard devours user account data

This is a lesson in software testing if I have ever seen one. Often we as testers have our “daily routine” for testing, which may not include a guest account. On the surface, not logging into the guest account and back out may seem like an obvious testing oversight; yet I would bet that many of us (myself included) wouldn’t have thought that doing so would wipe user data.

In short, realize that your “daily routine” could be masking critical bugs. I’m off to try the guest user account on my systems (PC, not Mac).

Testing vs. Checking – Functional Specs

Posted August 31, 2009 by dynamoben
Categories: Software Testing

Michael Bolton posted about a topic that is very close to my heart: checking software versus testing software.

Testing vs. Checking

I work in a regulated environment and this topic comes up quite a bit. Often the laws and recommended practices for testing in a regulated environment focus more on checking than testing.
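
To make the distinction concrete: a check, in this sense, is a machine-decidable confirmation of something the spec already anticipated. Here is a hypothetical sketch; the login function and its spec’d behavior are invented for illustration and don’t come from Michael’s post:

    # A “check”: a machine-decidable confirmation of a spec-derived expectation.

    def login(username, password):
        # Stand-in for the application under test (invented for this sketch).
        return username == "alice" and password == "correct-horse"

    def check_login_rejects_bad_password():
        # Checking: asserting exactly what the functional spec anticipated.
        assert login("alice", "wrong-password") is False

    check_login_rejects_bad_password()

Testing is everything the check can’t do: wondering what happens with an empty password, a locked account, or a spec that is itself wrong.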

Something really stood out to me in Michael’s post. Michael talks about functional specifications, which in a regulated environment are meant to guide and act as the sole oracle for the “testing” effort. As Michael points out, “old-school” testing proponents feel that a functional spec should be unambiguous and thus can adequately fill this role. In my experience, and obviously Michael’s, a functional spec is anything but unambiguous.

I view a functional specification very differently: I see it as a second product/deliverable that needs to be tested. So while I’m checking the software I’m testing the functional spec, and when I’m testing the software I am checking the functional spec. In fact, it’s not unusual for me or another tester to file a bug report against something in the functional spec. While this may seem extreme, since I view both as products/deliverables to be tested, I treat them equally.

Michael’s point is that testers don’t need functional specifications. I agree, and I promote this idea with the testers on my team. In fact, I often withhold the functional spec from my team. This challenges them to question the software and create a “living” functional specification in their minds. Further, I’ve found that a functional spec can poison their “tester brain” and steer them away from potentially more important issues in the application. When I do finally pass out the functional spec, I challenge them to compare their mental spec with the paper spec. I encourage them to use the paper spec not as a document that dictates what is to be tested but as a checklist to “tour” functions they hadn’t yet “seen.”

I feel that a tester needs to treat a functional spec like any other testable product. You have to understand its limits and not treat it as the only “perfect” oracle.