Team Strength Through Weakness

Posted October 17, 2013 by dynamoben
Categories: Software Testing

When I was a kid, an army soldier told me, “you are only as strong as your weakest man.”

Recently I was reflecting on this statement in the context of software test teams and planning. I recognize that each member on a test team has a unique skill-set. When I lead teams I try to discover not only what each member is good at (strength) but also what they are not good at (weakness).

During project planning I consider these strengths and weaknesses and try to determine the effect they might have on our testing mission. While strengths are good, I’ve found that I tend to pay special attention to each team member’s weaknesses. An individual weakness can translate into a limitation for the entire team, which could lead to a failure of the mission. That failure would be due to me overcommitting the team, functionally setting them up to fail. I try to build my plans around individual limitations and discover paths for success. If I can’t find a path for success, then I need to make my stakeholders aware of these limitations and let them know that I’m concerned about whether we can succeed in a given mission as stated.

Like most things, it has taken a fair amount of time to develop this ability. At times I struggle with it; I think most leaders do. But I recognize that this is something my teams need me to be good at. Ultimately they want to succeed, and they don’t need me setting them up for failure by ignoring our limitations and weaknesses. Beyond that, my stakeholders don’t want any surprises and expect that if we can’t accomplish the mission’s goals they will know sooner rather than later. So the better I am at this, the better we are overall.

So I’ve found that being aware of individual weaknesses and planning around them is an interesting way to build team strength. The stronger we are as a team, the stronger we are as a company.

A cook or a chef?

Posted October 14, 2013 by dynamoben
Categories: Software Testing

Periodically I gain insight or inspiration from studying unrelated areas or industries; this was the case when I was struck by the similarities between testing and cooking. On the surface they seem drastically different, but as you dig deeper you start to see the similarities.

The most basic type of cooking involves following recipes. These recipes are typically created by someone else and offer not only the items needed to create the entree, but also step-by-step instructions. The ingredients and steps working together will, in theory, create the perfect replication of the dish. But many of us know that even after following the recipe to the letter things don’t always turn out so perfectly. You can accidentally overcook or undercook the meal because of variations in your stove. The instructions could make assumptions about your level of knowledge or terminology (“simmer” being my favorite) that cause the dish to not turn out correctly. Or you may find that even after making the dish it’s not really what you had hoped it would be; it sounded better on paper.

As a tester I hear these things and immediately think of scripts and test cases. These sorts of “tests” (checks) are usually made by someone else and outline the prerequisites for the test, the steps to be followed, and, if all goes well, the “expected results.” But this method has the same problems as a recipe. Your setup may not exactly match, or you may misunderstand a term, or worse, you could expend all this time and effort only to find the test(s) aren’t as good as they seemed on paper.

Taking this analogy further, let’s consider the difference between a cook and a chef. A cook follows a recipe created by someone else, usually a head chef. A cook’s job is to recreate the same exact entree consistently, over and over again. A chef, on the other hand, is looking for new and interesting culinary experiences; they won’t settle for the same thing over and over again. A chef not only needs to understand the science of food but also the aesthetics, and they must be highly creative. Chefs understand that repeating the same thing over and over again numbs the palate and doesn’t advance their craft. To this end a chef is constantly looking for the next new food combination. They are often sought out not for what they have done in the past but for what they can do in the future.

Here again as a tester I see parallels, in this case “checkers” versus “testers.” A checker is someone who performs scripts that someone else created, with the intention of consistently repeating the test(s) looking for the expected results. But here is where things get interesting: in software testing we often aren’t looking for the same thing over and over again; we need to find something new. This is where a tester steps in. A tester, not unlike a chef, understands there is little value in doing the same things over and over again. A tester also has a high level of skill and must be knowledgeable in their craft. They often aren’t satisfied with the work they’ve done in the past and are looking for new and interesting testing opportunities. Testers are often sought out not for what they have done in the past but for what they can do in the future.

So are you a cook (checker) or a chef (tester)?

Lessons Learned from Firefighting

Posted October 7, 2013 by dynamoben
Categories: Software Testing

My father, whom I’m very proud of, was a firefighter and EMT for most of his working career. In firefighting, when you arrive on scene you need to quickly assess whether there are victims or occupants in the structure that need rescue. The first and most important duty of the responding firemen is to perform a “Primary Search” of the structure. This search is typically a wide sweep, but more often than not it doesn’t cover the entire structure.

Take a look at this article about the challenges of a Primary Search: Conducting a Primary Search

This really caught my attention because it is similar to what I face as a tester on a daily basis. Granted, what I’m dealing with is not life or death, but the environment is equally uncertain. As testers we can’t go everywhere in the software looking for problems. We must learn how to quickly assess what’s important and recognize that each situation is different. Frequently I find that what worked last time won’t work this time, which means I must be cognizant of my context as it changes.

Just as each fire scene is unique, each application, project, build, and request is unique. Like firemen we need to know how to do a Primary Search of our software knowing that it might be different each time. Project to project, week to week, day to day, we need to be flexible and willing to change our approach.

Ultimately there is no one right way to do this search, but search we must. Embrace your changing context and change with it.

Good News vs Bad News Cultures

Posted October 4, 2013 by dynamoben
Categories: Software Testing

A while back I was watching a special on TV about a large automotive manufacturer in the US. Over several years the CEO had created a “Good News” culture, in that he was only interested in hearing good news and not bad. Managers were rewarded for Green status reports and punished for anything else. This expectation and culture caused people to become dishonest, because they would hide problems to avoid the slings and arrows of a negative report.

After some time there was a change in leadership (CEO). The status reports were presented as usual, except one manager decided to present a “Red” or bad news report. After he gave his report the new CEO started clapping and thanked the manager for his honesty, which came as a shock to everyone in the room. This new CEO wanted the truth and wanted to know about problems as they were discovered so they could get fixed. He knew that hiding them could cause them to become too big to resolve, which in turn put his products and company in jeopardy. This positive response to bad news caused a change in culture, and at the following status meeting the reports went from mostly Green to a rainbow of colors that represented the truth about each product and project. This company had successfully moved from a “Good News” culture to a “Bad News” culture, a company that embraced bad news. This cultural shift made the company stronger and better.

In a “Good News” culture, stakeholders are interested in hearing about things that are working, they celebrate the lack of problems, and they are typically more focused on getting a product out based on single factors such as time or money.

In contrast, in a “Bad News” culture stakeholders are interested in hearing about things that aren’t working, they celebrate new and interesting problems, and they recognize that a problem found in-house is one that won’t be found in the field. Typically they are more focused on getting a product out based on quality, and they enjoy the challenges of resolving bad news.

In the testing world we can run into both types of cultures in our workplace, both at a micro level (individuals) and a macro level (department or company wide). Testing is already challenging, but when you combine that with a “Good News” culture it can be downright impossible. Let’s face it: software testing tends to uncover unpleasant and unflattering things, often making us the bearers of bad, or worse yet unwanted, news. If we are in a culture that only wants to hear good news, then what is the value of testing? The test effort becomes merely a confirmatory process, turning a tester into a checker. There is also an ethical dilemma that can occur: if a tester finds something the culture doesn’t want to hear, bringing it forward can be a “career limiting move,” but not doing so is irresponsible. What is a tester to do? We have a responsibility to report important problems that might detract value from our projects, but people need to want to hear the bad news.

As testers we must always be honest in our reporting, and ensure that we aren’t doing “fake” testing (i.e., writing “pass” in all the boxes because that is what people want). If the culture is causing you to constrain your testing, or you know about important problems, you have a responsibility to speak up! Further, managers need to be aware that for a tester to succeed the culture needs to be receptive to bad news; they must welcome it and celebrate it. Only then can a company succeed and truly appreciate the full value of testing.

How I interview: An exploratory approach

Posted September 25, 2013 by dynamoben
Categories: Software Testing

Recently Michael Bolton posted some Interview Questions as a thought experiment. This post prompted a bit of a discussion on Twitter about how people interview. Like most people, I’ve interviewed and been interviewed countless times, so I’ve experienced a wide array of methods, techniques, and approaches. Beyond that, I minored in Human Resources, so I’ve studied interviewing in depth. As one might expect, some interviewing methods are better than others, and often what is best depends on the specific needs of the role and company. With that said, I thought I would share my approach, which has served me well over the last 8 years; your mileage may vary.

I use a framework that I adapted (stole) from testing; I call it Session-Based Exploratory Interviewing (SBEI). It’s based on Session-Based Exploratory Testing (SBET) created by Jon and James Bach. The session-based portion of the name refers to the amount of time spent. A session is a limited amount of time, known as a time box. By design interviews are time boxed, usually between one to four hours depending on the role and the number of people involved. The exploratory portion refers to how I conduct the interview. Instead of a list of canned questions that I ask each candidate, I consider topics I would like to cover within the time box. During the interview I use these topics and keep notes about what was discussed so I can refer back to them later. After the interview is complete I meet with everyone involved and do a debrief.

So when I’m hiring someone I come up with a charter or end-state goal for the interview (it could be written down if you like). For a tester I want to understand their background, how they think, their skill-set, and whether they would be a good fit for the team. I then consider more specific areas (topics) I would like to cover during the interview: for example, interest in the job, career path, motivations, preferred type of work (structured vs. unstructured), chaos management, view of software testing, and experience (if any). Beyond that I like to cover “day in the life,” company culture, team dynamics, and how we view software testing. After the “traditional” interview I wrap up with a short testing exercise (the dice game, which James Bach taught me).

No two interviews are alike, and this is important because no two people are alike. Asking the same set of canned questions in every interview contorts the discussion and risks not discovering the one thing that might make someone a good fit. During the interview you want a natural flow so you can learn about the person, but you also need to balance that with your charter and coverage. (I don’t always cover everything; sometimes more interesting things come up which trump my charter or coverage. These are called opportunities.) Ultimately interviews are meant to be conversations, not a set of prescribed questions with numerical ratings of responses. For me, I prefer an exploratory approach.

Dice Game for Interviews:

The reason I do the dice game with candidates is that some people are so nervous during an interview that it disguises their intellect and testing ability. Doing an exercise gives me a glimpse into how they might approach testing problems without worrying about saying the right thing.

How I play

First you need to experience the dice game first hand (I suggest attending CAST or hunting down someone in the Context-Driven community). I time box the game to 15-20 minutes and preface it by saying that I don’t expect them to solve it in this short amount of time, but I would like them to work through it. After the time box has elapsed I bring in someone who acts as a Product Owner or Project Manager, and the interviewee then gives a report about their testing. The Product Owner or PM is allowed to ask questions of the tester to better understand what took place.

Why no posts?!?

Posted December 6, 2012 by dynamoben
Categories: Software Testing

So you may have noticed my posts here have been few and far between and might wonder what happened to me. Never fear, I’m still around and have been very busy in the testing world.

In May I changed jobs and am now a software test manager at WTS Paradigm, so I’ve been getting used to my new role and working with a new team. In August I was reelected to the board of directors for the Association for Software Testing and then elected President. I’m also the conference chair for AST’s eighth annual conference (CAST), which will be held this summer in Madison, WI (hope to see you there). Finally, in my “spare” time I’ve been a guest on the STP TWiST podcasts.

Because blogging takes so much time, it hasn’t happened lately; however, I have become very active on Twitter (@DynamoBen). So if you want to catch up on what I’m doing or see some of my software testing ramblings, you can find me there, 140 characters at a time.

So while I may not be posting here as much as I used to, I’m still around, and I will likely continue to post here when I’m inspired and need more than 140 characters.

My CAST 2011 Emerging Topics talk

Posted July 30, 2012 by dynamoben
Categories: Software Testing

“Applying Military Concepts to Testing to Help Cope With Change and be a Better Leader”