Good News vs Bad News Cultures

Posted October 4, 2013 by dynamoben
Categories: Software Testing

A while back I watched a TV special about a large automotive manufacturer in the US. Over several years the CEO had created a “Good News” culture, in that he was only interested in hearing good news and never bad. Managers were rewarded for Green status reports and punished for anything else. This expectation and culture made people dishonest, because they would hide problems to avoid the slings and arrows of a negative report.

After some time there was a change in leadership (a new CEO). The status reports were presented as usual, except one manager decided to present a “Red,” or bad news, report. After he gave his report the new CEO started clapping and thanked the manager for his honesty, which came as a shock to everyone in the room. This new CEO wanted the truth and wanted to know about problems as they were discovered so they could get fixed. He knew that hiding them could cause them to become too big to resolve, which in turn put his products and company in jeopardy. This positive response to bad news changed the culture, and at the following status meeting the reports went from mostly Green to a rainbow of colors that represented the truth about each product and project. This company had successfully moved from a “Good News” culture to a “Bad News” culture, a company that embraced bad news. This cultural shift made the company stronger and better.

In a “Good News” culture, stakeholders are interested in hearing about things that are working, they celebrate the lack of problems, and they typically are more focused on getting a product out based on single factors such as time or money.

In contrast, in a “Bad News” culture stakeholders are interested in hearing about things that aren’t working, they celebrate new and interesting problems, and they recognize that a problem found in-house is one that won’t be found in the field. Typically they are more focused on getting a product out based on quality, and they enjoy the challenges of resolving bad news.

In the testing world we can run into both types of cultures in our workplace, both at a micro level (individuals) and a macro level (department- or company-wide). Testing is already challenging, but when you combine that with a “Good News” culture it can be downright impossible. Let’s face it: software testing tends to uncover unpleasant and unflattering things, often making us the bearers of bad, or worse yet unwanted, news. If we are in a culture that only wants to hear good news, then what is the value of testing? The test effort becomes merely a confirmatory process, turning a tester into a checker. There is also an ethical dilemma that can occur: if a tester finds something the culture doesn’t want to hear, bringing it forward can be a “career limiting move,” but not doing so is irresponsible. What is a tester to do? We have a responsibility to report important problems that might detract value from our projects, but people need to want to hear the bad news.

As testers we must always be honest in our reporting, and ensure that we aren’t doing “fake” testing (i.e., writing “pass” in all the boxes because that is what people want). If the culture is causing you to limit or constrain your testing, or you know about important problems, you have a responsibility to speak up! Further, managers need to be aware that for a tester to succeed the culture needs to be receptive to bad news; they must welcome it and celebrate it. Only then can a company succeed and truly appreciate the full value of testing.


How I interview: An exploratory approach

Posted September 25, 2013 by dynamoben
Categories: Software Testing

Recently Michael Bolton posted some Interview Questions as a thought experiment. This post prompted a bit of a discussion on Twitter about how people interview. Like most, I’ve interviewed and been interviewed countless times, so I’ve experienced a wide array of methods, techniques, and approaches. Beyond that, I minored in Human Resources, so I’ve studied interviewing in depth. As one might expect, some interviewing methods are better than others, and often what is best depends on the specific needs of the role and company. With that said, I thought I would share my approach, which has served me well over the last 8 years; your mileage may vary.

I use a framework that I adapted (stole) from testing; I call it Session-Based Exploratory Interviewing (SBEI). It’s based on Session-Based Exploratory Testing (SBET), created by Jon and James Bach. The session-based portion of the name refers to the amount of time spent: a session is a limited amount of time, known as a time box. By design interviews are time boxed, usually between one and four hours depending on the role and the number of people involved. The exploratory portion refers to how I conduct the interview. Instead of a list of canned questions that I ask each candidate, I consider topics I would like to cover within the time box. During the interview I work from these topics and keep notes about what was discussed so I can refer back to them later. After the interview is complete I meet with everyone involved and do a debrief.

So when I’m hiring someone I come up with a charter, or end-state goal, for the interview (it could be written down if you like). For a tester I want to understand their background, how they think, their skill set, and whether they would be a good fit for the team. I then consider more specific areas (topics) I would like to cover during the interview: for example, interest in the job, career path, motivations, preferred type of work (structured vs. unstructured), chaos management, view of software testing, and experience (if any). Beyond that I like to cover “a day in the life,” company culture, team dynamics, and how we view software testing. After the “traditional” interview I wrap up with a short testing exercise (the dice game, which James Bach taught me).

No two interviews are alike, and this is important because no two people are alike. Asking the same set of canned questions in every interview contorts the discussion and risks not discovering the one thing that might make a candidate a good fit. During the interview you want a natural flow so you can learn about the person, but you also need to balance that with your charter and coverage. (I don’t always cover everything; sometimes more interesting things come up which trump my charter or coverage. These are called opportunities.) Ultimately interviews are meant to be conversations, not a set of prescribed questions with numerical ratings of responses. I prefer an exploratory approach.

Dice Game for Interviews:

The reason I do the dice game with candidates is that some people are so nervous during an interview that it disguises their intellect and testing ability. Doing an exercise gives me a glimpse into how they might approach testing problems, without them worrying about saying the right thing.

How I play

First you need to experience the dice game firsthand (I suggest attending CAST or hunting down someone in the Context-Driven community). I time box the game to 15-20 minutes and preface it by saying that I don’t expect them to solve it in this short amount of time, but I would like them to work through it. After the time box has elapsed I bring in someone who acts as a Product Owner or Project Manager, and the interviewee gives a report about their testing. The Product Owner or PM is allowed to ask questions of the tester to better understand what took place.

Why no posts?!?

Posted December 6, 2012 by dynamoben
Categories: Software Testing

So you may have noticed my posts here have been few and far between and might wonder what happened to me. Never fear, I’m still around and have been very busy in the testing world.

In May I changed jobs and am now a software test manager at WTS Paradigm, so I’ve been getting used to my new role and working with a new team. In August I was reelected to the board of directors of the Association for Software Testing and then elected President. I’m also the conference chair for AST’s eighth annual conference (CAST), which will be held this summer in Madison, WI (hope to see you there). Finally, in my “spare” time I’ve been a guest on the STP TWiST podcasts.

Because blogging takes so much time it hasn’t happened lately; however, I have become very active on Twitter (@DynamoBen). So if you want to catch up on what I’m doing or see some of my software testing ramblings, you can find me there, 140 characters at a time.

So while I may not be posting here as much as I used to, I’m still around, and I will likely continue to post when I’m inspired and need more than 140 characters.

My CAST 2011 Emerging Topics talk

Posted July 30, 2012 by dynamoben
Categories: Software Testing

“Applying Military Concepts to Testing to Help Cope With Change and be a Better Leader”

Test everything…or not.

Posted May 7, 2012 by dynamoben
Categories: Software Testing

A month ago Michael Bolton sent me an article that underscores the problem with “all tests being deterministic” and “we have to cover all the functions in this application.” As I read the article, I was thinking about how a checkup or physical at a doctor’s office works, and how that compares to software testing.

Often stakeholders want testers to “test everything.” Can you imagine if, instead of checking the usual health indicators and vital signs, your doctor tried to test “everything”? Consider how long that would take, and how much it would cost. It would be ludicrous to do such a thing, especially if you were in good health.

However, the risk in not testing everything (and the primary fear of our stakeholders) is that the usual indicators and vitals don’t always turn up something in the examination room. A patient could leave and drop dead in the parking lot. So what can doctors, or in this case testers, do knowing this very real risk?

I think the key is having a medical history for your software. Having years’ worth of tribal knowledge about the application helps you decide which indicators and vitals matter. This history helps a tester perform tests that are likely to turn up something important about the application while it’s still in the building. While there is still no guarantee that the application won’t drop dead in the parking lot, it does increase the chances of finding something significant before it walks out the door.

AST newsletter article: “The Road Ahead…”

Posted January 12, 2011 by dynamoben
Categories: Software Testing

I wrote an article for the January AST Community News, you can find it here. Enjoy!

Being seen as a Tester: Not always a good thing

Posted October 14, 2010 by dynamoben
Categories: Software Testing

Let me start by saying that normally I’m proud to be seen as a software tester. I understand how challenging, rewarding, and interesting this job is. However, there is a dark side to being seen as a tester: testers can be treated differently by people who don’t understand testing or don’t see it the same way I do. This disconnect can not only devalue the testing effort, but at its worst it can put an entire project at risk.

Let’s start with how I want to be seen as a tester. I view software testing as a service, and I want to provide valuable feedback to my stakeholders in a timely manner. To do this I need to maintain a high level of project awareness and knowledge at all times. This means I need to be on the inside of a project actively participating in discussions and learning as I go. I want to be aware of what’s happening in development, marketing, customer care, and even sales. By being actively involved at this level I gain a greater understanding of the context I’m working in. I use this improved understanding to increase the value and effectiveness of the testing effort.

With that said, people often view software testing as an administrative task that happens toward the end of a project. In their minds, a tester makes sure the software works the way it was specified and fills out test documents to satisfy that expectation. A tester is typically given limited information about the project and is usually only included in discussions if it’s felt that they are relevant to testing (by their definition). This drought of information leaves the tester with little to no context, which can lead to feedback that has little to no value, which in turn reduces the overall value and effectiveness of testing.

So while I love being seen as a tester for the right reasons, I would rather not be seen as a tester for the wrong reasons. To combat this and add value to a project, I need people to think of me as a developer, marketer, support representative, or salesperson. This ensures that as conversations occur or information flows, I’m included and privy to it. By being included I don’t miss out on critical project information that may not, on the surface, seem “tester appropriate.” My philosophy is that I would rather have too much information than not enough: when in doubt, invite me or share, and then I can decide if it’s “tester appropriate.”

CAST 2010 transformations

Posted August 16, 2010 by dynamoben
Categories: Software Testing

I attended CAST this year in Grand Rapids and as usual it didn’t disappoint. For most returning CAST’ers, the reason they come back year after year is the high level of interaction, or conferring. And while that is very true of CAST, what I find interesting is the transformations that occur in a matter of two days because of this conferring.

Each year CAST attracts a number of new attendees who feel that software testing is really about being the sentry of quality. They view themselves as the gatekeepers of quality and often talk about standing in the way of shipping products. I’m always interested in seeking these people out, discussing this with them, and seeing how their opinions change by the end of the conference. Typically, by the end they have shifted their focus away from quality assurance and toward quality assistance, which is amazing.

I love the fact that CAST provides an environment where testers can quickly learn how to advance the state of our industry through interactions and informal discussions in the hallways, over dinners, and between sessions.

If you haven’t had the chance to experience CAST you really need to, especially if you’ve been testing for some time. CAST 2011 will be in Seattle, Washington, and is shaping up to be an amazing experience. Be sure to check out the promo video: http://www.youtube.com/watch?v=NpLcIFmmSF8

Firmware is software

Posted June 21, 2010 by dynamoben
Categories: Software Testing

I regularly work with hardware that interacts with software (not off-the-shelf PC hardware, but custom hardware devices). Over the years the hardware I work with has gotten more complex, which also means the firmware has gotten bigger and more complex.

What’s interesting is that some people view firmware and software as unrelated and dissimilar, especially from a testing perspective; not true. As an electronics hobbyist and software tester I can tell you that firmware is software, and software is firmware; they are one and the same. Sure, PC software can be millions of lines of code, but when you consider the size of a microprocessor, thousands of lines are the equivalent of millions. Further, gone are the days of assembly language programming: most hardware developers use C, and some are starting to use C# (I’m not sure how popular this is). So the PC and custom hardware worlds are starting to collide.

From a testing perspective I’ve found that any test that can be performed on the PC side can be performed, either directly or through light adaptation, on the firmware side. A good example is boundary and race conditions, which exist in firmware too and can create some very interesting bugs (but that’s for another day). Honestly, I can’t think of much that is different between firmware and software from a tester’s perspective. To this end I think it’s unfair to assume that since it’s “just firmware” it doesn’t need the same level of attention or somehow requires a different skill set to test. In fact, I think many of the lessons we have learned on the PC side could be of great benefit to the firmware side.

I think the firmware and software testing worlds have collided, and we need to embrace it as testers. So please give firmware the testing attention it deserves; after all, it’s software too.

Attend CAST 2010

Posted June 21, 2010 by dynamoben
Categories: Software Testing

Last year was my first CAST. I had heard countless good things from the community testers I interact with, and this, in combination with the cost savings, was the main driver for my attendance (employers like the fact that it’s half the cost of a larger conference).

Once I got there I discovered why it was so highly recommended. Sessions aren’t just a person lecturing and then you leave; you get the opportunity to ask questions and test the presentation. This takes each session beyond just a lecture and into true learning, and sometimes a debate. In addition, I discovered the best part of CAST happens after the sessions, in the hallways. I got to break bread and talk with testers I respect about my ideas, ask questions, and share war stories. I also got to interact and network with testers beyond the casual hello while passing one another in the halls. Instead of a stack of business cards, many of which would never get used, I was able to build a true “support network” which I use throughout the year.

You need to attend CAST, but be careful: it’s addictive, and you will never view conferences the same way again. 😉

Attend CAST 2010

The 5th annual Conference of the Association for Software Testing

August 2-4, 2010, Grand Rapids, Michigan, USA

“Skills in Testing”

About CAST

CAST reflects the AST’s core mission: to build community amongst scholars, practitioners, and students for the advancement of the practice of software testing. In 2010, CAST aims to leverage peer collaboration to build an enhanced understanding of how various skills influence tester effectiveness.

CAST offers a unique opportunity to learn and confer with others that simply isn’t found at other conferences. Each scheduled session allocates time for facilitated “open season” discussions that encourage participants to question and challenge the presentation. What takes place in the hallways, at receptions, and during meals and lightning talks truly sets CAST apart; for many attendees, the greatest value is derived from the opportunity to discuss and delve into the topics that matter to them.

Space is limited. Register today!

More information and Registration: www.CAST2010.org

Full Conference Pricing (non-member)

$630 by May 15th, 2010 ($280 savings)

We can’t wait to see you in Grand Rapids!