Archive for the ‘Testing Lessons’ Category

Note: this is a somewhat unpolished post. There are a lot of ideas around this topic, and I’m just scratching the surface. However, in the interests of following the 80/20 rule, here are a few semi-organised thoughts on the matter…

It’s a moment that chills me to the bone. A manager wanders over to say “Oh, you know that software you tested last month? We found a bug in production – it’s currently affecting EVERYONE”. Or a couple of UAT testers running basic test cases find bugs your vaunted exploratory testing failed to detect. To misquote Darth Vader, my failure is now complete.

A missed bug is only one form of failure, and a test project provides dozens of different ways to “screw up”. The moment of failure is rarely a positive experience, but as Dan Ashby wrote not too long ago, it can be a very useful opportunity to reflect, learn and improve. Failure can also provide us with an excellent example or story to share and even teach others with (which is a handy aid to memory…).

When time permits, I like to learn. Some of my learning is externally motivated (e.g. university deadlines), but a lot of my learning is internally motivated and largely solitary (e.g. a Udemy course).

As a result of seeking new avenues of learning, I’ve become connected with the growing Australian (and international) context-driven testing community. Biased though I may be (at least in this particular circumstance), I believe getting involved with communities of like-minded individuals is possibly the best way to increase your expertise, gain plenty of encouragement and useful feedback along the way, and ultimately make a contribution to that very community.

June not so recently came to a close, and with it the AST’s latest round of the Bug Advocacy course. After completing the BBST Foundations course last year and waxing lyrical about it, I was keen to jump head-first into another of the AST’s offerings (fortunately, the course is not a physical object and I was spared a nasty bump on the cranium). I was not disappointed – four weeks of bug-tastic study, discourse and evaluation has yet again triggered fresh and challenging perspectives on bug investigation and reporting – an aspect of testing I’d always considered myself, at the very worst, “OK” at. Many a lesson learned has crept into and visibly improved my 9-to-5 work over the past few weeks.

An array of quizzes, online discussion, videos and bug reporting (yes, in this day of simplistic multiple-choice assessment, it’s quite strange that you would actually practise the course’s subject, as well as be evaluated on that work) awaited us throughout the four weeks the course ran. It was engaging to the max, and although time-consuming, I never felt I was just trying to “tick off” activities. Well… I must admit that by the end of it, I was a little worn out. If I had one small criticism of the course, it was the sheer amount of ground that’s covered in less than a month.


Posted: November 24, 2012 in Testing Lessons

Back in the misty mists of time, I came across something in the BBST Foundations course that struck a chord with me.  It was the following statement – “Karl Popper argued that experiments designed to confirm an expected result are of far less scientific value than experiments designed to disprove the hypothesis that predicts the expectation” (Slide 67, 2010 version of the BBST Foundations slide pack).

“Yes, of course!” I cried (errr, just in my head… because I’m not crazy and prone to shouting “Yes, of course!” aloud for revelations immediately applicable only to myself).  Let us aim to disprove the claims about the software (wherever those claims may come from) and “break” it more effectively (though it could be argued testers don’t break software, but that’s a topic for another day…).  And although I had unknowingly (or perhaps semi-knowingly) skirted this philosophy with my own test efforts, having it explicitly stated was a heady experience.
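To make the contrast concrete, here’s a minimal sketch in Python. Everything in it is invented for illustration – a hypothetical claim (“`apply_discount` never returns a negative price”) and a hypothetical function – but it shows how a test designed to confirm the claim tells you much less than checks designed to disprove it:

```python
# Hypothetical example: the claim under test is "apply_discount never
# returns a negative price". Function name and logic are invented
# purely for illustration.

def apply_discount(price, discount_pct):
    """Return the price after applying a percentage discount."""
    return price - price * (discount_pct / 100.0)

# Confirmatory check: one friendly input, chosen because we expect
# it to work. It passes, and we learn very little.
confirmed = apply_discount(100.0, 10) == 90.0

# Disconfirming checks: inputs chosen to try to falsify the claim,
# including boundary and beyond-boundary discounts.
counterexamples = [pct for pct in (0, 50, 100, 150)
                   if apply_discount(100.0, pct) < 0.0]

# counterexamples == [150] -- a 150% discount drives the price
# negative, disproving the claim. The confirmatory check alone
# would never have revealed this.
```

The confirmatory check and the disconfirming checks exercise the same function; the difference is purely in the tester’s intent when choosing inputs – which is exactly Popper’s point.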

Hello again.  It’s been way too long.  Life can really get in the way of maintaining a good (or bad, or otherwise – I’ll let you decide which of those this falls under) blog.

My story today is short and slightly saccharine.  It also provided a blindingly obvious lesson I was able to re-acquaint myself with.

A few weeks ago, I finished testing functionality to mask credit card numbers entered into our company’s transaction processing application. It was a pain to test (a small change thrown onto a much larger, more important pile of testing I was already in the thick of), so I tested on “auto-pilot”, threw the developer what bugs I found, re-tested the fixes and declared it good and proper. Fin – or so I thought.

This morning, the business analyst wandered up and asked a few questions about when the masking was triggered.  I replied with a confident “oh, it only happens once condition X has occurred”, to which she replied “it’s also supposed to happen with condition Y”.  My stomach sank as I saw, nestled within the relatively short requirements doc, condition Y, in all its finger-pointing condemnation.
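The masking logic itself is the easy part – the trap was coverage of the trigger conditions. Here’s a minimal sketch; the function name and masking format are invented for illustration (the post doesn’t reveal the real ones), and the real conditions X and Y stay unnamed:

```python
# Hypothetical masking helper: replace all but the last four digits
# with asterisks. Name and format are assumptions for illustration.

def mask_card_number(card_number):
    """Mask a card number, keeping only the last four digits visible."""
    digits = card_number.replace(" ", "")
    return "*" * (len(digits) - 4) + digits[-4:]

masked = mask_card_number("4111 1111 1111 1111")
# masked == "************1111"
```

The lesson isn’t in the helper – it’s that one such check is needed per documented trigger condition, so a “condition Y” buried in a short requirements doc can’t slip past an auto-pilot test pass.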