
Non-technical testers are the gatekeepers of quality

Apologies for the intentionally ridiculous title, which manages to fit two of the most frustrating terms in testing into one sentence.

“Non-technical tester” is a term often used to describe a test specialist who doesn’t do automation. As I don’t write automated tests in my current role (and haven’t used Selenium), I guess that includes me then?

No. This is nonsense.

  • I use Wireshark to analyse network traffic, comparing it against protocol documents to understand what is going on (a small sketch of this kind of check follows the list).
  • I look at the contents of crash dumps to help me understand why the software crashed and to get better reproduction steps.
  • I occasionally pick up development tasks.
  • I can set up & work with complex system tests and environments.
  • And perhaps most important and applicable to many of us “non-technical testers”, I understand our software & technologies. A good tester will use their domain knowledge to find the edge cases and risks in a complex system.
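
To make the Wireshark point concrete, here is a minimal sketch of the kind of scripted check I mean. It assumes the pyshark library (a Python wrapper around tshark) and a hypothetical capture file, session.pcapng; the expected ports are illustrative stand-ins for whatever the protocol document actually specifies.

```python
# A minimal sketch, not a definitive tool: assumes pyshark is installed
# and that session.pcapng is a capture previously saved from Wireshark.
import pyshark

# Hypothetical ports taken from an imaginary protocol document.
EXPECTED_PORTS = {"443", "8443"}

cap = pyshark.FileCapture("session.pcapng", display_filter="tcp")
for pkt in cap:
    # Flag traffic heading to ports the protocol document doesn't mention,
    # the kind of mismatch worth chasing up when testing against the spec.
    if pkt.tcp.dstport not in EXPECTED_PORTS:
        print(f"{pkt.sniff_time}  {pkt.ip.src} -> {pkt.ip.dst}:{pkt.tcp.dstport}")
cap.close()
```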

Non-technical…

I wanted to bring this up because I strongly believe this label is not just insulting to those who don’t write automated tests; it belittles the profession. It’s a topic for another day, but I also think developers are better placed to write the automated tests anyway.

Let’s move on.

I’ve often seen testers described as the gatekeepers of quality, but I have never agreed with it, for two reasons.

1. I am not, have never been and don’t expect to be the gatekeeper on release

At a simple level, I am not in that position of authority, and I don’t see why I should be as a test engineer or in a QA role (be that junior, senior, lead, etc.). It is deeper than who calls the shots, though.

Ahead of a release, the decision on whether the product is of sufficient quality needs to be a collaborative effort between teams and roles.

I certainly prefer to take the approach that my role is to ensure that the decision on whether the quality level is high enough is adequately informed. As a test engineer, I have knowledge of using the product: I’ve put myself in the position of a customer and I look to understand the implications of known defects. Consequently, I believe that my opinion on the quality level is important and typically valued, whether that means attending key meetings or simply informing the team’s representatives and providing quality data, reports and so on.

2. Quality is subjective and needs balancing

What are the implications of not achieving a deadline? Could it mean losing a deal, or have legal implications? Further to this, is any delay to improve quality of sufficient value to the customer?

As a customer I may be OK with the app having significant alignment issues on Edge, given that I typically use Chrome and the page I predominantly use isn’t affected. However, I probably won’t be OK with my own projects missing their deadlines because our software was late.

Along similar lines, I wouldn’t be happy if functionality that I regularly use had a poor locking design, meaning I have to keep hitting retry. Sure, the code might work as intended, but are my needs being met?

Likewise, as a customer I would certainly take an application that provides all of the functionality I want but needs restarting periodically over half a dozen meticulously implemented applications with conflicting workflows.

With this thought process in mind, as test engineers we need to consider the customer, but with balance and realism. I will try to give my advice on what I think of the quality while thinking like a customer would.

Bugs happen, and especially in larger and more complex products there will be known bugs in a release. The quality of software, in terms of stability, bugs and functionality, is a balancing act.

No one person or team should regard themselves as the gatekeeper of quality. Instead, everyone across the SDLC should be working to help enable quality.