
Releasing bugs is a team effort

In modern development we have many layers of automated testing, as well as manual functional and exploratory testing. We’re shifting testing further and further left to catch things early – ideally before a line of code is written. So why do so many of the applications we use day to day still have bugs? Why aren’t we, as members of the engineering team, catching everything?

Of course, many of the bugs in production software have been caught and triaged; we simply decided not to fix them. Here, however, I want to look at how we manage to miss bugs entirely.

It takes a real team effort for a bug to get into the wild.

I’ve been talking about bugs a bit lately and know that I am not alone in taking an escape from a feature I was involved in as a personal failure. I felt this way as a developer and I still feel that way as a test engineer. However, not only is it unfair to blame any one individual; an escape should be an opportunity to learn. With that in mind, I’ve put together the series of mistakes and failures required of the entire team in order to ship a bug.

Missed when writing the story

I don’t want to dwell here, as when writing a user story you are focused on the expected behaviour, which can be found through example mapping or thinking about user workflows. That isn’t to say that you shouldn’t be conscious of these sorts of issues.

Missed during planning / refinement

Before a story is picked up by a developer, we would expect it to have been refined and reviewed, with multiple members of the team involved in looking at it. However, for an escape to happen, it is likely that the issue was missed during these sessions too.

Way to go team – you missed it.

Missed during 3As / kick off

Regardless of your way of working, hopefully there was some conversation when the story was picked up. This is the last chance to catch a bug before development starts and often one of the most detailed discussions. As well as ensuring that we understand the acceptance criteria, we really want to be thinking about the edge cases here.

Way to go amigos – you missed it.

Missed in developer testing

Any self-respecting software engineer will be testing their own work, both manually and using automated tests.

Way to go dev – you missed it.

Missed in code review

Code reviews are more than nitpicking over names or formatting. Their most important goal is to spot mistakes that the developer may have made.

Way to go reviewer – you missed it.

Missed in test phase

Whilst many people look exclusively at this stage, finding and preventing bugs isn’t the sole responsibility of the tester on a story. Nonetheless…

Way to go tester – you missed it.

Missed in showcase / sign off

Many workflows out there have some sort of sign off to say the story is done. If your workflow has such a state then clearly the PO has signed off on this buggy story.

Way to go PO – you missed it.

Missed in general use of the software by engineers

Unless you have a super fine-tuned CD pipeline, there’s a good chance that members of your team or the wider group will have used the feature.

Way to go team – you missed it.

Missed in release / sanity testing

Of course not all teams do this, depending on your workflow and deployment system, but if you do test before release, that testing hasn’t caught it either. Before you say “it was out of scope”, ask yourself why the scope doesn’t cover it and whether the testing is even worth doing.

Way to go tester – you missed it.

Way to go all – you missed it!

Escapes can and will happen. When they do, you should look to learn, not blame; and if you really do want to play the blame game, it should never target one person.


Value of a bug report

Over the years I have seen a growing preference for not reporting defects in Jira, Azure DevOps, Bugzilla, etc. and having a conversation instead.

If it is an issue within the story itself and an AC failure, then I certainly see the merit in skipping the bug report. It can be busy work to write up the behaviour, send it to the developer and have them go “ah sod – I forgot about that” and quickly fix it. As a counterpoint, though, from my time as a developer I found it easier when things were written down, as my memory recall from verbal communication isn’t the best. As a tester, unless the story comes back to test quickly, I want some kind of record.

This post is about more than pointing out that as humans we have different preferences. My concern is when entering a bug that isn’t currently planned to be fixed is considered busy work. This includes bugs found in areas unrelated to the current feature, or ones quickly reviewed together and considered too low priority. I still always want to log a bug here.

Documenting why a bug isn’t worth fixing

I’m sure we’ve all been there: a defect is spotted when you’re up against it. It is a real edge case and doesn’t have a significant user impact. The developer realises that it sits in a high-risk area of code and will take a chunk of effort to fix and test. Quickly it becomes clear that it isn’t worth doing right now, in the knowledge that not fixing it now means it probably won’t get done at all. So why enter a bug?

Think about the case where, 6 months down the line, someone else sees this for the first time. They raise it with their PO and it’s agreed as a nice fix to put in. The scope isn’t quite understood, so a developer goes away to investigate. Half a day later they come to the same conclusion as before, but they aren’t all that confident in their understanding and want a second opinion from another team. “Oh, that rings a bell. Yeah, real can of worms.”

The 15 minutes saved by skipping the write-up has really paid off here!

What about when the customer reports it?

In much the same line of thinking, what about when a rather angry customer phones tech support complaining that half the time they try saving to their shared space, they get an error and need to retry?

Tech Support haven’t heard of the issue. It isn’t in the bugs / known issues list available to them. So naturally they investigate. A chunk of time later it comes to Engineering. They try and reproduce it and confirm it is a bug. It is then scoped and fixed by a team.

The team involved in the fix decide that a bug escape review (or whatever your company calls it) should be held. How was this missed?

If only that bug had been entered, eh? Tech Support may have known about it already. The team responsible for fixing it would have known who to speak to about it.

Bugs can be useful for learning

I’ve mainly worked on large, complicated software. This means there’s tech debt and a deep history behind why some of our more complicated code exists. When developing a feature that has been around the block a few times, unless you are already familiar with it, it can be good to know about the gotchas and challenges. Similarly, when identifying the risk and scope of testing, what bugs and challenges have we seen in the past?

A searchable pool of defects that includes changeset notes and discussions of how to fix an issue, or why it isn’t fixable, can be pretty handy here! I’ve learnt about some of the nuances of the protocols used by our solution through developers’ bug comments.

I also believe that just knowing the quantity of bugs in your software is important. A zero-bug policy isn’t saying zero bugs are in the application – give me 30 minutes and I’ll prove otherwise – but that we leave zero bugs in an open state. Just as zero-bug development shouldn’t be misinterpreted as developing bug-free software, it shouldn’t be misinterpreted as “don’t bother entering bugs”!

As engineers we are all responsible for quality, but how do we gauge that if we’re not recording bugs? How can we say that we understand our software’s quality level when there is no visibility of how many bugs we have accepted?

Within the world of cyber security it is known that you can mitigate, accept, transfer or avoid vulnerabilities. The same is true of bugs. Marking a bug as won’t fix is accepting the risk and impact on a customer should they encounter it.

The next time you or someone on your team says “don’t bother entering a bug”, make sure that everyone involved in the decision knows that they are keeping quality issues to themselves and may be screwing over their colleagues down the line.

ACs passed. Able to place mug on the table. User story moved to done.


Non-technical testers are the gatekeepers of quality

Apologies for the intentionally ridiculous title, which manages to fit two of the most frustrating terms in testing into one sentence.

“Non-technical tester” is a term often used to describe a test specialist who doesn’t do automation. As I don’t write automated tests in my current role (and have never used Selenium), I guess that includes me then?

No. This is nonsense.

  • I use Wireshark to analyse network traffic, comparing against protocol documents to understand what is going on.
  • I look at the contents of crash dumps to help me understand why it crashed and to get better reproduction steps.
  • I occasionally pick up development tasks.
  • I can set up & work with complex system tests and environments.
  • And perhaps most important and applicable to many of us “non-technical testers”, I understand our software & technologies. A good tester will use their domain knowledge to find the edge cases and risks in a complex system.

Non-technical…

I wanted to bring this up because I strongly believe this label is not just insulting to those who don’t write automated tests but belittles the profession. It’s a topic for another day, but I also think developers are better placed to write the automated tests anyway.

Let’s move on.

I’ve often seen testers described as the gatekeepers of quality, but I have never agreed with it, for two reasons.

1. I am not, have never been, and don’t expect to be the gatekeeper on release

At a simple level, I am not in that position of authority, and I don’t see why I should be as a test engineer or in a QA role (be that junior, senior, lead etc). It is deeper than who calls the shots, though.

Ahead of a release, the decision on whether the product is of sufficient quality needs to be a collaborative effort between teams and roles.

I certainly prefer to take the approach that my role is to ensure that the decision on whether the quality level is high enough is adequately informed. As a test engineer I have knowledge of using the product. I’ve looked to put myself in the position of a customer and to understand the implications of known defects. Consequently I believe that my opinion on the quality level is important and typically valued, whether that is attending key meetings or simply informing the team’s representatives and providing quality data/reports etc.

2. Quality is subjective and needs balancing

What are the implications of missing a deadline? Could this mean losing a deal, or have legal implications? Further to this, is any delay to improve quality of sufficient value to the customer?

As a customer I may be OK with the app having significant alignment issues on Edge, given that I typically use Chrome and the predominant page that I use isn’t affected. However, I probably won’t be OK with my own projects missing their deadlines because our software was late.

In a similar line, I wouldn’t be happy if functionality that I regularly use has a poor locking design meaning that I have to keep hitting retry. Sure, the code might work as intended but are my needs being met?

Further to this, as a customer I would certainly take an application that provides all of the functionality that I want but needs restarting periodically over having to use half a dozen meticulously implemented applications with conflicting workflows.

Within this thought process, as test engineers we need to consider the customer, but with perhaps more balance and realism. I will try to provide advice on what I think of the quality level while putting myself in a customer’s shoes.

Bugs happen and especially in larger and more complex products, there will be known bugs in a release. Quality of software in terms of stability, bugs and functionality is a balancing act.

No one person or team should be regarding themselves as gatekeepers of quality. Instead everyone across the SDLC should be working to help enable quality.


Behaviour Driven Development as a manual test engineer

I’ve heard a lot of enthusiasm for Behaviour Driven Development and seen a number of talks on the matter. However, every time I dive into the subject, it leads into creating automated tests, and my work is primarily manual testing.

Nonetheless I am fascinated by the topic and am keen to explore how we can take some of the concepts of BDD and apply them to manual testing.

When I first started learning about BDD, it was as part of a talk on Gherkin. Given I’m not overly fond of very rigid language structures in documentation (or at least those with no scope to deviate when applicable), I didn’t initially show a lot of interest; however, over time my thinking has changed.

To start with, we are defining both the requirements and the tests at the same time. Whilst all the material I’ve seen ends up tying this to automated tests, why can’t this also apply to manual testing?

Speaking with another test engineer recently, they commented on how they found existing test specifications a better source for describing behaviour than our UCMs (when the docs exist for that product). It struck me a little: we’re writing requirements in User Stories, then updating our UCMs, then our test specs. Our unit tests also define the behaviour, although as a test engineer I am not exposed to them in our usual working processes.

BDD seems to solve this by defining the behaviour and using that as the definition of your test. Whilst sometimes a Gherkin scenario won’t fully explain what to do, in many (or most) test cases it does. I do question whether a test entitled “Log in with valid user credentials” needs 5 steps explaining how to log in, with a pre-condition of not being logged in. If the behaviour is clearly explained (especially when using more “real world” examples), that is what I’ll test against. These are my test cases.
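To illustrate, here is a minimal sketch of such a scenario (an invented example, not taken from any real test suite):

  Scenario: Log in with valid user credentials
    Given I am not logged in
    When I enter a valid username and its matching password
    Then I am logged in and taken to my dashboard

Read as a manual test, the Given is the pre-condition, the When is the action and the Then is the expected result; no numbered steps required.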

A common exercise in BDD, Example Mapping, is also very interesting. Often testing for a User Story might take the acceptance criteria and map them to a test task, with some test cases created as a result. However, using Example Mapping with the knowledge in the User Story and any other requirements docs can help us define great test cases and/or data sets, as the sketch below shows.
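As a sketch (an invented story, not the output of a real mapping session), the cards from an Example Mapping exercise might look like this:

  Story: Save a document to a shared space
    Rule: A save must not silently overwrite another user’s newer changes
      Example: Two users edit the same document; the second to save is warned
      Example: A user saves with no changes made; no warning is shown
    Rule: A failed save must be retryable without data loss
      Example: The network drops mid-save; retrying loses no content
    Question: What happens if share permissions are revoked while the document is open?

Each example is a candidate test case or data set, and the questions are exactly the edge cases and risks we want answered before testing starts.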

I’m keen to give this a shot when I return to work. Whilst it is clearly intended as a planning exercise for a team, I can see its use for an individual test engineer. I’ll even use the output to create a load of TestLink test cases, possibly giving Gherkin a shot within the description and only including steps if actually essential.