Categories
Experience Reports Guide

Using dump files to guide testing

You don’t need to understand code to make use of dump files.

One tool that I’ve frequently used throughout my testing career (and also development) is WinDbg. I was a little surprised when I realised that very few other people use it so I thought that I’d share a little about why I use it and how to get going.

What can you do?

  • See the code path in a crash dump
  • View data in memory
  • View threads that are running when software is in a hang
  • Many more things that I’ve yet to try

This can be especially useful if you’re tasked with reproducing a crash reported by a customer and (as is unfortunately common) they say “I wasn’t doing anything”.

What you’ll need:

  • WinDbg or WinDbg Preview
  • Access to symbols files for your software (developers can probably help set you up).
  • A dump file that you want to look at (more on this later)

This is part of Debugging Tools for Windows. You can download it for free from Microsoft. There’s a newer “Preview” version that is quite neat, plus the older one, which ships as part of the Windows 10 SDK and which I’m more accustomed to using. Both are linked from here:

https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/debugger-download-tools

If the link is broken, Google for WinDbg and you should find it.

Obtaining dump files

Hopefully your software outputs crash dumps, but if not, you can add some registry keys to ensure that they are generated in a known location. Even if your software does create minidumps, you may value full dumps more:

  1. Open regedit and access: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps
  2. Create the following:
    1. DumpFolder – REG_EXPAND_SZ – %LOCALAPPDATA%\CrashDumps
    2. DumpCount – REG_DWORD – 5
    3. DumpType – REG_DWORD – 2 (2 = full dump)

For more see: https://docs.microsoft.com/en-us/windows/win32/wer/collecting-user-mode-dumps
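If you’d rather script it than click through regedit, something like the following sketch should create the same values. Run it from an elevated PowerShell prompt so that %LOCALAPPDATA% is stored literally rather than being expanded by the shell (from cmd you’d need to escape the % signs):

reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps" /v DumpFolder /t REG_EXPAND_SZ /d "%LOCALAPPDATA%\CrashDumps" /f

reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps" /v DumpCount /t REG_DWORD /d 5 /f

reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps" /v DumpType /t REG_DWORD /d 2 /f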

If you want to create a dump file of running software, for example to inspect memory or dig into a hang, then you can use Task Manager. Just right-click on the process and click “Create dump file”.

Note that if you are working with a 32-bit application then you’ll need the 32-bit Task Manager, e.g. c:\windows\syswow64\Taskmgr.exe

There are also tools out there that will generate dump files on demand.
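One example that I’m aware of is Sysinternals ProcDump, which can write a full dump of a running process on demand:

procdump -ma MyApp.exe

(-ma asks for a full memory dump; MyApp.exe is a stand-in for your own process name or PID.)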

Symbols

The next thing that you’ll need to do is set up a symbol path. Symbols help turn the 0s and 1s in a minidump into more readable strings. Create yourself a folder for the symbols, for example c:\symbols. Then in WinDbg you’ll need to set the path. In the old version, open the File menu and you should see an option. For the Preview version, go to Settings then Debugging.

SRV*c:\symbols*https://msdl.microsoft.com/download/symbols;SRV*c:\symbols*<YOURPATHHERE>

You may have to add a few paths in there but hopefully you get the idea.
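If you’d rather not dig through menus, the same can be done from the debugger’s command window using standard WinDbg commands:

.sympath SRV*c:\symbols*https://msdl.microsoft.com/download/symbols

.sympath+ SRV*c:\symbols*<YOURPATHHERE>

.reload

.sympath sets the symbol path, .sympath+ appends another location and .reload tells the debugger to load symbols again using the new path.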

Analysing crash dumps to get a call stack

Now on to the juicy part, analysing a crash dump. You can open it from the File menu.

From the View menu, you should be able to see the option for a stack / call stack. Bring that up whilst the dump is being loaded.

Now run the following commands (they can take a few minutes):

.ecxr

!analyze -v

This should tell you a chunk of information about the crash. Based on this you can start to piece together what the software was doing when it crashed.
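If you want to poke around beyond the automatic analysis, a few more standard WinDbg commands are worth knowing:

k – display the call stack for the current thread

~*k – display the call stack for every thread (handy when digging into a hang)

lm – list the loaded modules, useful for checking that your symbols resolved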

An example of using this information

When looking for an example, I found a crash dump from a game that I made many years ago. I have absolutely no idea what might have caused it, so hopefully now I can figure out why.

My WinDbg analysis included the following:

System.NullReferenceException

This tells me that it tried using an object that didn’t exist. Either it hasn’t been set or has been deleted but is still in use.

007edc0c 081abfb6 X_Orbtek_II!X_Orbtek_360.XOrbtek.UnloadContent+0x6
007edc10 081abe73 Microsoft_Xna_Framework_Game!Microsoft.Xna.Framework.Game.DeviceDisposing+0x1b
007edc20 081abe43 Microsoft_Xna_Framework_Game!Microsoft.Xna.Framework.GraphicsDeviceManager.OnDeviceDisposing+0x13
007edc28 081abe15 Microsoft_Xna_Framework_Game!Microsoft.Xna.Framework.GraphicsDeviceManager.HandleDisposing+0x15
007edc34 081ab7cf Microsoft_Xna_Framework_Graphics!Microsoft.Xna.Framework.Graphics.GraphicsDevice.~GraphicsDevice+0x2f
007edc40 081ab72f Microsoft_Xna_Framework_Graphics!Microsoft.Xna.Framework.Graphics.GraphicsDevice.Dispose+0x1f
007edc60 081ab543 Microsoft_Xna_Framework_Game!Microsoft.Xna.Framework.GraphicsDeviceManager.Dispose+0x133
007edc74 081ab3f5 Microsoft_Xna_Framework_Game!Microsoft.Xna.Framework.GraphicsDeviceManager.System.IDisposable.Dispose+0x15
007edc80 081ab0e3 Microsoft_Xna_Framework_Game!Microsoft.Xna.Framework.Game.Dispose+0xb3
007edcb4 081ab015 Microsoft_Xna_Framework_Game!Microsoft.Xna.Framework.Game.Dispose+0x15

From this I can tell that the game was being closed. It happened when unloading content, so it has likely tried to use something that had already been destroyed. It may be possible to dig deeper. As you gain more skill with the software, it is possible to learn more about what was in memory and understand exactly at what point it crashed.
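To make that more concrete, here is a purely hypothetical C# sketch, loosely modelled on the frames above (the field and asset names are invented – this isn’t the game’s actual code), showing the kind of pattern that throws a NullReferenceException inside UnloadContent:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

class XOrbtek : Game
{
    Texture2D background; // only ever assigned if LoadContent got that far

    protected override void LoadContent()
    {
        background = Content.Load<Texture2D>("background");
    }

    protected override void UnloadContent()
    {
        // If LoadContent threw or never ran, background is still null,
        // so this line throws NullReferenceException during shutdown.
        background.Dispose();
    }
}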

And here’s the key part

If I was trying to reproduce this crash, I can take the knowledge learnt from the crash dump to guide how I will try and get to the bottom of it. Unfortunately the example dump I picked is a tricky one but I could maybe come up with something like “Explore exiting the game with different assets loaded to discover the source of the crash”.

A 10-20 minute snoop in the dump file might save me a huge chunk of time in trying to reproduce a crash. Obviously I can’t share real examples from my professional life in any detail, but knowing that mousing over a control caused a “random” crash, or that the software crashed after hitting “Save” and the top of the call stack was “MyApp!MyApp.FileIo.SaveFile.ApplyTextOverlay”, means I can focus on that area.

There’s loads more that you can do but hopefully this has been useful!

Categories
Experience Reports Ramblings

Why I believe that manual testing is a great job

I’ve had an unusual journey to my current role (Senior Test Engineer, doing primarily manual testing).

My career started as a QA tester in games as a “foot in the door” to becoming a games developer. This was very common in the industry. However after establishing myself and becoming a Senior, I moved to Games Design rather than development. Being games, I was eventually made redundant, and with the desire to get paid again, I took a role as a Software Test Engineer.

I was good at it. I learnt new techniques and skills. I was using Wireshark to see communications between devices and understand why things may be behaving incorrectly. However I was also bored. Most of our testing was running test cases that had been written (and often already executed) by the developers. I then moved into an “Engineering Support” role where I’d take on all support cases passed to Engineering, taking the load off our senior & lead developers. I loved trying to analyse the system and using my “tester brain”, but constantly handling escalated cases with no useful information was miserable.

This is when I made the leap to development. After 5 solid years, working on a variety of different products, I was at the stage where I really ought to be taking on the responsibility to become a senior software engineer but I had very little appetite for it. Instead of taking the lead on new development technologies and emerging languages I found myself more interested in improving our testing. When the opportunity for a senior manual test engineer role came up, I went for it.

A few people have asked me “why?” and treat it as a step down (and even a waste of my talents), however I believe that it has made me more valuable to the company.

I like to feel that I am a fairly creative person and am also good at problem solving & analysing data. This lends itself well to both professions. There’s common ground, like being involved in the planning phase, breaking down a feature and identifying the risks and challenges that are there. The “tester brain” is really handy here. Developers then get to flex their brain in designing the code to solve the problem, whilst testers will be performing exploratory testing and identifying things that were hard to see when the feature was conceptual. Whilst developers get the thrill of seeing the code they’ve written become a feature that customers use, I certainly enjoy the buzz of finding a bug. Finally there’s debugging. I really enjoy hunting through logs, network traces and code to understand a “weird bug”. This applies to both roles (and is something I’ll touch on in a later blog).

Testing can be boring and laborious, especially when you are mainly doing “checking”. Being given a bunch of things to check, following a load of steps then providing the result is rubbish. It is just as bad as writing what seemed like endless documentation during my time in development.

During my time in development I was always undone by build infrastructure. Particularly with C++ and Apple-based applications, I had a torrid time getting things built for the first time, and often my projects were light on feature work and more about pulling in the latest dependencies etc. I didn’t understand most of the failures or why it wouldn’t just work. Words cannot describe how happy I am that this is a rare occurrence for me nowadays (although newer technologies do seem to have alleviated a lot of the pain here).

Ultimately I prefer manual testing to development. I find that I get to spend more time doing the interesting bit (finding bugs vs writing feature code) and that because (I believe that) I am a great tester and a decent developer, I add a lot more value to the company helping us deliver quality features in my current role than I did as a developer.

But what about automation testing?

What I loved about development is seeing something work. Knowing that it will be deployed for customers to use. I felt like I was making a difference in delivering the product. Automation includes the same enjoyment of writing code but ultimately it lacks that feature delivery buzz. As a role it feels inferior to being a developer. You’re doing the less interesting development tasks. Similarly if I’m spending my time writing automated tests, I am not doing exploratory testing. I am not digging through logs and code to see if I can understand the behaviour.

I believe that writing scripts, tools and on occasion tests to reduce my effort and time spent doing boring work is a valuable use of my time. Automation can be great here but to check that the ACs are in place, it is usually quicker to fire up the software and check. Then I can focus on experimenting and exploration, the best bit of being a test engineer.

So the next time someone asks whether I’d want to become an automation test engineer, perhaps I should ask “why would I want to do that?”.

Categories
Experience Reports Ramblings

2021 – A year in review

I believe this is a fairly common thing to do and hopefully useful for myself. Let’s have a look back at the bizarre year that was 2021.

Key events:

  • Officially became an employee of Motorola Solutions, following a takeover last year.
  • Moved to a new office, which I visited a handful of times.
  • Started a new “secondary role” as a Cyber Champion.

I had a few achievements that I ought to be pleased with:

  • Became involved in cyber security, which involved a chunk of learning theory, running a variety of different types of scans and finally running a load of talks for my office during cyber awareness month.
  • I created a new card game called Threat Agents and got myself a single copy manufactured.
  • As it had historically been agreed that automating our older, long-running programs would be too involved, and I was going mad with regression testing, I wrote my own automation tool. It wasn’t great at reporting results but it did test for stability and found some interesting issues.
  • Changed how we do release testing. It probably doesn’t seem like much but I had been pushing to revamp things for a couple of years and eventually managed to get it implemented.

Whilst there’s plenty of positives from the year, ultimately it wasn’t a massive success. I had a couple of big objectives for the year that I failed to accomplish:

  • Develop my exploratory testing. I’ve always had the “knack” for finding bugs but I want to learn more about how people do it more professionally. I just need to take the time.
  • Use an off the shelf automation framework for automated UI and/or API testing. I am very confident that this is something I can do, however until I’ve had some proper experience I can’t add it to my CV (even if I’m not a fan of test engineers writing all these tests).

I don’t think I should beat myself up for not managing the above and I certainly don’t want to be giving myself objectives to judge myself by, however it is good to consider what I can hope to achieve in the coming year. For 2022 I want to:

  • Sell Threat Agents. I’ve had positive feedback for the concept and it seems like selling it might be an easier way to get it out there than sharing for free!
  • Related to this I want to become more experienced in threat modelling. Especially if I am able to give talks on the subject.
  • Improve my knowledge and technique for exploratory testing.
  • Develop my coaching skills. In particular trying to get some of my colleagues on board with some of my ideas and to feel like I’m having an impact.
  • Discover where I fit in the world. Given that automation is all the rage in the jobs market but I prefer manual exploratory testing to try and break the software, what roles are available to me going forwards? Do I have to give up on my ideals? What might lie ahead for me?

If I don’t meet these and have success elsewhere, well so long as I’m going forwards that is all that matters right?

Categories
Experience Reports Ramblings

Threat modelling: Don’t forget your test engineer

I am a test engineer at my current work. After watching a number of talks at Ministry of Testing, I also signed up for a secondary role: Cyber Champion. Through this role I’ve been learning about many aspects of cyber security, running brown bags for our office to help people learn more, and doing vulnerability scanning. However what I most want to talk about is threat modelling.

If you’ve not heard of it, threat modelling, at least within the context of software, is an exercise to identify vulnerabilities within your solution. I’ve written some words about it on my Threat Agents site (I’ll explain “Threat Agents” shortly) so I won’t go into too much detail. In short, you put together a data flow diagram then look for vulnerabilities in it. Most people use a mnemonic called STRIDE to achieve this.

If this isn’t familiar then I’d recommend checking out the Threat Modelling write-up on my Threat Agents site to learn more, or have a look at Ministry of Testing, OWASP or have a quick Google.

Now to the point. Many teams may approach threat modelling by pulling in only their senior software engineers, those with the most experience developing the software. However this is a poor idea. Attacks that are “known but unsaid” are easily forgotten, as are the blind spots picked up throughout the years; bringing less experienced people to the table helps to surface them.

But there’s someone else that you really should bring along. Someone who spends most of their day trying to identify the risks in a feature. Someone who has the knack of finding holes and flaws. Someone who probably has the widest knowledge of your solution.

Your test engineer.

Next time you are threat modelling, be sure to invite your test engineers. They don’t need to have any security experience or programming background. If they have the ability to spot that “X + Y – Z = Crash”, they are likely to also spot that “R + T – U = Vulnerability”.

If you’ve not done threat modelling before then it can seem quite daunting. Certainly when I was about to have my first sessions I felt pretty anxious that I’d be out of my depth, despite having read and understood plenty on it, including STRIDE. However after completing my first session, I loved it. Not only was it a useful exercise for the business but I really enjoyed threat modelling. As a test engineer I was in my element.

To help people get over that initial hurdle and avoid the risk of sitting around a table, looking at a threat model going “errrr” (what my first session would have been without a great coach), I have created a card game called “Threat Agents”.

This takes the elements of STRIDE, adds my quirkiness and some structure to help you get going. The game is free to download so you can print off your own copies.

Categories
Ramblings

Automation Test Engineers reinforcing 2-tier engineering

Before I begin, I have spent several years as a software engineer and was decent enough at it. As part of this I would write my own automated tests. Since switching to test, I’ve developed a host of handy test tools, built simulators and even made my own automation tool that used our SDKs to test stability through a huge range of activities.

My point is, this is coming from someone who has experience of automation, even if I consider myself a manual tester.

Anyway, the point…

The job market in my city is predominantly junior test engineers or senior automation test engineers. Companies are desperate to hire people who can write and execute automated tests. I would like to ask these companies, why get a dedicated person in to do this?

It might seem a little wild, but why do you need to hire someone for this role? Are these companies not writing automated tests? Or are the developers writing them?

You can probably see where I’m going here. Developers are more than capable of writing automated tests, and if a company is trying to follow good working practices like Scrum, LeSS, ATDD, BDD, TDD and buzzword driven development, then surely the developers are writing the automated tests as part of the DoD for a PBI/story to move to dev done?

Having now made the case for automated tests to be in the ownership of developers, I now want to talk about why being an automation test engineer is regressive.

There has long been the concern, or battle, as to whether test engineers are second-class engineers. I’m not entirely sure that picking up bits of work that software engineers often dislike or see as beneath them is helping to further the value of dedicated testers.

I’ve definitely felt that my skills and role as a test engineer have been most valued when embedded within the feature team, mostly picking up stories in dev done & awaiting testing. However seeing people taking up roles where they act as the safety net in a separate test group, where work is lobbed over the partition, kind of saddens me.

People who have invested lots of time, effort and maybe even money into learning automation may be scoffing at me right now. I’m not saying it is wasted effort. Far from it. If you enjoy it, rather than being an automation engineer, what about just being an engineer picking up any PBI like the rest of the team?

If you do love your testing and want to keep testing, like myself, there is plenty of work to do. Use the programming skills that you’ve learnt to automate some of your tests, for example creating a script to further load the system or to help parse results from log files. How about pairing with a developer, so they write the functionality whilst you write tests?

Even without the coding, a tester’s skillset is still massively valuable. Get yourself involved in backlog refinement. Go larvae hunting. Coach your team. Get involved in security. Help your team shift left.

Going forward, rather than replacing a team of manual testers with a team of automation testers, let’s use our skillset to identify risks, bugs and possible UX concerns as early as possible.

Categories
Ramblings

Do we trust our code?

Often a user story, PBI or whatever can include a number of alternate paths, scenarios or examples. The team has ideally listed them during backlog refinement and 3As.

When a diligent software engineer picks up the item, they can write their automated acceptance tests and do some manual testing as well, listing the testing in the changeset notes.

So when an engineer, whatever their job title, picks up testing of the work, do they perform acceptance testing? Is there value in this?

In an ideal world, this can be considered busy work that we can skip. We should be able to trust our code when it is committed and instead focus our efforts on testing that is more valuable like exploratory testing to find defects or learn about the feature and its quirks or nuances. We can focus on performance, scalability and more complex testing.

But this requires us to trust our code. Something that historically isn’t the case… hence why we have testing! (and bugs)

So what can we do to remove the need for mindlessly going through a truth table, long list of ACs or a wad of scenarios? I don’t know and this is what I hope to learn. However I do have a few ideas of things to explore:

  • Code coverage analysis for automated tests with our available build tooling, to discover if this can give a (genuinely) useful metric at a day-to-day level.
  • Automated test coverage within code reviews to discover if someone who doesn’t develop the code can identify what is adequately covered by automated testing.
  • Pair programming with a developer and tester to discover if this helps both improve coverage and visibility of coverage. Perhaps writing automated tests.

Categories
Ramblings

Releasing bugs is a team effort

In modern development we have many layers of automated testing and there’s manual functional and exploratory testing. We’re shifting testing further and further left to catch things early – ideally before a line of code is written. So why do so many applications that we use day-to-day have bugs? Why aren’t we as members of the engineering team catching everything?

Of course many of the bugs that are in production software have been caught and triaged. We simply decided not to fix them. However I plan to look at how we manage to miss bugs.

It takes a real team effort for a bug to get into the wild.

I’ve been talking about bugs a bit lately and know that I am not alone in taking an escape from a feature I was involved in as a personal failure. I felt this way as a developer and I still feel that way as a test engineer. However I know that not only is it not fair to blame any one individual, it should be an opportunity to learn. From this I’ve put together the series of mistakes and failures required by the entire team in order to ship a bug.

Missed when writing the story

I don’t want to dwell here as I feel that when writing a user story you are focused on the expected behaviour that can be found through example mapping or thinking about user workflow. That isn’t to say that you shouldn’t be conscious of these sort of issues.

Missed during planning / refinement

Before a story is picked up by a developer, we would expect it to have been refined and reviewed. Multiple members of the team should have been involved in looking at it. However for an escape to happen, it is likely that it was missed during these sessions.

Way to go team – you missed it.

Missed during 3As / kick off

Regardless of your way of working, hopefully there was some conversation when the story was picked up. This is the last chance to catch a bug before development starts and often one of the most detailed discussions. As well as ensuring that we understand the acceptance criteria, we really want to be thinking about the edge cases here.

Way to go amigos – you missed it.

Missed in developer testing

Any self-respecting software engineer will be testing their own work, both manually and using automated tests.

Way to go dev – you missed it.

Missed in code review

Code reviews are more than nitpicking over names or formatting. Their most important goal is to spot mistakes that the developer may have made.

Way to go reviewer – you missed it.

Missed in test phase

Whilst many people will look exclusively at this stage, it isn’t the sole responsibility of the tester of a story to find / prevent bugs. Nonetheless,

Way to go tester – you missed it.

Missed in showcase / sign off

Many workflows out there have some sort of sign off to say the story is done. If your workflow has such a state then clearly the PO has signed off on this buggy story.

Way to go PO – you missed it.

Missed in general use of the software by engineers

Unless you’ve a super fine-tuned CD pipeline, there’s a likelihood that members of your team or the wider group will have used the feature.

Way to go team – you missed it.

Missed in release / sanity testing

Of course not all teams do this, depending on your workflow and deployment system, but if you’re doing testing before release, it hasn’t caught it. Before you say “it was out of scope”, ask yourself why the scope doesn’t catch it and whether it’s even worth doing.

Way to go tester – you missed it.

Way to go all – you missed it!

Escapes can and will happen. When one does, you should look to learn, not blame, and if you do really want to play the blame game, it should never be one person.

Categories
Ramblings

Value of a bug report

Over the years I have seen an increase in the idea of not reporting defects within Jira, Azure DevOps, Bugzilla etc and having a conversation instead.

If it is an issue within the story itself and an AC failure then I certainly see the merit in skipping the bug report. It can be busy work to write up behaviour, send it to the developer and for them to go “ah sod – I forgot about that” and quickly fix it. However as a counter point, from my time as a developer I did find it easier when things were written down as my memory recall from verbal communication isn’t the best. As a tester, unless the story comes back to test quickly, I want some kind of record.

This post is about more than pointing out that as humans we have different preferences. My concern is when entering a bug that isn’t currently planned to be fixed is considered busy work. This includes bugs found in areas unrelated to the current feature, or bugs quickly reviewed together and considered too low priority. I still always want to log a bug here.

Documenting why a bug isn’t worth fixing

I’m sure we’ve all been there: a defect is seen when you’re up against it. It is a real edge case and doesn’t have a significant user impact. The developer realises that it will be in a high-risk area of code and a chunk of effort to fix & test. Quickly it is clear that it isn’t worth doing right now. This decision is made in the knowledge that not fixing it now means that it probably won’t get done at all. So why enter a bug?

Think about the case where, 6 months down the line, someone else sees this for the first time. They raise it with their PO and it’s agreed as a nice fix to put in. The scope isn’t quite understood so a developer goes away to investigate it. Half a day later they come to the same conclusion as before but aren’t all that confident in their understanding and want to get a second opinion from another team. “Oh that rings a bell. Yeah, real can of worms.”

The 15 minutes saved by skipping writing this down has cost a lot of time here!

What about when the customer reports it?

Very much in the same line of thinking as above, what about when the rather angry customer phones tech support complaining that half the time they try saving to their shared space, they get an error and need to retry.

Tech Support haven’t heard of the issue. It isn’t in the bugs / known issues list available to them. So naturally they investigate. A chunk of time later it comes to Engineering. They try and reproduce it and confirm it is a bug. It is then scoped and fixed by a team.

The team involved in the fix decide that a bug escape review (or whatever your company calls it) should be held. How was this missed?

If only that bug had been entered, eh? Tech Support may have known already. The team responsible for fixing it would have known who to speak to about it.

Bugs can be useful for learning

I’ve mainly worked with large, complicated software. This means there’s tech debt and a deep history of why some of our more complicated code exists. When it comes to developing a feature that has been around the block a few times, unless you are already familiar with it, it is good to know about the gotchas and challenges. Similarly when identifying the risk and scope of testing, what bugs and challenges have we seen in the past?

A searchable pool of defects that includes changeset notes, a discussion on how to fix an issue or why it isn’t fixable can be pretty handy here! I’ve learnt about some of the nuances of the protocols used by our solution through developers’ bug comments.

I also believe that just knowing about the quantity of bugs in software is important. Zero bug software isn’t saying zero bugs are in the application – give me 30 minutes and I’ll prove otherwise – but that we’re leaving zero bugs in an open state. Just as zero bug development shouldn’t be misinterpreted as developing bug-free software, it shouldn’t be misinterpreted as “don’t bother entering bugs”!

As engineers we are all responsible for quality, but how do we gauge that if we’re not recording bugs? How can we say that we have an understanding of our software’s quality level when there is no visibility of how many bugs we have accepted?

Within the world of cyber security it is known that you can mitigate, accept, transfer or avoid vulnerabilities. The same is true of bugs. Marking a bug as won’t fix is accepting the risk and impact on a customer should they encounter it.

The next time you or someone on your team says “don’t bother entering a bug”, make sure that everyone involved in the decision knows that they are keeping quality issues to themselves and they may be screwing over their colleagues down the line.

ACs passed. Able to place mug on the table. User story moved to done.

Categories
Ramblings

Non-technical testers are the gatekeepers of quality

Apologies for the intentionally ridiculous title, which crams in two of the most frustrating terms in testing.

Non-technical testers is a term often used to describe a test specialist who doesn’t do automation. As I don’t write automated tests in my current role (and haven’t used Selenium), I guess that includes me then?

No. This is nonsense.

  • I use Wireshark to analyse network traffic, comparing against protocol documents to understand what is going on.
  • I look at the contents of crash dumps to help me understand why it crashed and to get better reproduction steps.
  • I occasionally pick up development tasks.
  • I can set up & work with complex system tests and environments.
  • And perhaps most important and applicable to many of us “non-technical testers”, I understand our software & technologies. A good tester will use their domain knowledge to find the edge cases and risks in a complex system.

Non-technical…

I wanted to bring this up because I strongly believe this label is not just insulting to those who don’t write automated tests but belittles the profession. A topic for another day, but I also think developers are better placed to write the automated tests anyway.

Let’s move on.

I’ve often seen testers described as the gatekeeper of quality but I have never agreed with it for two reasons.

1. I am not, have never been and don’t expect to be the gatekeeper on release

At a simple level, I am not in that position of authority. I don’t see why I should be as a test engineer or in a QA role (be that junior, senior or lead etc). It is deeper than who calls the shots though.

Ahead of a release, the decision on whether the product is of sufficient quality needs to be a collaborative effort between the teams and roles.

I certainly prefer to take the approach that my role is to ensure that the decision on whether the quality level is high enough is adequately informed. As a test engineer I have knowledge of using the product. I’ve looked to put myself in the position of a customer and to understand the implications of known defects. Consequently I believe that my opinion on the quality level is important and typically valued, whether that is attending key meetings or simply informing the team’s representatives and providing quality data/reports etc.

2. Quality is subjective and needs balancing

What are the implications of not achieving a deadline? Could this be a loss of a deal or legal implications? Further to this, is any delay to improve quality of sufficient value to the customer?

As a customer I may be OK with the app having significant alignment issues on Edge, given that I typically use Chrome and the predominant page that I use isn’t affected. However I probably won’t be OK with my own projects missing their deadlines because our software was late.

In a similar line, I wouldn’t be happy if functionality that I regularly use has a poor locking design meaning that I have to keep hitting retry. Sure, the code might work as intended but are my needs being met?

Further to this, as a customer I would certainly take an application that provides all of the functionality that I want but needs restarting periodically over having to use half a dozen meticulously implemented applications with conflicting workflows.

Within this thought process, as test engineers we need to consider the customer, but with perhaps more balance and realism. I will try to provide my advice on the quality with the customer in mind.

Bugs happen and especially in larger and more complex products, there will be known bugs in a release. Quality of software in terms of stability, bugs and functionality is a balancing act.

No one person or team should be regarding themselves as gatekeepers of quality. Instead everyone across the SDLC should be working to help enable quality.

Categories
Experience Reports

“Just run what was done before”

My biggest challenge when switching back to test

Before I start, a bit of background. I started at my current company as a Software Test Engineer. I didn’t really enjoy the very rigid processes that we had in place and felt it didn’t make the most of my creative ability to find bugs. I ended up switching to a couple of other roles before returning to test some 6 years later. In that time a lot of changes have been made.

Whilst not explicitly stated, my team now favour a Context Driven Testing approach to both features and user stories. How we write tests, share them and execute them varies project to project and even user story to user story.

I do like this freedom to choose and adapt my approach, meaning that if I think that more detail & structure is required, I can use that. For user stories where tests are very niche & specialist and won’t ever be re-run, I can dramatically cut down on documentation, allowing more time for bugs.

However when I joined, I was replacing the existing sole tester in the team, having been a developer in a different team. I found it quite challenging to apply my limited knowledge/experience in several scenarios.

For example, when picking up some fairly regular testing around appliance updates, it would seem logical to use a similar strategy as before. However I don’t fully know the context of those tests. Were they shortened due to a timescale, or extended beyond the normal coverage because of risks specific to that update? Was it only manual testing because we don’t have automated tests, or is looking at making this our first automated project a blind alley that has been pursued before? Why was this strategy used?

In the end, after consulting with the team, I repeated the same test cases on a selection of platforms, plus one extra due to a specific risky package update. We suspect that was the logic used previously. As per usual, I provided my list of tests and the equipment to use in a test plan for the story, which can be looked back on.

However this problem is going to continue to resurface as we try to decide the best strategy for this context next time. It is possible that the next person to perform these tests might not be me, or I’ll simply have forgotten the thought process, and time will be needlessly spent trying to figure out the ideal strategy for this context. Likely the same tests will be run again, including the extra one that I added. This is a problem we have in a few areas.

To try to resolve the problem, we discussed providing a rigid set of instructions on what to run etc. to help people repeating this area of testing down the line. We’d break from CDT for these tests. Not ideal, but it saves effort for whoever picks up the testing next time. A user story was created to do this.

Thinking on this matter more, this is a blunt way to “resolve it” and papers over the recurring mistake. What I really needed to understand previously was the logic in the previous strategy. Why these test cases? Why these platforms? Why were others excluded?

I still want to provide guidance for testers picking this up down the line, detailing the core tests that you need to run every time. However the solution is not just to have the strategy written on the user story, but to ensure that the logic and reasoning is provided for historical use. This should help the next person to pick up this testing to devise the best strategy for that context.