Categories: Experience Reports, Ramblings

Experimenting for quick wins

Experimentation is important, if not essential, for building successful, high-performing teams. It allows us to try things without needing months to review and phase in a new way of working. Instead, we encourage trying something different, reflecting and starting again.

In theory, I really like it.

In practice, it was difficult within our group.

I’d like to talk about my attempts to learn the drums. I was tempted to try drumming as I still believe there’s something musical that I can be good at. I started with a small experiment.

I bought a cheap electronic drum kit, did some reading and started trying to follow along to a few YouTube videos. I even had a little success, doing a not completely terrible job of playing some songs. It was kind of fun.

However, I was pretty useless. My experiment didn't transform me into a musician, so I gave up. The kit now joins my bass & guitar sitting idle. Occasionally I'll have a quick play with one and maybe have a bit of fun, but it isn't yielding great results, so I rarely bother.

And this was my frustration with experiments at my former work. We were very open to trying something, but seeing things through was a challenge. Straight away we'd be asking "is this giving us the results we want?", potentially spending more effort on analysing success or failure than on the experiment itself, and subsequently drifting or stepping off that path. Maybe the odd strum of the guitar so we can say that we play instruments, but not really.

If I want to become a musician, I know that it takes time and practice. The same can be said for becoming a high-performing team.

Sometimes our experiments & initiatives are about trying new things for quick wins, but failing to succeed doesn't necessarily mean the idea is bad. It may mean that it isn't truly effective yet.

It is essential that we don't just look at how quickly we can get end results. Some things require building up skill through practice. If you never give yourself the opportunity to become good at something, you'll never reach those standards.

Categories: Experience Reports, Guide

Meaningful RCAs: Structuring questions

I’ve already talked about how we need to unleash our inner toddler by asking “why”. But what questions do we ask?

Background

Before getting into the guts of the RCA, I like to go through the background. This partly acts as a refresher for everyone, as it may have been a few weeks since the work, but it also helps guide my questioning.

This usually means sharing:

  • Links to the defect we’re RCAing & the original ticket
  • Links to PRs to fix the issue and where possible the original (“offending”) PR.

Then asking:

  • Can you describe the problematic behaviour? (i.e. what was actually wrong from a user’s point of view)
  • Can you describe the nature of the code fix?
  • What do you remember from working on the story?
    • How long did it take?
    • How many people were involved?

The Fix

Before learning more about why the issue came to be, let’s make sure that we’re confident in the fix. I like to ask two questions here:

  1. How resilient is the fix?
  2. Will we know if the behaviour regresses again? (i.e. did you add automated tests)

Quality Engineering Throughout The SDLC

Now we get into the really important questions. This is where we go through the software development life cycle and think about what we did and whether there were realistic opportunities to catch the issue at each stage.

First of all, if this was an escape, let’s ask whether we could have caught it in production (e.g. monitoring), release testing or epic close-off testing. I wouldn’t advocate just asking “could we have caught it here?”, but instead asking what the process is, what testing was performed and whether this is in the scope of what we’d usually test.

We then move on to the story within the sprint, starting with testing of the original story / bug. We’re trying to understand whether this was a brain fart (it happens) or something that we wouldn’t usually consider testing. If the latter, why not?

Then we get more technical. We’re looking at the PR, starting with the code review. I’ll be asking about the nature of the bug and whether that’s something we’d look for in review. I’d want to understand whether SMEs were involved and, if not, why not. Did reviewers check the testing notes & automated tests? Code reviews aren’t ever going to catch everything, but it is good to discuss the process, and it’s a nice chance for people to talk about the value and role of a code review too.

I then concentrate on the developer’s testing. What had they covered through automated and hands-on tests? How much was iterative? As a former dev, I know all too well how even a well-intentioned developer who tests their work can let things slip through here (see dev BLISS).

We’re then back to technical discussions on the code. This is where I hope the architect can ask a few questions, although other team members often chip in. This discussion is a great way for the team to learn from each other.

You might think that, having talked about the types of testing and the development challenges, we’d stop there, but no we don’t!

The teams will have planning and refinement when breaking down the story. We do test strategies and planning at epic and sometimes user-story level. We think about the complexity of the code work with architectural studies before starting an epic. Let’s continue diving into these.

Again we’re asking what was done, whether this is a scenario that could have been caught, either behaviour-wise or in code, and tapping into what more we could have done. This helps us spread left.

A Parting Question

Near the start I asked about our confidence in catching this issue again. Unless we’re running out of time (which is unfortunately often), I like to ask a similar but slightly wider question: how confident are we that we won’t see a repeat of the issue? Not necessarily the same issue, but a similar one.

Summary Section

Finally, I’ll have a summary section with actions, learnings and a summary of the RCA. This is often written up afterwards because, unsurprisingly, the hour I book for RCAs isn’t always enough to cover everything in this post! I’ll explain a little more on this in a separate post.

So in short…

We start off by discussing the background of the story to refresh ourselves and to get an idea of which threads are best to pull on as we go. We’ll also check that we’re confident in the fix.

We then take our time going through the SDLC. We’re not just asking “could we have caught it?” or “why didn’t we catch it?” but looking at the actions, steps and processes to understand the answer.

I switched the ordering from starting with the first stages of the story to starting in prod, after advice from a great chap called Stu Ashman. I found this got us much more engagement in some of the testing and activities around post-release. You’ll also see how, through the different stages, we ask slightly different questions to consider more than “why didn’t we catch it?”.

We’re using every stage as a learning opportunity.

… and that makes for a meaningful RCA!
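If it helps to see the whole structure at a glance, the sections and questions described in this post could be laid out as a rough meeting template along these lines (purely a sketch; adapt the wording and timings to your own team):

```markdown
## Background
- Links: the defect, the original ticket, the fix PR(s) and, where possible, the original ("offending") PR
- What was the problematic behaviour from a user's point of view?
- What was the nature of the code fix?
- What do you remember from working on the story? (How long did it take? How many people were involved?)

## The Fix
1. How resilient is the fix?
2. Will we know if the behaviour regresses again? (i.e. did you add automated tests?)

## Quality Engineering Throughout The SDLC
(working backwards from production)
- Production (e.g. monitoring), release testing, epic close-off testing
- Testing of the original story / bug
- Code review (were SMEs involved? were testing notes & automated tests checked?)
- Developer testing (automated, hands-on, iterative?)
- Technical discussion of the code
- Planning & refinement, test strategies, architectural studies

## A Parting Question
- How confident are we that we won't see a repeat (or a similar issue)?

## Summary
- Actions, learnings and a summary of the RCA (often written up afterwards)
```

At each SDLC stage the point is the same: ask what the process is and what was actually done, not just “why didn’t we catch it?”.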

Categories: Experience Reports, Ramblings

Sketchnoting Adventures

I’ve been meaning to write this for a while but after a really interesting conversation at TestBash, I decided to finally get something written down!

In June 2023 I attended Testing Atelier X, which included a workshop by Marianne Duijst on sketchnoting. I’ve seen some very attractive-looking sketchnotes from many people, including the wonderful Louise Gibbs, but I didn’t understand how people could take their notes and make them both pretty and informative. I also didn’t entirely get the point. However, I quite enjoyed the exercise and have since used this approach a lot when watching talks in my spare or personal-development time.

The value they bring is structure: your notes become far more readable when you look back at them later. In the past I rarely found my notes from talks worth revisiting, but since I started sketchnoting, I’ve found myself picking them back up to remind me of the topics and key points. Importantly, they are something I can share, whilst my previous scribbles in a notepad stayed hidden (I used to intentionally write in such an illegible manner that only I could decipher the text).

I am nowhere near sharp enough to do them live, but I’ve learnt some techniques:

  • For live talks, consider post it notes or just quickly writing things in a jotter. Then revisit them later.
    • Post it notes are great as you can reorganise them!
  • If you’re able to pause, try to avoid stopping too often. Listen, absorb, then rewind to take the notes.
  • Avoid too much bloat to make it easy to parse.
  • Focus on the speaker’s words rather than “oh I could go do this”.
  • Don’t fret about being messy.

[Image: several A4 pages scattered on a table, all containing notes taken from talks in a sketchnote format.]

See more examples of people’s sketchnotes on the Ministry of Testing Club

Categories: Experience Reports

Collaborative reflections

Earlier this year I wrote about my changing role and push to help improve working practices and support my teams in testing.

In terms of my role, it is certainly coming together. Over the past few weeks I’ve been getting more and more involved with teams and helping them reflect on quality. My big win is that my email signature now states “Senior Quality Engineer” (although my mug says “QA Lead”, which confused me a little).

What I wanted to delve into in this post was that reflection.

One initiative that I’ve started driving is regular bug RCA sessions.

I’ve found these to be interesting sessions, and generally I’ve been pleased at how open and honest people have been. This is an essential requirement for teams to learn, develop and improve. Whilst my nature says “oh, it’s all the teams’ doing”, I think I deserve a wee bit of credit for setting the tone.

Before we get started, and again at the start of each session, I like to stress that it isn’t about blame. Shipping a bug is a collaborative effort. Similarly, mistakes will happen, which is why we have processes and practices in place to help us catch and rectify things. If one person is truly to blame, we have bigger problems.

I’ve also been attending, and on occasion facilitating, retrospectives.

In both my retros and RCAs, one thing I’ve done is play on (and exaggerate) my inexperience with the team’s working practices and software. Being a “question asker” is a valuable quality in a tester, and I’ve been experimenting with this. I quite like “tell it to me like I’m an idiot” or “assume I know nothing” (perhaps a Jon Snow costume could enhance this?). The goal is to get people talking and explaining. It also means that I can ask “so do you all do X testing?”, then “Ah okay, do you mind me asking why not?” and finally “how can I help?”.

Basically, if I have a suspicion, I’ll ask a bunch of questions “to help me get up to speed” and hopefully surface it without being direct (and potentially causing conflict). I’ll also repeat what people have said back to the group to reinforce it and help confirm understanding. It’s especially useful for moving a discussion forward.

I’ve been learning about coaching vs teaching, especially through talks from the likes of Vernon Richards and Stu Day. I feel this has helped me act not just as a facilitator but as a bit of a coach in this role, though I realise I still have a tendency to lead with some of my questions, steering people towards what I want rather than where they might have headed naturally.

Finally, the other thing I’ve been doing is remotely joining meetings and listening in whilst I work. Teams can ask my opinion if they want; otherwise I’ll be learning how they work, picking up on things, then trying to feed that back when relevant, whether between teams or within the team itself.