Categories
Experience Reports, Ramblings

2023 – A Reflection

It has been some year. I started in a dark place with lots of change underway. However it has, professionally speaking, been a bit of an immense year.

Speaking

Building upon my first time speaking at a testing conference in 2022, I’ve been fairly active. From running a threat modelling workshop for a small set of people at an Edinburgh MoT meetup to tackling a new topic within security at TestBash UK with a range of activities, it has been really positive. I’ve started feeling like I’ve gotten my name out there and importantly, people seem to like what I’m sharing.

Whilst it can be an exciting (yet nerve-wracking) experience to speak, and of course positive feedback and comments make you feel great, I think the biggest buzz is when people take something away. Speaking always seemed way out of my comfort zone but my passion for the topics drove me to give it a go. Consequently, when it goes well and you think people are learning and will try things out themselves, it makes it all worthwhile.

Awards

Building upon my speaking, I took my Threat Agents game to a cyber security event for my work. We used it in the threat modelling workshop and I spoke a little and got involved in helping people. I even got a special award within my work for contributions to threat modelling!

Somehow, despite only working part time for a chunk of the year, I’ve managed to achieve a few awards from work, given by peers. This obviously means a lot. However I think a lot of it comes down to…

An Interesting Role

This year I’ve taken on a new role. Whilst I originally dubbed it “Free Range Tester”, in reality it has been a senior test engineer role where I don’t test. I have tried to both lead and support.

It was a difficult start and a frustrating one. Quite quickly I learnt why we were struggling to ship a release. I was also distracted with extended leave, reduced hours and helping run our intern program (I even wrote some code!).

But the role has gone well.

My crowning achievement has been my work on analysing our quality for the first major release in a long time. We analysed bugs, reflected on our challenges and took actions. I brought all this together into a presentation (not given, just shared).

For example, whilst a large portion of bugs were attributed to internal mistakes when working on stories, several issues we found, raised and fixed were actually legacy behaviour. We made the software better through these bugs. That is good to know.

It has been quite interesting having this roaming role and getting involved with different teams. As we no longer have a scrum master, I’ve helped fill a little of that void. I’ve had the opportunity to learn how different teams are working and help them with their challenges.

I’ve also been there to help teams out when they are stuck on testing. Who would have thought that getting rid of testers would impact a team’s ability to plan their testing?

I’ve also had the opportunity to get myself involved with the wider organisation. Whilst I’m a shy & timid person most of the time, ask me for my opinion and I’ll give it. And even when I wasn’t asked, I sometimes offered it. Having a culture where anyone, either senior leaders or that weird new tester guy across the ocean, can speak up is wonderful and I definitely appreciate it.

Whilst I haven’t succeeded in getting the organisation to test better, I have raised awareness. I have got allies. This won’t happen overnight but I am confident that in time we’ll get there and what is exciting is that I think I’ll be involved and part of this.

It is a real step forward.

A step back

Unfortunately not everything has been coming up amazing.

I fear that I’ve lost my appetite for actually doing testing. Given how much I love the profession (I’ll post about that separately later), whenever I have some free time to do testing, I’ve often found myself not bothering. I’ll admit that I’ve often found myself reaching over to my Xbox controller when I could, and should, be testing. I’ve found excuses not to do actual testing myself. Some of that is semi-legit (“managing my energy levels”) but I also know that a bit of the hunger is gone.

Part of this is not having the domain knowledge of the past. Moving to a new, large area when outside a team has made on-boarding very hard. I’ve found it massively overwhelming to try and test a feature that I don’t know, which is part of a very complicated solution full of TLAs and systems named after random comic book characters… and my energy levels & brain capacity are both low.

Strongly held opinions that are easily changed

One other thing that I wanted to reflect upon is my opinions and ideologies. I’ll write a separate post about it in due course, but I started the year feeling pretty certain about how things should work and have become more flexible over the course of the year. Maybe there is method in the madness?

Perhaps I was wrong to loathe test strategies so much. I wonder if those times when I was doing copy paste reports that no one really read or cared about tainted me too much?

Challenges ahead

Next year scares me a little. I feel like I’ve over achieved this year and despite knowing I’ve not worked and pushed as hard as I could have, there’s nothing left in the tank.

It has been a draining year.

Next year I think rather than trying to excel and push, I want to build stronger foundations. A wonderful new hire within my work means that I don’t need to push. Just be there to support and be involved.

I am hoping that I won’t need to push to get involved, to get the testing mindset involved, as before. I’ll be there by default. My challenge will be, how do I provide that coaching now that I’m present?

To achieve this I will need to learn how to better coach and help the teams develop their testing. Thankfully we now have 10% time at work, so that will be my focus. Having a day each sprint that I can dedicate to the coaching side of things – either by getting time with devs to try new things or just researching & learning – will help a lot.

However most importantly, I do want to test again. I love testing. I want to find that drive in me again to go try and find those hard to find bugs. To remind everyone what it means to be a tester.

Categories
Experience Reports

Running Workshops

A little while ago I was asked about my experience and learnings from running workshops and what advice I have. I thought I’d share my thoughts on here.

As a quick note up front, I am under no illusion of being an expert; I am still learning. I did also look into teaching as a career, so some of this is influenced by what I picked up from that.

Expect to fail

Timing will be hard and, when it comes to activities, people will be slower than you expect. For example, when I first ran my threat modelling workshop it included an activity that I assumed would be a 2-3 minute break from slides and a brief reinforcement of knowledge. Fill in this, bish bash bosh, done. I got Hannah, my wife, who isn’t from a software background, to try it and it took her much longer (5-7 mins). It turned out that the group needed that time too: many people didn’t finish in the 5 minutes that I set aside, or the 2 extra minutes I let it run on. I’d suggest trialling with a newbie and definitely make sure you have room to flex.

Exercises that don’t need a fixed end point (e.g. have a debate/discussion, write your reflection or practice writing these tests) can be handy when it comes to giving you that flexibility.

Not everyone will understand what you’ve just taught or appreciate the point. Don’t fret about that. Maybe have a resource that you can point them to, or get them working with someone. One of the key advantages of workshops is that you can support different learning styles so for those who struggled to understand your words, perhaps they will benefit from pairing and more collaborative learning.

Be wary about asking questions early. When I was teaching I learnt it was good to ask the group questions rather than just unloading information. However, don’t expect a room full of shy geeks who don’t know each other to speak up before they’ve had a chance to interact with those around them. There’s nothing more awkward than the silence when no one raises their voice!

Finally some people will be visibly bored/disinterested (or even say something negative when they think you can’t hear). This isn’t a reflection of you, your delivery or the topic. The reality is that sometimes people get bored or dislike things they are doing. It happens. Just ignore that and avoid fixating on them.

If you’re wondering why not everyone in the room is an amazingly enthusiastic, quick learner in your chosen area and style, put it down to it being first thing / the post-coffee crash / craving lunch / tiredness after lunch / the end-of-day slump.

Anecdote

I am easily distracted when watching someone talk. A latecomer arriving, someone in my periphery getting a drink out of their bag or someone chatting, and I might not get as much from a session. When I close my eyes and listen it can be transformational. However, it was pointed out to me that a speaker thought I was dozing off, when I was just losing myself in the words!

The other week we had a visitor sharing some learnings and there were a couple of people who I was hoping would bring enthusiasm, only for them to have a pretty blank look on their face. Afterwards they told me how great it was.

My point with these anecdotes – don’t judge people by their faces.

Closing thoughts

Running an effective workshop isn’t something you can quickly bodge together but you don’t need to be a rock star in your field or devote your life to it. If you’re given the opportunity, go for it. Check out your local MoT group, or other meet ups in the area, and offer a bit of your time.

Categories
Experience Reports

Collaborative reflections

Earlier this year I wrote about my changing role and push to help improve working practices and support my teams in testing.

In terms of my role, it is certainly coming together. Over the past few weeks I’ve been getting involved more and more with teams and helping teams reflect on quality. My big win is that my email signature is updated to state “Senior Test Engineer” (although my mug says “QA Lead”, which confused me a little).

What I wanted to delve into in this post was that reflection.

One initiative that I’ve started driving is regular bug RCA sessions.

I’ve found these to be interesting sessions and generally I’ve been pleased at how open and honest people have been. This is an essential requirement for teams to learn, develop and improve. Whilst my nature says “oh, it’s all the teams’ doing”, I think I deserve a wee bit of credit for setting the tone.

Before we get started, and again at the start of the session, I like to stress that it isn’t about blame. It is a collaborative effort to ship a bug. Similarly, mistakes will happen, which is why we have processes and practices in place to help us catch and rectify things. If one person is truly to blame, we have bigger problems.

I’ve also been attending, and on occasion facilitating retrospectives.

In both my retros and RCAs one thing I’ve done is play on (and exaggerate) my inexperience in the team’s working practices and software. Being a “question asker” is a valuable quality of a tester and I’ve been experimenting with this. I quite like “tell it to me like I’m an idiot” or “assume I know nothing” (perhaps a Jon Snow costume could enhance this?). The goal here is to get people talking and explaining. It also means that I can ask “so do you all do X testing?” then “Ah okay, you mind me asking why not?” and finally “how can I help?”.

Basically if I have a suspicion, I’ll ask a bunch of questions “to help me get up to speed” and hopefully surface that without being direct (and potentially causing conflict). I’ll also repeat what people have said back to the group to reinforce and also help confirm that understanding. It’s especially useful to move a discussion forward.

I’ve been learning about coaching vs teaching, especially through talks from the likes of Vernon Richards and Stu Day. I feel like this has helped me act not just as a facilitator but as a bit of a coach in this role, though I realise that I still have a tendency to lead with some of my questions, getting people talking about what I want rather than where they might have headed naturally.

Finally the other thing I’ve been doing is remotely joining and listening in to meetings whilst I work. Teams can ask my opinion if they want, otherwise I’ll be learning how they work, picking up on things then trying to feed that back when relevant, whether that is between teams or within the team itself.

Categories
Experience Reports, Ramblings

Shifting the QA stuff left

Revenge of the Gatekeeper

For the past couple of months I’ve spent half my time, which is in itself reduced following complications, as a “QA Champion”. A title I dislike. In particular I dislike how QA tends to be associated with the testing best described as “checking” and “QA monkeys” running “test scripts”.

The organisation that I joined as “QA Champion” (I’ve died a little more inside when writing that) has had quality issues, especially since a re-org tried to move us into a LeSS structure with dev teams taking on quality and testing. This was first done by getting rid of testers (well not quite) and trying to shift the QA left to do QA earlier. Everyone needs to be doing some QA, not just the QA teams that still sort of exist. We plan to solve this by automating all the QA.

Ignoring the “automation solves everything” part and the fact that we are working on an old tech stack that lacks testability, I have real issues with our approach.

Let’s have a quick prequel first.

My understanding is that our old methods were that testing was thrown over the fence to teams who ran lots and lots of test cases then produced reports and would be the authority on whether we can release. In other words QA as the gatekeepers of quality. To me this is the dark side and rightfully needs defeating.

So why the ranting, rambling blog post? Surely I should be happy that we’re abandoning this? Well, no. I’ve found that being a QA Champion has sod all to do with championing quality; we’re the new gatekeepers of quality.

What have I become?

A New Hope

As bleak as it may seem, a shift in the balance may be happening. I’m hoping that I’ve awakened something within the QA Champion group so that we can become a force for good.

I’ve started something of a rebellion. Through a tech talk I shared what I think continuous testing, shift left and quality engineering, all that goodness, could look like for us. With passionate discussion (or constant whining), I’ve got discussions going where I think that there is a strong alliance to bring around change.

My goal is built on two pillars right now.

First, with the teams that I immediately work with, we are looking to use practices that help us test continuously and build quality into our day-to-day. This hasn’t been a hard sell because:

  • There’s a strong desire to improve quality where possible.
  • I’ve said that I’ll have their backs in rejecting the initiatives coming from above (i.e. the ways of the dark side).
  • Many of us have worked together. Some of them taught me what I’m now passing back.

Secondly, I am continuing to raise the discussion on what I believe to be the right practice and challenging things like “let’s get teams producing reports detailing all their testing for an epic”. I hope to change the language and expectations from management so that they aren’t looking for the same sort of gatekeeping as before. If they want us to be potentially shippable, instead of going through the seemingly non-stop, arduous and prolonged hardening and release processes, let’s focus our energies on employing quality engineering.

A Force Awakened

So far I’ve had mixed feelings about my progress. There are moments of hope and feeling like there’s positivity and receptiveness to this, followed by a request for dev teams to run hundreds of manual test cases each in a hardening phase for an internal release.

However I’ve now shared our new approach in the latest sprint review. We’ve had lively and positive discussions about how we can actually get to this stage. Things may actually start coming together.

I will wrap up this post by saying that if the next month or so goes well, I hope to get my job title changed from Senior Test Engineer and get the QA Champion program re-branded.

Hopefully this tale will end with the rise of the Senior Quality Engineer.

(sorry)

Categories
Experience Reports, Ramblings

Why dev testing fails

During my time as a developer I generally produced a good quality of work. My knowledge of design patterns may not have been very good but I tested my stuff to ensure that it did what I expected it to and that is why there was generally a low bug count for my work… But there was a bug count.

Given that I am very confident in my testing skills (more than my dev skills!), how was it that I missed things in my dev testing? In this blog post I hope to explore that topic.

I believe there are four reasons why we may miss things in our dev testing and push code changes with bugs:

  • Blind Spots: Being human, we have cognitive biases and a tendency to miss certain edge cases. There are things I would frequently miss at first, which therefore got more attention over time. Mine was writing incomplete log messages.
  • Laziness: Most developers want to be writing code so we can be tempted to get our code pushed as quickly as possible in order to move on to the next thing. Certain aspects of testing may be areas where we get slack or cut corners. I mostly got slack around install/upgrade. Yawn. (See also I get bored)
  • Iterative: My approach to development is very iterative. Sometimes I’ll use TDD and sometimes unit tests come a little later but I’ll always manually test my code as I’m working on it. In theory this is great but I’m not going to re-test everything on each iteration. Occasionally this meant that I missed finding out that I broke something.
  • Scope: I don’t believe dev testing should include things like system, load, soak or other more involved tests. If there’s a high time investment to test various scenarios, rather than having two people repeat the same testing, we will often cover the highest-risk tests in dev testing and leave the rest for the test phase. Some testing is best done by the person testing a completed user story / feature. Consequently it is quite reasonable for developers to exclude certain testing from their dev testing.

(oh hey, BsLIS, or BLISS – let’s say I meant that)

So there are some understandable reasons for us to end up blissfully pushing bugs. How do we solve it?

To some extent, I don’t think we need to do anything special.

This is the reason why we (should) have dedicated test phases in a user story’s life. Ideally, by working in teams where testing is completed by the development team (possibly by a dedicated tester), you can build up the relationships, understand each other and catch these things immediately.

As a tester I like to try and get to know my colleagues and how they work. I like to understand what mistakes they may make, as well as thinking about what mistakes I may make. If we can understand and appreciate how these bugs can slip through dev testing, we can catch them easier.

Obviously if you know your flaws then it is good to work on them. If testing exposes your blind spots, try to force yourself to be more aware of them. Using techniques like TDD allows us to ensure, at least at a unit test level, that our iterative changes aren’t breaking our previous work. Also lean on your colleagues and your testers. If bugs are raised against your software, ask yourself why you missed it. Mistakes happen. It’s fine – especially if we’ve caught it before shipping.

My one final note is that it is OK to knowingly not manually dev test an area of code, so long as we clearly communicate not only what we’ve tested but also what we’ve knowingly not tested. Writing automated tests also lets you flex the scope of your manual dev testing.

Dev testing doesn’t need to be perfect. Instead communicate, be clear and work together. The team can be way more than the sum of its parts.

Categories
Experience Reports

Challenging myself in Security

Over the past 2 years, and the past year in particular, I have been learning about cyber security. Whilst I have spoken a lot about threat modelling and even created my own card game (see threatagentsgame.com), I have also been learning about the wider area.

I recently took part in a tournament by Secure Flag (courtesy of my work). It was quite interesting to get a more practical learning experience of how a lot of the vulnerabilities that I’d heard about work in practice. Much of it was focused on coding (not my strong suit) and if it weren’t for being stumped by a JavaScript-based framework that I hadn’t heard of, let alone used, I would have got perfect points – so that was pretty cool.

What I most enjoyed were some of the ‘QA’ exercises and how I approached testing in the code-based exercises. The platform would spin up a VM/container that you could use via the web browser. It would typically contain a browser, Visual Studio Code (aside from the QA activities) and Postman. I was then typically using Postman to make my attacks and also to write tests to verify the fixed environment.

So why am I wittering about this?

Many of the attacks that I made using Postman, XSS injection and the like weren’t all that different to testing that I might perform ordinarily. Many people test APIs using Postman and are used to inserting “dodgy” values to try and break an API or web form, so is it really a stretch to use SQL injection or XSS injection?

I quickly learnt that adding some security/pen testing to my toolkit is actually pretty straightforward and not so different from the typical exploratory/destructive testing that I might perform. In fact, some of the actions that I performed were things that I might have done in the past. When, as testers, we try to circumvent the intended rules of the system, we are performing elevation of privilege attacks. When we try to provoke a crash, we’re performing denial of service attacks. When we try to unleash chaos by meddling with data, we’re performing tampering attacks.
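To make that concrete, here is the classic SQL injection in miniature. This isn’t from the tournament; the table, data and function names are all invented for illustration, using Python’s built-in sqlite3:

```python
import sqlite3

def find_user_vulnerable(conn, name: str):
    # Builds SQL by string concatenation - the classic injection hole.
    query = "SELECT username FROM users WHERE username = '" + name + "'"
    return [row[0] for row in conn.execute(query)]

def find_user_safe(conn, name: str):
    # Parameterised query: the input is treated as data, not as SQL.
    return [row[0] for row in conn.execute(
        "SELECT username FROM users WHERE username = ?", (name,))]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

payload = "' OR '1'='1"   # the sort of "dodgy value" any tester might try
print(find_user_vulnerable(conn, payload))  # leaks every row: ['alice', 'bob']
print(find_user_safe(conn, payload))        # returns nothing: []
```

The payload is just another boundary value, like the ones we already throw at forms and APIs; it only happens to be one with security consequences.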

Security testing isn’t some special skill for people with fancy qualifications. It is testing. It is what I do.

Note: If anyone reading this is curious then check out OWASP Juice Shop. It is free and in my brief play with it, it is quite fun!

Categories
Experience Reports, Ramblings

I spoke at TestBash UK

In late September I attended my first in person testing conference, TestBash UK.

Early arrival at TestBash UK

I’ve previously been to online events, in person agile and development conferences and an alternative style event – TestBashX Edinburgh, but this was especially exciting.

I was attending as a speaker.

Life Goals

Now before I talk more on my actual experience as a speaker, I want to go back a few years. Throughout my life, my career decisions have been to put myself in a position where I can make a difference beyond the team. To do something people may care about. Whether it was the industry when working in games, becoming an “expert” in the growing ONVIF field or my ideas around “Behaviour-driven Lean Testing”, it all boiled down to one thing.

I wanted to do *something*. To be *someone*. The idea that I could meet a stranger and for them to know of my work was a big dream.

(I know, groan)

When I created my Threat Agents game I wasn’t sure of its value initially but people were very excited by it. Jump forwards less than a year and I am attending TestBash UK as a speaker.

Anxiety

As the event drew closer, I started getting nervous. I didn’t know anyone there. I have social anxiety and whilst speaking didn’t scare me (too much), turning up at a conference did. That first moment of walking up to the bar to have a drink with people, I was trembling.

However what struck me and made it such a wonderful event was how welcoming and friendly everyone was. If I spent longer than 2 minutes looking like a deer in headlights, someone would come over and introduce themselves. When looking for a seat, I’d be invited over.

I got to meet so many lovely people. There were folk from throughout the UK and beyond, each with different levels of experience, from someone new to testing to a veteran over decades. I spoke to many people with a mix of skill sets and different passions within testing.

Pride

My talk was (not surprisingly) on threat modelling – in particular, my journey into threat modelling and how I’ve brought it to my team.

I’d been practising it over and over, walking around my living room whilst speaking to an empty sofa. How would it feel doing this in front of people? Especially because I’m a very anxious, shy and nervous person (at first).

There was only one way to go about it – go for it. Embrace it.

Belly’s gonna get ya! – Speaking at TestBash UK

After (hopefully no longer than) 30 minutes my talk was complete. The crowd had laughed at my jokes, applauded my video and gave a positive response. I was beaming afterwards! The following day I ran my workshop and people warmed to it really well. My favourite moment of the entire conference was just listening in on one of the groups and hearing a perfect example of a threat modelling discussion.

Threat modeling in action during the workshop

It was the proudest few days of my life, other than my wedding of course.

And the exciting part is that I’m not done there…

Takeaway Express

If anyone is reading this and hasn’t attended a conference before then I’d thoroughly recommend it. Not only do you get to attend great talks (and often also workshops etc) but networking is a huge part of what makes an event so great. I’d always thought that “networking” with people would be like my initial experience and impression of LinkedIn – trying to promote yourself on the jobs market – but it is so much more. It is a great mixture of socialising and learning with maybe a dash of schmoozing along the way.

And finally in other news

Just make sure you leave your weekend free afterwards because you might be pretty knackered! For example, maybe don’t go to a zoo spread over a steep hill with your niece and nephew!

Categories
Experience Reports, Ramblings

Effective testing

Many people view testing as a checkbox exercise where you spend a chunk of time writing what you plan to test to verify the ACs (basically re-wording the ACs) and what you will regression test. This may be in test cases or tasks/comments on a story. Once reviewed/shared, the tester of the story would then go through these one at a time and check them off.

One advantage of this is that you are less likely to miss something or forget to check that a certain behaviour works. The other advantages are… well, actually I don’t have a scooby. You can give them to anyone who can follow instructions, I guess?

It probably isn’t a surprise to hear that I’m not fond of this approach.

This is where I babble on about exploratory testing right? Well no.

My preference isn’t quite exploratory testing as per the textbooks. I will have an idea of what I need to test and I may jot down a few bullet points to remind myself. These aren’t as detailed as charters, lacking the clear scope and purpose. Instead, these bullet points serve as a reminder for if/when I get sidetracked.

As I use the software and a new feature, it can often quickly become clear what is pretty bulletproof and what could be a little more flaky. Based on the story, my experience of the software (and software in general), plus knowledge of previous bugs, I can identify the risks as I go. If I’m being effective in my testing, I can quickly adapt and refocus my efforts so that most of my time is spent trying out the behaviour that is most likely to be broken.

Bringing this back to how some colleagues think testing should work: when doing the more rote testing, you might spend 50% of your time testing what the developers have already tested. On top of that, you’ll probably be testing behaviour that you’d see anyway just through using the software.

I feel like there’s a lot more value in just using the software. You can use heuristics, notes or charters to guide you but don’t give equal weight to code that never yields bugs compared to an area where there’s more likely to be bugs. I try to avoid having to explicitly test the more basic / core ACs. In most cases I will hit these as part of my exploration and testing.

After all, most interesting functional bugs tend to be a result of behaviour that wasn’t called out in the ACs or planning.

(This assumes the tester is involved throughout the SDLC, as they should be)

The other thing of course that any good tester will do is learn.

Software can be very complicated and it is quite rare for anyone to be an expert in every single feature, protocol and so forth within the solution. Quite often I will be learning something new about the software, Windows or tooling. As we experiment we may find that the operating system has a nuance or configuration option that we previously weren’t aware of, or find a piece of related functionality that hasn’t received much love in a while, where no one thought about how it responds to the changes in your user story.

There is also the behaviour that may not be captured in the ACs. I wouldn’t expect the ACs to detail the expected behaviour of every single possible scenario that could occur (you could be writing ACs until the end of time). Through your exploration you will hopefully be asking yourself questions that weren’t asked back during planning (especially before anyone had seen it for real). Even if no one asked the question, finding out the answer can still be valuable and insightful. It may be that the behaviour in some circumstances “doesn’t matter” as far as the Product Owner is concerned for the feature to ship; however, it can still be worth sharing with them.

A learning isn’t necessarily just “this is odd” or “I have a new skill”. It may be that your new button correctly handles OS themes, even if your app doesn’t have it as a requirement.

In my opinion a good tester, after a day of testing, will have more to say than “ACs all passed”.

As a bit of a silly example of what I mean in all this waffle, let’s say I was testing a video game that has added the ability to walk around in a user story. Do I need to study the ACs? A lot of it is hopefully obvious. Perhaps I’ll make a quick skim read and jot a few notes. The second I get in I might use the left thumbstick to move around. I’ll no doubt go in a bunch of different directions. What about other buttons/sticks? As I use it, can I notice speed differences depending on how much I tilt the stick? What happens if I spin it? If I bring up the Inventory added previously, does my character run around as I am picking my items?

After a little while I’ll have a good feel for it. I’ve possibly found a few issues with it and have a few notes. I can then go back and properly read through the ACs. No doubt I can tick off most of the ACs already and can then look at verifying the remaining ACs as well as exploring the areas that aren’t working as expected – or aren’t defined in the ACs. For example if I found that the inventory has issues, what more can I do?

So hopefully that explains why I think a skilled manual tester is most effective and useful when using their knowledge, experience and heuristics to define their testing and letting it evolve based on what they find, rather than being dictated to by a fixed list.

As a short note, recently I was involved in a nasty support case. After finding the bug, I was tasked with giving an estimate for getting a fix out. In doing this I called out the sorts of testing that would be required. A day or two later I was quizzed as to why I hadn’t written up a proper test plan yet. My response – I wasn’t planning on writing one. I never do. I think we were equally horrified with each other: the Tech Support manager thinking I was lazy and not treating it seriously, and me wondering why I was stuck back in the dark ages.

Categories
Experience Reports

Using unit tests to unlock quality (Pt II)

In a previous blog entry I talked about unit testing and how I’ve learnt from my (many) mistakes when writing unit tests and practices that I’ve seen that wind me up.

Today I’d like to talk about how I’ve been writing unit tests recently, employing the ideas of TDD (test driven development), and some of the pros and cons of using this approach.

When I first learnt of TDD and was strongly encouraged to use it, I thought it was about writing tests and then code. This is kind of true but it is also a gross simplification, and a misunderstanding that others I’ve spoken with share. At the time I really didn’t like it and rejected the idea, but having learnt more, I think it is actually kind of swell.

TDD is more iterative and helps you design the code.

  1. Write a “single” unit test describing an aspect of the program
  2. Run the test, which should fail because the program lacks that feature
  3. Write “just enough” code, the simplest possible, to make the test pass
  4. “Refactor” the code until it conforms to the simplicity criteria
  5. Repeat, “accumulating” unit tests over time

Here’s a basic example of TDD for a method that takes two strings and adds them:

  1. Start with the most basic case:
    • Assert.Eq(myThing.Add("1", "2"), 3)
  2. Write code to make that pass.
  3. Tidy up the code you’ve written
  4. Repeat the process as you build up functionality
  5. What’s next? Error handling with string parsing:
    • Assert.Null(myThing.Add("cat", "2"))
  6. After writing the test, see the result and fix if necessary (seems likely at this point).
  7. Okay, time to do the tricky bit. Again, write a new test, see the result and iterate:
    • Assert.Eq(myThing.Add("one", "2"), 3)
  8. Some edge cases:
    • Assert.Eq(myThing.Add("-3", "four"), 1)
    • or: Assert.Eq(myThing.Add(input1, input2), expectedOutput)
  9. What’s next? Error handling:
    • StrToInt.returns(null) / StrToInt.Throws(ex)
  10. And so on…
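To make the cycle above concrete, here is a sketch (in Python rather than the C#-flavoured pseudocode above) of where those accumulated tests might leave you. The names `string_add`, `_parse` and `WORDS` are illustrative, not from the post, and the word list is deliberately tiny:

```python
# The "just enough" implementation built up over the TDD cycle,
# plus the tests accumulated along the way.

WORDS = {"one": 1, "two": 2, "three": 3, "four": 4}

def _parse(s):
    """Parse a numeric string, or a small number word; None on failure."""
    try:
        return int(s)
    except ValueError:
        return WORDS.get(s.lower())

def string_add(a, b):
    """Add two number strings; return None if either fails to parse."""
    x, y = _parse(a), _parse(b)
    if x is None or y is None:
        return None
    return x + y

# Tests accumulated over the iterations:
assert string_add("1", "2") == 3        # step 1: most basic case
assert string_add("cat", "2") is None   # step 5: error handling
assert string_add("one", "2") == 3      # step 7: the tricky bit
assert string_add("-3", "four") == 1    # step 8: edge case
```

The point isn’t the final code, it’s that each assertion existed (and failed) before the code that satisfies it.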

One thing I quite liked was testing my new class’s interface from within other classes. It had me thinking “how do I want to handle these situations?”. Previously I would have written a wad of code, handling errors wherever I saw potential to bump into them, then, knowing what I intended the code to do, I’d write the test to ensure it passed. TDD got me more focused on desirable behaviour.

The other benefit I found was that if adding an extra bit of functionality required touching other unit tests that weren’t interested in that change, I knew that my code design was wrong. I was building much more independent tests and therefore, I hope, more maintainable code. If we decide to change how one part of a method behaves, I won’t have to update every sodding test like we did in the past.

Of course the benefit of better and more maintainable code could just be because I’m more experienced (even though I’ve barely written code since returning to a test role). However I remember feeling especially chuffed with the code.

I’ve heard that TDD can help reduce the manual testing required. Personally I’m not sure that is the case for me, given that historically I’ve had very good coverage – even if the tests were written in an overly complicated manner. Anyway, I’d be very apprehensive about reducing functional testing on the basis of code being unit tested. However I was at least happy that I wouldn’t need to repeat manual dev testing.

There are of course drawbacks. I would have a torrid time if I tried doing this in an area that has really badly written code and tests. It was definitely easier to embrace when I was adding new features.

Also thinking back to some of my previous projects, I may have started work on a changeset with a less defined idea of what I wanted to do. We all know (hopefully) of exploratory testing but I’ve often embraced “exploratory coding”, where I’m exploring ideas of how to put together a class or how an API works through the code.

You can probably still use TDD early doors here by writing behaviour-driven tests with little thought on implementation. However my problem here is that if I’m not confident about how something will work, I can find myself adding/removing parameters and changing my design of the code quite a bit until I get a “feel” for it.

I’ve found that if it isn’t a clear area that I’m working on, I might do my exploration of the code, see how it works, understand what I want to be doing and importantly, know that my code is like my exploration notes and not get attached. Then when I have an understanding, I’ll switch to TDD and write it “for real”. However I’ve only limited experience of doing this so I’m not sure how practical it is.

Finally in my experience so far I’ve found that it was definitely slower than some of my similar sized user stories in the past. In the short term it may negatively impact velocity and leave a bad impression but if you’re writing tests that are easier to maintain then this should benefit you in the long run.

Yes, it took me longer to write each changeset, but I wasn’t re-writing unit tests every time my next changeset built upon my previous code. The next time I work on this feature I expect to be quicker than I would have in the past.

In the long run, TDD seems like it will not only help me write better code and tests, but whoever picks up working on that area will hopefully thank me for the effort. I’d certainly be grateful if the previous developer in an area had written maintainable and testable code.

Categories
Experience Reports

Using unit tests to unlock quality (Pt I)

When I started working as a developer my mentor taught me to write unit tests with each changeset, so I did. After switching team, my new lead & mentor had us doing the same and I learnt new techniques to write more complex unit tests. When a couple of newer members joined the team, getting unit tests written was something I pushed hard. After all, it was good practice that all good software engineers do.

One of my strengths, or so I thought, was writing unit tests for any and every method. No matter how ugly the code that the test was for was, as a (very small) team we had great coverage… even if it became a running joke that maintaining the tests was often most of a user story.

In hindsight I realise that I was wrong on two counts.

Not all developers are writing unit tests anywhere near the level that I thought.

It surprised me when we kept having regressions in sections of code. I asked why unit tests weren’t catching them. The simple answer was the code was too hard to unit test.

Now in the developer’s defence here, this is a very old code base that they were building upon and there was no existing coverage, but I want to talk about the idea that code can be too hard or “not possible” to test.

One of the most common challenges that I’ve seen is with calling APIs (Windows, first party or third party) or where your method relies on an external entity. Some examples might be using DirectX, accessing the file system or calling an API for a third party system.

The solution is, in theory, pretty simple. Mocking. Rather than calling DirectX directly, have a wrapper and call that. Keep your logic separate from the API calls and you can test it. This is good for developing maintainable code as well as good for your testing. There may be the odd exception where your wrapper might complicate things too much, but that should be a rarity not a norm.
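As a minimal sketch of the wrapper idea (in Python, with illustrative names like `FakeStorage` and `save_report` that aren’t from the post): the logic depends on a thin interface rather than the file system directly, so a test can swap in a fake.

```python
class Storage:
    """Thin wrapper around the real file system."""
    def write(self, path, data):
        with open(path, "w") as f:
            f.write(data)

class FakeStorage:
    """Test double: records writes instead of touching the disk."""
    def __init__(self):
        self.writes = {}
    def write(self, path, data):
        self.writes[path] = data

def save_report(storage, name, lines):
    """The logic under test: formatting stays separate from the API call."""
    storage.write(name + ".txt", "\n".join(lines))

# The unit test exercises the formatting logic with no real I/O:
fake = FakeStorage()
save_report(fake, "summary", ["a", "b"])
assert fake.writes == {"summary.txt": "a\nb"}
```

In production you pass the real `Storage`; in tests you pass the fake. The logic never knows the difference.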

The other reason for not writing tests is where timing issues make the tests flaky. Now this is a good reason not to automate something, as I believe that a flaky test is worse than no test. However, again, in most cases I have found that mocking is the solution. In projects where I’ve been a developer we always have wrappers for our timers, so that if we want to test the behaviour in response to a timer elapsing, we just invoke the timer.
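A sketch of that timer-wrapper pattern, assuming hypothetical names (`FakeTimer`, `Heartbeat`): the code under test registers a callback with the timer object, and the test fires the elapse directly instead of waiting.

```python
class FakeTimer:
    """Test double for a timer: no real scheduling, just a stored callback."""
    def __init__(self):
        self.callback = None
    def start(self, interval, callback):
        self.callback = callback
    def fire(self):
        self.callback()  # the test invokes the "elapsed" event directly

class Heartbeat:
    """Code under test: counts timer elapses."""
    def __init__(self, timer):
        self.beats = 0
        timer.start(5.0, self.on_elapsed)
    def on_elapsed(self):
        self.beats += 1

timer = FakeTimer()
hb = Heartbeat(timer)
timer.fire()
assert hb.beats == 1  # behaviour verified without waiting five seconds
```

The test is deterministic: no sleeps, no race between the timer thread and the assertion.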

I’ve found dependency injection to be really useful in making my code testable. We’ve also used reflection as well where you can insert your mock into a created object. You can also set certain properties so that if you’ve got a private member for “isAlive” then you can test “personUnderTest.PokeWith(stick)” with different values for “isAlive”, without having to include steps like “personUnderTest.ThrowOffBridge()” in your setup (meaning changes to ThrowOffBridge can affect PokeWith).
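The original examples read like C# with reflection, but the same idea sketched in Python (following the post’s `isAlive`/`PokeWith`/`ThrowOffBridge` names): set the internal state directly in the test rather than driving it through unrelated setup calls.

```python
class Person:
    def __init__(self):
        self._is_alive = True
    def throw_off_bridge(self):
        self._is_alive = False
    def poke_with(self, thing):
        return "ouch" if self._is_alive else "no response"

# Test both branches of poke_with by setting the private state directly,
# so a change to throw_off_bridge can't break these tests:
p = Person()
p._is_alive = True
assert p.poke_with("stick") == "ouch"
p._is_alive = False
assert p.poke_with("stick") == "no response"
```

In a language with enforced access modifiers you’d reach for reflection or a test seam to do the same thing; the principle is identical.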

Another thing that I’ve found a little unsettling is “it’s all pushed, I’ve just some unit tests to write.”

No, no no.

There’s a few big issues here:

  • It assumes that your code would pass unit testing before trying.
  • It assumes that your code is testable.
  • If either of those are not true then you will have to re-write the functional code, dev test it again then get it through review again.
  • It can lead you to write unit tests to pass, rather than to test.

My other learning is how bad my tests and code were.

Some of the methods that we wrote were massive and complicated. This meant that in order to unit test one part of the code, I needed to mock and setup absolutely loads of other code. The worst part was making changes. Because we decided that one small part of the business logic needed changing, I was fixing up dozens of unit tests. It was nasty.

I really have learnt the value in keeping things small and ensuring that your methods are serving one function, not “go do everything”.

The other major mistake that I made was being (what I thought was) clever in creating tests where I could set a bunch of inputs on different parameters and then the expected output – for example, changing how some of my mocks were set up based on logic inside the unit test itself. Only needing one unit test to cover a bunch of different business logic is genius, right?

No. No it is not. It meant that I had tests that were very hard to debug when they failed. It also made it really awkward when we made a tweak or extension to the behaviour.

Lesson learnt: Keep your code and tests simple!

In my next post I will explore more on the technique(s) that I’ve been using to improve my unit tests.