Categories
Ramblings

Change

I am writing this from a dark place. That isn’t my solution to rising energy prices, but rather the result of the loss of my wife. I mention this as I think it is relevant to the changes that my career has taken and what I want to talk about today.

Change can be hard. It is even harder when it isn’t something that you’d planned for or really wanted.

The first company that I joined was using practices most accurately described as waterfall. After many years of development hell, the product was finally released and a shift to more agile working was on the cards, but it was too late and we went bust. It was interesting seeing the impact of how we worked and the challenges in the introduction of “agile” within our teams.

When I started at this company I was fresh out of Uni and, whilst a shy, timid geek, I lived rather carefree and lacked purpose beyond my work. My days would involve working, watching trash TV and playing games, or going out drinking, and this was the case until right at the end, when I met a wonderful person, Hannah.

After this I started a new role in a new sector, testing surveillance systems. The company had been stuck in a bit of development hell but were finally nearing release. As that completed, the company moved to use (some form of) agile working. I think lean, or scrum of scrums (I get confused over terms at times). This was an interesting period and people responded well. Over the next year or two the company really seemed to improve its ways of working. I was seeing some of the advantages of agile working and whilst I was still technically in a separate test team, I got to work closely with the developers and really liked that.

That said, I wasn’t enjoying work. Testing practices were far too dependent on writing lots of documents, executing what is written in the documents, then writing more documents on what was done. My frustration with this, and over time my interest in C# (from hobby game dev), led to me moving to development.

During this period of my life it all seemed to come together. I was happy with my partner, evolving into a better person, enjoying my hobby game dev and happy at work. Whilst there were ups and downs, it always felt like I was moving forward in life. I ended up getting married and life was pretty darn good. During this time my work had also evolved, with a Kanban workflow and teams with embedded testers. That worked really well and I did really like being in the team, even on those days when the project sucked.

Strangely I started feeling down. Missing a “purpose”. I’d been encouraged to push more to learn and develop my skills as a software dev, but I didn’t care about it. At retrospectives I cared more about testing practices. In fact half the time it was testing that was the better part of my job, as opposed to code reviews or writing documentation.

I made some bold decisions by moving back to test and also sought mental health support. 2019 was the year when I took control. This was followed by the year of chaos with the pandemic, a takeover at work, a change of teams and, with that, a move to scrum (with myself taking on the scrum master role as well as QA). It was hard and whilst I hated how changes just happened with no clear plan, once I managed to adapt it was a great time. I began to feel more at home with my teams, my wife and also my career. The Ministry of Testing became a big part of my life over the next few years – in particular the fact that I could download LOADS of great talks and watch them whenever. It felt like everything was closer, together and much better… even when the world had us all apart. Life was at its best.

Then the past 3 months happened. Hannah died during the xmas period. We don’t know why. My life was turned upside down and, on top of that, this month I’ve moved to a new (to me) project in a sort of different organisation. The project that I’m joining has been in development hell for years. There are major issues (in my view) with the testing practices. I now find myself sitting at home, by myself, watching trash TV and playing games, knowing that the next day I’m working in a scenario that I thought I’d avoided twice already.

Change is hard and it can be daunting. However, like it or not, we must go on. (I think)

Whilst I can never fix the loss of Hannah, I’m trying to refocus on my work and testing. I am looking to use whatever little energy I have to try and guide this new organisation. Rather than trying to adapt to the changes, can I make a difference and be a positive force for change? 

Providing I don’t get myself in trouble for writing about this, over the next year I hope to share how this goes. If I am able to say that I’ve managed to make something of the changes going on, well that’s something.

P.s. apologies if this is a bit too Dear Diary. It is good to say these things.

Categories
Ramblings

Exploring my testing

When I first started testing within the games industry we would perform general “destructive testing”. This basically meant there was no specific work so we went off to find bugs (or slack off). I liked to pick on a particular area and would explore that and the behaviours, looking for any little nuances.

Over the years I’ve tried using this destructive testing within my day-to-day testing of user stories, going beyond the remit of what my assigned test cases would say and trying to break the feature. I especially liked doing this when I was picking up something new or unfamiliar – my time to shine – and also to my shame, in a bit of a grumpy mood. The buzz of finding a bug, something that has previously slipped through the net, always cheered me up. Even if I didn’t find a bug, it was often enjoyable and could also be informative.

The one downside of saying that you will do this is that it has a very negative name. I am going to break all of your toys. I didn’t like using ad hoc though. That felt like telling my team “I’m going to go do stuff”.

In more recent times as I’ve learnt more about testing, and perhaps matured (that is debatable), I’ve started using the term “exploratory testing” instead. However I am aware that I’m not really doing it correctly as I was never writing charters, just bulleted lists as a reminder of areas to cover.

I still struggle with them a little. Part of my problem is that I often have an idea of the sort of thing I’m looking for, but I feel like I’m shoehorning that into the Explore … With … To Discover … format and can end up with “Explore feature with what I always use to discover any regressions”. Very meaningless.

  • Explore changing AD config with existing <redacted> users to discover if they have a seamless experience.
  • Explore alarm ownership combined with features like protection, procedure and escalation to discover if <redacted> users are handled the same as <redacted> users. [~90mins]
  • Explore video lockout with <redacted> to discover if it now works.

These are some of my charters for a large user story. Terms internal to the team have been redacted.

Interestingly when having a read of the Exploratory testing APIs section of Mark Winteringham’s “Testing Web APIs” book I learnt about an alternative template that might suit me better. Going forwards I am going to try writing my charters using “Look at … To test for …”.

The definite positive that I’ve found from my time using more structured/formal exploratory testing is when I’ve had a report to produce. Whilst I usually just keep my notes in a rolling comment on a test task in Jira (or whatever tool I’m using), occasionally using Google Docs/Sheets, for some larger testing I used a tool that I had built myself. Whilst obviously I can’t share the reports publicly, my team responded very positively to them. Typically I’d only expect people to look at my final comment on the story. To my surprise, after attaching my first exploratory report as a PDF, we were reading them as a team and chatting about some of the findings. No doubt having pictures helped, as opposed to the wall of text that might have been seen in my “test task rolling comment” approach.

Here’s an example report that my tool can create:

I have just started using the Xray Exploratory App and it seems pretty promising. I haven’t used it for any functional testing, but when I took part in a Capture the Flag style tournament it was a great way to keep notes on things I learnt about the system and commands that work.

I am starting to feel like I can genuinely call myself an exploratory tester.

Next step – getting my team using it!

Categories
Experience Reports

Challenging myself in Security

Over the past 2 years, and the past year in particular, I have been learning about cyber security. Whilst I have spoken a lot about threat modelling and even created my own card game (see threatagentsgame.com), I have also been learning about the wider field.

I recently took part in a tournament by Secure Flag (courtesy of my work). It was quite interesting to get a more practical learning experience of how a lot of the vulnerabilities that I’d heard about work in practice. Much of it was focused on coding (not my strong suit) and if it wasn’t for being stumped on a JavaScript-based framework that I hadn’t heard of, let alone used, then I would have got perfect points – so that was pretty cool.

What I most enjoyed were some of the ‘QA’ exercises and also how I was testing my code-based exercises. The platform would spin up a VM/container that you could use via the web browser. It would typically contain a browser, Visual Studio Code (aside from the QA activities) and Postman. I would then typically use Postman to make my attacks and also to write tests to verify the fixed environment.

So why am I wittering about this?

Many of the attacks that I made using Postman, XSS injection etc. weren’t all that different to testing that I might perform ordinarily. Many people will be testing APIs using Postman and are used to inserting “dodgy” values to try and break an API or web form, so is it really a stretch to use SQL injection or XSS injection?
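To illustrate how close that mindset is to everyday “dodgy value” testing, here’s a minimal, self-contained Python sketch. The table, data and function are invented purely for illustration (not from any real system I’ve tested), but the payload is the classic SQL injection string:

```python
import sqlite3

# A throwaway in-memory database with some hypothetical rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "a1"), ("bob", "b2")])

def find_user_unsafe(name):
    # Vulnerable on purpose: user input is concatenated straight into the SQL.
    query = f"SELECT name FROM users WHERE name = '{name}'"
    return [row[0] for row in conn.execute(query)]

# An ordinary value behaves as expected...
assert find_user_unsafe("alice") == ["alice"]

# ...but a classic injection payload widens the WHERE clause and returns every row.
payload = "' OR '1'='1"
assert find_user_unsafe(payload) == ["alice", "bob"]
```

Note that the insert statement already shows the standard fix: had `find_user_unsafe` used `?` placeholders instead of string formatting, the payload would simply be treated as a (non-matching) name.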

I quickly learnt that adding some security/pen testing to my toolkit is actually pretty straightforward and not so different from the typical exploratory/destructive testing that I might perform. In fact some of the actions that I performed were things that I might have done in the past. When, as testers, we try to circumvent the intended rules of the system, we are attempting elevation of privilege attacks. When we suspect a crash, we’re performing denial of service attacks. When we try to unleash chaos by meddling with data, we’re performing tampering attacks.

Security testing isn’t some special skill for people with fancy qualifications. It is testing. It is what I do.

Note: If anyone reading this is curious then check out OWASP Juice Shop. It is free and in my brief play with it, it is quite fun!

Categories
Experience Reports Ramblings

I spoke at TestBash UK

In late September I attended my first in person testing conference, TestBash UK.

Early arrival at TestBash UK

I’ve previously been to online events, in person agile and development conferences and an alternative style event – TestBashX Edinburgh, but this was especially exciting.

I was attending as a speaker.

Life Goals

Now before I talk more on my actual experience as a speaker, I want to go back a few years. Throughout my life, my career decisions have been to put myself in a position where I can make a difference beyond the team. To do something people may care about. Whether it was the industry when working in games, becoming an “expert” in the growing ONVIF field or my ideas around “Behaviour-driven Lean Testing”, it all boiled down to one thing.

I wanted to do *something*. To be *someone*. The idea that I could meet a stranger and for them to know of my work was a big dream.

(I know, groan)

When I created my Threat Agents game I wasn’t sure of its value initially but people were very excited by it. Jump forwards less than a year and I am attending TestBash UK as a speaker.

Anxiety

As the event drew closer, I started getting nervous. I didn’t know anyone there. I have social anxiety and whilst speaking didn’t scare me (too much), turning up at a conference did. That first moment of walking up to the bar to have a drink with people, I was trembling.

However what struck me and made it such a wonderful event was how welcoming and friendly everyone was. If I spent longer than 2 minutes looking like a deer in headlights, someone would come over and introduce themselves. When looking for a seat, I’d be invited over.

I got to meet so many lovely people. There were folk from throughout the UK and beyond, each with different levels of experience, from someone new to testing to a veteran over decades. I spoke to many people with a mix of skill sets and different passions within testing.

Pride

My talk was (unsurprisingly) on threat modelling, in particular my journey getting into threat modelling and how I’ve brought it to my team.

I’d been practising it over and over, walking around my living room whilst speaking to an empty sofa. How would it feel doing this in front of people? Especially because I’m a very anxious, shy and nervous person (at first).

There was only one way to go about it – go for it. Embrace it.

Belly’s gonna get ya! – Speaking at TestBash UK

After (hopefully no longer than) 30 minutes my talk was complete. The crowd had laughed at my jokes, applauded my video and given a positive response. I was beaming afterwards! The following day I ran my workshop and people warmed to it really well. My favourite moment of the entire conference was just listening in on one of the groups and hearing a perfect example of a threat modelling discussion.

Threat modeling in action during the workshop

It was the proudest few days of my life, other than my wedding of course.

And the exciting part is that I’m not done there…

Takeaway Express

If anyone is reading this and hasn’t attended a conference before then I’d thoroughly recommend it. Not only do you get to attend great talks (and often also workshops etc) but networking is a huge part of what makes an event so great. I’d always thought that “networking” with people would be like my initial experience and impression of LinkedIn – trying to promote yourself on the jobs market – but it is so much more. It is a great mixture of socialising and learning with maybe a dash of schmoozing along the way.

And finally in other news

Just make sure you leave your weekend free afterwards because you might be pretty knackered! For example, maybe don’t go to a zoo spread over a steep hill with your niece and nephew!

Categories
Ramblings

What is the future for a manual tester?

I started as a manual QA tester for a games company back in 2008 as part of a massive team, very disconnected from developers. Since then I’ve worked in smaller test teams, within feature teams (i.e. server software) and within a scrum team. This includes a chunk of time where I was the developer, with a tester in our small team.

This time next year I am expecting that my team will join a larger organisation within my company, where I would be the only dedicated tester within a scrum team. It does raise the question of what my role will be. Unsurprisingly this is often on my mind.

It also got me thinking more about what might be the future for someone who most enjoys manual exploratory testing. What roles might there be?

No doubt that for several years to come there will still be jobs out there for manual testers, either in dedicated teams or working more closely with the developers. I hope that we see more of the latter and fewer companies still having dedicated test teams. However it is clear that most people recruiting right now are most interested in automated testing. An idea that I am not fond of and have previously written about (twice in fact).

I recall listening to an AMA session where testing guru Alan Page suggested that developers will be responsible for writing automated tests, probably with the use of record and playback tooling. From my experience working as both a developer and a test engineer, I definitely agree that having developers write the automated tests is the way forward (although convincing some of my colleagues to use record and playback seems ambitious).

Developers are obviously skilled in coding and, as part of good engineering practice, should be thinking about edge cases and writing tests. It can also help force them into thinking about writing code for testability. I’ve heard the argument that developers might miss edge cases that a tester writing automated tests would catch, which I don’t buy. There may be developers who are reluctant to write these tests, or who say they have tested when they haven’t, but I’m not accepting that as an argument either. Not trusting your teams to do a good job isn’t a reason to hire automated test engineers. If you have a weakness, you need to develop it. Perhaps be coached somehow?

Over the past couple of years I’ve heard of more and more people who’ve become test specialists/coaches who work across multiple teams in a coaching capacity. This is intended to ensure that developers are capable of doing their own testing.

I do like this as a concept and can definitely see this being the way forward. I think it works well with methodologies like scrum and ensures that the whole team is responsible for quality.

Techniques like ATDD / BDD mean that those with a business viewpoint can get involved with testing and quality by defining the tests using Gherkin. As tooling continues to improve here, it will get easier to collaborate to define behaviour and tests together. This sounds like a great time to get your test specialists involved, and it shifts that bug “know how” left.

This does however sound like it is suited for people who like changing company every year or two. That isn’t me. My other concern with this is that I really enjoy exploratory testing. It is why I switched back to test from dev. If I was to become a coach then would I be doing the job I love?

It possibly also leans into one weakness in having scrum teams being solely responsible for their own testing. Teams can easily wind up being in their own bubble, working on their features. Quite often what a tester brings is not just the “knack” of finding a bug but a wider product knowledge.

If I was able to pitch my perfect role, one that best leverages my skillset as a test engineer and brings most value to my company, it would be test specialists who are more akin to POs/BAs/Scrum Masters in an organisation than someone who comes in to help teams solve their problems, then possibly moves on.

Developers should write their own tests and functionally test what they are developing. However having a test/quality focused role that works with a handful of teams seems like a really intriguing prospect to me.

A typical work day could be joining a couple of stand ups then attending a refinement session, where I use knowledge of the products and a knack for edge cases to help teams spot the gotchas early. I then might meet up to pair with a developer who is a bit unsure of how to test a complex problem, or pick up testing of a story for a team who are short-handed at the moment. Later I have a look at a sprint review that I couldn’t make and see that a team have just got their feature across the line. I know it’s been a challenge for them, so let’s get the build installed and do some exploratory testing. Finally I might put together some training material on a new extension that I’ve found for our E2E automated testing framework.

This sounds awesome as a job to me. Being an almost free-spirit who helps the teams. Sadly it is most likely a fantasy although who knows? Perhaps there may be a role out there?

The future is rather uncertain and, if I’m being honest, a little unsettling. Will my role continue? Will I be able to find a new manual testing role should I want/need to? Will I be forced into automated testing, or (more preferably) a return to development? Will I move to coaching?

Curiously I am also thinking that with my interest and training in cyber security, I may end up being a pen tester. From my experience so far, this seems very much like classic testing – trying to find bugs in the system. To move forward in my career, will I find myself working in the same ways as all those years ago? …

/Rich

Categories
Ramblings

I get bored

As I reflected upon my career and what the future might hold, it made sense to consider my strengths, but I realised that I should also think about my weaknesses.

In terms of hard skills, I know that the nature of the software I’ve worked on means I don’t have any skill or experience in tools like Cypress.io, Selenium and Postman (although I am learning Postman!). However I don’t believe that my lack of these sorts of skills really matters. From my experience and training down the years, I have confidence that I *could* pick them up.

Soft skills are perhaps more interesting to explore. Common examples of good soft skills for testers are:

  1. Inquisitive
  2. Thorough / Attention to detail
  3. Communication
  4. Creative
  5. Analytical / logical thinking
  6. Eagerness to learn

Straight away I know my biggest weaknesses.

  • I can be sloppy
  • I am not always a great communicator, which I’ll pick up in a separate leadership post one day

Those who’ve worked with me may be surprised by the first one. My end of year reviews tend to be very positive about the quality of my work but I know in myself that I am definitely prone to errors.

For example, when reading a story I am often guilty of not reading it thoroughly, just as I would struggle with any wall of text. I lack the focus to read it carefully, and find myself skipping words and misunderstanding the meaning of a sentence.

Similarly I know that when I am doing a lot of “checking” style testing, I am very prone to mistakes. I will see what I’m wanting to see and move on ASAP. This means that if there’s a glaring error on the page, I may be completely oblivious. This happened to us when a library uptick to address issues with combo boxes meant that a date control was no longer displayed. I went through all the screens and verified that the visual oddity couldn’t be seen.

When mistakes like this happen I know that my mind tends to be in a particular state. I’m bored.

Sometimes when tasked with boring work I will try and perform exploratory testing to see what bugs I can find. And I usually find something. However other days when I have a bunch of things to check, I won’t be in the mood to go off piste and bug hunt (or time doesn’t allow me to) and I trudge through the work and miss things.

I want to solve my boredom. I want to find ways not only to ensure my work is of as high a standard as possible, but also to get greater job satisfaction. In the event that someone is actually reading this, I’d be very open to ideas!

Do I try and gamify some of these tests somehow? Do I try a new technique to help spot bugs, like blitzing through the story whilst recording it, then watching it back with a more attentive eye?

Or do I simply commit to test the f**k out of these boring tasks? I do love my bug count after all…

Categories
Experience Reports Ramblings

Effective testing

Many people view testing as a checkbox exercise where you spend a chunk of time writing what you plan to test to verify the ACs (basically re-wording the ACs) and what you will regression test. This may be in test cases or tasks/comments on a story. Once reviewed/shared, the tester of the story would then go through these one at a time and check them off.

One advantage of this is that you are less likely to miss something or forget to check that a certain behaviour works. The other advantages are… well, actually I don’t have a scooby. You can give them to anyone that can follow instructions, I guess?

It probably isn’t a surprise to hear that I’m not fond of this approach.

This is where I babble on about exploratory testing right? Well no.

My preference isn’t quite exploratory testing as per the text books. I will have an idea of what I need to test and I may jot a few bullet points to remind myself. These aren’t as detailed as charters, lacking the clear scope and purpose in them. Instead these bullet points will serve as a reminder for if/when I get side tracked.

As I use the software and a new feature, it can often quickly become clear what is pretty bulletproof and what could be a little more flaky. Based on the story, my experience of the software (and of software in general), plus knowledge of previous bugs, I can identify the risks as I go. If I’m being effective in my testing, I can quickly adapt and refocus my efforts so that most of my time is spent trying out the behaviour that is most likely to be broken.

Bringing this back to how some colleagues think that testing should work: when doing the more rote testing you might spend 50% of your time testing what the developers have already tested. On top of that, you’ll probably be testing behaviour that, through normal usage of the software, you’d see anyway.

I feel like there’s a lot more value in just using the software. You can use heuristics, notes or charters to guide you but don’t give equal weight to code that never yields bugs compared to an area where there’s more likely to be bugs. I try to avoid having to explicitly test the more basic / core ACs. In most cases I will hit these as part of my exploration and testing.

After all, most interesting functional bugs tend to be a result of behaviour that wasn’t called out in the ACs or planning.

(This assumes the tester is involved throughout the SDLC, as they should be)

The other thing of course that any good tester will do is learn.

Software can be very complicated and it can be quite rare for anyone to be an expert on every single feature, protocol and so forth within the solution. Quite often I will be learning something new about the software, Windows or tooling. As we experiment we may find that the operating system has a nuance or configuration option that we previously weren’t aware of, or find a piece of related functionality that hasn’t received much love in a while, where no one thought about how it responds to the changes in your user story.

There is also the behaviour that may not be captured in the ACs. I wouldn’t expect the ACs to detail the expected behaviour for every single possible scenario that could occur (you could be writing ACs until the end of time). Through your exploration you will hopefully be asking yourself questions that weren’t asked back during planning (especially before you’d seen it for real). Even if no one asked the question, finding out the answer can still be valuable and insightful. It may be that the behaviour in some circumstances “doesn’t matter” as far as the Product Owner is concerned for the feature to ship, however it can still be worth sharing with them.

A learning isn’t necessarily just “this is odd” or “I have a new skill”. It may be that your new button correctly handles OS themes, even if your app doesn’t have it as a requirement.

In my opinion a good tester, after a day of testing will have more to say than “ACs all passed”.

As a bit of a silly example of what I mean in all this waffle, let’s say I was testing a video game that has added the ability to walk around in a user story. Do I need to study the ACs? A lot of it is hopefully obvious. Perhaps I’ll make a quick skim read and jot a few notes. The second I get in I might use the left thumbstick to move around. I’ll no doubt go in a bunch of different directions. What about other buttons/sticks? As I use it, can I notice speed differences depending on how much I tilt the stick? What happens if I spin it? If I bring up the Inventory added previously, does my character run around as I am picking my items?

After a little while I’ll have a good feel for it. I’ve possibly found a few issues with it and have a few notes. I can then go back and properly read through the ACs. No doubt I can tick off most of the ACs already and can then look at verifying the remaining ACs as well as exploring the areas that aren’t working as expected – or aren’t defined in the ACs. For example if I found that the inventory has issues, what more can I do?

So hopefully that explains how I think a skilled manual tester is most effective and useful when they are using their knowledge, experience and heuristics to define their testing and letting it evolve based on what they find, not dictated by a fixed list.

As a short note: recently I was involved in a nasty support case. After finding the bug, I was tasked with giving an estimate for getting a fix out. In doing this I called out the sorts of testing that would be required. A day or two later I was quizzed as to why I hadn’t written up a proper test plan yet. My response – I wasn’t planning on writing one. I never do. I think we were equally horrified with each other: the Tech Support manager thinking I was lazy and not treating it seriously, and me wondering why I was stuck back in the dark ages.

Categories
Experience Reports

Using unit tests to unlock quality (Pt II)

In a previous blog entry I talked about unit testing and how I’ve learnt from my (many) mistakes when writing unit tests and practices that I’ve seen that wind me up.

Today I’d like to talk about how I’ve been writing unit tests recently, employing the ideas of TDD (test driven development), and some of the pros and cons of using this approach.

When I first learnt of TDD and was strongly encouraged to use it, I thought it was about writing tests then code. This is kind of true, but it is also a gross simplification, and a misunderstanding that others I’ve spoken with share. At the time I really didn’t like it and rejected the idea, but having learnt more, I think it is actually kind of swell.

TDD is more iterative and helps you design the code.

  1. Write a “single” unit test describing an aspect of the program
  2. Run the test, which should fail because the program lacks that feature
  3. Write “just enough” code, the simplest possible, to make the test pass
  4. “Refactor” the code until it conforms to the simplicity criteria
  5. Repeat, “accumulating” unit tests over time

Here’s a basic example of TDD for a method that takes two strings and adds them:

  1. Start with the most basic case:
    • Assert.Eq(myThing.Add(“1”, “2”), 3)
  2. Write code to make that pass.
  3. Tidy up the code you’ve written
  4. Repeat the process as you build up functionality
  5. What’s next? Error handling with string parsing:
    • Assert.Null(myThing.Add(“cat”, “2”))
  6. After writing the test, see the result and fix if necessary (seems likely at this point).
  7. Okay, time to do the tricky bit. Again, write a new test, see the result and iterate:
    • Assert.Eq(myThing.Add(“one”, “2”), 3)
  8. Some edge cases:
    • Assert.Eq(myThing.Add(“-3”, “four”), 1)
    • or: Assert.Eq(myThing.Add(input1, input2), expectedOutput)
  9. What’s next? Error handling:
    • StrToInt.returns(null) / StrToInt.Throws(ex)
  10. And so on…
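The assertions above are C#-flavoured pseudocode. As a rough sketch of where those iterations might end up (the `WORDS` lookup table and the function name are my own invention, not from the original steps), here is the accumulated behaviour in Python, with the tests written as plain assertions in roughly the order they would have been added:

```python
# Hypothetical end state after the TDD iterations described above.
WORDS = {"one": 1, "two": 2, "three": 3, "four": 4}  # assumed word table

def add(a, b):
    """Add two numeric strings; return None if either can't be parsed."""
    def parse(s):
        s = s.strip().lower()
        if s.isdigit() or (s.startswith("-") and s[1:].isdigit()):
            return int(s)
        return WORDS.get(s)  # word inputs like "four"
    x, y = parse(a), parse(b)
    return None if x is None or y is None else x + y

# Tests accumulated one iteration at a time:
assert add("1", "2") == 3          # step 1: the most basic case
assert add("cat", "2") is None     # step 5: error handling for bad input
assert add("one", "2") == 3        # step 7: the tricky word parsing
assert add("-3", "four") == 1      # step 8: an edge case
```

Each assertion would have been written first, watched failing, and then made to pass with the smallest possible change, rather than the function being written in one go as shown here.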

One thing I quite liked was using the interface of my new class from within other classes while testing. It had me thinking “how do I want to handle these situations?”. Previously I would have written a wad of code, handling errors as I saw the potential to bump into them, and then, knowing what I intended the code to do, I’d write the tests to ensure it passed. TDD got me more focused on desirable behaviour.

The other benefit I found was that if adding an extra bit of functionality required touching other unit tests that weren’t interested in that change, I knew that my code design was wrong. I was building much more independent tests and therefore, I hope, more maintainable code. If we decide to change how one bit of a method is handled, I won’t have to update every sodding test like we did in the past.

Of course, the benefit of better and more maintainable code could just be down to me being more experienced (even if I’ve barely written code since returning to a test role). However, I remember feeling especially chuffed with the code.

I’ve heard that TDD can help reduce the manual testing required. Personally I’m not sure that is the case for me, given that historically I’ve had very good coverage – even if the tests were written in an overly complicated manner. Anyway, I’d be very apprehensive about reducing the functional testing on the basis of code being unit tested. However, I was at least happier that I wouldn’t need to repeat manual dev testing.

There are of course drawbacks. I would have a torrid time if I tried doing this in an area that has really badly written code and tests. It was definitely easier to embrace when I was adding new features.

Also, thinking back to some of my previous projects, I may have started work on a changeset with a less defined idea of what I wanted to do. We all know (hopefully) of exploratory testing, but I’ve often embraced “exploratory coding”, where I explore ideas of how to put together a class, or how an API works, through the code.

You can probably still use TDD with this early doors by writing behaviour-driven tests with little thought on implementation. However, my problem here is that if I’m not confident in how something will work, I can find myself adding/removing parameters and changing the design of my code quite a bit until I get a “feel” for it.

I’ve found that if it isn’t a clear area that I’m working on, I might do my exploration of the code, see how it works, understand what I want to be doing and importantly, know that my code is like my exploration notes and not get attached. Then when I have an understanding, I’ll switch to TDD and write it “for real”. However I’ve only limited experience of doing this so I’m not sure how practical it is.

Finally in my experience so far I’ve found that it was definitely slower than some of my similar sized user stories in the past. In the short term it may negatively impact velocity and leave a bad impression but if you’re writing tests that are easier to maintain then this should benefit you in the long run.

Yes, it took me longer to write each changeset, but I wasn’t re-writing unit tests every time my next changeset built upon my previous code. The next time I work on this feature I expect to be quicker than I would have in the past.

In the long run, TDD seems like it will not only help me write better code and tests but whoever picks up working on that area will hopefully thank me for the effort. I’d certainly be grateful if the previous developer in an area has written maintainable and testable code.

Categories
Experience Reports

Using unit tests to unlock quality (Pt I)

When I started working as a developer my mentor taught me to write unit tests with each changeset, so I did. After switching team, my new lead & mentor had us doing the same and I learnt new techniques to write more complex unit tests. When a couple of newer members joined the team, getting unit tests written was something I pushed hard. After all, it was good practice that all good software engineers do.

One of my strengths, or so I thought, was writing unit tests for any and every method. No matter how ugly the code that the test was for was, as a (very small) team we had great coverage… even if it became a running joke that maintaining the tests was often most of a user story.

In hindsight I realise that I was wrong on two counts.

Not all developers write unit tests to anywhere near the level that I thought.

It surprised me when we kept having regressions in sections of code. I asked why unit tests weren’t catching them. The simple answer was the code was too hard to unit test.

Now, in the developers’ defence here, this is a very old code base that they were building upon and there was no existing coverage, but I want to talk about the idea that code can be too hard or “not possible” to unit test.

One of the most common challenges that I’ve seen is with calling APIs (Windows, first-party or third-party) or where your method relies on an external entity. Some examples might be using DirectX, accessing the file system or calling an API for a third-party system.

The solution is, in theory, pretty simple. Mocking. Rather than calling DirectX directly, have a wrapper and call that. Keep your logic separate from the API calls and you can test it. This is good for developing maintainable code as well as good for your testing. There may be the odd exception where your wrapper might complicate things too much, but that should be a rarity not a norm.
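As a sketch of the wrapper idea (in Python; the names `FileStore` and `count_words` are mine, purely for illustration): the business logic depends on a thin wrapper rather than the real file system, so a unit test can substitute a fake:

```python
class FileStore:
    """Thin wrapper around the real file system.
    This is the only part left uncovered by unit tests."""
    def read(self, path):
        with open(path, encoding="utf-8") as f:
            return f.read()

def count_words(path, store):
    """Business logic: depends on the wrapper, never on open() directly."""
    return len(store.read(path).split())

# In a unit test, substitute a fake instead of touching the disk:
class FakeStore:
    def __init__(self, contents):
        self.contents = contents
    def read(self, path):
        return self.contents

assert count_words("any/path.txt", FakeStore("one two three")) == 3
```

The same shape works for DirectX or a third-party API: wrap the external calls, inject the wrapper, and test the logic against a fake.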

The other reason for not unit testing is where timing issues make the tests flaky. Now, this is a good reason not to automate something, as I believe that a flaky test is worse than no test. However, again, in most cases I have found that mocking is the solution. In projects where I’ve been a developer we always have wrappers for our timers, so that if we want to test the behaviour in response to a timer elapsing, we just invoke the timer.
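A sketch of the timer-wrapper idea (Python; `TimerWrapper` and `Heartbeat` are assumed names): production code subscribes to the wrapper, and the test fires the elapsed event directly instead of sleeping and hoping:

```python
class TimerWrapper:
    """Wraps a real timer; tests call fire() instead of waiting."""
    def __init__(self):
        self._handlers = []
    def subscribe(self, handler):
        self._handlers.append(handler)
    def fire(self):
        # In production this would be driven by a real OS/library timer.
        for handler in self._handlers:
            handler()

class Heartbeat:
    """Example class whose behaviour depends on a timer elapsing."""
    def __init__(self, timer):
        self.beats = 0
        timer.subscribe(self._on_elapsed)
    def _on_elapsed(self):
        self.beats += 1

# Deterministic test: no sleeping, no flakiness.
timer = TimerWrapper()
heartbeat = Heartbeat(timer)
timer.fire()
timer.fire()
assert heartbeat.beats == 2
```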

I’ve found dependency injection to be really useful in making my code testable. We’ve also used reflection as well where you can insert your mock into a created object. You can also set certain properties so that if you’ve got a private member for “isAlive” then you can test “personUnderTest.PokeWith(stick)” with different values for “isAlive”, without having to include steps like “personUnderTest.ThrowOffBridge()” in your setup (meaning changes to ThrowOffBridge can affect PokeWith).
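The “isAlive” example might look something like this (a Python sketch, where internal state can be set directly; in C# you would reach the private member via reflection):

```python
class Person:
    def __init__(self):
        self._is_alive = True   # private-ish member

    def throw_off_bridge(self):
        self._is_alive = False

    def poke_with(self, implement):
        return "Ouch!" if self._is_alive else "..."

# Test poke_with for both states by setting the member directly,
# rather than calling throw_off_bridge() in the setup - so changes
# to throw_off_bridge() can't break this test.
person = Person()
person._is_alive = True
assert person.poke_with("stick") == "Ouch!"
person._is_alive = False
assert person.poke_with("stick") == "..."
```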

Another thing that I’ve found a little unsettling is “it’s all pushed, I’ve just some unit tests to write.”

No, no no.

There are a few big issues here:

  • It assumes that your code would pass unit testing before trying.
  • It assumes that your code is testable.
  • If either of those is not true then you will have to re-write the functional code, dev test it again, then get it through review again.
  • It can lead you to write unit tests to pass, rather than to test.

My other learning is how bad my tests and code were.

Some of the methods that we wrote were massive and complicated. This meant that in order to unit test one part of the code, I needed to mock and setup absolutely loads of other code. The worst part was making changes. Because we decided that one small part of the business logic needed changing, I was fixing up dozens of unit tests. It was nasty.

I really have learnt the value in keeping things small and ensuring that your methods are serving one function, not “go do everything”.

The other major mistake that I made was being what I thought was clever: creating tests where I could set a bunch of inputs on different parameters and then the expected output. For example, changing how some of my mocks would be set up based on logic in my unit test. Only needing one unit test to cover a bunch of different business logic is genius, right?

No. No it is not. It meant that I had tests that were very hard to debug when they failed. It also made it really awkward when we made a tweak or extension to the behaviour.
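To illustrate the difference (a Python sketch; the `discount` function is hypothetical): the “clever” version duplicates the business logic inside the test and hides which case failed, while the simple versions each tell you exactly what broke:

```python
def discount(total, is_member):
    """Hypothetical business logic under test."""
    if is_member and total >= 100:
        return total * 0.9
    return total

# The "clever" version: one test, with its own logic to maintain,
# and no clue which case broke when the assertion fails.
def test_discount_all_cases():
    for total, is_member in [(50, False), (50, True), (100, True)]:
        expected = total * 0.9 if is_member and total >= 100 else total
        assert discount(total, is_member) == expected

# The simple version: one behaviour per test, trivially debuggable.
def test_non_member_pays_full_price():
    assert discount(100, is_member=False) == 100

def test_member_gets_discount_over_threshold():
    assert discount(100, is_member=True) == 90

test_discount_all_cases()
test_non_member_pays_full_price()
test_member_gets_discount_over_threshold()
```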

Lesson learnt: Keep your code and tests simple!

In my next post I will explore more on the technique(s) that I’ve been using to improve my unit tests.

Categories
Experience Reports Guide

Using dump files to guide testing

You don’t need to understand code to make use of dump files.

One tool that I’ve frequently used throughout my testing career (and also development) is WinDbg. I was a little surprised when I realised that very few other people use it so I thought that I’d share a little about why I use it and how to get going.

What can you do?

  • See the code path in a crash dump
  • View data in memory
  • View threads that are running when software is in a hang
  • Many more things that I’ve yet to try

This can be especially useful if you’re tasked with reproducing a crash reported by a customer and (as is unfortunately common) they say “I wasn’t doing anything”.

What you’ll need:

  • WinDbg or WinDbg Preview
  • Access to symbol files for your software (developers can probably help set you up).
  • A dump file that you want to look at (more on this later)

WinDbg is part of the Debugging Tools for Windows. You can download it for free from Microsoft. There’s a newer “Preview” version that is quite neat, plus the older one that I’m more accustomed to using, which comes as part of the Windows 10 SDK. Both are linked from here:

https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/debugger-download-tools

If the link is broken, Google for WinDbg and you should find it.

Obtaining dump files

Hopefully your software outputs crash dumps but if not, you can add some registry keys to ensure that they are generated in a known location. Even if your software does create minidumps, you may value full dumps more:

  1. Open regedit and access: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps
  2. Create the following:
    1. DumpFolder – REG_EXPAND_SZ – %LOCALAPPDATA%\CrashDumps
    2. DumpCount – REG_DWORD – 5
    3. DumpType – REG_DWORD – 2

For more see: https://docs.microsoft.com/en-us/windows/win32/wer/collecting-user-mode-dumps

If you want to create a dump file of running software, for example to inspect memory or dig into a hang then you can use task manager. Just right click on the process and click “Create dump file”.

Note that if you are dumping a 32-bit application then you’ll need the 32-bit task manager, e.g. c:\windows\syswow64\Taskmgr.exe

There’s also tools out there that will generate dump files on demand.

Symbols

The next thing that you’ll need to do is set up your symbols path. Symbols help turn the 0s and 1s in a minidump into more readable strings. Create yourself a folder for the symbols, for example c:\symbols. Then in WinDbg you’ll need to set the path. In the old version, open the File menu and you should see the option. For the Preview version, go to Settings then Debugging.

SRV*c:\symbols*https://msdl.microsoft.com/download/symbols;SRV*c:\symbols*<YOURPATHHERE>

You may have to add a few paths in there but hopefully you get the idea.

Analysing crash dumps to get a call stack

Now on to the juicy part, analysing a crash dump. You can open it from the File menu.

From the View menu, you should be able to see the option for a stack / call stack. Bring that up whilst the dump is being loaded.

Now run the following commands (they may take a few minutes):

.ecxr

!analyze -v

This should tell you a chunk of information about the crash. Based on this you can start to build a picture of what the software was doing when it went wrong.

An example of using this information

When looking for an example, I found a crash dump from a game that I made many years ago. I have absolutely no idea what might have caused it, so hopefully now I can figure out why.

My WinDbg analysis included the following:

System.NullReferenceException

This tells me that it tried using an object that didn’t exist. Either it hasn’t been set or has been deleted but is still in use.

007edc0c 081abfb6 X_Orbtek_II!X_Orbtek_360.XOrbtek.UnloadContent+0x6
007edc10 081abe73 Microsoft_Xna_Framework_Game!Microsoft.Xna.Framework.Game.DeviceDisposing+0x1b
007edc20 081abe43 Microsoft_Xna_Framework_Game!Microsoft.Xna.Framework.GraphicsDeviceManager.OnDeviceDisposing+0x13
007edc28 081abe15 Microsoft_Xna_Framework_Game!Microsoft.Xna.Framework.GraphicsDeviceManager.HandleDisposing+0x15
007edc34 081ab7cf Microsoft_Xna_Framework_Graphics!Microsoft.Xna.Framework.Graphics.GraphicsDevice.~GraphicsDevice+0x2f
007edc40 081ab72f Microsoft_Xna_Framework_Graphics!Microsoft.Xna.Framework.Graphics.GraphicsDevice.Dispose+0x1f
007edc60 081ab543 Microsoft_Xna_Framework_Game!Microsoft.Xna.Framework.GraphicsDeviceManager.Dispose+0x133
007edc74 081ab3f5 Microsoft_Xna_Framework_Game!Microsoft.Xna.Framework.GraphicsDeviceManager.System.IDisposable.Dispose+0x15
007edc80 081ab0e3 Microsoft_Xna_Framework_Game!Microsoft.Xna.Framework.Game.Dispose+0xb3
007edcb4 081ab015 Microsoft_Xna_Framework_Game!Microsoft.Xna.Framework.Game.Dispose+0x15

From this I can tell that the game was being closed. It happened when unloading content, so it has likely tried to use an object that no longer exists at that point. It may be possible to dig deeper. As you gain more skill with the software, it is possible to learn more about what was in memory and understand exactly at what point it crashed.

And here’s the key part

If I was trying to reproduce this crash, I can take the knowledge learnt from the crash dump to guide how I will try and get to the bottom of it. Unfortunately the example dump I picked is a tricky one but I could maybe come up with something like “Explore exiting the game with different assets loaded to discover the source of the crash”.

A 10-20 minute snoop in the dump file might save me a huge chunk of time in trying to reproduce a crash. Obviously I can’t share real examples from my professional life in any detail, but if I know that mousing over a control caused a “random” crash, or that the software crashed after hitting “Save” with “MyApp!MyApp.FileIo.SaveFile.ApplyTextOverlay” at the top of the call stack, then I can focus on that area.

There’s loads more that you can do but hopefully this has been useful!