Categories
Ramblings

Exploring my testing

When I first started testing within the games industry we would perform general “destructive testing”. This basically meant there was no specific work, so we went off to find bugs (or slack off). I liked to pick a particular area and explore its behaviours, looking for any little nuances.

Over the years I’ve tried using this destructive testing within my day-to-day testing of user stories, going beyond the remit of what my assigned test cases would say and trying to break the feature. I especially liked doing this when I was picking up something new or unfamiliar – my time to shine – and also, to my shame, when I was in a bit of a grumpy mood. The buzz of finding a bug, something that had previously slipped through the net, always cheered me up. Even if I didn’t find a bug, it was often enjoyable and could also be informative.

The one downside of saying that you will do this is that it has a very negative name. I am going to break all of your toys. I didn’t like using ad hoc though. That felt like telling my team “I’m going to go do stuff”.

In more recent times, as I’ve learnt more about testing and perhaps matured (that is debatable), I’ve started using the term “exploratory testing” instead. However I am aware that I haven’t really been doing it correctly, as I was never writing charters, just bulleted lists as a reminder of areas to cover.

I still struggle with them a little. Part of my problem is that I often have an idea of the sort of thing I’m looking for, but I feel like I’m shoehorning that into the Explore … With … To Discover … format and can end up with “Explore feature with what I always use to discover any regressions”. Pretty meaningless.

  • Explore changing AD config with existing <redacted> users to discover if they have a seamless experience.
  • Explore alarm ownership combined with features like protection, procedure and escalation to discover if <redacted> users are handled the same as <redacted> users. [~90mins]
  • Explore video lockout with <redacted> to discover if it now works.

These are some of my charters for a large user story. Terms internal to the team have been redacted.

Interestingly, when having a read of the Exploratory testing APIs section of Mark Winteringham’s “Testing Web APIs” book, I learnt about an alternative template that might suit me better. Going forwards I am going to try writing my charters using “Look at … To test for …”.
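For example, the video lockout charter above might be reworded into that template. This is my own illustrative rewrite, not an example from the book:

```
Look at video lockout with <redacted>
To test for the failure that was previously reported
```

Splitting the target (“Look at”) from the risk (“To test for”) at least forces me to name what I actually suspect might be broken, rather than falling back on “any regressions”.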

One definite positive that I’ve found from my time using more structured/formal exploratory testing is when I’ve had a report to produce. Whilst I usually just keep my notes in a rolling comment on a test task in Jira (or whatever tool I’m using), occasionally using Google Docs/Sheets, for some larger testing I used a tool that I had built myself. Whilst obviously I can’t share the reports publicly, my team responded very positively to them. Typically I’d only expect people to look at my final comment on the story. To my surprise, after attaching my first exploratory report as a PDF, we were reading them as a team and chatting about some of the findings. No doubt having pictures helped, as opposed to the wall of text that might have been seen in my “test task rolling comment” approach.

Here’s an example report that my tool can create:

I have just started using the Xray Exploratory App and it seems pretty promising. I haven’t used it for any functional testing but when I took part in a Capture the Flag style tournament, it was a great way to keep notes on things I learnt about the system and commands that work.

I am starting to feel like I can genuinely call myself an exploratory tester.

Next step – getting my team using it!

Categories
Experience Reports Ramblings

I spoke at TestBash UK

In late September I attended my first in person testing conference, TestBash UK.

Early arrival at TestBash UK

I’ve previously been to online events, in-person agile and development conferences and an alternative-style event – TestBashX Edinburgh – but this was especially exciting.

I was attending as a speaker.

Life Goals

Now before I talk more on my actual experience as a speaker, I want to go back a few years. Throughout my life, my career decisions have been to put myself in a position where I can make a difference beyond the team. To do something people may care about. Whether it was the industry when working in games, becoming an “expert” in the growing ONVIF field or my ideas around “Behaviour-driven Lean Testing”, it all boiled down to one thing.

I wanted to do *something*. To be *someone*. The idea that I could meet a stranger and for them to know of my work was a big dream.

(I know, groan)

When I created my Threat Agents game I wasn’t sure of its value initially but people were very excited by it. Jump forwards less than a year and I am attending TestBash UK as a speaker.

Anxiety

As the event drew closer, I started getting nervous. I didn’t know anyone there. I have social anxiety and whilst speaking didn’t scare me (too much), turning up at a conference did. That first moment of walking up to the bar to have a drink with people, I was trembling.

However what struck me and made it such a wonderful event was how welcoming and friendly everyone was. If I spent longer than 2 minutes looking like a deer in headlights, someone would come over and introduce themselves. When looking for a seat, I’d be invited over.

I got to meet so many lovely people. There were folk from throughout the UK and beyond, each with different levels of experience, from someone new to testing to a veteran of several decades. I spoke to many people with a mix of skill sets and different passions within testing.

Pride

My talk was (unsurprisingly) on threat modelling, in particular my journey getting into threat modelling and how I’ve brought it to my team.

I’d been practising it over and over, walking around my living room whilst speaking to an empty sofa. How would it feel doing this in front of people? Especially because I’m a very anxious, shy and nervous person (at first).

There was only one way to go about it – go for it. Embrace it.

Belly’s gonna get ya! – Speaking at TestBash UK

After (hopefully no longer than) 30 minutes my talk was complete. The crowd laughed at my jokes, applauded my video and gave a positive response. I was beaming afterwards! The following day I ran my workshop and people warmed to it really well. My favourite moment of the entire conference was just listening in on one of the groups and hearing a perfect example of a threat modelling discussion.

Threat modeling in action during the workshop

It was the proudest few days of my life, other than my wedding of course.

And the exciting part is that I’m not done there…

Takeaway Express

If anyone is reading this and hasn’t attended a conference before then I’d thoroughly recommend it. Not only do you get to attend great talks (and often also workshops etc) but networking is a huge part of what makes an event so great. I’d always thought that “networking” with people would be like my initial experience and impression of LinkedIn – trying to promote yourself on the jobs market – but it is so much more. It is a great mixture of socialising and learning with maybe a dash of schmoozing along the way.

And finally in other news

Just make sure you leave your weekend free afterwards because you might be pretty knackered! For example, maybe don’t go to a zoo spread over a steep hill with your niece and nephew!

Categories
Ramblings

What is the future for a manual tester?

I started as a manual QA tester for a games company back in 2008 as part of a massive team, very disconnected from developers. Since then I’ve worked in smaller test teams, within feature teams (i.e. server software) and within a scrum team. This includes a chunk of time where I was the developer, with a tester in our small team.

This time next year I am expecting that my team will join a larger organisation within my company, where I would be the only dedicated tester within a scrum team. It does raise the question of what my role will be. Unsurprisingly this is often on my mind.

It also got me thinking more about what might be the future for someone who most enjoys manual exploratory testing. What roles might there be?

No doubt that for several years to come there will still be jobs out there for manual testers, either in dedicated teams or working more closely with the developers. I hope that we see more of the latter and fewer companies still having dedicated test teams. However it is clear that most people recruiting right now are most interested in automated testing – an idea that I am not fond of and have previously written about (twice in fact).

I recall listening to an AMA session where testing guru Alan Page suggested that developers will be responsible for writing automated tests, probably with the use of record-and-playback tooling. From my experience working as both a developer and test engineer, I definitely agree that having developers write the automated tests is the way forward (although convincing some of my colleagues to use click and record seems ambitious).

Developers are obviously skilled in coding and, as part of good engineering practice, should be thinking about edge cases and writing tests. It can also help force them into writing code for testability. I’ve heard the argument that developers might miss edge cases that a tester writing automated tests would catch, which I don’t buy. There may be developers who are reluctant to write these tests, or who say they test without really doing so, but I’m not accepting that. Not trusting your teams to do a good job isn’t a reason to hire automated test engineers. If you have a weakness, you need to work on it. Perhaps be coached somehow?

Over the past couple of years I’ve heard of more and more people who’ve become test specialists/coaches who work across multiple teams in a coaching capacity. This is intended to ensure that developers are capable of doing their own testing.

I do like this as a concept and can definitely see this being the way forward. I think it works well with methodologies like scrum and ensures that the whole team is responsible for quality.

Techniques like ATDD / BDD mean that those with a business viewpoint can get involved with testing and quality by defining the tests using gherkin. As tooling continues to improve here, it will get easier to collaborate to define behaviour and tests together. This sounds like a great time to get your test specialists involved and shifts that bug know-how left.
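As a sketch of what that collaboration might produce (a made-up scenario, loosely riffing on the alarm ownership charter earlier, not a real feature file):

```gherkin
Feature: Alarm ownership
  Scenario: An escalated alarm keeps its owner
    Given an alarm owned by an operator
    When the alarm is escalated
    Then the operator still owns the alarm
```

The value is less in the file itself and more in the conversation that the team has to have in order to agree on those three lines.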

This does however sound like it is suited for people who like changing company every year or two. That isn’t me. My other concern with this is that I really enjoy exploratory testing. It is why I switched back to test from dev. If I was to become a coach then would I be doing the job I love?

It possibly also exposes one weakness in having scrum teams be solely responsible for their own testing. Teams can easily wind up in their own bubble, working on their features. Quite often what a tester brings is not just the “knack” of finding a bug but a wider product knowledge.

If I were able to pitch my perfect role – one that best leverages my skillset as a test engineer and brings most value to my company – it would be to have test specialists who sit within an organisation much like POs/BAs/Scrum Masters do, rather than someone who comes in to help teams solve their problems and then possibly moves on.

Developers should write their own tests and functionally test what they are developing. However having a test/quality focused role that works with a handful of teams seems like a really intriguing prospect to me.

A typical work day could be joining a couple of stand ups then attending a refinement session, where I use knowledge of the products and a knack for edge cases to help teams spot the gotchas early. I then might meet up to pair with a developer who is a bit unsure on how to test a complex problem, or pick up testing of a story for a team who are short-handed at the moment. Later I have a look at a sprint review that I couldn’t make and see that a team have just got their feature across the line. I know it’s been a challenge for them, so let’s get the build installed and do some exploratory testing. Finally I might put together some training material on a new extension that I’ve found for our E2E automated testing framework.

This sounds awesome as a job to me. Being an almost free-spirit who helps the teams. Sadly it is most likely a fantasy although who knows? Perhaps there may be a role out there?

The future is rather uncertain and, if I’m being honest, a little unsettling. Will my role continue? Will I be able to find a new manual testing role should I want/need to? Will I be forced into automated testing, or (more preferably) a return to development? Will I move to coaching?

Curiously I am also thinking that with my interest and training in cyber security, I may end up being a pen tester. From my experience so far, this seems very much like classic testing – trying to find bugs in the system. To move forward in my career, will I find myself working in the same ways as all those years ago? …

/Rich

Categories
Ramblings

I get bored

As I reflect upon my career and what the future might hold, it made sense to consider my strengths but I realised that I should also think about my weaknesses.

In terms of hard skills, I know that the nature of the software I’ve worked on means I don’t have any skill or experience in tools like Cypress.io, Selenium and Postman (although I am learning Postman!). However I don’t believe that my lack of these sorts of skills really matters. From my experience and training down the years, I have confidence that I *could* do them.

Soft skills are perhaps more interesting to explore. Common examples of good soft skills for testers are:

  1. Inquisitive
  2. Thorough / Attention to detail
  3. Communication
  4. Creative
  5. Analytical / logical thinking
  6. Eagerness to learn

Straight away I know my biggest weaknesses.

  • I can be sloppy
  • I am not always a great communicator, which I’ll pick up in a separate leadership post one day

Those who’ve worked with me may be surprised by the first one. My end of year reviews tend to be very positive about the quality of my work but I know in myself that I am definitely prone to errors.

For example, when reading a story I am often guilty of not reading it thoroughly, just as I would struggle with any wall of text. I lack the focus to read it properly, find myself skipping words and misunderstand the meaning of a sentence.

Similarly I know that when I am doing a lot of “checking” style testing, I am very prone to mistakes. I will see what I’m wanting to see and move on ASAP. This means that if there’s a glaring error on the page, I may be completely oblivious. This happened to us when a library update to address issues with combo boxes meant that a date control was no longer displayed. I went through all the screens verifying that the combo box oddity couldn’t be seen, completely missing the absent date control.

When mistakes like this happen, I know that my mind tends to be in a particular state. I’m bored.

Sometimes when tasked with boring work I will try and perform exploratory testing to see what bugs I can find. And I usually find something. However other days when I have a bunch of things to check, I won’t be in the mood to go off piste and bug hunt (or time doesn’t allow me to) and I trudge through the work and miss things.

I want to solve my boredom. I want to find ways to not only ensure my work is of as high a standard as possible but also provide greater job satisfaction. In the event that someone is actually reading this, I’d be very open to ideas!

Do I try to gamify some of these tests somehow? Do I try a new technique to help spot bugs – like blitzing the story whilst recording it, then watching it back with a more attentive eye?

Or do I simply commit to test the f**k out of these boring tasks? I do love my bug count after all…

Categories
Experience Reports Ramblings

Effective testing

Many people view testing as a checkbox exercise where you spend a chunk of time writing what you plan to test to verify the ACs (basically re-wording the ACs) and what you will regression test. This may be in test cases or tasks/comments on a story. Once reviewed/shared, the tester of the story would then go through these one at a time and check them off.

One advantage of this is that you are less likely to miss something or forget to check that a certain behaviour works. The other advantages are, well, actually I don’t have a scooby. You can give them to anyone that can follow instructions, I guess?

It probably isn’t a surprise to hear that I’m not fond of this approach.

This is where I babble on about exploratory testing right? Well no.

My preference isn’t quite exploratory testing as per the text books. I will have an idea of what I need to test and I may jot a few bullet points to remind myself. These aren’t as detailed as charters, lacking the clear scope and purpose in them. Instead these bullet points will serve as a reminder for if/when I get side tracked.

As I use the software and a new feature, it can often quickly become clear what is pretty bulletproof and what could be a little more flaky. Based on the story, my experience of the software (and software in general) plus knowledge of previous bugs, I can identify the risks as I go. If I’m being effective in my testing, I can quickly adapt and refocus my efforts so that most of my time is spent trying out behaviour that is most likely to be broken.

Bringing this back to how some colleagues think that testing should work, when doing the more rote testing you might spend 50% of your time testing what the developers have already tested. On top of that you’ll probably be testing behaviour that you’d see anyway just through using the software.

I feel like there’s a lot more value in just using the software. You can use heuristics, notes or charters to guide you but don’t give equal weight to code that never yields bugs compared to an area where there’s more likely to be bugs. I try to avoid having to explicitly test the more basic / core ACs. In most cases I will hit these as part of my exploration and testing.

After all, most interesting functional bugs tend to be a result of behaviour that wasn’t called out in the ACs or planning.

(This assumes the tester is involved throughout the SDLC, as they should be)

The other thing of course that any good tester will do is learn.

Software can be very complicated and it can be quite rare that everyone is an expert on every single feature, protocol and so forth within the solution. Quite often I will be learning something new about the software, Windows or tooling. As we experiment we may find that the operating system has a nuance or configuration option that we previously weren’t aware of, or find a piece of related functionality that hasn’t received much love in a while and that no one thought about when considering how it responds to the changes in your user story.

There is also the behaviour that may not be captured in the ACs. I wouldn’t expect the ACs to detail the expected behaviour for every single possible scenario that could occur (you could be writing ACs until the end of time). Through your exploration you will hopefully be asking yourself questions that weren’t asked back during planning (especially before you’ve seen it for real). Even if no one asked the question, finding out the answer can still be valuable and insightful. It may be that the behaviour in some circumstances “doesn’t matter” as far as the Product Owner is concerned for the feature to ship, however it can still be worth sharing with them.

A learning isn’t necessarily just “this is odd” or “I have a new skill”. It may be that your new button correctly handles OS themes, even if your app doesn’t have it as a requirement.

In my opinion a good tester, after a day of testing will have more to say than “ACs all passed”.

As a bit of a silly example of what I mean in all this waffle, let’s say I was testing a video game that has added the ability to walk around in a user story. Do I need to study the ACs? A lot of it is hopefully obvious. Perhaps I’ll make a quick skim read and jot a few notes. The second I get in I might use the left thumbstick to move around. I’ll no doubt go in a bunch of different directions. What about other buttons/sticks? As I use it, can I notice speed differences depending on how much I tilt the stick? What happens if I spin it? If I bring up the Inventory added previously, does my character run around as I am picking my items?

After a little while I’ll have a good feel for it. I’ve possibly found a few issues with it and have a few notes. I can then go back and properly read through the ACs. No doubt I can tick off most of the ACs already and can then look at verifying the remaining ACs as well as exploring the areas that aren’t working as expected – or aren’t defined in the ACs. For example if I found that the inventory has issues, what more can I do?

So hopefully that explains how I think a skilled manual tester is most effective and useful when they are using their knowledge, experience and heuristics to define their testing and letting it evolve based on what they find, not dictated by a fixed list.

As a short note, recently I was involved in a nasty support case. After finding the bug, I was tasked with giving an estimate for getting a fix out. In doing this I called out the sorts of testing that would be required. A day or two later I was quizzed as to why I hadn’t written up a proper test plan yet. My response – I wasn’t planning on writing one. I never do. I think we were equally horrified with each other, with the Tech Support manager thinking I was lazy and not treating it seriously, and me wondering why I was stuck back in the dark ages.

Categories
Experience Reports Ramblings

Why I believe that manual testing is a great job

I’ve had an unusual journey to my current role (Senior Test Engineer, doing primarily manual testing).

My career started as a QA tester in games as a “foot in the door” to be a games developer. This was very common in the industry. However after establishing myself and becoming a Senior, I moved to Games Design rather than development. Being games, I was eventually made redundant and, with the desire to get paid again, I took a role as a Software Test Engineer.

I was good at it. I learnt new techniques and skills. I was using Wireshark to see communications between devices and understand why things may be behaving incorrectly. However I was also bored. Most of our testing was running test cases that had been written (and often already executed) by the developers. I then moved into an “Engineering Support” role where I’d take on all support cases passed to Engineering, taking the load off our senior & lead developers. I loved trying to analyse the system and using my “tester brain”, but constantly handling escalated cases with no useful information was miserable.

This is when I made the leap to development. After 5 solid years, working on a variety of different products, I was at the stage where I really ought to be taking on the responsibility to become a senior software engineer but I had very little appetite for it. Instead of taking the lead on new development technologies and emerging languages I found myself more interested in improving our testing. When the opportunity for a senior manual test engineer role came up, I went for it.

A few people have asked me “why?” and treated it as a step down (and even a waste of my talents), however I believe that it has made me more valuable to the company.

I like to feel that I am a fairly creative person and am also good at problem solving and analysing data. This lends well to both professions. There’s common ground like being involved in the planning phase, breaking down a feature and identifying the risks and challenges that are there. The “tester brain” is really handy here. Developers then get to flex their brain in designing the code to solve the problem whilst testers will be performing exploratory testing and identifying things that were hard to see when the feature was conceptual. Whilst developers get the thrill of seeing the code they’ve written become a feature that customers use, I certainly enjoy the buzz of finding a bug. Finally there’s debugging. I really enjoy hunting through logs, network traces and code to understand a “weird bug”. This applies to both roles (and is something I’ll touch on in a later blog).

Testing can be boring and laborious, especially when you are mainly doing “checking”. Being given a bunch of things to check, following a load of steps then providing the result is rubbish. It is just as bad as writing what seemed like endless documentation during my time in development.

During my time in development I was always undone by build infrastructure. Particularly with C++ and Apple-based applications, I had a torrid time getting things built for the first time and often my projects were light on feature work and about pulling in latest dependencies etc. I didn’t understand most of the failures or why it wouldn’t just work. Words cannot describe how happy I am that this is a rare occurrence for me nowadays (although newer technologies do seem to have alleviated a lot of the pain here).

Ultimately I prefer manual testing to development. I find that I get to spend more time doing the interesting bit (finding bugs vs writing feature code) and that because (I believe that) I am a great tester and a decent developer, I add a lot more value to the company in helping us deliver quality features in my current role than I would as a developer.

But what about automation testing?

What I loved about development is seeing something work. Knowing that it will be deployed for customers to use. I felt like I was making a difference in delivering the product. Automation includes the same enjoyment of writing code but ultimately it lacks that feature delivery buzz. As a role it feels inferior to being a developer. You’re doing the less interesting development tasks. Similarly if I’m spending my time writing automated tests, I am not doing exploratory testing. I am not digging through logs and code to see if I can understand the behaviour.

I believe that writing scripts, tools and on occasion tests to reduce my effort and time spent doing boring work is a valuable use of my time. Automation can be great here but to check that the ACs are in place, it is usually quicker to fire up the software and check. Then I can focus on experimenting and exploration, the best bit of being a test engineer.

So the next time someone asks as to whether I’d want to become an automation test engineer, perhaps I should ask “why would I want to do that?”.

Categories
Experience Reports Ramblings

2021 – A year in review

I believe this is a fairly common thing to do and hopefully useful for myself. Let’s have a look back at the bizarre year that was 2021.

Key events:

  • Officially became an employee of Motorola Solutions, following a takeover last year.
  • Moved to a new office, which I visited a handful of times.
  • Started a new “secondary role” as a Cyber Champion.

I had a few achievements that I ought to be pleased with:

  • Became involved in cyber security, which involved a chunk of learning theory, running a variety of different types of scans and finally I ran a load of talks for my office during cyber awareness month.
  • I created a new card game called Threat Agents and got myself a single copy manufactured.
  • As it has been historically agreed that automating our older, long-running programs would be too involved, and I was going mad with regression testing, I wrote my own automation tool. It wasn’t great at reporting results but it did test for stability and found some interesting issues.
  • Changed how we do release testing. It probably doesn’t seem like much but I had been pushing to revamp things for a couple of years and eventually managed to get it implemented.

Whilst there’s plenty of positives from the year, ultimately it wasn’t a massive success. I had a couple of big objectives for the year that I failed to accomplish:

  • Develop my exploratory testing. I’ve always had the “knack” for finding bugs but I want to learn more about how people do it more professionally. I just need to take the time.
  • Use an off the shelf automation framework for automated UI and/or API testing. I am very confident that this is something I can do, however until I’ve had some proper experience I can’t add it to my CV (even if I’m not a fan of test engineers writing all these tests).

I don’t think I should beat myself up for not managing the above and I certainly don’t want to be giving myself objectives to judge myself by, however it is good to consider what I can hope to achieve in the coming year. For 2022 I want to:

  • Sell Threat Agents. I’ve had positive feedback for the concept and it seems like selling it might be an easier way to get it out there than sharing it for free!
  • Related to this I want to become more experienced in threat modelling. Especially if I am able to give talks on the subject.
  • Improve my knowledge and technique for exploratory testing.
  • Develop my coaching skills. In particular trying to get some of my colleagues on board with some of my ideas and to feel like I’m having an impact.
  • Discover where I fit in the world. Given that automation is all the rage in the jobs market but I prefer manual exploratory testing to try and break the software, what roles are available to me going forwards? Do I have to give up on my ideals? What might lie ahead for me?

If I don’t meet these and have success elsewhere, well so long as I’m going forwards that is all that matters right?

Categories
Experience Reports Ramblings

Threat modelling: Don’t forget your test engineer

I am a test engineer at my current work. After watching a number of talks at Ministry of Testing I also signed up for a secondary role: Cyber Champion. Through this role I’ve been learning about many aspects of cyber security, running brown bags to help people in our office learn more, and doing vulnerability scanning. However what I most want to talk about is threat modelling.

If you’ve not heard of it, threat modelling, at least within the context of software, is an exercise to identify vulnerabilities within your solution. I’ve written some words about it on my Threat Agents site (I’ll explain “Threat Agents” shortly) so I won’t go into too much detail. In short, you put together a data flow diagram then look for vulnerabilities in it. Most people use a mnemonic called STRIDE to achieve this.
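As a rough sketch of the shape of the exercise (the data-flow elements here are hypothetical, and a real session works from a diagram rather than code), each element gets challenged against every STRIDE category:

```python
# Minimal sketch of the STRIDE prompting exercise.
# The data-flow elements passed in below are made-up examples.
STRIDE = {
    "S": "Spoofing - can someone pretend to be this element?",
    "T": "Tampering - can the data it handles be modified?",
    "R": "Repudiation - can an action here be denied later?",
    "I": "Information disclosure - can it leak data?",
    "D": "Denial of service - can it be made unavailable?",
    "E": "Elevation of privilege - can it grant rights it shouldn't?",
}

def prompts(elements):
    """Yield one STRIDE question per data-flow element."""
    for element in elements:
        for question in STRIDE.values():
            yield f"{element}: {question}"

for line in prompts(["login form", "session database"]):
    print(line)
```

The mechanical part is trivial; the value is in the discussion each question provokes around the table.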

If this isn’t familiar then I’d recommend checking out my Threat Modelling write up on my Threat Agents site to learn more, or have a look at Ministry of Testing, OWASP or have a quick Google.

Now to the point. Many teams may approach threat modelling by pulling in only their senior software engineers, those with the most experience developing the software. However this is a poor idea. With only the seniors in the room, attacks that are “known but unsaid” are easily forgotten, along with the blind spots they have built up over the years. Bringing less experienced people to the table helps surface both.

But there’s someone else that you really should bring along. Someone who spends most of their day trying to identify the risks in a feature. Someone who has the knack of finding holes and flaws. Someone who probably has the widest knowledge of your solution.

Your test engineer.

Next time you are threat modelling, be sure to invite your test engineers. They don’t need to have any security experience or programming background. If they have the ability to spot that “X + Y – Z = Crash”, they are likely to also spot that “R + T – U = Vulnerability”.

If you’ve not done threat modelling before then it can seem quite daunting. Certainly when I was about to have my first sessions I felt pretty anxious that I’d be out of my depth, despite having read and understood plenty on it, including STRIDE. However after completing my first session, I loved it. Not only was it a useful exercise for the business but I really enjoyed threat modelling. As a test engineer I was in my element.

To help people get over that initial hurdle and avoid the risk of sitting around a table, looking at a threat model and going “errrr” (what my first session would have been without a great coach), I have created a card game called “Threat Agents”.

This takes the elements of STRIDE, adds my quirkiness and some structure to help you get going. The game is free to download, and you can print off your own copies.

Categories
Ramblings

Automation Test Engineers reinforcing two-tier engineering

Before I begin, I have spent several years as a software engineer and was decent enough at it. As part of this I would write my own automated tests. Since switching to test, I’ve developed a host of handy test tools, built simulators and even made my own automation tool that used our SDKs to test stability through a huge range of activities.

My point is, this is coming from someone who has experience of automation, even if I consider myself as a manual tester.

Anyway, the point…

The job market in my city is predominantly junior test engineers or senior automation test engineers. Companies are desperate to hire people who can write and execute automated tests. I would like to ask these companies, why get a dedicated person in to do this?

It might seem a little wild, but why do you need to hire someone for this role? Are these companies not writing automated tests? Or are the developers writing them?

You can probably see where I’m going here. Developers are more than capable of writing automated tests, and if a company is trying to follow good working practices like Scrum, LeSS, ATDD, BDD, TDD and buzzword-driven development, then surely the developers are writing the automated tests as part of the DoD for a PBI/story to move to dev done?

Having now made the case for automated tests to be in the ownership of developers, I now want to talk about why being an automation test engineer is regressive.

There has long been the concern or battle as to whether test engineers are second class engineers. I’m not entirely sure that picking up bits of work that software engineers often dislike or see as beneath them is helping to further the value of dedicated testers.

I’ve definitely felt like my skills and role as a test engineer have been most valued when embedded within the feature team, mostly picking up stories that are dev done and awaiting testing. However, seeing people take up roles where they act as the safety net in a separate test group, with work lobbed over the partition, kind of saddens me.

People who have invested lots of time, effort and maybe even money into learning automation may be scoffing at me right now. I’m not saying it is wasted effort. Far from it. If you enjoy it, rather than being an automation engineer, what about just being an engineer picking up any PBI like the rest of the team?

If you do love your testing and want to keep testing, like myself, there is plenty of work to do. Use the programming skills that you’ve learnt to automate some of your tests. For example, create a script to further load the system, or to help parse results from log files. How about pairing with a developer so they write the functionality whilst you write tests?
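As an illustration of that kind of small automation win, here is a minimal sketch of a log-parsing script. The log format and component names here are entirely hypothetical; the point is just how little code it takes to turn a wall of log output into something a tester can act on.

```python
import re
from collections import Counter

# Hypothetical log format: "2024-01-05 10:32:01 ERROR AlarmService: ownership lost"
LOG_LINE = re.compile(
    r"^\S+ \S+ (?P<level>DEBUG|INFO|WARN|ERROR) (?P<component>\w+):"
)

def tally_errors(lines):
    """Count ERROR lines per component so hotspots stand out at a glance."""
    counts = Counter()
    for line in lines:
        match = LOG_LINE.match(line)
        if match and match.group("level") == "ERROR":
            counts[match.group("component")] += 1
    return counts

sample = [
    "2024-01-05 10:32:01 ERROR AlarmService: ownership lost",
    "2024-01-05 10:32:02 INFO AlarmService: retrying",
    "2024-01-05 10:32:03 ERROR EscalationService: timeout",
    "2024-01-05 10:32:04 ERROR AlarmService: ownership lost again",
]
print(tally_errors(sample))
```

A few lines like this, pointed at a real log file, can quickly tell you which area of the system deserves your exploratory attention next.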

Even without the coding, a tester’s skillset is still massively valuable. Get yourself involved in backlog refinement. Go larvae hunting. Coach your team. Get involved in security. Help your team shift left.

Going forward, rather than replacing a team of manual testers with a team of automation testers, let’s use our skillset to identify risks, bugs and possible UX concerns as early as possible.

Categories
Ramblings

Do we trust our code?

Often a user story, PBI or whatever can include a number of alternate paths, scenarios or examples. Ideally the team has listed them during backlog refinement and 3As.

When a diligent software engineer picks up the item, they can write their automated acceptance tests and perform manual testing as well, listing the testing in the changeset notes.

So when an engineer, whatever their job title, picks up testing of the work, do they perform acceptance testing? Is there value in this?

In an ideal world, this can be considered busy work that we can skip. We should be able to trust our code when it is committed and instead focus our efforts on testing that is more valuable, like exploratory testing to find defects or to learn about the feature and its quirks or nuances. We can focus on performance, scalability and more complex testing.

But this requires us to trust our code. Something that historically isn’t the case… hence why we have testing! (and bugs)

So what can we do to remove the need for mindlessly going through a truth table, a long list of ACs or a wad of scenarios? I don’t know, and this is what I hope to learn. However, I do have a few ideas of things to explore:

  • Code coverage analysis for automated tests with our available build tooling to discover if this can give a (genuinely) useful metric at a day to day level.
  • Automated test coverage within code reviews to discover if someone who doesn’t develop the code can identify what is adequately covered by automated testing.
  • Pair programming with a developer and tester to discover if this helps both improve coverage and visibility of coverage. Perhaps writing automated tests.