Wednesday, 19 August 2015

How much testing is enough?

Introduction

Risk analysis is absolutely key to being an effective tester. I have rarely found it effective or even possible to “test everything”, so there is always an element of deciding what and how much I want to test before I have confidence in the quality of a piece of work. Even in cases where I do need vast test coverage, I still need to prioritise what I will test first. I do this because I want to report the most important defects as soon as possible, as this is where I deliver the most value.

What to test and how much to test?

So how do I answer these questions? With more questions of course! There are a variety of factors that determine how I will answer, but some general questions that help me decide are:
  • What is it? What does it do?
  • How complex is it?
  • How much do I understand about it?
  • Does it have good documentation I can refer to?
  • Does it have clear requirements?
  • How much time do I have to test? Is there a deadline?
  • What resources do I have available to me?
  • What tools do I have available to me?
  • How critical to the business is it?
  • Does it interact with or affect other critical systems?
  • Who uses it? How do they use it?
  • Is it a modification to an existing system or a brand new system?
  • If it's a modification, does the system have a history of instability?
  • What is the modification and what does it affect?
  • Are there any pre-existing defects I need to know about?
  • Are there any performance or security concerns?
  • What are the most important parts of the system? Do I have an order of priority?
There are many, many more questions I could ask. I might already know the answers to some of them, but they still influence my decisions on what and how much I test. It’s important to realise that these questions affect one another; it is only with the full picture that I can effectively identify risks. For example, limited time and resources directly constrain how much I can test, so I would prioritise the critical areas of the system that are new or have changed.
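To make the idea of combining these factors concrete, here is a minimal sketch of risk-based prioritisation. The area names, scores and the impact × likelihood weighting are all illustrative assumptions, not a formula from any standard; the point is only that critical-but-stable areas can rank below new, unproven ones.

```python
# Hypothetical risk-prioritisation sketch. Scores (1-5) and areas are
# made up for illustration; real ones come from the questions above.

def risk_score(area):
    # Higher business impact and higher likelihood of failure -> test first.
    return area["impact"] * area["likelihood"]

areas = [
    {"name": "billing",    "impact": 5, "likelihood": 2},  # critical but stable
    {"name": "new search", "impact": 3, "likelihood": 5},  # new code, unproven
    {"name": "admin ui",   "impact": 2, "likelihood": 2},
]

prioritised = sorted(areas, key=risk_score, reverse=True)
for area in prioritised:
    print(area["name"], risk_score(area))
```

Here “new search” comes out on top despite “billing” having the higher business impact, which mirrors the example above: with limited time, change and instability pull testing effort towards themselves.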
Asking these questions also allows other members of the team to consider them and helps them gain an insight into my work and what I’m looking for. Over time this can provoke them into providing better information and lead to more collaboration.

But surely you always prioritise critical areas of the system?

Not necessarily: there may be many critical areas, and it may not be possible for me to test all of them given the time and resources I have available. We may in fact consider some critical areas to be very stable and know that they have not changed. I may decide to accept the risk of not testing those areas in order to focus on areas that I know are less stable or have been affected by change.

No testing? That’s crazy!

I’m not saying no testing; I’m suggesting that no testing can be an option - but one of many. Ideally, if there were critical areas that I felt I couldn’t comprehensively test in the time frame, I would still look to perform some testing. This can range from very basic smoke tests, to time-boxed exploratory tests, to some further prioritised set of tests. If a particular area is regularly covered during regression testing, it may already have automation scripts I could run, and I may choose to run only those for that area.
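One lightweight way to make “some testing” concrete is to tag checks so that a quick smoke subset can be run on its own when time is short. This is a hand-rolled sketch with hypothetical check names, not a real framework (tools like pytest offer markers for the same idea):

```python
# Hypothetical tagged-check registry: a time-boxed run executes only
# the "smoke" subset instead of the full regression suite.

CHECKS = []

def check(*tags):
    def register(fn):
        CHECKS.append((fn, set(tags)))
        return fn
    return register

@check("smoke")
def login_page_loads():
    return True  # stand-in for a real check

@check("smoke", "billing")
def invoice_totals_add_up():
    return True

@check("regression")
def exhaustive_edge_cases():
    return True

def run(tag):
    # Run only the checks carrying the requested tag; report what passed.
    return [fn.__name__ for fn, tags in CHECKS if tag in tags if fn()]

print(run("smoke"))  # the quick subset for a critical but time-boxed area
```

The same registry supports running the full `"regression"` set when time allows, so the decision about how much to test becomes a decision about which tag to run.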
However, ultimately, you will be making a decision to not test something, somewhere. Therefore you must be comfortable with being able to draw this line based on your informed understanding of the risks.

What if there is no deadline?

Then I would ask the business how long is too long. The business will want the work by some ideal time; otherwise they would not have asked for it to be carried out. They will not wait indefinitely, and there is always value in delivering work quickly.
Usually a business gives you no deadline simply because they do not understand enough about testing but want you to do a good job. They don’t want to give you an arbitrary deadline because they don’t know themselves how much testing is enough. It is important to start a dialogue at this point to really explore what the business wants and to come collaboratively to a decision on how much testing to do.

Summary


  • In order to decide what to test, you need to gather information regarding time, resources, priorities, etc.
  • Not testing specific areas is a valid option.
  • Comprehensive testing is rarely an option in an agile environment.
  • There is always a desired deadline even if it is not explicitly stated.

Sunday, 16 August 2015

The role of a tester in backlog grooming and planning


Backlog grooming

As an on-going activity, the product owner and the scrum team should be actively reviewing their backlog and ensuring the work is appropriately prioritised and contains clear information. This allows each piece of work to be more easily planned into a sprint as it can be more accurately estimated.

As a tester I actively try to be involved in this process as it is my first opportunity to assess the requirements and the information provided. It also allows me a chance to gather information required for testing, which allows me to provide more reliable estimates.

The objective is to be in a position where, for any piece of work presented in planning, you know exactly what the work requires. You should then have a good idea of what you will test and therefore be able to provide reliable estimates. If this is not the case, then the work cannot be effectively planned into the sprint.

Planning and estimation

At the start of each sprint, you have a sprint planning meeting. In this meeting the team collectively decide what work they can commit to being done by the end of the sprint. This may or may not involve an estimation process. This is your last opportunity as a tester to ensure that you have all of the information you need before work is started. If the appropriate backlog grooming has been done, then this should be a relatively straightforward process; inevitably, however, there will be pieces of work that need clarification or that have missed something.

Some example typical thoughts I have in a planning meeting for a piece of work are as follows:
  • Do I understand the requirements?
  • Does everyone else understand the requirements? (and does their understanding match mine?)
  • What requirements have not been written down?
  • Are there any external factors such as legal requirements or third parties?
  • How can I test this work? Is it even testable?
  • Do I need any additional tools to test this work?
  • Do I have any dependencies in order to test this work, such as needing live data or having to work directly with a customer or third party?
  • Does this work conflict with anything else in this sprint?
  • Does this work conflict with work being done by other teams?
  • Are there any repetitive checks that I could automate for this work?
  • Do I need to consider other forms of testing such as security or performance testing?
  • Is the balance of development workload to testing workload viable?
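The bullet about repetitive checks is worth making concrete. A classic candidate for automation is the same validation rule applied to many inputs; this sketch uses a made-up reference format (two letters, a dash, four digits) purely for illustration:

```python
# Hypothetical repetitive check worth automating: one table of cases
# replaces dozens of near-identical manual checks. The format rule
# here is invented for illustration.

import re

def is_valid_reference(ref):
    # Assumed format: two uppercase letters, a dash, four digits.
    return re.fullmatch(r"[A-Z]{2}-\d{4}", ref) is not None

cases = [
    ("AB-1234", True),
    ("ab-1234", False),  # lowercase rejected
    ("AB-12",   False),  # too short
    ("AB1234",  False),  # missing dash
]

for ref, expected in cases:
    assert is_valid_reference(ref) == expected
print("all", len(cases), "cases passed")
```

Spotting checks like this during planning means a developer or tester can script them early, freeing exploratory testing time for the parts that genuinely need a human.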

Hopefully I will have asked most of these questions during backlog grooming, and I wouldn’t always ask all of them - it very much depends on the context of the work. But hopefully this demonstrates that you can easily think of a lot of questions, and that it is important to ask them before and during planning.

Once you have the answers to all of these questions, you should have a good idea of what you are going to test. From this you can provide a more reliable estimate of how much testing you would like to do. Not only should you have a good idea of how long it would take, but also you should be better equipped to analyse risk.

Summary

  • Planning a sprint is easier with clearly defined work and when the team has prepared for the planning meeting.
  • To achieve this, backlog grooming should be used to ensure tickets are prioritised appropriately and contain enough information.
  • Backlog grooming should also be used to prepare for planning such as considering if you need testing tools or environments.

Friday, 14 August 2015

What is a Tester for? What should a Tester Do?

How might you answer these questions currently?


According to Cem Kaner (“Exploratory Testing”, Nov 2007): “Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test.”

So then, a tester is someone who

“investigates a product or service and gives feedback on how well it does the job it was built to do.”

That’s awesome. So when a Software Tester has finished their task, you should have a pretty good idea of how well the developer has implemented the requirements. So the answer to the question “What should a Tester do?” might be:

“A tester should perform investigations on a product or service under test and report the level of Quality which they find.”

And an answer to the question “What is a tester for?” might be:

“A tester is there to provide an insight into the Quality level of a product or service under test, to allow a business to make an informed decision about whether that Quality is high enough for release, and take remedial action if not.”

- we could summarise this as

“A tester is there to allow a company to prevent a degradation in Quality”.

So what else is there to talk about?

On the face of it, I think they are pretty fair answers. But, now let’s look into things a bit more...
Developers and Software Testers are all human beings (within a loose definition!) - and this means they make mistakes, have preconceived ideas, make assumptions and all the other inconsistent weird stuff that human beings do.

So…?

Developers are the happy-go-lucky, positive-thinking types who like to think that if something can work, it almost certainly will. Testers are the cynical, schadenfreude-loving types who assume everything is broken, and when it isn’t, suspect foul play! OK, so maybe that’s a bit extreme and stereotypical, but the point is this: when a developer uses a feature on a product they have built, they will use it differently to the way a tester would. When a developer interprets a set of requirements, they will interpret them differently to the way a tester would. Neither is necessarily right or wrong; the truth may be somewhere in between. The mindset and skill set of the two groups are significantly different - and it’s this difference that allows us to produce the best output.

So why is the difference important?

If everyone in a development process thinks in the same way, approaches things in the same way, makes the same assumptions and takes the same shortcuts - what will happen? I’d hazard a guess that at the end of the project they would all pat each other on the back and agree that what they have done is truly superb: their best work, a thing of immense beauty. But on the day the product goes live, a customer would try to use it and conclude that it doesn’t do many of the things they think it should, and certainly not in the way they expected. The issue here is that the people in that development process had a very narrow mindset. They were unable to conceive of any ideas outside that narrow corridor of thought. There was no difference.

Difference introduces challenge. If we are challenged then we are forced to justify our position, and furthermore we are required to consider a different position. Not only does this deliver a better outcome in the immediate future, it also helps us develop our own approach. If we are constantly exposed to new approaches and mindsets, then our own approach and mindset will hopefully expand and develop. We could never think of everything, but maybe, if we are continually challenged, we can start to think of a lot more?

So, how could you answer the questions now?

OK, so it looks like we’ve gone off track here - we’re meant to be talking about what a tester is for, right? Right. If we look back at the ‘answers’ I wrote earlier, what do we think of them now, in light of what’s just been talked about?
Maybe a better answer to “What should a Tester do?” would be:

“A tester should continually challenge the developers to consider different viewpoints and interpretations of the requirements. A tester should continually discuss the way requirements are being interpreted and implemented by the developers. A tester should perform investigations on a product or service under test and report the level of Quality which they found.”

Notice that this answer doesn’t alter the original answer, it supplements it.
But the really interesting thing is how we now answer the “What is a tester for?” question. If a tester is supposed to do more than just test and report, then what does this mean for the tester? What I think it means is that testers can and should be pulling themselves out of their pigeon hole and getting involved, much more involved, in the whole development process. In other words, a tester is not just there to prevent the degradation of Quality:

“A tester is there to increase Quality and add value.”

This opens a very interesting door - one we will walk through in another post. A door that lets us change from just thinking “how well has the developer implemented the requirements?” to “are those requirements correct?”, “what does the customer want?”, “how does the customer use our product?”. It means we can take our challenging behaviour on an adventure outside of the development team and into the business...

Tuesday, 11 August 2015

Testers in development teams

Introduction

For this blog post I’m going to talk about my experiences as a tester in a development team working to “scrum”, an agile development methodology. I write this as someone who worked to more traditional waterfall-style processes before scrum, and who has now worked for several years within it. I’ve also worked to a “kanban” model, but I won’t cover the pros and cons of the other methodologies here. The goal is to help someone in a similar situation to me have a clearer idea of how they can fit as a tester into scrum.

I’m going to assume you know the basics of how scrum works, but if you are new or unsure about it, please have a read of the Wikipedia article first.
I would also add that my thoughts on this topic come from the view of an agile project, where speed is absolutely key. Most of these thoughts might not apply to a project where speed is not desired, though I feel the role of a tester and the skills they use are the same no matter the situation.

Should a development team include a tester?

I’m going to tackle this one first. You may notice in the link above that there is no distinction between team members in a scrum development team. A common view I have come across is that testing should be a process external to development. Some feel that testers shouldn’t become too technically close to the system they are testing, because their heightened insight may lead them to make conscious or subconscious assumptions. There is a fear that testers would lose their ability to think like an end user and would think too much like a developer.

For me, this is nonsense: I consider it my job to be aware of end users and to think beyond what I know or see. I have generally found it faster and more efficient to work as closely as I can with developers, and I have built a much better rapport with them as a result. This has meant that the levels of trust and understanding are much higher and developers see the value in testing. As soon as testers and testing are perceived to be a separate process, I have found the two communicate less and become more frustrated with each other.

In scrum this is crucial as you don’t have the time to waste with the traditional methods. This means that a tester cannot sit outside of development and only test pieces of work when they are complete. This would lead to development work being completed in a sprint, defects being found after a sprint and then the development work having to be revisited in a new sprint. This is obviously not an efficient way of working.




So for me, a tester must sit within a sprint team and be considered part of the development team. While their skills and role are different to a developer’s, they are an important contributor to considering a piece of work “done”. It is also important to realise that developers are capable of contributing to the test effort. While automation tests do not replace manual exploratory testing, they are very useful for carrying the heavy load of the more focused checks. Developers can be an alternative resource for constructing automation scripts, expanding upon their unit and integration tests with more advanced Selenium UI tests, for example. Testers should view themselves as being responsible for improving the quality of the work, not merely as executors of tests. If tests can be easily automated by developers, then it doesn’t always have to be the tester who creates and executes the automation.

Developers can also provide assistance and help with setting up more intricate or complex tests, reproducing problems and also providing more technical explanations about how the system functions. This can be an invaluable source of information if you have little documentation or wider business knowledge about the system (though it should not be relied upon on its own).

At the same time, a tester can do more than simply test work as it is completed. In some cases, it is possible and very useful to test pieces of work while they are still being developed. Working in close collaboration with the developer, a tester can provide quick and effective feedback on the work and really focus the developer on producing quality work. The tester can also continue to perform preparation activities such as gathering any additional information, ensuring they have the appropriate tools and environments or planning their testing.

I personally feel a tester doesn’t just bring the ability to perform exploratory testing, but also to gather knowledge, critically challenge designs and re-focus a team on what is important to deliver. These skills are best put into practice from before the sprint even begins through to finishing the work, rather than after the work has been completed - when it is too late to really reap their benefits.

Summary


  • In Scrum, testers are most effective when they are part of a team of developers.
  • Over short time periods, a collaborating team is far more productive than separate, siloed individuals.
  • Developers can contribute to developing and supporting automation testing.
  • Testers should actively ensure they are not becoming too close to a system and are aware of external areas to the team - such as end users, other systems, business concerns, legal requirements, etc.

Sunday, 9 August 2015

Starting a new job as a tester

First day on the job - so many questions!

Introduction

For this blog I’m going to talk about my experiences developing a testing department from scratch at a software company. I’ve now been working at the company for two years and I have learnt a great deal. I hope that sharing my experiences here and talking about the lessons I have learned will help someone else in a similar situation. I’m sure that while writing this blog I will also learn a lot, simply by looking back and trying to articulate my thoughts!

This blog post was written from the experience of joining a company that has no existing testing. While I hope that most of my experiences are relevant to any kind of testing and company setup, I don’t believe there is a “one size fits all” solution and I would likely adapt or change my approach for a totally different situation.

The company had around 60 employees, with a typical setup of departments including development, support, sales, marketing, accounts, infrastructure and a “product” department consisting of business analysts.
The system under test was a software application that stored and manipulated data, and the end users ranged from customers interested in the data collection, to support staff using the administration tools, to the accounts department managing invoices.

What did I do?

It’s your first week at your new job. Not only do you need to learn what the company or business is and what they do, but you’re busy learning everyone’s name, the processes and where to get lunch! In the first few days it can be stressful trying to learn so much so quickly. Some companies have lengthy induction processes with lots of meetings and some companies leave you to your own devices quite quickly.

In my case, I found myself the only tester in the company. I actually applied for the job because I wanted the challenge and knew I would learn a lot. I got the impression from the interviews that the company didn’t really know what testing entailed and were happy to let me tackle it how I saw fit. This meant that I had to be quite aware of what I wanted and know which information was relevant to me and which wasn’t. However, when you don’t know what is important or relevant, you are heavily reliant on what you are presented with. I spent my first few weeks learning what was really relevant to testing.

At first I was drowning in very technical information: how to work with Ubuntu, how to set up MySQL, Eclipse and various other systems. Particular focus was given to Git and the development department’s branching model and version control. While this information was important, worrying about these things at the beginning really wasted time, as I simply had too much to learn at once and spread my focus too thin.

In my first few weeks I mainly relied on the information I could gather within the development department. We had a typical scrum setup featuring a Scrum Master, Product Owner and a team of developers. In this format I could find out a lot from the Product Owner and the developers. The only problem was that the information from the Product Owner is naturally a filtered view of the rest of the business. However, as a base to start from, it was enough to begin getting answers to my questions.

I quickly learnt the system in some of the key areas but I found I was lacking in knowledge in others. I only discovered what I didn’t know when issues arose or when different work was required for development. I feel I could have done more to find out this information much earlier and avoided some of these issues as well as provide much faster feedback. I would have tested differently with this knowledge.

What would I change?

I feel the biggest thing I would change is to focus more on information gathering, without testing, in the first few weeks. While I think it was a valid approach simply to start playing with the system and learn about the current changes, it only gave me a low-level understanding of what the system currently does. It also limited my understanding to new or changed areas, or to whatever I could find through exploratory testing. There were many areas I simply never knew about that were critical for some end users. In future I would instead focus on finding out this very important contextual information first and use it to learn the system from a high level.

I also spread my focus too thin by attempting to learn how to manage my environment, conduct my testing, learn the processes used by the development teams, and learn about the system and the wider business all at once. Naturally I ended up focusing on the areas with the most readily available information: my environment, the development teams’ processes and how to conduct my testing. I feel I could have chosen to focus more on gathering knowledge of the system and left the other areas until later. In hindsight, it’s quite important to start building a rapport early with the various sources of information outside your immediate vicinity, as this takes time. The information you gather from end users also greatly shapes how you go about testing, which can affect your requirements for a test environment and the types of tests you want to conduct.

In order to gather this information, I would focus on the following questions:
  • What does the system do?
  • Who uses it?
  • How do they use it?
  • What are the most important or critical areas that must always work?
The main goal of these questions is to create a picture in my head of the ultimate purpose of the system. I want to think as much like an end user as soon as I can, as this will dictate how I test the system and help guide my understanding. It will also allow me to bring the most value from my skills in the shortest amount of time. The most obvious tests are those that simply check that the system functionally does what it is supposed to. It is from this base that I can then develop more complex tests.

In my experience, the answers I get to these questions vary wildly depending on whom I ask. I feel it’s worth spending my time asking multiple people, especially people from different departments. In the same way that a tester thinks about a system in terms of its inputs, outputs and various scenarios, a developer or a salesman will think about the system from their own point of view. From a developer I gain technical information about the inputs, outputs, functionality and influences. From a salesman I gain contextual information on why the system is designed a certain way and how it is sold to users. In my case, we also had an administration section used by the support department and a billing system used by the accounts department. These are yet more examples of different views of the system, as well as end users who hold important information that I would like to incorporate into my tests!

I have found that a crucial skill for a tester is tying all of this information together and constructing a unique critical view of the system. The amount of information you gather can dramatically change how you approach your testing, what tests you carry out and how you perform the testing.

Summary

  • Focus on gathering knowledge.
  • Have confidence to ask questions and speak to people in other departments.
  • Don’t worry about how to test until you feel confident you have gathered enough knowledge about the context of what the system does, who the end users are and how it is used.
  • Don’t worry about finding defects on your first day on the job.
  • Starting testing straight away is still a valid approach and you will learn a lot about the system very quickly, but don’t compromise your information gathering activities early on.

Background

I feel it's important to highlight my background because it helped a great deal in giving me the confidence, skills, tools and knowledge to really hit the ground running when faced with a job where I would be given an open book to start and a very technical product to learn.

I started my career as a tester at Sony after graduating with a Computer Games degree. I had originally intended to work as a programmer but took the first opportunity I could find. I gained a good deal of experience on a wide variety of projects. Despite spending only a couple of years in that job, I learned a lot about testing - especially the key skills.

Thanks to this experience and my background in design and programming in academia, I had a lot to fall back on to give me confidence with my decisions and guide myself on how I wanted to learn and progress. It also meant in an open environment where I was left to my own devices I could pull together what I needed and even write my own tools to help with my testing.