Wednesday, 23 December 2015

Haystack Analogy

I’d had a few beers at the 2014 Christmas party, and had ended up having a conversation with one of my colleagues who works in the Support department. As Testers, we’ve all had this conversation at one point or another:
“Why didn't you find X defect?” you get asked.
“Actually we did find that defect” you reply, “but the decision was made not to fix it.”
“What about this defect? Or this one?”
“OK, we missed those.”
“Why didn't you find it?”
It’s a valid question. Support spend a lot of their time listening to customers who have fallen foul of these defects; in their eyes, they’re picking up the pieces, listening to customers recount how much time X defect is costing them, or how much money Y defect will lose them.

So why didn't we find it?

Simply because you can’t find everything. You can’t test everything and you can't test for every eventuality.

The challenge is how to communicate this.

In trying to explain this, I coined a scenario which I call the Haystack Analogy.

At its simplest, the codebase is a Haystack, and the Tester is a person looking through this haystack for needles. These needles represent all the defects within the codebase.

When you start out developing a piece of software, you will only have in your hand one or two pieces of straw. It’s pretty easy to look at those pieces and discover any needles lurking within.

As your codebase grows, the pieces of straw get stacked upon each other, shuffled around, and invariably needles get mixed in. Then you have external factors, which could come in the form of wind, blowing the haystack around. After many years, you’re presented with a giant haystack, blown about, full of needles, now the job of a tester becomes a lot more difficult.

System, Regression and End to End testing all have the task of finding as many defects as possible, so here are a few questions you could ask regarding testing that haystack:

  • With 1 person testing, how long would it take to sift through the entire haystack and find all the needles?
  • If the whole haystack was searched, and no needles were found, could you be certain that there were no needles present in the first place?
  • At what point during your search do you say that you've found as many needles as is possible?

The answer to the first question is not one that can reasonably be answered. It is simply too time-consuming to test the entirety of a system, so I would never expect any tester (or team of testers) to do this.

In answer to the second question, no. No software can ever be shown to be 100% defect free - an exhaustive search proves only that you didn’t find any needles, not that none exist.

The last question is where a tester earns their worth. There are a few more questions you can ask off the back of that - should a tester stop after a set amount of time? Should a tester stop when all major features have been tested?


The challenge for a tester is to identify the risk associated with change. If you know a previously tested and working part of the system has not changed, then the risk of not extensively testing that area is low. If, however, a new change has been introduced, or an area of code has been refactored, then the risk that the change has introduced unwanted behaviour is now quite high.

Going back to the haystack: if you’ve identified which areas have changed, which areas are likely to have been affected by external factors, and which areas are crucial to the business, then you can assign a timeframe for testing based on estimates of how long each individual area would take to cover.

Remember, testing can only ever demonstrate that defects are present, never that they are absent, so once those areas have been searched for needles, you as a tester can say how confident you are in the running of the system.

This still doesn't mean that any of the defects found during testing have been fixed - severity, time constraints, cost/benefit considerations and more come into play when deciding whether or not to fix defects. But if a needle does slip through your fingers, at least you can say why it happened.

Tuesday, 10 November 2015

How to Add Value (A Guide for Testers)


In a previous post I talked about "What are Testers for? What should a Tester Do?". I discussed the sphere of influence which a Tester has and suggested that maybe the boundaries we work to, as Testers, aren't in the right place.

I also covered the idea of Testers adding value. I think there is a general perception in the industry that Testers are there to prevent a loss of Quality - they just 'test the codes' and make sure there aren't any 'bugs', they are gatekeepers.

Well if that were true it would suck; I don't think I'd enjoy a role where all I got to do was kick the tyres. I also don't consider that to be adding value. Sure, it is valuable, but in this scenario the best possible outcome is that the Quality which existed in the developer's intended solution has been delivered.

Before we go on it would help if you read my previous post about ‘what’ Quality is here.

So now we know what Quality is, let’s explore how it changes through the development process.

Where is Quality?

Let's think about how much 'potential Quality' there is at each stage of a development cycle, and how it can change as it becomes 'realised Quality':
[Graph: Quality degrades through each phase]

  1. Business Vision: The business has identified a market it wants to enter or a demand it wants to supply.
  2. Quantifying the Idea: This is the point at which you start breaking that idea into 'pieces' of work and thinking about who may do the work.
  3. Requirements: Once the idea has been broken into 'pieces', specific requirements are written to attempt to quantify what work needs to be done and how we will know when it has been achieved.
  4. Implementation: The requirements are then turned into 'stuff' - the embodiment of those requirements.
  5. Release/Launch: The 'stuff' is then sent off into the wild with the hope that it fulfils the Business Vision it was built to deliver.

I think of it as if there is a bucket (the process and the people) and water (the Quality). If we have a good process and good people, then as we build things, we get to the end with as much water in our bucket as we started with. But, what happens if we have a hole in our process, or several holes? Our bucket wouldn't let us deliver any water/Quality at all! The really important thing here is that we can't top up our bucket. Once we lose Quality it STAYS lost.

So looking at the graph, the best possible outcome is that the blue line stays horizontal - we deliver something into the world which completely fulfils the original vision of the business, nothing more, nothing less.
[Graph: If everything is perfect...!]

Why does Quality degrade?

  • Poor Process - This is a wide topic, but suffice it to say, if your process does not encourage efficiency, openness, accountability, focus and delivery, you're going to lose Quality.
  • People - People are everything, not having the right people will make everything much harder, if not impossible. Similarly, if people are not motivated or focussed, Quality will suffer.
  • Communication - Passing the information about the idea down the process chain to the end can end up being like ‘Chinese whispers’, meaning that over time the idea is warped and degraded and the output does not fully realise the original idea.
  • Context/Understanding - Decisions will be made throughout the process, and if they are made out of context, the wrong choice may be made. Without a good understanding of the idea and the goal, people can make less than optimal decisions.
  • Changing Business Needs - The longer the time between the Business Vision and the Launch, the more chance there is that the original idea is out of date. The market may have changed, customers' expectations may have changed, and so on. This is mitigated by using the right processes and methodology.

The Tester's part in this.

You'll remember the point I made in this article about the scope and boundaries a Tester may or may not have. If a Tester’s only input is during the implementation phase, then the best possible impact they can have on Quality is to maintain it through the 4th section on the graph:
[Graph: Is this the only place we can make a difference?]

If the scope of the Tester is purely limited to that section of the process, then this is the result. But, what if we really go after that idea of expanding our influence and our challenging behaviour, what could that mean for Quality?

Let’s Start at the End.

Release/Launch. Now, I’m not going to suggest that as a Tester you should be knowledgeable or skilled enough to second guess the Sys Admins and Infrastructure team in your company. But, that doesn’t get you off the hook! As a Tester you must be focussed on minimising risk and risk often comes with difference.
Look at the differences between how you test during the implementation phase and the system when it is deployed into Production. These differences fall into two categories: Infrastructure and Use.

  • The closer your testing environments are to the Production infrastructure, in terms of spec and scale, the better. Let’s say you have an environment which once tested became the live Production environment. The risk of ‘deployment’ related issues is reduced considerably versus a test environment which bears little resemblance to the Production environment. As a tester it’s your responsibility to drive toward this goal and ensure you get the ‘best’ test environments you can.
  • Use it like a customer! Always use the ‘C’ word! If your testing is focussed on exercising the system in the same way that it will be used once it’s launched then you are going to be finding the important issues which may affect the customers.

Deployment Success/Failure criteria. Making a change, testing it and releasing doesn’t mean the change is working. You need to monitor the release to ensure the behaviour it is exhibiting is what was desired, and that unexpected and/or undesired behaviour does not present itself. The best way to know this, is to include ‘success/failure’ criteria into your release plan.
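As a minimal sketch of what machine-checkable success/failure criteria might look like, here is a post-release smoke check in Python. The URLs, check names and expected statuses are entirely hypothetical - substitute your own - and real release criteria would also cover logs, error rates and business metrics, not just HTTP responses.

```python
import urllib.request

# Hypothetical success criteria for a release:
# (description, url, expected HTTP status).
CRITERIA = [
    ("homepage responds", "https://example.com/", 200),
    ("health endpoint OK", "https://example.com/health", 200),
]

def http_status(url):
    """Fetch a URL and return its HTTP status, or None on failure."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status
    except OSError:
        return None

def evaluate_release(criteria, fetch=http_status):
    """Evaluate each criterion; `fetch` is injectable so the logic can
    be exercised without a live environment."""
    results = [(name, fetch(url) == expected)
               for name, url, expected in criteria]
    return all(ok for _, ok in results), results
```

Writing the criteria down like this, before the release, is the point: afterwards there is no argument about whether the deployment "worked".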

What else is required?

Requirements. When we go to planning and are given pieces of work to undertake, this work will have requirements associated. Depending on your processes, habits and traditions, these may take different forms but they will try to convey ‘what it needs to do’.

This phase is absolutely critical.

This phase takes the abstract, delicate 'vision' of an idea and attempts to solidify it into a hard and fast mould. A bit like making a plaster cast, the requirements are the mould: if the mould is loose, lacking detail and not rigid, then the plaster model will not be what the artist envisaged.

The science of good requirements is a well-documented area and, as a tester, you should be intimately familiar with it! I won't cover it here. However, as a Tester you are in a unique position to guide, influence and improve requirements standards. You should have an excellent understanding of the business context, combined with a good interpretation of how the requirements may be implemented. This means you can bridge the skills and knowledge gap between Business Analyst/Product Owner and Developer in a way no one else can.


Quantifying the Idea. So someone in the business (or in the industry) has had an idea. Before we head into requirements and development, we need to really get to grips with it. As a Tester, this means I'm thinking about the following questions:

  • Who would use this new feature?
  • What is the use case? Are there some case studies we could look at (even theoretical ones are useful)?
  • How will this affect the use of other features?
  • How do we expect the new feature to change our revenue?
  • Do I know enough about this subject yet?

During this phase I would expect that, in most cases, wireframes and prototypes are built, theories are tested and validated, and alternative options investigated. In some ways it's helpful to use this phase to think of as many reasons as possible why we shouldn't build this idea.

That’s not to say we should be negative, let’s face it, if nobody built anything we wouldn't have much work on would we!? The point I am trying to make is, that the best ideas will always stand up to scrutiny. You will find that scrutiny actually improves ideas, because they change and adapt as weaknesses and flaws are discovered.

As a Tester you are again in a unique position to work on this phase. Your business context knowledge, experience of how users interact with the existing product, their complaints, and the things they like, are all available to you. Combine this with your ability to think critically, but objectively about things and your inbuilt curiosity and desire to learn make you a valuable asset here.

Sure the decision makers (Business Analysts, Product Owners, etc.) have much of this information, and it is certainly not your job to be hijacking these decisions away from them, but you do have a different skill set and way of thinking. Use it.

And finally...

Business Vision. So this is a tougher nut to crack. 'Business Vision' is seated in the domain of the 'higher ups', the Business Leaders. I'm not about to suggest you should be scheduling weekly checkpoints with your CEO to test his 5-year plan for the company (although...!?). Of course, that's not the way to go. But we really have to pay a lot of attention to this. The Business Vision is going to dictate the work coming through the business for the foreseeable future, so you need to be aware of it in order to up-skill and gain knowledge of the new things you will be encountering.

It’s not good enough to wait until you need the skills and the knowledge, before you start to learn. Be proactive. When the Business says “we’re changing”, be ready, waiting. This is the way in which you can ensure that - as the decisions are being made, the requirements set, the code written - you are right there, challenging, validating, questioning and adding value, every step of the way.

Final thoughts

It is easy to think of ourselves as "just Testers". Let's face it, there's always so much to do, without seeking out more work! But I would really urge you to consider expanding your horizons. I am increasingly coming around to the idea that the title of "Tester" is an inaccurate representation of what our real skills and value are. You have a unique skill set and a unique viewpoint, often able to bring together aspects from all around the business, which other disciplines have less access to. You really should be using it!
Go add value!

Monday, 9 November 2015

Automation in testing - writing your own tools


A consistently hot topic in testing is “automation” and how it can be used to improve testing. In this blog post I’m going to talk about how you can really get the most out of “automation” very quickly.

Automation? So you’re going to replace me with a machine?

Nope, I’m going to suggest you augment your abilities as a tester like a cyborg! Part human, part machine. Frequently people assume automation is referring to having a machine perform all of the tests you might perform manually. But actually automation can be used to help you manually test faster. Are you spending a lot of time tearing down databases and re-creating data? Are you spending a lot of time repetitively reading and comparing files? Automation can help you focus on the fun part of testing!

So where do I start?

Before I get onto some ideas for what you can automate, you need tools with which to create the automation! First of all, do you have any programming knowledge? If you don't, don't fret! (If you do, please ignore me, as I'm writing this assuming you don't.) There are things called "scripting languages" - a form of programming language that is usually interpreted rather than compiled. What does that mean? Well, it means they are much easier to work with, and some of them even have a more natural, English-like syntax. Scripting languages are a great way to quickly and easily put together programs that automate tasks for you, and hopefully you will find them much more accessible than compiled languages such as Java or C.

So which language is best?

The one you are most comfortable with, really. If you already know a language, then there is nothing wrong with sticking with it. If you don't know any languages, then the two I would recommend are Python and Ruby. Check them both out and decide for yourself which you like the most. I recommend Python and Ruby because they are very commonly used and can be run on any machine, be it Windows, Mac or Linux. There are also lots of examples of free code and libraries (collections of code) available online for you to use to tackle almost any problem. I personally prefer Python, simply because it is more human-readable than Ruby and many other languages, and I will write the rest of this blog with Python in mind.
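To give a flavour of how little code a small automation task takes, here is a hypothetical example: moving the previous run's log files out of the way so each test session starts clean. The directory names are made up - the point is only how readable a few lines of Python can be.

```python
import shutil
from pathlib import Path

def archive_logs(log_dir="logs", archive_dir="logs/archive"):
    """Move every .log file in log_dir into archive_dir,
    returning the names of the files moved."""
    Path(archive_dir).mkdir(parents=True, exist_ok=True)
    moved = []
    for log in Path(log_dir).glob("*.log"):
        shutil.move(str(log), str(Path(archive_dir) / log.name))
        moved.append(log.name)
    return moved
```

Even if you have never programmed before, you can probably follow what this does - which is exactly the accessibility I mean.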

Ok, so I’ve done a few tutorials but what do I start automating?

Think about tests you’re running and where you spend the most time, can any of it be handled by a script? Is there something about the setup of the test that can be automated?

I don’t know?

That’s ok, it is hard at first to really know what you can and can’t do until you’ve attempted something or seen it done before. Naturally I would recommend trying some ideas as this is always a good way to learn. But to give you some ideas of what is possible, I’ll cover some of the tools that I’ve created myself.

Data Setup

My first port of call whenever I'm considering automation-assisted testing is to look at the data setup for the test. Sometimes there can be a lot of tests that require a lot of setup beforehand, but you don't want to vary that setup too much - you simply want an environment where you can focus on a particular area. Perhaps you're attempting to recreate a bug 20 pages into a website or an application. Or perhaps you're testing an invoicing system and you simply want some base data to repeatedly test against. Sometimes these kinds of tests involve a lot of setting up and then restarting to try something else - you want a "clean" set of data that isn't spoiled too much by testing you've done before.

Automation can help a lot in quickly setting up the exact same "clean" data again. You may ask "but why not just have a saved state of an environment?" - sure, that is a valid alternative strategy for this problem. However, it's a different kind of maintenance: with a saved state you are maintaining a database snapshot that you need to keep updating, while with a script you maintain the script as the system under test changes. Personally I prefer maintaining scripts; I find scripts can be written to be less affected by change than a database kept in a clean state.

Some examples of tools I’ve created to help speed my testing up by creating data are:
  • A script that uses the Selenium WebDriver library to open a browser window and create data in a web application for me. I wouldn't normally recommend using Selenium for data setup, because it is designed for checking UI elements and is therefore quite slow. But in this particular case it was a legacy system with no way to create the data other than through the UI. I felt it was worth mentioning because Selenium is a useful library if you need to script around a UI. This script became a time-saver simply because it could be left running while I worked on other things.
  • A script that controlled two SIP telephones and had one call the other using the sipcmd program and later, the PJSIP library. This was used to quickly and easily create call traffic, especially in large amounts (e.g. making 100 calls). During some of my testing I’ve had instances where I’ve had to simulate telephone calls and it was useful to automate it to create volume. It also had the benefit of being able to then log and provide details about the phone call that would have been more difficult to see manually.
  • A script that uses the requests library to interact with a REST API. This allowed very rapid data setup within seconds and the requests library is extremely straightforward to use. I fully recommend this approach to data setup because of its speed.
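As a sketch of the last approach, here is what data setup through a REST API with the requests library can look like. The base URL, endpoint and payload fields are hypothetical stand-ins for whatever your system exposes.

```python
import requests

# Hypothetical test-environment API - substitute your own.
BASE_URL = "https://test-env.example.com/api"

def create_customer(name, email):
    """Create one customer record via the API and return its id."""
    resp = requests.post(
        f"{BASE_URL}/customers",
        json={"name": name, "email": email},
        timeout=10,
    )
    resp.raise_for_status()  # fail loudly if setup itself breaks
    return resp.json()["id"]

def setup_clean_data():
    """Recreate the same 'clean' baseline before each test session."""
    return [create_customer(f"Test Customer {n}", f"test{n}@example.com")
            for n in range(1, 6)]
```

A script like this runs in seconds, and because it drives the same API the application uses, it tends to survive UI changes that would break a Selenium-based equivalent.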

Speeding up tests

The second area I focus on is where I spend the most time during a test. Am I manually performing something that is just checking? Can a machine do this checking instead? I once had to test a change to a billing system that produced invoices for customers. I wanted to run the invoices on the old system and the new system and compare them, to see if I could quickly identify any obvious differences between the two. Manually, this was a lot of "checking" that didn't require much intellectual thought - I would literally be comparing one value to another. That makes it a candidate for automation, freeing me to work on more intellectual tests, such as edge cases or scenarios where we don't currently have a customer (and so can't test through this method).

Some examples of tools I’ve created for speeding up tests are:
  • A script that compares groups of CSV (comma-separated values) files. I used this to very quickly compare the results of running invoices in one version of a billing system with another. It very simply compared the totals of the invoices - so it wasn’t “testing” very much, but it allowed me to very quickly identify any easy or obvious failures in the billing if the totals didn’t match. This is a good example of a script that augmented my manual testing and allowed me to provide information much faster.
  • A script that uses the requests library to quickly check every customer-facing API endpoint returns a 200 OK response. This was useful for very quickly performing a very basic check that the API appeared to be functioning - which would quickly catch any easy bugs.
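The CSV-comparison idea can be sketched in a few lines with Python's built-in csv module. The column names (`invoice_id`, `total`) are hypothetical; the real exports would dictate them.

```python
import csv

def load_totals(path, key="invoice_id", total="total"):
    """Map invoice id -> total from one CSV export."""
    with open(path, newline="") as f:
        return {row[key]: row[total] for row in csv.DictReader(f)}

def compare_totals(old_path, new_path):
    """Return (invoice_id, old_total, new_total) for every invoice
    whose total differs between the two exports (or is missing)."""
    old, new = load_totals(old_path), load_totals(new_path)
    mismatches = []
    for invoice_id in sorted(set(old) | set(new)):
        if old.get(invoice_id) != new.get(invoice_id):
            mismatches.append(
                (invoice_id, old.get(invoice_id), new.get(invoice_id)))
    return mismatches
```

Note that this only flags totals that differ - it doesn't say which system is right. That judgement, and the investigation of each mismatch, is still the tester's job.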

When not to automate

So this all sounds great, right? Remove all the boring, time-consuming testing and leave a machine to do it? Hold your horses! It's not quite as straightforward as that. There are times when automation really isn't a good idea. Here are some common pitfalls:
  • The desire for invention. Beware of becoming more interested in engineering automation than in providing valuable testing. Make sure that any automation you write is going to deliver some value to your testing - don't automate parts of your testing simply because you can.
  • Avoid creating a mirror image of the system you’re testing. This is very easy to do, say you are testing a calculator - it’s easy to end up writing a script that enters 2+2 and then calculates the answer itself. So both the calculator and the script calculate the answer is 4. Why is this bad? Because in a more complex example where a failure occurs, how do you know which is wrong? Have you written a script that calculates the answer wrong or is the calculator failing? Instead you should be writing a script that already knows the answer. You shouldn’t be calculating the answer during the test.
  • Once-only tests with little to no repetitive nature. When I say repetitive nature, I mean there isn't anything in the one-off test that requires you to repeat something like data setup or checks - those parts are potentially still worth automating. But otherwise, one-off tests are nearly always quicker to perform manually, and the cost of creating an automation script won't be recovered because it may never be used again.
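The calculator pitfall above can be made concrete in a few lines. `calculator_add` is a hypothetical stand-in for the system under test; the contrast is between a check that recomputes the answer and one that holds pre-computed expected answers (an oracle).

```python
def calculator_add(a, b):
    """Hypothetical system under test."""
    return a + b

# Bad: the script mirrors the system's own calculation.
# If they ever disagree, you can't tell which one is wrong.
def check_add_bad(a, b):
    return calculator_add(a, b) == a + b

# Good: the script holds answers worked out independently, in advance.
KNOWN_ANSWERS = [(2, 2, 4), (10, -3, 7), (0, 0, 0)]

def check_add_good():
    return all(calculator_add(a, b) == expected
               for a, b, expected in KNOWN_ANSWERS)
```

Addition is trivial enough that the mirror looks harmless; with a complex billing calculation, the "bad" pattern amounts to reimplementing the system and then trusting two implementations to be wrong in the same way.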

One more thing…

This post has been all about creating your own tools via scripting, but sometimes a tool may already exist! It's always worth searching the Internet for existing tools that may help your testing; scripting comes into its own when you don't have time to search and try different options and just want to quickly put together something small.
Existing tools can take many forms such as Python libraries, professional software products or perhaps plugins or extensions to another piece of software your company already uses.


In summary:
  • Automation is useful to augment your testing, improving your speed and efficiency.
  • Scripting languages are a good place to start learning how to create your own automation tools.
  • It’s hard to know what you can script at first, so never stop asking the question “can I script this?”.
  • However, beware of creating automation simply because you can, make sure it’s valuable.
  • Even if you can’t create your own tool, have a look around and see if anyone else has.

Wednesday, 14 October 2015

An introduction to testing


I’ve had a few requests for an article that explains what testing involves for a complete newcomer, so here it is! As I’ve progressed in my career in testing, I’ve discovered it’s a subject that isn’t widely taught in academia and very few people ever choose it as a career path. I’d like to be able to contribute to changing this and hopefully one day inspire others to choose it and feel the same passion to become a better tester!

What is testing?

Testing is the process of designing, executing and analysing tests. One dictionary definition of a "test" is:
“A procedure for critical evaluation; a means of determining the presence, quality, or truth of something; a trial”
In other words, it typically means taking something - be that a piece of software, an aircraft or a piece of chocolate - and evaluating some truth from it while comparing it to some form of specification.
The result of carrying out these tests is that you gather information and learn something about the thing you are testing. Testing is all about learning as much as possible about whatever you are testing - through this you can provide information regarding how a system actually works, what specific code does, bugs and the overall quality.
The reason companies hire dedicated testers is simply because there is so much to learn and test! Typically, companies operate as departments, with a programmer and a salesman focusing on totally different aspects of the company. Because of this, very quickly no one in the company knows everything about their product. As a tester or a testing team, you play a part in trying to provide this full picture - or at least in telling people what they don't know.

Why not document this information or automate these tests?

Absolutely! As a tester I am always promoting the use of documentation, to ensure that the information I uncover is not difficult to find again.
Automation used well can also save a lot of time in discovering simple or basic failures particularly for more repetitive or laborious tasks, taking the burden off manual testing to focus on more creative types of tests.
However, both of these are expensive activities. Both require work to write, design and maintain and can only be written when the information has been defined.
Also, automation in itself is not testing, because it cannot understand the context of what it is testing. Automation will only test what you tell it to test and will not, for example, investigate problems it notices during a test. So while it can save you some time performing repetitive checks, you still need a human to critically observe the system too.

Hence, the position of a tester exists to help guide decisions on what tests to run, what to automate and what to document. This is all very dependent on what is being tested and the non-functional requirements of the business. Some companies have requirements which mean a larger amount of automation and documentation is needed than others. Some companies will have programmers carry out testing, and others will have huge testing departments of over 100 testers. Others may even rely on end users to test their products. In all of these scenarios, however, the objectives of testing are the same: to gather information on the system and learn.

How do I become a tester? How can I learn?

Regarding academia, there currently don't seem to be any academic courses on testing at all; a quick search on UCAS in the UK shows no degree courses with testing as a subject.
For software testing, there are qualifications such as the ISTQB/ISEB which many companies recognise; however, it is not necessary to hold one to become a tester, and there is a lot of dissatisfaction with these qualifications within the software industry.
From my experience, there are only two routes into software testing:
  1. Applying for a junior testing role in an organisation which is willing to hire inexperienced staff and train them.
  2. Working in a different role and switching to testing within the same company.

The main piece of advice I can give though is, if you are applying for a testing role - try to learn as much as you can about the company, what you might be testing and anything else you can think of. What will make you a good tester is your ability to learn and to understand what you still don’t know and seek this information out - testers should always be inquisitive and asking a lot of questions.

Ok, so if there aren’t many useful courses, what about online?

Yes! There are plenty of places online to read about testing practices! I’m mainly knowledgeable about software testing so I can only recommend the following resources on software testing as a start:
There are also some good places to ask questions, read further into various topics and become involved in the testing community:

However, I’ve found the following video is a fantastic introduction to testing; it articulates what you are trying to do as a tester pretty well:

Saturday, 26 September 2015

Microservices discussion by XP Manchester


A couple of weeks ago, I was invited by some fellow programmers to attend an event on microservices organised by XP Manchester. Microservices are a hot topic right now in software development, so I wanted to go along partly from my own interest in the subject area, but mainly to think about how testing may be impacted and what the considerations for testing might be. The event was focused on programming and software architecture, but its discussion format allowed for questions, so a variety of different points were talked about through the evening.

What is a “microservice”?

The conclusion from the evening was that there is no agreed definition! However, I think we can summarise microservices as an architectural approach in which you break your system down into smaller, independent services. The main reason you want to do this is scalability, but it is an expensive process, and the other conclusion of the evening was "Microservices are very hard!"

What does testing have to do with this?

I had initially gone to the event hoping to better understand some of the implications for testing. But I actually found myself taking a step back and observing the meetup and the discussion from a philosophical view, and I found a lot of parallels with the debates on automation in the testing world. So while the content of the discussion had little to do directly with testing, I think there are lessons that apply to automation testing and many other "hot topics".

The temptation to try something new - the fallacy of “best practice”

One of the points raised was that microservices as a concept has been around for several decades, but only very recently has it become a popular subject - mainly down to the famous use cases of Netflix and Spotify. It is very easy for people to see companies such as these and want to copy their models. The problem is that solutions like microservices are expensive and complex; they are solutions to very particular problems, and too expensive to be applied everywhere. It is tempting to consider them a "best practice", which is totally inappropriate. I see the same attitude with automation testing: large companies talk about it, and everyone else decides to follow it as a "best practice" or the "best thing to do". Automation testing is also very expensive and is not a best practice - it is a solution to a particular problem.
At the event, someone mentioned a great analogy - you wouldn’t use the same design and manufacturing methods to build a model plane as you would a real Jumbo Jet. Just because famous or bigger companies use particular solutions, doesn’t mean these solutions are appropriate for your situation.

Only looking at the benefits, without considering the cost

Another point I could relate to is the belief held by some people that microservices are easier and simpler: that by breaking down your monolithic code base, you break down the complexity. This is false. The complexity is still there; it has just been spread around, which makes it easier to manage in some respects and harder in others. While a particular area of code is easier to manage in isolation, the overall integration, or full system, becomes much harder to manage in terms of infrastructure, deployment and debugging.
I see the same problem in automation testing. A common view I’ve come across is that automated tests are always quicker than a human manually typing on a keyboard. Just as with microservices, this ignores the bigger picture: it focuses on the speed of executing a test rather than on what you gain and lose in the wider process. Automation testing is more than just executing a test; there is a lot of work to do before and after the execution! Its costs include the time it takes to write the test, the time to analyse its results every time it’s run, and the ongoing cost of maintaining it. With a human manual tester, the cost of writing the test is massively reduced because you are not writing code; in some cases, perhaps nothing needs to be written at all. Analysing the results can also be much quicker for a human, who can run the test, notice irregularities and analyse all at the same time, something a computer cannot do. Maintenance is also far lower for manual testing, because a human can adapt to a new situation easily.

Because microservices and automation testing are both very expensive, their cost must be weighed against their benefits; they only make sense when the benefits outweigh that cost. Typically, the value of automation testing comes from repeatable activities, where repetition will eventually overcome the high up-front cost. For anything that isn’t repeatable, it’s difficult to really justify automation over simply carrying out the testing manually.
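The trade-off above can be sketched as a simple break-even calculation. This is an illustrative model only; all the numbers and the function name are hypothetical assumptions, not figures from any real project:

```python
# Illustrative break-even model for test automation.
# All costs are in hours and are hypothetical assumptions.

def break_even_runs(write_cost, maintain_per_run, analyse_per_run, manual_per_run):
    """Return the first number of runs at which automation becomes cheaper
    than repeating the test manually, or None if it never pays off."""
    saving_per_run = manual_per_run - (maintain_per_run + analyse_per_run)
    if saving_per_run <= 0:
        return None  # automation never recoups its up-front writing cost
    runs = write_cost / saving_per_run
    return int(runs) + 1  # first whole run where automation is strictly cheaper

# Example: 8 hours to script the test, 0.25h upkeep plus 0.25h result
# triage per automated run, versus 1 hour per manual run.
print(break_even_runs(8.0, 0.25, 0.25, 1.0))  # -> 17

# A test whose per-run overhead equals the manual effort never pays off.
print(break_even_runs(8.0, 0.5, 0.5, 1.0))  # -> None
```

The point of the sketch is the shape of the model, not the numbers: a one-off test (one run) is nowhere near break-even, which is exactly why non-repeatable testing is hard to justify automating.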

Additional thoughts

On a different note, I’d also like to talk a little about how XP Manchester organised the event, as I felt it was a very successful format that I hadn’t experienced before. Everyone was asked to sit in a circle, with five chairs in the middle. Four people sat on those chairs, leaving one vacant, and discussed the topic, with the discussion guided by a moderator. If someone wanted to join in, they sat on the vacant fifth chair and someone else from the discussion had to leave. Meanwhile, the rest of us in the outer circle remained silent and listened. I felt this format was fantastic for keeping a focused discussion while allowing around 30 people to be involved or to listen. It was a refreshing change from traditional lecturing approaches, or from the chaos of 30 people all talking at once; in some respects it was the best of both worlds. Credit to the guys at XP Manchester for running a great little event that produced some useful intellectual discussion!


  • There are definitely a lot of relatable problems in decision making for programmers, just as there are for testers.
  • Don’t be tempted to follow a “best practice” or “industry standard” without considering whether it is right for you.
  • Always consider the costs of your decisions, and always treat so-called “silver bullet solutions” with suspicion - is this solution really the best? Is it really as easy as people are suggesting?
  • For groups of 30ish people, if you want to generate a focused, intellectual discussion for people to listen and learn from but don’t want to use a lecture/seminar format - then consider the format described above!

Friday, 18 September 2015

Sec-1 Penetration Testing Seminar


Recently I was invited to a seminar by Sec-1 on Penetration Testing. It was a great introduction to the discipline and I took away quite a few really great points. I’ve never really performed Penetration Testing myself and I only have a general knowledge of it - enough to be able to identify and understand some basic problems. I’m looking to expand my knowledge of this type of testing so that I can bring much more value to my functional testing. In this blog post I will talk about the main points that I took away from the seminar.

You are only as secure as your weakest link

You may spend a lot of time securing one particular application server, but if there is just one older or less secure server on the same network, your security is only as strong as that one weak server. An old server driving the office printer may not be considered a security concern, but hackers can use it to gain access and compromise your network.

Whitelist, don’t blacklist

If you write rules that attempt to block specific malicious interactions (a blacklist), you end up with a maintenance nightmare, continually having to update those rules for each new threat. It is much more effective to write your rules to accept only the expected, correct interactions and reject everything else.
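As a minimal sketch of the whitelist approach, the validator below describes exactly what a valid value looks like and rejects anything else. The field name and the allowed pattern are illustrative assumptions, not rules from the seminar:

```python
import re

# Whitelist: define the complete shape of valid input and reject the rest,
# rather than trying to enumerate every known-bad pattern (a blacklist).
# Hypothetical rule: usernames are 3-20 word characters.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def is_valid_username(value: str) -> bool:
    """Accept only values matching the whitelist pattern."""
    return bool(USERNAME_RE.fullmatch(value))

print(is_valid_username("alice_99"))              # True
print(is_valid_username("alice'; DROP TABLE--"))  # False: rejected without
                                                  # needing a rule for SQL,
                                                  # shell, XSS, and so on
```

Note that the injection attempt is rejected not because we recognised it as an attack, but simply because it doesn’t match the description of valid input; that is what makes the whitelist cheap to maintain.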

Keep up to date as much as possible

It can be a pain to keep libraries, servers and software up to date, because updates can break existing application functionality. But where possible it should be encouraged, because updates may contain important security fixes. However, you cannot rely on an update’s change log to tell you about these fixes, because they are typically described as “minor bug fixes”. Companies do this because admitting they regularly fix security holes can be considered bad publicity.
Keeping up to date will save time in the long run, as your systems become more secure without you needing to create your own security fixes later for systems you have not updated.

Only a small number of exploits are needed to cause major vulnerabilities

At the seminar they demonstrated a variety of techniques for extracting just enough information to open up larger vulnerabilities. Through SQL injection, a poorly secured form can provide full access to your database, allowing hackers to dump the entire thing, which they can then use to gain further access to your network or sell on to other malicious parties.
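To make the SQL injection risk concrete, here is a minimal sketch using an in-memory SQLite database. The schema, data and function names are invented for illustration; the unsafe version builds the query by string formatting, while the safe version binds the input as a parameter:

```python
import sqlite3

# Toy database with one hypothetical user row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the SQL text.
    query = "SELECT * FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterised: the input is bound as data and can never become SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
# The classic payload turns the unsafe WHERE clause into a tautology,
# returning every row in the table...
print(len(find_user_unsafe(payload)))  # -> 1 (all rows)
# ...but matches no row at all when bound as a parameter.
print(len(find_user_safe(payload)))    # -> 0
```

The same parameter-binding discipline applies whatever database driver you use; it is the standard defence against this class of attack.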

Attacks do not have to be direct

Even if your own system is highly secure, hackers can target totally independent systems, such as websites your employees visit, and gather password data there. A lot of people still re-use passwords, and this can be an alternative way into your system. In the same vein, you are open to attack through integration partners if their systems are not as secure as yours. Again, you are only as strong as your weakest link.


  • I found the seminar useful and I certainly learnt a lot. I can recommend Sec-1’s seminars to anyone who has only a general knowledge of penetration testing and wants to understand more.
  • Keeping software and hardware up to date has more benefits to security than it may first appear because security fixes are not always made public knowledge.
  • Penetration testing requires specialist skills and knowledge. However, I feel there is still real worth in a functional tester having a better understanding of it. It allows me to pick up on potential areas of security concern and helps me to drive quality in terms of security by challenging lax attitudes to these “minor issues”.

Friday, 11 September 2015

What is Quality? (When to use the 'C' word)

What is Quality?

One of the biggest challenges we face in software development is speaking in a way that means the same thing to everyone involved. This is particularly tricky when you consider some of the words we use all the time which are really quite ambiguous. One of those words is ‘Quality’.

Ask 10 colleagues to define ‘Quality’ and I’d hazard a guess that you’d get 10 different definitions. But how can this be? Surely “what Quality is” underpins our whole existence, especially for those of us who are testers?

So why is it so hard to define? Well, because it’s hard, that’s why. (I know, I know, answering the question with a question.)

I think the reason it’s hard to answer is the same reason it’s a problem at all. These definitions fall into an awkward place where we all assume we are on the same page, yet all hold our own definitions. In other words, the definition is subjective. And if the definition is subjective, how can you, as a person or as a business, know whether you are delivering high Quality or not?

If you Google “what is software Quality?” you will get 468,000,000 results, a headache, and a growing sense of despondency and despair.

If we can’t agree on our definition of Quality, then it is going to be very difficult for us to ‘achieve’ good Quality. Even if we did, how would we know!?

Who decides what ‘Quality’ is?

So who makes the decision, ‘this thing is high Quality’? Well there are a number of potential candidates:

  • The business leaders? Is it the people who decide that a product needs to be built, or updated? They must know what is required, so it follows that they would know whether the product fulfils those requirements well, wouldn’t it?
  • The engineers who produce the product? These people know the product at its lowest level, so maybe they really understand it better than anyone?
  • The end-user? The people who use the product are surely going to have an opinion about how high its Quality is?

I'm sure in your line of work you can relate in some way to these three groups. Another way to consider them could be the ‘creators/financiers’, the ‘builders’ and the ‘customers’. There’s no doubt that anyone in these groups will have an opinion about the Quality of the product. More than likely they will have a valid opinion. So where is this leading us?
“Quality is the combined opinions and definitions of everyone who is involved with a product”
Well, it’s concise, simple and easy to understand. Could this be it? I think this is the problem: by buying into a definition like this, we end up back in the same mess. Sure, it’s credible, justifiable, it even appears to be common sense! But it is no better than the millions of other definitions you might find; it doesn’t give us any focus or clarity.

So how can we get around this issue? We need to get back to basics. Every product or service has something in common: to justify its existence, it must be useful to its end-users. A business can produce something which is in many ways a world-beating product, but if the end-users don’t understand it, or don’t want it, it will ultimately fail (and really isn’t of such high Quality after all).

For this reason we can’t escape the fact that the Product must deliver for the end users. This must surely mean that the really important people in this equation are the end-users.

I would take that further. Let’s be frank: in a business environment, you can’t get away from the question ‘will this make us money?’ - if you can’t answer ‘yes’, then you’re either a doomed business or a charity. I can’t get away from the idea that the way end-users view you, your product and your business is the be-all and end-all. Unhappy end-users are not a route to business or technical success.

Having concluded that the most important link in the chain is the last one, we can focus on the things that matter most. ‘End-users’ is a bit clunky; I prefer to think of this group as ‘customers’. They are not necessarily ‘business customers’; they could be internal customers as well. What it boils down to is this:
“Quality is the perception of the Product (and Business) by its customers.”
I think this definition is robust enough to replace the 468,000,000 results of our Google search. You may think it’s just number 468,000,001, but let’s put it to the test:

Recently there have been security breaches at some very high-profile companies. Prior to each breach, the perception of the company was usually very good. Following the breach, the perception of that same company falls significantly, yet the product didn’t change. At the other end of the scale, as an individual working on a piece of work, you may have an internal customer (another department, or your manager). You complete the work and submit it, and it may be a very good piece of work, but the ‘customer’ had interpreted the requirements differently to you: the product is sound, but their perception of it is poor.

This idea teaches us a very important lesson which ‘Software Quality’ definitions largely ignore: perception is just as important as physical Quality. In practice, this means you must always be honest about your product. Don’t oversell it; don’t say it can do things that it cannot do. On the flip side, if you claim it can do something, you need to move heaven and earth to make it do that thing, otherwise customers’ perceptions will suffer.

The last thing this definition helps us to achieve is a focus on the people who really matter when we talk about Quality: customers. They may be business customers (the public, the people who buy the product or hire the services) or internal customers (other people or departments in your business which need your services or products in order to succeed), but we must always focus on the customers.

As a Tester, I use this definition to help me every day. In every situation I find myself in, it is essential to keep asking: “What does this mean for the customer?”, “Is this what the customer would expect?”, “Does the customer even care about this?”, “Do we know what the customer wants from this?”, and so on. This focus helps ensure that we only work on the right things, at the right time and in the right way. With this continual, unrelenting focus on our customers, we will always be pushing Quality higher.

Always use the ‘C’ word.