Saturday, 26 September 2015

Microservices discussion by XP Manchester

Introduction

A couple of weeks ago, I was invited by some fellow programmers to attend an event on microservices organised by XP Manchester. Microservices are a hot topic in software development right now, so I wanted to go along partly out of my own interest in the subject, but mainly to think about how testing might be affected and what considerations testing might need. The event was focused on programming and software architecture, but its discussion-based format allowed for questions, so a variety of points were covered over the course of the evening.

What is a “microservice”?

The conclusion from the evening was that there is no agreed definition! However, I think we can summarise microservices as an architectural approach that breaks your system down into smaller parts. The main reason you would want to do this is scalability, but it is an expensive process, and the verdict of the evening was that “microservices are very hard!”.

What does testing have to do with this?

I had initially gone to the event hoping to better understand some of the implications for testing. But I found myself taking a step back and observing the meet-up and the discussion from a more philosophical viewpoint, and I noticed a lot of parallels with the debates on automation testing in the testing world. So while the content of the discussion had little to do directly with testing, I think there are some lessons that apply to automation testing and many other “hot topics”.

The temptation to try something new - the fallacy of “best practice”

One of the points raised was that microservices as a concept has been around for several decades, but only very recently has it become a popular subject, mainly thanks to the famous use cases of Netflix and Spotify. It is very easy for people to see companies like these and want to copy their models. The problem is that solutions like microservices are expensive and complex; they solve very particular problems and are too costly to be applied everywhere. It is tempting to treat them as a “best practice”, which is totally inappropriate. I see the same attitude towards automation testing - large companies talk about it and everyone else decides to follow it as a best practice. Automation testing is also very expensive and is not a best practice; it is a solution to a particular problem. Yet I see it discussed as the “best thing to do” in much the same vein as microservices.
At the event, someone mentioned a great analogy - you wouldn’t use the same design and manufacturing methods to build a model plane as you would a real Jumbo Jet. Just because famous or bigger companies use particular solutions, doesn’t mean these solutions are appropriate for your situation.

Only looking at the benefits, without considering the cost

Another point that I could relate to is the belief from some people that microservices make things easier and simpler - that by breaking down your monolithic code base, you are breaking down the complexity. This is false: the complexity is still there, it has just been spread around, which makes it both easier and harder to manage in different respects. While a particular area of code is easier to manage in isolation, the overall integration and the full system are much harder to manage in terms of infrastructure, deployment and debugging.
I see the same problem in automation testing - a common view I’ve come across is that automation testing is always quicker than a human manually typing at a keyboard. Just as with microservices, people are ignoring the bigger picture here - focusing on the speed of executing a test, rather than considering what you gain and lose in the wider process. Automation testing is more than just executing a test; there is a lot of work to do before and after the execution! The cost of automation testing is the time it takes to write the test, to analyse its results every time it is run, and to maintain the test. With a human manual tester, the cost of writing the test is massively reduced because you are not writing code - in some cases perhaps nothing needs to be written at all! Analysing the results can also be much quicker for a human, who can run the test, notice irregularities and analyse them all at the same time - something a computer cannot do. Maintenance costs are also far lower for manual testing, because a human can adapt to a new situation easily.


Because microservices and automation testing are very expensive, the cost must be weighed against the benefits; only if the benefits outweigh the cost does either make sense. Typically, the value in automation testing comes from repeatable activities, where the savings accumulate over time and eventually overcome the high up-front cost. For anything that isn’t repeatable, it is difficult to justify automation over simply carrying out the testing manually.
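To make that trade-off concrete, here is a rough, back-of-the-envelope sketch of the break-even point for automating a single test (the function and all the numbers are made up purely for illustration - plug in your own estimates):

    # Illustrative only: a hypothetical break-even calculation, not a real costing model.
    def runs_to_break_even(write_cost_hours, maintenance_per_run_hours,
                           analysis_per_run_hours, manual_run_hours):
        """Number of runs before automation becomes cheaper than manual execution,
        or None if the automated test never pays back its up-front cost."""
        saving_per_run = manual_run_hours - (maintenance_per_run_hours + analysis_per_run_hours)
        if saving_per_run <= 0:
            return None
        return write_cost_hours / saving_per_run

    # Example: 8 hours to script the test, 0.25h upkeep and 0.25h result analysis
    # per run, versus 1 hour to run and assess it manually.
    print(runs_to_break_even(8, 0.25, 0.25, 1))  # -> 16.0 runs

If the test will only ever run a handful of times, the sums simply don’t add up.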

Additional thoughts

On a different note, I’d also like to talk a little about how the event was organised by XP Manchester, as it was a very successful format that I hadn’t experienced before. Everyone was asked to sit in a large circle, with 5 chairs in the middle. 4 people would sit on those chairs, leaving one vacant, and discuss the topic (guided by a moderator). If someone wanted to join the discussion, they sat on the vacant 5th chair and someone else from the discussion had to leave. Meanwhile, the rest of us in the circle had to remain silent and listen. I felt this format was fantastic for keeping a focused discussion while allowing 30 people to be involved or to listen. It was a refreshing change from traditional lecturing approaches, or from the chaos of 30 people all talking to each other at once - in some respects it was the best of both worlds. Credit to the guys at XP Manchester for running a great little event that produced some useful intellectual discussion!

Summary


  • Programmers face many of the same decision-making problems that testers do - there are a lot of relatable parallels.
  • Don’t be tempted to follow a “best practice” or “industry standard” without considering whether it is right for you.
  • Always consider the costs of your decisions, and always treat so-called “silver bullet” solutions with suspicion - is this solution really the best? Is it really as easy as people are suggesting?
  • For groups of 30ish people, if you want to generate a focused, intellectual discussion for people to listen and learn from but don’t want to use a lecture/seminar format - then consider the format described above!

Friday, 18 September 2015

Sec-1 Penetration Testing Seminar

Introduction

Recently I was invited to a seminar by Sec-1 on Penetration Testing. It was a great introduction to the discipline and I took away quite a few valuable points. I’ve never really performed Penetration Testing myself and I only have a general knowledge of it - enough to be able to identify and understand some basic problems. I’m looking to expand my knowledge of this type of testing so that I can bring much more value to my functional testing. In this blog post I will talk about the main points that I took away from the seminar.

You are only as secure as your weakest link

You may spend a lot of time securing one particular application server, but if you have just one older or less secure server on the same network, your security is only as strong as that one weak server. There may be an old server used for the office printer that isn’t considered a security concern, but it can be used by hackers to gain access and compromise your network.

Whitelist, don’t blacklist

If you write rules that attempt to prevent specific malicious interactions, you end up with a maintenance nightmare, continually having to maintain and update these rules for each new threat. It is much more effective to instead write your rules to expect only the correct interaction and reject everything else.
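As a rough sketch of the principle (a hypothetical form field, not code from the seminar), compare the two approaches in Python:

    import re

    # Blacklist: try to reject known-bad input - every new attack pattern
    # means another rule to write and maintain.
    BLACKLIST = [r"<script", r"DROP\s+TABLE", r"--"]

    def blacklist_ok(value):
        return not any(re.search(p, value, re.IGNORECASE) for p in BLACKLIST)

    # Whitelist: describe what correct input looks like and reject everything
    # else - here, a username of 3 to 20 letters, digits or underscores.
    WHITELIST = re.compile(r"^[A-Za-z0-9_]{3,20}$")

    def whitelist_ok(value):
        return WHITELIST.fullmatch(value) is not None

The blacklist has to grow with every new threat; the whitelist only changes if the definition of valid input changes.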

Keep up to date as much as possible

It can be a pain to keep libraries, servers and software up to date, because updates can break existing application functionality. But where possible it should be encouraged, because those updates may contain important security fixes. However, you cannot rely on the change logs to tell you about them, because they are typically described as “minor bug fixes”. Companies do this because it can be considered bad publicity to admit they are regularly fixing security holes.
Keeping up to date will save time in the long run, as your systems become more secure without you needing to create your own security fixes later for systems you have not updated.

Only a small number of exploits are needed to cause major vulnerabilities

At the seminar they demonstrated a variety of techniques that could be used to get just enough information to open up larger vulnerabilities. Through SQL injection, a poorly secured form could provide full access to your database - allowing hackers to potentially dump its entire contents, which they can then use to gain further access to your network or sell on to other malicious parties.
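As a simplified, hypothetical illustration of that kind of flaw (not code from the seminar), here is the difference between an injectable query and a parameterised one, using Python’s built-in sqlite3 module:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

    user_input = "' OR '1'='1"  # value typed into a poorly secured login form

    # Vulnerable: the input is pasted straight into the SQL, so the OR clause
    # matches every row and the query dumps the whole table.
    query = "SELECT * FROM users WHERE name = '%s'" % user_input
    print(conn.execute(query).fetchall())   # -> [('alice', 'secret')]

    # Safer: a parameterised query treats the input purely as data.
    print(conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall())  # -> []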

Attacks do not have to be direct

Even if your own system is highly secure, hackers can attack totally independent systems instead - for example, targeting websites your employees visit and gathering password data. A lot of people still re-use passwords, and this can be an alternative way into your system. In the same vein, you are open to attack through integration partners if their systems are not as secure as yours. Again, you are only as strong as your weakest link.

Summary

  • I found the seminar useful and I certainly learnt a lot. I can recommend Sec-1’s seminars to anyone who only has a general knowledge of penetration testing and wants to understand more.
  • Keeping software and hardware up to date has more benefits to security than it may first appear because security fixes are not always made public knowledge.
  • Penetration testing requires specialist skills and knowledge. However, I feel there is still real worth in having a better understanding as a functional tester. It allows me to pick up on areas of potential security concern and helps me drive quality by challenging lax attitudes towards these “minor issues”.

Friday, 11 September 2015

What is Quality? (When to use the 'C' word)

What is Quality?


One of the biggest challenges we face in software development is speaking in a way which has the same meaning for all those involved. This is particularly tricky when you consider some of the words we use all the time, but which are really ambiguous. One of these words is ‘Quality’.

Ask 10 colleagues to define ‘Quality’ and I’d hazard a guess that you’d get 10 different definitions. But how can this be? Surely “what Quality is” underpins our whole existence, especially for those of us who are testers?

So why is it so hard to define? Well, because it’s hard, that’s why. (I know, I know - answering the question with a question.)

I think that the reason it’s hard to answer is the same reason that it’s a problem at all. These definitions fall into an awkward place where we all assume we are on the same page, yet we all have our own definitions. In other words, the definition is subjective. And if the definition is subjective, how can you as a person, or as a business, know whether you are delivering high Quality or not?

If you Google “what is software Quality?” you will get 468,000,000 results, and a headache, and a growing sense of despondency and despair.

If we can’t agree on our definition of Quality, then it is going to be very difficult for us to ‘achieve’ good Quality. Even if we did, how would we know!?

Who decides what ‘Quality’ is?

So who makes the decision, ‘this thing is high Quality’? Well there are a number of potential candidates:

  • The business leaders? Is it the people who decide that a product needs to be built, or updated? They must know what is required, so it follows that they would know if the product fulfils those requirements well, wouldn’t it?
  • The engineers who produce the product? These people know about the product at its lowest level, so maybe they understand the product better than anyone?
  • The end-user? The people who use the product are surely going to have an opinion about how high its Quality is?

I'm sure in your line of work you can relate in some way to these three groups. Another way to consider them could be the ‘creators/financiers’, the ‘builders’ and the ‘customers’. There’s no doubt that anyone in these groups will have an opinion about the Quality of the product. More than likely they will have a valid opinion. So where is this leading us?
“Quality is the combined opinions and definitions of everyone who is involved with a product”
Well, it’s concise, simple and easy to understand. Could this be it? I think this is the problem: by buying into a definition like this, we end up back in the same mess. Sure, it’s credible, justifiable, it even appears to be common sense! But it is no better than the millions of other definitions you may find. It doesn't give us any focus or clarity.

So how can we get around this issue? We need to get back to basics. Every product or service has something in common: to justify its existence, it must be useful to its end-users. A business can produce something which in many ways may be a world-beating product, but if the end-users don’t understand it, or don’t want it, then it will ultimately fail (and really isn't of such high Quality).

For this reason we can’t escape the fact that the Product must deliver for the end-users. This must surely mean that the really important people in this equation are the end-users.

I would take that further. Let’s be frank: in a business environment, you can’t get away from the question ‘will this make us money?’ - if you can’t answer with a ‘yes’ then you’re either a doomed business or a charity. I can’t get away from the idea that the way end-users view you, your product and your business is the be-all and end-all. Unhappy end-users are not a route to business or technical success.

Having come to the conclusion that the most important link in the chain is the last one, this can help us to focus on the things which are most important. ‘End-users’ is a bit clunky; I prefer to think of this group as ‘customers’. They are not necessarily ‘business customers’ but could be internal customers as well. What it boils down to is this:
“Quality is the perception of the Product (and Business) by its customers.”
I think this definition is robust enough to replace the 468,000,000 results of our Google search. You may think it’s just number 468,000,001, but let’s put it to the test:

Recently there have been high-profile security breaches at some very well-known companies. Prior to each breach, the perception of those companies was usually very good. Following the breach, the perception of those same companies fell significantly, yet the product didn't change. At the other end of the scale, as an individual working on a piece of work, you may have an internal customer (another department, or your manager). You complete the work and submit it, and it may be a very good piece of work. The ‘customer’, however, had interpreted the requirements of this work in a different way to you - the product is sound, but the perception of it by your customer is poor.

This idea teaches us a very important lesson, which ‘Software Quality’ definitions largely ignore: perception is just as important as physical Quality. What this means in practice is that you must always be honest about your product. Don’t oversell it - don’t say it can do things that it cannot do. On the flip side, if you claim it can do something, you need to move heaven and earth to make it do that thing, otherwise customers’ perceptions will suffer.

The last thing this definition helps us to achieve is to focus on the people who really matter when we are talking about Quality: customers. They may be business customers (the public, the people who buy the product, hire the services, etc.) or internal customers (other people or departments in your business which need your services or products in order to succeed), but we must always focus on the customers.

As a Tester, I use this definition to help me every day. In every situation I find myself in, I find it is essential to always question: “What does this mean for the customer?”, “Is this what the customer would expect?”, “Does the customer even care about this?”, “Do we know what the customer wants from this?” etc etc. This focus helps me to ensure that we only work on the right things, at the right time and in the right way. With this continual, unrelenting focus on our customers, we will always be pushing Quality higher.

Always use the ‘C’ word.

Wednesday, 2 September 2015

Important defects or significant information?


Introduction

As a tester I feel I am a provider of information - information that allows others to best judge quality and risk. If this is correct, should I merely report all information with no attempt to judge importance or priority? If I filter the information to what I think is important, surely I'm influencing the decision process of others? I feel this is, as ever, a murky grey area with no easy answers.

Who decides importance?

Project Managers, Product Owners, Business Analysts, stakeholders - whoever determines what is worked on - decides importance. There is absolutely no question of that. As a tester I am not the one who decides what is worked on. I am not usually in conversation with the entire business, nor do I have sufficient knowledge of ‘the big picture’ to make these decisions, and it isn't the job I'm hired to do. Of course, there may be circumstances where these roles are blurred (there are Project Managers who test). Still, in a typical company set-up, I'm rarely hired as a tester to manage projects.


However, that doesn't mean that as a tester I can’t have some knowledge of the wider project or business concerns - having it vastly improves my testing! So I do have information that can assist others in measuring or deciding importance. I am a gatherer of information, and how I communicate it is all-important. In doing so, I need to use careful language to ensure that I am not under-emphasising or over-emphasising particular parts of that information.


For example, a product owner is gathering metrics on how a system is used. They measure how often particular features are used by customers and, based on this metric, decide how important bugs or problems affecting those features are. If I have found a critical bug in a feature, I have to be very careful to highlight and justify its critical nature. If I only described the bug as “there is a problem with this feature”, the product owner may choose to dismiss it because it affects a relatively low-use feature. But what if the bug corrupts the database? That surely has to be fixed? This is why careful use of language is important. A tester needs to understand the significance of the information they have gathered, and convey it in a balanced way.

Informing not blocking

So if a tester helps decide significance, how far should they go in justifying it? This is the tricky part - you must ensure you are not blocking the business, or even more importantly, that you are not seen as blocking the business. You need to balance providing useful information, which allows others to make decisions, with the language you use in describing that information. For example, if I find a defect which I believe is very significant and which I feel I need to highlight, I could use the following kind of language:


“There are lots of important defects, affecting all sorts of areas. We must fix them immediately and we cannot release until they are fixed!”


This is very poor use of language and doesn't provide any useful information. Firstly, I'm declaring the importance with finality - I am not the one to judge the importance of the defects. Secondly, I'm telling people what to do without providing any justification. It also gives the impression that I am demanding the defects are fixed.
Now consider if I worded it like this:


“There are two defects that are significant. The web application server fails to start because it is missing a configuration file and the database updates have deleted the accounts table. I also have a further list of defects that I think are significant but these two I think need attention first.”


Here I am highlighting what I know are significant defects and providing information about them so that people can make their own conclusions. By highlighting the significance, I focus people’s attention on those defects. By providing summaries of the defects, I allow people to make their own judgement of whether the defects are really important. So here I am not declaring importance, but I'm suggesting a course of action backed up by the information I have. This helps promote an image of my testing as being informative, rather than demanding.

And the blocking? Surely I can’t let a defect go live?!

It's also important, therefore, to have an attitude which allows you to accept and understand the decision that is made. Even when you have presented this information, the business may decide to accept the defect and not fix it. At that point you have done your job and should not feel responsible for the decision. Testers are not the gatekeepers for defects going live. Testers are more like spies - gathering intelligence to inform decisions made at a strategic level. Just like a spy, you may commit significant time and energy to delivering information you feel is significant - sometimes beyond what you were asked to do. But the spy doesn't act on the information, they merely deliver it. 007 is not a good spy in this respect - he tries to defeat the defect all by himself, when it may only be another henchman. We want to be team players, provide information to others and help attack the boss pulling the strings!

So if I just let someone else decide, I shouldn't care?

No, you absolutely should care! You should be passionate about quality and take pleasure in delivering clear and accurate information to the relevant people. Recognising that someone else takes the decision is not a sign that your information, your work, doesn't matter. It's a sign that there is more to consider than one viewpoint - the business or organisation cares about the wider view. But it can only make the best decisions if the smaller views that are communicated up are given care and attention. The value of any group must surely be the sum of its parts, so by caring you are implicitly helping the wider group care.

Summary


  • Testers don’t decide importance; however, they can influence it by providing information on significance.
  • The language you use shapes how others perceive significance, so your words must be carefully chosen - they should be objective, not subjective.
  • Testers should help the wider business to make informed decisions, not become the gatekeepers for defects.