Thursday, 17 August 2017

Best of the BSides - A friendly security conference in Manchester

Introduction

Today I attended a great little conference in Manchester called BSides Manchester. This was a free conference about security, run by members of the security community in a similar way to TestBash. In fact the whole event was a bit of a “SecurityBash” in many respects, which is awesome, and I recognised many familiar topics, concerns and ideas. Whether you're experienced with security or a newbie, I highly recommend this conference. I went along with no expectations, just hoping to learn as much as I could and expose my brain to new ideas; even if I didn’t pick it all up immediately, it would give my brain a place to start. Not only did I learn quite a bit, I also noticed a great many similarities to testing, so I thought I’d talk about the conference from that angle.

The similarities and parallels to testing

In no particular order:
  • The security community seems very keen to promote leaner and more effective ways of improving security, such as getting involved earlier and joining discussions about new projects or approaches. This is exactly the same as with testing in general, and both are frustrated when they are only asked for their opinion very late in projects. Perhaps this is the biggest area we have in common, and maybe we could share our experiences and lessons with each other. Perhaps we can also be allies on this: for example, where a tester has managed to get involved in a project early, we could advocate for involving security professionals earlier too, and vice versa.
  • Carolyn Yates gave a great talk on the bowtie method, which is very applicable to testing too and reminds me of how we use tools like mind maps to visualise our work effectively. She also made the point that “not all tools need be programs, sometimes they can be visual aids”, which I think we can certainly appreciate as testers too.
  • There was a great talk by Collette Weston about echo chambers - in particular the difficulty for women and other industry minorities to break into the InfoSec industry and community, and what can be done about it. I think we can all agree this is an issue across the software industry as a whole, and while I feel testing is a little better in this regard, it’s definitely not as good as it could be. This talk also prompted a great discussion about how some companies had started trying to diversify their security personnel (including hiring people with biology degrees), and I know in testing it’s well appreciated that we benefit greatly from our diverse backgrounds.
  • In two separate talks by Ian Trump and Charl Van Der Walt there were discussions of what the future might hold and how artificial intelligence and the advance of technology will shape the industry and the work of security professionals. It seems obvious, but I found it quite reassuring to know that it’s not just testers who are wondering how these advances will affect their jobs. There was also discussion of the effects of automation, and whether people were really considering the loss of jobs and how humans interact with and use automation. This echoes the concerns I’ve heard many testers raise and reminds me of my old blog post on this subject.
  • Naturally there were several more technical talks focusing on particular types of hacks, attacks and penetration tests, including discussions of how to defend against them. The mindsets and techniques that security professionals use to find and report these exploits are very much the same as how testers find and report bugs. I think we have a lot in common on this subject (as, well, it is a form of testing), and I think we could do more to engage with the security community and share our experience - just as much as we can learn a lot from them! All the things we talk about in testing were present here, such as demonstrating the most damaging problem an exploit could lead to in order to justify and explain to companies why they need to fix it. I believe as testers we can also become more effective at general testing by learning about these exploits - both by helping raise security issues earlier and by giving us more ideas for other kinds of testing. Perhaps we could share our knowledge, approaches and experience of exploratory testing with them.
  • Another common theme of the conference was that security is not really a technological problem, but a people problem. This is of course not a new revelation; there are many historical quotes and philosophical discussions, for example, “a bad workman blames his tools”, “pick the right tool for the job” and so on. However, as humans we clearly find it difficult to keep these lessons in mind, and our biases make it easy to miss the assumptions we are making about our problems. As testers I feel we should be very aware of this too; many of the challenges we face have nothing to do with the particular technologies involved. Most software bugs are caused by humans, with the machines simply doing as they are told, and the same applies to security exploits.

The differences

Of course, for all our similarities, there are also differences:
  • As part of the discussion about diversity in the industry from Collette’s talk, there was also discussion about autism and a general belief that many “black hats” may have struggled at school, dropped out and only picked up hacking because they had no other options. It was pointed out that because many companies require specific levels of education (such as GCSEs), there was no way for these individuals to become security professionals. Why is this different to testing? Well, in the testing industry I don’t feel we have such a specific concern with autism (though it will certainly affect the testing industry and community too!); I feel our concerns are more about raising awareness of testing as a possible career in the first place.
  • I think this one is probably obvious, but the security community is more naturally technically focused and capable. In tandem with the above point, most people seem to join the industry because of their interest in it and in technology. As such, while there is diversity, I get the general impression that the range of backgrounds is much narrower than the very broad backgrounds of testers. As a result I feel testers tend to be less technically focused, with a more balanced spread of soft skills to go with the technical ones. That said, the conference did feature plenty of talks about soft skills, although probably with a different balance compared to some testing conferences.
  • I feel that the security community is even more aware of justifying their testing and explaining the effects of the exploits they find than the average tester, because of both the ethical and legal sensitivities of the testing and its very technical nature. Not only must they be very careful not to break laws or damage a company, they also have to be very good at explaining why they think something is a significant problem and helping the company fix it. I think as testers we have a lot to learn from this - not because we don’t do a good job of it, but because our testing is a lot safer and doesn’t always require as much explanation. However, I think this will change over time as we get more involved with DevOps, challenging requirements and testing in production.

You should go too!

All in all, it was a great conference; I took a lot away and enjoyed myself. It was very reassuring to see so many similarities to testing and ways in which we could work together. I hope to go to conferences in other areas of software development, like Programming, Project Management, UX, Business Analysis, Operations and Systems Administration, and continue learning from them. Maybe even begin talking about testing at their meetups and conferences and see more sharing across our disciplines.

Thursday, 6 July 2017

Some quick bites of performance testing

Introduction

I’ve recently been attempting to write some blog posts about my recent experiences with performance testing; each time I try, they end up very long-winded and feel like a mouthful to read. So this is an attempt to provide some quick, summarised points, mistakes, lessons and general tips that I’ve learned or relearned over the past 2 months.

Where to start?

Why performance test? - If you’ve been asked to perform some performance testing, find out why. If you’re thinking you might need to, think about why that is. You need this context in order to make sure the performance testing is useful to the other members of your team.
What do you mean by “performance test”? - The phrase “performance testing” encompasses a lot of different kinds of tests and information that you could find out. Do you want to try load tests, stress tests, spike tests, soak tests? Are you looking to test one component, an integration or a whole system? Be aware that people don’t all understand and use these words and phrases in the same way. Someone might ask you to perform “some load tests”, but they don’t mean only load tests; they may really mean “can you explore the performance of the product”. They may not ask for stress tests, and they may not be thinking about planning capacity for the future, but that doesn’t mean you shouldn’t raise it as an area to explore. People may be concerned about one specific component when actually they hadn’t thought of load testing an integration point too.
What numbers are we using? - Are there NFRs (non-functional requirements) or functional requirements? Is the application already running in live? If so, what does the current load and performance look like? If it’s a new application, what do we expect the load and performance to be? What would we expect it to be next year? In 2 years? What would be “too much” load? What would a spike realistically look like? Are there peaks and troughs in the load profiles?
You might not know everything right now, so start with the basics - Start with basic functionality smoke tests, move on to small load tests that check the acceptance criteria then start exploring around that as you learn more about the system.
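To make that concrete, here is a rough Python sketch of what “start with the basics” might look like: one smoke request first, then a small concurrent burst checked against a hypothetical acceptance criterion. The URL, numbers and threshold are all made up, and a real run would use a proper load tool; this is just the shape of the idea.

```python
# A rough sketch: a single smoke request, then a small concurrent burst,
# checked against a hypothetical criterion (95% of responses under 500 ms).
# The URL and all numbers are invented for illustration.
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

import requests

BASE_URL = "https://test-env.example.com/api/health"  # hypothetical endpoint

def timed_request():
    start = time.monotonic()
    response = requests.get(BASE_URL, timeout=10)
    return response.status_code, time.monotonic() - start

# 1. Smoke test: does a single request even work?
status, elapsed = timed_request()
print(f"Smoke test: HTTP {status} in {elapsed:.3f}s")

# 2. Small load test: 20 concurrent requests, nothing scary.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(lambda _: timed_request(), range(20)))

durations = sorted(d for _, d in results)
p95 = durations[int(len(durations) * 0.95) - 1]
print(f"Median: {statistics.median(durations):.3f}s, ~95th percentile: {p95:.3f}s")
print("Within acceptance criteria" if p95 < 0.5 else "Slower than hoped - time to explore")
```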
You can’t performance test something if you don’t understand how it works - The application might look very fast if you’re only sending bad data. How do you know you’re sending the correct data? What happens when you send bad data? How do you know what good or bad looks like?
Isolated, stable, “like-live” environment - The tests should be run against something that you control, anything could affect performance and you want to control as many variables as possible. You want the environment to be as close to production hardware and configuration as possible so you can rule out issues like the hardware not being powerful enough.
Understand the infrastructure and architecture of your tools and environment - Consider where you are going to run the tests from, what is going to generate the load? Think about where the environment is in respect to that. Try to make sure the load generator is on the same network and isn’t throttled or blocked by proxies or load balancers accidentally (unless you’re testing them). Make sure your tests aren’t affected by the performance of the server generating requests or the bandwidth of the connection.
It’s ok to start with an environment that’s not like-live, such as a local environment, to help design your tests - This ties in with understanding how it works: you can design the tests against a smaller environment while you wait for a larger environment to be built. This is useful when you’re trying to figure out how to get API requests working and what to check for in the responses, or to tweak the timings of particular scenarios where you only need to run 1 or 2 tests.
Stuff you might need that might take time to get sorted (so get the ball rolling!):
  • Access to a server to run the load generator from.
  • Access to monitoring of the servers and application logs.
  • Access to any databases.
  • An ability to restart servers and reset databases between test runs.
  • Access to an environment you can start exploring right now.
  • Documentation of how the system works.

Mistakes & Lessons

Completely random test data may not be very useful - If the test data is completely random, it means you are running a different test on every run. You can use weighted distributions instead - this is where you give a probability that a particular result will occur. For example, 90% of the time, it will pick one value, but randomly it will pick another value 10% of the time. Why is this useful? It gives you control over the randomness and lets you explore different patterns that might affect performance.
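As an illustration, here is a minimal Python sketch of weighted test data. The fields and ratios are entirely made up; the point is that the distribution is controlled and repeatable rather than completely random, and most load tools can replay a file like this as a feeder.

```python
# A minimal sketch of weighted test data, assuming a made-up "order" payload:
# 90% of generated orders use a standard product, 10% a bulky one, and payment
# methods follow a rough 70/25/5 split. The fields and ratios are illustrative.
import random
import json

def generate_order():
    product = random.choices(["standard-widget", "bulky-widget"], weights=[90, 10])[0]
    payment = random.choices(["card", "paypal", "invoice"], weights=[70, 25, 5])[0]
    quantity = random.choices([1, 5, 50], weights=[80, 15, 5])[0]
    return {"product": product, "payment": payment, "quantity": quantity}

# Write a feeder file the load tool can replay, so every run uses the same data.
random.seed(42)  # fixed seed: the "random" data is repeatable between runs
with open("orders.json", "w") as f:
    json.dump([generate_order() for _ in range(1000)], f, indent=2)
```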
If you’re just designing the tests and want to try them out with a small load on a dev environment, don’t guess the numbers - I did this and accidentally brought down an environment being used for UAT (user acceptance testing). I had picked a number off the top of my head and assumed it was safe; well, it turned out it wasn’t. Always discuss with other people what numbers to try and warn people before you run any test, even if you think you’re not going to stress the environment - don’t just rely on guesswork.
Not all of the data needs to be automatically generated - Be pragmatic and try to understand which parts of the data matter for performance. There may be some parts of the data that have no effect on performance. It’s not always possible to know which parts, but start with the pieces of data you would expect to have an effect and gradually include other parts later. Initially I started writing some very complicated automation for generating a variety of data before I realised that most of it could be identical, as it wasn’t expected to affect performance.
In tandem with the above point, consider how to discover information quickly - You may spend a long time writing a very complicated performance test that covers all kinds of data and scenarios, only to find that the application or the environment hasn’t been configured correctly. Simpler, quicker tests can be run earlier to discover bits of information about whether you are ready to performance test or discover very obvious issues. Simply rapidly sending API requests manually through Postman may stress test the server and that can be done in a few minutes!
Consider as many user stories as possible - Anna Baik shared this one in the testing community slack that I would never have thought of - Health Check endpoints. One of the users of the system is your internal monitoring which may regularly hit a health check endpoint. This can affect performance! What other user stories are there that you may not have considered?
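For example, that monitoring “user” can be simulated with something as simple as a background poller running alongside the main load test. This is only a hypothetical sketch (the endpoint and interval are invented), but it shows how easy that forgotten user story is to include.

```python
# A small sketch of the "forgotten user": a background thread polling a
# hypothetical health check endpoint every few seconds, the way monitoring
# would, so it runs alongside whatever main load test you execute.
import threading
import requests

HEALTH_URL = "https://test-env.example.com/health"  # hypothetical endpoint
stop_polling = threading.Event()

def monitoring_user(interval_seconds=5):
    while not stop_polling.is_set():
        try:
            response = requests.get(HEALTH_URL, timeout=5)
            print(f"health check: HTTP {response.status_code}")
        except requests.RequestException as exc:
            print(f"health check failed: {exc}")
        stop_polling.wait(interval_seconds)  # sleep, but wake early if stopped

poller = threading.Thread(target=monitoring_user, daemon=True)
poller.start()
# ... run the main load test here ...
# stop_polling.set() when the run is finished.
```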

General tips

Find a way to monitor your performance tests live as they run! - If you’re using a tool such as Gatling, you can configure real-time monitoring. This is extremely useful as you can quickly tell how the test is going and stop it early if it’s already killed the application. You can also do this through monitoring the application through tools such as AppDynamics or using any tools provided by cloud service providers such as AWS CloudWatch. The more information you can have to observe how the application and its hardware behaves, the better.
Treat performance tests as exploratory tests - Expect to run lots of tests and to keep changing and tweaking the tests. Be prepared to explore different questions and curiosities. Treat your first runs of your tests as opportunities to check your tools and tests actually work how you expect. Try to avoid people investing too much in the result of the first load test - you will learn a lot from it, but it won’t tell you “good to ship” first time.
No seriously, it will be more than just “one test” - Imagine if someone asked you to verify some functionality in just one test? Do you really believe you will not make any mistakes and the product will perform as expected first time? If you have that much faith, why run the performance test? If you’ve decided there is value in performance testing, then surely you’ve accepted that you will take the time to run as many tests as it takes to have some better confidence and reliable information?
Errors might be problems with your tests, not just problems with the application - Just as with automated tests, expect there to be mistakes and errors with your tests. Don’t jump too quickly to conclusions about why errors might be occurring.
Separate generating test data from your test execution - Consider what you are performance testing: does the performance test need to create data before it does something else with it? Or is that unrealistic, and does the data need to pre-exist? In my case I needed to create 1000s of user accounts, but the application wasn’t intended to handle 1000s of user accounts being created all at once. So I created a separate set of automation to build the data prior to the performance test run.
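As a sketch of what that separation might look like, here is a hypothetical Python script that builds the accounts slowly, ahead of the run, and writes them to a file the load test can read. The endpoint, payload and pacing are all invented; the point is that the data creation is deliberately gentle and entirely separate from the load test itself.

```python
# A rough sketch of creating test data ahead of the run, separate from the
# load test. Endpoint, payload and pacing are hypothetical.
import time
import requests

SIGNUP_URL = "https://test-env.example.com/api/users"  # hypothetical endpoint

def create_accounts(count, pause_seconds=0.2):
    created = []
    for i in range(count):
        payload = {"username": f"perf_user_{i:05d}", "password": "Perf-Test-Pass-1"}
        response = requests.post(SIGNUP_URL, json=payload, timeout=10)
        response.raise_for_status()  # fail fast if the environment isn't ready
        created.append(payload["username"])
        time.sleep(pause_seconds)  # gentle pacing - this isn't the load test
    return created

if __name__ == "__main__":
    users = create_accounts(5000)
    with open("users.txt", "w") as f:
        f.write("\n".join(users))
    print(f"Created {len(users)} accounts for the performance run")
```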
Gradually introduce variables such as different users or different loads - For example, if you have two different types of user - an admin and a customer - try the customer load test on its own and the admin load test on its own before running them together. If there is a significant problem with one or the other, you can more easily identify it. In other words, try to limit how many tests you run at once and how many variables you play with at once.
When you run a stress test, measure throughput - This lets you measure how much data you are sending and helps you figure out whether your stress test is reaching the limits of your machine, the network or the application you're testing.
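A rough illustration of that idea: count the requests and bytes over the duration of the run and turn them into requests per second and megabytes per second. Tools like Gatling report this for you; the sketch below (with an invented URL and numbers) just shows what is being measured.

```python
# A minimal sketch of measuring throughput during a stress test: count
# requests and bytes over the duration of the run. URL and concurrency
# are made up; real load tools report this for you.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

TARGET_URL = "https://test-env.example.com/api/search?q=perf"  # hypothetical
REQUESTS = 500

def fetch(_):
    response = requests.get(TARGET_URL, timeout=30)
    return len(response.content)

start = time.monotonic()
with ThreadPoolExecutor(max_workers=50) as pool:
    sizes = list(pool.map(fetch, range(REQUESTS)))
duration = time.monotonic() - start

print(f"{REQUESTS / duration:.1f} requests/sec")
print(f"{sum(sizes) / duration / 1_000_000:.2f} MB/sec received")
```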

Test ideas


  • What happens when the load spikes? Does the application ever recover after the spike? How long does it take to recover?
  • What happens if we restart the servers in the middle of a load test?
  • How efficiently does the application use its hardware? If it’s in a cloud service, would it be expensive to scale?
  • What happens when we run a soak test (a load test that runs for a long time with sustained load, e.g. 12 hours or 2 days)?
  • What happens when we run with a tiny amount of load?
  • What happens when we send bad requests?
  • What do we believe to be the riskiest areas and how can we assess them?
  • What do we believe to be the safest areas and how can we assess them?

Wednesday, 22 March 2017

NWEWT #2 Growing Testers

Introduction

Last weekend I attended the second edition of the North West Exploratory Workshop on Testing (NWEWT). If you don’t know what an exploratory workshop is or want to know more about NWEWT, read my previous blog post here:

Attendees

The attendees were as follows. The content of this blog post should be attributed to their input as much as mine; the thoughts I have here were brought together through collaboration:
Ady Stokes
Ash Winter
Callum Hough
Claire Reckless
Dan Ashby
Duncan Nisbet
Emma Preston
Gwen Diagram
Jit Gosai
Marc Muller
Vernon Richards
Vishnu Priya

Growing testers

This year’s theme was “growing testers”, looking to spark discussion on our own experiences of growing as testers and how we help other testers grow. We had a mix of new faces, some speaking in public for the first time, and experienced people, which led to a nice mix of discussions exploring the topic from one end to the other.

I’m not going to go through all of the talks and everything that was discussed here; I’d just like to quickly blog about the discussions that really struck a chord with me and where my thoughts are on the subject.

Main takeaways

The major takeaway for me was Ash Winter’s ‘wheel of testing’. I really liked this idea, and I think it struck a chord with me because I’m relatively new to managing testers and trying to guide them in their career progression. The more ideas I can try, explore and make my own, the better, I feel.
Ash explained that the wheel came from his dislike of competency frameworks and the typical talk of growth being a linear path, when really it’s quite a chaotic and winding one. So he came up with a wheel to visualise the different areas a tester could focus on to improve. I’ll let Ash publish and explain his wheel himself, but effectively it contained different core areas of testing, with specialised or more focused subjects going outwards. The idea was not to tick off particular areas or push people down any one path, but to demonstrate what paths are available and engage testers in a discussion.

I also liked Marc Muller’s model which took 5 areas of testing skills and mapped them onto a radar chart. He asked testers to score themselves from 0 to 10 in each area and used this to get a picture of his team. I liked the simple visual nature of this chart and just as in Ash’s model it’s a useful tool to open up the conversation with testers on what the different skills mean to them and what they would like to improve.
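For anyone wanting to try something similar, here is a quick matplotlib sketch of that kind of radar chart, using five invented skill areas and example self-scores out of 10 - purely illustrative, not Marc’s actual model.

```python
# A quick sketch of a skills radar chart: five made-up skill areas and
# example 0-10 self-scores, plotted on polar axes.
import numpy as np
import matplotlib.pyplot as plt

skills = ["Exploration", "Automation", "Domain knowledge", "Communication", "Test design"]
scores = [7, 4, 6, 8, 5]  # one tester's 0-10 self-assessment (illustrative)

# Close the polygon by repeating the first point at the end.
angles = np.linspace(0, 2 * np.pi, len(skills), endpoint=False).tolist()
angles += angles[:1]
scores_closed = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, scores_closed)
ax.fill(angles, scores_closed, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(skills)
ax.set_ylim(0, 10)
plt.show()
```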

Several people gave experience reports of what it was like for them to grow as a tester, and I recognised so many familiar aspects of my own career. It seems things have still not changed in that respect; people are still falling into testing and accidentally happening across the testing community.

Naturally the topic of growing testers eventually led to the topic of “the future of testers”. While we didn’t go too far into this, as it’s a huge topic in itself, it was clear there was a fairly large difference in opinion and my takeaway from this is that I’d love to get into it more!

My talk

My interpretation of growing testers had two aspects to it: one was an introspective look at how I’ve grown as a tester and how I manage and attempt to help testers within my team grow; the other was how to improve the growth of testers in the industry. I didn’t feel I was making any interesting points on the former, so in hindsight I wish I had dropped that part, but I’ve realised I’m quite interested in and curious about the latter.

I argued that to help grow more and better testers in the software industry, we (society in general, not just the testing community) could be doing more to improve awareness about testing through education. I referred to the example of Scratch which is used to educate children on programming at school - could we be doing something similar for testing or somehow bringing elements of testing into those exercises?

I believe we can, and I believe we could be improving how software development in general is taught (or not taught!) throughout education. I don’t mean testing degrees or testing qualifications though. How testing could be brought into education, and how people could be made more aware of it, could take many forms:
  • The obvious option being degrees or qualifications like GCSEs.
  • Supplemental modules or specialisms within existing computer science or software development or engineering courses.
  • A change in the way programming is taught in existing modules or courses. Rather than focusing on pure coding problems, could we focus on the delivery of software? We don’t have to call it “testing”, but we could be helping programmers become more used to the wider challenges of software development and better advocates of testing. If a programmer recognises the need for a critical eye on their work, even if they don’t call that “testing”, aren’t they more likely to ask for it?
  • A better promoted option in careers discussions. Career discussions at university are generally quite poor, in my experience from 2010. We all wondered what the hell we could be other than programmers, but had no idea. Simply having someone talk to us about the different roles in the real world would have made a difference.
  • A one off talk from an experienced tester, maybe tied in with the career discussions.
  • Including assignments where programmers build software that other students will test and project manage. Maybe not very practical, but maybe there is a way to make this work. The best way to demonstrate the effectiveness of testing is to actually try to produce software for somebody else.
  • Introducing ideas and techniques such as pairing, mobbing, code reviews, TDD, BDD, continuous delivery, logging and monitoring. These are not about testing, but can be discussed quite easily in the context of testability, and through these subjects we could discuss testing. I also feel these ideas can be introduced even at a young age, at least to get people used to the people skills and communication challenges. Making people more aware of this before they enter work would help, I think.
  • Sandwich courses, where students take a year out from their course to work in industry. If I had understood testing better I think I would definitely have taken this option because testing is a great way to learn about development just as much as it’s a career in itself.

After this conference I’m pretty damn motivated to conduct more research about how software development in general is being taught through the various levels of education. I’m well aware that it may be a large time sink and require some commitment but I’ve thought about pursuing this avenue for a while now. Having spent a majority of my life in education, I really enjoyed it and I believe it can be much better and much more inspiring. 

Through the Q&A session we had after my talk, it felt like there were mixed feelings on this subject. I think it’s fair to say some people felt that education isn't the best place to learn about testing, while others agreed with the sentiment around Scratch as a way to perhaps find more testers and spread awareness. I definitely feel there is more to research and discuss on this subject, and there is something in helping academia improve.

The other side of the interview table

Introduction

I’ve recently been in the privileged position of being on the other side of the interview table for several interviews over the past year. I’ve decided I’d like to share my experience and get some ideas written down.

Reading CVs

So before an interview, you usually need to review CVs and pick the ones that you feel warrant pursuing. Why do we pick out CVs? Because interviewing is a costly process: it takes time and focus away from our daily work, particularly in my case at a mid-sized company where we don’t tend to interview on a regular basis. We simply don’t have the time to interview everyone whose CV we receive, so we are forced to filter them down.
My general approach for this was the following:
  • Read through the CV thoroughly  - everything on the CV is a small clue about the person.
  • I looked first for some sign of personality in the CV, something that told me why this person was looking for work and what motivates them.
  • I noted any skill that I thought may be relevant, not just programming skills. For example, skills with Business Analysis tools or experience on a Support team. Anything that could be valuable and bring something different to my test team.
  • Depending on the role we were looking for, I would review the years of experience.
  • I would make a note of any certifications. I personally don’t put a great amount of value on ISTQB certifications, but I considered them just the same as any other training a candidate might mention.
  • I always looked for some mention that the person attended meetups, conferences, workshops or is somewhat actively engaged with the testing community. While this doesn’t rule people out (as it’s pretty rare that I see it on CVs), when people do mention it, it makes them stand out.
  • I would carefully analyse the wording chosen, especially when talking about skills or previous employment. While I wouldn’t necessarily reject a CV because of a typo, it’s pretty embarrassing when people have them in sentences such as “I have a keen eye for qaulity”.

My experience so far has mostly not included the initial CV collation and filtering; I have only done this once or twice, with sets of 10 or 12 CVs. Perhaps if I was filtering a stack of 100 CVs, I wouldn’t be as thorough reading them and might be more arbitrary about the criteria I reject them on.

My general experience with this part of interviewing is there is not much right and wrong here. Only you can decide what a “good” CV is and what matches your criteria for the role. I have my own personal preferences for people that add a little personality to their CV, with opinions and motivations but other people may value lists of skills or abilities more highly.

I will say though that many, many people seem to have very, very similar CVs, which makes it hard to pick a few to take forward to interview. This is why you may end up using pretty arbitrary rules for filtering and it also biases you towards those CVs that look a bit different. As an interviewee you can use this to your advantage, but as an interviewer I feel you need to be careful not to let this bias lead you too much. Sometimes a dull CV hides a gem of a candidate!

Preparing for the interview

Who is the person? What do I want to find out?
If it’s been quite a while, or if I’ve been quite busy with other work between reading the CV the first time and the date of the interview, I will start by refreshing my memory of the CV. I will try to think about what I like about this person from the CV that I want to see more of, and try to think of questions that will give them the opportunity to impress in these areas. Equally, I will also look for areas that I dislike and try to think of questions that explore those. Some examples I’ve had in the past:
  • A tester mentioned working closely with developers and managing the relationships with them - I’ve asked them to expand on that, what’s worked well, what hasn’t etc.
  • Some CVs have simply listed skills without description of what their level of experience or confidence with them is, or how they’ve used them. So I’ve targeted questions on those skills to try and explore where they really are with them. “I know Java” would usually prompt questions from me about how they’ve used it and how confident they are with it, even specific questions regarding it.
  • Some CVs have also described their previous testing experience mainly in terms of “producing Test Cases and Test Plans according to the specifications”, which prompts me to probe quite a bit about the candidate’s feelings on exploratory testing and how they would handle an environment without many written test cases.
Because everyone’s CV is different, I end up with a different set of questions each time. Currently I feel this is a little inadequate, because I end up with inconsistent or biased opinions on the candidates where I’ve asked better questions of some than others.
Interview format
Something that I haven’t had much chance to experiment with yet is scripting or planning the interview format. But I feel there are several variables that could change and that I could experiment with:
  • How many people are going to be involved in the interview?
  • How long will the interview be?
  • Will we include a technical test?
  • How many interviews will we conduct with each candidate (e.g. 2nd stage or 3rd stage interviews)?
  • Do we ask different questions or the same questions to each candidate? Do we stick to a script?
  • Do we ask the candidate to perform homework or a task before the interview?
  • Do we ask the candidate to conduct a task (such as a presentation) during the interview?
I’ve been in various interviews with a mix of the above and I’m undecided on what does and doesn’t work. However it’s worth considering and planning these things before the candidate walks through the door! I also feel I can improve how I learn from each interview and compare them. I would like to spend more time in future making sure the experience with each candidate is more consistent and keep better notes on them. In other words I feel I need to plan better how I am going to make a decision on which candidate to choose, rather than leaving it to gut feeling and all of its biases.

The interview itself

Think about your performance
Regardless of whether you are the interviewer or the interviewee, my number one rule for interviews is to think of them as a two-way conversation. Both parties are interviewing each other to figure out if they like each other. As the interviewer I feel it’s important to respect this even if the candidate doesn’t, and to give them plenty of opportunities to ask questions. Not only that, but I try to keep discussions as honest, informal and friendly as possible. The more it can feel like chatting casually in a cafe or a bar, the better, because both interviewer and interviewee are going to think of better questions and answers.

With this in mind, I try to be careful not to assault the candidate with lots of questions one after another. It’s not easy to describe when it makes sense to hold off and give the candidate space; it depends on several factors:
  • The personalities of everyone in the interview.
  • The mental state of the candidate.
  • How difficult the questions being asked are.
  • How the conversation has been going (i.e. sometimes the flow is so natural that we may be chatting fairly casually and rattling through lots of questions and that’s ok).
  • How much time we have.

I’ve noticed that people very rarely ask questions after the interview, despite being told they can. While I still encourage this, I’ve taken it to mean it’s very important that the interviewee gets the chance to ask as much as they can during the interview. If possible, I try to see what I can learn from the questions they ask, not just from the answers they give to mine.

Multiple interviewers
All of the interviews I’ve conducted have been with other interviewers in the room, asking questions. The worst thing that can happen is that you trip over each other, interrupting or awkwardly looking at each other to see who asks the next question. This is why preparing the interview format and discussing a script or questions beforehand is important to me. You get so little time with candidates that you have to spend every minute, every second, very carefully. For this reason I absolutely hate it when an interviewer pursues a line of questioning that has already been covered or that I don’t consider very useful.

What would a script look like? Would it be a set of strict questions, one after another, that we would follow to the letter? No, of course not; as I said earlier, it’s important to keep the interview casual and informal, letting it flow with the candidate and adapting all of the time. I would like to try scripts in future where we plan out what kinds of questions and discussions we would like to have and assign each interviewer to “lead” each one. So someone would handle the introduction and outro and facilitate the interview, another would ask deeper questions on a topic, and so on. I would still allow each interviewer to interrupt or go off script, but the key is to try and make sure we get the most out of the interview while keeping it natural.

It’s all about opportunities, not tests
If you are thinking of including some kind of task, examination or test of the candidate to assess their skills, bear this in mind - do not look for failure. What do I mean by this? Interviews are very compromised things: there is a lot of pressure involved and people don’t perform anywhere near how they do when they work normally. It is rarely an accurate representation of what the person is like to work with. With this in mind, I try to view questions and tests as opportunities for the candidate to impress me. If the candidate misses or messes up these opportunities, I try to keep in mind that this may be due to the unusual pressure. I feel if I view it as a series of opportunities to impress, then I avoid placing too much emphasis on particular parts of the interview and look for more well-rounded candidates. It also means people have a chance to recover, where they may mess up the start of an interview, but relax and impress later. Or they may impress in their preparation but fluff up their performance because they are not comfortable with interviews. I’m also open to my own questions being terrible and the candidate impressing me in a way that I didn’t expect, on something I didn’t ask them about.

Life is continuous learning and lessons
Even if you don’t hire them, make sure to always give feedback to the candidate, and if there are areas they didn’t know or understand, take the opportunity to teach them if possible. You may not be hiring them, but it is impossible for candidates to improve if they never receive feedback. I used to find it very frustrating when no-one ever told me why I didn’t get a job; even if I had done nothing wrong, it would have been helpful for my confidence to know the reasons.

Interviewing testers

So what about testers? What do we talk about and discuss, what is important for testing? My first reference for this is Dan Ashby’s excellent interview mindmap found here:
‘Nuff said! But some additional thoughts for me:
  • Discussing definitions of “testing” and why people like testing is important, because everyone has different ideas and understanding. This is as much about making the candidate feel comfortable with what they are applying for as it is about establishing that they are the right fit for us.
  • Discussing “agile” or “devops” is also an opportunity to make clear how we work. I’m not looking for people to rattle off dictionary definitions of these words; I want to understand what they think they are and how they adapt to topics that affect testing. It’s also a chance for me to explain what I believe they are and how the company has interpreted or implemented those ideas. The discussion and understanding is the important part, not testing the candidate on definitions.
  • In terms of technical tests or exams, I’m very skeptical. While there may be certain contexts where you are looking to hire testers with programming experience, I personally don’t view programming as a key testing skill. However, if I could design a technical test that gives a good picture of how capable a candidate is of learning technical subjects, I would try it! I value testers with the right attitude, the right approach and the ability to learn a great deal; already knowing programming is useful but not critical. The critical ability is the capacity to learn. I’ve worked with and hired great testers who knew little about programming and contributed a lot of value, if not more value than those who knew programming.
  • I’ve experimented with tests of candidates’ testing abilities and seen different ideas, but again I’m unconvinced how much you can judge from them. You can try to assess them on bugs they find in an application, or ask them to explore their lateral thinking skills with a task such as mind-mapping a pencil. I’ve seen some interesting results from these tasks, but I’m concerned that they bias us towards candidates who are great on the spot. I suspect there are great testers who don’t perform very well in these situations but are excellent given more time and less pressure.

Summary

  • It’s rare that we are trained how to interview, so it’s worth spending time planning how you are going to learn and improve, because interviewing is an area with particular skills and considerations like any other.
  • I’ve got several areas I’d like to focus on improving or learning more about in future, particularly around planning and facilitating interviews.
  • It’s easy to feel interviews are about asking lots of questions and testing the interviewee, based on your own experience as an interviewee. But the best interviews are the ones you make into a more natural and informal chat.
  • Opportunities to impress, not testing for failure!
  • Make sure to always take the time to give feedback, especially if you don’t hire the candidate. Tell them why you are not hiring them, so they can improve.