Wednesday, 22 March 2017

NWEWT #2 Growing Testers

Introduction

Last weekend I attended the second edition of the North West Exploratory Workshop on Testing (NWEWT). If you don’t know what an exploratory workshop is or want to know more about NWEWT, read my previous blog post here:

Attendees

The attendees were as follows. The content of this blog post should be attributed to their input as much as my own; the thoughts here were brought together through collaboration:
Ady Stokes
Ash Winter
Callum Hough
Claire Reckless
Dan Ashby
Duncan Nisbet
Emma Preston
Gwen Diagram
Jit Gosai
Marc Muller
Vernon Richards
Vishnu Priya

Growing testers

This year’s theme was “growing testers”, looking to spark discussion on our own experiences of growing as testers and on how we help other testers grow. We had a mix of new faces, some of whom were speaking in public for the first time, and experienced people, which led to a nice mix of discussions exploring the topic from one end to the other.

I’m not going to go through all of the talks and everything that was discussed here; I’d just like to blog briefly about the discussions that really struck a chord with me and where my thoughts are on the subject.

Main takeaways

The major takeaway for me was Ash Winter’s ‘wheel of testing’. I really liked this idea, and I think it struck a chord with me because I’m relatively new to managing testers and trying to guide them in their career progression. The more ideas I can try and explore to make my own, the better, I feel.
Ash explained that the wheel came from his dislike of competency frameworks and the typical talk of growth being a linear path, when really it’s quite a chaotic and winding one. So he came up with a wheel to visualise the different areas a tester could focus on to improve. I’ll let Ash publish and explain his wheel himself, but effectively it contained different core areas of testing, with specialised or more focused subjects radiating outwards. The idea was not to tick off particular areas or push people down any one path, but to demonstrate what paths are available and engage testers in a discussion.

I also liked Marc Muller’s model which took 5 areas of testing skills and mapped them onto a radar chart. He asked testers to score themselves from 0 to 10 in each area and used this to get a picture of his team. I liked the simple visual nature of this chart and just as in Ash’s model it’s a useful tool to open up the conversation with testers on what the different skills mean to them and what they would like to improve.
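Marc’s idea is simple enough to sketch in a few lines of code. Here is a minimal sketch in Python; note the five skill area names are my own placeholders, not necessarily Marc’s actual list:

```python
# Testers self-score each skill area from 0 to 10; averaging across
# the team gives the picture Marc described. The area names below are
# invented placeholders for illustration.
from statistics import mean

SKILL_AREAS = ["exploration", "automation", "domain knowledge",
               "communication", "technical understanding"]

def team_picture(scores_by_tester):
    """Average each skill area across the team (scores are 0-10)."""
    return {area: mean(scores[area] for scores in scores_by_tester.values())
            for area in SKILL_AREAS}

team = {
    "alice": {"exploration": 8, "automation": 4, "domain knowledge": 7,
              "communication": 9, "technical understanding": 5},
    "bob":   {"exploration": 6, "automation": 8, "domain knowledge": 5,
              "communication": 7, "technical understanding": 9},
}
print(team_picture(team))
```

Plotting those averages on a radar chart gives the visual Marc showed, but even the raw numbers are enough to open up the conversation.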

Several people gave experience reports of what it was like for them to grow as a tester, and I recognised so many familiar aspects of my own career. It seems things have still not changed in that respect: people are still falling into testing by accident and only later happening across the testing community.

Naturally the topic of growing testers eventually led to the topic of “the future of testers”. While we didn’t go too far into this, as it’s a huge topic in itself, it was clear there was a fairly large difference in opinion and my takeaway from this is that I’d love to get into it more!

My talk

My interpretation of growing testers had two aspects. One was an introspective look at how I’ve grown as a tester and how I manage and attempt to help testers within my team grow. The other was how to improve the growth of testers in the industry. I didn’t feel I was making any interesting points on the former, so in hindsight I wish I had dropped that part. But the latter, I’ve realised, I’m quite interested in and curious about.

I argued that to help grow more and better testers in the software industry, we (society in general, not just the testing community) could be doing more to improve awareness about testing through education. I referred to the example of Scratch which is used to educate children on programming at school - could we be doing something similar for testing or somehow bringing elements of testing into those exercises?

I believe we can, and I believe we could be improving how software development in general is taught (or not taught!) throughout education. I don’t mean testing degrees or testing qualifications though. Bringing testing into education, and making people more aware of it, could take many forms:
  • The obvious option being degrees or qualifications like GCSEs.
  • Supplemental modules or specialisms within existing computer science or software development or engineering courses.
  • A change in the way programming is taught in existing modules or courses. Rather than focusing on pure coding problems, could we be focusing on the delivery of software? We don’t have to call it “testing”, but we could be helping programmers become more used to the wider challenges of software development and better advocates of testing. If a programmer recognises the need for a critical eye on their work, even if they don’t call that “testing”, aren’t they more likely to ask for it?
  • A better promoted option in careers discussions. In my experience from 2010, careers discussions at university are generally quite poor. We all wondered what the hell we could be other than programmers but had no idea. Simply having someone talk to us about the different roles in the real world would have made a difference.
  • A one off talk from an experienced tester, maybe tied in with the career discussions.
  • Including assignments for programmers to build software that other students will test and project manage. Maybe not very practical, but perhaps there is a way to make this work. The best way to demonstrate the effectiveness of testing is to actually try to produce software for somebody else.
  • Introducing ideas and techniques such as pairing, mobbing, code reviews, TDD, BDD, continuous delivery, logging and monitoring. These are not about testing but can be discussed quite easily in the context of testability, and through these subjects we could discuss testing. I also feel these ideas can be introduced even at a young age, at least to get people used to the people skills and communication challenges. Making people more aware of this before they enter work would help, I think.
  • Sandwich courses, where students take a year out from their course to work in industry. If I had understood testing better I think I would definitely have taken this option because testing is a great way to learn about development just as much as it’s a career in itself.

After this conference I’m pretty damn motivated to conduct more research into how software development in general is taught through the various levels of education. I’m well aware that it may be a large time sink and require some commitment, but I’ve thought about pursuing this avenue for a while now. Having spent a majority of my life in education, I really enjoyed it, and I believe it can be much better and much more inspiring.

Through the Q&A session we had after my talk, it felt like there were mixed feelings on this subject. I think it’s fair to say some people felt that education isn’t the best place to learn about testing. Others agreed with the sentiment around Scratch as a way to perhaps find more testers and spread awareness. I definitely feel there is more to research and discuss on this subject, and there is something in helping academia improve.

The other side of the interview table

Introduction

I’ve recently been in the privileged position of being on the other side of the interview table for several interviews over the past year. I’ve decided I’d like to share my experience and get some ideas written down.

Reading CVs

So before an interview, you usually need to review CVs and pick the ones that you feel warrant pursuing. Why do we pick out CVs? Because interviewing is a costly process; it takes time and focus away from our daily work, particularly in my case at a mid-sized company where we don’t tend to interview on a regular basis. We simply don’t have the time to interview everyone who sends us a CV, so we are forced to filter them down.
My general approach for this was the following:
  • Read through the CV thoroughly - everything on the CV is a small clue about the person.
  • I looked first for some sign of personality in the CV, something that told me why this person was looking for work and what motivates them to work.
  • I noted any skill that I thought may be relevant, not just programming skills. For example, skills with Business Analysis tools or experience on a Support team. Anything that could be valuable and bring something different to my test team.
  • Depending on the role we were looking for, I would review the years of experience.
  • I would make a note of any certifications. I personally don’t put a great amount of value on ISTQB certifications, but I considered them just the same as any training a candidate might mention.
  • I always looked for some mention that the person attended meetups, conferences, workshops or is somewhat actively engaged with the testing community. While this doesn’t rule people out (as it’s pretty rare that I see it on CVs), when people do mention it, it makes them stand out.
  • I would carefully analyse the wording chosen, especially when talking about skills or previous employment. While I wouldn’t necessarily reject a CV because of a typo, it’s pretty embarrassing when people have them in sentences such as “I have a keen eye for qaulity”.

My experience so far hasn’t included large-scale CV collation and filtering; I have done this once or twice with sets of 10 or 12 CVs. If I were filtering a stack of 100 CVs, I probably wouldn’t be as thorough reading them and might be more arbitrary about the criteria I reject them on.

My general experience with this part of interviewing is that there is not much right and wrong here. Only you can decide what a “good” CV is and what matches your criteria for the role. I have my own personal preference for people who add a little personality to their CV, with opinions and motivations, but other people may value lists of skills or abilities more highly.

I will say though that many, many people seem to have very, very similar CVs, which makes it hard to pick a few to take forward to interview. This is why you may end up using pretty arbitrary rules for filtering and it also biases you towards those CVs that look a bit different. As an interviewee you can use this to your advantage, but as an interviewer I feel you need to be careful not to let this bias lead you too much. Sometimes a dull CV hides a gem of a candidate!

Preparing for the interview

Who is the person? What do I want to find out?
If it’s been quite a while, or if I’ve been quite busy with other work between reading the CV the first time and the date of the interview, I will start by refreshing my memory of the CV. I will try to think about what I like about this person from the CV that I want to see more of, and think of questions that will give them the opportunity to impress in those areas. Equally, I will look for areas that I dislike and try to think of questions that explore these. Some examples I’ve had in the past:
  • A tester mentioned working closely with developers and managing the relationships with them - I’ve asked them to expand on that, what’s worked well, what hasn’t etc.
  • Some CVs have simply listed skills without description of what their level of experience or confidence with them is, or how they’ve used them. So I’ve targeted questions on those skills to try and explore where they really are with them. “I know Java” would usually prompt questions from me about how they’ve used it and how confident they are with it, even specific questions regarding it.
  • Some CVs have also described their previous testing experience mainly in terms of “producing Test Cases and Test Plans according to the specifications”, which prompts me to probe quite a bit about the candidate’s feelings on exploratory testing and how they would handle an environment without many written test cases.
Because everyone’s CV is different, I end up with a different set of questions each time. Currently I feel this is a little inadequate, because I end up with inconsistent or biased opinions on the candidates, having asked better questions of some than of others.
Interview format
Something that I’ve not had much chance to experiment with yet is scripting or planning the interview format. But I feel there are several variables that could change and that I could experiment with:
  • How many people are going to be involved in the interview?
  • How long will the interview be?
  • Will we include a technical test?
  • How many interviews will we conduct with each candidate (e.g. 2nd stage or 3rd stage interviews)?
  • Do we ask different questions or the same questions to each candidate? Do we stick to a script?
  • Do we ask the candidate to perform homework or a task before the interview?
  • Do we ask the candidate to conduct a task (such as a presentation) during the interview?
I’ve been in various interviews with a mix of the above and I’m undecided on what does and doesn’t work. However, it’s worth considering and planning these things before the candidate walks through the door! I also feel I can improve how I learn from each interview and compare them. I would like to spend more time in future making sure the experience with each candidate is more consistent, and to keep better notes on them. In other words, I feel I need to plan better how I am going to decide which candidate to choose, rather than leaving it to gut feeling and all of its biases.

The interview itself

Think about your performance
Regardless of whether you are the interviewer or the interviewee, my number one rule is to think of interviews as a two-way conversation. Both parties are interviewing each other to figure out if they like each other. As the interviewer, I feel it’s important to respect this even if the candidate doesn’t, and to give them plenty of opportunities to ask questions. Not only that, but I try to keep discussions as honest, informal and friendly as possible. The more it can feel like chatting casually in a cafe or a bar, the better, because both interviewer and interviewee are going to think of better questions and answers.

With this in mind, I try to be careful not to assault the candidate with lots of questions one after another. It’s not easy to describe when it makes sense to hold off and give the candidate space; it depends on several factors:
  • The personalities of everyone in the interview.
  • The mental state of the candidate.
  • How difficult the questions being asked are.
  • How the conversation has been going (i.e. sometimes the flow is so natural that we may be chatting fairly casually and rattling through lots of questions and that’s ok).
  • How much time we have.

I’ve noticed that people very rarely ask questions after the interview, despite being told they can. While I still encourage this, I’ve taken it to mean it’s very important that the interviewee gets the chance to ask as much as they can during the interview. If possible, I try to see if I can learn from their questions rather than from the answers they have for mine.

Multiple interviewers
All of the interviews I’ve conducted have been with other interviewers in the room, asking questions. The worst thing that can happen is when you trip over each other, interrupting or awkwardly looking at each other to see who asks the next question. This is why preparing the interview format and discussing a script or questions beforehand is important to me. You get so little time with candidates that you have to spend every minute, every second, very carefully. For this reason, I absolutely hate it when an interviewer pursues a line of questioning that has already been covered or that I don’t consider very useful.

What would a script look like? Would it be a set of strict questions, one after another, that we would follow to the letter? No, of course not; as I said earlier, it’s important to keep the interview casual and informal, letting it flow with the candidate and adapting all of the time. I would like to try scripts in future where we plan out what kinds of questions and discussions we would like to have and assign an interviewer to “lead” each one. So someone would conduct the introduction and outro and facilitate the interview, another would ask deeper questions on a topic, and so on. I would still allow each interviewer to interrupt or go off script, but the key is to try to make sure we get the most out of the interview while keeping it natural.

It’s all about opportunities, not tests
If you are thinking of including some kind of task, examination or test of the candidate to assess their skills, bear this in mind - do not look for failure. What do I mean by this? Interviews are very compromised things; there is a lot of pressure involved, and people perform nowhere near as they do when working normally. An interview is rarely an accurate representation of what the person is like to work with. With this in mind, I try to view questions and tests as opportunities for the candidate to impress me. If the candidate misses or messes up these opportunities, I try to remember that this may be due to the unusual pressure. Viewing the interview as a series of opportunities to impress means I avoid placing too much emphasis on particular parts of it and look for more well-rounded candidates. It also means people have a chance to recover: they may mess up the start of an interview but relax and impress later, or they may impress in their preparation but fluff their performance because they are not comfortable with interviews. I’m also open to my own questions being terrible and the candidate impressing me in a way that I didn’t expect, on something I didn’t ask them about.

Life is continuous learning and lessons
Even if you don’t hire them, make sure to always give feedback to the candidate, and if there are areas they didn’t know or understand, take the opportunity to teach them if possible. You may not be hiring them, but it is impossible for a candidate to improve if they never receive feedback. I used to find it very frustrating when no-one ever told me why I didn’t get the job; even if I had done nothing wrong, it would have been helpful for my confidence to know the reasons.

Interviewing testers

So what about testers? What do we talk about and discuss, what is important for testing? My first reference for this is Dan Ashby’s excellent interview mindmap found here:
‘Nuff said! But some additional thoughts for me:
  • Discussing definitions of “testing” and why people like testing is important, because everyone has different ideas and understanding. This is as much about making the candidate feel comfortable with what they are applying for as it is about establishing they are the right fit for us.
  • Discussing “agile” or “devops” is also an opportunity to make clear how we work. I’m not looking for people to rattle off dictionary definitions of these words; I want to understand what they think they mean and how they adapt to topics that affect testing. It’s also a chance for me to explain what I believe they mean and how the company has interpreted or implemented those ideas. The discussion and understanding is the important part, not testing the candidate on definitions.
  • In terms of technical tests or exams, I’m very sceptical. While there may be certain contexts where you are looking to hire testers with programming experience, I personally don’t view programming as a key testing skill. However, if I could design a technical test that gives a good picture of how capable a candidate is of learning technical subjects, I would try it! I value testers with the right attitude and approach and the ability to learn a great deal; already knowing programming is useful but not critical. The critical ability is the capacity to learn. I’ve worked with and hired great testers who knew little about programming and contributed a lot of value, if not more value than those who knew programming.
  • I’ve experimented with tests of candidates’ testing abilities and seen different ideas, but again I’m unconvinced of how much you can judge from them. You can try to assess candidates on bugs they find in an application, or explore their lateral thinking skills with a task such as mind-mapping a pencil. I’ve seen some interesting results from these tasks, but I’m concerned that they bias us towards candidates who are great on the spot. I suspect there are great testers who don’t perform very well in these situations but are excellent given more time and less pressure.

Summary

  • It’s rare that we are trained how to interview, so it’s worth spending time planning how you are going to learn and improve, because interviewing is an area with particular skills and considerations like any other.
  • I’ve got several areas I’d like to focus on improving or learning more about in future, particularly around planning and facilitating interviews.
  • It’s easy to feel interviews are about asking lots of questions and testing the interviewee, based on your own experience as an interviewee. But the best interviews are the ones you make a more natural and informal chat.
  • Opportunities to impress, not testing for failure!
  • Make sure to always take the time to give feedback, especially if you don’t hire the candidate. Tell them why you are not hiring them, so they can improve.

Monday, 19 December 2016

The temptation to split dev and test work in sprints - don’t do it!

Introduction

About 3 and a half years ago, I was new to sprints and scrum. Coming from the videogames industry, I was used to a process where I would test code that came from developers and return bug reports. I had heard the words “sprint” and “scrum” before but I had no idea how testing fit into them, so I joined a company where I could figure that out. This is what I figured out.

What’s a sprint?

If you’re not familiar with scrum or agile, a sprint is effectively a short-term project plan where a team of people decide the work that they can complete within a 1, 2 or 3 week window. Work is “committed” (promised) to be completed in that time frame and the team tracks their progress. After each sprint, reviews and retrospectives are held to help the team find what works well and what helps them complete more work to a higher standard while still maintaining their commitment. The main focus of sprint work is completing the work and avoiding leaving work unfinished.

Where does testing fit?

So normally teams set up a task board with columns titled something similar to “To Do, In Progress, Done”. Sometimes people add more columns or use different names but the usage is similar. Anyone from the same background as me would be tempted to then suggest that an additional column could be added between “In Progress” and “Done”. The logic being that “when you’ve finished your development work, I’ll test it”. In my head, this was trying to work with what I knew already in this new environment. We ended up with columns similar to “To Do, Build/Dev, Testing, Done”.

Bad idea

So at first, I thought things were working ok. I feel one of my strengths is learning and picking things up fast, so I got stuck in and kept up with the 5 developers in my team. Most of the time I was fortunate that the work arrived sequentially or wasn’t particularly time consuming to test. This didn’t last long though, and eventually we started to fail to complete work on time. This happened either because I was testing it all at the end of a sprint, or because the pieces of work were highly dependent upon each other and the problems with integration weren’t found until very late.
This meant we had to continue some work in future sprints. Now I no longer had plenty of time to write my test plans at the start; I was busy testing last sprint’s work and then testing this sprint’s work! I no longer had time to spend learning more automation or exploring areas newer to me, like performance testing. All of my time was consumed trying to test all of this work, and I couldn’t do it. What went wrong?

A change in approach

I would love to say I quickly realised the problem and fixed it, but it took me a long time. Part of this I will put down to not knowing any better, and part to working with developers who didn’t know any better either. Either way, a while later I realised that the problem was that I was trying to test everything, and the developers had started to rely on me for that. I’ve since realised that there is a fair bit of psychology involved in software development, and this was one of my biggest lessons.
We eventually decided to stop splitting up work between roles, mainly because we found that developers tended to treat work that was in “test” as “done” from their point of view, freeing themselves up to take on even more development work. This created a bottleneck: as the only tester, I was testing work from yesterday while they were busy with today. Instead, I came to the realisation that there is little benefit to splitting the work up in this way, at least not through process. We should be working together to complete the work, not focusing on our own personal queues. I shifted from testing after development was thought complete to trying to test earlier, even trying to “test” code as developers were writing it, pairing with them to analyse the solution.

Understanding what a “role” means

I think for me this lesson has been more about realising that playing the role of “tester” does not necessarily mean I carry out all of the “testing” in a team. It does mean I am responsible for guiding, improving and facilitating good testing, but I do not necessarily have to complete it all personally. An additional part of this lesson is that I cannot rely on other people to define my role for me - as a relative newbie to testing I relied on the developers to help me figure out where I should be. While I’ve learnt from it, I also know that I may need to explain this learning again in future because it is not immediately obvious.

So where does testing really fit?

Everywhere, in parallel and in collaboration with development. Testing is a supportive function of the team’s work; it no longer makes sense to me to define it as another column of things to do. It has no set time frame in which it’s best performed, and it doesn’t always involve a great deal of repetition in execution. It is extremely contextual.
That’s not to say you shouldn’t test alone or separately from ongoing teamwork. You absolutely must test alone as well, to allow yourself to focus and to process information. It’s just that you must choose to do this - where it is appropriate.

Definition of “Done”

One of my recent approaches was to set the definition of “Done” as:

“Code deployed live, with appropriate monitoring or logging and feedback has been gathered from the end user”

Others may have different definitions, but I liked to focus the team on getting our work into a position where we could learn from it and take actions in the following sprint. For me, it meant we could actually pivot based on end-user feedback or our monitoring and measure our success, instead of finishing a sprint with no idea whether our work was useful and planning a new sprint not knowing whether we would need to change it.

Summary

  • Avoid using columns like “Dev” and “Test” in sprint boards. It seems to lead to a separation of work where work is considered “Done” before it is tested.
  • Instead, try to test in parallel as much as possible (but not all of the time), try to test earlier and lower down a technology stack (such as testing API endpoints before a GUI is completed that uses them).
  • Encourage developers to keep testing, and instead carefully pick when and where to personally carry out the bulk of the testing. Try to coach the team on becoming better at testing, share your skills and knowledge and let them help you.
  • Altering the definition of “Done” seemed to help for me; it was useful to focus the team on an objective that meant we really didn’t have to keep returning to work we had considered completed. In other words, make sure “done” means “done”.

Which languages are best to learn for testing?

Introduction

I’ve seen this question raised quite a lot in testing circles regarding which programming language is best to learn. Like it or not, the current trend in the industry seems to be asking much more of testers, with a view to creating more automation and having a much greater understanding of the technology they are testing.

Why learn a programming language?

What is your motivation for wanting to learn a programming language? In order to test well, you don’t need to know one. There are particular situations or contexts where programming may be useful to me as a tester, such as wanting to write some automated checks, to learn more about what the product I’m testing is actually doing under the surface, or simply to save time by creating tools to help myself. However, these situations don’t arise all of the time.
It’s also worth highlighting that to write programs, I need to understand a lot about the domain I’m working with. If I want to write an automated check, I need to test the product first to understand what is worth checking. If I want to read some code, I need to understand the context that code is used for, what its purpose is.
So even if I did have something that is worth programming, I still need to “test” to identify it, understand it and consider whether it is worth it. Simply learning to program is not enough, which is why as a tester you can bring a lot to the design of automated checks and why developers cannot easily test the product themselves.

Automated checks

So it seems the usual reason testers are looking to learn a programming language is to create automated regression suites for speed and reliability. Typically I find the advice tends to be that you should learn and use the same language as your back-end developers (so if they use Java to build the product you test and Java to write unit tests, then you should learn Java too). The argument being that by using the same language, you can access their support and help more easily when you need it. However, this depends upon your current relationship with your developers and their location. Perhaps you may not be very close to your developers and won’t benefit from their support - this may not be something you can easily change.
You are going to have to judge for yourself which language, but the biggest factors that affect my choice would be:
  • How comfortable am I writing code in this language?
  • What support can I get from the developers I work with?
  • What support can I get from other testers?
  • How well supported is the language in terms of libraries or capabilities? (for example, if you want to write Selenium checks, is there documentation on how to use Selenium in that language?).
  • Can I write programs in this language in a way that makes them easily understood by other people?
There is no easy answer to these questions so I wouldn’t recommend any particular language. However, to help narrow your research, I would suggest focusing on these languages to consider:
  • Java
  • C#
  • Python
  • Ruby
  • JavaScript
At the time of writing, these are some of the more popular languages to learn with regards to automated checks.
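Whichever of these you pick, an automated check has the same basic shape: drive some behaviour, then assert on the result. A minimal sketch in Python, where `apply_discount` is a stand-in for whatever product code or interface you would really be driving:

```python
# A minimal automated check: exercise some behaviour, then assert on
# the result. `apply_discount` is an invented stand-in for the real
# product you would drive (through an API, a browser, and so on).
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

def check_discount_is_applied():
    result = apply_discount(100.00, 20)
    assert result == 80.00, f"expected 80.00, got {result}"

def check_zero_discount_leaves_price_alone():
    assert apply_discount(59.99, 0) == 59.99

check_discount_is_applied()
check_zero_discount_leaves_price_alone()
print("all checks passed")
```

The language mostly changes the syntax around this shape, which is why the support questions above matter more than the language itself.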

Toolsmithing

Maybe you’re interested in simply being able to make use of a programming language to create your own tools. A “tool” in this context can be something small like a script that rapidly repeats a certain action over and over. For example, I once created a script that compared two sets of 100 invoices with each other. It looked at each invoice total and compared the old total with a new one and saved the difference in a text file. This meant I could rapidly compare and identify obvious errors, saving my own time and helping me perform this particular action more accurately. However, it didn’t replace the testing I performed, it simply augmented it, allowing me to focus on more interesting tests.

I created tools like this in a programming language called Python. I personally love using this language because it’s very easy to read, has a lot of support in terms of libraries and documentation and can allow you to experiment with ideas very rapidly. I very much recommend Python as a starting point for building simple tools and it can be used to write automated checks if you so wish.
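As an illustration, here is a rough Python sketch of that invoice-comparison idea. The file names and the one-invoice-per-line CSV format are assumptions for the example, not the original script:

```python
# Sketch of the invoice-comparison tool described above. Each input
# file is assumed to hold lines of "invoice_id,total"; differences in
# totals are written to a plain text report.
import csv

def load_totals(path):
    """Read invoice totals keyed by invoice id from a CSV file."""
    with open(path, newline="") as f:
        return {row[0]: float(row[1]) for row in csv.reader(f)}

def compare_invoices(old_path, new_path, report_path):
    old, new = load_totals(old_path), load_totals(new_path)
    with open(report_path, "w") as report:
        # Only compare invoices present in both sets.
        for invoice_id in sorted(old.keys() & new.keys()):
            diff = new[invoice_id] - old[invoice_id]
            if abs(diff) > 0.005:  # ignore sub-penny rounding noise
                report.write(f"{invoice_id}: {old[invoice_id]} -> "
                             f"{new[invoice_id]} (diff {diff:+.2f})\n")
```

A few lines like this turn an hour of eyeballing invoices into a seconds-long scan of a short report, leaving the interesting testing to you.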

There’s a great tutorial on getting started with Python here.

Alternatives to programming

Do you want to become a more technically capable tester? Not really keen on learning a programming language but feel like you need to learn? Well perhaps you can find value in learning other technologies and concepts. While programming is a very powerful tool, it’s not the only one that a tester can learn in order to become more technically capable.
  • Test lower - maybe you could understand more about the technologies powering the product you’re testing and test lower and earlier? For example, many web services are built upon APIs (Application Programming Interfaces), perhaps you could learn how to test these? A place to start interacting with APIs is trying out Postman.
  • A similar approach to testing lower is learning about databases and how to query them using SQL, with systems such as MySQL or Postgres.
  • Research tools that can help you test or provide more information. For example, Google Chrome DevTools have lots of very useful tools for interacting with websites and debugging problems or emulating mobile devices.
  • Talk to developers! Ask them to draw diagrams explaining how the product works technically, and ask them to explain things you don’t understand. It can be difficult at first knowing what is important to remember and what can be forgotten, but simply taking an interest in their work and asking questions can even help them understand their own work better. I find there is no better test of my own understanding than having to explain myself!
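To illustrate the database point above, here is a small Python sketch using the built-in sqlite3 module as a stand-in (the SQL ideas carry over to MySQL or Postgres; the table and data are invented for the example):

```python
# Testing lower: check data directly in the database rather than only
# through the GUI. SQLite is used because it ships with Python; the
# same SQL works against MySQL or Postgres with their own drivers.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "paid", 20.0), (2, "pending", 15.0), (3, "paid", -5.0)],
)

# A check a tester might run: no paid order should have a negative total.
suspect = conn.execute(
    "SELECT id, total FROM orders WHERE status = 'paid' AND total < 0"
).fetchall()
print("suspect orders:", suspect)
```

Queries like this can surface problems that a GUI would hide, and they are a gentle way into SQL without learning a full programming language first.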

Summary

  • You don’t need to learn a programming language to be a great tester.
  • There is no one particular language that is “the best”, but there are some popular ones that are good to get started with.
  • There are other ways to become a more technical tester that don’t involve learning programming.