Friday, 19 February 2016

What's in a word?

I really don’t like using the word 'it'. There, I said it.

I’d never really considered that this simple two-letter word could cause so much trouble until I spoke with an analyst in a previous job.

A defect was raised by the client against our legacy system. The manuals were consulted, and right there in the middle of all that documentation, the behaviour was described as 'it'. Unfortunately, nowhere in the documentation was 'it' actually defined; such was the manner in which the behaviour was described, 'it' could have referred to any number of features and behaviours.

We might as well have described the behaviour as “some thingy”.

So why do I dislike that word?


The problem is its usage as a catch-all word: a word used to describe something without actually describing what that something is.

The dictionary definition of 'it' reads:
"used to represent an inanimate thing understood, previously mentioned, about to be mentioned, or present in the immediate context"
The problem with this is that many things may have been mentioned previously or are about to be mentioned, one person's understanding may differ from another's, and the context may not necessarily be obvious.

In the software industry, I consider this to be ambiguous language, and ambiguous language can lead to misinterpretation. If this word is used when defining requirements, a level of uncertainty is created; the requirement can then be interpreted incorrectly, developed incorrectly and tested incorrectly.

I’ve been trying to work towards getting better requirements into development. One of the ways to do this is to use better language - or at least language which cannot be interpreted in multiple ways.

One of the easiest ways to get less ambiguous language into requirements is to eliminate the ‘it’ word at that stage.

Requirements


As an example, a list of requirements for the design of a web page could state (and this is in no way representative of actual requirements):

  • A banner coloured red, white and blue
  • There are 4 links - Home, Reports, Settings, Logout
  • It is located at the top of the page

So what is located at the top of the page? These are deliberately bad requirements to get my point across: what they were supposed to say was that there are 4 links within the banner, and that the banner is located at the top of the page. By using ‘it’ in that scenario, the reference could be to just the banner, just the links, or even a single link out of all of them.

By removing these implicit words when communicating ideas, we give everyone involved a better chance of understanding them.

Not just requirements


We as testers need to be mindful that we are also in the business of communicating. When raising defect reports, the language used is key to conveying what the issue is, which has a bearing on whether that issue is perceived as severe or not. If that perception of severity is lower than intended, then an issue that is causing frustration for users may never get fixed and will continue to be a problem.

There are other words that are implicit in their concept; I’m just focussing on 'it' for this post because I find its usage far more common. Individuals use not necessarily bad language, but lazy language: they want to get their point across without actually stating what their point is, just hoping that someone understands them (or doesn’t, as this article on The Guardian covers).

Even when writing this post, I have had to work hard to avoid using 'it' - the use of such words is extremely prevalent, and we easily fall into the trap of doing whatever is easiest.

We just need to be mindful of it.

Friday, 12 February 2016

North West Tester Gathering

Introduction

In the last few weeks I’ve attended my first ever testing meetups, in Manchester and Liverpool. Both of these meetups were organised by a group called the “North West Tester Gathering”, which you can find here on meetup.com. Other than online, I’d never spoken to any testers outside of the companies I’ve worked for, so I was really looking forward to them. I wanted to go for two reasons:
  • To listen to other testers’ experiences and try to learn from them, the problems they faced and the solutions they chose.
  • To talk about my own experiences, seek out fresh opinions and ideas, and discuss the challenges I face. This is not necessarily because I don’t believe I can face the challenges alone, but because I can never think of everything myself, and I like to try out new ideas that might never have occurred to me.

Speakers

For the first meetup in Manchester there was only one main speaker: Richard Bishop from Trust IV, a software testing consultancy. The main topic of the talk was Network Virtualisation, a technology that allows you to “stub” or simulate network conditions, such as a user visiting your website through an iPhone on a 2G network. The tool they demonstrated this with was one created by Hewlett Packard called HPE Network Virtualisation.
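I haven’t used HPE NV myself, so I won’t try to reproduce its interface here, but the underlying idea - running your test traffic over a deliberately degraded network - can be sketched with more everyday tools. Below is a minimal, purely illustrative Python sketch using Selenium’s Chrome-specific network emulation; the 2G-ish figures are my own rough guesses, not presets from any tool.

    # Illustrative only: simulating a slow, roughly 2G connection with
    # Selenium's Chrome network emulation (not HPE NV).
    from selenium import webdriver

    driver = webdriver.Chrome()

    # Rough 2G-like conditions; the numbers are guesses for illustration.
    driver.set_network_conditions(
        offline=False,
        latency=650,                         # added round-trip latency, in ms
        download_throughput=50 * 1024 // 8,  # ~50 kbit/s down, in bytes/s
        upload_throughput=20 * 1024 // 8,    # ~20 kbit/s up, in bytes/s
    )

    driver.get("https://example.com")  # watch how the page behaves on a slow link
    print(driver.title)
    driver.quit()

Dedicated network virtualisation tools model far more than a crude throttle like this (packet loss, jitter, recorded real-world profiles), but even a sketch like the above can surface surprising behaviour in a page.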
The second meetup in Liverpool had two speakers, Vernon Richards and Duncan Nisbet. Vernon’s talk was about the common myths in testing that we all know, and how we can tackle these myths - mainly by improving how we talk about testing in the first place! Duncan’s talk was about exploratory testing and how we probably all already conduct exploratory testing; we just don’t include it in our existing processes.


You can find videos of these talks here:
“Myth Deep Dive” by Vernon Richards:
“Exploratory Testing” by Duncan Nisbet:
{will add when it's uploaded!}


Main Takeaways

I found all of the talks engaging and very relatable! I fully recommend watching the videos if you’re new to discussing the world of testing!
“Network Virtualisation” by Richard Bishop
  • Richard showed us some figures produced by one of the large big data companies, forecasting how the technology market would look in 2016. He especially highlighted the rise of end users relying on mobile devices to interact with products. I think this was useful food for thought, especially as I’m involved with a project that could be used on mobile devices.
  • He also used some very effective examples demonstrating the value of performance testing, as well as the need to validate your assumptions (which applies to any testing!). He described an interesting test where they took network speed samples before and during a major football match and found that the speed was faster during the match - against their assumption that it would be slower!
  • I’ve definitely still got a lot to learn regarding performance testing; right now it feels like a domain rich with specialist knowledge (or at least different knowledge - for example, the need to understand statistical significance). I now know what the term “jitter” means: the variation in the delay between arriving packets, which can cause packets to be received in the wrong order (see the small sketch after this list).
  • Richard also provided some useful example use cases, such as Facebook’s “2G Tuesdays”. This is where Facebook employees are asked to use Facebook over a connection throttled to 2G speeds, to help them understand the experience of users in more remote or developing areas of the world. I felt this was an effective example of the lengths Facebook were going to in order to help their employees empathise with these customers, and therefore take their product’s performance on slow networks seriously.
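Since jitter was new to me, here is a tiny sketch of what it actually measures, using made-up packet arrival times. The point is that jitter describes the variation in the gaps between packets, not the average delay itself; the standard deviation below is one simple measure of that variation (real protocols define it more precisely).

    # Made-up packet arrival timestamps, in seconds - purely illustrative.
    import statistics

    arrivals = [0.000, 0.210, 0.390, 0.720, 0.800, 1.050]

    # Gaps between consecutive packet arrivals.
    gaps = [later - earlier for earlier, later in zip(arrivals, arrivals[1:])]

    print(f"mean gap: {statistics.mean(gaps) * 1000:.0f} ms")   # average delay
    print(f"jitter:   {statistics.stdev(gaps) * 1000:.0f} ms")  # spread of the gaps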
“Myth Deep Dive” by Vernon Richards
  • Vernon’s talk mainly focused on talking about testing better to non-testers. A lot of the myths people believe about testing are partly caused by our own inability to talk about testing.
  • There were a lot of themes that I think we would all recognise, such as “The way to control testing is to count things” - which is to say, judging the value of testing in terms of test cases executed or bugs reported, and how this isn’t necessarily useful.
  • I really recommend you watch the video above! But the other themes were: "Testing is just clicking a few buttons" and "Automated testing solves all your problems".
“Exploratory Testing” by Duncan Nisbet
  • Duncan’s talk focused on some typical examples of where we all perform exploratory testing but simply don’t think of it as exploratory - we don’t value it because we don’t identify it.
  • He also talked about exploratory sessions being iterative: you spend time exploring, learn what you can, and then repeat, designing further tests based on what you’ve learnt.
  • He also talked about the difference between good and bad exploratory testing being how well the tester can explain what they did in an exploratory session. Good exploratory testing can be explained and justified; it isn’t random, and a tester should be able to easily explain what they were doing and why.

Socialising!

So, other than the main talks, I was attending these meetups to meet and talk to other testers! I introduced myself and got chatting to quite a few different people. Some I already knew from my days at Sony in Liverpool; others I met for the first time. It was nice to be able to share stories and experiences. I highly recommend attending meetups just for this, really - you can learn a lot from others and get some different points of view on your testing ideas.

Being brave…

At the Manchester meetup I caught up with Leigh Rathbone, who was organising the Liverpool meetup. During the course of our chat I think my passion for testing got out, and Leigh asked if I wanted to stand up and do a lightning talk at Liverpool. I don’t take opportunities like this lightly, so I accepted. I think the process of writing these blog posts helped prepare me a little, but I had certainly never stood up in front of 80 people before, let alone people from my own profession, some of whom are massively more experienced than me and whom I have a lot of respect for.
I chose to talk about the very subject that I had passionately discussed with Leigh - diagrams. In my recent work I’ve found many examples where people have tried to explain themselves in words - either written or spoken - and failed. Not everything is easy to explain this way - I have definitely found that right here on this blog! The point I tried to make was that some information is better explained in a diagram or chart - timelines, flowcharts, mind maps and entity relationship diagrams, to name a few. It’s worth considering this when we are trying to explain ourselves, or when someone is struggling to explain something to us. I explored this theme a little in my post Test Cases - do we need them?
I also quickly recommended a book that I believe every tester should read - “Perfect Software and Other Illusions About Testing” by Gerald Weinberg. I had never read a testing book before, and I’m fairly sure a lot of testers haven’t either. I particularly like this book because it addresses the very topic Vernon was talking about: explaining what we do as testers in terms that anyone can understand. I’m also very much a fan of Jerry’s writing style; his stories and anecdotes make his points so much more memorable and relatable!

Summary


  • You should attend testing meetups! Even if you’re not a tester!
  • Even when I already knew something about the topics discussed, there was always something to learn or a new way of looking at it. I’d like to think I will always learn from the talks at these meetups.
  • Richard, Vernon and Duncan are really friendly and engaging people to talk to!
  • I shouldn’t be afraid of talking in front of lots of testers, because they are friendly people - and I must have made some kind of sense, as people came to thank me and chat about diagrams! I hope this inspires other people who are nervous or unsure about speaking to give it a go! Don’t listen to your brain!
  • Take opportunities with both hands when you see them - it can be very rewarding!
  • I’ve only attended two meetups so far and I’ve got so much to talk and think about!