Exploratory Testing Tips from Tasting Let’s Test

The following are some tips from the Atlassian team after the exploratory testing session held at Tasting Let’s Test. Thanks to Penny, Mark and the Atlassian team for a great session.

Don’t Trust the Documentation

By definition, you can do exploratory testing without any scripted test cases to follow. You can also do it without a specification for the feature, or indeed any documentation at all. However, if such things do exist, it is tempting to let them define a testing session – you simply check that each documented item works in the system as specified. Avoid this trap!

Instead, you should explore the feature and test that it works as you expect it to. After all, end users aren’t going to read the documentation before they use the feature. Later, you can use the documentation to cross-reference, but starting with the documentation means you’re more likely to miss possibilities like:

• Implicit requirements which are not documented,

• Incorrect documentation – documentation that specifies something impossible or unwise, or that sounds better on paper than it works in practice,

• End users’ expectations, carried over from previous versions or from similar features in other products.

Furthermore, assuming your developers are competent, the cases explicitly stated in the documentation are the ones most likely to have already been tested by them.

Break it – Don’t Verify

When asked to test a feature, our natural inclination is to confirm that it works as expected. This tends to limit our interaction with the feature to positive behaviours (setting all conditions so that we can confirm the feature does what it is expected to do).

Instead, try to switch your mindset to breaking, not verifying. Instead of “Does this work?”, think “When might this not work?” or “When shouldn’t this work?”.

For example, given a feature that creates new users in a system and emails them a temporary password, don’t just check that it works. Instead, think about and explore functional cases such as these (see the test sketch after the list):

• The user trying to create new users doesn’t have permission to do so,

• The system has reached its limit of licensed users,

• The system is connected to a read-only LDAP server and isn’t able to create new users,

• There is no mail server configured,

• The mail server is not responding.

These are not “edge cases”. These are all valid user scenarios.
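To make this concrete, here is a minimal sketch of how such negative-path cases could be written up as JUnit 5 tests in Java. UserService, TestFixtures, CreateUserResult and the exception types are hypothetical names invented for this illustration, not an API from any particular product:

```java
// Hypothetical negative-path tests for the "create new user" example.
// UserService, TestFixtures and the exception types are illustrative only.
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class CreateUserNegativePathTest {

    @Test
    void rejectsCallerWithoutPermission() {
        UserService service = TestFixtures.serviceAs("user-without-admin-rights");
        assertThrows(PermissionDeniedException.class,
                () -> service.createUser("new.user@example.com"));
    }

    @Test
    void rejectsCreationWhenLicenseLimitReached() {
        UserService service = TestFixtures.serviceAtLicenseLimit();
        assertThrows(LicenseLimitException.class,
                () -> service.createUser("new.user@example.com"));
    }

    @Test
    void failsCleanlyAgainstReadOnlyDirectory() {
        UserService service = TestFixtures.serviceWithReadOnlyLdap();
        assertThrows(DirectoryNotWritableException.class,
                () -> service.createUser("new.user@example.com"));
    }

    @Test
    void stillCreatesUserWhenMailServerIsDown() {
        // The interesting question: is the user created, lost, or half-created
        // when the temporary-password email cannot be sent?
        UserService service = TestFixtures.serviceWithUnresponsiveMailServer();
        CreateUserResult result = service.createUser("new.user@example.com");
        assertTrue(result.userCreated());
        assertTrue(result.passwordEmailFailed());
    }
}
```

Even if you never automate them, naming the cases like this is a handy way to plan an exploratory session.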

Be Diligent

Once we see some cases behave as expected, our tendency is to assume that everything else must behave as expected too. In fact, the more tests we run that don’t uncover a bug, the more we come to believe there are no bugs, and the more we ease off on our search.

If the area you’re investigating seems bug-free, try these ideas to bring the untested back to the forefront so that you keep digging!

• Complex input data – What kinds of data can you put in? What is the maximum length? What formats does the data pass through along the way (XML, JSON, SQL), and does the feature correctly handle characters that have special meanings in those formats? (See the probe strings sketched after this list.)

• Performance – Will it be responsive in a production environment with production quantities of data? Will it be responsive on slow end-users’ machines?

• Security – Can malicious users use the feature in a way that wasn’t intended? How about each part of the feature in isolation?

• Environments – Does it work in all supported environments? Platforms, browsers, webapp containers, databases – whatever is relevant for the project.

• Concurrency – How does it work when multiple users use it at once? Can their actions conflict? Are their actions completely isolated from one another? (See the concurrency sketch after this list.)

• Reliability – How does a part of the feature react when external components it relies on are missing, not responding, or slow? What happens if you leave it open for a weekend between actions, or perform two actions in quick succession?

• Usability – Who is the feature aimed at? Who else will use it? Will they naturally understand what it can do and how to use it? When they do something wrong, will they know what they’ve done wrong and how to fix it? If it’s a common action, can it be done quickly?

• Internationalisation – Will the feature make sense to non-English speakers? Are all strings translatable? Do the concepts translate? Is there anything that could be offensive in other cultures?

• Accessibility – Does the feature meet accessibility standards? Is it accessible to users who are using tools such as screen readers?

This is not a checklist, merely a hint for extra idea generation. It’s always better to do your own exploration first before consulting a list such as this, lest your thinking get restricted.
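For the “complex input data” item above, it helps to keep a small collection of probe strings handy. Here is a minimal sketch in Java; the strings are generic probes for format boundaries, and feeding them into the feature under test is up to you:

```java
// Generic probe strings for "complex input data" exploration.
// Each one targets characters with special meaning in a format the data
// may pass through, or a boundary such as length or encoding.
import java.util.List;

class SpecialInputProbes {

    static final List<String> PROBES = List.of(
            "<script>alert(1)</script>",            // HTML/XML markup
            "]]>",                                  // terminates an XML CDATA section
            "\"quoted\" and \\backslash",           // JSON escaping
            "O'Brien'; DROP TABLE users;--",        // SQL quoting
            "line1\nline2\r\n",                     // embedded line breaks
            "café 日本語 😀",                        // non-ASCII and surrogate pairs
            "   leading and trailing whitespace   ",
            "x".repeat(10_000)                      // length limits
    );
}
```

Try each probe wherever the feature accepts text, and watch both what gets stored and what gets displayed back.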
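For the “concurrency” item, the simplest useful probe is to release the same action from several threads at the same instant, then inspect the end state. A sketch, assuming a hypothetical renamePage action standing in for whatever your feature does:

```java
// Fire the same action from several threads at once, then inspect the result.
// renamePage(...) is a hypothetical stand-in for the action under test.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class ConcurrentActionProbe {

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        CountDownLatch start = new CountDownLatch(1);

        for (int i = 0; i < 4; i++) {
            final int n = i;
            pool.submit(() -> {
                start.await();                   // all threads go at once
                renamePage("Home", "Home-" + n); // the contested action
                return null;
            });
        }

        start.countDown();                       // release the threads
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);

        // Now inspect the system by hand or by assertion: exactly one rename
        // should win, and no page should be lost, duplicated or half-renamed.
    }

    static void renamePage(String from, String to) {
        /* stand-in for the real call to the system under test */
    }
}
```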

Happy testing!

From the Atlassian QA team (http://www.atlassian.com)

