August 5, 2010
Functional Testing is a key component of any development process. Its main objective is to identify potential issues before they reach production and negatively impact end users. These tests tend to be time-consuming and are often described as the ‘bottleneck’ of the development cycle.
This is where Automated Functional Tests kick in. Several tools are available for writing these UI tests (here at Medallia we use Selenium and HtmlUnit), but in general the test suites built on these frameworks rely entirely on the UI, making them unnecessarily time-consuming and flaky. A suite aimed at testing a specific piece of functionality requires a number of preconditions that are usually fulfilled by creating the necessary components through the UI. As a result, a test whose goal is to verify one small case ends up carrying a lot of boilerplate code that takes quite some time to execute and may also fail due to the inherent flakiness of UI tests.
Data Sharing… don’t!
One solution might be a common data set that all tests can access: a shared starting point for every suite. But that carries an inherent problem that one prefers not to face: data dependency. Now you have a system where one test can interfere with others, and trust me, if there is one key property that has to be maintained for the success of any testing framework (at both the unit and the functional level), it's the independence of tests.
Our Solution at Medallia
As I explained before, we wanted a system that was reliable and fast, where UI testing is applied only to the portions of the test that we actually WANT to verify. We also wanted test independence, so developers could focus on testing their own feature without worrying about the rest. What's more, with test independence it becomes possible to run all the tests in parallel, shortening each test cycle. Finally, we wanted the framework to be easily extensible, with every component reusable, in order to minimize the time needed to create a new test.
In order to achieve these objectives, we came up with an automated framework that is based on three main modules:
- Components: A “bean” that represents an entity of the application. For example, the component User that contains a name and a password (see the sketch after this list).
- Commands: A method that performs an action using the components. For example, createUser(User) or deleteUser(User).
- Facades: An interface that groups the commands by functionality. Following the same example, a UserFacade should contain the commands createUser(User) and deleteUser(User).
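To make the modules concrete, here is roughly what the User component could look like. This is a minimal sketch in plain Java; the field names are only illustrative.

```java
// A Component is just a plain bean that describes an entity of the
// application under test.
public class User {
    private final String name;
    private final String password;

    public User(String name, String password) {
        this.name = name;
        this.password = password;
    }

    public String getName() { return name; }
    public String getPassword() { return password; }
}
```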
Then we provided different “implementations” of those facades, and finally we created a system that picks the right implementation of the specific functionality for each test. In addition, we built a basic QA API that is able to receive different commands and create data in the backend (in our case it mainly calls the create methods of the different components we have in the production system, but it could be done by opening a DB connection or, even better, by using the production API if your system has one).
Let’s illustrate the basic behavior with an example:
We have our “User” interface that defines two methods:
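A minimal sketch of that interface, using the UserFacade name from the example above (the actual signatures may differ):

```java
// A Facade groups the commands of one functional area.
public interface UserFacade {
    void createUser(User user);
    void deleteUser(User user);
}
```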
Then we have two different implementations of that facade:
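For instance, a UI-backed implementation and a QA-API-backed one. In this sketch the Selenium calls use the real Selenium RC client API, but the @Priority annotation, the QaApiClient, and the page locators are hypothetical stand-ins for the framework's actual wiring.

```java
import com.thoughtworks.selenium.Selenium;

// Hypothetical annotation the framework could use to rank implementations
// (lowest number = highest priority).
@interface Priority { int value(); }

// Hypothetical client for the QA API described above.
interface QaApiClient {
    void send(String command, Object payload);
}

// UI implementation: exercises the real pages through the browser.
// Lower priority, so it runs only when explicitly requested.
@Priority(2)
class UiUserFacade implements UserFacade {
    private final Selenium selenium; // Selenium RC client, provided by the framework

    UiUserFacade(Selenium selenium) {
        this.selenium = selenium;
    }

    public void createUser(User user) {
        selenium.open("/admin/users/new"); // hypothetical page and locators
        selenium.type("name", user.getName());
        selenium.type("password", user.getPassword());
        selenium.click("save");
        selenium.waitForPageToLoad("30000");
    }

    public void deleteUser(User user) {
        selenium.open("/admin/users");
        selenium.click("delete_" + user.getName());
        selenium.waitForPageToLoad("30000");
    }
}

// QA API implementation: creates the data directly in the backend,
// skipping the UI entirely. Highest priority, so it is chosen by default.
@Priority(1)
class QaApiUserFacade implements UserFacade {
    private final QaApiClient qaApi;

    QaApiUserFacade(QaApiClient qaApi) {
        this.qaApi = qaApi;
    }

    public void createUser(User user) {
        qaApi.send("createUser", user);
    }

    public void deleteUser(User user) {
        qaApi.send("deleteUser", user);
    }
}
```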
In the actual framework, each ImplementationFacade does not directly implement the Facade, so there is no need for each ImplementationFacade to implement every command defined in the Facade.
Now there is a mechanism that runs an algorithm every time a test calls a method of the interface: it checks which implementations have that method available and then executes the one with the highest priority (lowest number). You can also force a command to run under a specific implementation.
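Two such tests might look like the following. This is only a sketch: the Facades entry point, its get and force methods, and the JUnit wiring are hypothetical stand-ins for the framework's real API.

```java
import org.junit.Test;

public class UserLifecycleTest {
    // Hypothetical entry point that resolves each command to the
    // highest-priority implementation unless one is forced explicitly.
    private final UserFacade users = Facades.get(UserFacade.class);

    // Test 1: create the user through the UI, delete it through the QA API.
    @Test
    public void createUserThroughUi() {
        User user = new User("jdoe", "secret");
        Facades.force(UiUserFacade.class).createUser(user); // forced to the UI
        users.deleteUser(user);                             // resolves to the QA API (priority 1)
    }

    // Test 2: create the user through the QA API, delete it through the UI.
    @Test
    public void deleteUserThroughUi() {
        User user = new User("jdoe", "secret");
        users.createUser(user);                             // resolves to the QA API (priority 1)
        Facades.force(UiUserFacade.class).deleteUser(user); // forced to the UI
    }
}
```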
These two tests end up doing the same thing: they both create one user and then delete it. The difference is that the first one creates the user through the UI and then deletes it through the QA API, while the second test does the opposite.
This approach gives us several advantages:
- We only do UI testing of the particular logic component that we want to test (in this example the assertions that the component was successfully created are omitted, but it is quite simple to code those assertions with Selenium).
- Our tests are not tightly coupled with a specific implementation. This offers us two main advantages:
- In the future, if we decide to use another tool (rather than Selenium), we can simply create a new implementation using that tool, and almost no changes have to be made to the tests.
- If at some point we have, for example, a production API that allows us to handle the creation of the components, we simply have to create a new implementation of a facade and assign it a higher priority; then all of our tests will begin using that implementation with no changes at all!
- We have a fast and reliable way of creating all the components needed in the preconditions without going through the UI. If a test needs to have five different users, simply call those ‘create methods’, then enable UI testing, and you can test your functionality from there.
- The commands are completely reusable: once a method is created, it is automatically available for any other test.
- You can have one command that creates a basic entity from scratch (for example a company with X users, Y locations around the globe, etc.), so that every test can call that command at the beginning and get a fresh new entity that no other test uses, achieving data independence (as sketched below).
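Such a bootstrap command might look like this. The CompanyFacade, the Company component, and the unique-name trick are hypothetical; the point is that each test gets its own isolated entity.

```java
// Hypothetical command that builds a fresh company from scratch so that
// each test starts from its own isolated data set.
public class TestDataBuilder {
    private final CompanyFacade companies = Facades.get(CompanyFacade.class);
    private final UserFacade users = Facades.get(UserFacade.class);

    public Company createFreshCompany(int userCount) {
        // A unique name guarantees that no other test touches this entity.
        Company company = new Company("test-co-" + System.nanoTime());
        companies.createCompany(company);
        for (int i = 0; i < userCount; i++) {
            users.createUser(new User("user" + i + "@" + company.getName(), "secret"));
        }
        return company;
    }
}
```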
We are still working on the final details, and we hope that at some point we will be able to open-source the framework's code in order to share it with the community.