I admit that I regard automation as a dull but vital part of the success of a project. Automation has evolved into Continuous Integration, a powerful toolset that allows frequent, regular building and testing of the code. I won’t get into what CI is (check the internets). Instead, I am going to explore a couple of aspects of CI that can be added to the artifacts of the development process, and note some others that cannot.
Continuous performance
You wrote performance tests. You can run them by firing a battery of tests from a client machine at an arbitrary environment where the application lives. The test results are collected on the client and can be published on a web server. Why not automate this completely, then? To do so, automate the execution, gathering and publishing of the performance results (a minimal sketch of such a nightly job closes this section). Daily performance indicators not only increase visibility into the progress of the application; it is also much easier to fix a performance degradation introduced by a daily changeset than one buried somewhere between two releases. There are a couple of factors that may add complexity to establishing performance tests:
Dealing with dependencies
The obvious rule of thumb is to minimize dependencies. However, if there still are dependencies on other (perhaps external) systems, use mocks to isolate the system whose performance you are testing; a throwaway stub like the one sketched below will do. We’re talking about nightly performance tests, so don’t put load on systems that shouldn’t have to take it.
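As an illustration of such a mock, here is a throwaway stub that could stand in for an external dependency during the nightly run; the port and the canned payload are made up for the example. It answers every request instantly, so the measurements reflect only the system under test.

    # stub_partner_api.py - hypothetical stand-in for an external service,
    # used only during nightly performance runs; answers instantly with canned data.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CANNED_RESPONSE = json.dumps({"status": "OK", "quote": 42}).encode("utf-8")

    class StubHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Every request gets the same fixed payload, with no real work behind it.
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(CANNED_RESPONSE)))
            self.end_headers()
            self.wfile.write(CANNED_RESPONSE)

        def log_message(self, fmt, *args):
            # Keep the performance run's console output clean.
            pass

    if __name__ == "__main__":
        # Point the system under test at http://localhost:9090 instead of the real dependency.
        HTTPServer(("", 9090), StubHandler).serve_forever()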
Finally, the main artifact of the final integration (done once per iteration) is an environment where all the components run together in a production-like setup. Use this environment for your system performance test, where you measure current performance against the baseline.
Measuring relative performance
The environment you’re using for the nightly performance-test cycle will most likely not be a perfect mirror of production (especially when dealing with geographically distributed systems). Use common sense to establish the ratio between the two environments, then derive rough production performance numbers from it, assuming a linear CPU/throughput relationship: if production has four times the CPU capacity of the test environment, expect roughly four times the throughput.
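Putting the pieces of this section together, a nightly job along the following lines would run the battery of tests, gather the results, compare them against the baseline, project rough production numbers and publish a report. It is only a sketch: the runner command, the file locations, the 10% regression tolerance and the 4x environment ratio are all assumptions to be replaced with your own.

    # nightly_perf.py - illustrative nightly performance job: run, gather, compare, publish.
    # All commands, paths and numbers below are placeholders, not a real project's values.
    import json
    import shutil
    import subprocess

    ENV_RATIO = 4.0              # assumed production-to-test capacity ratio
    REGRESSION_TOLERANCE = 0.10  # flag anything more than 10% slower than the baseline

    def run_suite():
        # Fire the battery of tests from this client machine at the target environment.
        subprocess.run(["./run-perf-tests.sh", "--target", "nightly-env",
                        "--out", "results.json"], check=True)
        with open("results.json") as f:
            return json.load(f)  # e.g. {"checkout": {"throughput": 120.0}, ...}

    def compare_to_baseline(results):
        with open("baseline.json") as f:
            baseline = json.load(f)
        regressions = []
        for scenario, numbers in results.items():
            current, previous = numbers["throughput"], baseline[scenario]["throughput"]
            if current < previous * (1 - REGRESSION_TOLERANCE):
                regressions.append((scenario, previous, current))
            # Rough production estimate, assuming a linear CPU/throughput relationship.
            numbers["estimated_production_throughput"] = current * ENV_RATIO
        return regressions

    def publish(results, regressions):
        report = {"results": results, "regressions": regressions}
        with open("report.json", "w") as f:
            json.dump(report, f, indent=2)
        # Drop the report where the team's web server can pick it up.
        shutil.copy("report.json", "/var/www/perf/latest.json")

    if __name__ == "__main__":
        results = run_suite()
        regressions = compare_to_baseline(results)
        publish(results, regressions)

Hook a script like this into the CI server’s nightly schedule and the daily performance indicators publish themselves.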
Continuous deployment
This is as simple as it sounds: automate the install. Make it dead easy to deploy the application in any environment by providing installation scripts. Boil the configuration down to a single file that is self-documented and easily understood by non-programmers (read: the Operations team); a sketch of what that could look like follows. The goal here is to unlock a powerful tool: making the application installable and upgradable with the click of a button. If all the other pieces of the continuum are in place, you can confidently deploy your application to production on a much tighter release cycle, even daily. Deployment and integration become background tasks rather than first-class events.
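To give a flavour of what that could look like, here is a sketch of a one-command deployment driven by a single self-documented config file. The file names, config keys and service name are invented for the example, not prescriptions.

    # deploy.py - illustrative one-command install/upgrade driven by a single config file.
    # Operations only ever edits deploy.conf, which might look like:
    #   [application]
    #   ; where the release archive lives
    #   package = /releases/myapp-1.4.2.tar.gz
    #   ; directory the application is unpacked into
    #   install_dir = /opt/myapp
    #   ; port the application listens on
    #   port = 8080
    import configparser
    import subprocess
    import tarfile

    def deploy(config_path="deploy.conf"):
        config = configparser.ConfigParser()
        config.read(config_path)
        app = config["application"]

        # Stop the running instance, unpack the new release, start it again.
        subprocess.run(["systemctl", "stop", "myapp"], check=False)
        with tarfile.open(app["package"]) as archive:
            archive.extractall(app["install_dir"])
        subprocess.run(["systemctl", "start", "myapp"], check=True)
        print(f"Deployed {app['package']} to {app['install_dir']} on port {app['port']}")

    if __name__ == "__main__":
        deploy()

The same script, pointed at a different config file, installs or upgrades the application in any environment.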
More continuous
I am just fresh off the Agile Testing Days conference, where I learned a few more Cs from the distinguished speakers. I call these the soft Cs, since they involve constant human engagement:
– Continuous learning (Declan Whelan)
– Continuous process improvement (Stuart Reid)
– Continuous acceptance testing, i.e. stakeholder sign-off at the end of every sprint (Anko Tijman)
– Continuous customer involvement (Anko Tijman)