Automated Testing Pipeline

In past blog posts, we delved into our testing tools, specifically Sonar Reviewer and the Behave framework. Creating an efficient, automated testing regimen frees up valuable time for employees to work on other ideas. I would much rather spend a workday on a new idea than sift through logs and error messages to fix a bug. Developers and QAs must be mindful of writing good code and test cases, but they must also be willing to build test structures that catch the bugs that slip through the cracks. In this post, I discuss how we catch those bugs with our automated testing pipeline.


JENKINS WITH GERRIT

Belvedere uses Jenkins, an open-source automation server, to automatically spin up most of our testing processes. As we discussed in the Sonar Reviewer blog post, Belvedere uses Gerrit to manage our change workflow. Every time a developer pushes a commit to Gerrit, a Jenkins job is triggered that builds the codebase and runs our unit test suites against it. The results of each automated check are displayed in Gerrit alongside the manual reviews.

These builds and unit tests tell us whether a given commit will break production-critical processes. Unit tests are great: they run quickly, test code at its most basic level, and document what each class or function should do. The problem is that our unit tests do not cover everything, nor should they. We only want to run so much on each commit push, because compiling and running an exhaustive suite every time becomes time-consuming and expensive.
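As a generic illustration of that last point, a unit test like the following runs in milliseconds and doubles as documentation of the expected behavior. OrderValidator is a made-up class for the sketch, not one of our actual components.

    # Illustrative only: OrderValidator is a hypothetical class used to show
    # how a fast unit test also documents what the code should do.
    import unittest

    class OrderValidator:
        MAX_QUANTITY = 10_000

        def is_valid(self, quantity):
            """An order is valid if its quantity is positive and within limits."""
            return 0 < quantity <= self.MAX_QUANTITY

    class OrderValidatorTest(unittest.TestCase):
        def test_rejects_non_positive_quantity(self):
            self.assertFalse(OrderValidator().is_valid(0))

        def test_accepts_quantity_within_limit(self):
            self.assertTrue(OrderValidator().is_valid(100))

    if __name__ == '__main__':
        unittest.main()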


BEHAVE WITH JENKINS

To further safeguard against bugs, we use Jenkins to schedule nightly Behave regression tests against the most recently merged code base. These tests take too long to run on every commit; running them nightly instead gives us baseline inter-component testing and helps ensure that new code maintains the quality and functionality of the existing code base. The regression tests run during idle server time at night and replace a couple of hours of work that a QA would otherwise have done on any given day. Jenkins keeps build and test run history, with detailed logging of each run and stack traces for errors. Developers and QAs get the information they need to pinpoint potential problem areas for manual testing and to route issues that surface back to development.

The Jenkins job uses Salt to trigger builds on pre-configured testing servers and then to kick off the Behave regression tests.
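A simplified sketch of what that orchestration step could look like, assuming a Salt master whose test-server minions match server-*; the minion IDs, build script path, and feature directory are illustrative, not our actual configuration:

    # Hypothetical sketch: build each pre-configured test server via Salt,
    # then launch the Behave regression suite. Names below are placeholders.
    import subprocess
    import salt.client

    local = salt.client.LocalClient()

    # Run the build on every test server; cmd.run executes a shell command
    # on each matching minion and returns its output keyed by minion ID.
    results = local.cmd('server-*', 'cmd.run', ['/opt/build/deploy_latest.sh'])
    for minion, output in results.items():
        print(f'{minion}: {output}')

    # With the environments in place, kick off the regression tests.
    subprocess.run(['behave', 'regression/features'], check=True)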

For example, one regression test could use three remote servers (A, B, C) to exercise trading software that requires a market data feed and an exchange simulator. The Behave test runner uses Salt calls to start all three services and ZeroC Ice to communicate commands such as (A) “Persist this series of market updates to our trading software,” (B) “Check that our trading software is reacting correctly to market changes and place a trade,” and (C) “Given a trade resting on the exchange, send a fill.” Along the way, the Behave tests check that the communication between each service behaves as expected (Ice calls, TCP communication, Redis updates, etc.), as sketched below.
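Here is a minimal sketch of what the step definitions behind such a scenario could look like. Behave supplies the @given/@when/@then decorators and the shared context object; the step text, the context attributes (market_feed, trader, exchange_sim), and their methods are hypothetical placeholders wired up elsewhere, not our actual interfaces.

    # steps/trading_steps.py - hypothetical behave step definitions for the
    # three-server scenario above; the proxy objects and their methods are
    # invented placeholders, assumed to be set up in environment.py.
    from behave import given, when, then

    @given('a series of market updates is persisted to our trading software')
    def persist_market_updates(context):
        # Server A: replay canned market data into the system under test.
        context.market_feed.replay('sample_updates.dat')

    @when('our trading software reacts to the market change')
    def place_trade(context):
        # Server B: confirm the strategy reacted and submitted an order.
        context.order = context.trader.wait_for_order(timeout_s=5)
        assert context.order is not None, 'no order was placed'

    @then('a trade resting on the exchange receives a fill')
    def send_fill(context):
        # Server C: have the exchange simulator fill the resting order.
        fill = context.exchange_sim.fill(context.order.id)
        assert fill.quantity == context.order.quantity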

The Behave suite collects test results and displays them in Jenkins. Each night, we can make sure updates to our code base did not break basic features (e.g., safety settings, trade routing, and accounting mechanisms).
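One common way to wire this up, which we sketch here under the assumption of a standard Jenkins JUnit setup, is Behave's built-in JUnit reporting: its --junit flag writes JUnit-style XML that Jenkins' JUnit plugin can parse and chart across runs. The report directory name is arbitrary.

    # Run the suite and write one JUnit XML file per feature; Jenkins can
    # then track pass/fail history across nightly runs.
    import subprocess

    subprocess.run(
        ['behave', '--junit', '--junit-directory', 'reports/nightly'],
        check=False,  # a non-zero exit just means some scenarios failed
    )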

The image below shows a summary of a nightly Jenkins regression build. Three test cases failed, allowing developers and QAs to pinpoint a potential problem area. These tests are run on the most recent pre-deployment code, providing an extra layer of protection from bugs. In this scenario, we could delay deployment until we address the failed test cases.


PAST STRUGGLES

Regression Updates: We struggled to maintain an up-to-date regression test suite as the code changed across tech teams. Before the Behave test suite was fully implemented, it was hard to keep the tests in step with our ever-iterating code base. By getting more developers and QAs involved and invested in the regression suite, however, we are able to ensure that the tests stay current and useful.


FUTURE GOALS

Virtualization: Further integrate virtual machines (using oVirt) into our automated testing pipeline. Instead of relying on pre-configured testing servers, the Jenkins job should be able to create the necessary environments on virtual servers at will. This will allow further parallelization of our testing suites for better efficiency and coverage.
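A rough sketch of what provisioning a test server on demand could look like with the oVirt Python SDK (ovirtsdk4); the engine URL, credentials, cluster, template, and VM names are all placeholders, and this is an assumption about a future design, not a description of anything we run today.

    # Hypothetical sketch using the oVirt Python SDK (ovirtsdk4). All names
    # and credentials below are placeholders for illustration only.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://ovirt-engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        insecure=True,  # placeholder; a real setup would verify the CA
    )

    # Clone a fresh test server from a pre-built template for this run.
    vms_service = connection.system_service().vms_service()
    vm = vms_service.add(
        types.Vm(
            name='regression-server-a',
            cluster=types.Cluster(name='testing'),
            template=types.Template(name='test-server-template'),
        )
    )
    connection.close()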

Release Candidate (RC) Pulls: Incorporate the automated testing pipeline into our RC testing procedure, allowing QAs to hand off some of the more basic and repetitive test cases to automation.

UI Testing: Incorporate our UI automation testing (Avalanche) into the Jenkins jobs, expanding the coverage of our daily automated testing.


CONCLUSION

Automated testing has clear benefits. At Belvedere, we are constantly devising ways to test our software more efficiently and effectively. We strive to create an environment where employees can focus on new features and implement them confidently, with the added safeguard of automated testing.
