Sonar Code Analysis on Every Push

Every experienced programmer knows that consistency is important when building systems.  We have all pounded our heads against a keyboard while working on legacy projects, and we would very much like to avoid that.  Code Analysis (CA) is one solution that can help alleviate the frustration.  With good coding standards we can preach how we want code to be written, but how do we enforce it?

Manual code reviews can help, but they require each reviewer to know the standards by heart and apply them perfectly on every review.  So why not replace that manual process with a robot, so nothing gets missed and everyone saves time?  This is the thought behind the Sonar Reviewer that we created as part of our Continuous Integration (CI) pipeline.

 

ENVIRONMENT

At Belvedere, we use Gerrit for our code reviews and change workflow.  Each new change is pushed up to Gerrit, where other developers can comment on the change and suggest improvements.  When a change is pushed to Gerrit it also triggers our CI pipeline in Jenkins, which first builds the complete repository and then runs all the unit tests.

Each change in Gerrit carries a set of review labels:

CR - Code Review
DR - Deployment Review (deployment manual tasks specified)
D - Data Team Review (database schema changes approved)
FR - Functional Review
LUT - Linux Unit Tests
QR - QA Review
V - Verified (successful build)
WUT - Windows Unit Tests

The V, LUT, and WUT labels are all tied to Jenkins jobs, and when a job passes with no errors, the corresponding checkbox is filled in on the change.  When the Sonar Reviewer finishes, it should update the CR column.  Given this environment, our starting goals for the Sonar Reviewer became:

  1. Must run for each change pushed up (not just for changes being merged)

  2. Must run relatively fast (under 15 minutes)

  3. Must post inline comments to Gerrit on files with CA violations

  4. Must negatively review the commit if there are any CA violations, and neutrally review it if there are none

 

CODE ANALYSIS RULES

We'll use C# as an example, but the pattern is similar for our C++ and Python code.  We had already generated a good set of CA rules that we wanted to validate on every push.  In Sonar, this set breaks apart into the Sonar-specific rules and the Microsoft CA rules.  To keep things consistent, we need one source of truth for our Sonar code analysis ruleset, so we point all of our projects (more than 100) at the Microsoft version of that ruleset (automated with a script, of course).  We also upload that CA ruleset to Sonar.  The benefit is that when developers build locally, Visual Studio shows the Microsoft CA violations just as Sonar does.
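
As a rough illustration of that script, here is a minimal sketch in Python that rewrites the CodeAnalysisRuleSet property of every .csproj to point at one shared ruleset.  The ruleset name and path below are placeholders, not our actual layout:

# Minimal sketch: point every .csproj at one shared Microsoft CA ruleset.
# Projects without an existing CodeAnalysisRuleSet element are left alone here.
import pathlib
import re

SHARED_RULESET = r"..\..\BuildTools\Shared.ruleset"  # placeholder path

for csproj in pathlib.Path(".").rglob("*.csproj"):
    text = csproj.read_text(encoding="utf-8")
    new_text = re.sub(
        r"<CodeAnalysisRuleSet>.*?</CodeAnalysisRuleSet>",
        lambda _: "<CodeAnalysisRuleSet>" + SHARED_RULESET + "</CodeAnalysisRuleSet>",
        text,
    )
    if new_text != text:
        csproj.write_text(new_text, encoding="utf-8")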

Now that our rules are consistent, we’ll add a CI job in Jenkins for Sonar that can run in parallel with the unit test job.

 

RUNNING SONAR

With the Jenkins job in place, we needed a command-line runner for Sonar.  The runner configuration is split into multiple parts:

1. Global Properties:

sonar.analysis.mode=incremental
sonar.scm.enabled=false
sonar.scm-stats.enabled=false
sonar.issuesReport.html.enable=true
sonar.host.url=[YourSonarURL]
sonar.visualstudio.skipIfNotBuilt=true
sonar.visualstudio.testProjectPattern=".*[Tt]est[s]?.*" 

2. Project Specific Properties:

sonar.projectKey=[sonarProjectKey]
sonar.projectName=[projectName]
sonar.sources=[projectDirectory]
sonar.language=cs
sonar.visualstudio.enable=true

3. Run Specific Properties:

sonar-runner.bat -Dsonar.projectVersion=[GitCommitHash] -Dsonar.visualstudio.outputPaths=[PathOfBuiltDlls]

There is also a shell version of sonar-runner for non-Windows machines.  After the run, it spits out a sonar-report.json containing all of the issues it found since the previous full run (since we set Sonar to incremental mode).
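
To make that concrete, a CI job could stitch the project-specific and run-specific properties together roughly like this.  This is only a sketch; the helper name is made up, and it assumes the commit hash is available through Jenkins' GIT_COMMIT environment variable:

# Sketch: invoke sonar-runner for one project from the CI job.
import os
import subprocess

def run_sonar(project_dir, project_key, project_name, output_paths):
    runner = "sonar-runner.bat" if os.name == "nt" else "sonar-runner"
    subprocess.check_call(
        [
            runner,
            "-Dsonar.projectKey=" + project_key,
            "-Dsonar.projectName=" + project_name,
            "-Dsonar.sources=" + project_dir,
            "-Dsonar.projectVersion=" + os.environ["GIT_COMMIT"],  # commit hash from Jenkins
            "-Dsonar.visualstudio.outputPaths=" + output_paths,
        ],
        cwd=project_dir,
    )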

Example sonar-report.json:

{
  "version": "4.5.1",
  "issues": [{
    "key": "42b470d5-f15d-4871-8c07-80610e981959",
    "component": "[sonarProjectKey]:[vsProject]:[filePathFromVsProject]",
    "line": 115,
    "message": "Rename this method to match the regular expression: [A-Z][a-zA-Z0-9]++",
    "severity": "MAJOR",
    "rule": "csharpsquid:MethodName",
    "status": "OPEN",
    "isNew": false,
    "creationDate": "2015-02-25T04:58:22-0600",
    "updateDate": "2015-02-25T04:58:22-0600"
  }, …],
  "components": [{
    "key": "[sonarProjectKey]:[vsProject]:[filePathFromVsProject]",
    "path": "[pathToFile]",
    "moduleKey": "[sonarProjectKey]:[vsProject]",
    "status": "CHANGED"
  }, …],
  "rules": [{
    "key": "csharpsquid:MethodName",
    "rule": "MethodName",
    "repository": "csharpsquid",
    "name": "Method name should comply with a naming convention"
  }, …],
  "users": []
}

From this report, we can tell that on line 115 of [filePathFromVsProject] there is a method that does not follow our naming convention.  We can also see that this is not something we changed since our last full Sonar run, as the issue’s isNew flag is false.
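
Parsing that report is straightforward; a minimal Python sketch that joins issues back to their file paths might look like this:

# Sketch: extract the fields we care about from sonar-report.json.
import json

def load_issues(report_path):
    with open(report_path, encoding="utf-8") as f:
        report = json.load(f)
    # Map component key -> file path so each issue can be tied back to its file.
    paths = {c["key"]: c.get("path") for c in report.get("components", [])}
    for issue in report.get("issues", []):
        yield {
            "path": paths.get(issue["component"]),
            "line": issue.get("line"),
            "message": issue["message"],
            "severity": issue["severity"],
            "is_new": issue.get("isNew", False),
        }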

Next we need to translate the Sonar file paths to Gerrit file paths, which are used as IDs when posting comments.  The path Gerrit expects is the full path of the file from the base of the repository, but Sonar only gives us the last few folders and the file name, so we need to build the full path.  Assuming no duplication of project names, this can be looked up on the file system.  Once we have the full file path, we are ready for some REST calls.
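
That lookup can be as simple as searching the repository for a file whose path ends with the Sonar-reported path.  Again, this is only a sketch, and it relies on the no-duplicate-project-names assumption above:

# Sketch: resolve Sonar's partial path to the repo-relative path Gerrit expects.
import pathlib

def to_gerrit_path(repo_root, sonar_path):
    root = pathlib.Path(repo_root)
    wanted = sonar_path.replace("\\", "/")
    matches = [p for p in root.rglob(pathlib.PurePath(wanted).name)
               if p.as_posix().endswith(wanted)]
    if len(matches) != 1:
        raise ValueError("Could not uniquely resolve " + sonar_path)
    return matches[0].relative_to(root).as_posix()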

 

GERRIT API

The Gerrit REST API is great and the documentation is straightforward.  For posting our reviews, we’ll use the Set Review call on the changes endpoint.

Example:

POST /changes/[gerritChangeId]/revisions/[revisionId]/review
{
  "message": "Sonar Reviewer found some issues",
  "labels": {
    "Code-Review": -1
  },
  "comments": {
    "[fullFilePath]": [{
      "line": 115,
      "message": "Rename this method to match the regular expression: [A-Z][a-zA-Z0-9]++"
    }]
  }
}

At this point, all we need to post a comment is the [gerritChangeId] and the [revisionId].  Both can come from our Jenkins CI pipeline as parameters to the reviewer job, since we already trigger a build and unit test run off of each new Gerrit patchset.  We now have a Jenkins job that runs Sonar in incremental mode after a build completes, parses out the issues, and posts a comment on each affected file.  It gives a +0 (neutral review) if no issues are found or a -1 (negative review) if issues are found.
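
Wired up in Python with the requests library, the REST call boils down to something like the sketch below.  The Gerrit URL, credential handling, and function name are assumptions about the setup, not our exact reviewer:

# Sketch: post the review back to Gerrit.
import requests

def post_review(gerrit_url, change_id, revision_id, score, message, comments, auth):
    body = {
        "message": message,
        "labels": {"Code-Review": score},  # -1 when issues are found, 0 otherwise
        "comments": comments,              # {file_path: [{"line": n, "message": m}, ...]}
    }
    url = "{}/a/changes/{}/revisions/{}/review".format(gerrit_url, change_id, revision_id)
    resp = requests.post(url, json=body, auth=auth)
    resp.raise_for_status()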

 

LESSONS LEARNED

Looking back at our original goals, we had met 1, 3, and 4; we still had to worry about running fast.  To reduce unnecessary work, we only ran Sonar on projects that had changes in them.  The difficulty comes when someone makes a change across many (or all) projects.  In that situation we must run every project, which could take hours and logjam the Jenkins jobs.  We chose not to run the Sonar Reviewer when a change touches more than 10 projects.  We assumed that changes like this are more likely small reference changes that need to be made in many places, so the limit strikes a reasonable balance between run time and the value of the Sonar Reviewer's feedback.
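
The guard itself is simple.  A sketch, assuming one project per top-level directory in the repository:

# Sketch: count the top-level project directories touched by the commit.
import subprocess

MAX_PROJECTS = 10

def changed_projects(commit):
    out = subprocess.check_output(
        ["git", "diff-tree", "--no-commit-id", "--name-only", "-r", commit],
        text=True,
    )
    return {line.split("/")[0] for line in out.splitlines() if "/" in line}

def should_run_reviewer(commit):
    return len(changed_projects(commit)) <= MAX_PROJECTS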

After the first pass at this project, we showed every issue that Sonar reported, at every severity from INFO to BLOCKER and for both existing and new issues (the isNew flag).  All of our coworkers were quite annoyed by the sheer number of Sonar issues showing up on their changes in Gerrit.  Consistency is great, but we weren’t going to stop all development just to fix every Code Analysis error.  We decided a more progressive rollout of these rules would make the most sense.

For all new issues above INFO severity, the Sonar Reviewer comments and blocks the change from going out until the error is fixed or appropriately suppressed.  For existing issues (isNew=false), we only comment on BLOCKER level rules.  Over time, we can promote more rules to the BLOCKER level, or lower the threshold to encompass MAJOR and MINOR rules.  This created a great balance for developers: they cannot introduce new code that violates our standards, and they are only forced to fix existing issues when those are of great importance.  Ultimately, our entire code base continues to move toward consistency with our coding standards, at a rate configurable by our rule severity threshold.
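
In code, that policy is just a pair of tunable thresholds.  A sketch using the issue fields pulled from the report earlier:

# Sketch: comment on new issues above INFO, and on existing issues only at BLOCKER.
SEVERITY_ORDER = ["INFO", "MINOR", "MAJOR", "CRITICAL", "BLOCKER"]
NEW_ISSUE_THRESHOLD = "MINOR"         # anything above INFO
EXISTING_ISSUE_THRESHOLD = "BLOCKER"  # lower this over time to pull in MAJOR/MINOR

def should_comment(issue):
    threshold = NEW_ISSUE_THRESHOLD if issue["is_new"] else EXISTING_ISSUE_THRESHOLD
    return SEVERITY_ORDER.index(issue["severity"]) >= SEVERITY_ORDER.index(threshold)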

The next step in this project is to finish the C++ and Python equivalents of our Sonar Reviewer.  We first must lock down our rules for each of these languages.  Once those are in place, we can use them as inputs to the shell version of sonar-runner.  With some minor tweaks to the properties (removing the Visual Studio specific configs and finding the equivalent settings for C++ and Python), the rest of our Sonar Reviewer will behave almost identically.
