My DevOps Journey at ADNEOM

September 24, 2019

By Marc Plaza

I work as a Technical Leader within the ADNEOM Web and Mobile Department, where we develop great applications with an awesome team of developers. I joined the team in 2016 with a clear goal: to implement tools and processes following the DevOps philosophy to ensure the department's growth and sustainability. A lot had to be done, as the department was still in its early days (we were 20 people then; we are around 50 now).

Getting a team to adopt DevOps is a hard and lengthy process, as it requires a change of mindset. An emphasis on communication and collaboration is mandatory, since it alters everything from project management to the way developers design applications to release management. Here is a quick overview of what was done, following the DevOps stages:

DevOps is usually represented this way

Plan

An Agile methodology is followed. After the initial scoping phase, the workload is split into two-week sprints, and Atlassian JIRA is used for project management. For each project, several environments are used:

  • Local (on the developer workstation)
  • Development (where developers merge their local changes)
  • Staging (used by QA for testing)
  • Preproduction (optional, used by QA to test in the production context)
  • Production

For most projects, the front-end is built with Angular or React for web applications and Kotlin or Swift for mobile applications, while NodeJS or PHP is used for the back-end. Almost all of the infrastructure is hosted on AWS (we are quite big fans here), so the full potential of the service is used to deploy environments:

CloudFormation is used to automate deployment of every environment.

Besides AWS and JIRA, GitLab is used as our code management and deployment tool.

A few stats about our GitLab
A few CI pipelines of one of our projects
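
To give an idea of what these pipelines look like, here is a minimal sketch of a project's .gitlab-ci.yml; the stage names, images and scripts are illustrative assumptions, not our actual configuration.

```yaml
# Illustrative .gitlab-ci.yml layout (stage names, images and scripts are assumptions)
stages:
  - build
  - test
  - release
  - deploy

build:
  stage: build
  image: node:12
  script:
    - npm ci
    - npm run build

test:
  stage: test
  image: node:12
  script:
    - npm run lint
    - npm test

# release and deploy jobs are sketched in the sections below
```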

Code

At the beginning, no clear processes or conventions were shared between the teams of developers, so it took a long time for a developer switching teams to become efficient.

The first step was to implement a common branch flow and commit convention. The commit convention is a slightly tweaked version of the AngularJS convention:

Example of a commit following the convention
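
For illustration, a commit following the classic Angular type(scope): subject format might look like this (the exact tweaks we made to the convention are not detailed here):

```
feat(auth): add password reset endpoint

Allow users to request a reset link by email.

Closes #123
```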

A pre-receive hook was created on GitLab which checks that each commit sent to the server follows the rules: it ensures that the commit message follows the standard set and that the committer's email address belongs to an account on the server.

Commit declined because it does not follow the convention
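
A minimal sketch of what such a server-side hook can look like, assuming a typical type(scope): subject regex and a company mail domain as a stand-in for the account check; the actual rules differ:

```bash
#!/bin/bash
# Sketch of a pre-receive hook: reject pushes whose commits break the message
# convention or whose committer email is not on the expected domain.
# (The regex and the domain are illustrative assumptions.)
PATTERN='^(feat|fix|docs|style|refactor|test|chore)(\(.+\))?: .+'
DOMAIN='example.com'
ZERO='0000000000000000000000000000000000000000'

while read old_sha new_sha ref; do
  [ "$new_sha" = "$ZERO" ] && continue           # branch deletion, nothing to check
  if [ "$old_sha" = "$ZERO" ]; then
    range="$new_sha --not --all"                 # new branch: only check unseen commits
  else
    range="$old_sha..$new_sha"
  fi

  for commit in $(git rev-list $range); do
    subject=$(git log -1 --format='%s' "$commit")
    email=$(git log -1 --format='%ce' "$commit")

    if ! echo "$subject" | grep -Eq "$PATTERN"; then
      # GL-HOOK-ERR: makes the message visible in the GitLab UI
      echo "GL-HOOK-ERR: commit $commit does not follow the commit convention" >&2
      exit 1
    fi
    if ! echo "$email" | grep -q "@$DOMAIN\$"; then
      echo "GL-HOOK-ERR: committer email $email is not a known account" >&2
      exit 1
    fi
  done
done
```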

Note: GitLab now natively offers these checks through push rules; we are in the process of migrating to them.

As for the branch flow, it is quite simple: each feature is developed on a short-lived branch, and its code must be reviewed by at least one person before being merged back into the main branch through a merge request.

Example of a merge request
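
In practice, the day-to-day flow looks roughly like this (the branch name is hypothetical):

```bash
# Create a short-lived branch for the feature
git checkout -b feature/user-profile

# ...commits following the convention...

# Push it and open a merge request in GitLab, which must be approved
# by at least one reviewer before being merged into the main branch
git push -u origin feature/user-profile
```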

Build

Building usually means compiling/minifying code with tools such as webpack, or creating Docker images. GitLab CI is used to automate this:

Example of building an Angular web application in GitLab
A Docker image being built
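
As a sketch, a build job for an Angular front-end could look like the following; the image tag, scripts and artifact path are assumptions:

```yaml
# Illustrative build job for an Angular web application
build_web:
  stage: build
  image: node:12
  script:
    - npm ci
    - npm run build -- --prod    # production build (webpack under the hood)
  artifacts:
    paths:
      - dist/                    # keep the bundle for the following stages
```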

Test

To make sure that released applications have a code base which is stable, maintainable and easy to read, some standards and tools were put in place: linters and SonarQube.

Each time a commit is sent to GitLab, the code is automatically tested and sent to SonarQube to detect code smells, bugs or duplicate code.

If the code coverage is below 80% or the quality gate is not passed, the code will not be deployed to the infrastructure.

Code coverage
Example of a SonarQube report
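
A hedged sketch of such a test job: it runs the unit tests with coverage, then pushes the results to SonarQube. The variables (SONAR_HOST_URL, SONAR_TOKEN), the report path and the availability of the scanner in the job's image are assumptions:

```yaml
# Illustrative test job: unit tests with coverage, then a SonarQube analysis
test:
  stage: test
  image: node:12
  script:
    - npm ci
    - npm test -- --coverage                     # produce an lcov coverage report
    - >
      sonar-scanner
      -Dsonar.host.url="$SONAR_HOST_URL"
      -Dsonar.login="$SONAR_TOKEN"
      -Dsonar.javascript.lcov.reportPaths=coverage/lcov.info
  # The pipeline fails (and nothing is deployed) if coverage is below 80%
  # or if the SonarQube quality gate does not pass.
  coverage: '/Statements\s*:\s*([\d.]+)%/'       # regex depends on the test runner output
```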

As for code readability, linters such as ESLint are used to ensure that everybody formats the code the same way.

Some of the linter rules we use
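
For illustration, a hypothetical .eslintrc excerpt could look like this (these specific rules are assumptions, not our actual set):

```json
{
  "extends": "eslint:recommended",
  "rules": {
    "semi": ["error", "always"],
    "quotes": ["error", "single"],
    "eqeqeq": "error",
    "no-console": "warn"
  }
}
```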

Linter rules are voted on by the developers every few months.

In order to design, develop and operate the applications in the most efficient way, a few guiding principles are also followed.

Release

Releasing is automated as a stage in the CI pipelines to easily ship the applications, using tools such as fastlane for mobile apps:

A mobile application being automatically released to Apple App Store
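
As a sketch, the CI job that triggers fastlane could look like this; the runner tag and lane name are assumptions:

```yaml
# Illustrative release job for the iOS application
release_ios:
  stage: release
  tags:
    - macos                          # building and signing iOS apps requires a macOS runner
  script:
    - bundle exec fastlane release   # hypothetical lane that builds and uploads the app
  only:
    - tags                           # only release tagged versions
```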

As for the web applications, releasing usually means tagging the Docker image when it is built in the pipeline.

A build/release stage in GitLab CI, image is tagged with commit SHA when built
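
A minimal sketch of that build/release job, using the standard GitLab CI registry variables:

```yaml
# Illustrative job building the Docker image and tagging it with the commit SHA
release_image:
  stage: release
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```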

Deploy

Manual deployment was the only option a few years ago, and it was the source of a lot of problems:

  • Prone to error: as it requires human intervention, the probability that a mistake will be made is quite high (it happens everywhere and to everyone).
  • Time-consuming: depending on the infrastructure's complexity, it could take a few hours to complete, and as said above, the longer a manual deployment lasts, the higher the chance that an error will be made.
  • Security issues: most of the time, access to the infrastructure is required, which is usually a bad idea.

Automated deployments using tools such as GitLab Runner are now the norm. Depending on the use case, either a runner is deployed on the target infrastructure (since it only needs HTTPS egress to work, it allows us to deploy without asking for a firewall rule or SSH access) or a self-hosted autoscaling runner is used.

A deploy stage from a project
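
A sketch of a deploy job pinned, through runner tags, to a runner installed on the target server; the tag, ports and container name are assumptions:

```yaml
# Illustrative deploy job executed by the runner living on the staging server
deploy_staging:
  stage: deploy
  tags:
    - staging-server               # hypothetical tag of the runner on the target machine
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
    - docker stop app || true      # replace the running container with the new image
    - docker rm app || true
    - docker run -d --name app -p 80:3000 "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
  environment:
    name: staging
  only:
    - develop
```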

Operate

As mentioned before, projects are usually deployed on a target infrastructure, which might be a physical server running Windows Server, a virtual machine running Debian on a VMware hypervisor, or a virtual machine running Ubuntu on Amazon Web Services. Deploying on such heterogeneous targets can be quite a challenge, because most of the time there are unforeseen issues: from dependencies not compiling to performance bottlenecks or crashes (the old "but it works on my computer").

In order to have a more agnostic approach, Docker is used from the developers' workstations to the production servers (when possible), practically eradicating most of the issues mentioned above, since it ensures that building the code is reproducible wherever it runs.
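
As an illustration, a multi-stage Dockerfile for a Node back-end could look like the following; the base images, paths and entry point are assumptions:

```dockerfile
# Illustrative multi-stage build: the same image runs on a laptop or in production
FROM node:12 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build                  # compile the application into dist/

FROM node:12-slim
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/server.js"]
```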

As said earlier, when possible, CloudFormation stacks are used to easily manage infrastructure deployment. Combined with GitLab CI, this becomes quite powerful:

  • Automation: when a modification of the template is pushed, the infrastructure updates itself.
  • History: since a repository is used, the infrastructure's evolution can be tracked and easily rolled back if needed.
  • Access: it allows developers to deploy an infrastructure without needing access to the AWS console.
A CloudFormation stack being built
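
A sketch of the corresponding CI job; the stack name, template path and the way AWS credentials are provided (CI variables) are assumptions:

```yaml
# Illustrative infrastructure job: the stack updates itself when the template changes
deploy_infra:
  stage: deploy
  image:
    name: amazon/aws-cli
    entrypoint: [""]               # override the image entrypoint so the script can run
  script:
    - >
      aws cloudformation deploy
      --stack-name myapp-staging
      --template-file infrastructure/template.yml
      --capabilities CAPABILITY_IAM
  only:
    changes:
      - infrastructure/template.yml
```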

Monitor

Once the application is deployed, health checks are created (usually with Route53) that ping a specific route to check that everything is alright: usually /ping, which is supposed to answer "pong", for the back-end, and a check that the index returns a 200 for the front-end.

Example of a Route53 healthcheck
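
For the back-end, the /ping route itself is trivial; a minimal Node sketch (using the built-in http module, with no assumption about the framework actually used) looks like this:

```typescript
import * as http from "http";

// Minimal health-check endpoint: Route53 (or any monitor) calls /ping
// and expects a fast 200 "pong" answer.
const server = http.createServer((req, res) => {
  if (req.url === "/ping") {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("pong");
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(3000); // port is an arbitrary choice for this sketch
```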

As for the logs, they are forwarded to syslog, to Kibana, or to solutions such as Papertrail.

Log messages in Kibana

A lot was done in the span of three years: from manual to continuous integration/deployment, from no tests to an average of 80–90% code coverage, and the adoption of tools such as Docker and CloudFormation. There is still a lot to do and improve, and we have great hopes for the future.

Marc Plaza, Technical Leader at ADNEOM
