Continuous Integration and Delivery using Google Cloud
Posted on Tue Mar 24 2020
I'm going to teach you how to set up a CI/CD pipeline that automatically builds, tests, and deploys your application every time you push changes to your git repo, using a Google Cloud product called Cloud Build.
The application
Before we start, we first need an application to deploy. I'm gonna be using a simple Node.js web server for this tutorial, but you can use whatever language or framework you want, because we're gonna be packaging the application in a Docker container and running it on another Google Cloud product called Cloud Run.
If you don't want to create a node application, you can skip this step and continue with the part where we build the Docker container.
If you want to follow along and create a node application, make sure you have Node.js installed on your system. If you don't, head over to nodejs.org and install the latest LTS version.
First, we're gonna start by initializing a new node application in our project directory.
$ cd ~/Desktop/Projects/continuous-integration
$ npm init -y
This will initialize a new node project and create a package.json
file in our project directory (a package-lock.json file will be generated once we install our first dependency).
Next, we're going to install express (a web application framework for Node.js) using the following command:
$ npm install --save express
Then, we create an entry file for the server called index.js
in the root folder of our project with the following code:
const express = require('express')
const app = express()
const port = 3000;

app.get('/', (req, res) => {
  res.send('Hello World')
})

app.listen(port, () => {
  console.log(`App started at port ${port}`)
})
Now that we have an application, we wanna be able to run it. So, we have to add a script to the package.json
file. To do so, let's replace the test script with a new script called start.
This script runs the command node index.js, which starts our node server.
{
  "name": "continuous-integration",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.17.1"
  }
}
Next, we're going to run the command in the terminal like so:
$ npm start
The server should start and we should get an output like the one in the image below.
To test the server, open any web browser and navigate to http://localhost:3000
and you should see the word Hello World
. This means our server is running properly.
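If you prefer the terminal, you can also hit the server with curl (assuming curl is installed):
$ curl http://localhost:3000
Hello World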
You can now stop the server by pressing ctrl + C
in your terminal.
Writing the Dockerfile
Now that we have a working web server, we want to tell docker how to run this application so we can later build it and run it on the cloud.
To do so, we're going to create a file named Dockerfile
in the root of our project.
The first thing we need to do is define which image we want to build our container from. Because our application is written in node, we will use an LTS
version of node found on Docker Hub:
FROM node:10
Next, we need to create a directory inside the image that will hold our application code:
WORKDIR /usr/src/app
Then we're going to copy our package.json
and package-lock.json
files into the directory we just created inside the image:
COPY package*.json ./
Now we have to tell docker to install our application's dependencies:
RUN npm install
The next thing we should do is to copy the rest of the project files into the image:
COPY . .
We also have to expose the port that our server is going to use in order to access it from outside the container:
EXPOSE 3000
Last but not least, define the command that runs your app using the CMD instruction. Here we will use npm start to start the server:
CMD ["npm", "start"]
Your Dockerfile should now look like this:
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
One last thing we have to do before we set up a git repository is to create a .dockerignore
file.
We don't want to include any unwanted files, or in our case locally installed packages, in our image, so we list those files in the .dockerignore
file.
Create a new file named .dockerignore
in the root of the project with the following content:
node_modules
npm-debug.log
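At this point you can optionally sanity-check the Dockerfile locally before wiring anything up in the cloud. This is just a quick local test (assuming Docker is installed; the image name ci-cd-tutorial is only an example):
$ docker build -t ci-cd-tutorial .
$ docker run -p 3000:3000 ci-cd-tutorial
Navigate to http://localhost:3000 again and you should get the same response, this time served from inside the container.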
Initializing the repo
The next thing we're going to do before setting up our pipeline is to create a git repository for our project.
In your terminal type the following:
$ git init
You should get an output like this one:
Initialized empty Git repository in /path/to/your/project/.git/
Next, we have to create a .gitignore
file with the same content as the .dockerignore
file we created earlier, to make sure no unwanted files get pushed to the remote repo.
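In our case it simply mirrors the .dockerignore file:
node_modules
npm-debug.log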
Now, we should create a remote git repository. I'm gonna create one on GitHub but you can use GitLab or whatever service you like using.
I created a new repository on GitHub called ci-cd-tutorial
, you can name yours whatever you want.
Now we have to add this repo as a remote in our local repository. To do so, get the URL for your repository and use the following command:
$ git remote add origin git@github.com:hassansaleh31/ci-cd-tutorial.git
Replace
git@github.com:hassansaleh31/ci-cd-tutorial.git
with your repository's URL.
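You can verify that the remote was added correctly with:
$ git remote -v
origin  git@github.com:hassansaleh31/ci-cd-tutorial.git (fetch)
origin  git@github.com:hassansaleh31/ci-cd-tutorial.git (push)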
The final step in setting up git is to commit and push our changes:
$ git add -A
$ git commit -m 'Initial commit'
$ git push origin master
You should get an output similar to this:
➜ Continuous-Integration git:(master) ✗ git remote add origin git@github.com:hassansaleh31/ci-cd-tutorial.git
➜ Continuous-Integration git:(master) ✗ git add -A
➜ Continuous-Integration git:(master) ✗ git commit -m 'Initial commit'
[master (root-commit) f832e68] Initial commit
7 files changed, 607 insertions(+)
create mode 100644 .dockerignore
create mode 100644 .gitignore
create mode 100644 Dockerfile
create mode 100644 Readme.md
create mode 100644 index.js
create mode 100644 package-lock.json
create mode 100644 package.json
➜ Continuous-Integration git:(master) git push origin master
Enumerating objects: 8, done.
Counting objects: 100% (8/8), done.
Delta compression using up to 8 threads
Compressing objects: 100% (7/7), done.
Writing objects: 100% (8/8), 7.34 KiB | 7.34 MiB/s, done.
Total 8 (delta 0), reused 0 (delta 0)
To github.com:hassansaleh31/ci-cd-tutorial.git
* [new branch] master -> master
And that's it for the project setup.
Initializing Cloud Build
At this point, if you don't have a Google Cloud account, go ahead and create one, and once you're done, open the cloud console.
Next, we have to create a new project for our app.
To do so, navigate to the Cloud Resource Manager and click on the Create Project
button.
Give your project a name and click on the Create
button.
Your project will now initialize and you will be navigated back to the Cloud Resource Manager.
Now that we have a project, we have to enable the Cloud Build API.
Open the Cloud Build page using the side navigation bar and you will be presented with the following page:
If you notice at the top of the page, to the right of the title, there's a dropdown button that says Select a project
. Click on this button and select the newly created project.
Then, click on the Enable
button and wait for Cloud Build to initialize.
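If you prefer the terminal over the console, the same setup can roughly be done with the gcloud CLI (the project ID below is just an example; pick your own):
$ gcloud projects create my-ci-cd-project
$ gcloud config set project my-ci-cd-project
$ gcloud services enable cloudbuild.googleapis.com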
Initializing Cloud Run
The next step is to enable Cloud Run and start a new service.
To do so, navigate to the Cloud Run page using the side navigation bar and click on the Start Using Cloud Run
button.
Wait for Cloud Run to initialize and then click on the Create Service
button.
Select your desired region, give the service a name, check the Allow unauthenticated invocations
checkbox, and click on next.
You will be prompted to select a container to run on the service, but we don't have any containers yet, so click on the select
button and you will be able to select a demo container.
Select the demo container and click the Create
button.
Wait for the service to start and you can test it by clicking on the URL that can be found to the right of the service name at the top.
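For reference, a roughly equivalent setup from the CLI would enable the API and deploy Google's public demo image (the service name and region here are just the values used in this tutorial):
$ gcloud services enable run.googleapis.com
$ gcloud run deploy ci-cd-tutorial \
    --image gcr.io/cloudrun/hello \
    --platform managed \
    --region us-central1 \
    --allow-unauthenticated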
Creating a Build Trigger
Head over to the Cloud Build Dashboard using the side navigation or by clicking on this link.
From here, we're going to create a new build trigger that will run our pipeline every time we push new code to our repository. To do so, click on the Set Up Build Triggers
button.
Give your trigger a name and a description.
For the event type, we're going to leave it at Push to a branch
and now we have to connect our GitHub account
(or whatever service you decided to use) with our Google Cloud account using the Connect New Repository
button.
Google Cloud will walk you through the connection process and once it asks you to create a push trigger for this repository, click the skip for now
button.
Now back to the trigger we were creating, select your repository from the repository
dropdown under the source title and set the branch to ^master$
.
We're gonna leave everything else as is and hit the Create
button at the bottom of the screen.
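Triggers can also be created from the CLI. At the time of writing this lived under the beta command group and still requires the repository to already be connected through the console, so treat the following as a rough equivalent rather than a drop-in replacement (the repo owner and name are the example values from this tutorial):
$ gcloud beta builds triggers create github \
    --repo-owner=hassansaleh31 \
    --repo-name=ci-cd-tutorial \
    --branch-pattern="^master$" \
    --build-config=cloudbuild.yaml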
Defining the Build Steps
Now we have a Build Trigger. But in order for it to work, we have to tell Google Cloud what to do when the trigger fires.
Just like we did with docker, we have to create a file that defines our build process, but this time, it's a YAML file.
Create a new file named cloudbuild.yaml
in the root directory of your project.
Now we have to define our build steps: first build the Docker image using the Dockerfile we created earlier, then push the image somewhere Cloud Run can pull it from, and lastly deploy that image to Cloud Run.
To build the docker image we're going to use the docker builder:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/[SERVICE-NAME]:$COMMIT_SHA', '.']
Replace [SERVICE-NAME] with the name of your service
In the build steps of Cloud Build we have to define two main things for each step:
- The builder to use (In this case docker)
- The arguments which are the commands you want to run
So here we are building the image from the Dockerfile and tagging (naming) it based on our project ID, service name, and the commit SHA. This way every image gets its own unique name.
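For example, if the project ID were my-ci-cd-project, the service name ci-cd-tutorial, and the commit SHA abc1234 (all placeholder values), the resulting image would be tagged gcr.io/my-ci-cd-project/ci-cd-tutorial:abc1234.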
The next step is to push this image to GCR (Google Container Registry):
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/[SERVICE-NAME]:$COMMIT_SHA']
Replace [SERVICE-NAME] with the name of your service
Then we're going to deploy the container to Cloud Run using the gcloud builder:
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - 'run'
  - 'deploy'
  - '[SERVICE-NAME]'
  - '--image'
  - 'gcr.io/$PROJECT_ID/[SERVICE-NAME]:$COMMIT_SHA'
  - '--region'
  - '[REGION]'
  - '--platform'
  - 'managed'
Replace [SERVICE-NAME] with the name of your service and [REGION] with the region you selected when creating the service.
And lastly, we're going to tell Cloud Build about the images we created in this build:
images:
- 'gcr.io/$PROJECT_ID/[SERVICE-NAME]:$COMMIT_SHA'
Replace [SERVICE-NAME] with the name of your service
The final content of the file should look like this:
steps:
# build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/[SERVICE-NAME]:$COMMIT_SHA', '.']
# push the container image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/[SERVICE-NAME]:$COMMIT_SHA']
# Deploy container image to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - 'run'
  - 'deploy'
  - '[SERVICE-NAME]'
  - '--image'
  - 'gcr.io/$PROJECT_ID/[SERVICE-NAME]:$COMMIT_SHA'
  - '--region'
  - '[REGION]'
  - '--platform'
  - 'managed'
images:
- 'gcr.io/$PROJECT_ID/[SERVICE-NAME]:$COMMIT_SHA'
Testing and Debugging
Let's test our configuration by pushing new commits to the repository.
$ git add -A
$ git commit -m 'Added a cloudbuild.yaml file'
$ git push origin master
Now head to the Cloud Build Dashboard and you will see a build trigger running.
After a couple of seconds the trigger will fail and display a red warning icon like so
To view the logs, click on the build ID (in this case it was e02e3a4a) to open the build details. There you will see the list of steps and, on the right, the output of the build.
At the end of the logs, you should see an output similar to this one:
Starting Step #2
Step #2: Already have image (with digest): gcr.io/cloud-builders/gcloud
Step #2: ERROR: (gcloud.run.deploy) PERMISSION_DENIED: The caller does not have permission
Finished Step #2
ERROR
ERROR: build step 2 "gcr.io/cloud-builders/gcloud" failed: step exited with non-zero status: 1
As you can see, this is a permission error.
The issue is that Cloud Build doesn't have the required permissions to deploy to Cloud Run.
So, let's give it permission.
Navigate to the Cloud Build settings page using this link. There you will find a list of service accounts and their permissions.
Click on the status of the Cloud Run role and set it to Enabled. You will be prompted to grant access to all service accounts in order to use Cloud Run from Cloud Build, so go ahead and do that.
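If you'd rather grant the permissions from the CLI, the rough equivalent is to give the Cloud Build service account the Cloud Run Admin and Service Account User roles ([PROJECT-ID] and [PROJECT-NUMBER] are placeholders; the account names follow Google's standard naming convention):
$ gcloud projects add-iam-policy-binding [PROJECT-ID] \
    --member="serviceAccount:[PROJECT-NUMBER]@cloudbuild.gserviceaccount.com" \
    --role="roles/run.admin"
$ gcloud iam service-accounts add-iam-policy-binding \
    [PROJECT-NUMBER]-compute@developer.gserviceaccount.com \
    --member="serviceAccount:[PROJECT-NUMBER]@cloudbuild.gserviceaccount.com" \
    --role="roles/iam.serviceAccountUser"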
Now let's make a small change in our code. Open the index.js file and change Hello World
to Hello World!
Commit and push your changes.
$ git add -A
$ git commit -m 'Fixed a typo in the word "Hello World!"'
$ git push origin master
Head back to Cloud Build and wait for the build to finish.
And voilà, it's gonna fail again :(
But this time we get another error:
Starting Step #2
Step #2: Already have image (with digest): gcr.io/cloud-builders/gcloud
Step #2: Allow unauthenticated invocations to [ci-cd-tutorial-00001-jid] (y/N)?
Step #2:
Step #2: Deploying container to Cloud Run service [ci-cd-tutorial-00001-jid] in project [continuos-integration-272020] region [us-central1]
Step #2: Deploying new service...
Step #2: Creating Revision............................................................................................................................................................................................................................................................................................................................................failed
Step #2: Deployment failed
Step #2: ERROR: (gcloud.run.deploy) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
Finished Step #2
ERROR
ERROR: build step 2 "gcr.io/cloud-builders/gcloud" failed: step exited with non-zero status: 1
It seems there is some sort of problem with the port we chose to use, and that's expected, because Cloud Run expects your server to listen on the specific port it provides, not just any port.
So, how do we get this port? That's pretty easy: we can read it from the environment variables.
In your index.js, change the port from 3000 to the following:
const port = process.env.PORT || 3000;
Here we're reading the port from the environment, and when it's not set, we'll fall back to port 3000. This makes our app work both in the cloud and locally.
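For reference, the full index.js now looks like this:
const express = require('express')
const app = express()
// Cloud Run provides the port to listen on through the PORT environment variable;
// fall back to 3000 when running locally.
const port = process.env.PORT || 3000;

app.get('/', (req, res) => {
  res.send('Hello World!')
})

app.listen(port, () => {
  console.log(`App started at port ${port}`)
})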
Now push the code
$ git add -A
$ git commit -m 'Changed the port to use the environment variables'
$ git push origin master
And this time the build will succeed.
Now, go back to Cloud Run, click on the new service, then click its URL, and you should see Hello World!
on the screen.
Congrats, you now have a CI/CD pipeline that will build and deploy your app every time you make changes to it.
You can now easily add unit/integration tests to your app and run them using additional build steps.
For example, to test your node application:
steps:
- name: 'gcr.io/cloud-builders/npm'
  args: ['install']
- name: 'gcr.io/cloud-builders/npm'
  args: ['run', 'test']
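The npm run test step needs an actual test script to run. As a minimal sketch, assuming you install Jest as a dev dependency (npm install --save-dev jest) and change the test script in package.json to "test": "jest", a placeholder test file like index.test.js could look like this (the file name and assertion are purely illustrative):
// index.test.js - a trivial placeholder so `npm run test` has something to execute;
// replace this with real tests for your routes.
test('greeting has the expected text', () => {
  const greeting = 'Hello World!'
  expect(greeting).toBe('Hello World!')
})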
For a complete list of builders check out the official cloud builders repository.
References:
- Dockerizing a Node.js web app
- Creating and Managing Projects in Google Cloud Console
- Continuous deployment from git using Cloud Build
If you feel that you have learned anything from this article, don't forget to share it and help other people learn.