Full stack javascript tutorial for beginners - Part 3 [Deployment]

Posted on Sat Mar 02 2019

A beginner-friendly, step-by-step tutorial aimed at teaching you how to deploy your Nodejs and React application to Digital Ocean using docker compose and git.

What you will learn

  1. What is a Dockerfile?
  2. How to write a Dockerfile
  3. What is a docker compose file?
  4. How to write a docker compose file
  5. How to run a project using docker compose
  6. How to deploy your full stack application to a linux server using git
  7. How to install Nginx
  8. How to configure Nginx to serve your application

Pre-requisites

In this part, I assume that you have followed along in part 1 and part 2, where we built the backend of this application using Nodejs and Postgres and the frontend using React. If you haven't, head there now before you continue reading this part.

At some point in this tutorial, we will be using git to create a local repository and push it to a GitHub repository. This step is not required, but if you want to follow it you need to have git installed on your system.

A recap of what we have done so far

In the previous parts of this tutorial, we created a simple to-do web application.

In part one we created the backend for our app which was made of a Nodejs express server and a Postgres database. Inside our express server, we created some helper functions, defined the endpoints and used the functions inside the endpoints to apply CRUD operations to data inside the database based on the request.

In part two we created a React application using Create React App CLI. We then created some helper methods that send requests to our server and get a response from it and used those methods to display, delete and update tasks.

In this part, we will use docker to make our application run on any machine with a single command, push our code to GitHub, pull the code from a Linux server, install Nginx and configure an Nginx reverse proxy to forward port 80 (the default HTTP port) to port 3000 (our server's port).

What is a Dockerfile

A Dockerfile is a set of instructions that tell docker how to build an image.

In part one we used docker to run the Postgres image; docker pulled this image from docker hub and ran it. This image, like all docker images, was built using a Dockerfile and pushed to docker hub so that other people can pull and run it. We won't be pushing to docker hub in this tutorial, but we will use git to push our code to GitHub, then build and run the images on the server using docker-compose.

Writing a Dockerfile

Navigate to the root folder of the application where the Nodejs server code is located, create a new file named Dockerfile without any extension and open it in your code editor (if you use Visual Studio Code, install the Docker extension; it's very helpful).

The first thing we have to do is tell docker which image should be used as the base image for ours. Since we are using Nodejs, we will use the official node image.

Write the following line inside the Dockerfile:

FROM node:10

The first word FROM is case sensitive and it tells docker to pull the following image and build on top of it.

Then we state the image, which in this case is node, and specify the tag, which is 10. Tags are used to identify different versions of an image; in our case we used the tag 10, which will pull the latest version of node 10. You can find all supported node tags here.

If no tag is specified, docker will assume that the tag is latest and pull the latest image.
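
You can see the same behaviour with the docker pull command (shown here only to illustrate how tags work; it is not a step in this tutorial):

# Pulls the newest image tagged 10 (node 10.x)
docker pull node:10
# No tag specified, so docker pulls node:latest
docker pull node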

Next, we need to copy the project files to the image in order to run them inside it. We could copy all the files at once, but we're going to break the copy process into two parts. The reason for this is that docker caches the files we copy and only copies them again if they change, so we copy files that don't change often in a separate step.

Let's imagine how this would work if we copied everything in a single command. First, we copy everything, then run npm install to install the dependencies. We change a file and build the image again; docker notices that a file has changed, so it can't use the cache and it copies everything and installs the dependencies again.

If we instead copy the package.json file first, run npm install and then copy everything else, changing a file and building again means docker sees that package.json is unchanged and reuses the cached layers instead of copying it and installing the dependencies again.

To do so, we first need to set where our app will live inside the image, using the WORKDIR command.

FROM node:10
WORKDIR /app

WORKDIR is case sensitive

Here we told docker to create a directory called /app and set it as the working directory.

Next, let's copy package.json and install dependencies.

FROM node:10
WORKDIR /app
COPY package.json .
RUN npm install

COPY is also case sensitive and is followed by the file to copy and the destination inside the image (. means the current directory, which is /app because of WORKDIR). RUN is also case sensitive and is followed by a command to run inside the image.

Then we need to copy everything else to the image.

FROM node:10
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .

Using . as the file name will copy every file and folder.

We still need to do two things. First, we need to expose the port Nodejs is listening on so that it can be reached from outside the container (EXPOSE documents the port; the actual mapping to the host happens when we run the container), then we need to start the Node server.

FROM node:10
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]

CMD defines the command that will run when a container is started from the finished image.

There can be only one CMD command inside a Dockerfile.

Before we build and run our image, we should tell docker which files or folders we don't want copied into it. To do so, create a new file named .dockerignore inside the root directory of the project and write the files or folders we want docker to ignore, separated by new lines.

/node_modules
/to-do/node_modules

Now we are ready to build and test this image. To build the image, open your terminal (CMD on Windows), navigate to the root directory of your project and run the following command.

docker build . -t to-do-application

-t to-do-application tells docker to name the new image to-do-application

The build process will start, docker will pull the node image from docker hub and build your image. When the process is complete you will get a message like this one

Successfully built 761fd772a437
Successfully tagged to-do-application:latest

This means we now have an image called to-do-application, and we can run it with:

docker run to-do-application
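
If you also wanted to reach the server from your host you would publish the port as well, for example like this (shown only for completeness, since the container will fail to connect to Postgres anyway):

docker run -p 3000:3000 to-do-application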

The image will then run inside a docker container and we will get an error message like this one:

Server started
{ Error: getaddrinfo ENOTFOUND postgres.localhost postgres.localhost:5432
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:57:26)
  errno: 'ENOTFOUND',
  code: 'ENOTFOUND',
  syscall: 'getaddrinfo',
  hostname: 'postgres.localhost',
  host: 'postgres.localhost',
  port: 5432 }

This is because our Nodejs server is now running inside a container and not directly on our system. So, localhost no longer refers to our system; instead, it refers to the container itself.

But don't worry, we will fix this issue using docker compose.

What is docker compose

According to the official docker compose page, "Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration."

So, what does this mean for us?

Our Nodejs server relies on a Postgres database to run correctly. In part one, we used docker to manually run a Postgres image on our system inside a container and used the node-postgres package to connect to this container from inside our server. But in this part our server is no longer running on the host; instead, it is running inside a container. To fix this issue we will create a compose YAML file to tell docker that our application is made of two images that work together.

The benefit of writing a compose file is that we no longer need to run each of our containers and connect them together manually. Instead, we can let compose do all the building, running and networking for us.

Writing a docker compose file

Now that we know what docker compose is and why we need it, let's create a compose file for our application. If the container we ran before is still running, you can stop it by pressing control + C (command + C on MacOS).

Create a new file in the root directory of the project called docker-compose.yml and open it in your code editor.

The first parameter of the compose file specifies the version of the compose file format we are using. We will be using version 3, so the first line is:

version: "3"

The second parameter is a set of services we want to run.

version: "3"
services:
  # Indentation is required to indicate nesting
  db:
    image: postgres
    restart: always
    volumes:
      - "./postgres-data:/var/lib/postgresql/data"
    ports:
      - 5432:5432

Inside services we defined our first service and named it db. The image parameter specifies the image this service will run, restart tells docker when to restart the service if it crashes, volumes is an array of host directories to mount to specific directories inside the container and ports is a list of host ports to map to ports inside the container.

image and restart are self-explanatory but let's talk a little bit about volumes and ports.

For the ports parameter, we are telling docker to map port 5432 on the host to port 5432 inside the container, which is the default Postgres port.

The volumes parameter tells docker that containers created from this service will use the folder postgres-data inside the project directory as the folder /var/lib/postgresql/data (which is the default Postgres data directory) inside the container. This makes our data persistent even if the container is rebuilt or destroyed.

Now we need to add our Node server to the list of services.

version: "3"
services:
  # Indentation is required to indicate nesting
  db:
    image: postgres
    restart: always
    volumes:
      - "./postgres-data:/var/lib/postgresql/data"
    ports:
      - 5432:5432
  server:
    build: .
    restart: always
    depends_on:
      - db
    ports:
      - 3000:3000

For the server we did not use the image parameter; instead, we used build. This means we are not going to use a prebuilt image, we are going to build one from a Dockerfile. build takes the path to the folder containing the Dockerfile, which in our case is the folder we are in, hence the ..

Another parameter we used is depends_on which tells compose that this service depends on the db service to run correctly.

Now we are almost ready to test our compose file, but first we need to make some changes to let our server know the hostname of the Postgres container. To achieve this, compose allows us to pass variables to our containers using environment variables, just like we did when we created the Postgres container in part one (we set the database password using an environment variable called POSTGRES_PASSWORD).

The Postgres docker image supports multiple environment variables, but we will only use three of them:

  1. POSTGRES_USER: Sets the Postgres username.
  2. POSTGRES_PASSWORD: Sets the password for the username.
  3. POSTGRES_DB: Sets the database name.

The node package pg also reads environment variables for its Postgres configuration, so we don't need to specify the connection settings manually inside our server code. We will use five pg environment variables:

  1. PGHOST: The hostname where the Postgres database is located (We will set this to the hostname of the db service).
  2. PGUSER: The Postgres username to use.
  3. PGDATABASE: The database name to use.
  4. PGPASSWORD: The Postgres password.
  5. PGPORT: The port Postgres is running on.

To set environment variables inside a compose file, we have to use the environment parameter which takes an array of environment variables to pass to the service.

The final compose file will look like this:

version: "3"
services:
  # Indentation is required to indicate nesting
  db:
    image: postgres
    restart: always
    volumes:
      - "./postgres-data:/var/lib/postgresql/data"
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=my_username
      - POSTGRES_PASSWORD=mysecretpassword
      - POSTGRES_DB=my_username
  server:
    build: .
    restart: always
    depends_on:
      - db
    ports:
      - 3000:3000
    environment:
      - PGHOST=db
      - PGUSER=my_username
      - PGDATABASE=my_username
      - PGPASSWORD=mysecretpassword
      - PGPORT=5432

We can now change the code inside the app.js file where we configured pg to connect to the Postgres database from:

const pool = new Pool({
    user: 'postgres',
    host: 'postgres.localhost',
    database: 'postgres',
    password: 'mysecretpassword',
    port: 5432,
})

To:

const pool = new Pool();

Now pg will use the environment variables we set to connect to Postgres.
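
Calling new Pool() with no arguments is roughly equivalent to reading those variables yourself. The sketch below only illustrates what pg does under the hood; you don't need to add it to app.js:

// Illustration only: pg falls back to the PG* environment variables by default
const pool = new Pool({
    host: process.env.PGHOST,
    user: process.env.PGUSER,
    database: process.env.PGDATABASE,
    password: process.env.PGPASSWORD,
    port: process.env.PGPORT,
})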

Running the project using docker compose

Our project is now dockerized and ready for production deployment. But before we deploy it, let's test it out locally first.

First, make sure that the React application is built (in case you didn't build it in part two or you just cloned the GitHub repo):

cd to-do
npm run build
cd ../

Now to run the project using compose run the following command:

docker-compose up

This command will run a container for each service we defined in our compose file. So, 2 containers will be created and started, one for Postgres and one for our node server.

We can now test the application by opening the browser and navigating to localhost:3000.
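
If something doesn't look right, you can inspect the running containers and follow the server logs using the standard compose commands (server is the service name we defined in the compose file):

docker-compose ps
docker-compose logs -f server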

After you have confirmed that the application is running you can stop the containers by pressing control + C or command + C on MacOS and running the following command:

docker-compose down

If the app didn't work or you got an error when running the compose command, make sure you followed all the steps correctly, and if you did, don't hesitate to contact me from the about page.

Pushing your code to GitHub

In this step, we will learn how to use git to push our project to GitHub in order to pull it from the Linux server later.

If you don't want to push your own code, you can skip this step and use my GitHub repo.

If you already know how to use git you can skip this step as it's only here to teach you how to use git to push your code to GitHub.

The first thing to do before you push any code is to create a local repository inside the project. Run the following command inside the project directory:

git init

This will create an empty repository inside your project.

The next thing to do is to create a .gitignore file in the project directory. This file has the same syntax as the .dockerignore file; it tells git which files and folders it should not track, so they are never pushed to the remote repository.

So, create a new file named .gitignore in the project directory and add the following lines to it.

/node_modules
/postgres-data

/to-do/node_modules
/to-do/.pnp
/to-do/.pnp.js

# testing
/to-do/coverage

# production
/to-do/build

# misc
/to-do/.DS_Store
/to-do/.env.local
/to-do/.env.development.local
/to-do/.env.test.local
/to-do/.env.production.local
.vscode

/to-do/npm-debug.log*
/to-do/yarn-debug.log*
/to-do/yarn-error.log*

The most important entries in this file are the folders node_modules, postgres-data and to-do/node_modules; all the other entries are related to code editors and package managers.

At this point, you are ready to push the project to GitHub but first, you need to create a GitHub repository. Head over to GitHub, create an account if you don't have one and create a new empty repository (Don't let GitHub create a README file or other files).

When you create an empty repository you will end up with a page showing you how to push code to this repository. In the first section there's a link to the repository, copy it and run the following command in your terminal.

git remote add origin https://your_repo_link

This will add your GitHub repo as a remote repository for your project and name it origin.
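
You can verify that the remote was added correctly by running:

git remote -v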

Now run the following commands to push the code to GitHub:

git add .
git commit -m 'Initial commit'
git push origin master

The first command add will stage all changed files for committing.

The second command commit will commit all changes to the repository so you can push them to a remote one. The commit message is specified using the flag -m.

Lastly, git push will push the code to the origin repository we added before, on the master branch (the default branch).

Now go to your GitHub repo page and refresh it to see your code.

Deploying to a Linux VPS

I'm not going to do a step-by-step guide on how to create a Digital Ocean droplet, because I have already written a tutorial on how to create one, set up SSH keys and access it using your terminal. You can find that tutorial here.

Access your Linux server using ssh (ssh root@server_ip_address) and clone your GitHub repo, or clone my repo if you didn't create one.

git clone https://github.com/hassansaleh31/fullstack-javascript-tutorial-for-beginners.git

Before you can run the application, you have to install docker, docker-compose and nodejs on the server (if you're not sure how, just run the command docker; on some Linux distros you will be shown how to install it, and the same goes for node).
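
One way to install everything on a droplet running Ubuntu is through apt (a minimal sketch; package names differ between distros and you may prefer Docker's official installation instructions):

apt update
apt install docker.io docker-compose nodejs npm
# Make docker start automatically on boot
systemctl enable docker

Once everything is installed, navigate to the project folder, build the React app and start the containers: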

cd repo-name/to-do
npm run build
cd ../
docker-compose up -d

Open your browser and navigate to droplet_ip_address:3000 and the application should be running.

Installing Nginx

Our application is now running online on a server, but to access it we need to specify port 3000, which is not user-friendly. Nodejs has some issues with running on port 80 (the default HTTP port), since binding to ports below 1024 normally requires root privileges, so changing the port in our code is not a good idea (it might work, but I advise against it). The solution to this problem is a reverse proxy, and we will set one up using Nginx.

First, we need to install Nginx on our server. I’m going to show you how to install Nginx on Ubuntu.

If you’re not using Ubuntu you can follow the official docs here.

To install Nginx on a Linux server running Ubuntu, run the following commands:

apt update
apt install nginx
systemctl enable nginx

The last command enables Nginx so that it starts automatically on boot.
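
You can check that Nginx is actually running (and start it if it isn't) with systemctl:

systemctl status nginx
# If it isn't running yet
systemctl start nginx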

Now we need to set up the firewall, because ufw (the Ubuntu firewall) is disabled by default on a fresh install.

ufw allow 'Nginx HTTP'
ufw enable

To check the firewall status run:

ufw status

For more info on ufw commands check the official Ubuntu community help wiki page.

Now if you navigate to your droplet's IP address without the port, you will see the default Nginx welcome page.

Configuring the reverse proxy

The default folder for Nginx config files is /etc/nginx/. Inside this folder there are some files and folders; one of the folders is named sites-available and inside it there's a file called default. This is the default configuration file for Nginx (it is activated through a symlink in sites-enabled), and we want to edit it to set up the reverse proxy.

cd /etc/nginx/sites-available/
nano default
# nano is the default text editor in Ubuntu

This command will open the file in a text editor. You will see lots of comments; just ignore them and scroll down to the part that starts with server {. Look for a line that starts with root and comment it out using a # at the beginning of the line (root specifies the folder Nginx serves static files like index.html from, but we are using node to serve our app so we don't need it).

The next thing we need to change is the location / block. Inside it there are some comments and a line that starts with try_files; comment this line out as well and add the following code inside location /:

location / {
    # some comments
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://localhost:3000;
}

The most important line in the code we just added is the last one, where we forward requests arriving at the default location (http://host_name) to http://localhost:3000, which is where our nodejs application is listening. The first two lines pass the original Host header and the client's real IP address along to our node server.
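
After the edits, the relevant part of the default config should look roughly like this (a sketch; your file will contain more comments and directives, and the listen and server_name lines may differ slightly):

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # root /var/www/html;

    server_name _;

    location / {
        # some comments
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:3000;
    }
}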

To save the file press control + X; you will be asked if you want to save the file, press y for yes and hit Enter.

To test the configuration files run the following command:

nginx -t

You then need to restart Nginx for the changes to apply.

systemctl restart nginx

Now open your browser and navigate to your server IP address without the port and you will see your application running.

That was it for this tutorial series, thank you for reading. If you have any questions or you found any errors, don't hesitate to contact me.

The code for this tutorial is on GitHub.

If you feel that you have learned anything from this article, don't forget to share it and help other people learn.