How to package a Crystal application using Docker

Crystal is a very powerful language that advertises itself as being fast as C, slick as Ruby. After developing an application with Crystal, you are usually left with the decision of how to package and deploy it. Docker offers one of the most ubiquitous and convenient ways to deploy the application. In this tutorial, I will show you one or two things about building your application with Docker.

Crystal supports static linking, meaning that it can build an application with all the required libraries embedded, so that they are not needed on the host machine. This makes the compiled application very portable. Crystal achieves static linking by using musl-libc. Since musl-libc is present in Alpine Linux, using the Alpine Linux Docker image is the recommended way to build a statically linked Crystal app.

Consider the Dockerfile below

ARG CRYSTAL_VERSION
FROM crystallang/crystal:${CRYSTAL_VERSION} AS builder

# set the working directory to /app
WORKDIR /app

# copy the dependency manifest files first, so this layer is cached
COPY ./shard.yml ./shard.lock /app/
RUN shards install --production -v

# Build the binary app in the builder stage
COPY . /app/
RUN shards build --static --no-debug --release --production -v

# ===============
# Result image with one layer
FROM alpine:latest
WORKDIR /
COPY --from=builder /app/bin/demo-app .

EXPOSE 3000

ENTRYPOINT ["/demo-app"]


For reference, this is what the shard.yml looks like

name: demo-app
version: 0.1.0

authors:
  - 234Developer  <234developer@gmail.com>

targets:
  demo-app:
    main: src/demo-app.cr

dependencies:
  kemal:
    github: kemalcr/kemal

crystal: '>= 1.10.1'

license: MIT
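
The shard.yml above points the demo-app target at src/demo-app.cr. For completeness, here is a minimal sketch of what that file might contain, assuming a simple Kemal web server on Kemal's default port 3000 (matching EXPOSE 3000 in the Dockerfile); the route and response are purely illustrative:

require "kemal"

# A single illustrative route; replace this with your real application logic.
get "/" do
  "Hello from demo-app!"
end

# Kemal listens on port 3000 by default, matching the EXPOSE directive
# in the Dockerfile above.
Kemal.run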

Now, the Docker image can be built using the following command

$ docker build \
    --build-arg "CRYSTAL_VERSION=1.10-alpine" \
    -t app/demo:v1 \
    .
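
Once the image is built, you can smoke-test it locally; assuming the app listens on port 3000 as declared in the Dockerfile, the following maps it to the same port on the host:

$ docker run --rm -p 3000:3000 app/demo:v1

The --rm flag removes the container automatically once it stops.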

How to set up a PostgreSQL database locally using docker-compose

Databases are fundamental to the development of modern apps, so it is vital for a developer to be familiar with their operations. PostgreSQL is a very popular RDBMS (Relational Database Management System) that is reputed to be the world's most advanced open source RDBMS. In this tutorial, I will show you how to run a PostgreSQL server locally on your machine using just Docker and docker-compose.

What do you need?

  • An IDE (Integrated Development Environment); a very good one is VS Code. (Note: this is not a hard requirement, you can also use a plain text editor)
  • Docker – see how to install it here
  • docker-compose – see how to install it here

Defining The Docker-Compose File

version: '3.9'
services:
  db:
    image: postgres:${POSTGRES_VERSION:-16}
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
    volumes:
      - db-volume:/var/lib/postgresql/data
    ports:
      - 5432:5432
    networks:
      - backend
  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: 'user1@test.com'
      PGADMIN_DEFAULT_PASSWORD: 'password'
    ports:
      - 8080:80
    networks:
      - backend

volumes:
  db-volume:
networks:
  backend:
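
A note on the ${POSTGRES_VERSION:-16} syntax: docker-compose substitutes the POSTGRES_VERSION environment variable and falls back to 16 when it is not set. If you want to pin an exact version, you can optionally drop an .env file next to the compose file; the version below is just an example:

POSTGRES_VERSION=16.2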

Explanation

If you are not familiar with docker-compose, the file above may look complex; however, that's not the case, so let's try to explain it together by highlighting the following

  1. We are creating two containers (services) named db (the PostgreSQL database service) and pgadmin (the web-based tool for accessing PostgreSQL).
  2. We are creating a network named backend that is used by both services. This allows the two services to refer to each other just by their service names.
  3. We are creating a volume named db-volume. This persists the data used by the PostgreSQL server (db) across container restarts and can also be used for backups etc.
  4. We are exposing the PostgreSQL database (db) on the host using port 5432, so you can connect to it from the host, as shown after this list.
  5. We are exposing the pgadmin app to the host machine using port 8080, meaning that after starting your docker-compose file, you can access the pgadmin app by visiting http://localhost:8080 in your browser.
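
For example, assuming you have the psql client installed on your host machine, you can connect to the exposed database directly and enter the password from the compose file when prompted:

$ psql -h localhost -p 5432 -U postgres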

Starting your services

After creating your docker-compose.yaml file, just cd into the directory where it is and run the following

$ docker-compose up -d
[+] Running 29/29
 ✔ db 13 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿]       0B/0B  Pulled  16.3s
 ✔ pgadmin 14 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿]  0B/0B  Pulled  17.1s
[+] Running 4/4
 ✔ Network postgresql_backend      Created  0.0s
 ✔ Volume "postgresql_db-volume"   Created  0.0s
 ✔ Container postgresql-pgadmin-1  Started  1.3s
 ✔ Container postgresql-db-1       Started  1.3s

This command will download the two images specified in the compose file (postgres and dpage/pgadmin4) and then proceed to set up the volumes and networks to make everything work properly. Please note that the -d flag makes docker-compose run in detached mode, i.e. it runs in the background as a daemon.

Stopping your services

To stop your services after starting them, all you need to do is run the following command

$ docker-compose down

Cleaning up after yourself

After stopping your services, docker-compose still retains your defined volumes, so that if you restart your services, they can use the old data and not start afresh. If you wish to tear everything down, volumes included, do the following

$ docker-compose down -v

[+] Running 4/4
 ✔ Container postgresql-pgadmin-1  Removed  1.6s
 ✔ Container postgresql-db-1       Removed  0.2s
 ✔ Volume postgresql_db-volume     Removed  0.0s
 ✔ Network postgresql_backend      Removed  0.1s

Connecting To Your Database Using PgAdmin

After starting your services, you can visit pgAdmin at http://localhost:8080, where you will be greeted by its login screen.

After entering the credentials supplied in the docker-compose.yaml file (user1@test.com / password) and clicking the Login button, you will land on the pgAdmin dashboard.

You can then connect to the PostgreSQL server defined by the db service by doing the following.

  • Right-click the Servers icon and choose Register > Server (in older pgAdmin versions this is Create > Server).
  • Supply a name for the server. Note: this is the name that will be shown in the object explorer to refer to your server.
  • Click the Connection tab and supply the details for connecting to the db service:
      • Host: db (Note: since pgadmin and db are on the same backend network, they can refer to each other by their service names)
      • Username: postgres (Note: this was supplied in the docker-compose file)
      • Password: password (Note: this was supplied in the docker-compose file)

After clicking the Save button, pgAdmin connects and the new server appears in the object explorer.
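
As an aside, if you prefer the command line over pgAdmin, you can also open a psql shell directly inside the running db container:

$ docker-compose exec db psql -U postgres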

Now go on and enjoy your PostgreSQL database. I hope this tutorial has been helpful.

How to install BASH (Bourne Again Shell) in Alpine

The Alpine Linux Docker image is a minimal image based on Alpine Linux; in its bare form, it is as small as 5MB in size. However, BASH is not included or installed in Alpine by default; it defaults to /bin/sh. This sometimes becomes difficult if you already have scripts targeting BASH that need to be ported into an Alpine image.

The easy solution is to install BASH into the image, which can be done easily as demonstrated below

$ apk update
$ apk upgrade
$ apk add bash

This can also be included in a Dockerfile as shown below

FROM alpine:3.17.2

RUN apk update && \
    apk add bash

Then, you can build the image as follows

$ docker build -t alpine/demo:v1 -f Dockerfile .

This newly generated Docker image can then be used with the BASH shell as follows

$ docker run --rm -it alpine/demo:v1 bash
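
If you also want BASH to be used by the RUN instructions inside the Dockerfile itself, the SHELL instruction can switch the default shell once BASH is installed; a small sketch:

FROM alpine:3.17.2

# bash must be installed first; this RUN still executes under /bin/sh
RUN apk add --no-cache bash

# every subsequent RUN instruction now executes under bash
SHELL ["/bin/bash", "-c"]

RUN echo "now running under bash"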

How to set max-size of docker container/service logs in docker-compose

Did you know that if you have a service that sends its output to stdout, like nginx does, Docker caches the logs and keeps them within the /var/lib/docker folder?

This makes it possible for you to run the docker service logs {_DOCKER_ID} or docker container logs {_DOCKER_ID} command and have Docker retrieve all the stored logs. This is a very nifty feature, but the problem is that the log file quickly grows out of hand, becomes very large, and can starve your host machine of disk space.

To fix this, you can do two quick things

  1. set a max-size for the log file
  2. enable log rotation by setting a maximum number of log files to keep

The cool thing is that once the log file reaches the max-size, Docker rotates it and starts a new one, and once the number of log files reaches the specified maximum, Docker removes the oldest ones to accommodate new ones.

Enough talk, how do you do it?

version: '3.6'
services:
  something_you_like:
    image: nginx
    logging:
      driver: json-file
      options:
        max-size: "500m"
        max-file: "5"

The config above is self-explanatory: it limits the maximum size of the Docker log file for the service something_you_like to 500m (500 megabytes) and enables log rotation to keep only a maximum of 5 files.
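
After recreating the service, you can verify that the options were applied by inspecting the container's log configuration; the command below should print something like the JSON shown (the container name is a placeholder):

$ docker inspect --format '{{ json .HostConfig.LogConfig }}' <container_name>
{"Type":"json-file","Config":{"max-file":"5","max-size":"500m"}}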

Getting Docker Container Data In Proper JSON Format

Docker is a very important and exciting tool; at the most basic level, it runs containers from pre-built images. Usually, you will need to see a list of the running containers, and this can be done easily using the command below

$ docker ps

This will print a table of the running containers, with columns such as CONTAINER ID, IMAGE, COMMAND, CREATED, STATUS, PORTS and NAMES.

Also, you can get this formatted as JSON by using the command below

$ docker ps --format="{{ json . }}"

The problem with the output above is that Docker prints one JSON object per line (one per container) rather than a single valid JSON document, so it can't be consumed directly by another application. To resolve this issue, we can query the data from the Docker Engine API using the command below

$ curl -s --unix-socket /var/run/docker.sock http://localhost/containers/json

This returns a single JSON array describing the running containers.

The output above is a fully valid JSON document, and it can even be pretty-printed by piping it through a tool such as jq, as follows

$ curl -s --unix-socket /var/run/docker.sock http://localhost/containers/json | jq .
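
Alternatively, if you would rather stick with docker ps, jq's slurp flag (-s) can gather the one-object-per-line output into a single valid JSON array; a small sketch assuming jq is installed:

$ docker ps --format "{{ json . }}" | jq -s '.'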