What magic is this? Faster npm with pnpm

Node.js is the most popular JavaScript runtime environment and is extremely popular amongst developers. It ships with npm as its default package manager. A common alternative to npm is Yarn, which is touted as more performant (i.e. faster) than npm, but arguably still not fast enough.

Issues With npm/Yarn

The main issue many developers have with npm is that it is slow on projects with really big dependency trees. It also does not manage the creation of node_modules efficiently, duplicating dependencies across multiple projects.

Introducing pnpm

Here comes pnpm, a highly performant (fast) package manager for Node.js with efficient disk-space management. pnpm is a drop-in replacement for npm: it incorporates all the features of npm and does them more efficiently.

Features of pnpm

  • Up to 2 times faster than npm
  • Shareable node_modules content – files inside node_modules are stored in a single content-addressable store, which is then cloned or hard-linked into the projects that use the libraries. This can greatly reduce disk-space usage by removing the need for duplication.
  • pnpm workspaces – these allow developers to separate concerns by keeping multiple packages in one repository (see the sample pnpm-workspace.yaml after this list)
  • Peer dependency installation
  • Patching dependencies, and much more…
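To give a flavour of workspaces: they are driven by a pnpm-workspace.yaml file at the repository root. A minimal sketch (the packages/* layout is just an illustration) looks like this

packages:
  - 'packages/*'

pnpm then treats every directory under packages/ as a package in the workspace and links them together locally.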

Enough Talk, How do you install it?

Installing pnpm is very easy; explore the following options.

  1. Install using npm

To install pnpm using npm, there are two available packages for the pnpm CLI, namely pnpm and @pnpm/exe.

  • pnpm is the ordinary version of pnpm, which needs Node.js to run.
  • @pnpm/exe is a version bundled with Node.js into an executable, so it can run on a system with no Node.js installed.
$ npm install -g pnpm

or 

$ npm install -g @pnpm/exe

  2. On macOS, using Homebrew

If you are on macOS, you can use the brew package manager to install pnpm as follows.

$ brew install pnpm

You can find other installation methods in the official pnpm installation docs.
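One such method, for example, is the official standalone install script, which works even on a system without Node.js installed:

$ curl -fsSL https://get.pnpm.io/install.sh | sh -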

How to use pnpm

That is the interesting thing: it is a drop-in replacement for npm, so all npm commands apply to pnpm as well. Just replace npm with pnpm, e.g.

  • To install all dependencies, run pnpm install
  • To install a package, run pnpm install @faker-js/faker
  • etc.
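For instance, a few common equivalents as shell commands (using the @faker-js/faker package from the example above):

# install all dependencies from package.json
$ pnpm install

# add a new dependency (pnpm add is the idiomatic form)
$ pnpm add @faker-js/faker

# run a script defined in package.json
$ pnpm run build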

How to package a Crystal application using Docker

Crystal is a very powerful language that advertises itself as being fast as C but slick as Ruby. After developing an application with Crystal, one is usually left with the decision of how to package and deploy it. Docker offers one of the most ubiquitous and convenient ways to deploy the application. In this tutorial, I will show you a thing or two about building your application with Docker.

Crystal supports static linking, meaning that Crystal can build an application with all the required libraries embedded statically, so that they are not required on the host machine. This makes the compiled application very portable. Crystal achieves static linking using musl-libc. Since musl-libc is present in Alpine Linux, using the Alpine Linux Docker image is the recommended way to build a Crystal app.

Consider the Dockerfile below

ARG CRYSTAL_VERSION
FROM crystallang/crystal:${CRYSTAL_VERSION} as builder

# set the working directory to /app
WORKDIR /app

# copy the dependency manifest files (shard.yml and shard.lock)
COPY ./shard.yml ./shard.lock /app/
RUN shards install --production -v

# Build the binary app in the builder stage
COPY . /app/
RUN shards build --static --no-debug --release --production -v

# ===============
# Result image with one layer
FROM alpine:latest
WORKDIR /
COPY --from=builder /app/bin/demo-app .

EXPOSE 3000

ENTRYPOINT ["/demo-app"]


For reference, this is what the shard.yml looks like

name: demo-app
version: 0.1.0

authors:
  - 234Developer  <234developer@gmail.com>

targets:
  demo-app:
    main: src/demo-app.cr

dependencies:
  kemal:
    github: kemalcr/kemal

crystal: '>= 1.10.1'

license: MIT

Now, the Docker image can be built using the following command

$ docker build \
    --build-arg "CRYSTAL_VERSION=1.10-alpine" \
    -t app/demo:v1 \
    .
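Once the image is built, you can run it and publish the exposed port on the host, for example:

$ docker run --rm -p 3000:3000 app/demo:v1

The app (a Kemal web app, per the shard.yml above) should then be reachable on http://localhost:3000.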

How To Resolve The “Ubuntu Repository No Longer Has a Release” Issue

Sometimes, when you run the command sudo apt update -y or sudo apt upgrade -y, you get an error message such as

The repository 'http://gb.archive.ubuntu.com/ubuntu kinetic Release' no longer has a Release file.

This error occurs because the release sources for the Ubuntu version you are running have reached end of life and been migrated to an archive server. To fix the issue, all you need to do is point the sources at the archive server.

This can be easily achieved by running the commands below

# Step 1: back up your existing sources list file
$ sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak

# Step 2: run sed to replace the sources path
$ sudo sed -i -re 's/([a-z]{2}.)?archive.ubuntu.com|security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list

# Step 3: update the OS and do a dist upgrade
$ sudo apt-get update && sudo apt-get dist-upgrade
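To double-check the replacement, you can confirm that the sources now point at the archive server:

$ grep 'old-releases.ubuntu.com' /etc/apt/sources.list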

Once the commands listed above have been executed successfully, the error message will disappear and you can freely update your OS at any time without any hindrance.

How To Check If An AWS Instance Is Using Metadata v2 (IMDSv2) Using The AWS CLI

The instance metadata on EC2 instances is provided by Amazon's Instance Metadata Service (IMDS). It provides metadata such as the hostname, the list of public keys, security groups etc. through the web endpoint http://169.254.169.254/latest/meta-data.

There are two (2) versions of the IMDS, v1 and v2, so it is helpful to be able to detect which IMDS version is enabled on an EC2 instance. You can do the following to find out.

Step 1: Get The Metadata Options of the Instance To Check

Note: in this example, the name of the instance is TestInstance

$ aws --no-cli-pager --region=eu-west-1 ec2 describe-instances --filters "Name=instance-state-name,Values=running" "Name=tag:Name,Values='TestInstance'" | jq -r '.Reservations[].Instances[].MetadataOptions'

{
  "State": "applied",
  "HttpTokens": "optional",
  "HttpPutResponseHopLimit": 1,
  "HttpEndpoint": "enabled",
  "HttpProtocolIpv6": "disabled",
  "InstanceMetadataTags": "disabled"
}

From the JSON returned, check the HttpTokens key and apply the following rules

  1. If the value is optional, then IMDSv1 is still allowed (both v1 and v2 will work)
  2. If the value is required, then only IMDSv2 is enabled.

Also note that the key HttpEndpoint indicates whether the metadata service is turned on at all for the instance.
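As a follow-up sketch: if HttpTokens comes back as optional and you want to enforce IMDSv2, the AWS CLI can modify the metadata options in place (the instance ID below is a placeholder):

$ aws --region=eu-west-1 ec2 modify-instance-metadata-options \
    --instance-id i-0123456789abcdef0 \
    --http-tokens required \
    --http-endpoint enabled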

AWS – How To Rename An IAM Username

The AWS IAM (Identity and Access Management) service is the core service for managing identity and securing access to all AWS services. An AWS IAM user is usually created once and forgotten; however, sometimes it becomes important to change the username of an IAM user without losing access to the attached groups, access keys etc. In this tutorial, we will demonstrate how to change the username of an AWS IAM user while fulfilling the following requirements.

  1. The UserId of the IAM user remains the same after the change
  2. The IAM user remains attached to all groups they were attached to before the change
  3. All policies attached to the IAM user remain attached under the new username
  4. All other permissions granted to the IAM user remain the same.

To achieve the rename as described above, let's follow the steps listed below.

Step 1: Obtain the current username and confirm the AWS CLI is configured properly

Run the following command to be sure that the AWS profile is configured correctly and to view the user's current details.
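For example, assuming the current username is my.email@old.com (the name used in the next step):

$ aws iam get-user --user-name my.email@old.com

Take note of the UserId in the output; it should remain unchanged after the rename.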

Step 2: Run the command to change the username

This command will change the username from my.email@old.com to my.email@new.com as follows
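$ aws iam update-user --user-name my.email@old.com --new-user-name my.email@new.com

The update-user call renames the user in place, so the UserId, group memberships and attached policies are preserved.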

Step 3: Get the IAM user again to confirm the change has been truly effected

Run the command displayed below to confirm that the IAM username has been updated. Note that the UserId remains unchanged before and after the change.
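$ aws iam get-user --user-name my.email@new.com

The UserName in the output should now be my.email@new.com, while the UserId matches the value recorded in Step 1.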

How To Trigger Jenkins Jobs Remotely With CSRF Protection

Jenkins is in many quarters regarded as the Swiss Army knife of CI/CD (Continuous Integration and Continuous Deployment) tools. It has various ways of triggering jobs, ranging from

  • Scheduled Times (Build Periodically)
  • Polling SCM
  • Build when a change is pushed to a VCS such as GitHub/Bitbucket etc.

However, sometimes we want to trigger a job remotely from a bash script. The flexibility of Jenkins allows for this too. In this tutorial, I will show you how to do it while also protecting the Jenkins server from CSRF (Cross-Site Request Forgery) requests.

Step 1: Log In To Jenkins And Generate An API Token

For the bash script to interact with the Jenkins server, it will require an API token. Follow these steps to generate one.

  1. Log in to Jenkins
  2. Click your profile name to visit your profile
  3. Click the Configure button
  4. Click the Add New Token button

Note: you will have to copy and save this token, because once generated, it can't be recovered again.

Step 2: Create A Demo Job To Trigger Remotely

In this step, we create a demo job to trigger remotely. In this case, the job will be called Demo_Job; it simply prints the server's public IP address and the current date.

The bash script executed by the Jenkins job is given below.

#!/usr/bin/env bash

ipAddress=$(curl -Ls https://ip.me)
# Echo IP and date
echo "Ip Address = ${ipAddress} at $(date)"

Step 3: Write The Bash Script To Use To Trigger The Job Remotely

In this step, we will make use of the API token generated in Step 1 above, as well as a crumb token. The crumb is similar to the csrf_token used to protect form requests in Laravel and other web frameworks: it ensures that requests issued to the Jenkins server cannot be forged. You can read more about it in the Jenkins CSRF protection documentation.

#!/usr/bin/env bash

JENKINS_URL="https://jenkins.dev"
JENKINS_USER="admin"
JENKINS_TOKEN="xxxxxxx-put-api-token-here-xxxxxxxx"
JENKINS_AUTH="$JENKINS_USER:$JENKINS_TOKEN"
JENKINS_CRUMB_TOKEN_URL="${JENKINS_URL}/crumbIssuer/api/json";
JENKINS_JOB_NAME="Demo_Job"
JENKINS_JOB_URL="${JENKINS_URL}/job/${JENKINS_JOB_NAME}"


# obtain the crumb header in the form "Jenkins-Crumb: <value>"
# (the field name comes from crumbRequestField in the crumb issuer response)
CRUMB_HEADER=$(curl --silent \
                    --user "$JENKINS_AUTH" \
                    "$JENKINS_CRUMB_TOKEN_URL" 2> /dev/null \
                    | jq -r '"\(.crumbRequestField): \(.crumb)"')

# exit the script if the crumb header could not be obtained
if [[ -z "$CRUMB_HEADER" || "$CRUMB_HEADER" == *"null"* ]]; then
    echo "Crumb Authentication Failed"
    exit 1
fi

# trigger the job, passing the crumb header to satisfy CSRF protection
curl --silent \
    --user "$JENKINS_AUTH" \
    -H "$CRUMB_HEADER" \
    -X POST "$JENKINS_JOB_URL/build"

Save the above script into a file and make it executable, then run it; this should trigger Demo_Job remotely, and you will see the build appear on the Jenkins server.
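For example, assuming you saved it as trigger-job.sh (a name chosen just for this example):

$ chmod +x trigger-job.sh
$ ./trigger-job.sh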

With that, you've been able to trigger the Jenkins job remotely. I hope this has been helpful for you.

Customizing AWS EC2 Instances With UserData In Terraform

Terraform is a very versatile tool for writing IaC (Infrastructure as Code), and one of the most common use-cases of Terraform is creating EC2 instances.

What is userdata?

AWS user data is a set of commands that is run when an EC2 instance is launched and can be used to customize the instance. It can be used to perform a host of operations, such as

  • Setting the hostname
  • Updating the OS
  • Installing a new package

In this example, we will be using user data to update the operating system, set the hostname of the machine and also install a package, nginx.

resource "aws_instance" "web" {
  ami                    = "demo-ubuntu-ami-id"
  instance_type          = "t2.micro"
  key_name               = "eks-pair-name"
  ...
  tags = {
    Name = "Testing-Machine"
  }
  ...
  user_data = <<-EOF
    #!/bin/bash
    # refresh the package lists and upgrade the OS packages
    sudo apt-get update
    sudo apt-get upgrade -y
    sudo hostnamectl set-hostname "Testing-Machine"
    sudo apt-get install nginx -y
  EOF
}

Now, we can apply this Terraform configuration by issuing the terraform apply command and supplying yes when prompted.
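For instance, from the directory containing the configuration:

$ terraform init
$ terraform apply

Terraform will print the execution plan for aws_instance.web and prompt for confirmation; type yes to create the instance.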

Now your machine will have its hostname set to Testing-Machine, its operating system packages updated, and the nginx web server installed.

How to set up a PostgreSQL database locally using docker-compose

Databases are fundamental to the development of modern apps, so it is extremely important for a developer to be familiar with their operation. PostgreSQL is a very popular RDBMS (Relational Database Management System), reputed to be the world's most advanced. In this tutorial, I will show you how to run a PostgreSQL database server locally on your machine using just Docker and docker-compose.

What do you need?

  • An IDE (Integrated Development Environment); a very good one is VS Code. (Note: this is not a hard requirement, you can use a plain text editor too)
  • Docker – see the official installation documentation
  • docker-compose – see the official installation documentation

Defining The Docker-Compose File

version: '3.9'
services:
  db:
    image: postgres:${POSTGRES_VERSION:-16}
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
    volumes:
      - db-volume:/var/lib/postgresql/data
    ports:
      - 5432:5432
    networks:
      - backend
  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: 'user1@test.com'
      PGADMIN_DEFAULT_PASSWORD: 'password'
    ports:
      - 8080:80
    networks:
      - backend

volumes:
  db-volume:
networks:
  backend:

Explanation

If you are not familiar with docker-compose, the file above may look complex; however, that's not the case, so let's try to explain it together by highlighting the following

  1. We are creating two containers (services) named db (the PostgreSQL database service) and pgadmin (the web-based tool for administering PostgreSQL)
  2. We are creating a network named backend that is used by the two services. This allows the two services to refer to each other just by their service names
  3. We are creating a volume named db-volume. This persists the data used by the PostgreSQL server (db) outside the container, which can be used for backups etc.
  4. We are exposing the PostgreSQL database (db) on the host using port 5432.
  5. We are exposing the pgadmin app to the host machine using port 8080, meaning that after starting your docker-compose file, you can access the pgadmin app by visiting http://localhost:8080 in your browser.

Starting your services

After creating your docker-compose.yaml file, just cd into the directory where it is located and run the following

$ docker-compose up -d         
[+] Running 29/2
 ✔ db 13 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿]      0B/0B      Pulled                                                                                                                         16.3s 
 ✔ pgadmin 14 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿]      0B/0B      Pulled                                                                                                                   17.1s 
[+] Running 4/4
 ✔ Network postgresql_backend      Created                                                                                                                                      0.0s 
 ✔ Volume "postgresql_db-volume"   Created                                                                                                                                      0.0s 
 ✔ Container postgresql-pgadmin-1  Started                                                                                                                                      1.3s 
 ✔ Container postgresql-db-1       Started                                                                                                                                      1.3s 

This command downloads the two images specified in the compose file (postgres and dpage/pgadmin4) and then proceeds to set up the volumes and networks to make everything work properly. Please note, the -d flag makes docker-compose run in detached mode, i.e. it runs in the background as a daemon.
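At this point you can also verify that the database is reachable on the published port; for example, if you have the psql client installed on the host (an optional extra, not part of the compose file):

$ psql -h localhost -p 5432 -U postgres
# when prompted, enter the password from the compose file: password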

Stopping your services

To stop your services from running after starting it, all you need to do is run the following command

$ docker-compose down

Cleaning up after yourself

After stopping your services, docker-compose still retains your defined volumes and networks, so that if you restart the services, they can use the old data instead of starting afresh. If you wish to tear everything down, however, run the following

$ docker-compose down -v

[+] Running 4/4
 ✔ Container postgresql-pgadmin-1  Removed                          1.6s 
 ✔ Container postgresql-db-1       R...                             0.2s 
 ✔ Volume postgresql_db-volume     Removed                          0.0s 
 ✔ Network postgresql_backend      Removed                          0.1s

Connecting To Your Database Using PgAdmin

After starting your services, you can open pgAdmin by visiting the URL http://localhost:8080, where you will be greeted with a login screen.

After entering the credentials supplied in the docker-compose.yaml file (user1@test.com / password) and clicking the login button, you will land on the pgAdmin dashboard.

You can then connect to the PostgreSQL server defined by the db service by doing the following.

  • Right-click the Servers icon and click Register > Server (Create > Server in older versions of pgAdmin)

Supply a name for the server. Note: this is the name that will be shown in the object explorer to refer to your server.

Click the Connection tab and supply the details for connecting to the db service

  • Host: db (Note: since pgadmin and db are on the same backend network, they can refer to each other by their service names)
  • Username: postgres (Note: this was supplied in the docker-compose file)
  • Password: password (Note: this was supplied in the docker-compose file)

After clicking the Save button, the new server should appear in the object explorer, connected and ready to browse.

Now go on and enjoy your PostgreSQL database. I hope this tutorial has been helpful.

How To Force DNS Resolution In Curl Using The --resolve Option

curl is the most popular Swiss Army knife for interacting with HTTP. One very common requirement you will have sooner or later is to trick curl into resolving a domain to a specific IP. This is very helpful for debugging or testing, when you wish to point a product or website URL at a staging IP without altering the hosts file on the machine.

Consider this example. You have a domain (www.myexample.com) that points to the IP (220.100.20.30), as shown below

$ dig +short www.myexample.com

220.100.20.30

However, we want to test the domain against a local server with IP (127.0.0.1). What we do is trick curl into resolving (www.myexample.com) to the testing server's IP, as follows (the -i flag prints the response headers shown below)

$ curl -i --resolve www.myexample.com:80:127.0.0.1 http://www.myexample.com:80

HTTP/1.1 200 OK
Server: nginx/1.25.2
Date: Wed, 20 Sep 2023 12:22:00 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 15 Aug 2023 17:03:04 GMT
Connection: keep-alive
ETag: "64dbafc8-267"
Accept-Ranges: bytes

The format of the --resolve option of curl is

Domain:Port:IP

e.g.

google.com:443:192.168.0.1
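The --resolve option is particularly useful for HTTPS, because, unlike connecting to the IP address directly, curl still uses the real hostname for SNI and certificate validation. A sketch, reusing the hypothetical domain from above:

$ curl -i --resolve www.myexample.com:443:127.0.0.1 https://www.myexample.com/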

How To Always Execute A Null Resource With Terraform

One of the beauties of Terraform is that it is idempotent, which is the attribute that makes it very predictable: Terraform compares the current state with the intended state and, if there is no difference, it doesn't re-run the same code.

However, there are times when you will always want a piece of code to run; it could be the creation of a file recording the execution time of a Terraform run. To achieve this, you can use a null_resource with a trigger set to the current timestamp. This ensures that, for that resource, the state is always invalidated on every run, triggering a re-run of the resource.

Enough talk, let's see some code. The code below will always create a file named last-run.txt containing the current date and time.

resource "null_resource" "tester" {
  triggers = {
    always_run = timestamp()
  }
  provisioner "local-exec" {
    command     = "echo $(date) > ${path.module}/last-run.txt"
  }
}

Now you can apply and run this as follows

$ terraform apply -auto-approve

After it executes, you can run a plan to see that the resource will be replaced again

$ terraform plan

Terraform will perform the following actions:

  # null_resource.tester must be replaced
-/+ resource "null_resource" "tester" {
      ~ id       = "3621864573746826936" -> (known after apply)
      ~ triggers = { # forces replacement
          ~ "always_run" = "2023-08-09T14:47:18Z" -> (known after apply)
        }
    }

Plan: 1 to add, 0 to change, 1 to destroy.

Now, go ahead and experiment with some null_resource blocks of your own.