How to deploy Woodpecker CI 1.0.3 in docker swarm behind Caddy v2.7.5

Woodpecker is a simple CI engine with great extensibility. It focuses on executing pipelines inside containers. If you are using containers in your daily workflow, you’ll love Woodpecker.

In this post, I am going to show you how to deploy Woodpecker CI 1.0.3, a container-native continuous delivery platform, in a Docker Swarm cluster using the Docker Compose tool behind Caddy 2.7.5

Woodpecker CI is a simple CI engine with great extensibility

If you want to learn more about Woodpecker CI, please go through the below links.

  1. Woodpecker website

  2. Official documentation

  3. GitHub repository

Let’s start with actual deployment…

Prerequisites

Please make sure you fulfill the below requirements before proceeding to the actual deployment.

  1. Docker Swarm Cluster with GlusterFS as the persistent storage tool.

  2. Caddy as a reverse proxy to expose micro-services externally.

Introduction

Woodpecker CI is a simple CI engine with great extensibility. It focuses on executing pipelines inside containers. If you are using containers in your daily workflow, you will love Woodpecker.

Woodpecker uses a pipeline file (.woodpecker.yml) that contains a single pipeline or multiple pipelines to build an application and publish the Docker container to a registry of your choice.

Pipelines are configured in YAML with a simple, easy-to-read file that we commit to our git repository. Each pipeline step is executed inside an isolated Docker container that is automatically downloaded at runtime.

Pipeline steps can be named as you like. Run any command in the commands section. Steps are containers, and file changes are incremental.
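
For illustration, here is a minimal, hypothetical pipeline file for a Go project. The step name, image, and commands are placeholders and not part of the original post; depending on your Woodpecker version the top-level key is steps: (older releases used pipeline:).

# .woodpecker.yml - minimal example pipeline (placeholder project)
steps:
  build:
    # each step runs in its own container image, pulled automatically at runtime
    image: golang:1.21
    commands:
      - go build ./...
      - go test ./...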

Persist Woodpecker Data

Containers are fast to deploy and make efficient use of system resources. Developers get application portability and programmable image management, and the operations team gets standard runtime units of deployment and management.

With all the known benefits of containers, there is one common misconception: that containers are ephemeral, meaning that if we restart a container or something goes wrong with it, we lose all of its data, so containers are only good for stateless micro-service applications and stateful applications cannot be containerized.

I am going to use GlusterFS to overcome the ephemeral behavior of Containers.

I have already set up a replicated GlusterFS volume so that any data I want to persist is replicated throughout the cluster.

The below diagram explains how the replicated volume works.

GlusterFS Replicated Volume

The volume is mounted on all the nodes, and when a file is written to the /mnt partition, the data is replicated to all the nodes in the cluster.
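
As a rough sketch of how such a volume is created (the volume name, brick paths, and node hostnames below are placeholders, not the exact ones from my cluster), a three-way replicated volume looks like this:

# create and start a replicated volume across three nodes (placeholder names)
sudo gluster volume create gfsvol replica 3 node1:/gluster/brick node2:/gluster/brick node3:/gluster/brick force
sudo gluster volume start gfsvol
# mount it on every node so that files written under /mnt are replicated
sudo mount -t glusterfs localhost:/gfsvol /mnt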

Note

If any one of the nodes fails, the application automatically starts on another node without losing any data, and that’s the beauty of the replicated volume.

Persistent application state or data needs to survive application restarts and outages. We store the data or state in GlusterFS and perform periodic backups on it.

Woodpecker will stay available if something goes wrong with any of the nodes in our Docker Swarm cluster, because the data is available to all the nodes thanks to the GlusterFS replicated volume.

I am going to create a folder, woodpeckerdata, in the /mnt directory to map the container volume /var/lib/woodpecker/

cd /mnt
sudo mkdir -p woodpeckerdata
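
Before moving on, it does not hurt to confirm that the new folder actually lives on the GlusterFS mount, for example:

df -hT /mnt                    # the filesystem type should show fuse.glusterfs
ls -ld /mnt/woodpeckerdata     # confirm the folder exists and check ownership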

Tip

Please watch the below video for the GlusterFS Replicated Volume Setup.

Prepare Woodpecker Environment

I am going to use docker-compose to prepare the environment file for deploying Woodpecker. The compose file is written in YAML (YAML Ain’t Markup Language) and has the extension .yml or .yaml

I am going to create application folders in the /opt directory on the manager node of our Docker Swarm cluster to store the configuration files, which are nothing but docker-compose files (.yml or .yaml).

Also, I am going to use the caddy overlay network created in the previous Caddy post.
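
If you have not created that network yet, it can be created once on a manager node. This is just a sketch and assumes you want the same attachable overlay network name (caddy) used throughout this post:

# create the shared overlay network that Caddy and the applications attach to
docker network create --driver=overlay --attachable caddy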

Now it’s time to create a folder, woodpecker, in the /opt directory to place the configuration file, i.e., the .yml file for Woodpecker.

Use the below commands to create the folder.

Go to the /opt directory by typing cd /opt in the Ubuntu console

Make a folder, woodpecker, in /opt with sudo mkdir -p woodpecker

Let’s get into the woodpecker folder by typing cd woodpecker

Now create a docker-compose file inside the woodpecker folder using sudo touch woodpecker.yml

Open the woodpecker.yml docker-compose file with the nano editor using sudo nano woodpecker.yml and copy and paste the below code into it.

Woodpecker Docker Compose

Here is the docker-compose file for Woodpecker. I am going to utilize SQLite as the back-end database for it.

version: "3.7"

services:
  woodpecker-server:
    image: woodpeckerci/woodpecker-server:latest-alpine
    volumes:
      - /mnt/woodpeckerdata:/var/lib/woodpecker/
    environment:
      - WOODPECKER_HOST=https://woodpecker.example.com
      - WOODPECKER_GITEA=true
      - WOODPECKER_GITEA_CLIENT=gitea-client-id
      - WOODPECKER_GITEA_SECRET=gitea-client-secret
      - WOODPECKER_GITEA_URL=https://gitea.example.com
      - WOODPECKER_AGENT_SECRET=agent-secret
      - WOODPECKER_ADMIN=username
      - WOODPECKER_REPO_OWNERS=username 
    networks:
      - caddy
    deploy:
      placement:
        constraints: [node.role == worker]
      replicas: 1
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
  woodpecker-agent:
    image: woodpeckerci/woodpecker-agent:latest-alpine
    command: agent
    depends_on:
      - woodpecker-server
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WOODPECKER_SERVER=woodpecker-server:9000
      - WOODPECKER_AGENT_SECRET=agent-secret
      - WOODPECKER_MAX_PROCS=10
      - WOODPECKER_BACKEND=docker
      - WOODPECKER_HOST=https://woodpecker.example.com
    networks:
      - caddy
    deploy:
      placement:
        constraints: [node.role == worker]
      replicas: 1
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
volumes:
  woodpeckerdata:
    driver: "local"
networks:
  caddy:
    external: true
    attachable: true
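
Note that agent-secret above is only a placeholder. The server and the agent must share the same WOODPECKER_AGENT_SECRET value, and a random secret can be generated, for example, with openssl:

# generate a random shared secret and paste it into both services
openssl rand -hex 32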

Tip

Please watch the below video to deploy Woodpecker CI in Docker Swarm Cluster.

Please find the documentation below to learn how to configure Woodpecker for the selected git provider. I am using Gitea as the provider.

https://docs.drone.io/server/provider/gitea/

Caddyfile – Woodpecker

The Caddyfile is a convenient Caddy configuration format for humans.

Caddyfile is easy to write, easy to understand, and expressive enough for most use cases.

Please find a production-ready Caddyfile for Woodpecker below.

Learn more about Caddyfile here to get familiar with it.

{
    email you@example.com
    default_sni woodpecker
    cert_issuer acme
    # Production acme directory
    acme_ca https://acme-v02.api.letsencrypt.org/directory
    # Staging acme directory
    #acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
    servers {
        metrics
        protocols h1 h2c h3
        strict_sni_host on
        trusted_proxies cloudflare {
            interval 12h
            timeout 15s
        }
    }
}
woodpecker.example.com {
    log {
        output file /var/log/caddy/woodpecker.log {
            roll_size 20mb
            roll_keep 2
            roll_keep_for 6h
        }
        format console
        level error
    }
    encode gzip zstd
    reverse_proxy woodpecker-server:8000
}

Please go to the Caddy post to get more insight into deploying it in the Docker Swarm cluster.
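
Before deploying, you can optionally sanity-check the Caddyfile syntax with the caddy binary itself, for example by running it in a throwaway container. The image below is the custom image used later in this post; use whichever Caddy image you actually deploy:

# validate the Caddyfile without starting the server
docker run --rm -v $PWD/Caddyfile:/etc/caddy/Caddyfile tuneitme/caddy \
    caddy validate --config /etc/caddy/Caddyfile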

Final Woodpecker Docker Compose (Including caddy server configuration)

Please find the full docker-compose file below. You can deploy as many sites as you want using it.

Don’t forget to map site data directories like /mnt/woodpeckerdata:/var/lib/woodpecker/ in the Caddy configuration caddy.yml.

I already wrote an article Caddy in Docker Swarm. Please go through if you want to learn more.

version: "3.7"

services:
  caddy:
    image: tuneitme/caddy
    ports:
      - "80:80"
      - "443:443"
    networks:
      - caddy
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - /mnt/caddydata:/data
      - /mnt/caddyconfig:/config
      - /mnt/caddylogs:/var/log/caddy
      - /mnt/woodpeckerdata:/var/lib/woodpecker/
    deploy:
      placement:
        constraints:
          - node.role == manager
      replicas: 1
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
  woodpecker-server:
    image: woodpeckerci/woodpecker-server:latest-alpine
    volumes:
      - /mnt/woodpeckerdata:/var/lib/woodpecker/
    environment:
      - WOODPECKER_HOST=https://woodpecker.example.com
      - WOODPECKER_GITEA=true
      - WOODPECKER_GITEA_CLIENT=gitea-client-id
      - WOODPECKER_GITEA_SECRET=gitea-client-secret
      - WOODPECKER_GITEA_URL=https://gitea.example.com
      - WOODPECKER_AGENT_SECRET=agent-secret
      - WOODPECKER_ADMIN=username
      - WOODPECKER_REPO_OWNERS=username 
    networks:
      - caddy
    deploy:
      placement:
        constraints: [node.role == worker]
      replicas: 1
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
  woodpecker-agent:
    image: woodpeckerci/woodpecker-agent:latest-alpine
    command: agent
    depends_on:
      - woodpecker-server
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WOODPECKER_SERVER=woodpecker-server:9000
      - WOODPECKER_AGENT_SECRET=agent-secret
      - WOODPECKER_MAX_PROCS=10
      - WOODPECKER_BACKEND=docker
      - WOODPECKER_HOST=https://woodpecker.example.com
    networks:
      - caddy
    deploy:
      placement:
        constraints: [node.role == worker]
      replicas: 1
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
volumes:
  caddydata:
    driver: "local"
  caddyconfig:
    driver: "local"
  caddylogs:
    driver: "local"
  woodpeckerdata:
    driver: "local"
networks:
  caddy:
    external: true
    attachable: true

Here I used a custom Caddy docker image with plugins such as Cloudflare DNS, Caddy Auth Portal, etc.

Please find the custom caddy docker image below.

Tuneit Caddy Docker Image

Deploy Woodpecker Stack using Docker Compose

Now it’s time to deploy the docker-compose file above, woodpecker.yml, using the below command:

docker stack deploy --compose-file woodpecker.yml woodpecker

In the above command, you have to replace woodpecker.yml with your docker-compose file name and woodpecker with whatever name you want to call this particular application stack.

With docker compose in docker swarm, whatever we deploy is called a docker stack, and a stack contains multiple services as per the requirement.

As mentioned earlier, I named my docker-compose file woodpecker.yml and named my application stack woodpecker.

Check the status of the stack by using docker stack ps woodpecker

Check the woodpecker stack logs using docker service logs woodpecker_woodpecker-server
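
For quick reference, the commands I use to check on the stack after deployment, assuming it was named woodpecker as above:

docker stack ls                                       # list deployed stacks
docker stack ps woodpecker                            # task status for the woodpecker stack
docker service logs -f woodpecker_woodpecker-server   # follow the server logs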

One thing we observe is that Caddy automatically redirects to https with a Let’s Encrypt generated certificate. The certificate information is stored in the /data directory.

I will be using this Caddy stack as a reverse proxy / load balancer for the applications I am going to deploy to the Docker Swarm cluster.

I also use the docker network caddy to access the applications externally.

Access / Configure Woodpecker

Now open any browser and type woodpecker.example.com to access the site. It will automatically be redirected to https://woodpecker.example.com/welcome (be sure to replace example.com with your actual domain name).

Make sure that you have a DNS entry for your application (woodpecker.example.com) in your DNS management application.
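
You can quickly confirm that the DNS record resolves before opening the browser, for example:

# check the DNS record for the Woodpecker hostname
dig +short woodpecker.example.com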


Deployment of Woodpecker behind Caddy in our Docker Swarm is successful!

If you enjoyed this tutorial, please share your input/thoughts by commenting below. It would help me bring more articles focused on open source software to self-host.

Stay tuned for other deployments in coming posts… 🙄