Deployment with Codeberg (Forgejo) and Nomad
Wed, 25 Sep 2024 20:35:30 +0200
Back in 2017 I wrote a post about continuous deployment with drone. I want to set up CI/CD again, but of course quite a few things have changed since then.
Keeping up with tech is not easy 😅.
Two options are open to me: either I self-host a Woodpecker instance and an agent wired to a Codeberg webhook, or, more simply, I try a self-hosted Codeberg runner (forgejo-runner) connected to Codeberg. Naturally, in the spirit of simplicity that drives me, I went with the latter. It remains to be seen whether Woodpecker and its plugins offer more possibilities (an opinion on the subject here: Switching from Woodpecker to Gitea Actions).
Forgejo, which used to be a soft fork of Gitea, has become a hard fork and now follows its own path (Forgejo forks its own path forward), but it has nevertheless implemented Gitea's Actions. Actions originally come from GitHub Actions; Gitea, which aims to be more or less a GitHub clone, implemented them too, but the syntax is not completely identical.
Codeberg is a non-profit association based in Germany and develops Forgejo in Go: FAQ
Below is the hcl I use on Nomad to deploy 2 containers on my NUC. The first container is a docker-in-docker: it runs the Docker daemon that will be used to build Docker images (hence DinD), and for that it needs the privileged variable set to true.
For this to work, you must add allow_privileged = true to the plugin section of the Nomad configuration on the NUC and restart Nomad (see the full configuration).
plugin "docker" {
config {
allow_privileged = true
volumes {
enabled = true
}
extra_labels = ["job_name", "job_id", "task_group_name", "task_name", "namespace", "node_name", "node_id"]
}
}
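Then restart Nomad; assuming it runs as a systemd service on the NUC (your setup may differ):
sudo systemctl restart nomad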
The 2nd container is the runner itself; it receives the Actions from Codeberg and uses the 1st container to execute the build tasks.
Before launching the hcl, run this command in a terminal to find the latest runner version and update the tag number if necessary, since there is no latest tag. As of this writing it is version 3.5.1.
echo RUNNER_VERSION=$(curl -X 'GET' https://code.forgejo.org/api/v1/repos/forgejo/runner/releases/latest | jq .name -r | cut -c 2-)
RUNNER_VERSION=3.5.1
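If you would rather not edit the tag by hand, the image reference can also be updated with sed; a minimal sketch, assuming the job file is saved as forgejo-runner.hcl as below:
RUNNER_VERSION=$(curl -X 'GET' https://code.forgejo.org/api/v1/repos/forgejo/runner/releases/latest | jq .name -r | cut -c 2-)
sed -i "s|forgejo/runner:[0-9.]*|forgejo/runner:${RUNNER_VERSION}|" forgejo-runner.hcl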
Here is the hcl, but don't run it just yet.
forgejo-runner.hcl
job "forgejo-runner" {
datacenters = ["dc1"]
type = "service"
group "home" {
count = 1
network {
mode = "bridge"
port "tcp" {
to = 2376 # container port the app runs on
}
}
task "docker-in-docker" {
driver = "docker"
constraint {
attribute = "${attr.unique.hostname}"
value = "nuc"
}
config {
image = "docker:dind"
privileged = true
volumes = [
"/data/volumes/forgejo-runner/certs:/certs"
]
ports = [
"tcp"
]
}
}
# export RUNNER_VERSION=$(curl -X 'GET' https://code.forgejo.org/api/v1/repos/forgejo/runner/releases/latest | jq .name -r | cut -c 2-)
task "forgejo-runner" {
driver = "docker"
constraint {
attribute = "${attr.unique.hostname}"
value = "nuc"
}
env {
DOCKER_HOST = "tcp://127.0.0.1:2376"
DOCKER_CERT_PATH = "/certs/client"
DOCKER_TLS_VERIFY = 1
}
config {
image = "code.forgejo.org/forgejo/runner:3.5.1"
command = "/bin/sh"
args = [
"-c",
"while ! nc -z 127.0.0.1 2376 </dev/null; do echo 'waiting for docker daemon...'; sleep 5; done; forgejo-runner -c /data/config.yml daemon",
]
# command = "/bin/sh"
# args = [
# "-c",
# "while : ; do sleep 1 ; done ;",
# ]
volumes = [
"/data/volumes/forgejo-runner/data:/data",
"/data/volumes/forgejo-runner/certs:/certs"
]
}
resources {
cpu = 500
memory = 600
}
service {
name = "forgejo-runner"
provider = "consul"
}
}
}
}
Before running it, edit the hcl: uncomment the command/args block that does the infinite sleep and comment out the one that launches forgejo-runner; this will let us get a shell inside the container. Now launch the job:
nomad job run forgejo-runner.hcl
Then connect to the server where the runner is scheduled and enter its container with docker exec -it ID bash. From there you can run the forgejo-runner register command by hand. Careful: the volume directory /data/volumes/forgejo-runner/data must be owned by user 1000 so the runner has write access.
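A minimal sketch of these steps (ID stands for the runner container's ID):
# on the NUC: give UID 1000 write access to the runner's data volume
sudo chown -R 1000:1000 /data/volumes/forgejo-runner/data
# find the runner container and open a shell inside it
docker ps | grep forgejo-runner
docker exec -it ID bash
# inside the container: register the runner against Codeberg
forgejo-runner register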
During registration, the runner asks for the Forgejo instance URL, here https://codeberg.org, and for the token, which you get from your account settings.
For the labels I kept the default configuration, which set the image "docker:docker://node:20-bullseye", and then added the label "ubuntu-22.04:docker://ghcr.io/catthehacker/ubuntu:act-22.04" following the docs linked further down. The information is written to a .runner file.
/data/volumes/forgejo-runner/data/.runner
{
  "WARNING": "This file is automatically generated by act-runner. Do not edit it manually unless you know what you are doing. Removing this file will cause act runner to re-register as a new runner.",
  "id": 2560,
  "uuid": "UUID",
  "name": "nuc",
  "token": "TOKEN_CODEBERG",
  "address": "https://codeberg.org",
  "labels": [
    "docker:docker://node:20-bookworm",
    "ubuntu-22.04:docker://ghcr.io/catthehacker/ubuntu:act-22.04"
  ]
}
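For reference, the registration can also be scripted instead of answering the prompts; a sketch assuming forgejo-runner accepts the same register flags as act_runner (check forgejo-runner register --help first):
forgejo-runner register --no-interactive \
  --instance https://codeberg.org \
  --token TOKEN_CODEBERG \
  --name nuc \
  --labels "docker:docker://node:20-bookworm,ubuntu-22.04:docker://ghcr.io/catthehacker/ubuntu:act-22.04"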
At this stage the runner could be started and would connect to Codeberg just fine, but it would be unable to run Docker-in-Docker builds. For the record, I spent hours on this bug page: Cannot connect to the Docker daemon at unix:///var/run/docker.sock 😑 No need to go over the details; here is what remains to be done.
Generate the YAML configuration file, which we will then modify:
forgejo-runner generate-config > config.yml
config.yml (modified)
# Example configuration file, it's safe to copy this as the default config file without any modification.
# You don't have to copy this file to your instance,
# just run `./act_runner generate-config > config.yaml` to generate a config file.

log:
  # The level of logging, can be trace, debug, info, warn, error, fatal
  level: info

runner:
  # Where to store the registration result.
  file: .runner
  # Execute how many tasks concurrently at the same time.
  capacity: 1
  # Extra environment variables to run jobs.
  envs:
    DOCKER_TLS_VERIFY: 1
    DOCKER_CERT_PATH: /certs/client
  # Extra environment variables to run jobs from a file.
  # It will be ignored if it's empty or the file doesn't exist.
  env_file: .env
  # The timeout for a job to be finished.
  # Please note that the Forgejo instance also has a timeout (3h by default) for the job.
  # So the job could be stopped by the Forgejo instance if it's timeout is shorter than this.
  timeout: 3h
  # The timeout for the runner to wait for running jobs to finish when
  # shutting down because a TERM or INT signal has been received. Any
  # running jobs that haven't finished after this timeout will be
  # cancelled.
  # If unset or zero the jobs will be cancelled immediately.
  shutdown_timeout: 3h
  # Whether skip verifying the TLS certificate of the instance.
  insecure: false
  # The timeout for fetching the job from the Forgejo instance.
  fetch_timeout: 5s
  # The interval for fetching the job from the Forgejo instance.
  fetch_interval: 2s
  # The interval for reporting the job status and logs to the Forgejo instance.
  report_interval: 1s
  # The labels of a runner are used to determine which jobs the runner can run, and how to run them.
  # Like: ["macos-arm64:host", "ubuntu-latest:docker://node:20-bookworm", "ubuntu-22.04:docker://node:20-bookworm"]
  # If it's empty when registering, it will ask for inputting labels.
  # If it's empty when execute `deamon`, will use labels in `.runner` file.
  labels: []

cache:
  # Enable cache server to use actions/cache.
  enabled: true
  # The directory to store the cache data.
  # If it's empty, the cache data will be stored in $HOME/.cache/actcache.
  dir: ""
  # The host of the cache server.
  # It's not for the address to listen, but the address to connect from job containers.
  # So 0.0.0.0 is a bad choice, leave it empty to detect automatically.
  host: ""
  # The port of the cache server.
  # 0 means to use a random available port.
  port: 0
  # The external cache server URL. Valid only when enable is true.
  # If it's specified, act_runner will use this URL as the ACTIONS_CACHE_URL rather than start a server by itself.
  # The URL should generally end with "/".
  external_server: ""

container:
  # Specifies the network to which the container will connect.
  # Could be host, bridge or the name of a custom network.
  # If it's empty, create a network automatically.
  network: "host"
  # Whether to create networks with IPv6 enabled. Requires the Docker daemon to be set up accordingly.
  # Only takes effect if "network" is set to "".
  enable_ipv6: false
  # Whether to use privileged mode or not when launching task containers (privileged mode is required for Docker-in-Docker).
  privileged: false
  # And other options to be used when the container is started (eg, --add-host=my.forgejo.url:host-gateway).
  #options:
  options: -v /certs/client:/certs/client
  # The parent directory of a job's working directory.
  # If it's empty, /workspace will be used.
  workdir_parent:
  # Volumes (including bind mounts) can be mounted to containers. Glob syntax is supported, see https://github.com/gobwas/glob
  # You can specify multiple volumes. If the sequence is empty, no volumes can be mounted.
  # For example, if you only allow containers to mount the `data` volume and all the json files in `/src`, you should change the config to:
  # valid_volumes:
  #   - data
  #   - /src/*.json
  # If you want to allow any volume, please use the following configuration:
  # valid_volumes:
  #   - '**'
  #valid_volumes: []
  valid_volumes:
    - /certs/client
  # overrides the docker client host with the specified one.
  # If it's empty, act_runner will find an available docker host automatically.
  # If it's "-", act_runner will find an available docker host automatically, but the docker host won't be mounted to the job containers and service containers.
  # If it's not empty or "-", the specified docker host will be used. An error will be returned if it doesn't work.
  docker_host: ""
  # Pull docker image(s) even if already present
  force_pull: false

host:
  # The parent directory of a job's working directory.
  # If it's empty, $HOME/.cache/act/ will be used.
  workdir_parent:
Note the environment variables DOCKER_TLS_VERIFY and DOCKER_CERT_PATH (plus options and valid_volumes), and above all network: "host", which allows the build containers to reach the dockerd of the docker-in-docker container.
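For emphasis, here is the relevant excerpt of the config.yml above:
runner:
  envs:
    DOCKER_TLS_VERIFY: 1
    DOCKER_CERT_PATH: /certs/client
container:
  network: "host"
  options: -v /certs/client:/certs/client
  valid_volumes:
    - /certs/client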
Next, check that everything works by running forgejo-runner -c /data/config.yml daemon directly inside the container; if all is well, the runner should show up in the Codeberg web interface.
Finally, exit the container, comment the command/args block with the while loop back out, uncomment the one that launches the runner, then stop the job with nomad job stop forgejo-runner and run nomad job run forgejo-runner.hcl again.
🎉 🧨 🎊
Actions must be enabled for each repo in the project settings: Repository units / Overview / check Enable integrated CI/CD pipelines with Forgejo Actions.
Then you can drop a test file at the root of the git repo, .forgejo/workflows/test.yaml, for example:
on: [push]
jobs:
  test:
    runs-on: docker
    steps:
      - run: echo All Good
Git push, and the run should appear in the project's Actions tab.
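For example:
git add .forgejo/workflows/test.yaml
git commit -m "add Forgejo Actions test workflow"
git push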
Now let's test by running the docker info command to check that everything works. Here is the new test.yaml:
.forgejo/workflows/test.yaml
name: Docker Image CI
on:
  push:
    branches: [ "main" ]
jobs:
  build:
    runs-on: ubuntu-22.04
    steps:
      - name: test
        run: docker info
After a git push, the runner should run docker info:
[Docker Image CI/build] |
[Docker Image CI/build] | Server:
[Docker Image CI/build] | Containers: 1
[Docker Image CI/build] | Running: 1
[Docker Image CI/build] | Paused: 0
[Docker Image CI/build] | Stopped: 0
[Docker Image CI/build] | Images: 1
[Docker Image CI/build] | Server Version: 27.3.1
[Docker Image CI/build] | Storage Driver: overlay2
[Docker Image CI/build] | Backing Filesystem: btrfs
[Docker Image CI/build] | Supports d_type: true
[Docker Image CI/build] | Using metacopy: true
[Docker Image CI/build] | Native Overlay Diff: false
[Docker Image CI/build] | userxattr: false
[Docker Image CI/build] | Logging Driver: json-file
[Docker Image CI/build] | Cgroup Driver: cgroupfs
[Docker Image CI/build] | Cgroup Version: 2
[Docker Image CI/build] | Plugins:
[Docker Image CI/build] | Volume: local
[Docker Image CI/build] | Network: bridge host ipvlan macvlan null overlay
[Docker Image CI/build] | Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
[Docker Image CI/build] | Swarm: inactive
[Docker Image CI/build] | Runtimes: io.containerd.runc.v2 runc
[Docker Image CI/build] | Default Runtime: runc
[Docker Image CI/build] | Init Binary: docker-init
[Docker Image CI/build] | containerd version: 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
[Docker Image CI/build] | runc version: v1.1.14-0-g2c9f560
[Docker Image CI/build] | init version: de40ad0
[Docker Image CI/build] | Security Options:
[Docker Image CI/build] | apparmor
[Docker Image CI/build] | seccomp
[Docker Image CI/build] | Profile: builtin
[Docker Image CI/build] | cgroupns
[Docker Image CI/build] | Kernel Version: 6.10.6-10-MANJARO
[Docker Image CI/build] | Operating System: Alpine Linux v3.20 (containerized)
[Docker Image CI/build] | OSType: linux
[Docker Image CI/build] | Architecture: x86_64
[Docker Image CI/build] | CPUs: 4
[Docker Image CI/build] | Total Memory: 15.55GiB
[Docker Image CI/build] | Name: 3d4855d7f793
[Docker Image CI/build] | ID: 858db76d-55ca-47b7-90c2-aae0c540978f
[Docker Image CI/build] | Docker Root Dir: /var/lib/docker
[Docker Image CI/build] | Debug Mode: false
[Docker Image CI/build] | Experimental: false
[Docker Image CI/build] | Insecure Registries:
[Docker Image CI/build] | 127.0.0.0/8
[Docker Image CI/build] | Live Restore Enabled: false
[Docker Image CI/build] | Product License: Community Engine
For comparison, here is docker info on my NUC, which gives a different result (Server Version, Operating System and Name). This shows that the network: "host" directive lets the job container talk to the Docker daemon embedded in the docker-in-docker container on the NUC 😅
Server:
Containers: 14
Running: 14
Paused: 0
Stopped: 0
Images: 15
Server Version: 27.1.2
Storage Driver: overlay2
Backing Filesystem: btrfs
Supports d_type: true
Using metacopy: true
Native Overlay Diff: false
userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 8fc6bcff51318944179630522a095cc9dbf9f353.m
runc version:
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: builtin
cgroupns
Kernel Version: 6.10.6-10-MANJARO
Operating System: Manjaro Linux
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.55GiB
Name: nuc
ID: 587bcdd1-c354-4028-ab5f-f38d5859588a
Docker Root Dir: /var/lib/docker
Debug Mode: false
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
We can finally do something real, for example build a Docker image after a git push and push it to the Docker Hub. Example with the repo of a blog site using gibson: https://codeberg.org/fredix/nostromo.social
.forgejo/workflows/ci.yaml
name: ci
on:
  push:
    branches: [ "main" ]
jobs:
  docker:
    runs-on: ubuntu-22.04
    steps:
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          push: true
          tags: ${{ secrets.DOCKERHUB_USERNAME }}/nostromo.social:latest
Of course, it would be better to build a new image on a push to a branch other than main.
This pipeline builds an image using the Dockerfile in the repo and pushes it to the Docker Hub (you need to have put your credentials in the secrets of your Codeberg account).
And here is a Nomad pipeline to update the Docker image in production when a tag is pushed:
.forgejo/workflows/deploy.yaml
name: cd
on:
  push:
    tags: 'v*'
jobs:
  deploy:
    runs-on: docker
    container:
      image: hashicorp/nomad:1.8
    env:
      NOMAD_ADDR: ${{ secrets.NOMAD_ADDR }}
      NOMAD_TOKEN: ${{ secrets.NOMAD_TOKEN }}
    steps:
      - name: wget
        run: wget https://codeberg.org/fredix/nomad/raw/branch/main/nostromo.hcl
      - name: deploy
        run: |
          nomad job stop nostromo
          sleep 5
          nomad job run nostromo.hcl
Let's go a bit further and set up a load balancer on the Caddy side that distributes requests across 2 containers on the NUC.
We stay with the nostromo.social blog and modify the container's hcl.
nostromo.hcl
job "nostromo" {
datacenters = ["dc1"]
type = "service"
group "home" {
count = 1
# Add an update stanza to enable rolling updates of the service
update {
max_parallel = 2
min_healthy_time = "30s"
healthy_deadline = "5m"
# Enable automatically reverting to the last stable job on a failed
# deployment.
auto_revert = true
}
network {
port "http" {
to = 8080 # container port the app runs on
host_network = "tailscale"
}
}
task "nostromo" {
driver = "docker"
constraint {
attribute = "${attr.unique.hostname}"
value = "nuc"
}
config {
image = "fredix/nostromo.social:latest"
ports = [
"http"
]
}
resources {
cpu = 100
memory = 64
}
service {
name = "nostromo"
provider = "consul"
port = "http"
tags = ["allocport=${NOMAD_HOST_PORT_http}"]
check {
type = "http"
name = "app_health"
path = "/"
interval = "20s"
timeout = "10s"
}
}
}
}
}
The count variable is bumped to count = 2, which tells Nomad to deploy 2 containers on the NUC.
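The corresponding change in nostromo.hcl (the listing above still shows the initial value):
group "home" {
  count = 2   # was 1
  # ...
}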
The caddy container's template is also modified. It now dynamically builds an ENDPOINT variable listing each Docker instance with its own port. Example on my server:
docker ps|grep -i nostromo-caddy
4d099c9c77d9 fredix/sleep "sleep infinity"
docker exec -it 4d099c9c77d9 env|grep -i endpoint
ENDPOINT=nostromo.service.consul:21029 nostromo.service.consul:31334
Note that this template also works if the target has only one container.
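In that case the rendered env file would simply contain a single endpoint, for instance (port illustrative):
ENDPOINT=nostromo.service.consul:21029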
nostromo-caddy.hcl
job "nostromo-caddy" {
datacenters = ["dc1"]
type = "service"
group "app" {
count = 1
task "nostromo-caddy" {
driver = "docker"
constraint {
attribute = "${attr.unique.hostname}"
value = "node1"
}
template {
data = <<EOH
# as service 'nostromo' is registered in Consul
# we want to grab its 'allocport' tag
# set a default value
ENDPOINT=""
{{$T_ENDPOINT := ""}}
{{- range $tag, $services := service "nostromo" | byTag -}}
{{if $tag | contains "allocport"}}
{{$allocId := index ($tag | split "=") 1}}
{{ $INSTANCE := (print "nostromo.service.consul:" $allocId) }}
{{ $T_ENDPOINT = (print $T_ENDPOINT $INSTANCE " ") }}
{{end}}
{{end}}
ENDPOINT="{{ $T_ENDPOINT }}"
EOH
destination = "secrets/file.env"
env = true
}
config {
image = "fredix/sleep"
labels = {
"caddy" = "nostromo.social"
"caddy.reverse_proxy" = "${ENDPOINT}"
# remove the following line when you have verified your setup
# Otherwise you risk being rate limited by let's encrypt
"caddy.tls.ca" = "https://acme-v02.api.letsencrypt.org/directory"
}
}
resources {
cpu = 10
memory = 10
}
service {
name = "nostromo-caddy"
tags = ["global", "app"]
provider = "consul"
}
}
}
}
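Assuming Caddy runs with the caddy-docker-proxy plugin and watches these labels (that plugin setup is not shown in this post), the labels above should render to roughly this Caddyfile, with the ports taken from the earlier example:
nostromo.social {
    reverse_proxy nostromo.service.consul:21029 nostromo.service.consul:31334
    tls {
        ca https://acme-v02.api.letsencrypt.org/directory
    }
}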
I modified the deployment pipeline to build a Docker image named after the pushed tag, then restart the 2 containers with Nomad.
.forgejo/workflows/deploy.yaml
name: cd
on:
  push:
    tags: 'v*'
jobs:
  build:
    runs-on: ubuntu-22.04
    steps:
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          push: true
          tags: ${{ secrets.DOCKERHUB_USERNAME }}/nostromo.social:${{ env.GITHUB_REF_NAME }}
  deploy:
    runs-on: docker
    container:
      image: hashicorp/nomad:1.8
    env:
      NOMAD_ADDR: ${{ secrets.NOMAD_ADDR }}
      NOMAD_TOKEN: ${{ secrets.NOMAD_TOKEN }}
    steps:
      - name: wget & sed
        run: |
          wget https://codeberg.org/fredix/nomad/raw/branch/main/nostromo.hcl
          sed -i "s/latest/${{ env.GITHUB_REF_NAME }}/g" nostromo.hcl
      - name: deploy
        run: nomad job run nostromo.hcl
Just tag the repo and push the tag to trigger the CI/CD:
git tag -a v0.3.3 -m "update to 0.3.3"
git push origin v0.3.3
A new image carrying the tag number is built and pushed to the Docker Hub, then a Nomad container fetches the hcl, replaces latest with the tag version, and updates the 2 containers on the NUC.
F I N
PS: following my tests, Codeberg blocked my IP for making too many requests… 🤭
PS2: I will probably not write for a while, but stay tuned, as they say.
(This text was written with VNote)