Compare commits

78 Commits

Author SHA1 Message Date
8a9b3db287 Gramps: upgrade to 25.7.0 2025-07-02 13:43:33 +03:00
a72c67f070 Wakapi: install 2.14.0
And transfer data from local
2025-07-01 11:21:05 +03:00
47745b7bc9 RSS-Bridge: install version 2025-06-03 2025-06-30 19:18:45 +03:00
c568f00db1 Miniflux: install and configure rss reader 2025-06-28 12:12:19 +03:00
99b6959c84 Tasks: add quick commands for authelia 2025-06-28 11:00:32 +03:00
fa65726096 Authelia: upgrade to 4.39.4 2025-06-28 10:02:57 +03:00
f9eaf7a41e Rename encrypted vars to secrets 2025-06-28 09:59:04 +03:00
d825b1f391 Netdata: upgrade to 2.5.4 2025-06-28 09:57:19 +03:00
b296a3f2fe Netdata: upgrade to 2.5.3 2025-06-22 09:34:57 +03:00
8ff89c9ee1 Gitea: upgrade to 1.24.2 2025-06-22 09:31:46 +03:00
62a4e598bd Gitea: upgrade to v1.24.0 2025-06-11 20:48:51 +03:00
b65aaa5072 Gramps: upgrade to v25.6.0 2025-06-11 20:48:27 +03:00
98b7aff274 Gramps: upgrade to v25.5.2 2025-05-24 12:04:45 +03:00
6eaf7f7390 Netdata: upgrade to 2.5.1 2025-05-21 21:24:22 +03:00
32e80282ef Update ansible roles 2025-05-17 17:17:01 +03:00
c8bd9f4ec3 Netdata: add fail2ban monitoring 2025-05-17 16:58:12 +03:00
d3d189e284 Gitea: upgrade to 1.23.8 2025-05-17 13:51:10 +03:00
71fe688ef8 Caddy: upgrade to 2.10.0 2025-05-17 13:50:47 +03:00
c5d0f96bdf Netdata + Authelia: add monitoring 2025-05-17 13:33:35 +03:00
eea8db6499 Netdata + Caddy: add monitoring for http-server 2025-05-17 11:55:38 +03:00
7893349da4 Netdata: refactoring as docker compose app 2025-05-17 10:27:41 +03:00
a4c61f94e6 Gramps: upgrade to 25.5.1 (with Gramps API 3.0.0) 2025-05-12 15:56:23 +03:00
da0a261ddd Outline: upgrade to 0.84.0 2025-05-12 12:58:21 +03:00
b9954d1bba Authelia: upgrade to 4.39.3 2025-05-12 12:55:41 +03:00
3a23c08f37 Remove keycloak 2025-05-07 12:51:05 +03:00
d1500ea373 Outline: use oidc from authelia 2025-05-07 12:37:07 +03:00
a77fefcded Authelia: introduce to protect system services 2025-05-07 11:23:22 +03:00
41fac2c4f9 Remove caddy system-wide installation 2025-05-06 12:00:32 +03:00
280ea24dea Caddy: web proxy in docker container 2025-05-06 11:50:26 +03:00
855bafee5b Format files with ansible-lint 2025-05-06 11:20:00 +03:00
adde4e32c1 Networks: create internal docker network for proxy server
Prepare to use caddy in docker
2025-05-06 11:11:48 +03:00
527067146f Gramps: refactor app
Move scripts, configs and data to separate user space
2025-05-06 10:25:38 +03:00
93326907d2 Remove unused var 2025-05-06 10:02:39 +03:00
bcad87c6e0 Remove legacy files 2025-05-05 20:57:47 +03:00
5d127d27ef Homepage: refactoring 2025-05-05 20:40:32 +03:00
2d6cb3ffe0 Format files with ansible-lint 2025-05-05 18:04:54 +03:00
e68920c0e2 Netdata as playbook 2025-05-05 18:02:14 +03:00
c5c15341b8 Outline: update to 0.83.0 2025-05-05 17:00:48 +03:00
cd4a7177d7 Outline: configure backups 2025-05-05 16:53:09 +03:00
daeef1bc4b Backups: rewrite backup script 2025-05-05 11:48:49 +03:00
ddae18f8b3 Gitea: configure backups again 2025-05-05 11:39:06 +03:00
8c8657fdd8 Gramps: configure backup again 2025-05-05 11:26:54 +03:00
c4b0200dc6 Outline: configure mailer 2025-05-04 14:02:28 +03:00
38bafd7186 Remove old configs 2025-05-04 11:12:44 +03:00
c6db39b55a Remove old playbooks and configs 2025-05-04 11:05:18 +03:00
528512e665 Refactor outline app: deploy with ansible 2025-05-04 10:59:41 +03:00
0e05d3e066 Make consistent container names 2025-05-04 10:26:17 +03:00
4221fb0009 Refactor keycloac app: deploy with ansible 2025-05-04 10:18:18 +03:00
255ac33e04 Configure gitea mailer 2025-05-03 19:39:02 +03:00
0bdd2c2543 Update gitea to 1.23.7 2025-05-03 16:58:38 +03:00
155d065dd0 Add backups for gitea 2025-05-03 16:56:22 +03:00
9a3e646d8a Refactor gitea app: deploy with ansible 2025-05-03 14:44:23 +03:00
f4b5fcb0f1 Format playbooks with ansible-lint 2025-05-03 10:41:00 +03:00
3054836085 Fix cronjob for backups 2025-05-03 10:35:33 +03:00
838f959fd9 Remove apps dir in files, simplify layout 2025-05-02 19:52:48 +03:00
5b60af6061 gramps: fix redis host and baclups 2025-05-02 19:45:48 +03:00
d1eae9b5b5 Configure baclup for sqlite databases 2025-05-02 19:05:17 +03:00
76328bf6c6 Update gramps to v25.4.1
- Inline vars into docker compose file
- Replace redis with valkey
2025-05-02 18:40:13 +03:00
a31cbbe18e Add backups with gobackup and restic 2025-05-02 17:34:31 +03:00
132da79fab Add utils for backups: task. restic. gobaclup 2025-05-02 10:57:42 +03:00
676f6626f2 Update netdata to 2.4.0 2025-05-02 10:33:56 +03:00
dda5cb4449 Update eget installation path 2025-05-02 10:31:41 +03:00
4ae238a09a Drop music service
Move music to homelab
2025-04-21 14:49:15 +03:00
fcedbdbe3d Fix docker image tag and push 2025-04-13 11:03:37 +03:00
e5ad8fda80 Add playbook for homepage app deploy 2025-04-13 10:44:32 +03:00
c5a7db6a55 Update navidrome 0.55.2 2025-04-06 10:01:14 +03:00
30f7a913ab Update navidrome 0.55.1 2025-03-15 10:19:12 +03:00
5a3c32ba73 Update navidrome 2025-02-21 10:19:30 +03:00
9f075fac11 Update applications 2025-02-16 10:12:02 +03:00
5e427e688d Remove obsolete applications 2025-01-25 16:49:28 +03:00
32437de3f1 Update navidrome 2025-01-25 16:48:53 +03:00
88b47fb32d Update netdata 2025-01-25 16:48:15 +03:00
cad1c9bd89 Reduce worker count to 2 2025-01-25 16:48:05 +03:00
ba2891b18c Add email config to gramps 2025-01-07 11:49:17 +03:00
45185fd8a8 Add Gramps web application 2025-01-05 20:44:47 +03:00
0ce778871e Configure navidrome 2024-12-24 18:23:14 +03:00
6fc30522d0 Add music app 2024-12-23 17:08:15 +03:00
87e13973ec Add rclone with eget and rclone docker plugin role 2024-12-23 15:26:32 +03:00
80 changed files with 6257 additions and 765 deletions

@@ -1,3 +1,5 @@
---
exclude_paths:
- 'galaxy.roles/'
- ".ansible/"
- "galaxy.roles/"
- "Taskfile.yml"

@@ -9,6 +9,9 @@ indent_size = 4
[*.yml]
indent_size = 2
[*.yml.j2]
indent_size = 2
[Vagrantfile]
indent_size = 2

.gitignore

@@ -1,7 +1,11 @@
/.ansible
/.idea
/.vagrant
/.vscode
/galaxy.roles/
/ansible-vault-password-file
/temp
*.retry
test_smtp.py

@@ -3,13 +3,13 @@
Virtual server configuration for home projects.
> This project does not contain the most optimal solutions.
> But they have helped me maintain a server for my personal projects for seven years now.
> But they have helped me maintain a server for my personal projects for many years now.
## Requirements
- [ansible](https://docs.ansible.com/ansible/latest/getting_started/index.html)
- [invoke](https://www.pyinvoke.org/)
- [task](https://taskfile.dev/)
- [yq](https://github.com/mikefarah/yq)
## Installation
@@ -20,37 +20,21 @@ $ ansible-galaxy install --role-file requirements.yml
## Structure
- A separate user is created for each application.
- A separate user is created for each application (optional).
- An SSH key is used for access.
- Docker is used to run and isolate applications. Yandex Docker Registry is configured for pulling images.
- External traffic goes through the proxy server [Caddy](https://caddyserver.com/).
- Sensitive data in `vars/vars.yaml` is encrypted with Ansible Vault.
- [netdata](https://github.com/netdata/netdata) is installed for server monitoring.
## Common commands
## DNS configuration
Application configuration (when adding a new application):
```bash
$ task configure-apps
```
Monitoring configuration (when updating netdata):
```bash
$ task configure-monitoring
```
In the Yandex organization: https://admin.yandex.ru/domains/vakhrushev.me?action=set_dns&uid=46045840
## Application deployment
Applications available for deployment:
Deploy all applications with ansible:
```bash
invoke --list
```
Run a deploy command, for example:
```bash
invoke deploy:gitea
ansible-playbook -i production.yml --diff playbook-gitea.yml
```

@@ -12,17 +12,58 @@ vars:
sh: 'yq .ungrouped.hosts.server.ansible_user {{.HOSTS_FILE}}'
REMOTE_HOST:
sh: 'yq .ungrouped.hosts.server.ansible_host {{.HOSTS_FILE}}'
AUTHELIA_DOCKER: 'docker run --rm -v $PWD:/data authelia/authelia:4.39.4 authelia'
tasks:
install-roles:
cmds:
- ansible-galaxy role install --role-file requirements.yml --force
ssh:
cmds:
- ssh {{.REMOTE_USER}}@{{.REMOTE_HOST}}
edit-vars:
btop:
cmds:
- ansible-vault edit vars/vars.yml
env:
EDITOR: micro
- ssh {{.REMOTE_USER}}@{{.REMOTE_HOST}} -t btop
vars-decrypt:
cmds:
- ansible-vault decrypt vars/vars.yml
vars-encrypt:
cmds:
- ansible-vault encrypt vars/vars.yml
authelia-cli:
cmds:
- "{{.AUTHELIA_DOCKER}} {{.CLI_ARGS}}"
authelia-validate-config:
vars:
DEST_FILE: "temp/configuration.yml"
cmds:
- >
ansible localhost
--module-name template
--args "src=files/authelia/configuration.yml.j2 dest={{.DEST_FILE}}"
--extra-vars "@vars/secrets.yml"
- defer: rm -f {{.DEST_FILE}}
- >
{{.AUTHELIA_DOCKER}}
validate-config --config /data/{{.DEST_FILE}}
authelia-gen-random-string:
cmds:
- >
{{.AUTHELIA_DOCKER}}
crypto rand --length 32 --charset alphanumeric
authelia-gen-secret-and-hash:
cmds:
- >
{{.AUTHELIA_DOCKER}}
crypto hash generate pbkdf2 --variant sha512 --random --random.length 72 --random.charset rfc3986
format-py-files:
cmds:

Vagrantfile

@@ -1,28 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
# This file is used to start a test virtual machine
# on which the server-configuration roles can be tried out.
ENV["LC_ALL"] = "en_US.UTF-8"
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/bionic64"
config.vm.provider "virtualbox" do |v|
v.memory = 2048
v.cpus = 2
end
config.vm.network "private_network", ip: "192.168.50.10"
# SSH public key for accessing the machine
config.vm.provision "shell" do |s|
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
s.inline = <<-SHELL
echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
SHELL
end
end

@@ -1,3 +0,0 @@
WEB_SERVER_PORT=9494
USER_UID=1000
USER_GID=1000

@@ -1 +0,0 @@
data/

@@ -1,16 +0,0 @@
services:
server:
image: gitea/gitea:1.22.6
restart: unless-stopped
environment:
- "USER_UID=${USER_UID}"
- "USER_GID=${USER_GID}"
- "GITEA__server__SSH_PORT=2222"
volumes:
- ./data:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
ports:
- "${WEB_SERVER_PORT}:3000"
- "2222:22"

@@ -1,5 +0,0 @@
WEB_SERVER_PORT=9595
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=password
USER_UID=1000
USER_GID=1000

@@ -1 +0,0 @@
data/

@@ -1,22 +0,0 @@
# Images: https://quay.io/repository/keycloak/keycloak?tab=tags&tag=latest
# Configuration: https://www.keycloak.org/server/all-config
# NB
# - In production there were permission problems with the data directory; it had to be set to 777
# - KC_HOSTNAME_ADMIN_URL must be set together with KC_HOSTNAME_URL, otherwise 403 errors occur
services:
keycloak:
image: quay.io/keycloak/keycloak:24.0.4
command: ["start-dev"]
restart: unless-stopped
environment:
KEYCLOAK_ADMIN: "${KEYCLOAK_ADMIN}"
KEYCLOAK_ADMIN_PASSWORD: "${KEYCLOAK_ADMIN_PASSWORD}"
KC_HOSTNAME_URL: "https://kk.vakhrushev.me"
KC_HOSTNAME_ADMIN_URL: "https://kk.vakhrushev.me"
ports:
- "${WEB_SERVER_PORT}:8080"
volumes:
- "./data:/opt/keycloak/data"

@@ -1,16 +0,0 @@
# Images: https://quay.io/repository/keycloak/keycloak?tab=tags&tag=latest
# Configuration: https://www.keycloak.org/server/all-config
services:
keycloak:
image: quay.io/keycloak/keycloak:24.0.4
command: ["start-dev"]
restart: unless-stopped
environment:
KEYCLOAK_ADMIN: "${KEYCLOAK_ADMIN}"
KEYCLOAK_ADMIN_PASSWORD: "${KEYCLOAK_ADMIN_PASSWORD}"
ports:
- "${WEB_SERVER_PORT}:8080"
volumes:
- "./data:/opt/keycloak/data"

@@ -1,60 +0,0 @@
services:
outline-app:
image: outlinewiki/outline:0.81.1
restart: unless-stopped
ports:
- "${WEB_SERVER_PORT}:3000"
depends_on:
- postgres
- redis
environment:
NODE_ENV: '${NODE_ENV}'
SECRET_KEY: '${SECRET_KEY}'
UTILS_SECRET: '${UTILS_SECRET}'
DATABASE_URL: '${DATABASE_URL}'
PGSSLMODE: '${PGSSLMODE}'
REDIS_URL: '${REDIS_URL}'
URL: '${URL}'
FILE_STORAGE: '${FILE_STORAGE}'
FILE_STORAGE_UPLOAD_MAX_SIZE: '262144000'
AWS_ACCESS_KEY_ID: '${AWS_ACCESS_KEY_ID}'
AWS_SECRET_ACCESS_KEY: '${AWS_SECRET_ACCESS_KEY}'
AWS_REGION: '${AWS_REGION}'
AWS_S3_ACCELERATE_URL: '${AWS_S3_ACCELERATE_URL}'
AWS_S3_UPLOAD_BUCKET_URL: '${AWS_S3_UPLOAD_BUCKET_URL}'
AWS_S3_UPLOAD_BUCKET_NAME: '${AWS_S3_UPLOAD_BUCKET_NAME}'
AWS_S3_FORCE_PATH_STYLE: '${AWS_S3_FORCE_PATH_STYLE}'
AWS_S3_ACL: '${AWS_S3_ACL}'
OIDC_CLIENT_ID: '${OIDC_CLIENT_ID}'
OIDC_CLIENT_SECRET: '${OIDC_CLIENT_SECRET}'
OIDC_AUTH_URI: '${OIDC_AUTH_URI}'
OIDC_TOKEN_URI: '${OIDC_TOKEN_URI}'
OIDC_USERINFO_URI: '${OIDC_USERINFO_URI}'
OIDC_LOGOUT_URI: '${OIDC_LOGOUT_URI}'
OIDC_USERNAME_CLAIM: '${OIDC_USERNAME_CLAIM}'
OIDC_DISPLAY_NAME: '${OIDC_DISPLAY_NAME}'
redis:
image: redis:7.2-bookworm
restart: unless-stopped
ports:
- "6379:6379"
volumes:
- ./redis.conf:/redis.conf
command: ["redis-server", "/redis.conf"]
postgres:
image: postgres:16.3-bookworm
restart: unless-stopped
ports:
- "5432:5432"
volumes:
- ./data/postgres:/var/lib/postgresql/data
environment:
POSTGRES_USER: '${POSTGRES_USER}'
POSTGRES_PASSWORD: '${POSTGRES_PASSWORD}'
POSTGRES_DB: '${POSTGRES_DB}'
volumes:
database-data:

File diff suppressed because it is too large

@@ -0,0 +1,15 @@
services:
authelia_app:
container_name: 'authelia_app'
image: 'docker.io/authelia/authelia:4.39.4'
user: '{{ user_create_result.uid }}:{{ user_create_result.group }}'
restart: 'unless-stopped'
networks:
- "{{ web_proxy_network }}"
volumes:
- "{{ config_dir }}:/config"
networks:
{{ web_proxy_network }}:
external: true

files/authelia/users.yml

@@ -0,0 +1,37 @@
$ANSIBLE_VAULT;1.1;AES256
33323463653739626134366261626263396338333966376262313263613131343962326432613263
6430616564313432666436376432383539626231616438330a646161313364353566373833353337
64633361306564646564663736663937303435356332316432666135353863393439663235646462
3136303031383835390a396531366636386133656366653835633833633733326561383066656464
31613933333731643065316130303561383563626636346633396266346332653234373732326535
39663765353938333835646563663633393835633163323435303164663261303661666435306239
34353264633736383565306336633565376436646536623835613330393466363935303031346664
63626465656435383162633761333131393934666632336539386435613362353135383538643836
66373261306139353134393839333539366531393163393266386531613732366431663865343134
64363933616338663966353431396133316561653366396130653232636561343739336265386339
38646238653436663531633465616164303633356233363433623038666465326339656238653233
36323162303233633935646132353835336364303833636563346535316166346533636536656665
64323030616665316133363739393364306462316135636630613262646436643062373138656431
35663334616239623534383564643738616264373762663034376332323637626337306639653830
65386339666465343931303933663561643664313364386662656663643336636264636333666435
66366531613538363233346137383462326334306534333564636232393931393433386664363036
39623134636331646536323531653063326231613363366562643561353939633062663132303035
38303265326136303633666566613966636133666336396133333033643434303138303065666463
36643765316134636133333937396332613233383932663265386264623133633364646237346465
32623965653662336335366639643765393636623236323036396538353666646132393636663536
65646638643236313762373135336430643731643961386264303134366633353934366431333430
34313362633836613166336437323835626537653237666139383230663835626630623933383834
32636136663830643661363663303136393733646133626538333836666135653936323832336433
64396234396430326334656561393264366263313730306631383037643135613765373861356561
37363933383238316232336564363364376637626630373963666262376165343838303530653764
64343937666365646666363939383662313334656236326566373565643637313434616261616635
35646131396432623534396133666239613036386332663038353531313935636139363136666562
62616234663935383262626235313337623332333733383035666633393965336535316234323561
37353563623138343339616565653465633633383563636631356333303435376536393634343031
63653062303432366230643333353634383061313135616533643935316263393366653335353964
36363135356365373064613338393261326265396330323930613538326330663532616163666564
39313631633434353938626637626462376139383536306531633733646331303030333238373161
36336364383939663132366461383264346631366566363638333738386235623264623331343738
34316436393363323165396430343163653837623035626236313663643038336666633535666462
33323566353062653964643362363233346264396365336637376661323730336437333031363830
38303962646561346262

@@ -0,0 +1,37 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
echo "Backup: perform gitea backup"
su --login gitea --command '/home/gitea/backup.sh'
echo "Backup: perform outline backup"
su --login outline --command '/home/outline/backup.sh'
echo "Backup: perform gramps backup"
su --login gramps --command '/home/gramps/backup.sh'
echo "Backup: perform miniflux backup"
su --login miniflux --command '/home/miniflux/backup.sh'
echo "Backup: perform wakapi backup"
su --login wakapi --command '/home/wakapi/backup.sh'
echo "Backup: send backups to remote storage with restic"
restic-shell.sh backup --verbose /home/gitea/backups /home/outline/backups /home/gramps/backups /home/miniflux/backups /home/wakapi/backups \
&& restic-shell.sh check \
&& restic-shell.sh forget --compact --prune --keep-daily 90 --keep-monthly 36 \
&& restic-shell.sh check
echo "Backup: send notification"
curl -s -X POST 'https://api.telegram.org/bot{{ notifications_tg_bot_token }}/sendMessage' \
-d 'chat_id={{ notifications_tg_chat_id }}' \
-d 'parse_mode=HTML' \
-d 'text=<b>{{ notifications_name }}</b>: бекап успешно завершен!'
echo -e "\nBackup: done"

@@ -0,0 +1,12 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
export RESTIC_REPOSITORY={{ restic_repository }}
export RESTIC_PASSWORD={{ restic_password }}
export AWS_ACCESS_KEY_ID={{ restic_s3_access_key }}
export AWS_SECRET_ACCESS_KEY={{ restic_s3_access_secret }}
export AWS_DEFAULT_REGION={{ restic_s3_region }}
restic "$@"

@@ -0,0 +1,93 @@
# -------------------------------------------------------------------
# Global options
# -------------------------------------------------------------------
{
grace_period 15s
admin :2019
# Enable metrics in Prometheus format
# https://caddyserver.com/docs/metrics
metrics
}
# -------------------------------------------------------------------
# Applications
# -------------------------------------------------------------------
vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to homepage_app:80
}
}
auth.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy authelia_app:9091
}
status.vakhrushev.me, :29999 {
tls anwinged@ya.ru
forward_auth authelia_app:9091 {
uri /api/authz/forward-auth
copy_headers Remote-User Remote-Groups Remote-Email Remote-Name
}
reverse_proxy netdata:19999
}
git.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to gitea_app:3000
}
}
outline.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to outline_app:3000
}
}
gramps.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to gramps_app:5000
}
}
miniflux.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to miniflux_app:8080
}
}
wakapi.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to wakapi_app:3000
}
}
rssbridge.vakhrushev.me {
tls anwinged@ya.ru
forward_auth authelia_app:9091 {
uri /api/authz/forward-auth
copy_headers Remote-User Remote-Groups Remote-Email Remote-Name
}
reverse_proxy rssbridge_app:80
}
}
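The Caddyfile above repeats a single pattern per service: a site block with `tls` and `reverse_proxy`, plus `forward_auth` through Authelia for protected apps. A new protected service would presumably follow the same shape; the hostname and upstream below are hypothetical:

```
example.vakhrushev.me {
    tls anwinged@ya.ru
    # Gate the app behind Authelia, same as status/rssbridge above
    forward_auth authelia_app:9091 {
        uri /api/authz/forward-auth
        copy_headers Remote-User Remote-Groups Remote-Email Remote-Name
    }
    reverse_proxy example_app:8080
}
```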

@@ -0,0 +1,22 @@
services:
{{ service_name }}:
image: caddy:2.10.0
restart: unless-stopped
container_name: {{ service_name }}
ports:
- "80:80"
- "443:443"
- "443:443/udp"
cap_add:
- NET_ADMIN
volumes:
- {{ caddy_file_dir }}:/etc/caddy
- {{ data_dir }}:/data
- {{ config_dir }}:/config
networks:
- "{{ web_proxy_network }}"
networks:
{{ web_proxy_network }}:
external: true

@@ -1,25 +0,0 @@
$ANSIBLE_VAULT;1.1;AES256
36373937313831396330393762313931643536363765353936333166376465343033376564613538
3235356131646564393664376535646561323435363330660a353632613334633461383562306662
37373439373636383834383464316337656531626663393830323332613136323438313762656435
6338353136306338640a636539363766663030356432663361636438386538323238373235663766
37393035356137653763373364623836346439663062313061346537353634306138376231633635
30363465663836373830366231636265663837646137313764316364623637623333346636363934
33666164343832653536303262663635616632663561633739636561333964653862313131613232
39316239376566633964633064393532613935306161666666323337343130393861306532623666
39653463323532333932646262663862313961393430306663643866623865346666313731366331
32663262636132663238313630373937663936326532643730613161376565653263633935393363
63373163346566363639396432653132646334643031323532613238666531363630353266303139
31613138303131343364343438663762343936393165356235646239343039396637643666653065
31363163623863613533663366303664623134396134393765636435633464373731653563646537
39373766626338646564356463623531373337303861383862613966323132656639326533356533
38346263326361656563386333663531663232623436653866383865393964353363353563653532
65343130383262386262393634636338313732623565666531303636303433333638323230346565
61633837373531343530383238396162373632623135333263323234623833383731336463333063
62656533636237303962653238653934346430366533636436646264306461323639666665623839
32643637623630613863323335666138303538313236343932386461346433656432626433663365
38376666623839393630343637386336623334623064383131316331333564363934636662633630
31363337393339643738306363306538373133626564613765643138666237303330613036666537
61363838353736613531613436313730313936363564303464346661376137303133633062613932
36383631303739306264386663333338666235346339623338333663386663303439363362376239
35626136646634363430

files/gitea/backup.sh.j2

@@ -0,0 +1,21 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
echo "Gitea: backup data with gitea dump"
(cd "{{ base_dir }}" && \
docker compose exec \
-u "{{ user_create_result.uid }}:{{ user_create_result.group }}" \
-w /backups gitea_app \
gitea dump -c /data/gitea/conf/app.ini \
)
echo "Gitea: remove old backups"
keep-files.py "{{ backups_dir }}" --keep 3
echo "Gitea: done."

@@ -0,0 +1,33 @@
services:
gitea_app:
image: gitea/gitea:1.24.2
restart: unless-stopped
container_name: gitea_app
ports:
- "127.0.0.1:{{ gitea_port }}:3000"
- "2222:22"
volumes:
- {{ data_dir }}:/data
- {{ backups_dir }}:/backups
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
networks:
- "{{ web_proxy_network }}"
environment:
- "USER_UID=${USER_UID}"
- "USER_GID=${USER_GID}"
- "GITEA__server__SSH_PORT=2222"
# Mailer
- "GITEA__mailer__ENABLED=true"
- "GITEA__mailer__PROTOCOL=smtp+starttls"
- "GITEA__mailer__SMTP_ADDR={{ postbox_host }}"
- "GITEA__mailer__SMTP_PORT={{ postbox_port }}"
- "GITEA__mailer__USER={{ postbox_user }}"
- "GITEA__mailer__PASSWD={{ postbox_pass }}"
- "GITEA__mailer__FROM=gitea@vakhrushev.me"
networks:
{{ web_proxy_network }}:
external: true

files/gramps/backup.sh.j2

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
echo "Gramps: backup data with gobackups"
(cd "{{ base_dir }}" && gobackup perform --config "{{ gobackup_config }}")
echo "Gramps: done."

@@ -0,0 +1,69 @@
# See versions: https://github.com/gramps-project/gramps-web/pkgs/container/grampsweb
services:
gramps_app: &gramps_app
image: ghcr.io/gramps-project/grampsweb:25.7.0
container_name: gramps_app
depends_on:
- gramps_redis
restart: unless-stopped
networks:
- "gramps_network"
- "{{ web_proxy_network }}"
volumes:
- "{{ (data_dir, 'gramps_db') | path_join }}:/root/.gramps/grampsdb" # persist Gramps database
- "{{ (data_dir, 'gramps_users') | path_join }}:/app/users" # persist user database
- "{{ (data_dir, 'gramps_index') | path_join }}:/app/indexdir" # persist search index
- "{{ (data_dir, 'gramps_thumb_cache') | path_join }}:/app/thumbnail_cache" # persist thumbnails
- "{{ (data_dir, 'gramps_cache') | path_join }}:/app/cache" # persist export and report caches
- "{{ (data_dir, 'gramps_secret') | path_join }}:/app/secret" # persist flask secret
- "{{ (data_dir, 'gramps_media') | path_join }}:/app/media" # persist media files
environment:
GRAMPSWEB_TREE: "Gramps" # will create a new tree if not exists
GRAMPSWEB_SECRET_KEY: "{{ gramps_secret_key }}"
GRAMPSWEB_BASE_URL: "https://gramps.vakhrushev.me"
GRAMPSWEB_REGISTRATION_DISABLED: "true"
GRAMPSWEB_CELERY_CONFIG__broker_url: "redis://gramps_redis:6379/0"
GRAMPSWEB_CELERY_CONFIG__result_backend: "redis://gramps_redis:6379/0"
GRAMPSWEB_RATELIMIT_STORAGE_URI: "redis://gramps_redis:6379/1"
GUNICORN_NUM_WORKERS: 2
# Email options
GRAMPSWEB_EMAIL_HOST: "{{ postbox_host }}"
GRAMPSWEB_EMAIL_PORT: "{{ postbox_port }}"
GRAMPSWEB_EMAIL_HOST_USER: "{{ postbox_user }}"
GRAMPSWEB_EMAIL_HOST_PASSWORD: "{{ postbox_pass }}"
GRAMPSWEB_EMAIL_USE_TLS: "false"
GRAMPSWEB_DEFAULT_FROM_EMAIL: "gramps@vakhrushev.me"
# media storage at s3
GRAMPSWEB_MEDIA_BASE_DIR: "s3://av-gramps-media-storage"
AWS_ENDPOINT_URL: "{{ gramps_s3_endpoint }}"
AWS_ACCESS_KEY_ID: "{{ gramps_s3_access_key_id }}"
AWS_SECRET_ACCESS_KEY: "{{ gramps_s3_secret_access_key }}"
AWS_DEFAULT_REGION: "{{ gramps_s3_region }}"
gramps_celery:
<<: *gramps_app # YAML merge key copying the entire grampsweb service config
container_name: gramps_celery
depends_on:
- gramps_redis
restart: unless-stopped
ports: []
networks:
- "gramps_network"
command: celery -A gramps_webapi.celery worker --loglevel=INFO --concurrency=2
gramps_redis:
image: valkey/valkey:8.1.1-alpine
container_name: gramps_redis
restart: unless-stopped
networks:
- "gramps_network"
networks:
gramps_network:
driver: bridge
{{ web_proxy_network }}:
external: true

@@ -0,0 +1,32 @@
# https://gobackup.github.io/configuration
models:
gramps:
compress_with:
type: 'tgz'
storages:
local:
type: 'local'
path: '{{ backups_dir }}'
keep: 3
databases:
users:
type: sqlite
path: "{{ (data_dir, 'gramps_users/users.sqlite') | path_join }}"
search_index:
type: sqlite
path: "{{ (data_dir, 'gramps_index/search_index.db') | path_join }}"
sqlite:
type: sqlite
path: "{{ (data_dir, 'gramps_db/59a0f3d6-1c3d-4410-8c1d-1c9c6689659f/sqlite.db') | path_join }}"
undo:
type: sqlite
path: "{{ (data_dir, 'gramps_db/59a0f3d6-1c3d-4410-8c1d-1c9c6689659f/undo.db') | path_join }}"
archive:
includes:
- "{{ data_dir }}"
excludes:
- "{{ (data_dir, 'gramps_cache') | path_join }}"
- "{{ (data_dir, 'gramps_thumb_cache') | path_join }}"
- "{{ (data_dir, 'gramps_tmp') | path_join }}"

@@ -0,0 +1,14 @@
services:
homepage_app:
image: "{{ registry_homepage_web_image }}"
container_name: homepage_app
restart: unless-stopped
ports:
- "127.0.0.1:{{ homepage_port }}:80"
networks:
- "{{ web_proxy_network }}"
networks:
{{ web_proxy_network }}:
external: true

files/keep-files.py

@@ -0,0 +1,48 @@
#!/usr/bin/env python3
import os
import argparse
def main():
parser = argparse.ArgumentParser(
description="Retain specified number of files in a directory sorted by name, delete others."
)
parser.add_argument("directory", type=str, help="Path to target directory")
parser.add_argument(
"--keep", type=int, default=2, help="Number of files to retain (default: 2)"
)
args = parser.parse_args()
# Validate arguments
if args.keep < 0:
parser.error("--keep value cannot be negative")
if not os.path.isdir(args.directory):
parser.error(f"Directory not found: {args.directory}")
# Get list of files (exclude subdirectories)
files = []
with os.scandir(args.directory) as entries:
for entry in entries:
if entry.is_file():
files.append(entry.name)
# Sort files alphabetically
sorted_files = sorted(files)
# Identify files to delete
to_delete = sorted_files[:-args.keep] if args.keep > 0 else sorted_files.copy()
# Delete files and print results
for filename in to_delete:
filepath = os.path.join(args.directory, filename)
try:
os.remove(filepath)
print(f"Deleted: {filepath}")
except Exception as e:
print(f"Error deleting {filepath}: {str(e)}")
if __name__ == "__main__":
main()
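keep-files.py above implements a simple sort-by-name retention rule ("keep the N newest"), which the gitea and miniflux backup scripts rely on via `keep-files.py ... --keep 3`. A standalone sketch of that selection logic (the function name is illustrative, not from the repo):

```python
# Sketch of the retention rule in keep-files.py: sort names ascending,
# keep the last `keep` entries, mark the rest for deletion.
def split_retention(filenames, keep):
    """Return (to_delete, to_keep) under a sort-by-name retention policy."""
    ordered = sorted(filenames)
    if keep <= 0:
        return ordered, []
    return ordered[:-keep], ordered[-keep:]

names = ["dump_20250501.sql.gz", "dump_20250503.sql.gz", "dump_20250502.sql.gz"]
to_delete, to_keep = split_retention(names, keep=2)
print(to_delete)  # ['dump_20250501.sql.gz']
print(to_keep)    # ['dump_20250502.sql.gz', 'dump_20250503.sql.gz']
```

This works because the timestamped backup names sort chronologically; a file named outside that convention would be ordered (and possibly deleted) by plain string order.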

@@ -0,0 +1,25 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="miniflux_postgres_${TIMESTAMP}.sql.gz"
echo "miniflux: backing up postgresql database"
docker compose --file "{{ base_dir }}/docker-compose.yml" exec \
miniflux_postgres \
pg_dump \
-U "{{ miniflux_postgres_user }}" \
"{{ miniflux_postgres_database }}" \
| gzip > "{{ postgres_backups_dir }}/${BACKUP_FILE}"
echo "miniflux: PostgreSQL backup saved to {{ postgres_backups_dir }}/${BACKUP_FILE}"
echo "miniflux: removing old backups"
# Keep only the 3 most recent backups
keep-files.py "{{ postgres_backups_dir }}" --keep 3
echo "miniflux: backup completed successfully."

@@ -0,0 +1,52 @@
# See sample https://miniflux.app/docs/docker.html#docker-compose
# See env https://miniflux.app/docs/configuration.html
services:
miniflux_app:
image: miniflux/miniflux:2.2.10
container_name: miniflux_app
depends_on:
miniflux_postgres:
condition: service_healthy
networks:
- "miniflux_network"
- "{{ web_proxy_network }}"
environment:
- DATABASE_URL=postgres://{{ miniflux_postgres_user }}:{{ miniflux_postgres_password }}@miniflux_postgres/{{ miniflux_postgres_database }}?sslmode=disable
- RUN_MIGRATIONS=1
- CREATE_ADMIN=1
- ADMIN_USERNAME={{ miniflux_admin_user }}
- ADMIN_PASSWORD={{ miniflux_admin_password }}
- BASE_URL=https://miniflux.vakhrushev.me
- DISABLE_LOCAL_AUTH=1
- OAUTH2_OIDC_DISCOVERY_ENDPOINT=https://auth.vakhrushev.me
- OAUTH2_CLIENT_ID={{ miniflux_oidc_client_id }}
- OAUTH2_CLIENT_SECRET={{ miniflux_oidc_client_secret }}
- OAUTH2_OIDC_PROVIDER_NAME=Authelia
- OAUTH2_PROVIDER=oidc
- OAUTH2_REDIRECT_URL=https://miniflux.vakhrushev.me/oauth2/oidc/callback
- OAUTH2_USER_CREATION=1
- METRICS_COLLECTOR=1
- METRICS_ALLOWED_NETWORKS=0.0.0.0/0
miniflux_postgres:
image: postgres:16.3-bookworm
container_name: miniflux_postgres
environment:
- POSTGRES_USER={{ miniflux_postgres_user }}
- POSTGRES_PASSWORD={{ miniflux_postgres_password }}
- POSTGRES_DB={{ miniflux_postgres_database }}
networks:
- "miniflux_network"
volumes:
- {{ postgres_data_dir }}:/var/lib/postgresql/data
healthcheck:
test: ["CMD", "pg_isready", "-U", "miniflux"]
interval: 10s
start_period: 30s
networks:
miniflux_network:
driver: bridge
{{ web_proxy_network }}:
external: true

@@ -0,0 +1,37 @@
services:
netdata:
image: netdata/netdata:v2.5.4
container_name: netdata
restart: unless-stopped
cap_add:
- SYS_PTRACE
- SYS_ADMIN
security_opt:
- apparmor:unconfined
networks:
- "{{ web_proxy_network }}"
volumes:
- "{{ config_dir }}:/etc/netdata"
- "{{ (data_dir, 'lib') | path_join }}:/var/lib/netdata"
- "{{ (data_dir, 'cache') | path_join }}:/var/cache/netdata"
# Netdata system volumes
- "/:/host/root:ro,rslave"
- "/etc/group:/host/etc/group:ro"
- "/etc/localtime:/etc/localtime:ro"
- "/etc/os-release:/host/etc/os-release:ro"
- "/etc/passwd:/host/etc/passwd:ro"
- "/proc:/host/proc:ro"
- "/run/dbus:/run/dbus:ro"
- "/sys:/host/sys:ro"
- "/var/log:/host/var/log:ro"
- "/var/run:/host/var/run:ro"
- "/var/run/docker.sock:/var/run/docker.sock:ro"
environment:
PGID: "{{ netdata_docker_group_output.stdout | default(999) }}"
NETDATA_EXTRA_DEB_PACKAGES: "fail2ban"
networks:
{{ web_proxy_network }}:
external: true

@@ -0,0 +1,3 @@
jobs:
- name: fail2ban
update_every: 5 # Collect Fail2Ban jails statistics every 5 seconds

@@ -0,0 +1,22 @@
update_every: 5
autodetection_retry: 0
jobs:
- name: caddyproxy
url: http://caddyproxy:2019/metrics
selector:
allow:
- "caddy_http_*"
- name: authelia
url: http://authelia_app:9959/metrics
selector:
allow:
- "authelia_*"
- name: miniflux
url: http://miniflux_app:8080/metrics
selector:
allow:
- "miniflux_*"

@@ -0,0 +1,687 @@
# netdata configuration
#
# You can download the latest version of this file, using:
#
# wget -O /etc/netdata/netdata.conf http://localhost:19999/netdata.conf
# or
# curl -o /etc/netdata/netdata.conf http://localhost:19999/netdata.conf
#
# You can uncomment and change any of the options below.
# The value shown in the commented settings, is the default value.
#
# global netdata configuration
[global]
# run as user = netdata
# host access prefix = /host
# pthread stack size = 8MiB
# cpu cores = 2
# libuv worker threads = 16
# profile = standalone
hostname = {{ host_name }}
# glibc malloc arena max for plugins = 1
# glibc malloc arena max for netdata = 1
# crash reports = all
# timezone = Etc/UTC
# OOM score = 0
# process scheduling policy = keep
# is ephemeral node = no
# has unstable connection = no
[db]
# enable replication = yes
# replication period = 1d
# replication step = 1h
# replication threads = 1
# replication prefetch = 10
# update every = 1s
# db = dbengine
# memory deduplication (ksm) = auto
# cleanup orphan hosts after = 1h
# cleanup ephemeral hosts after = off
# cleanup obsolete charts after = 1h
# gap when lost iterations above = 1
# dbengine page type = gorilla
# dbengine page cache size = 32MiB
# dbengine extent cache size = off
# dbengine enable journal integrity check = no
# dbengine use all ram for caches = no
# dbengine out of memory protection = 391.99MiB
# dbengine use direct io = yes
# dbengine journal v2 unmount time = 2m
# dbengine pages per extent = 109
# storage tiers = 3
# dbengine tier backfill = new
# dbengine tier 1 update every iterations = 60
# dbengine tier 2 update every iterations = 60
# dbengine tier 0 retention size = 1024MiB
# dbengine tier 0 retention time = 14d
# dbengine tier 1 retention size = 1024MiB
# dbengine tier 1 retention time = 3mo
# dbengine tier 2 retention size = 1024MiB
# dbengine tier 2 retention time = 2y
# extreme cardinality protection = yes
# extreme cardinality keep instances = 1000
# extreme cardinality min ephemerality = 50
[directories]
# config = /etc/netdata
# stock config = /usr/lib/netdata/conf.d
# log = /var/log/netdata
# web = /usr/share/netdata/web
# cache = /var/cache/netdata
# lib = /var/lib/netdata
# cloud.d = /var/lib/netdata/cloud.d
# plugins = "/usr/libexec/netdata/plugins.d" "/etc/netdata/custom-plugins.d"
# registry = /var/lib/netdata/registry
# home = /etc/netdata
# stock health config = /usr/lib/netdata/conf.d/health.d
# health config = /etc/netdata/health.d
[logs]
# facility = daemon
# logs flood protection period = 1m
# logs to trigger flood protection = 1000
# level = info
# debug = /var/log/netdata/debug.log
# daemon = /var/log/netdata/daemon.log
# collector = /var/log/netdata/collector.log
# access = /var/log/netdata/access.log
# health = /var/log/netdata/health.log
# debug flags = 0x0000000000000000
[environment variables]
# PATH = /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin
# PYTHONPATH =
# TZ = :/etc/localtime
[host labels]
# name = value
[cloud]
# conversation log = no
# scope = full
# query threads = 6
# proxy = env
[ml]
# enabled = auto
# maximum num samples to train = 21600
# minimum num samples to train = 900
# train every = 3h
# number of models per dimension = 18
# delete models older than = 7d
# num samples to diff = 1
# num samples to smooth = 3
# num samples to lag = 5
# random sampling ratio = 0.20000
# maximum number of k-means iterations = 1000
# dimension anomaly score threshold = 0.99000
# host anomaly rate threshold = 1.00000
# anomaly detection grouping method = average
# anomaly detection grouping duration = 5m
# num training threads = 1
# flush models batch size = 256
# dimension anomaly rate suppression window = 15m
# dimension anomaly rate suppression threshold = 450
# enable statistics charts = yes
# hosts to skip from training = !*
# charts to skip from training = netdata.*
# stream anomaly detection charts = yes
[health]
# silencers file = /var/lib/netdata/health.silencers.json
# enabled = yes
# enable stock health configuration = yes
# use summary for notifications = yes
# default repeat warning = off
# default repeat critical = off
# in memory max health log entries = 1000
# health log retention = 5d
# script to execute on alarm = /usr/libexec/netdata/plugins.d/alarm-notify.sh
# enabled alarms = *
# run at least every = 10s
# postpone alarms during hibernation for = 1m
[web]
#| >>> [web].default port <<<
#| migrated from: [global].default port
# default port = 19999
# ssl key = /etc/netdata/ssl/key.pem
# ssl certificate = /etc/netdata/ssl/cert.pem
# tls version = 1.3
# tls ciphers = none
# ses max tg_des_window = 15
# des max tg_des_window = 15
# mode = static-threaded
# listen backlog = 4096
# bind to = *
# bearer token protection = no
# disconnect idle clients after = 1m
# timeout for first request = 1m
# accept a streaming request every = off
# respect do not track policy = no
# x-frame-options response header =
# allow connections from = localhost *
# allow connections by dns = heuristic
# allow dashboard from = localhost *
# allow dashboard by dns = heuristic
# allow badges from = *
# allow badges by dns = heuristic
# allow streaming from = *
# allow streaming by dns = heuristic
# allow netdata.conf from = localhost fd* 10.* 192.168.* 172.16.* 172.17.* 172.18.* 172.19.* 172.20.* 172.21.* 172.22.* 172.23.* 172.24.* 172.25.* 172.26.* 172.27.* 172.28.* 172.29.* 172.30.* 172.31.* UNKNOWN
# allow netdata.conf by dns = no
# allow management from = localhost
# allow management by dns = heuristic
# enable gzip compression = yes
# gzip compression strategy = default
# gzip compression level = 3
# ssl skip certificate verification = no
# web server threads = 6
# web server max sockets = 262144
[registry]
# enabled = no
# registry db file = /var/lib/netdata/registry/registry.db
# registry log file = /var/lib/netdata/registry/registry-log.db
# registry save db every new entries = 1000000
# registry expire idle persons = 1y
# registry domain =
# registry to announce = https://registry.my-netdata.io
# registry hostname = 7171b7f9fc69
# verify browser cookies support = yes
# enable cookies SameSite and Secure = yes
# max URL length = 1024
# max URL name length = 50
# netdata management api key file = /var/lib/netdata/netdata.api.key
# allow from = *
# allow by dns = heuristic
[pulse]
# extended = no
# update every = 1s
[plugins]
# idlejitter = yes
# netdata pulse = yes
# profile = no
# tc = yes
# diskspace = yes
# proc = yes
# cgroups = yes
# timex = yes
# statsd = yes
# enable running new plugins = yes
# check for new plugins every = 1m
# slabinfo = no
# freeipmi = no
# python.d = yes
# go.d = yes
# apps = yes
# systemd-journal = yes
# network-viewer = yes
# charts.d = yes
# debugfs = yes
# perf = yes
# ioping = yes
[statsd]
# update every (flushInterval) = 1s
# udp messages to process at once = 10
# create private charts for metrics matching = *
# max private charts hard limit = 1000
# set charts as obsolete after = off
# decimal detail = 1000
# disconnect idle tcp clients after = 10m
# private charts hidden = no
# histograms and timers percentile (percentThreshold) = 95.00000
# dictionaries max unique dimensions = 200
# add dimension for number of events received = no
# gaps on gauges (deleteGauges) = no
# gaps on counters (deleteCounters) = no
# gaps on meters (deleteMeters) = no
# gaps on sets (deleteSets) = no
# gaps on histograms (deleteHistograms) = no
# gaps on timers (deleteTimers) = no
# gaps on dictionaries (deleteDictionaries) = no
# statsd server max TCP sockets = 262144
# listen backlog = 4096
# default port = 8125
# bind to = udp:localhost tcp:localhost
[plugin:idlejitter]
# loop time = 20ms
[plugin:timex]
# update every = 10s
# clock synchronization state = yes
# time offset = yes
[plugin:proc]
# /proc/net/dev = yes
# /proc/pagetypeinfo = no
# /proc/stat = yes
# /proc/uptime = yes
# /proc/loadavg = yes
# /proc/sys/fs/file-nr = yes
# /proc/sys/kernel/random/entropy_avail = yes
# /run/reboot_required = yes
# /proc/pressure = yes
# /proc/interrupts = yes
# /proc/softirqs = yes
# /proc/vmstat = yes
# /proc/meminfo = yes
# /sys/kernel/mm/ksm = yes
# /sys/block/zram = yes
# /sys/devices/system/edac/mc = yes
# /sys/devices/pci/aer = yes
# /sys/devices/system/node = yes
# /proc/net/wireless = yes
# /proc/net/sockstat = yes
# /proc/net/sockstat6 = yes
# /proc/net/netstat = yes
# /proc/net/sctp/snmp = yes
# /proc/net/softnet_stat = yes
# /proc/net/ip_vs/stats = yes
# /sys/class/infiniband = yes
# /proc/net/stat/conntrack = yes
# /proc/net/stat/synproxy = yes
# /proc/diskstats = yes
# /proc/mdstat = yes
# /proc/net/rpc/nfsd = yes
# /proc/net/rpc/nfs = yes
# /proc/spl/kstat/zfs/arcstats = yes
# /sys/fs/btrfs = yes
# ipc = yes
# /sys/class/power_supply = yes
# /sys/class/drm = yes
[plugin:cgroups]
# update every = 1s
# check for new cgroups every = 10s
# use unified cgroups = auto
# max cgroups to allow = 1000
# max cgroups depth to monitor = 0
# enable by default cgroups matching = !*/init.scope !/system.slice/run-*.scope *user.slice/docker-* !*user.slice* *.scope !/machine.slice/*/.control !/machine.slice/*/payload* !/machine.slice/*/supervisor /machine.slice/*.service */kubepods/pod*/* */kubepods/*/pod*/* */*-kubepods-pod*/* */*-kubepods-*-pod*/* !*kubepods* !*kubelet* !*/vcpu* !*/emulator !*.mount !*.partition !*.service !*.service/udev !*.socket !*.slice !*.swap !*.user !/ !/docker !*/libvirt !/lxc !/lxc/*/* !/lxc.monitor* !/lxc.pivot !/lxc.payload !*lxcfs.service/.control !/machine !/qemu !/system !/systemd !/user *
# enable by default cgroups names matching = *
# search for cgroups in subpaths matching = !*/init.scope !*-qemu !*.libvirt-qemu !/init.scope !/system !/systemd !/user !/lxc/*/* !/lxc.monitor !/lxc.payload/*/* !/lxc.payload.* *
# script to get cgroup names = /usr/libexec/netdata/plugins.d/cgroup-name.sh
# script to get cgroup network interfaces = /usr/libexec/netdata/plugins.d/cgroup-network
# run script to rename cgroups matching = !/ !*.mount !*.socket !*.partition /machine.slice/*.service !*.service !*.slice !*.swap !*.user !init.scope !*.scope/vcpu* !*.scope/emulator *.scope *docker* *lxc* *qemu* */kubepods/pod*/* */kubepods/*/pod*/* */*-kubepods-pod*/* */*-kubepods-*-pod*/* !*kubepods* !*kubelet* *.libvirt-qemu *
# cgroups to match as systemd services = !/system.slice/*/*.service /system.slice/*.service
[plugin:proc:diskspace]
# remove charts of unmounted disks = yes
# update every = 1s
# check for new mount points every = 15s
# exclude space metrics on paths = /dev /dev/shm /proc/* /sys/* /var/run/user/* /run/lock /run/user/* /snap/* /var/lib/docker/* /var/lib/containers/storage/* /run/credentials/* /run/containerd/* /rpool /rpool/*
# exclude space metrics on filesystems = *gvfs *gluster* *s3fs *ipfs *davfs2 *httpfs *sshfs *gdfs *moosefs fusectl autofs cgroup cgroup2 hugetlbfs devtmpfs fuse.lxcfs
# exclude inode metrics on filesystems = msdosfs msdos vfat overlayfs aufs* *unionfs
# space usage for all disks = auto
# inodes usage for all disks = auto
[plugin:tc]
# script to run to get tc values = /usr/libexec/netdata/plugins.d/tc-qos-helper.sh
[plugin:python.d]
# update every = 1s
# command options =
[plugin:go.d]
# update every = 1s
# command options =
[plugin:apps]
# update every = 1s
# command options =
[plugin:systemd-journal]
# update every = 1s
# command options =
[plugin:network-viewer]
# update every = 1s
# command options =
[plugin:charts.d]
# update every = 1s
# command options =
[plugin:debugfs]
# update every = 1s
# command options =
[plugin:perf]
# update every = 1s
# command options =
[plugin:ioping]
# update every = 1s
# command options =
[plugin:proc:/proc/net/dev]
# compressed packets for all interfaces = no
# disable by default interfaces matching = lo fireqos* *-ifb fwpr* fwbr* fwln* ifb4*
[plugin:proc:/proc/stat]
# cpu utilization = yes
# per cpu core utilization = no
# cpu interrupts = yes
# context switches = yes
# processes started = yes
# processes running = yes
# keep per core files open = yes
# keep cpuidle files open = yes
# core_throttle_count = auto
# package_throttle_count = no
# cpu frequency = yes
# cpu idle states = no
# core_throttle_count filename to monitor = /host/sys/devices/system/cpu/%s/thermal_throttle/core_throttle_count
# package_throttle_count filename to monitor = /host/sys/devices/system/cpu/%s/thermal_throttle/package_throttle_count
# scaling_cur_freq filename to monitor = /host/sys/devices/system/cpu/%s/cpufreq/scaling_cur_freq
# time_in_state filename to monitor = /host/sys/devices/system/cpu/%s/cpufreq/stats/time_in_state
# schedstat filename to monitor = /host/proc/schedstat
# cpuidle name filename to monitor = /host/sys/devices/system/cpu/cpu%zu/cpuidle/state%zu/name
# cpuidle time filename to monitor = /host/sys/devices/system/cpu/cpu%zu/cpuidle/state%zu/time
# filename to monitor = /host/proc/stat
[plugin:proc:/proc/uptime]
# filename to monitor = /host/proc/uptime
[plugin:proc:/proc/loadavg]
# filename to monitor = /host/proc/loadavg
# enable load average = yes
# enable total processes = yes
[plugin:proc:/proc/sys/fs/file-nr]
# filename to monitor = /host/proc/sys/fs/file-nr
[plugin:proc:/proc/sys/kernel/random/entropy_avail]
# filename to monitor = /host/proc/sys/kernel/random/entropy_avail
[plugin:proc:/proc/pressure]
# base path of pressure metrics = /proc/pressure
# enable cpu some pressure = yes
# enable cpu full pressure = no
# enable memory some pressure = yes
# enable memory full pressure = yes
# enable io some pressure = yes
# enable io full pressure = yes
# enable irq some pressure = no
# enable irq full pressure = yes
[plugin:proc:/proc/interrupts]
# interrupts per core = no
# filename to monitor = /host/proc/interrupts
[plugin:proc:/proc/softirqs]
# interrupts per core = no
# filename to monitor = /host/proc/softirqs
[plugin:proc:/proc/vmstat]
# filename to monitor = /host/proc/vmstat
# swap i/o = auto
# disk i/o = yes
# memory page faults = yes
# out of memory kills = yes
# system-wide numa metric summary = auto
# transparent huge pages = auto
# zswap i/o = auto
# memory ballooning = auto
# kernel same memory = auto
[plugin:proc:/sys/devices/system/node]
# directory to monitor = /host/sys/devices/system/node
# enable per-node numa metrics = auto
[plugin:proc:/proc/meminfo]
# system ram = yes
# system swap = auto
# hardware corrupted ECC = auto
# committed memory = yes
# writeback memory = yes
# kernel memory = yes
# slab memory = yes
# hugepages = auto
# transparent hugepages = auto
# memory reclaiming = yes
# high low memory = yes
# cma memory = auto
# direct maps = yes
# filename to monitor = /host/proc/meminfo
[plugin:proc:/sys/kernel/mm/ksm]
# /sys/kernel/mm/ksm/pages_shared = /host/sys/kernel/mm/ksm/pages_shared
# /sys/kernel/mm/ksm/pages_sharing = /host/sys/kernel/mm/ksm/pages_sharing
# /sys/kernel/mm/ksm/pages_unshared = /host/sys/kernel/mm/ksm/pages_unshared
# /sys/kernel/mm/ksm/pages_volatile = /host/sys/kernel/mm/ksm/pages_volatile
[plugin:proc:/sys/devices/system/edac/mc]
# directory to monitor = /host/sys/devices/system/edac/mc
[plugin:proc:/sys/class/pci/aer]
# enable root ports = no
# enable pci slots = no
[plugin:proc:/proc/net/wireless]
# filename to monitor = /host/proc/net/wireless
# status for all interfaces = auto
# quality for all interfaces = auto
# discarded packets for all interfaces = auto
# missed beacon for all interface = auto
[plugin:proc:/proc/net/sockstat]
# ipv4 sockets = auto
# ipv4 TCP sockets = auto
# ipv4 TCP memory = auto
# ipv4 UDP sockets = auto
# ipv4 UDP memory = auto
# ipv4 UDPLITE sockets = auto
# ipv4 RAW sockets = auto
# ipv4 FRAG sockets = auto
# ipv4 FRAG memory = auto
# update constants every = 1m
# filename to monitor = /host/proc/net/sockstat
[plugin:proc:/proc/net/sockstat6]
# ipv6 TCP sockets = auto
# ipv6 UDP sockets = auto
# ipv6 UDPLITE sockets = auto
# ipv6 RAW sockets = auto
# ipv6 FRAG sockets = auto
# filename to monitor = /host/proc/net/sockstat6
[plugin:proc:/proc/net/netstat]
# bandwidth = auto
# input errors = auto
# multicast bandwidth = auto
# broadcast bandwidth = auto
# multicast packets = auto
# broadcast packets = auto
# ECN packets = auto
# TCP reorders = auto
# TCP SYN cookies = auto
# TCP out-of-order queue = auto
# TCP connection aborts = auto
# TCP memory pressures = auto
# TCP SYN queue = auto
# TCP accept queue = auto
# filename to monitor = /host/proc/net/netstat
[plugin:proc:/proc/net/snmp]
# ipv4 packets = auto
# ipv4 fragments sent = auto
# ipv4 fragments assembly = auto
# ipv4 errors = auto
# ipv4 TCP connections = auto
# ipv4 TCP packets = auto
# ipv4 TCP errors = auto
# ipv4 TCP opens = auto
# ipv4 TCP handshake issues = auto
# ipv4 UDP packets = auto
# ipv4 UDP errors = auto
# ipv4 ICMP packets = auto
# ipv4 ICMP messages = auto
# ipv4 UDPLite packets = auto
# filename to monitor = /host/proc/net/snmp
[plugin:proc:/proc/net/snmp6]
# ipv6 packets = auto
# ipv6 fragments sent = auto
# ipv6 fragments assembly = auto
# ipv6 errors = auto
# ipv6 UDP packets = auto
# ipv6 UDP errors = auto
# ipv6 UDPlite packets = auto
# ipv6 UDPlite errors = auto
# bandwidth = auto
# multicast bandwidth = auto
# broadcast bandwidth = auto
# multicast packets = auto
# icmp = auto
# icmp redirects = auto
# icmp errors = auto
# icmp echos = auto
# icmp group membership = auto
# icmp router = auto
# icmp neighbor = auto
# icmp mldv2 = auto
# icmp types = auto
# ect = auto
# filename to monitor = /host/proc/net/snmp6
[plugin:proc:/proc/net/sctp/snmp]
# established associations = auto
# association transitions = auto
# fragmentation = auto
# packets = auto
# packet errors = auto
# chunk types = auto
# filename to monitor = /host/proc/net/sctp/snmp
[plugin:proc:/proc/net/softnet_stat]
# softnet_stat per core = no
# filename to monitor = /host/proc/net/softnet_stat
[plugin:proc:/proc/net/ip_vs_stats]
# IPVS bandwidth = yes
# IPVS connections = yes
# IPVS packets = yes
# filename to monitor = /host/proc/net/ip_vs_stats
[plugin:proc:/sys/class/infiniband]
# dirname to monitor = /host/sys/class/infiniband
# bandwidth counters = yes
# packets counters = yes
# errors counters = yes
# hardware packets counters = auto
# hardware errors counters = auto
# monitor only active ports = auto
# disable by default interfaces matching =
# refresh ports state every = 30s
[plugin:proc:/proc/net/stat/nf_conntrack]
# filename to monitor = /host/proc/net/stat/nf_conntrack
# netfilter new connections = no
# netfilter connection changes = no
# netfilter connection expectations = no
# netfilter connection searches = no
# netfilter errors = no
# netfilter connections = yes
[plugin:proc:/proc/sys/net/netfilter/nf_conntrack_max]
# filename to monitor = /host/proc/sys/net/netfilter/nf_conntrack_max
# read every seconds = 10
[plugin:proc:/proc/sys/net/netfilter/nf_conntrack_count]
# filename to monitor = /host/proc/sys/net/netfilter/nf_conntrack_count
[plugin:proc:/proc/net/stat/synproxy]
# SYNPROXY cookies = auto
# SYNPROXY SYN received = auto
# SYNPROXY connections reopened = auto
# filename to monitor = /host/proc/net/stat/synproxy
[plugin:proc:/proc/diskstats]
# enable new disks detected at runtime = yes
# performance metrics for physical disks = auto
# performance metrics for virtual disks = auto
# performance metrics for partitions = no
# bandwidth for all disks = auto
# operations for all disks = auto
# merged operations for all disks = auto
# i/o time for all disks = auto
# queued operations for all disks = auto
# utilization percentage for all disks = auto
# extended operations for all disks = auto
# backlog for all disks = auto
# bcache for all disks = auto
# bcache priority stats update every = off
# remove charts of removed disks = yes
# path to get block device = /host/sys/block/%s
# path to get block device bcache = /host/sys/block/%s/bcache
# path to get virtual block device = /host/sys/devices/virtual/block/%s
# path to get block device infos = /host/sys/dev/block/%lu:%lu/%s
# path to device mapper = /host/dev/mapper
# path to /dev/disk = /host/dev/disk
# path to /sys/block = /host/sys/block
# path to /dev/disk/by-label = /host/dev/disk/by-label
# path to /dev/disk/by-id = /host/dev/disk/by-id
# path to /dev/vx/dsk = /host/dev/vx/dsk
# name disks by id = no
# preferred disk ids = *
# exclude disks = loop* ram*
# filename to monitor = /host/proc/diskstats
# performance metrics for disks with major 252 = yes
[plugin:proc:/proc/mdstat]
# faulty devices = yes
# nonredundant arrays availability = yes
# mismatch count = auto
# disk stats = yes
# operation status = yes
# make charts obsolete = yes
# filename to monitor = /host/proc/mdstat
# mismatch_cnt filename to monitor = /host/sys/block/%s/md/mismatch_cnt
[plugin:proc:/proc/net/rpc/nfsd]
# filename to monitor = /host/proc/net/rpc/nfsd
[plugin:proc:/proc/net/rpc/nfs]
# filename to monitor = /host/proc/net/rpc/nfs
[plugin:proc:/proc/spl/kstat/zfs/arcstats]
# filename to monitor = /host/proc/spl/kstat/zfs/arcstats
[plugin:proc:/sys/fs/btrfs]
# path to monitor = /host/sys/fs/btrfs
# check for btrfs changes every = 1m
# physical disks allocation = auto
# data allocation = auto
# metadata allocation = auto
# system allocation = auto
# commit stats = auto
# error stats = auto
[plugin:proc:ipc]
# message queues = yes
# semaphore totals = yes
# shared memory totals = yes
# msg filename to monitor = /host/proc/sysvipc/msg
# shm filename to monitor = /host/proc/sysvipc/shm
# max dimensions in memory allowed = 50
[plugin:proc:/sys/class/power_supply]
# battery capacity = yes
# battery power = yes
# battery charge = no
# battery energy = no
# power supply voltage = no
# keep files open = auto
# directory to monitor = /host/sys/class/power_supply
[plugin:proc:/sys/class/drm]
# directory to monitor = /host/sys/class/drm


@ -0,0 +1,25 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="outline_postgres_${TIMESTAMP}.sql.gz"
echo "Outline: backing up PostgreSQL database"
docker compose --file "{{ base_dir }}/docker-compose.yml" exec \
    outline_postgres \
    pg_dump \
    -U "{{ outline_postgres_user }}" \
    "{{ outline_postgres_database }}" \
    | gzip > "{{ postgres_backups_dir }}/${BACKUP_FILE}"
echo "Outline: PostgreSQL backup saved to {{ postgres_backups_dir }}/${BACKUP_FILE}"
echo "Outline: removing old backups"
# Keep only the 3 most recent backups
keep-files.py "{{ postgres_backups_dir }}" --keep 3
echo "Outline: backup completed successfully."


@ -0,0 +1,81 @@
services:
  # See sample https://github.com/outline/outline/blob/main/.env.sample
  outline_app:
    image: outlinewiki/outline:0.84.0
    container_name: outline_app
    restart: unless-stopped
    depends_on:
      - outline_postgres
      - outline_redis
    ports:
      - "127.0.0.1:{{ outline_port }}:3000"
    networks:
      - "outline_network"
      - "{{ web_proxy_network }}"
    environment:
      NODE_ENV: 'production'
      URL: 'https://outline.vakhrushev.me'
      FORCE_HTTPS: 'true'
      SECRET_KEY: '{{ outline_secret_key }}'
      UTILS_SECRET: '{{ outline_utils_secret }}'
      DATABASE_URL: 'postgres://{{ outline_postgres_user }}:{{ outline_postgres_password }}@outline_postgres:5432/{{ outline_postgres_database }}'
      PGSSLMODE: 'disable'
      REDIS_URL: 'redis://outline_redis:6379'
      FILE_STORAGE: 's3'
      FILE_STORAGE_UPLOAD_MAX_SIZE: '262144000'
      AWS_ACCESS_KEY_ID: '{{ outline_s3_access_key }}'
      AWS_SECRET_ACCESS_KEY: '{{ outline_s3_secret_key }}'
      AWS_REGION: '{{ outline_s3_region }}'
      AWS_S3_ACCELERATE_URL: ''
      AWS_S3_UPLOAD_BUCKET_URL: '{{ outline_s3_url }}'
      AWS_S3_UPLOAD_BUCKET_NAME: '{{ outline_s3_bucket }}'
      AWS_S3_FORCE_PATH_STYLE: 'true'
      AWS_S3_ACL: 'private'
      OIDC_CLIENT_ID: '{{ outline_oidc_client_id | replace("$", "$$") }}'
      OIDC_CLIENT_SECRET: '{{ outline_oidc_client_secret | replace("$", "$$") }}'
      OIDC_AUTH_URI: 'https://auth.vakhrushev.me/api/oidc/authorization'
      OIDC_TOKEN_URI: 'https://auth.vakhrushev.me/api/oidc/token'
      OIDC_USERINFO_URI: 'https://auth.vakhrushev.me/api/oidc/userinfo'
      OIDC_LOGOUT_URI: 'https://auth.vakhrushev.me/logout'
      OIDC_USERNAME_CLAIM: 'email'
      OIDC_SCOPES: 'openid profile email'
      OIDC_DISPLAY_NAME: 'Authelia'
      SMTP_HOST: '{{ postbox_host }}'
      SMTP_PORT: '{{ postbox_port }}'
      SMTP_USERNAME: '{{ postbox_user }}'
      SMTP_PASSWORD: '{{ postbox_pass }}'
      SMTP_FROM_EMAIL: 'outline@vakhrushev.me'
      SMTP_TLS_CIPHERS: 'TLSv1.2'
      SMTP_SECURE: 'false'

  outline_redis:
    image: valkey/valkey:8.1.1-alpine
    container_name: outline_redis
    restart: unless-stopped
    networks:
      - "outline_network"

  outline_postgres:
    image: postgres:16.3-bookworm
    container_name: outline_postgres
    restart: unless-stopped
    volumes:
      - {{ postgres_data_dir }}:/var/lib/postgresql/data
    networks:
      - "outline_network"
    environment:
      POSTGRES_USER: '{{ outline_postgres_user }}'
      POSTGRES_PASSWORD: '{{ outline_postgres_password }}'
      POSTGRES_DB: '{{ outline_postgres_database }}'

networks:
  outline_network:
    driver: bridge
  {{ web_proxy_network }}:
    external: true
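The `replace("$", "$$")` filter on the OIDC values exists because docker compose treats `$VAR` in a compose file as variable interpolation, so any literal dollar sign in a rendered secret must be doubled to survive. A tiny sketch of that escaping (the sample secret is made up):

```python
def compose_escape(value: str) -> str:
    """Escape literal `$` for docker compose, which interpolates $VAR syntax."""
    return value.replace("$", "$$")

secret = "pbkdf2_sha256$600000$abc"  # hypothetical secret containing `$`
print(compose_escape(secret))  # pbkdf2_sha256$$600000$$abc
print(compose_escape("plain"))  # unchanged when there is no `$`
```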


@ -0,0 +1,12 @@
services:
  rssbridge_app:
    image: rssbridge/rss-bridge:2025-06-03
    container_name: rssbridge_app
    restart: unless-stopped
    networks:
      - "{{ web_proxy_network }}"

networks:
  {{ web_proxy_network }}:
    external: true

files/wakapi/backup.sh.j2 Normal file

@ -0,0 +1,10 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
echo "{{ app_name }}: backup data with gobackups"
(cd "{{ base_dir }}" && gobackup perform --config "{{ gobackup_config }}")
echo "{{ app_name }}: done."


@ -0,0 +1,32 @@
# See versions: https://github.com/muety/wakapi/releases
services:
  wakapi_app:
    image: ghcr.io/muety/wakapi:2.14.0
    container_name: wakapi_app
    restart: unless-stopped
    user: '{{ user_create_result.uid }}:{{ user_create_result.group }}'
    networks:
      - "{{ web_proxy_network }}"
    volumes:
      - "{{ data_dir }}:/data"
    environment:
      WAKAPI_PUBLIC_URL: "https://wakapi.vakhrushev.me"
      WAKAPI_PASSWORD_SALT: "{{ wakapi_password_salt }}"
      WAKAPI_ALLOW_SIGNUP: "false"
      WAKAPI_DISABLE_FRONTPAGE: "true"
      WAKAPI_COOKIE_MAX_AGE: 31536000
      # Mail
      WAKAPI_MAIL_SENDER: "Wakapi <wakapi@vakhrushev.me>"
      WAKAPI_MAIL_PROVIDER: "smtp"
      WAKAPI_MAIL_SMTP_HOST: "{{ postbox_host }}"
      WAKAPI_MAIL_SMTP_PORT: "{{ postbox_port }}"
      WAKAPI_MAIL_SMTP_USER: "{{ postbox_user }}"
      WAKAPI_MAIL_SMTP_PASS: "{{ postbox_pass }}"
      WAKAPI_MAIL_SMTP_TLS: "false"

networks:
  {{ web_proxy_network }}:
    external: true


@ -0,0 +1,16 @@
# https://gobackup.github.io/configuration
models:
  wakapi:
    compress_with:
      type: 'tgz'
    storages:
      local:
        type: 'local'
        path: '{{ backups_dir }}'
        keep: 3
    databases:
      wakapi:
        type: sqlite
        path: "{{ (data_dir, 'wakapi.db') | path_join }}"


@ -1,5 +1,6 @@
#!/usr/bin/env sh
# Must be executed for every user
# See https://cloud.yandex.ru/docs/container-registry/tutorials/run-docker-on-vm#run
set -eu

generate.py Normal file

@ -0,0 +1,47 @@
#!/usr/bin/env python3

import hmac
import hashlib
import base64
import argparse
import sys

# These values are required to calculate the signature. Do not change them.
DATE = "20230926"
SERVICE = "postbox"
MESSAGE = "SendRawEmail"
REGION = "ru-central1"
TERMINAL = "aws4_request"
VERSION = 0x04


def sign(key, msg):
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()


def calculate_key(secret_access_key):
    signature = sign(("AWS4" + secret_access_key).encode("utf-8"), DATE)
    signature = sign(signature, REGION)
    signature = sign(signature, SERVICE)
    signature = sign(signature, TERMINAL)
    signature = sign(signature, MESSAGE)
    signature_and_version = bytes([VERSION]) + signature
    smtp_password = base64.b64encode(signature_and_version)
    return smtp_password.decode("utf-8")


def main():
    if sys.version_info[0] < 3:
        raise Exception("Must be using Python 3")
    parser = argparse.ArgumentParser(
        description="Convert a Secret Access Key to an SMTP password."
    )
    parser.add_argument("secret", help="The Secret Access Key to convert.")
    args = parser.parse_args()
    print(calculate_key(args.secret))


if __name__ == "__main__":
    main()
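The derivation above is the AWS Signature V4 key chain (date, then region, service, terminal, message) with a version byte prepended before base64 encoding. A condensed restatement run against a dummy secret, showing the output is always a base64-encoded 33-byte value (one version byte plus a 32-byte HMAC-SHA256 digest):

```python
import base64
import hashlib
import hmac

# Same constants as generate.py.
DATE, REGION, SERVICE = "20230926", "ru-central1", "postbox"
TERMINAL, MESSAGE, VERSION = "aws4_request", "SendRawEmail", 0x04

def sign(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def calculate_key(secret: str) -> str:
    sig = sign(("AWS4" + secret).encode("utf-8"), DATE)
    for part in (REGION, SERVICE, TERMINAL, MESSAGE):  # chained derivation
        sig = sign(sig, part)
    return base64.b64encode(bytes([VERSION]) + sig).decode("utf-8")

password = calculate_key("dummy-secret")  # deterministic for a given secret
decoded = base64.b64decode(password)
print(len(decoded), decoded[0])  # 33 4
```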


@ -1 +0,0 @@
192.168.50.10

playbook-authelia.yml Normal file

@ -0,0 +1,68 @@
---
- name: "Configure authelia application"
  hosts: all
  vars_files:
    - vars/ports.yml
    - vars/secrets.yml
  vars:
    app_name: "authelia"
    app_user: "{{ app_name }}"
    base_dir: "/home/{{ app_user }}"
    config_dir: "{{ (base_dir, 'config') | path_join }}"
  tasks:
    - name: "Create user and environment"
      ansible.builtin.import_role:
        name: owner
      vars:
        owner_name: "{{ app_user }}"
        owner_extra_groups: ["docker"]
    - name: "Create internal application directories"
      ansible.builtin.file:
        path: "{{ item }}"
        state: "directory"
        owner: "{{ app_user }}"
        group: "{{ app_user }}"
        mode: "0700"
      loop:
        - "{{ config_dir }}"
    - name: "Copy configuration files"
      ansible.builtin.copy:
        src: "files/{{ app_name }}/{{ item }}"
        dest: "{{ (config_dir, item) | path_join }}"
        owner: "{{ app_user }}"
        group: "{{ app_user }}"
        mode: "0600"
      loop:
        - "users.yml"
    - name: "Copy configuration files (templates)"
      ansible.builtin.template:
        src: "files/{{ app_name }}/configuration.yml.j2"
        dest: "{{ (config_dir, 'configuration.yml') | path_join }}"
        owner: "{{ app_user }}"
        group: "{{ app_user }}"
        mode: "0600"
    - name: "Copy docker compose file"
      ansible.builtin.template:
        src: "./files/{{ app_name }}/docker-compose.yml.j2"
        dest: "{{ base_dir }}/docker-compose.yml"
        owner: "{{ app_user }}"
        group: "{{ app_user }}"
        mode: "0640"
    - name: "Run application with docker compose"
      community.docker.docker_compose_v2:
        project_src: "{{ base_dir }}"
        state: "present"
        remove_orphans: true
    - name: "Restart application with docker compose"
      community.docker.docker_compose_v2:
        project_src: "{{ base_dir }}"
        state: "restarted"

playbook-backups.yml Normal file

@ -0,0 +1,53 @@
---
- name: "Configure restic and backup schedule"
hosts: all
vars_files:
- vars/secrets.yml
- vars/secrets.yml
vars:
restic_shell_script: "{{ (bin_prefix, 'restic-shell.sh') | path_join }}"
backup_all_script: "{{ (bin_prefix, 'backup-all.sh') | path_join }}"
tasks:
- name: "Copy restic shell script"
ansible.builtin.template:
src: "files/backups/restic-shell.sh.j2"
dest: "{{ restic_shell_script }}"
owner: root
group: root
mode: "0700"
- name: "Copy backup all script"
ansible.builtin.template:
src: "files/backups/backup-all.sh.j2"
dest: "{{ backup_all_script }}"
owner: root
group: root
mode: "0700"
- name: "Setup paths for backup cron file"
ansible.builtin.cron:
cron_file: "ansible_restic_backup"
user: "root"
env: true
name: "PATH"
job: "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
- name: "Setup mail for backup cron file"
ansible.builtin.cron:
cron_file: "ansible_restic_backup"
user: "root"
env: true
name: "MAILTO"
job: ""
- name: "Creates a cron file for backups under /etc/cron.d"
ansible.builtin.cron:
name: "restic backup"
minute: "0"
hour: "1"
job: "{{ backup_all_script }} 2>&1 | logger -t backup"
cron_file: "ansible_restic_backup"
user: "root"


@ -1,27 +0,0 @@
---
- name: 'Install and configure Caddy server'
  hosts: all
  vars_files:
    - vars/ports.yml
    - vars/vars.yml
  tasks:
    - name: 'Ensure networkd service is started (required by Caddy).'
      ansible.builtin.systemd:
        name: systemd-networkd
        state: started
        enabled: true
    - name: 'Install and configure Caddy server'
      ansible.builtin.import_role:
        name: caddy_ansible.caddy_ansible
      vars:
        caddy_github_token: '{{ caddy_vars.github_token }}'
        caddy_config: '{{ lookup("template", "templates/Caddyfile.j2") }}'
        caddy_setcap: true
        caddy_systemd_capabilities_enabled: true
        caddy_systemd_capabilities: "CAP_NET_BIND_SERVICE"
        # Set to true to update Caddy
        caddy_update: false

playbook-caddyproxy.yml Normal file

@ -0,0 +1,72 @@
---
- name: "Configure caddy reverse proxy service"
  hosts: all
  vars_files:
    - vars/ports.yml
    - vars/secrets.yml
  vars:
    app_name: "caddyproxy"
    app_user: "{{ app_name }}"
    base_dir: "/home/{{ app_user }}"
    data_dir: "{{ (base_dir, 'data') | path_join }}"
    config_dir: "{{ (base_dir, 'config') | path_join }}"
    caddy_file_dir: "{{ (base_dir, 'caddy_file') | path_join }}"
    service_name: "{{ app_name }}"
  tasks:
    - name: "Create user and environment"
      ansible.builtin.import_role:
        name: owner
      vars:
        owner_name: "{{ app_user }}"
        owner_extra_groups:
          - "docker"
    - name: "Create internal application directories"
      ansible.builtin.file:
        path: "{{ item }}"
        state: "directory"
        owner: "{{ app_user }}"
        group: "{{ app_user }}"
        mode: "0770"
      loop:
        - "{{ data_dir }}"
        - "{{ config_dir }}"
        - "{{ caddy_file_dir }}"
    - name: "Copy caddy file"
      ansible.builtin.template:
        src: "./files/{{ app_name }}/Caddyfile.j2"
        dest: "{{ (caddy_file_dir, 'Caddyfile') | path_join }}"
        owner: "{{ app_user }}"
        group: "{{ app_user }}"
        mode: "0640"
    - name: "Copy docker compose file"
      ansible.builtin.template:
        src: "./files/{{ app_name }}/docker-compose.yml.j2"
        dest: "{{ base_dir }}/docker-compose.yml"
        owner: "{{ app_user }}"
        group: "{{ app_user }}"
        mode: "0640"
    - name: "Run application with docker compose"
      community.docker.docker_compose_v2:
        project_src: "{{ base_dir }}"
        state: "present"
        remove_orphans: true
    # - name: "Reload caddy"
    #   community.docker.docker_compose_v2_exec:
    #     project_src: '{{ base_dir }}'
    #     service: "{{ service_name }}"
    #     command: caddy reload --config /etc/caddy/Caddyfile
    - name: "Restart application with docker compose"
      community.docker.docker_compose_v2:
        project_src: "{{ base_dir }}"
        state: "restarted"


@@ -1,171 +0,0 @@
---
- hosts: all
vars_files:
- vars/ports.yml
- vars/vars.yml
tasks:
# Applications
- import_role:
name: docker-app
vars:
username: homepage
extra_groups:
- docker
ssh_keys:
- '{{ lookup("file", "files/av_id_rsa.pub") }}'
env:
DOCKER_PREFIX: homepage
PROJECT_NAME: homepage
IMAGE_PREFIX: homepage
CONTAINER_PREFIX: homepage
WEB_SERVER_PORT: '127.0.0.1:{{ homepage_port }}'
tags:
- apps
- import_role:
name: docker-app
vars:
username: dayoff
extra_groups:
- docker
ssh_keys:
- '{{ lookup("file", "files/av_id_rsa.pub") }}'
- '{{ lookup("file", "files/dayoff_id_rsa.pub") }}'
env:
DOCKER_PREFIX: dayoff
PROJECT_NAME: dayoff
IMAGE_PREFIX: dayoff
CONTAINER_PREFIX: dayoff
WEB_SERVER_PORT: '127.0.0.1:{{ dayoff_port }}'
tags:
- apps
- import_role:
name: docker-app
vars:
username: wiki
extra_groups:
- docker
ssh_keys:
- '{{ lookup("file", "files/av_id_rsa.pub") }}'
env:
PROJECT_NAME: wiki
DOCKER_PREFIX: wiki
IMAGE_PREFIX: wiki
CONTAINER_PREFIX: wiki
WEB_SERVER_PORT: '127.0.0.1:{{ wiki_port }}'
tags:
- apps
- import_role:
name: docker-app
vars:
username: nomie
extra_groups:
- docker
ssh_keys:
- '{{ lookup("file", "files/av_id_rsa.pub") }}'
env:
PROJECT_NAME: nomie
DOCKER_PREFIX: nomie
IMAGE_PREFIX: nomie
CONTAINER_PREFIX: nomie
WEB_SERVER_PORT: '127.0.0.1:{{ nomie_port }}'
COUCH_DB_PORT: '127.0.0.1:{{ nomie_db_port }}'
COUCH_DB_USER: 'couch-admin'
COUCH_DB_PASSWORD: '{{ nomie.couch_db_password }}'
tags:
- apps
- import_role:
name: docker-app
vars:
username: gitea
extra_groups:
- docker
ssh_keys:
- '{{ lookup("file", "files/av_id_rsa.pub") }}'
env:
PROJECT_NAME: gitea
DOCKER_PREFIX: gitea
IMAGE_PREFIX: gitea
CONTAINER_PREFIX: gitea
WEB_SERVER_PORT: '127.0.0.1:{{ gitea_port }}'
USER_UID: '{{ uc_result.uid }}'
USER_GID: '{{ uc_result.group }}'
tags:
- apps
- import_role:
name: docker-app
vars:
username: keycloak
extra_groups:
- docker
ssh_keys:
- '{{ lookup("file", "files/av_id_rsa.pub") }}'
env:
PROJECT_NAME: keycloak
DOCKER_PREFIX: keycloak
IMAGE_PREFIX: keycloak
CONTAINER_PREFIX: keycloak
WEB_SERVER_PORT: '127.0.0.1:{{ keycloak_port }}'
KEYCLOAK_ADMIN: '{{ keycloak.admin_login }}'
KEYCLOAK_ADMIN_PASSWORD: '{{ keycloak.admin_password }}'
USER_UID: '{{ uc_result.uid }}'
USER_GID: '{{ uc_result.group }}'
tags:
- apps
- import_role:
name: docker-app
vars:
username: outline
extra_groups:
- docker
ssh_keys:
- '{{ lookup("file", "files/av_id_rsa.pub") }}'
env:
PROJECT_NAME: outline
DOCKER_PREFIX: outline
IMAGE_PREFIX: outline
CONTAINER_PREFIX: outline
WEB_SERVER_PORT: '127.0.0.1:{{ outline_port }}'
USER_UID: '{{ uc_result.uid }}'
USER_GID: '{{ uc_result.group }}'
# Postgres
POSTGRES_USER: '{{ outline.postgres_user }}'
POSTGRES_PASSWORD: '{{ outline.postgres_password }}'
POSTGRES_DB: 'outline'
# See sample https://github.com/outline/outline/blob/main/.env.sample
NODE_ENV: 'production'
SECRET_KEY: '{{ outline.secret_key }}'
UTILS_SECRET: '{{ outline.utils_secret }}'
DATABASE_URL: 'postgres://{{ outline.postgres_user }}:{{ outline.postgres_password }}@postgres:5432/outline'
PGSSLMODE: 'disable'
REDIS_URL: 'redis://redis:6379'
URL: 'https://outline.vakhrushev.me'
FILE_STORAGE: 's3'
AWS_ACCESS_KEY_ID: '{{ outline.s3_access_key }}'
AWS_SECRET_ACCESS_KEY: '{{ outline.s3_secret_key }}'
AWS_REGION: 'ru-central1'
AWS_S3_ACCELERATE_URL: ''
AWS_S3_UPLOAD_BUCKET_URL: 'https://storage.yandexcloud.net'
AWS_S3_UPLOAD_BUCKET_NAME: 'av-outline-wiki'
AWS_S3_FORCE_PATH_STYLE: 'true'
AWS_S3_ACL: 'private'
OIDC_CLIENT_ID: '{{ outline.oidc_client_id }}'
OIDC_CLIENT_SECRET: '{{ outline.oidc_client_secret }}'
OIDC_AUTH_URI: 'https://kk.vakhrushev.me/realms/outline/protocol/openid-connect/auth'
OIDC_TOKEN_URI: 'https://kk.vakhrushev.me/realms/outline/protocol/openid-connect/token'
OIDC_USERINFO_URI: 'https://kk.vakhrushev.me/realms/outline/protocol/openid-connect/userinfo'
OIDC_LOGOUT_URI: 'https://kk.vakhrushev.me/realms/outline/protocol/openid-connect/logout'
OIDC_USERNAME_CLAIM: 'email'
OIDC_DISPLAY_NAME: 'KK'
tags:
- apps


@@ -1,25 +1,33 @@
---
- name: 'Configure docker parameters'
- name: "Configure docker parameters"
hosts: all
vars_files:
- vars/ports.yml
- vars/vars.yml
- vars/secrets.yml
tasks:
- name: 'Install python docker lib from pip'
- name: "Install python docker lib from pip"
ansible.builtin.pip:
name: docker
- name: 'Install docker'
- name: "Install docker"
ansible.builtin.import_role:
name: geerlingguy.docker
vars:
docker_edition: 'ce'
docker_edition: "ce"
docker_packages:
- "docker-{{ docker_edition }}"
- "docker-{{ docker_edition }}-cli"
- "docker-{{ docker_edition }}-rootless-extras"
docker_users:
- major
- name: "Login to yandex docker registry."
ansible.builtin.script:
cmd: "files/yandex-docker-registry-auth.sh"
- name: Create a network for web proxy
community.docker.docker_network:
name: "{{ web_proxy_network }}"
driver: "bridge"


@@ -1,16 +1,46 @@
---
- name: 'Install eget'
- name: "Install eget"
hosts: all
vars_files:
- vars/ports.yml
- vars/vars.yml
- vars/secrets.yml
# See: https://github.com/zyedidia/eget/releases
vars:
eget_install_dir: "{{ bin_prefix }}"
eget_bin_path: '{{ (eget_install_dir, "eget") | path_join }}'
tasks:
- name: 'Install eget'
- name: "Install eget"
ansible.builtin.import_role:
name: eget
vars:
eget_version: '1.3.4'
eget_install_path: '/usr/bin/eget'
eget_version: "1.3.4"
eget_install_path: "{{ eget_bin_path }}"
- name: "Install rclone"
ansible.builtin.command:
cmd: "{{ eget_bin_path }} rclone/rclone --quiet --upgrade-only --to {{ eget_install_dir }} --asset zip --tag v1.69.2"
changed_when: false
- name: "Install btop"
ansible.builtin.command:
cmd: "{{ eget_bin_path }} aristocratos/btop --quiet --upgrade-only --to {{ eget_install_dir }} --tag v1.4.2"
changed_when: false
- name: "Install restic"
ansible.builtin.command:
cmd: "{{ eget_bin_path }} restic/restic --quiet --upgrade-only --to {{ eget_install_dir }} --tag v0.18.0"
changed_when: false
- name: "Install gobackup"
ansible.builtin.command:
cmd: "{{ eget_bin_path }} gobackup/gobackup --quiet --upgrade-only --to {{ eget_install_dir }} --tag v2.14.0"
changed_when: false
- name: "Install task"
ansible.builtin.command:
cmd: "{{ eget_bin_path }} go-task/task --quiet --upgrade-only --to {{ eget_install_dir }} --asset tar.gz --tag v3.43.3"
changed_when: false

playbook-gitea.yml

@@ -0,0 +1,58 @@
---
- name: "Configure gitea application"
hosts: all
vars_files:
- vars/ports.yml
- vars/secrets.yml
vars:
app_name: "gitea"
app_user: "{{ app_name }}"
base_dir: "/home/{{ app_user }}"
data_dir: "{{ (base_dir, 'data') | path_join }}"
backups_dir: "{{ (base_dir, 'backups') | path_join }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_extra_groups:
- "docker"
owner_ssh_keys:
- "{{ lookup('file', 'files/av_id_rsa.pub') }}"
- name: "Create internal application directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0770"
loop:
- "{{ data_dir }}"
- "{{ backups_dir }}"
- name: "Copy backup script"
ansible.builtin.template:
src: "files/{{ app_name }}/backup.sh.j2"
dest: "{{ base_dir }}/backup.sh"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.yml.j2"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true

playbook-gramps.yml

@@ -0,0 +1,67 @@
---
- name: "Configure gramps application"
hosts: all
vars_files:
- vars/ports.yml
- vars/secrets.yml
vars:
app_name: "gramps"
app_user: "{{ app_name }}"
base_dir: "/home/{{ app_user }}"
data_dir: "{{ (base_dir, 'data') | path_join }}"
backups_dir: "{{ (base_dir, 'backups') | path_join }}"
gobackup_config: "{{ (base_dir, 'gobackup.yml') | path_join }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_extra_groups:
- "docker"
owner_ssh_keys:
- "{{ lookup('file', 'files/av_id_rsa.pub') }}"
- name: "Create application internal directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
loop:
- "{{ data_dir }}"
- "{{ backups_dir }}"
- name: "Copy gobackup config"
ansible.builtin.template:
src: "./files/{{ app_name }}/gobackup.yml.j2"
dest: "{{ gobackup_config }}"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Copy backup script"
ansible.builtin.template:
src: "files/{{ app_name }}/backup.sh.j2"
dest: "{{ base_dir }}/backup.sh"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.yml.j2"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true

playbook-homepage.yml

@@ -0,0 +1,67 @@
---
# Play 1: Setup environment for the application
- name: "Setup environment for homepage application"
hosts: all
vars_files:
- vars/ports.yml
- vars/secrets.yml
- vars/homepage.yml
tags:
- setup
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_extra_groups:
- "docker"
owner_ssh_keys:
- "{{ lookup('file', 'files/av_id_rsa.pub') }}"
- name: "Login to yandex docker registry."
ansible.builtin.script:
cmd: "files/yandex-docker-registry-auth.sh"
# Play 2: Deploy the application
- name: "Deploy homepage application"
hosts: all
vars_files:
- vars/ports.yml
- vars/secrets.yml
- vars/homepage.yml
tags:
- deploy
tasks:
- name: "Check if web service image is passed"
ansible.builtin.assert:
that:
- "homepage_web_image is defined"
fail_msg: 'You must pass variable "homepage_web_image"'
- name: "Create full image name with container registry"
ansible.builtin.set_fact:
registry_homepage_web_image: "{{ (docker_registry_prefix, homepage_web_image) | path_join }}"
- name: "Push web service image to remote registry"
community.docker.docker_image:
state: present
source: local
name: "{{ homepage_web_image }}"
repository: "{{ registry_homepage_web_image }}"
push: true
delegate_to: 127.0.0.1
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.yml.j2"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true
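The "Create full image name with container registry" task above joins the registry prefix and the image name with the path_join filter, which delegates to Python's path joining. A minimal sketch of that step (the image tag below is hypothetical, not taken from the playbook's variables):

```python
# Sketch of Ansible's path_join filter as used above: the registry prefix
# and image name are joined with a single slash. posixpath matches the
# behavior on a Linux controller. The image tag is a made-up example.
import posixpath

docker_registry_prefix = "cr.yandex/crplfk0168i4o8kd7ade"  # from vars/homepage.yml
homepage_web_image = "homepage-web:latest"                 # hypothetical tag

registry_homepage_web_image = posixpath.join(docker_registry_prefix, homepage_web_image)
print(registry_homepage_web_image)
```

The resulting name is what gets pushed to the remote registry and referenced from the docker-compose template.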

playbook-miniflux.yml

@@ -0,0 +1,55 @@
---
- name: "Configure miniflux application"
hosts: all
vars_files:
- vars/ports.yml
- vars/secrets.yml
vars:
app_name: "miniflux"
app_user: "{{ app_name }}"
base_dir: "/home/{{ app_user }}"
data_dir: "{{ (base_dir, 'data') | path_join }}"
postgres_data_dir: "{{ (base_dir, 'data', 'postgres') | path_join }}"
postgres_backups_dir: "{{ (base_dir, 'backups', 'postgres') | path_join }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_extra_groups: ["docker"]
- name: "Create internal directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0770"
loop:
- "{{ postgres_backups_dir }}"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.yml.j2"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Copy backup script"
ansible.builtin.template:
src: "./files/{{ app_name }}/backup.sh.j2"
dest: "{{ base_dir }}/backup.sh"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true


@@ -1,17 +1,87 @@
---
- name: 'Install Netdata monitoring service'
- name: "Install Netdata monitoring service"
hosts: all
vars_files:
- vars/ports.yml
- vars/vars.yml
- vars/secrets.yml
vars:
app_name: "netdata"
app_user: "{{ app_name }}"
base_dir: "/home/{{ app_user }}"
config_dir: "{{ (base_dir, 'config') | path_join }}"
config_go_d_dir: "{{ (config_dir, 'go.d') | path_join }}"
data_dir: "{{ (base_dir, 'data') | path_join }}"
tasks:
- name: 'Install Netdata from role'
- name: "Create user and environment"
ansible.builtin.import_role:
name: netdata
name: owner
vars:
netdata_version: 'v2.1.0'
netdata_exposed_port: '{{ netdata_port }}'
tags:
- monitoring
owner_name: "{{ app_user }}"
owner_extra_groups: ["docker"]
- name: "Create internal application directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0770"
loop:
- "{{ config_dir }}"
- "{{ config_go_d_dir }}"
- "{{ data_dir }}"
- name: "Copy netdata config file"
ansible.builtin.template:
src: "files/{{ app_name }}/netdata.conf.j2"
dest: "{{ config_dir }}/netdata.conf"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Copy prometheus plugin config file"
ansible.builtin.copy:
src: "files/{{ app_name }}/go.d/prometheus.conf"
dest: "{{ config_go_d_dir }}/prometheus.conf"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Copy fail2ban plugin config file"
ansible.builtin.copy:
src: "files/{{ app_name }}/go.d/fail2ban.conf"
dest: "{{ config_go_d_dir }}/fail2ban.conf"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Grab docker group id."
ansible.builtin.shell:
cmd: |
set -o pipefail
grep docker /etc/group | cut -d ':' -f 3
executable: /bin/bash
register: netdata_docker_group_output
changed_when: netdata_docker_group_output.rc != 0
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.yml.j2"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true
- name: "Restart application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "restarted"
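The "Grab docker group id" task above shells out to `grep docker /etc/group | cut -d ':' -f 3` to find the GID that the container needs as PGID. A Python sketch of the same extraction (the /etc/group content below is a hypothetical sample, and the match is simplified to an anchored `docker:` prefix rather than grep's substring match):

```python
# Extract the third colon-separated field (the numeric GID) of the docker
# line, mirroring the grep | cut pipeline from the playbook above.
sample_etc_group = "root:x:0:\ndocker:x:998:major\n"  # hypothetical content

docker_gid = next(
    line.split(":")[2]
    for line in sample_etc_group.splitlines()
    if line.startswith("docker:")
)
print(docker_gid)
```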

playbook-outline.yml

@@ -0,0 +1,58 @@
---
- name: "Configure outline application"
hosts: all
vars_files:
- vars/ports.yml
- vars/secrets.yml
vars:
app_name: "outline"
app_user: "{{ app_name }}"
base_dir: "/home/{{ app_user }}"
data_dir: "{{ (base_dir, 'data') | path_join }}"
postgres_data_dir: "{{ (base_dir, 'data', 'postgres') | path_join }}"
postgres_backups_dir: "{{ (base_dir, 'backups', 'postgres') | path_join }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_extra_groups:
- "docker"
owner_ssh_keys:
- "{{ lookup('file', 'files/av_id_rsa.pub') }}"
- name: "Create internal directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0770"
loop:
- "{{ postgres_backups_dir }}"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.yml.j2"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Copy backup script"
ansible.builtin.template:
src: "./files/{{ app_name }}/backup.sh.j2"
dest: "{{ base_dir }}/backup.sh"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true


@@ -1,27 +1,32 @@
---
- name: 'Update and upgrade system packages'
- name: "Remove user and associated files"
hosts: all
vars_files:
- vars/ports.yml
- vars/vars.yml
- vars/secrets.yml
vars:
user_name: '<put-name-here>'
user_name: "<put-name-here>"
tasks:
- name: 'Remove user "{{ user_name }}"'
ansible.builtin.user:
name: '{{ user_name }}'
name: "{{ user_name }}"
state: absent
remove: true
- name: 'Remove group "{{ user_name }}"'
ansible.builtin.group:
name: '{{ user_name }}'
name: "{{ user_name }}"
state: absent
- name: 'Remove web dir'
- name: "Remove web dir"
ansible.builtin.file:
path: '/var/www/{{ user_name }}'
path: "/var/www/{{ user_name }}"
state: absent
- name: "Remove home dir"
ansible.builtin.file:
path: "/home/{{ user_name }}"
state: absent

playbook-rssbridge.yml

@@ -0,0 +1,34 @@
---
- name: "Configure rssbridge application"
hosts: all
vars_files:
- vars/ports.yml
- vars/secrets.yml
vars:
app_name: "rssbridge"
app_user: "{{ app_name }}"
base_dir: "/home/{{ app_user }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_extra_groups: ["docker"]
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.yml.j2"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true


@@ -1,37 +1,42 @@
---
- name: 'Configure base system parameters'
- name: "Configure base system parameters"
hosts: all
vars_files:
- vars/ports.yml
- vars/vars.yml
- vars/secrets.yml
vars:
apt_packages:
- acl
- curl
- fuse
- git
- htop
- jq
- make
- python3-pip
- sqlite3
- tree
tasks:
- name: 'Install additional apt packages'
- name: "Install additional apt packages"
ansible.builtin.apt:
name: '{{ apt_packages }}'
name: "{{ apt_packages }}"
update_cache: true
- name: 'Configure timezone'
ansible.builtin.import_role:
name: yatesr.timezone
vars:
timezone: UTC
tags:
- skip_ansible_lint
- name: 'Configure security settings'
- name: "Configure security settings"
ansible.builtin.import_role:
name: geerlingguy.security
vars:
security_ssh_permit_root_login: "yes"
security_autoupdate_enabled: "no"
security_fail2ban_enabled: "yes"
security_fail2ban_enabled: true
- name: "Copy keep files script"
ansible.builtin.copy:
src: "files/keep-files.py"
dest: "{{ bin_prefix }}/keep-files.py"
owner: root
group: root
mode: "0755"


@@ -1,15 +1,15 @@
---
- name: 'Update and upgrade system packages'
- name: "Update and upgrade system packages"
hosts: all
vars_files:
- vars/ports.yml
- vars/vars.yml
- vars/secrets.yml
tasks:
- name: Perform an upgrade of packages
ansible.builtin.apt:
upgrade: 'yes'
upgrade: "yes"
update_cache: true
- name: Check if a reboot is required

playbook-wakapi.yml

@@ -0,0 +1,64 @@
---
- name: "Configure wakapi application"
hosts: all
vars_files:
- vars/ports.yml
- vars/secrets.yml
vars:
app_name: "wakapi"
app_user: "{{ app_name }}"
base_dir: "/home/{{ app_user }}"
data_dir: "{{ (base_dir, 'data') | path_join }}"
backups_dir: "{{ (base_dir, 'backups') | path_join }}"
gobackup_config: "{{ (base_dir, 'gobackup.yml') | path_join }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_extra_groups: ["docker"]
- name: "Create application internal directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
loop:
- "{{ data_dir }}"
- "{{ backups_dir }}"
- name: "Copy gobackup config"
ansible.builtin.template:
src: "./files/{{ app_name }}/gobackup.yml.j2"
dest: "{{ gobackup_config }}"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Copy backup script"
ansible.builtin.template:
src: "files/{{ app_name }}/backup.sh.j2"
dest: "{{ base_dir }}/backup.sh"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.yml.j2"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true


@@ -2,6 +2,6 @@
ungrouped:
hosts:
server:
ansible_host: '158.160.46.255'
ansible_user: 'major'
ansible_host: "158.160.46.255"
ansible_user: "major"
ansible_become: true


@@ -3,10 +3,7 @@
version: 1.2.2
- src: geerlingguy.security
version: 2.4.0
version: 3.0.0
- src: geerlingguy.docker
version: 7.4.3
- src: caddy_ansible.caddy_ansible
version: v3.2.0
version: 7.4.7


@@ -1,24 +0,0 @@
---
- name: 'Create owner.'
import_role:
name: owner
vars:
owner_name: '{{ username }}'
owner_group: '{{ username }}'
owner_extra_groups: '{{ extra_groups | default([]) }}'
owner_ssh_keys: '{{ ssh_keys | default([]) }}'
owner_env: '{{ env | default({}) }}'
- name: 'Create web dir.'
file:
path: '/var/www/{{ username }}'
state: directory
owner: '{{ username }}'
group: '{{ username }}'
recurse: True
- name: 'Login to yandex docker registry.'
ansible.builtin.script:
cmd: 'files/yandex-docker-registry-auth.sh'
become: yes
become_user: '{{ username }}'


@@ -1,8 +1,8 @@
---
# defaults file for eget
eget_version: '1.3.4'
eget_download_url: 'https://github.com/zyedidia/eget/releases/download/v{{ eget_version }}/eget-{{ eget_version }}-linux_amd64.tar.gz'
eget_install_path: '/usr/bin/eget'
eget_version: "1.3.4"
eget_download_url: "https://github.com/zyedidia/eget/releases/download/v{{ eget_version }}/eget-{{ eget_version }}-linux_amd64.tar.gz"
eget_install_path: "/usr/bin/eget"
eget_download_dest: '/tmp/{{ eget_download_url | split("/") | last }}'
eget_unarchive_dest: '{{ eget_download_dest | regex_replace("(\.tar\.gz|\.zip)$", "") }}'


@@ -1,6 +1,7 @@
---
galaxy_info:
author: 'Anton Vakhrushev'
description: 'Role for installation eget utility'
author: "Anton Vakhrushev"
description: "Role for installing the eget utility"
# If the issue tracker for your role is not on github, uncomment the
# next line and provide a value
@@ -13,9 +14,9 @@ galaxy_info:
# - GPL-3.0-only
# - Apache-2.0
# - CC-BY-4.0
license: 'MIT'
license: "MIT"
min_ansible_version: '2.1'
min_ansible_version: "2.1"
# If this a Container Enabled role, provide the minimum Ansible Container version.
# min_ansible_container_version:


@@ -1,33 +1,30 @@
---
# - name: 'Begin installation'
# ansible.builtin.debug:
# msg: 'Begin installation'
- name: 'Download eget from url "{{ eget_download_url }}"'
ansible.builtin.get_url:
url: '{{ eget_download_url }}'
dest: '{{ eget_download_dest }}'
mode: '0600'
url: "{{ eget_download_url }}"
dest: "{{ eget_download_dest }}"
mode: "0600"
- name: 'Unarchive eget'
- name: "Unarchive eget"
ansible.builtin.unarchive:
src: '{{ eget_download_dest }}'
dest: '/tmp'
src: "{{ eget_download_dest }}"
dest: "/tmp"
list_files: true
remote_src: true
- name: 'Install eget binary'
- name: "Install eget binary"
ansible.builtin.copy:
src: '{{ (eget_unarchive_dest, "eget") | path_join }}'
dest: '{{ eget_install_path }}'
mode: '0755'
dest: "{{ eget_install_path }}"
mode: "0755"
remote_src: true
- name: 'Remove temporary files'
- name: "Remove temporary files"
ansible.builtin.file:
path: '{{ eget_download_dest }}'
path: "{{ eget_download_dest }}"
state: absent
- name: 'Remove temporary directories'
- name: "Remove temporary directories"
ansible.builtin.file:
path: '{{ eget_unarchive_dest }}'
path: "{{ eget_unarchive_dest }}"
state: absent


@@ -1,24 +1,24 @@
---
# tasks file for eget
- name: 'Check if eget installed'
- name: "Check if eget installed"
ansible.builtin.command:
cmd: '{{ eget_install_path }} --version'
cmd: "{{ eget_install_path }} --version"
register: eget_installed_output
ignore_errors: true
changed_when: false
- name: 'Check eget installed --version'
- name: "Check eget installed version"
ansible.builtin.set_fact:
eget_need_install: '{{ not (eget_installed_output.rc == 0 and eget_version in eget_installed_output.stdout) }}'
eget_need_install: "{{ not (eget_installed_output.rc == 0 and eget_version in eget_installed_output.stdout) }}"
- name: 'Assert that installation flag is defined'
- name: "Assert that installation flag is defined"
ansible.builtin.assert:
that:
- eget_need_install is defined
- eget_need_install is boolean
- name: 'Download eget and install eget'
- name: "Download eget and install eget"
ansible.builtin.include_tasks:
file: 'install.yml'
file: "install.yml"
when: eget_need_install


@@ -1,4 +0,0 @@
---
netdata_version: 'v2.0.0'
netdata_image: 'netdata/netdata:{{ netdata_version }}'
netdata_exposed_port: '19999'


@@ -1,36 +0,0 @@
---
- name: 'Grab docker group id.'
ansible.builtin.shell:
cmd: |
set -o pipefail
grep docker /etc/group | cut -d ':' -f 3
executable: /bin/bash
register: netdata_docker_group_output
changed_when: netdata_docker_group_output.rc != 0
- name: 'Create NetData container from {{ netdata_image }}'
community.docker.docker_container:
name: netdata
image: '{{ netdata_image }}'
image_name_mismatch: 'recreate'
restart_policy: 'always'
published_ports:
- '127.0.0.1:{{ netdata_exposed_port }}:19999'
volumes:
- '/:/host/root:ro,rslave'
- '/etc/group:/host/etc/group:ro'
- '/etc/localtime:/etc/localtime:ro'
- '/etc/os-release:/host/etc/os-release:ro'
- '/etc/passwd:/host/etc/passwd:ro'
- '/proc:/host/proc:ro'
- '/run/dbus:/run/dbus:ro'
- '/sys:/host/sys:ro'
- '/var/log:/host/var/log:ro'
- '/var/run/docker.sock:/var/run/docker.sock:ro'
capabilities:
- 'SYS_PTRACE'
- 'SYS_ADMIN'
security_opts:
- 'apparmor:unconfined'
env:
PGID: '{{ netdata_docker_group_output.stdout | default(999) }}'


@@ -1,5 +1,6 @@
---
owner_name: ''
owner_group: '{{ owner_name }}'
owner_name: ""
owner_group: "{{ owner_name }}"
owner_extra_groups: []
owner_ssh_keys: []
owner_env: {}


@@ -1,60 +1,51 @@
---
- name: 'Check app requirements for user "{{ owner_name }}".'
fail:
ansible.builtin.fail:
msg: You must set owner name.
when: not owner_name
- name: 'Create group "{{ owner_group }}".'
group:
name: '{{ owner_group }}'
ansible.builtin.group:
name: "{{ owner_group }}"
state: present
- name: 'Create user "{{ owner_name }}".'
user:
name: '{{ owner_name }}'
group: '{{ owner_group }}'
groups: '{{ owner_extra_groups }}'
ansible.builtin.user:
name: "{{ owner_name }}"
group: "{{ owner_group }}"
groups: "{{ owner_extra_groups }}"
shell: /bin/bash
register: uc_result
register: user_create_result
- name: 'Set up user ssh keys for user "{{ owner_name }}".'
authorized_key:
user: '{{ owner_name }}'
key: '{{ item }}'
ansible.posix.authorized_key:
user: "{{ owner_name }}"
key: "{{ item }}"
state: present
with_items: '{{ owner_ssh_keys }}'
with_items: "{{ owner_ssh_keys }}"
when: owner_ssh_keys | length > 0
- name: 'Prepare env variables.'
set_fact:
env_dict: '{{ owner_env | combine({
"CURRENT_UID": uc_result.uid | default(owner_name),
"CURRENT_GID": uc_result.group | default(owner_group) }) }}'
tags:
- env
- name: "Prepare env variables."
ansible.builtin.set_fact:
env_dict: '{{ owner_env | combine({"USER_UID": user_create_result.uid, "USER_GID": user_create_result.group}) }}'
- name: 'Set up environment variables for user "{{ owner_name }}".'
template:
ansible.builtin.template:
src: env.j2
dest: '/home/{{ owner_name }}/.env'
owner: '{{ owner_name }}'
group: '{{ owner_group }}'
tags:
- env
dest: "/home/{{ owner_name }}/.env"
owner: "{{ owner_name }}"
group: "{{ owner_group }}"
mode: "0640"
- name: 'Remove absent environment variables for user "{{ owner_name }}" from bashrc.'
lineinfile:
path: '/home/{{ owner_name }}/.bashrc'
regexp: '^export {{ item.key }}='
- name: 'Remove from bashrc absent environment variables for user "{{ owner_name }}".'
ansible.builtin.lineinfile:
path: "/home/{{ owner_name }}/.bashrc"
regexp: "^export {{ item.key }}="
state: absent
with_dict: '{{ env_dict }}'
tags:
- env
with_dict: "{{ env_dict }}"
- name: 'Include environment variables for user "{{ owner_name }}" in bashrc.'
lineinfile:
path: '/home/{{ owner_name }}/.bashrc'
regexp: '^export \$\(grep -v'
- name: 'Include in bashrc environment variables for user "{{ owner_name }}".'
ansible.builtin.lineinfile:
path: "/home/{{ owner_name }}/.bashrc"
regexp: "^export \\$\\(grep -v"
line: 'export $(grep -v "^#" "$HOME"/.env | xargs)'
tags:
- env
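The last task of the owner role installs a bashrc line, `export $(grep -v "^#" "$HOME"/.env | xargs)`, which exports every non-comment KEY=VALUE pair from ~/.env into the shell. A Python sketch of that parsing logic (the .env content below is a hypothetical sample matching what env.j2 would write):

```python
# Mirror the bashrc one-liner: skip comment and empty lines, then split
# each remaining line into a KEY/VALUE pair on the first "=".
env_file = "# managed by ansible\nUSER_UID=1001\nUSER_GID=1001\n"  # sample

exported = dict(
    line.split("=", 1)
    for line in env_file.splitlines()
    if line and not line.startswith("#")
)
print(exported)
```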


@@ -1,57 +0,0 @@
import os
import shlex
import fabric
from invoke import task
SERVER_HOST_FILE = "hosts_prod"
DOKER_REGISTRY = "cr.yandex/crplfk0168i4o8kd7ade"
@task(name="deploy:gitea")
def deploy_gitea(context):
deploy("gitea", dirs=["data"])
@task(name="deploy:keycloak")
def deploy_keykloak(context):
deploy("keycloak", compose_file="docker-compose.prod.yml", dirs=["data"])
@task(name="deploy:outline")
def deploy_outline(context):
deploy("outline", compose_file="docker-compose.prod.yml", dirs=["data/postgres"])
def read_host():
with open(SERVER_HOST_FILE) as f:
return f.read().strip()
def ssh_host(app_name):
return f"{app_name}@{read_host()}"
def deploy(app_name: str, compose_file="docker-compose.yml", dirs=None):
docker_compose = os.path.join("app", app_name, compose_file)
assert os.path.exists(docker_compose)
conn_str = ssh_host(app_name)
dirs = dirs or []
print("Deploy app from", docker_compose)
print("Start setup remote host", conn_str)
with fabric.Connection(conn_str) as c:
print("Copy docker compose file to remote host")
c.put(
local=docker_compose,
remote=f"/home/{app_name}/docker-compose.yml",
)
print("Copy environment file")
c.run("cp .env .env.prod")
for d in dirs:
print("Create remote directory", d)
c.run(f"mkdir -p {d}")
print("Up services")
c.run(
f"docker compose --project-name {shlex.quote(app_name)} --env-file=.env.prod up --detach --remove-orphans"
)
c.run(f"docker system prune --all --volumes --force")
print("Done.")


@@ -1,59 +0,0 @@
# -------------------------------------------------------------------
# Global options
# -------------------------------------------------------------------
{
grace_period 15s
}
# -------------------------------------------------------------------
# Netdata service
# -------------------------------------------------------------------
status.vakhrushev.me, :29999 {
tls anwinged@ya.ru
reverse_proxy {
to 127.0.0.1:{{ netdata_port }}
}
basicauth / {
{{ netdata.login }} {{ netdata.password_hash }}
}
}
# -------------------------------------------------------------------
# Applications
# -------------------------------------------------------------------
vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to 127.0.0.1:{{ homepage_port }}
}
}
git.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to 127.0.0.1:{{ gitea_port }}
}
}
kk.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to 127.0.0.1:{{ keycloak_port }}
}
}
outline.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to 127.0.0.1:{{ outline_port }}
}
}
}
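Every vhost in the removed Caddyfile repeats the same shape: TLS with an ACME account email, then a reverse proxy to a local port. A hypothetical generator for one such block (`caddy_vhost` is illustration only, not used by the repo) makes the pattern explicit:

```python
def caddy_vhost(domain: str, port: int) -> str:
    # Render one site block in the shape used by the deleted Caddyfile:
    # ACME email for TLS, then reverse_proxy to a loopback port.
    return (
        f"{domain} {{\n"
        f"\ttls anwinged@ya.ru\n"
        f"\treverse_proxy {{\n"
        f"\t\tto 127.0.0.1:{port}\n"
        f"\t}}\n"
        f"}}\n"
    )


print(caddy_vhost("git.vakhrushev.me", 41088))
```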

templates/env.j2 Normal file

@@ -0,0 +1,3 @@
{% for name in env_dict.keys() | sort %}
{{ name }}={{ env_dict[name] }}
{% endfor %}
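`templates/env.j2` emits one `NAME=value` line per key, keys sorted. The same rendering, sketched without Jinja (assuming the values are plain strings; `render_env` is an illustration, not repo code):

```python
def render_env(env_dict: dict) -> str:
    # Same output as the Jinja loop: sorted keys, one NAME=value per line.
    return "".join(f"{name}={env_dict[name]}\n" for name in sorted(env_dict))


print(render_env({"DB_HOST": "127.0.0.1", "APP_PORT": "41083"}))
```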

vars/homepage.yml Normal file

@@ -0,0 +1,7 @@
---
app_name: "homepage"
app_user: "{{ app_name }}"
base_dir: "/home/{{ app_user }}"
docker_registry_prefix: "cr.yandex/crplfk0168i4o8kd7ade"
homepage_web_image: "{{ homepage_web_image | default(omit) }}"


@@ -1,12 +1,7 @@
---
base_port: 41080
notes_port: "{{ base_port + 1 }}"
dayoff_port: "{{ base_port + 2 }}"
homepage_port: "{{ base_port + 3 }}"
netdata_port: "{{ base_port + 4 }}"
wiki_port: "{{ base_port + 5 }}"
nomie_port: "{{ base_port + 6 }}"
nomie_db_port: "{{ base_port + 7 }}"
gitea_port: "{{ base_port + 8 }}"
keycloak_port: "{{ base_port + 9 }}"
outline_port: "{{ base_port + 10 }}"
gramps_port: "{{ base_port + 12 }}"
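The ports file gives each service a fixed offset above `base_port`, so every app's host port is stable and non-colliding. The arithmetic, reproduced outside Jinja (offsets copied from the vars shown above; removed services omitted):

```python
BASE_PORT = 41080

# Offsets as declared in the vars file.
OFFSETS = {"homepage": 3, "netdata": 4, "gitea": 8, "outline": 10, "gramps": 12}

# Each service listens on base_port + its offset, e.g. gitea on 41088.
PORTS = {name: BASE_PORT + offset for name, offset in OFFSETS.items()}
print(PORTS)
```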

vars/secrets.yml Normal file

@@ -0,0 +1,142 @@
$ANSIBLE_VAULT;1.1;AES256
62653431636461623338643536653736633166303934626565363963373637396534303130373035
6565376162653735313737333439633862643366336264650a633265316463323062653032363861
32626536343138663837633334316537373662653262366163633334623764633938323363363962
6230333564643665320a613862653632363363616266336338346539323964383736366235306437
33306363353163383663643062656330313134353836666232616532316264303564336235356661
30653262363866653139646436333036393837383262643537313933613939326433313565393465
31373036353133663337613935343038616164316132303833363338623863633234656537653039
62626436346238636234393939366139363034306432326538656264343733356537393332633836
38636639626665666238656338363633383566616638353235383465623232646537616230626630
63303130316438353934656636393366306566346362356564393661643064323630636463383061
37636461386432323136393739633862313337333261306664323361393835323034643134383461
31313762616538336666656137373631336132383364646163633732323431613239333563653332
65616664333839363834333362626238633833666430653738613636333432333430333861356339
61323865663661383534343964346238383134613532616637346235616139383434623564333361
31636165653261363830623162623738333937316664633434346431626630393837366666643434
61643734653834326434353431393732376266626266313264376235323838313539306463653864
36393461366230643234376161623330326365616539323965633431633238386262373562383161
39323634633166643038356434616461613864303334393932663730303839373530643933323839
66353337326336656635636362356531613634623633303461336565363564393964663430393666
64326439346233346132653230343234653430653239636362616561636166343030303863373337
36363633646432613138313062346164663730313061363432396138323561366430316439343036
32353931393064666231323863656165363066313236613332356161363139616636333963386130
37363030383765613132353161613766633635363033656561343038633839313933646264383730
64336339646264383332373639326164373163383966626363653762643037353636376336626136
33346533303036326531316332306461646361376435316438376161663162336335353938366565
30633133653431393066393961313138383337313731653031323432633766356338316366373432
32373937663961623739633439636661336461346132376533373961666432353937373066643165
61663063363661633938373365393665356665636562646265313834373962336566393835633339
34396666396162613162326331313037303933366564623837386338363063636564656339336639
66346465366233663534373465313930323134313835316464363263383866313563396263616535
63383265623865636162346635613863356266336664343434393437656134353639353535383332
62623934643930313939646466663336633034343534396137333264623263663866663339663266
30343234356536663262616363376663646264353331646164376331376639363135373137396437
37363166386233356434656237373535326162303437346233623263663534383032363638376134
61653939306433393437656465343066613530396265396262373433383637656266303064623234
64333062353435373863636439663561393763333538303836303631666262326430623835656138
37653562353562373935333235316430613737653862303933333062643663333364333966643461
33323335346566363337643161303835356336306232653763346639323265373432376239363566
64373562653238333865326335613133636335373739396335633631313431363061616139303463
37333364393438666532396131343637373833353766396234383739306565646439366438653032
33656330343061636338643465653664326338663233316631303465666632653436633135643664
64616132366632666431653262393035393163343664303961396431666236303864303865343634
35616634613165373637653235323164323666343436646339646637646234306163333462393063
32346534636165656436353036316232303266616135303663343631303565623562616237306365
65303938646239393564333461343238636335336533633265383066653734613332656563666434
31316665613630336263613934316361383332363164323266373565323239343033666663396534
39323739313636616232663535386439363065333766623837336230303334656466656262613363
37386664336436376530373436353235616437333834646563353830626162336261333135383866
64383930316531373366646335306131633166353161336463376530353066356530393665393063
31613636386532623035373866373065633233633135343439616662616232366337313764646436
64626262643532613136373238316561616361393433323066326333663663353236393662396539
31653036303031303462643231333965653536666136313638613832393361666131363435633932
31663864326563663230626237643763333737613239373134626433636564386231383961316162
39383165336433626466393935383363396333636131643733663866356434366664613766396263
34313934626133653361633665323131613736306331373732323434323535346136393964356231
62346136356331393238346333393266613365633563626238353530333931613330663765393936
32333261353634646366323238353238643837633735636662356630373464343330626630656130
36356565356430643133386461313335343436316263303064366139316638663161356332386362
37376431393661386231313763303266313630323362363664336366633035353562303439373630
33343265633630343065363461363064653933303932613761303538393734373962613633386539
66636534333537313135356665633966326430373062346136326532666638303334653263646431
38393131653338316663313265653861663334326635353137623739396636333637343137636339
32303836373535326363396434326233623532633931653039643763326263616232333462616631
36666564623030396134346665386661386433366266363739626161653062323963313365353161
35643530343439326133613939353737653165326538666530366530323963363839373032326462
34666235376263616364656130633637346334353934396132353263313237316366303137386430
64653563333963313361303239666361336136356363306266633833366262326431616161613238
38653538613032386238623839663332613064333031303939363733396635373238666562386536
32316566666435376239386637396334643861643634316338613063656465373164646530363865
34373130636435326130633437303539646535336131393339613139383636333763336530636534
34636666666265373636326666333130623863316465663333653466353063313134386262333739
62626264393362353663303531313061643538663532333164336662343732373463623166396539
39396531376338616538633633343733343765306237656466666232623163303738643431633763
61656335616430653936303831393664653365363764333362373337323364323039363163353461
61336536316466396636306266353830316665343739613033346538333830306263386134613737
64316339613462346438656362346664303762643766373364343931626530626439336634666537
31633964386564663531343764326666666261643464353438353035333665363434646661646663
38636239373331623061343730376632393963303732393533396464633131633435373161303163
66383461343861326665623463636262336562633936623563373136613063356362383862663232
37333331373431393137363735613366656434323065346661366433663464666363343231393863
64633530316230653065356165366135396531663731323866376162306238343962376362633234
61626563306431623336623737353931316236623333623337383366613262346631646330313637
39366239396330303461303666396431663062626533336136643039353034633230353765353334
38613362653963336162326163356662356661386630353664333265373032316531656131376665
37376262363130336161613230333863653662623436666361396561613935323432663665643138
38616564636634613164313666393532396265396135326538336665373232316461326635306131
34343632636637653835653131613161316237346239363830386536363933643532333533373333
39643364306163666366376535653333323435383332633961343930633635383030356463333964
39626130666166313234386439383833616265316265363430343134633730336261383435356138
62373063346238613061363033343366623633373034346531303538396335653938646664303962
31336634623135616237323837623831306535316463613266326262663934303938373132343735
37656335333263326531646162393738653632376164323165393563656138613830633936396433
61353332343134636564333233393863643837353366386234376237623435663765343366363033
63326233383962633266303962613361643464613764303531333930363736323535386632393766
61353666303134663466333330383031333933666137346364656364313965656164303065303530
34616130653061613934393831373130333566363736626261316330303966656162326638333130
66373133613536623566303432356666346535636237616561323063643439616436393666376536
32613830343636393031333737376332396230313034393062663437613838363263333233613439
30623039336339373234326261306435366332656164613439376139346333616331326561383963
30643133376632656564616536323863373237623263366266396264633464373765316164346165
37636233633661643362636630356333333766613036663335613264333439323239633861363034
34663937376530653837653236303839336631313863363239626632646436653638366638366566
39306538353231623434373537313862386335393262633062313432646232623863383731313031
30656366363837366666393933346238363336363030373836386230343062363661306263633163
33626562623935643665626239386133636531393536336661613430343630333961303233343430
63656666346138643163393663316134666336323961626163376461663635633834333337393062
61656163613234633965356133666335343065626137633137333266613561633936386136643134
37383562663031393133326662623136386539633066323336306262346236613161613637626162
36636133666334333636653535623732343233396430653566393165353431303739656239373738
33323939633264303139323162613964306237376461383261646635343036313639626539373238
32336537373436373338386432646139303831383138326564333739353761616336346461356532
38303138656533386231303336336564656135346162376662663962663763353830663237323138
33373331656637363139626132393231313136303936633161636261643264313230356261366165
39666331306262643566663830626663656530303831343231323336306266363735393966613062
63353938386263376166316335656164633233633465303065663565373764343031663866653135
64663766386436653665356265333565323336636539656237303334383636353161643366656637
66356532373130323236313936623964663433333965326662333833316437326461326165376661
66396537653032346666363965313339323331303864616230646361386335663138613433326261
35613430363864336635343434333761656639633863323534653862383936653762646134356664
38326463326239636162333435656561343739366364313738663535636136323439373462643832
62633661663337343538393466613734633531666532353161616231323161646237653736346561
64323063656366373931396639393261643333393333626539663561636661393936316539633263
63343331313464623636353031343232613534663565303538333164306531303438616539386364
30376233333630336431336364663834633734636261353364343564333639623737363538313462
61616233663335303062336635376435643965373039336231346234363436356238356162613138
65326532663461616263626238346535623136633039613939353132313836373962646463333535
65313562346631633435616232366166373763346337303561326130333936346130363431383036
62356435616630396539303633343166646461393030336462366463636138316333633363643636
65376131333731356566333237363266656466376539326438313930376363386231616138336335
65333735653830373035656265336331346562353233663465343935383235303930633831613137
64303130666532303733633133386334613733383562613661643931636136386264396438316366
61653964643135646332343764666134336666336232376465353462356632346533633961636534
32643234396636303135663562656435376561336235303837643932366334616265383639343733
65633833653763643366646232343765306131313465326263623636386131376463356139623334
39343163366439643334646663393434353333316234623530393431643539346435616263303734
61633066653838363933646230623238653431393061646430383537343363643562653831336362
37626630633161653763386663373630306564663339393265663732623434643231326335376562
37663234643466366535326461396631633430613431346134316635653032663033623465346338
61353331393631343365663233376330333730366161353362626166646232313666336333386265
33373761313536326165343339346263316636363362393365663034353964373164643763383037
3666


@@ -1,52 +0,0 @@
$ANSIBLE_VAULT;1.1;AES256
66343038363033616463663935663066663937373633383364626362656565333861373734626265
3239643335313165383962663832633935393039616134620a626436643836623037316233373461
62623330376438633933353965356262386138383864313461623062343836313739623562336431
3433356538323334640a663835313739646231316639616562373335646461616239383335373565
34303533376563393464303137633639396364366261376532623462653030363238343233623864
37316337383736626262643133636337623532343334386665643637386265313933386332663030
38663839303431656332356334303764353932316465363032386431376235313734666238383030
64323236636531643932663838383336616236303231326265333735346664633732363963653166
31616264356132396162356435373363666165353231336262386265393637393563663739383161
31393835353165636137356536353239366339636638646532396632666332393630663631626261
30313761393931386236353763663363393436643864643735326233323262626438616338326162
34333236653030326564363465316463663263346565343534326363333361663238626362356666
32663362616361623235393362363766316366616337666637353039653862646632393065336239
33666264366366323333663238623138613863653233623931373766373739383766333139653734
30333031363134326263343437656238613865306236306235396262306133653334616637376637
34663533386134373139343263373265326261666161326334626137313930643964656165636135
37656532653633616232356361326139613061633936333337383536616364623035623865663431
61633433656462373730303264363830376137323666326561333065326339303430663731633031
30326632373337663733363965383836623930366432323763333265363937363430333965316566
30653161353266623532383331336139383336626162306435613634363261623430343634656161
62353233633165626339633931616237316638653566363534333130373366633464346662343738
34336265343936653566623464356531336561626139633466316632373639666330666138343965
64326334343162626666366635363034303566303836376566613632353633346139616534333438
30633731333434666135656238633664376338303166373131643334343030326632316265373433
61646534663135613433343864353936623661653337646461353233336365363030376465666666
31626531303039333932306531303833313731386435633438373363633461623433613265643435
31643233386564323135653363363038386366353232303032316361613733313966393664656362
31373039393933326631343163383335376561616433623332646239303562363738393865336562
35613662376238613339316130386365643836663334346362623832623639636330363365616263
36613962396537653837303161613236616537383736666431313164393639353130653563343536
32373764376438366566396437376137383133303365623532626537373832346665333634353364
36326531323865323866623532383666333230646234383539313430396234313430323439393631
61313332336162343232393937343231626532303037323163333165626230646134633036393732
35633633613366643736343462336434633438333634653639623734346665383832316161383539
34656231306535613639613737626630666430326432633737616435613965613738363836373535
34613739643966653833393639313064356131393533653037376264306636353538373838613166
61633032643564383030323066623863643037306639653164356236353564616634376537383534
63343638303762656635383837613535646235313137336336343964333633306662633866343037
64393264653737656238663939663137616637356238636265313438373363316166373862653331
65383039393939373365633738306239373631333663613463313036393563643537306562386631
61313064653831343138343832366261613936316330386332613034383139373861363038613030
66616665646434363731346234313261353035323263333035616531316161356536653536613136
63356136373035363864356633373139643761323734343030353130353866653337336338316364
39643466386139346233373837656165393732326662353934656563386436313964643131376264
61336331353937373965653363396438303561636531626330333063316533616566316165646562
64306164343162646634366235363333636638353562656438343937346666643565303264323265
65353961623937623665366536333366663462376135396132663161383563383738333232653435
30366637393065653233323261633063373235306238323032623562326261613938646535396536
34633730393234363866643533396435646137313365636136643239383662643837626565663739
31313130626230353162393037653265633336343537313564373531636166666532356138633331
336233346133636565623763646438373536