
Compare commits


62 Commits

SHA1 Message Date
8a9b3db287 Gramps: upgrade to 25.7.0 2025-07-02 13:43:33 +03:00
a72c67f070 Wakapi: install 2.14.0
And transfer data from local
2025-07-01 11:21:05 +03:00
47745b7bc9 RSS-Bridge: install version 2025-06-03 2025-06-30 19:18:45 +03:00
c568f00db1 Miniflux: install and configure rss reader 2025-06-28 12:12:19 +03:00
99b6959c84 Tasks: add quick commands for authelia 2025-06-28 11:00:32 +03:00
fa65726096 Authelia: upgrade to 4.39.4 2025-06-28 10:02:57 +03:00
f9eaf7a41e Rename encrypted vars to secrets 2025-06-28 09:59:04 +03:00
d825b1f391 Netdata: upgrade to 2.5.4 2025-06-28 09:57:19 +03:00
b296a3f2fe Netdata: upgrade to 2.5.3 2025-06-22 09:34:57 +03:00
8ff89c9ee1 Gitea: upgrade to 1.24.2 2025-06-22 09:31:46 +03:00
62a4e598bd Gitea: upgrade to v1.24.0 2025-06-11 20:48:51 +03:00
b65aaa5072 Gramps: upgrade to v25.6.0 2025-06-11 20:48:27 +03:00
98b7aff274 Gramps: upgrade to v25.5.2 2025-05-24 12:04:45 +03:00
6eaf7f7390 Netdata: upgrade to 2.5.1 2025-05-21 21:24:22 +03:00
32e80282ef Update ansible roles 2025-05-17 17:17:01 +03:00
c8bd9f4ec3 Netdata: add fail2ban monitoring 2025-05-17 16:58:12 +03:00
d3d189e284 Gitea: upgrade to 1.23.8 2025-05-17 13:51:10 +03:00
71fe688ef8 Caddy: upgrade to 2.10.0 2025-05-17 13:50:47 +03:00
c5d0f96bdf Netdata + Authelia: add monitoring 2025-05-17 13:33:35 +03:00
eea8db6499 Netdata + Caddy: add monitoring for http-server 2025-05-17 11:55:38 +03:00
7893349da4 Netdata: refactoring as docker compose app 2025-05-17 10:27:41 +03:00
a4c61f94e6 Gramps: upgrade to 25.5.1 (with Gramps API 3.0.0) 2025-05-12 15:56:23 +03:00
da0a261ddd Outline: upgrade to 0.84.0 2025-05-12 12:58:21 +03:00
b9954d1bba Authelia: upgrade to 4.39.3 2025-05-12 12:55:41 +03:00
3a23c08f37 Remove keycloak 2025-05-07 12:51:05 +03:00
d1500ea373 Outline: use oidc from authelia 2025-05-07 12:37:07 +03:00
a77fefcded Authelia: introduce to protect system services 2025-05-07 11:23:22 +03:00
41fac2c4f9 Remove caddy system-wide installation 2025-05-06 12:00:32 +03:00
280ea24dea Caddy: web proxy in docker container 2025-05-06 11:50:26 +03:00
855bafee5b Format files with ansible-lint 2025-05-06 11:20:00 +03:00
adde4e32c1 Networks: create internal docker network for proxy server
Prepare to use caddy in docker
2025-05-06 11:11:48 +03:00
527067146f Gramps: refactor app
Move scripts, configs and data to separate user space
2025-05-06 10:25:38 +03:00
93326907d2 Remove unused var 2025-05-06 10:02:39 +03:00
bcad87c6e0 Remove legacy files 2025-05-05 20:57:47 +03:00
5d127d27ef Homepage: refactoring 2025-05-05 20:40:32 +03:00
2d6cb3ffe0 Format files with ansible-lint 2025-05-05 18:04:54 +03:00
e68920c0e2 Netdata as playbook 2025-05-05 18:02:14 +03:00
c5c15341b8 Outline: update to 0.83.0 2025-05-05 17:00:48 +03:00
cd4a7177d7 Outline: configure backups 2025-05-05 16:53:09 +03:00
daeef1bc4b Backups: rewrite backup script 2025-05-05 11:48:49 +03:00
ddae18f8b3 Gitea: configure backups again 2025-05-05 11:39:06 +03:00
8c8657fdd8 Gramps: configure backup again 2025-05-05 11:26:54 +03:00
c4b0200dc6 Outline: configure mailer 2025-05-04 14:02:28 +03:00
38bafd7186 Remove old configs 2025-05-04 11:12:44 +03:00
c6db39b55a Remove old playbooks and configs 2025-05-04 11:05:18 +03:00
528512e665 Refactor outline app: deploy with ansible 2025-05-04 10:59:41 +03:00
0e05d3e066 Make consistent container names 2025-05-04 10:26:17 +03:00
4221fb0009 Refactor keycloak app: deploy with ansible 2025-05-04 10:18:18 +03:00
255ac33e04 Configure gitea mailer 2025-05-03 19:39:02 +03:00
0bdd2c2543 Update gitea to 1.23.7 2025-05-03 16:58:38 +03:00
155d065dd0 Add backups for gitea 2025-05-03 16:56:22 +03:00
9a3e646d8a Refactor gitea app: deploy with ansible 2025-05-03 14:44:23 +03:00
f4b5fcb0f1 Format playbooks with ansible-lint 2025-05-03 10:41:00 +03:00
3054836085 Fix cronjob for backups 2025-05-03 10:35:33 +03:00
838f959fd9 Remove apps dir in files, simplify layout 2025-05-02 19:52:48 +03:00
5b60af6061 gramps: fix redis host and backups 2025-05-02 19:45:48 +03:00
d1eae9b5b5 Configure backup for sqlite databases 2025-05-02 19:05:17 +03:00
76328bf6c6 Update gramps to v25.4.1
- Inline vars into docker compose file
- Replace redis with valkey
2025-05-02 18:40:13 +03:00
a31cbbe18e Add backups with gobackup and restic 2025-05-02 17:34:31 +03:00
132da79fab Add utils for backups: task, restic, gobackup 2025-05-02 10:57:42 +03:00
676f6626f2 Update netdata to 2.4.0 2025-05-02 10:33:56 +03:00
dda5cb4449 Update eget installation path 2025-05-02 10:31:41 +03:00
83 changed files with 6183 additions and 920 deletions

@@ -1,3 +1,5 @@
 ---
 exclude_paths:
-  - 'galaxy.roles/'
+  - ".ansible/"
+  - "galaxy.roles/"
+  - "Taskfile.yml"

@@ -9,6 +9,9 @@ indent_size = 4
 [*.yml]
 indent_size = 2
+[*.yml.j2]
+indent_size = 2
 [Vagrantfile]
 indent_size = 2

.gitignore

@@ -5,6 +5,7 @@
 /galaxy.roles/
 /ansible-vault-password-file
+/temp
 *.retry
 test_smtp.py

@@ -3,12 +3,11 @@
 Configuration of a virtual server for personal home projects.
 > This project does not contain the most optimal solutions.
-> But they have helped me maintain a server for my personal projects for seven years.
+> But they have helped me maintain a server for my personal projects for many years now.
 ## Requirements
 - [ansible](https://docs.ansible.com/ansible/latest/getting_started/index.html)
-- [invoke](https://www.pyinvoke.org/)
 - [task](https://taskfile.dev/)
 - [yq](https://github.com/mikefarah/yq)
@@ -21,7 +20,7 @@ $ ansible-galaxy install --role-file requirements.yml
 ## Structure
-- A dedicated user is created for each application.
+- A dedicated user is created for each application (optional).
 - An ssh key is used for access.
 - Docker is used to run and isolate applications. Yandex Docker Registry is configured for pulling images.
 - Outbound traffic to the external network goes through the proxy server [Caddy](https://caddyserver.com/).
@@ -32,30 +31,10 @@
 In the Yandex organization: https://admin.yandex.ru/domains/vakhrushev.me?action=set_dns&uid=46045840
-## Common commands
-Application configuration (when a new application needs to be added):
-```bash
-$ task configure-apps
-```
-Monitoring configuration (when netdata needs to be updated):
-```bash
-$ task configure-monitoring
-```
 ## Application deployment
-Applications available for deployment:
+Deploy all applications with ansible:
 ```bash
-invoke --list
+ansible-playbook -i production.yml --diff playbook-gitea.yml
 ```
-Run a deploy command, for example:
-```bash
-invoke deploy:gitea
-```

@@ -12,8 +12,13 @@ vars:
     sh: 'yq .ungrouped.hosts.server.ansible_user {{.HOSTS_FILE}}'
   REMOTE_HOST:
     sh: 'yq .ungrouped.hosts.server.ansible_host {{.HOSTS_FILE}}'
+  AUTHELIA_DOCKER: 'docker run --rm -v $PWD:/data authelia/authelia:4.39.4 authelia'
 tasks:
+  install-roles:
+    cmds:
+      - ansible-galaxy role install --role-file requirements.yml --force
   ssh:
     cmds:
       - ssh {{.REMOTE_USER}}@{{.REMOTE_HOST}}
@@ -22,11 +27,43 @@ tasks:
     cmds:
       - ssh {{.REMOTE_USER}}@{{.REMOTE_HOST}} -t btop
-  edit-vars:
+  vars-decrypt:
     cmds:
-      - ansible-vault edit vars/vars.yml
-    env:
-      EDITOR: micro
+      - ansible-vault decrypt vars/vars.yml
+
+  vars-encrypt:
+    cmds:
+      - ansible-vault encrypt vars/vars.yml
+
+  authelia-cli:
+    cmds:
+      - "{{.AUTHELIA_DOCKER}} {{.CLI_ARGS}}"
+
+  authelia-validate-config:
+    vars:
+      DEST_FILE: "temp/configuration.yml"
+    cmds:
+      - >
+        ansible localhost
+        --module-name template
+        --args "src=files/authelia/configuration.yml.j2 dest={{.DEST_FILE}}"
+        --extra-vars "@vars/secrets.yml"
+      - defer: rm -f {{.DEST_FILE}}
+      - >
+        {{.AUTHELIA_DOCKER}}
+        validate-config --config /data/{{.DEST_FILE}}
+
+  authelia-gen-random-string:
+    cmds:
+      - >
+        {{.AUTHELIA_DOCKER}}
+        crypto rand --length 32 --charset alphanumeric
+
+  authelia-gen-secret-and-hash:
+    cmds:
+      - >
+        {{.AUTHELIA_DOCKER}}
+        crypto hash generate pbkdf2 --variant sha512 --random --random.length 72 --random.charset rfc3986
   format-py-files:
     cmds:

Vagrantfile

@@ -1,28 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
# This file is used to spin up a test virtual machine
# on which the server provisioning roles can be tried out.

ENV["LC_ALL"] = "en_US.UTF-8"

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"

  config.vm.provider "virtualbox" do |v|
    v.memory = 2048
    v.cpus = 2
  end

  config.vm.network "private_network", ip: "192.168.50.10"

  # Private key for accessing the machine
  config.vm.provision "shell" do |s|
    ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
    s.inline = <<-SHELL
      echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
      echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
    SHELL
  end
end

@@ -1,3 +0,0 @@
WEB_SERVER_PORT=9494
USER_UID=1000
USER_GID=1000

@@ -1 +0,0 @@
data/

@@ -1,16 +0,0 @@
services:
  server:
    image: gitea/gitea:1.23.1
    restart: unless-stopped
    environment:
      - "USER_UID=${USER_UID}"
      - "USER_GID=${USER_GID}"
      - "GITEA__server__SSH_PORT=2222"
    volumes:
      - ./data:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "${WEB_SERVER_PORT}:3000"
      - "2222:22"

@@ -1,5 +0,0 @@
WEB_SERVER_PORT=9595
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=password
USER_UID=1000
USER_GID=1000

@@ -1 +0,0 @@
data/

@@ -1,22 +0,0 @@
# Images: https://quay.io/repository/keycloak/keycloak?tab=tags&tag=latest
# Configuration: https://www.keycloak.org/server/all-config
# NB
# - In production there were permission problems with the data directory; it had to be opened up to 777
# - KC_HOSTNAME_ADMIN_URL must be set together with KC_HOSTNAME_URL, otherwise 403 errors occur
services:
  keycloak:
    image: quay.io/keycloak/keycloak:24.0.4
    command: ["start-dev"]
    restart: unless-stopped
    environment:
      KEYCLOAK_ADMIN: "${KEYCLOAK_ADMIN}"
      KEYCLOAK_ADMIN_PASSWORD: "${KEYCLOAK_ADMIN_PASSWORD}"
      KC_HOSTNAME_URL: "https://kk.vakhrushev.me"
      KC_HOSTNAME_ADMIN_URL: "https://kk.vakhrushev.me"
    ports:
      - "${WEB_SERVER_PORT}:8080"
    volumes:
      - "./data:/opt/keycloak/data"

@@ -1,16 +0,0 @@
# Images: https://quay.io/repository/keycloak/keycloak?tab=tags&tag=latest
# Configuration: https://www.keycloak.org/server/all-config
services:
  keycloak:
    image: quay.io/keycloak/keycloak:24.0.4
    command: ["start-dev"]
    restart: unless-stopped
    environment:
      KEYCLOAK_ADMIN: "${KEYCLOAK_ADMIN}"
      KEYCLOAK_ADMIN_PASSWORD: "${KEYCLOAK_ADMIN_PASSWORD}"
    ports:
      - "${WEB_SERVER_PORT}:8080"
    volumes:
      - "./data:/opt/keycloak/data"

@@ -1,60 +0,0 @@
services:
  outline-app:
    image: outlinewiki/outline:0.81.1
    restart: unless-stopped
    ports:
      - "${WEB_SERVER_PORT}:3000"
    depends_on:
      - postgres
      - redis
    environment:
      NODE_ENV: '${NODE_ENV}'
      SECRET_KEY: '${SECRET_KEY}'
      UTILS_SECRET: '${UTILS_SECRET}'
      DATABASE_URL: '${DATABASE_URL}'
      PGSSLMODE: '${PGSSLMODE}'
      REDIS_URL: '${REDIS_URL}'
      URL: '${URL}'
      FILE_STORAGE: '${FILE_STORAGE}'
      FILE_STORAGE_UPLOAD_MAX_SIZE: '262144000'
      AWS_ACCESS_KEY_ID: '${AWS_ACCESS_KEY_ID}'
      AWS_SECRET_ACCESS_KEY: '${AWS_SECRET_ACCESS_KEY}'
      AWS_REGION: '${AWS_REGION}'
      AWS_S3_ACCELERATE_URL: '${AWS_S3_ACCELERATE_URL}'
      AWS_S3_UPLOAD_BUCKET_URL: '${AWS_S3_UPLOAD_BUCKET_URL}'
      AWS_S3_UPLOAD_BUCKET_NAME: '${AWS_S3_UPLOAD_BUCKET_NAME}'
      AWS_S3_FORCE_PATH_STYLE: '${AWS_S3_FORCE_PATH_STYLE}'
      AWS_S3_ACL: '${AWS_S3_ACL}'
      OIDC_CLIENT_ID: '${OIDC_CLIENT_ID}'
      OIDC_CLIENT_SECRET: '${OIDC_CLIENT_SECRET}'
      OIDC_AUTH_URI: '${OIDC_AUTH_URI}'
      OIDC_TOKEN_URI: '${OIDC_TOKEN_URI}'
      OIDC_USERINFO_URI: '${OIDC_USERINFO_URI}'
      OIDC_LOGOUT_URI: '${OIDC_LOGOUT_URI}'
      OIDC_USERNAME_CLAIM: '${OIDC_USERNAME_CLAIM}'
      OIDC_DISPLAY_NAME: '${OIDC_DISPLAY_NAME}'

  redis:
    image: redis:7.2-bookworm
    restart: unless-stopped
    ports:
      - "6379:6379"
    volumes:
      - ./redis.conf:/redis.conf
    command: ["redis-server", "/redis.conf"]

  postgres:
    image: postgres:16.3-bookworm
    restart: unless-stopped
    ports:
      - "5432:5432"
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: '${POSTGRES_USER}'
      POSTGRES_PASSWORD: '${POSTGRES_PASSWORD}'
      POSTGRES_DB: '${POSTGRES_DB}'

volumes:
  database-data:

@@ -1,55 +0,0 @@
# See versions: https://github.com/gramps-project/gramps-web/pkgs/container/grampsweb
services:
  grampsweb: &grampsweb
    image: ghcr.io/gramps-project/grampsweb:v25.2.0
    restart: unless-stopped
    ports:
      - "127.0.0.1:${WEB_SERVER_PORT}:5000" # host:docker
    environment:
      GRAMPSWEB_TREE: "Gramps" # will create a new tree if not exists
      GRAMPSWEB_SECRET_KEY: "${SECRET_KEY}"
      GRAMPSWEB_BASE_URL: "https://gramps.vakhrushev.me"
      GRAMPSWEB_REGISTRATION_DISABLED: "true"
      GRAMPSWEB_CELERY_CONFIG__broker_url: "redis://grampsweb_redis:6379/0"
      GRAMPSWEB_CELERY_CONFIG__result_backend: "redis://grampsweb_redis:6379/0"
      GRAMPSWEB_RATELIMIT_STORAGE_URI: redis://grampsweb_redis:6379/1
      GRAMPSWEB_EMAIL_HOST: "${POSTBOX_HOST}"
      GRAMPSWEB_EMAIL_PORT: "${POSTBOX_PORT}"
      GRAMPSWEB_EMAIL_HOST_USER: "${POSTBOX_USER}"
      GRAMPSWEB_EMAIL_HOST_PASSWORD: "${POSTBOX_PASS}"
      GRAMPSWEB_EMAIL_USE_TLS: "false"
      GRAMPSWEB_DEFAULT_FROM_EMAIL: "gramps@vakhrushev.me"
      GUNICORN_NUM_WORKERS: 2
      # media storage at s3
      GRAMPSWEB_MEDIA_BASE_DIR: "s3://av-gramps-media-storage"
      AWS_ENDPOINT_URL: "https://storage.yandexcloud.net"
      AWS_ACCESS_KEY_ID: "${AWS_ACCESS_KEY_ID}"
      AWS_SECRET_ACCESS_KEY: "${AWS_SECRET_ACCESS_KEY}"
      AWS_DEFAULT_REGION: "ru-central1"
    depends_on:
      - grampsweb_redis
    volumes:
      - ./data/gramps_users:/app/users # persist user database
      - ./data/gramps_index:/app/indexdir # persist search index
      - ./data/gramps_thumb_cache:/app/thumbnail_cache # persist thumbnails
      - ./data/gramps_cache:/app/cache # persist export and report caches
      - ./data/gramps_secret:/app/secret # persist flask secret
      - ./data/gramps_db:/root/.gramps/grampsdb # persist Gramps database
      - ./data/gramps_media:/app/media # persist media files
      - ./data/gramps_tmp:/tmp

  grampsweb_celery:
    <<: *grampsweb # YAML merge key copying the entire grampsweb service config
    ports: []
    container_name: grampsweb_celery
    restart: unless-stopped
    depends_on:
      - grampsweb_redis
    command: celery -A gramps_webapi.celery worker --loglevel=INFO --concurrency=2

  grampsweb_redis:
    image: docker.io/library/redis:7.2.4-alpine
    container_name: grampsweb_redis
    restart: unless-stopped

@@ -1,6 +0,0 @@
services:
  homepage-web:
    image: "${WEB_SERVICE_IMAGE}"
    ports:
      - "127.0.0.1:${WEB_SERVICE_PORT}:80"
    restart: unless-stopped

File diff suppressed because it is too large

@@ -0,0 +1,15 @@
services:
  authelia_app:
    container_name: 'authelia_app'
    image: 'docker.io/authelia/authelia:4.39.4'
    user: '{{ user_create_result.uid }}:{{ user_create_result.group }}'
    restart: 'unless-stopped'
    networks:
      - "{{ web_proxy_network }}"
    volumes:
      - "{{ config_dir }}:/config"

networks:
  {{ web_proxy_network }}:
    external: true

files/authelia/users.yml

@@ -0,0 +1,37 @@
$ANSIBLE_VAULT;1.1;AES256
33323463653739626134366261626263396338333966376262313263613131343962326432613263
6430616564313432666436376432383539626231616438330a646161313364353566373833353337
64633361306564646564663736663937303435356332316432666135353863393439663235646462
3136303031383835390a396531366636386133656366653835633833633733326561383066656464
31613933333731643065316130303561383563626636346633396266346332653234373732326535
39663765353938333835646563663633393835633163323435303164663261303661666435306239
34353264633736383565306336633565376436646536623835613330393466363935303031346664
63626465656435383162633761333131393934666632336539386435613362353135383538643836
66373261306139353134393839333539366531393163393266386531613732366431663865343134
64363933616338663966353431396133316561653366396130653232636561343739336265386339
38646238653436663531633465616164303633356233363433623038666465326339656238653233
36323162303233633935646132353835336364303833636563346535316166346533636536656665
64323030616665316133363739393364306462316135636630613262646436643062373138656431
35663334616239623534383564643738616264373762663034376332323637626337306639653830
65386339666465343931303933663561643664313364386662656663643336636264636333666435
66366531613538363233346137383462326334306534333564636232393931393433386664363036
39623134636331646536323531653063326231613363366562643561353939633062663132303035
38303265326136303633666566613966636133666336396133333033643434303138303065666463
36643765316134636133333937396332613233383932663265386264623133633364646237346465
32623965653662336335366639643765393636623236323036396538353666646132393636663536
65646638643236313762373135336430643731643961386264303134366633353934366431333430
34313362633836613166336437323835626537653237666139383230663835626630623933383834
32636136663830643661363663303136393733646133626538333836666135653936323832336433
64396234396430326334656561393264366263313730306631383037643135613765373861356561
37363933383238316232336564363364376637626630373963666262376165343838303530653764
64343937666365646666363939383662313334656236326566373565643637313434616261616635
35646131396432623534396133666239613036386332663038353531313935636139363136666562
62616234663935383262626235313337623332333733383035666633393965336535316234323561
37353563623138343339616565653465633633383563636631356333303435376536393634343031
63653062303432366230643333353634383061313135616533643935316263393366653335353964
36363135356365373064613338393261326265396330323930613538326330663532616163666564
39313631633434353938626637626462376139383536306531633733646331303030333238373161
36336364383939663132366461383264346631366566363638333738386235623264623331343738
34316436393363323165396430343163653837623035626236313663643038336666633535666462
33323566353062653964643362363233346264396365336637376661323730336437333031363830
38303962646561346262

@@ -0,0 +1,37 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
echo "Backup: perform gitea backup"
su --login gitea --command '/home/gitea/backup.sh'
echo "Backup: perform outline backup"
su --login outline --command '/home/outline/backup.sh'
echo "Backup: perform gramps backup"
su --login gramps --command '/home/gramps/backup.sh'
echo "Backup: perform miniflux backup"
su --login miniflux --command '/home/miniflux/backup.sh'
echo "Backup: perform wakapi backup"
su --login wakapi --command '/home/wakapi/backup.sh'
echo "Backup: send backups to remote storage with restic"
restic-shell.sh backup --verbose /home/gitea/backups /home/outline/backups /home/gramps/backups /home/miniflux/backups /home/wakapi/backups \
&& restic-shell.sh check \
&& restic-shell.sh forget --compact --prune --keep-daily 90 --keep-monthly 36 \
&& restic-shell.sh check
echo "Backup: send notification"
curl -s -X POST 'https://api.telegram.org/bot{{ notifications_tg_bot_token }}/sendMessage' \
-d 'chat_id={{ notifications_tg_chat_id }}' \
-d 'parse_mode=HTML' \
-d 'text=<b>{{ notifications_name }}</b>: backup completed successfully!'
echo -e "\nBackup: done"

@@ -0,0 +1,12 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
export RESTIC_REPOSITORY={{ restic_repository }}
export RESTIC_PASSWORD={{ restic_password }}
export AWS_ACCESS_KEY_ID={{ restic_s3_access_key }}
export AWS_SECRET_ACCESS_KEY={{ restic_s3_access_secret }}
export AWS_DEFAULT_REGION={{ restic_s3_region }}
restic "$@"
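The wrapper above just exports the repository credentials and forwards its arguments to restic. A minimal local sketch of the same pattern, with restic stubbed out by a shell function and every value a placeholder, looks like this:

```shell
#!/usr/bin/env bash
set -eu

# Stub standing in for the real restic binary so the wrapper pattern can be
# exercised without a repository; repo URL and password are placeholders.
restic() { echo "restic $* repo=$RESTIC_REPOSITORY"; }

export RESTIC_REPOSITORY='s3:https://storage.example.net/backups'
export RESTIC_PASSWORD='placeholder'

restic snapshots --latest 1
```

With the real binary on PATH, the same invocation would list the most recent snapshot of the configured repository.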

@@ -0,0 +1,93 @@
# -------------------------------------------------------------------
# Global options
# -------------------------------------------------------------------

{
    grace_period 15s
    admin :2019

    # Enable metrics in Prometheus format
    # https://caddyserver.com/docs/metrics
    metrics
}

# -------------------------------------------------------------------
# Applications
# -------------------------------------------------------------------

vakhrushev.me {
    tls anwinged@ya.ru
    reverse_proxy {
        to homepage_app:80
    }
}

auth.vakhrushev.me {
    tls anwinged@ya.ru
    reverse_proxy authelia_app:9091
}

status.vakhrushev.me, :29999 {
    tls anwinged@ya.ru
    forward_auth authelia_app:9091 {
        uri /api/authz/forward-auth
        copy_headers Remote-User Remote-Groups Remote-Email Remote-Name
    }
    reverse_proxy netdata:19999
}

git.vakhrushev.me {
    tls anwinged@ya.ru
    reverse_proxy {
        to gitea_app:3000
    }
}

outline.vakhrushev.me {
    tls anwinged@ya.ru
    reverse_proxy {
        to outline_app:3000
    }
}

gramps.vakhrushev.me {
    tls anwinged@ya.ru
    reverse_proxy {
        to gramps_app:5000
    }
}

miniflux.vakhrushev.me {
    tls anwinged@ya.ru
    reverse_proxy {
        to miniflux_app:8080
    }
}

wakapi.vakhrushev.me {
    tls anwinged@ya.ru
    reverse_proxy {
        to wakapi_app:3000
    }
}

rssbridge.vakhrushev.me {
    tls anwinged@ya.ru
    forward_auth authelia_app:9091 {
        uri /api/authz/forward-auth
        copy_headers Remote-User Remote-Groups Remote-Email Remote-Name
    }
    reverse_proxy rssbridge_app:80
}

@@ -0,0 +1,22 @@
services:
  {{ service_name }}:
    image: caddy:2.10.0
    restart: unless-stopped
    container_name: {{ service_name }}
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    cap_add:
      - NET_ADMIN
    volumes:
      - {{ caddy_file_dir }}:/etc/caddy
      - {{ data_dir }}:/data
      - {{ config_dir }}:/config
    networks:
      - "{{ web_proxy_network }}"

networks:
  {{ web_proxy_network }}:
    external: true

@@ -1,25 +0,0 @@
$ANSIBLE_VAULT;1.1;AES256
36373937313831396330393762313931643536363765353936333166376465343033376564613538
3235356131646564393664376535646561323435363330660a353632613334633461383562306662
37373439373636383834383464316337656531626663393830323332613136323438313762656435
6338353136306338640a636539363766663030356432663361636438386538323238373235663766
37393035356137653763373364623836346439663062313061346537353634306138376231633635
30363465663836373830366231636265663837646137313764316364623637623333346636363934
33666164343832653536303262663635616632663561633739636561333964653862313131613232
39316239376566633964633064393532613935306161666666323337343130393861306532623666
39653463323532333932646262663862313961393430306663643866623865346666313731366331
32663262636132663238313630373937663936326532643730613161376565653263633935393363
63373163346566363639396432653132646334643031323532613238666531363630353266303139
31613138303131343364343438663762343936393165356235646239343039396637643666653065
31363163623863613533663366303664623134396134393765636435633464373731653563646537
39373766626338646564356463623531373337303861383862613966323132656639326533356533
38346263326361656563386333663531663232623436653866383865393964353363353563653532
65343130383262386262393634636338313732623565666531303636303433333638323230346565
61633837373531343530383238396162373632623135333263323234623833383731336463333063
62656533636237303962653238653934346430366533636436646264306461323639666665623839
32643637623630613863323335666138303538313236343932386461346433656432626433663365
38376666623839393630343637386336623334623064383131316331333564363934636662633630
31363337393339643738306363306538373133626564613765643138666237303330613036666537
61363838353736613531613436313730313936363564303464346661376137303133633062613932
36383631303739306264386663333338666235346339623338333663386663303439363362376239
35626136646634363430

files/gitea/backup.sh.j2

@@ -0,0 +1,21 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
echo "Gitea: backup data with gitea dump"
(cd "{{ base_dir }}" && \
docker compose exec \
-u "{{ user_create_result.uid }}:{{ user_create_result.group }}" \
-w /backups gitea_app \
gitea dump -c /data/gitea/conf/app.ini \
)
echo "Gitea: remove old backups"
keep-files.py "{{ backups_dir }}" --keep 3
echo "Gitea: done."

@@ -0,0 +1,33 @@
services:
  gitea_app:
    image: gitea/gitea:1.24.2
    restart: unless-stopped
    container_name: gitea_app
    ports:
      - "127.0.0.1:{{ gitea_port }}:3000"
      - "2222:22"
    volumes:
      - {{ data_dir }}:/data
      - {{ backups_dir }}:/backups
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    networks:
      - "{{ web_proxy_network }}"
    environment:
      - "USER_UID=${USER_UID}"
      - "USER_GID=${USER_GID}"
      - "GITEA__server__SSH_PORT=2222"
      # Mailer
      - "GITEA__mailer__ENABLED=true"
      - "GITEA__mailer__PROTOCOL=smtp+starttls"
      - "GITEA__mailer__SMTP_ADDR={{ postbox_host }}"
      - "GITEA__mailer__SMTP_PORT={{ postbox_port }}"
      - "GITEA__mailer__USER={{ postbox_user }}"
      - "GITEA__mailer__PASSWD={{ postbox_pass }}"
      - "GITEA__mailer__FROM=gitea@vakhrushev.me"

networks:
  {{ web_proxy_network }}:
    external: true

files/gramps/backup.sh.j2

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
echo "Gramps: backup data with gobackup"
(cd "{{ base_dir }}" && gobackup perform --config "{{ gobackup_config }}")
echo "Gramps: done."

@@ -0,0 +1,69 @@
# See versions: https://github.com/gramps-project/gramps-web/pkgs/container/grampsweb
services:
  gramps_app: &gramps_app
    image: ghcr.io/gramps-project/grampsweb:25.7.0
    container_name: gramps_app
    depends_on:
      - gramps_redis
    restart: unless-stopped
    networks:
      - "gramps_network"
      - "{{ web_proxy_network }}"
    volumes:
      - "{{ (data_dir, 'gramps_db') | path_join }}:/root/.gramps/grampsdb" # persist Gramps database
      - "{{ (data_dir, 'gramps_users') | path_join }}:/app/users" # persist user database
      - "{{ (data_dir, 'gramps_index') | path_join }}:/app/indexdir" # persist search index
      - "{{ (data_dir, 'gramps_thumb_cache') | path_join }}:/app/thumbnail_cache" # persist thumbnails
      - "{{ (data_dir, 'gramps_cache') | path_join }}:/app/cache" # persist export and report caches
      - "{{ (data_dir, 'gramps_secret') | path_join }}:/app/secret" # persist flask secret
      - "{{ (data_dir, 'gramps_media') | path_join }}:/app/media" # persist media files
    environment:
      GRAMPSWEB_TREE: "Gramps" # will create a new tree if not exists
      GRAMPSWEB_SECRET_KEY: "{{ gramps_secret_key }}"
      GRAMPSWEB_BASE_URL: "https://gramps.vakhrushev.me"
      GRAMPSWEB_REGISTRATION_DISABLED: "true"
      GRAMPSWEB_CELERY_CONFIG__broker_url: "redis://gramps_redis:6379/0"
      GRAMPSWEB_CELERY_CONFIG__result_backend: "redis://gramps_redis:6379/0"
      GRAMPSWEB_RATELIMIT_STORAGE_URI: "redis://gramps_redis:6379/1"
      GUNICORN_NUM_WORKERS: 2
      # Email options
      GRAMPSWEB_EMAIL_HOST: "{{ postbox_host }}"
      GRAMPSWEB_EMAIL_PORT: "{{ postbox_port }}"
      GRAMPSWEB_EMAIL_HOST_USER: "{{ postbox_user }}"
      GRAMPSWEB_EMAIL_HOST_PASSWORD: "{{ postbox_pass }}"
      GRAMPSWEB_EMAIL_USE_TLS: "false"
      GRAMPSWEB_DEFAULT_FROM_EMAIL: "gramps@vakhrushev.me"
      # media storage at s3
      GRAMPSWEB_MEDIA_BASE_DIR: "s3://av-gramps-media-storage"
      AWS_ENDPOINT_URL: "{{ gramps_s3_endpoint }}"
      AWS_ACCESS_KEY_ID: "{{ gramps_s3_access_key_id }}"
      AWS_SECRET_ACCESS_KEY: "{{ gramps_s3_secret_access_key }}"
      AWS_DEFAULT_REGION: "{{ gramps_s3_region }}"

  gramps_celery:
    <<: *gramps_app # YAML merge key copying the entire grampsweb service config
    container_name: gramps_celery
    depends_on:
      - gramps_redis
    restart: unless-stopped
    ports: []
    networks:
      - "gramps_network"
    command: celery -A gramps_webapi.celery worker --loglevel=INFO --concurrency=2

  gramps_redis:
    image: valkey/valkey:8.1.1-alpine
    container_name: gramps_redis
    restart: unless-stopped
    networks:
      - "gramps_network"

networks:
  gramps_network:
    driver: bridge
  {{ web_proxy_network }}:
    external: true

@@ -0,0 +1,32 @@
# https://gobackup.github.io/configuration
models:
  gramps:
    compress_with:
      type: 'tgz'
    storages:
      local:
        type: 'local'
        path: '{{ backups_dir }}'
        keep: 3
    databases:
      users:
        type: sqlite
        path: "{{ (data_dir, 'gramps_users/users.sqlite') | path_join }}"
      search_index:
        type: sqlite
        path: "{{ (data_dir, 'gramps_index/search_index.db') | path_join }}"
      sqlite:
        type: sqlite
        path: "{{ (data_dir, 'gramps_db/59a0f3d6-1c3d-4410-8c1d-1c9c6689659f/sqlite.db') | path_join }}"
      undo:
        type: sqlite
        path: "{{ (data_dir, 'gramps_db/59a0f3d6-1c3d-4410-8c1d-1c9c6689659f/undo.db') | path_join }}"
    archive:
      includes:
        - "{{ data_dir }}"
      excludes:
        - "{{ (data_dir, 'gramps_cache') | path_join }}"
        - "{{ (data_dir, 'gramps_thumb_cache') | path_join }}"
        - "{{ (data_dir, 'gramps_tmp') | path_join }}"

@@ -0,0 +1,14 @@
services:
  homepage_app:
    image: "{{ registry_homepage_web_image }}"
    container_name: homepage_app
    restart: unless-stopped
    ports:
      - "127.0.0.1:{{ homepage_port }}:80"
    networks:
      - "{{ web_proxy_network }}"

networks:
  {{ web_proxy_network }}:
    external: true

files/keep-files.py

@@ -0,0 +1,48 @@
#!/usr/bin/env python3

import os
import argparse


def main():
    parser = argparse.ArgumentParser(
        description="Retain specified number of files in a directory sorted by name, delete others."
    )
    parser.add_argument("directory", type=str, help="Path to target directory")
    parser.add_argument(
        "--keep", type=int, default=2, help="Number of files to retain (default: 2)"
    )
    args = parser.parse_args()

    # Validate arguments
    if args.keep < 0:
        parser.error("--keep value cannot be negative")
    if not os.path.isdir(args.directory):
        parser.error(f"Directory not found: {args.directory}")

    # Get list of files (exclude subdirectories)
    files = []
    with os.scandir(args.directory) as entries:
        for entry in entries:
            if entry.is_file():
                files.append(entry.name)

    # Sort files alphabetically
    sorted_files = sorted(files)

    # Identify files to delete
    to_delete = sorted_files[:-args.keep] if args.keep > 0 else sorted_files.copy()

    # Delete files and print results
    for filename in to_delete:
        filepath = os.path.join(args.directory, filename)
        try:
            os.remove(filepath)
            print(f"Deleted: {filepath}")
        except Exception as e:
            print(f"Error deleting {filepath}: {str(e)}")


if __name__ == "__main__":
    main()
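The retention logic in keep-files.py (sort names, delete everything but the newest `--keep` entries) can be cross-checked with a plain-shell equivalent; this sketch assumes GNU coreutils for `head -n -3`:

```shell
#!/usr/bin/env bash
set -eu

tmp=$(mktemp -d)
for i in 1 2 3 4 5; do touch "$tmp/backup_$i.tar.gz"; done

# Same policy as `keep-files.py "$tmp" --keep 3`: sort by name and
# delete all entries except the last three.
ls "$tmp" | sort | head -n -3 | while read -r f; do rm "$tmp/$f"; done

ls "$tmp" | sort   # prints backup_3.tar.gz, backup_4.tar.gz, backup_5.tar.gz
rm -rf "$tmp"
```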

@@ -0,0 +1,25 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="miniflux_postgres_${TIMESTAMP}.sql.gz"
echo "miniflux: backing up postgresql database"
docker compose --file "{{ base_dir }}/docker-compose.yml" exec \
miniflux_postgres \
pg_dump \
-U "{{ miniflux_postgres_user }}" \
"{{ miniflux_postgres_database }}" \
| gzip > "{{ postgres_backups_dir }}/${BACKUP_FILE}"
echo "miniflux: PostgreSQL backup saved to {{ postgres_backups_dir }}/${BACKUP_FILE}"
echo "miniflux: removing old backups"
# Keep only the 3 most recent backups
keep-files.py "{{ postgres_backups_dir }}" --keep 3
echo "miniflux: backup completed successfully."
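Restoring such a dump is the reverse pipe (gunzip into psql, e.g. `gunzip -c dump.sql.gz | psql -U miniflux miniflux`). The gzip round trip itself can be verified with placeholder data and no database at all:

```shell
#!/usr/bin/env bash
set -eu

# Placeholder SQL standing in for real pg_dump output.
dump=$(mktemp)
printf 'SELECT 1;\n' | gzip > "$dump"

# `gunzip -c` decompresses to stdout, exactly what a restore would pipe into psql.
gunzip -c "$dump"   # prints: SELECT 1;
rm -f "$dump"
```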

@@ -0,0 +1,52 @@
# See sample https://miniflux.app/docs/docker.html#docker-compose
# See env https://miniflux.app/docs/configuration.html
services:
miniflux_app:
image: miniflux/miniflux:2.2.10
container_name: miniflux_app
depends_on:
miniflux_postgres:
condition: service_healthy
networks:
- "miniflux_network"
- "{{ web_proxy_network }}"
environment:
- DATABASE_URL=postgres://{{ miniflux_postgres_user }}:{{ miniflux_postgres_password }}@miniflux_postgres/{{ miniflux_postgres_database }}?sslmode=disable
- RUN_MIGRATIONS=1
- CREATE_ADMIN=1
- ADMIN_USERNAME={{ miniflux_admin_user }}
- ADMIN_PASSWORD={{ miniflux_admin_password }}
- BASE_URL=https://miniflux.vakhrushev.me
- DISABLE_LOCAL_AUTH=1
- OAUTH2_OIDC_DISCOVERY_ENDPOINT=https://auth.vakhrushev.me
- OAUTH2_CLIENT_ID={{ miniflux_oidc_client_id }}
- OAUTH2_CLIENT_SECRET={{ miniflux_oidc_client_secret }}
- OAUTH2_OIDC_PROVIDER_NAME=Authelia
- OAUTH2_PROVIDER=oidc
- OAUTH2_REDIRECT_URL=https://miniflux.vakhrushev.me/oauth2/oidc/callback
- OAUTH2_USER_CREATION=1
- METRICS_COLLECTOR=1
- METRICS_ALLOWED_NETWORKS=0.0.0.0/0
miniflux_postgres:
image: postgres:16.3-bookworm
container_name: miniflux_postgres
environment:
- POSTGRES_USER={{ miniflux_postgres_user }}
- POSTGRES_PASSWORD={{ miniflux_postgres_password }}
- POSTGRES_DB={{ miniflux_postgres_database }}
networks:
- "miniflux_network"
volumes:
- {{ postgres_data_dir }}:/var/lib/postgresql/data
healthcheck:
test: ["CMD", "pg_isready", "-U", "{{ miniflux_postgres_user }}"]
interval: 10s
start_period: 30s
networks:
miniflux_network:
driver: bridge
{{ web_proxy_network }}:
external: true


@ -0,0 +1,37 @@
services:
netdata:
image: netdata/netdata:v2.5.4
container_name: netdata
restart: unless-stopped
cap_add:
- SYS_PTRACE
- SYS_ADMIN
security_opt:
- apparmor:unconfined
networks:
- "{{ web_proxy_network }}"
volumes:
- "{{ config_dir }}:/etc/netdata"
- "{{ (data_dir, 'lib') | path_join }}:/var/lib/netdata"
- "{{ (data_dir, 'cache') | path_join }}:/var/cache/netdata"
# Netdata system volumes
- "/:/host/root:ro,rslave"
- "/etc/group:/host/etc/group:ro"
- "/etc/localtime:/etc/localtime:ro"
- "/etc/os-release:/host/etc/os-release:ro"
- "/etc/passwd:/host/etc/passwd:ro"
- "/proc:/host/proc:ro"
- "/run/dbus:/run/dbus:ro"
- "/sys:/host/sys:ro"
- "/var/log:/host/var/log:ro"
- "/var/run:/host/var/run:ro"
- "/var/run/docker.sock:/var/run/docker.sock:ro"
environment:
PGID: "{{ netdata_docker_group_output.stdout | default(999) }}"
NETDATA_EXTRA_DEB_PACKAGES: "fail2ban"
networks:
{{ web_proxy_network }}:
external: true
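The `PGID` value presumably comes from a task elsewhere in the role that registers the host's `docker` group ID into `netdata_docker_group_output` (falling back to `999`). A sketch of that lookup — demonstrated against the always-present `root` group, since `docker` may not exist on every machine:

```shell
# Resolve a group's numeric GID, as the registered variable presumably does
# for the "docker" group: getent group docker | cut -d: -f3
gid="$(getent group root | cut -d: -f3)"
echo "$gid"
```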


@ -0,0 +1,3 @@
jobs:
- name: fail2ban
update_every: 5 # Collect Fail2Ban jails statistics every 5 seconds


@ -0,0 +1,22 @@
update_every: 5
autodetection_retry: 0
jobs:
- name: caddyproxy
url: http://caddyproxy:2019/metrics
selector:
allow:
- "caddy_http_*"
- name: authelia
url: http://authelia_app:9959/metrics
selector:
allow:
- "authelia_*"
- name: miniflux
url: http://miniflux_app:8080/metrics
selector:
allow:
- "miniflux_*"
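Each `selector.allow` pattern keeps only the metric families whose names match the glob, e.g. `caddy_http_*`. The effect can be sketched with a plain `grep` over a sample scrape (metric names here are illustrative):

```shell
# Two sample metric lines; only the caddy_http_* family survives the filter,
# mirroring what the allow selector does inside the go.d prometheus collector.
printf 'caddy_http_requests_total 5\ngo_goroutines 12\n' | grep '^caddy_http_'
```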


@ -0,0 +1,687 @@
# netdata configuration
#
# You can download the latest version of this file, using:
#
# wget -O /etc/netdata/netdata.conf http://localhost:19999/netdata.conf
# or
# curl -o /etc/netdata/netdata.conf http://localhost:19999/netdata.conf
#
# You can uncomment and change any of the options below.
# The value shown in the commented settings, is the default value.
#
# global netdata configuration
[global]
# run as user = netdata
# host access prefix = /host
# pthread stack size = 8MiB
# cpu cores = 2
# libuv worker threads = 16
# profile = standalone
hostname = {{ host_name }}
# glibc malloc arena max for plugins = 1
# glibc malloc arena max for netdata = 1
# crash reports = all
# timezone = Etc/UTC
# OOM score = 0
# process scheduling policy = keep
# is ephemeral node = no
# has unstable connection = no
[db]
# enable replication = yes
# replication period = 1d
# replication step = 1h
# replication threads = 1
# replication prefetch = 10
# update every = 1s
# db = dbengine
# memory deduplication (ksm) = auto
# cleanup orphan hosts after = 1h
# cleanup ephemeral hosts after = off
# cleanup obsolete charts after = 1h
# gap when lost iterations above = 1
# dbengine page type = gorilla
# dbengine page cache size = 32MiB
# dbengine extent cache size = off
# dbengine enable journal integrity check = no
# dbengine use all ram for caches = no
# dbengine out of memory protection = 391.99MiB
# dbengine use direct io = yes
# dbengine journal v2 unmount time = 2m
# dbengine pages per extent = 109
# storage tiers = 3
# dbengine tier backfill = new
# dbengine tier 1 update every iterations = 60
# dbengine tier 2 update every iterations = 60
# dbengine tier 0 retention size = 1024MiB
# dbengine tier 0 retention time = 14d
# dbengine tier 1 retention size = 1024MiB
# dbengine tier 1 retention time = 3mo
# dbengine tier 2 retention size = 1024MiB
# dbengine tier 2 retention time = 2y
# extreme cardinality protection = yes
# extreme cardinality keep instances = 1000
# extreme cardinality min ephemerality = 50
[directories]
# config = /etc/netdata
# stock config = /usr/lib/netdata/conf.d
# log = /var/log/netdata
# web = /usr/share/netdata/web
# cache = /var/cache/netdata
# lib = /var/lib/netdata
# cloud.d = /var/lib/netdata/cloud.d
# plugins = "/usr/libexec/netdata/plugins.d" "/etc/netdata/custom-plugins.d"
# registry = /var/lib/netdata/registry
# home = /etc/netdata
# stock health config = /usr/lib/netdata/conf.d/health.d
# health config = /etc/netdata/health.d
[logs]
# facility = daemon
# logs flood protection period = 1m
# logs to trigger flood protection = 1000
# level = info
# debug = /var/log/netdata/debug.log
# daemon = /var/log/netdata/daemon.log
# collector = /var/log/netdata/collector.log
# access = /var/log/netdata/access.log
# health = /var/log/netdata/health.log
# debug flags = 0x0000000000000000
[environment variables]
# PATH = /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin
# PYTHONPATH =
# TZ = :/etc/localtime
[host labels]
# name = value
[cloud]
# conversation log = no
# scope = full
# query threads = 6
# proxy = env
[ml]
# enabled = auto
# maximum num samples to train = 21600
# minimum num samples to train = 900
# train every = 3h
# number of models per dimension = 18
# delete models older than = 7d
# num samples to diff = 1
# num samples to smooth = 3
# num samples to lag = 5
# random sampling ratio = 0.20000
# maximum number of k-means iterations = 1000
# dimension anomaly score threshold = 0.99000
# host anomaly rate threshold = 1.00000
# anomaly detection grouping method = average
# anomaly detection grouping duration = 5m
# num training threads = 1
# flush models batch size = 256
# dimension anomaly rate suppression window = 15m
# dimension anomaly rate suppression threshold = 450
# enable statistics charts = yes
# hosts to skip from training = !*
# charts to skip from training = netdata.*
# stream anomaly detection charts = yes
[health]
# silencers file = /var/lib/netdata/health.silencers.json
# enabled = yes
# enable stock health configuration = yes
# use summary for notifications = yes
# default repeat warning = off
# default repeat critical = off
# in memory max health log entries = 1000
# health log retention = 5d
# script to execute on alarm = /usr/libexec/netdata/plugins.d/alarm-notify.sh
# enabled alarms = *
# run at least every = 10s
# postpone alarms during hibernation for = 1m
[web]
#| >>> [web].default port <<<
#| migrated from: [global].default port
# default port = 19999
# ssl key = /etc/netdata/ssl/key.pem
# ssl certificate = /etc/netdata/ssl/cert.pem
# tls version = 1.3
# tls ciphers = none
# ses max tg_des_window = 15
# des max tg_des_window = 15
# mode = static-threaded
# listen backlog = 4096
# bind to = *
# bearer token protection = no
# disconnect idle clients after = 1m
# timeout for first request = 1m
# accept a streaming request every = off
# respect do not track policy = no
# x-frame-options response header =
# allow connections from = localhost *
# allow connections by dns = heuristic
# allow dashboard from = localhost *
# allow dashboard by dns = heuristic
# allow badges from = *
# allow badges by dns = heuristic
# allow streaming from = *
# allow streaming by dns = heuristic
# allow netdata.conf from = localhost fd* 10.* 192.168.* 172.16.* 172.17.* 172.18.* 172.19.* 172.20.* 172.21.* 172.22.* 172.23.* 172.24.* 172.25.* 172.26.* 172.27.* 172.28.* 172.29.* 172.30.* 172.31.* UNKNOWN
# allow netdata.conf by dns = no
# allow management from = localhost
# allow management by dns = heuristic
# enable gzip compression = yes
# gzip compression strategy = default
# gzip compression level = 3
# ssl skip certificate verification = no
# web server threads = 6
# web server max sockets = 262144
[registry]
# enabled = no
# registry db file = /var/lib/netdata/registry/registry.db
# registry log file = /var/lib/netdata/registry/registry-log.db
# registry save db every new entries = 1000000
# registry expire idle persons = 1y
# registry domain =
# registry to announce = https://registry.my-netdata.io
# registry hostname = 7171b7f9fc69
# verify browser cookies support = yes
# enable cookies SameSite and Secure = yes
# max URL length = 1024
# max URL name length = 50
# netdata management api key file = /var/lib/netdata/netdata.api.key
# allow from = *
# allow by dns = heuristic
[pulse]
# extended = no
# update every = 1s
[plugins]
# idlejitter = yes
# netdata pulse = yes
# profile = no
# tc = yes
# diskspace = yes
# proc = yes
# cgroups = yes
# timex = yes
# statsd = yes
# enable running new plugins = yes
# check for new plugins every = 1m
# slabinfo = no
# freeipmi = no
# python.d = yes
# go.d = yes
# apps = yes
# systemd-journal = yes
# network-viewer = yes
# charts.d = yes
# debugfs = yes
# perf = yes
# ioping = yes
[statsd]
# update every (flushInterval) = 1s
# udp messages to process at once = 10
# create private charts for metrics matching = *
# max private charts hard limit = 1000
# set charts as obsolete after = off
# decimal detail = 1000
# disconnect idle tcp clients after = 10m
# private charts hidden = no
# histograms and timers percentile (percentThreshold) = 95.00000
# dictionaries max unique dimensions = 200
# add dimension for number of events received = no
# gaps on gauges (deleteGauges) = no
# gaps on counters (deleteCounters) = no
# gaps on meters (deleteMeters) = no
# gaps on sets (deleteSets) = no
# gaps on histograms (deleteHistograms) = no
# gaps on timers (deleteTimers) = no
# gaps on dictionaries (deleteDictionaries) = no
# statsd server max TCP sockets = 262144
# listen backlog = 4096
# default port = 8125
# bind to = udp:localhost tcp:localhost
[plugin:idlejitter]
# loop time = 20ms
[plugin:timex]
# update every = 10s
# clock synchronization state = yes
# time offset = yes
[plugin:proc]
# /proc/net/dev = yes
# /proc/pagetypeinfo = no
# /proc/stat = yes
# /proc/uptime = yes
# /proc/loadavg = yes
# /proc/sys/fs/file-nr = yes
# /proc/sys/kernel/random/entropy_avail = yes
# /run/reboot_required = yes
# /proc/pressure = yes
# /proc/interrupts = yes
# /proc/softirqs = yes
# /proc/vmstat = yes
# /proc/meminfo = yes
# /sys/kernel/mm/ksm = yes
# /sys/block/zram = yes
# /sys/devices/system/edac/mc = yes
# /sys/devices/pci/aer = yes
# /sys/devices/system/node = yes
# /proc/net/wireless = yes
# /proc/net/sockstat = yes
# /proc/net/sockstat6 = yes
# /proc/net/netstat = yes
# /proc/net/sctp/snmp = yes
# /proc/net/softnet_stat = yes
# /proc/net/ip_vs/stats = yes
# /sys/class/infiniband = yes
# /proc/net/stat/conntrack = yes
# /proc/net/stat/synproxy = yes
# /proc/diskstats = yes
# /proc/mdstat = yes
# /proc/net/rpc/nfsd = yes
# /proc/net/rpc/nfs = yes
# /proc/spl/kstat/zfs/arcstats = yes
# /sys/fs/btrfs = yes
# ipc = yes
# /sys/class/power_supply = yes
# /sys/class/drm = yes
[plugin:cgroups]
# update every = 1s
# check for new cgroups every = 10s
# use unified cgroups = auto
# max cgroups to allow = 1000
# max cgroups depth to monitor = 0
# enable by default cgroups matching = !*/init.scope !/system.slice/run-*.scope *user.slice/docker-* !*user.slice* *.scope !/machine.slice/*/.control !/machine.slice/*/payload* !/machine.slice/*/supervisor /machine.slice/*.service */kubepods/pod*/* */kubepods/*/pod*/* */*-kubepods-pod*/* */*-kubepods-*-pod*/* !*kubepods* !*kubelet* !*/vcpu* !*/emulator !*.mount !*.partition !*.service !*.service/udev !*.socket !*.slice !*.swap !*.user !/ !/docker !*/libvirt !/lxc !/lxc/*/* !/lxc.monitor* !/lxc.pivot !/lxc.payload !*lxcfs.service/.control !/machine !/qemu !/system !/systemd !/user *
# enable by default cgroups names matching = *
# search for cgroups in subpaths matching = !*/init.scope !*-qemu !*.libvirt-qemu !/init.scope !/system !/systemd !/user !/lxc/*/* !/lxc.monitor !/lxc.payload/*/* !/lxc.payload.* *
# script to get cgroup names = /usr/libexec/netdata/plugins.d/cgroup-name.sh
# script to get cgroup network interfaces = /usr/libexec/netdata/plugins.d/cgroup-network
# run script to rename cgroups matching = !/ !*.mount !*.socket !*.partition /machine.slice/*.service !*.service !*.slice !*.swap !*.user !init.scope !*.scope/vcpu* !*.scope/emulator *.scope *docker* *lxc* *qemu* */kubepods/pod*/* */kubepods/*/pod*/* */*-kubepods-pod*/* */*-kubepods-*-pod*/* !*kubepods* !*kubelet* *.libvirt-qemu *
# cgroups to match as systemd services = !/system.slice/*/*.service /system.slice/*.service
[plugin:proc:diskspace]
# remove charts of unmounted disks = yes
# update every = 1s
# check for new mount points every = 15s
# exclude space metrics on paths = /dev /dev/shm /proc/* /sys/* /var/run/user/* /run/lock /run/user/* /snap/* /var/lib/docker/* /var/lib/containers/storage/* /run/credentials/* /run/containerd/* /rpool /rpool/*
# exclude space metrics on filesystems = *gvfs *gluster* *s3fs *ipfs *davfs2 *httpfs *sshfs *gdfs *moosefs fusectl autofs cgroup cgroup2 hugetlbfs devtmpfs fuse.lxcfs
# exclude inode metrics on filesystems = msdosfs msdos vfat overlayfs aufs* *unionfs
# space usage for all disks = auto
# inodes usage for all disks = auto
[plugin:tc]
# script to run to get tc values = /usr/libexec/netdata/plugins.d/tc-qos-helper.sh
[plugin:python.d]
# update every = 1s
# command options =
[plugin:go.d]
# update every = 1s
# command options =
[plugin:apps]
# update every = 1s
# command options =
[plugin:systemd-journal]
# update every = 1s
# command options =
[plugin:network-viewer]
# update every = 1s
# command options =
[plugin:charts.d]
# update every = 1s
# command options =
[plugin:debugfs]
# update every = 1s
# command options =
[plugin:perf]
# update every = 1s
# command options =
[plugin:ioping]
# update every = 1s
# command options =
[plugin:proc:/proc/net/dev]
# compressed packets for all interfaces = no
# disable by default interfaces matching = lo fireqos* *-ifb fwpr* fwbr* fwln* ifb4*
[plugin:proc:/proc/stat]
# cpu utilization = yes
# per cpu core utilization = no
# cpu interrupts = yes
# context switches = yes
# processes started = yes
# processes running = yes
# keep per core files open = yes
# keep cpuidle files open = yes
# core_throttle_count = auto
# package_throttle_count = no
# cpu frequency = yes
# cpu idle states = no
# core_throttle_count filename to monitor = /host/sys/devices/system/cpu/%s/thermal_throttle/core_throttle_count
# package_throttle_count filename to monitor = /host/sys/devices/system/cpu/%s/thermal_throttle/package_throttle_count
# scaling_cur_freq filename to monitor = /host/sys/devices/system/cpu/%s/cpufreq/scaling_cur_freq
# time_in_state filename to monitor = /host/sys/devices/system/cpu/%s/cpufreq/stats/time_in_state
# schedstat filename to monitor = /host/proc/schedstat
# cpuidle name filename to monitor = /host/sys/devices/system/cpu/cpu%zu/cpuidle/state%zu/name
# cpuidle time filename to monitor = /host/sys/devices/system/cpu/cpu%zu/cpuidle/state%zu/time
# filename to monitor = /host/proc/stat
[plugin:proc:/proc/uptime]
# filename to monitor = /host/proc/uptime
[plugin:proc:/proc/loadavg]
# filename to monitor = /host/proc/loadavg
# enable load average = yes
# enable total processes = yes
[plugin:proc:/proc/sys/fs/file-nr]
# filename to monitor = /host/proc/sys/fs/file-nr
[plugin:proc:/proc/sys/kernel/random/entropy_avail]
# filename to monitor = /host/proc/sys/kernel/random/entropy_avail
[plugin:proc:/proc/pressure]
# base path of pressure metrics = /proc/pressure
# enable cpu some pressure = yes
# enable cpu full pressure = no
# enable memory some pressure = yes
# enable memory full pressure = yes
# enable io some pressure = yes
# enable io full pressure = yes
# enable irq some pressure = no
# enable irq full pressure = yes
[plugin:proc:/proc/interrupts]
# interrupts per core = no
# filename to monitor = /host/proc/interrupts
[plugin:proc:/proc/softirqs]
# interrupts per core = no
# filename to monitor = /host/proc/softirqs
[plugin:proc:/proc/vmstat]
# filename to monitor = /host/proc/vmstat
# swap i/o = auto
# disk i/o = yes
# memory page faults = yes
# out of memory kills = yes
# system-wide numa metric summary = auto
# transparent huge pages = auto
# zswap i/o = auto
# memory ballooning = auto
# kernel same memory = auto
[plugin:proc:/sys/devices/system/node]
# directory to monitor = /host/sys/devices/system/node
# enable per-node numa metrics = auto
[plugin:proc:/proc/meminfo]
# system ram = yes
# system swap = auto
# hardware corrupted ECC = auto
# committed memory = yes
# writeback memory = yes
# kernel memory = yes
# slab memory = yes
# hugepages = auto
# transparent hugepages = auto
# memory reclaiming = yes
# high low memory = yes
# cma memory = auto
# direct maps = yes
# filename to monitor = /host/proc/meminfo
[plugin:proc:/sys/kernel/mm/ksm]
# /sys/kernel/mm/ksm/pages_shared = /host/sys/kernel/mm/ksm/pages_shared
# /sys/kernel/mm/ksm/pages_sharing = /host/sys/kernel/mm/ksm/pages_sharing
# /sys/kernel/mm/ksm/pages_unshared = /host/sys/kernel/mm/ksm/pages_unshared
# /sys/kernel/mm/ksm/pages_volatile = /host/sys/kernel/mm/ksm/pages_volatile
[plugin:proc:/sys/devices/system/edac/mc]
# directory to monitor = /host/sys/devices/system/edac/mc
[plugin:proc:/sys/class/pci/aer]
# enable root ports = no
# enable pci slots = no
[plugin:proc:/proc/net/wireless]
# filename to monitor = /host/proc/net/wireless
# status for all interfaces = auto
# quality for all interfaces = auto
# discarded packets for all interfaces = auto
# missed beacon for all interface = auto
[plugin:proc:/proc/net/sockstat]
# ipv4 sockets = auto
# ipv4 TCP sockets = auto
# ipv4 TCP memory = auto
# ipv4 UDP sockets = auto
# ipv4 UDP memory = auto
# ipv4 UDPLITE sockets = auto
# ipv4 RAW sockets = auto
# ipv4 FRAG sockets = auto
# ipv4 FRAG memory = auto
# update constants every = 1m
# filename to monitor = /host/proc/net/sockstat
[plugin:proc:/proc/net/sockstat6]
# ipv6 TCP sockets = auto
# ipv6 UDP sockets = auto
# ipv6 UDPLITE sockets = auto
# ipv6 RAW sockets = auto
# ipv6 FRAG sockets = auto
# filename to monitor = /host/proc/net/sockstat6
[plugin:proc:/proc/net/netstat]
# bandwidth = auto
# input errors = auto
# multicast bandwidth = auto
# broadcast bandwidth = auto
# multicast packets = auto
# broadcast packets = auto
# ECN packets = auto
# TCP reorders = auto
# TCP SYN cookies = auto
# TCP out-of-order queue = auto
# TCP connection aborts = auto
# TCP memory pressures = auto
# TCP SYN queue = auto
# TCP accept queue = auto
# filename to monitor = /host/proc/net/netstat
[plugin:proc:/proc/net/snmp]
# ipv4 packets = auto
# ipv4 fragments sent = auto
# ipv4 fragments assembly = auto
# ipv4 errors = auto
# ipv4 TCP connections = auto
# ipv4 TCP packets = auto
# ipv4 TCP errors = auto
# ipv4 TCP opens = auto
# ipv4 TCP handshake issues = auto
# ipv4 UDP packets = auto
# ipv4 UDP errors = auto
# ipv4 ICMP packets = auto
# ipv4 ICMP messages = auto
# ipv4 UDPLite packets = auto
# filename to monitor = /host/proc/net/snmp
[plugin:proc:/proc/net/snmp6]
# ipv6 packets = auto
# ipv6 fragments sent = auto
# ipv6 fragments assembly = auto
# ipv6 errors = auto
# ipv6 UDP packets = auto
# ipv6 UDP errors = auto
# ipv6 UDPlite packets = auto
# ipv6 UDPlite errors = auto
# bandwidth = auto
# multicast bandwidth = auto
# broadcast bandwidth = auto
# multicast packets = auto
# icmp = auto
# icmp redirects = auto
# icmp errors = auto
# icmp echos = auto
# icmp group membership = auto
# icmp router = auto
# icmp neighbor = auto
# icmp mldv2 = auto
# icmp types = auto
# ect = auto
# filename to monitor = /host/proc/net/snmp6
[plugin:proc:/proc/net/sctp/snmp]
# established associations = auto
# association transitions = auto
# fragmentation = auto
# packets = auto
# packet errors = auto
# chunk types = auto
# filename to monitor = /host/proc/net/sctp/snmp
[plugin:proc:/proc/net/softnet_stat]
# softnet_stat per core = no
# filename to monitor = /host/proc/net/softnet_stat
[plugin:proc:/proc/net/ip_vs_stats]
# IPVS bandwidth = yes
# IPVS connections = yes
# IPVS packets = yes
# filename to monitor = /host/proc/net/ip_vs_stats
[plugin:proc:/sys/class/infiniband]
# dirname to monitor = /host/sys/class/infiniband
# bandwidth counters = yes
# packets counters = yes
# errors counters = yes
# hardware packets counters = auto
# hardware errors counters = auto
# monitor only active ports = auto
# disable by default interfaces matching =
# refresh ports state every = 30s
[plugin:proc:/proc/net/stat/nf_conntrack]
# filename to monitor = /host/proc/net/stat/nf_conntrack
# netfilter new connections = no
# netfilter connection changes = no
# netfilter connection expectations = no
# netfilter connection searches = no
# netfilter errors = no
# netfilter connections = yes
[plugin:proc:/proc/sys/net/netfilter/nf_conntrack_max]
# filename to monitor = /host/proc/sys/net/netfilter/nf_conntrack_max
# read every seconds = 10
[plugin:proc:/proc/sys/net/netfilter/nf_conntrack_count]
# filename to monitor = /host/proc/sys/net/netfilter/nf_conntrack_count
[plugin:proc:/proc/net/stat/synproxy]
# SYNPROXY cookies = auto
# SYNPROXY SYN received = auto
# SYNPROXY connections reopened = auto
# filename to monitor = /host/proc/net/stat/synproxy
[plugin:proc:/proc/diskstats]
# enable new disks detected at runtime = yes
# performance metrics for physical disks = auto
# performance metrics for virtual disks = auto
# performance metrics for partitions = no
# bandwidth for all disks = auto
# operations for all disks = auto
# merged operations for all disks = auto
# i/o time for all disks = auto
# queued operations for all disks = auto
# utilization percentage for all disks = auto
# extended operations for all disks = auto
# backlog for all disks = auto
# bcache for all disks = auto
# bcache priority stats update every = off
# remove charts of removed disks = yes
# path to get block device = /host/sys/block/%s
# path to get block device bcache = /host/sys/block/%s/bcache
# path to get virtual block device = /host/sys/devices/virtual/block/%s
# path to get block device infos = /host/sys/dev/block/%lu:%lu/%s
# path to device mapper = /host/dev/mapper
# path to /dev/disk = /host/dev/disk
# path to /sys/block = /host/sys/block
# path to /dev/disk/by-label = /host/dev/disk/by-label
# path to /dev/disk/by-id = /host/dev/disk/by-id
# path to /dev/vx/dsk = /host/dev/vx/dsk
# name disks by id = no
# preferred disk ids = *
# exclude disks = loop* ram*
# filename to monitor = /host/proc/diskstats
# performance metrics for disks with major 252 = yes
[plugin:proc:/proc/mdstat]
# faulty devices = yes
# nonredundant arrays availability = yes
# mismatch count = auto
# disk stats = yes
# operation status = yes
# make charts obsolete = yes
# filename to monitor = /host/proc/mdstat
# mismatch_cnt filename to monitor = /host/sys/block/%s/md/mismatch_cnt
[plugin:proc:/proc/net/rpc/nfsd]
# filename to monitor = /host/proc/net/rpc/nfsd
[plugin:proc:/proc/net/rpc/nfs]
# filename to monitor = /host/proc/net/rpc/nfs
[plugin:proc:/proc/spl/kstat/zfs/arcstats]
# filename to monitor = /host/proc/spl/kstat/zfs/arcstats
[plugin:proc:/sys/fs/btrfs]
# path to monitor = /host/sys/fs/btrfs
# check for btrfs changes every = 1m
# physical disks allocation = auto
# data allocation = auto
# metadata allocation = auto
# system allocation = auto
# commit stats = auto
# error stats = auto
[plugin:proc:ipc]
# message queues = yes
# semaphore totals = yes
# shared memory totals = yes
# msg filename to monitor = /host/proc/sysvipc/msg
# shm filename to monitor = /host/proc/sysvipc/shm
# max dimensions in memory allowed = 50
[plugin:proc:/sys/class/power_supply]
# battery capacity = yes
# battery power = yes
# battery charge = no
# battery energy = no
# power supply voltage = no
# keep files open = auto
# directory to monitor = /host/sys/class/power_supply
[plugin:proc:/sys/class/drm]
# directory to monitor = /host/sys/class/drm


@ -0,0 +1,25 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="outline_postgres_${TIMESTAMP}.sql.gz"
echo "Outline: backing up PostgreSQL database"
docker compose --file "{{ base_dir }}/docker-compose.yml" exec \
outline_postgres \
pg_dump \
-U "{{ outline_postgres_user }}" \
"{{ outline_postgres_database }}" \
| gzip > "{{ postgres_backups_dir }}/${BACKUP_FILE}"
echo "Outline: PostgreSQL backup saved to {{ postgres_backups_dir }}/${BACKUP_FILE}"
echo "Outline: removing old backups"
# Keep only the 3 most recent backups
keep-files.py "{{ postgres_backups_dir }}" --keep 3
echo "Outline: backup completed successfully."
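`keep-files.py` itself is not part of this diff; a pure-shell sketch of the retention it presumably implements — keep the 3 newest backups, delete the rest, relying on the `%Y%m%d_%H%M%S` timestamp making filenames sort chronologically:

```shell
dir="$(mktemp -d)"
for i in 1 2 3 4 5; do touch "$dir/outline_postgres_2025010${i}_000000.sql.gz"; done

# Names sort chronologically thanks to the timestamp format, so everything
# except the last three entries (the newest) is deleted. head -n -3 is GNU-specific.
ls -1 "$dir" | sort | head -n -3 | while read -r f; do rm "$dir/$f"; done
ls -1 "$dir" | wc -l
```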


@ -0,0 +1,81 @@
services:
# See sample https://github.com/outline/outline/blob/main/.env.sample
outline_app:
image: outlinewiki/outline:0.84.0
container_name: outline_app
restart: unless-stopped
depends_on:
- outline_postgres
- outline_redis
ports:
- "127.0.0.1:{{ outline_port }}:3000"
networks:
- "outline_network"
- "{{ web_proxy_network }}"
environment:
NODE_ENV: 'production'
URL: 'https://outline.vakhrushev.me'
FORCE_HTTPS: 'true'
SECRET_KEY: '{{ outline_secret_key }}'
UTILS_SECRET: '{{ outline_utils_secret }}'
DATABASE_URL: 'postgres://{{ outline_postgres_user }}:{{ outline_postgres_password }}@outline_postgres:5432/{{ outline_postgres_database }}'
PGSSLMODE: 'disable'
REDIS_URL: 'redis://outline_redis:6379'
FILE_STORAGE: 's3'
FILE_STORAGE_UPLOAD_MAX_SIZE: '262144000'
AWS_ACCESS_KEY_ID: '{{ outline_s3_access_key }}'
AWS_SECRET_ACCESS_KEY: '{{ outline_s3_secret_key }}'
AWS_REGION: '{{ outline_s3_region }}'
AWS_S3_ACCELERATE_URL: ''
AWS_S3_UPLOAD_BUCKET_URL: '{{ outline_s3_url }}'
AWS_S3_UPLOAD_BUCKET_NAME: '{{ outline_s3_bucket }}'
AWS_S3_FORCE_PATH_STYLE: 'true'
AWS_S3_ACL: 'private'
OIDC_CLIENT_ID: '{{ outline_oidc_client_id | replace("$", "$$") }}'
OIDC_CLIENT_SECRET: '{{ outline_oidc_client_secret | replace("$", "$$") }}'
OIDC_AUTH_URI: 'https://auth.vakhrushev.me/api/oidc/authorization'
OIDC_TOKEN_URI: 'https://auth.vakhrushev.me/api/oidc/token'
OIDC_USERINFO_URI: 'https://auth.vakhrushev.me/api/oidc/userinfo'
OIDC_LOGOUT_URI: 'https://auth.vakhrushev.me/logout'
OIDC_USERNAME_CLAIM: 'email'
OIDC_SCOPES: 'openid profile email'
OIDC_DISPLAY_NAME: 'Authelia'
SMTP_HOST: '{{ postbox_host }}'
SMTP_PORT: '{{ postbox_port }}'
SMTP_USERNAME: '{{ postbox_user }}'
SMTP_PASSWORD: '{{ postbox_pass }}'
SMTP_FROM_EMAIL: 'outline@vakhrushev.me'
SMTP_TLS_CIPHERS: 'TLSv1.2'
SMTP_SECURE: 'false'
outline_redis:
image: valkey/valkey:8.1.1-alpine
container_name: outline_redis
restart: unless-stopped
networks:
- "outline_network"
outline_postgres:
image: postgres:16.3-bookworm
container_name: outline_postgres
restart: unless-stopped
volumes:
- {{ postgres_data_dir }}:/var/lib/postgresql/data
networks:
- "outline_network"
environment:
POSTGRES_USER: '{{ outline_postgres_user }}'
POSTGRES_PASSWORD: '{{ outline_postgres_password }}'
POSTGRES_DB: '{{ outline_postgres_database }}'
networks:
outline_network:
driver: bridge
{{ web_proxy_network }}:
external: true
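The `replace("$", "$$")` filter on the OIDC credentials exists because Docker Compose treats `$` as the start of a variable interpolation, so a literal dollar sign in a compose file must be written as `$$`. The same doubling, sketched with `sed` on an illustrative secret:

```shell
# A secret containing "$" must have each dollar doubled before it lands in
# docker-compose.yml -- exactly what the Jinja replace filter does.
printf '%s' 'cli$ent$ecret' | sed 's/\$/$$/g'
```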


@ -1,26 +0,0 @@
$ANSIBLE_VAULT;1.1;AES256
66626231663733396232343163306138366434663364373937396137313134373033626539356166
3038316664383731623635336233393566636234636532630a393234336561613133373662383161
33653330663364363832346331653037663363643238326334326431336331373936666162363561
3064656630666431330a626430353063313866663730663236343437356661333164653636376538
62303164393766363933336163386663333030336132623661346565333861313537333566346563
32666436383335353866396539663936376134653762613137343035376639376135616334326161
62343366313032306664303030323433666230333665386630383635633863303366313639616462
38643466356666653337383833366565633932613539666563653634643063663166623337303865
64303365373932346233653237626363363964366431663966393937343966633735356563373735
66366464346436303036383161316466323639396162346537653134626663303662326462656563
63343065323636643266396532333331333137303131373633653233333837656665346635373564
62613733613634356335636663336634323463376266373665306232626330363132313362373032
30613366626563383236636262656135613431343639633339336135353362373665326264633438
65306539663166623533336531356639306235346566313764343835643437663963613639326430
36303031346339366561366166386532373838623635663837663466643032653930613635666237
38313235343662623733613637616164366134613635343135646439623464623233303330333361
62623166376337343838636564383633646432653436646236363262316438613333616236656532
37336539343130343133626262616634303561326631363564353064336130613666353531646237
66373036363764653435326638313036653135396362666439623431313930633539613965333263
39383937616165333962366134343936323930386233356662303864643236396562313339313739
64303934336164333563623263323236663531613265383833336239306435333735396666633666
30663566653361343238306133613839333962373838623633363138353331616264363064316433
36663233643134353333623264643238396438366633376530336134313365323832346663316535
66653436323338636565303133316637353338346366633564306230386632373235653836626338
3935


@ -0,0 +1,12 @@
services:
rssbridge_app:
image: rssbridge/rss-bridge:2025-06-03
container_name: rssbridge_app
restart: unless-stopped
networks:
- "{{ web_proxy_network }}"
networks:
{{ web_proxy_network }}:
external: true

files/wakapi/backup.sh.j2

@ -0,0 +1,10 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
echo "{{ app_name }}: backup data with gobackups"
(cd "{{ base_dir }}" && gobackup perform --config "{{ gobackup_config }}")
echo "{{ app_name }}: done."


@ -0,0 +1,32 @@
# See https://github.com/muety/wakapi for available image versions
services:
wakapi_app:
image: ghcr.io/muety/wakapi:2.14.0
container_name: wakapi_app
restart: unless-stopped
user: '{{ user_create_result.uid }}:{{ user_create_result.group }}'
networks:
- "{{ web_proxy_network }}"
volumes:
- "{{ data_dir }}:/data"
environment:
WAKAPI_PUBLIC_URL: "https://wakapi.vakhrushev.me"
WAKAPI_PASSWORD_SALT: "{{ wakapi_password_salt }}"
WAKAPI_ALLOW_SIGNUP: "false"
WAKAPI_DISABLE_FRONTPAGE: "true"
WAKAPI_COOKIE_MAX_AGE: 31536000
# Mail
WAKAPI_MAIL_SENDER: "Wakapi <wakapi@vakhrushev.me>"
WAKAPI_MAIL_PROVIDER: "smtp"
WAKAPI_MAIL_SMTP_HOST: "{{ postbox_host }}"
WAKAPI_MAIL_SMTP_PORT: "{{ postbox_port }}"
WAKAPI_MAIL_SMTP_USER: "{{ postbox_user }}"
WAKAPI_MAIL_SMTP_PASS: "{{ postbox_pass }}"
WAKAPI_MAIL_SMTP_TLS: "false"
networks:
{{ web_proxy_network }}:
external: true
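`WAKAPI_COOKIE_MAX_AGE: 31536000` is one non-leap year expressed in seconds:

```shell
# 365 days * 24 hours * 60 minutes * 60 seconds
echo $((365 * 24 * 60 * 60))
```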


@ -0,0 +1,16 @@
# https://gobackup.github.io/configuration
models:
wakapi:
compress_with:
type: 'tgz'
storages:
local:
type: 'local'
path: '{{ backups_dir }}'
keep: 3
databases:
wakapi:
type: sqlite
path: "{{ (data_dir, 'wakapi.db') | path_join }}"


@ -1,5 +1,6 @@
#!/usr/bin/env sh
# Must be executed for every user
# See https://cloud.yandex.ru/docs/container-registry/tutorials/run-docker-on-vm#run
set -eu


@ -1 +0,0 @@
192.168.50.10


@ -1,51 +0,0 @@
---
- name: 'Configure gramps application'
hosts: all
vars_files:
- vars/ports.yml
- vars/vars.yml
vars:
app_name: 'gramps'
base_dir: '/home/major/applications/{{ app_name }}/'
tasks:
- name: 'Create application directories'
ansible.builtin.file:
path: '{{ item }}'
state: 'directory'
mode: '0755'
loop:
- '{{ base_dir }}'
- '{{ (base_dir, "data") | path_join }}'
- name: 'Copy application files'
ansible.builtin.copy:
src: '{{ item }}'
dest: '{{ base_dir }}'
mode: '0644'
loop:
- './files/apps/{{ app_name }}/docker-compose.yml'
- name: 'Set up environment variables for application'
ansible.builtin.template:
src: 'env.j2'
dest: '{{ (base_dir, ".env") | path_join }}'
mode: '0644'
vars:
env_dict:
WEB_SERVER_PORT: '{{ gramps_port }}'
SECRET_KEY: '{{ gramps.secret_key }}'
AWS_ACCESS_KEY_ID: '{{ gramps.aws_access_key_id }}'
AWS_SECRET_ACCESS_KEY: '{{ gramps.aws_secret_access_key }}'
POSTBOX_HOST: '{{ postbox.host }}'
POSTBOX_PORT: '{{ postbox.port }}'
POSTBOX_USER: '{{ postbox.user }}'
POSTBOX_PASS: '{{ postbox.pass }}'
- name: 'Run application with docker compose'
community.docker.docker_compose_v2:
project_src: '{{ base_dir }}'
state: 'present'


@ -1,65 +0,0 @@
---
- name: 'Deploy homepage application'
hosts: all
vars_files:
- vars/ports.yml
- vars/vars.yml
vars:
app_name: 'homepage'
base_dir: '/home/major/applications/{{ app_name }}/'
docker_registry_prefix: 'cr.yandex/crplfk0168i4o8kd7ade'
homepage_web_image: '{{ homepage_web_image | default(omit) }}'
tasks:
- name: 'Check if web service image is passed'
ansible.builtin.assert:
that:
- 'homepage_web_image is defined'
fail_msg: 'You must pass variable "homepage_web_image"'
- name: 'Create full image name with container registry'
ansible.builtin.set_fact:
registry_homepage_web_image: '{{ (docker_registry_prefix, homepage_web_image) | path_join }}'
- name: 'Push web service image to remote registry'
community.docker.docker_image:
state: present
source: local
name: '{{ homepage_web_image }}'
repository: '{{ registry_homepage_web_image }}'
push: true
delegate_to: 127.0.0.1
- name: 'Create application directories'
ansible.builtin.file:
path: '{{ item }}'
state: 'directory'
mode: '0755'
loop:
- '{{ base_dir }}'
- name: 'Copy application files'
ansible.builtin.copy:
src: '{{ item }}'
dest: '{{ base_dir }}'
mode: '0644'
loop:
- './files/apps/{{ app_name }}/docker-compose.yml'
- name: 'Set up environment variables for application'
ansible.builtin.template:
src: 'env.j2'
dest: '{{ (base_dir, ".env") | path_join }}'
mode: '0644'
vars:
env_dict:
WEB_SERVICE_IMAGE: '{{ registry_homepage_web_image }}'
WEB_SERVICE_PORT: '{{ homepage_port }}'
- name: 'Run application with docker compose'
community.docker.docker_compose_v2:
project_src: '{{ base_dir }}'
state: 'present'

68
playbook-authelia.yml Normal file
View File

@ -0,0 +1,68 @@
---
- name: "Configure authelia application"
hosts: all
vars_files:
- vars/ports.yml
- vars/secrets.yml
vars:
app_name: "authelia"
app_user: "{{ app_name }}"
base_dir: "/home/{{ app_user }}"
config_dir: "{{ (base_dir, 'config') | path_join }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_extra_groups: ["docker"]
- name: "Create internal application directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0700"
loop:
- "{{ config_dir }}"
- name: "Copy configuration files"
ansible.builtin.copy:
src: "files/{{ app_name }}/{{ item }}"
dest: "{{ (config_dir, item) | path_join }}"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0600"
loop:
- "users.yml"
- name: "Copy configuration files (templates)"
ansible.builtin.template:
src: "files/{{ app_name }}/configuration.yml.j2"
dest: "{{ (config_dir, 'configuration.yml') | path_join }}"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0600"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.yml.j2"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true
- name: "Restart application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "restarted"

53
playbook-backups.yml Normal file
View File

@ -0,0 +1,53 @@
---
- name: "Configure restic and backup schedule"
hosts: all
vars_files:
- vars/secrets.yml
vars:
restic_shell_script: "{{ (bin_prefix, 'restic-shell.sh') | path_join }}"
backup_all_script: "{{ (bin_prefix, 'backup-all.sh') | path_join }}"
tasks:
- name: "Copy restic shell script"
ansible.builtin.template:
src: "files/backups/restic-shell.sh.j2"
dest: "{{ restic_shell_script }}"
owner: root
group: root
mode: "0700"
- name: "Copy backup all script"
ansible.builtin.template:
src: "files/backups/backup-all.sh.j2"
dest: "{{ backup_all_script }}"
owner: root
group: root
mode: "0700"
- name: "Setup paths for backup cron file"
ansible.builtin.cron:
cron_file: "ansible_restic_backup"
user: "root"
env: true
name: "PATH"
job: "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin"
- name: "Setup mail for backup cron file"
ansible.builtin.cron:
cron_file: "ansible_restic_backup"
user: "root"
env: true
name: "MAILTO"
job: ""
- name: "Creates a cron file for backups under /etc/cron.d"
ansible.builtin.cron:
name: "restic backup"
minute: "0"
hour: "1"
job: "{{ backup_all_script }} 2>&1 | logger -t backup"
cron_file: "ansible_restic_backup"
user: "root"
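
Taken together, the three cron tasks above produce a single `/etc/cron.d/ansible_restic_backup` file. A sketch of what the cron module would write (the script path is a placeholder; the real one depends on `bin_prefix`):

```text
#Ansible: PATH
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
#Ansible: MAILTO
MAILTO=""
#Ansible: restic backup
0 1 * * * root /usr/local/bin/backup-all.sh 2>&1 | logger -t backup
```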

View File

@ -1,27 +0,0 @@
---
- name: 'Install and configure Caddy server'
hosts: all
vars_files:
- vars/ports.yml
- vars/vars.yml
tasks:
- name: 'Ensure networkd service is started (required by Caddy).'
ansible.builtin.systemd:
name: systemd-networkd
state: started
enabled: true
- name: 'Install and configure Caddy server'
ansible.builtin.import_role:
name: caddy_ansible.caddy_ansible
vars:
caddy_github_token: '{{ caddy_vars.github_token }}'
caddy_config: '{{ lookup("template", "templates/Caddyfile.j2") }}'
caddy_setcap: true
caddy_systemd_capabilities_enabled: true
caddy_systemd_capabilities: "CAP_NET_BIND_SERVICE"
# Change this to true to update Caddy
caddy_update: false

72
playbook-caddyproxy.yml Normal file
View File

@ -0,0 +1,72 @@
---
- name: "Configure caddy reverse proxy service"
hosts: all
vars_files:
- vars/ports.yml
- vars/secrets.yml
vars:
app_name: "caddyproxy"
app_user: "{{ app_name }}"
base_dir: "/home/{{ app_user }}"
data_dir: "{{ (base_dir, 'data') | path_join }}"
config_dir: "{{ (base_dir, 'config') | path_join }}"
caddy_file_dir: "{{ (base_dir, 'caddy_file') | path_join }}"
service_name: "{{ app_name }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_extra_groups:
- "docker"
- name: "Create internal application directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0770"
loop:
- "{{ data_dir }}"
- "{{ config_dir }}"
- "{{ caddy_file_dir }}"
- name: "Copy caddy file"
ansible.builtin.template:
src: "./files/{{ app_name }}/Caddyfile.j2"
dest: "{{ (caddy_file_dir, 'Caddyfile') | path_join }}"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.yml.j2"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true
# - name: "Reload caddy"
# community.docker.docker_compose_v2_exec:
# project_src: '{{ base_dir }}'
# service: "{{ service_name }}"
# command: caddy reload --config /etc/caddy/Caddyfile
- name: "Restart application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "restarted"
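
The playbook above only ships a templated `Caddyfile`; the template itself is not part of this diff. As a hypothetical illustration (domains and upstream names are placeholders, not from the repo), a reverse-proxy Caddyfile of the kind this playbook deploys might look like:

```text
# Hypothetical Caddyfile sketch — real hosts and ports come from Caddyfile.j2
git.example.com {
	reverse_proxy gitea:3000
}

wiki.example.com {
	reverse_proxy outline:3000
}
```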

View File

@ -1,99 +0,0 @@
---
- hosts: all
vars_files:
- vars/ports.yml
- vars/vars.yml
tasks:
# Applications
- import_role:
name: docker-app
vars:
username: gitea
extra_groups:
- docker
ssh_keys:
- '{{ lookup("file", "files/av_id_rsa.pub") }}'
env:
PROJECT_NAME: gitea
DOCKER_PREFIX: gitea
IMAGE_PREFIX: gitea
CONTAINER_PREFIX: gitea
WEB_SERVER_PORT: '127.0.0.1:{{ gitea_port }}'
USER_UID: '{{ uc_result.uid }}'
USER_GID: '{{ uc_result.group }}'
tags:
- apps
- import_role:
name: docker-app
vars:
username: keycloak
extra_groups:
- docker
ssh_keys:
- '{{ lookup("file", "files/av_id_rsa.pub") }}'
env:
PROJECT_NAME: keycloak
DOCKER_PREFIX: keycloak
IMAGE_PREFIX: keycloak
CONTAINER_PREFIX: keycloak
WEB_SERVER_PORT: '127.0.0.1:{{ keycloak_port }}'
KEYCLOAK_ADMIN: '{{ keycloak.admin_login }}'
KEYCLOAK_ADMIN_PASSWORD: '{{ keycloak.admin_password }}'
USER_UID: '{{ uc_result.uid }}'
USER_GID: '{{ uc_result.group }}'
tags:
- apps
- import_role:
name: docker-app
vars:
username: outline
extra_groups:
- docker
ssh_keys:
- '{{ lookup("file", "files/av_id_rsa.pub") }}'
env:
PROJECT_NAME: outline
DOCKER_PREFIX: outline
IMAGE_PREFIX: outline
CONTAINER_PREFIX: outline
WEB_SERVER_PORT: '127.0.0.1:{{ outline_port }}'
USER_UID: '{{ uc_result.uid }}'
USER_GID: '{{ uc_result.group }}'
# Postgres
POSTGRES_USER: '{{ outline.postgres_user }}'
POSTGRES_PASSWORD: '{{ outline.postgres_password }}'
POSTGRES_DB: 'outline'
# See sample https://github.com/outline/outline/blob/main/.env.sample
NODE_ENV: 'production'
SECRET_KEY: '{{ outline.secret_key }}'
UTILS_SECRET: '{{ outline.utils_secret }}'
DATABASE_URL: 'postgres://{{ outline.postgres_user }}:{{ outline.postgres_password }}@postgres:5432/outline'
PGSSLMODE: 'disable'
REDIS_URL: 'redis://redis:6379'
URL: 'https://outline.vakhrushev.me'
FILE_STORAGE: 's3'
AWS_ACCESS_KEY_ID: '{{ outline.s3_access_key }}'
AWS_SECRET_ACCESS_KEY: '{{ outline.s3_secret_key }}'
AWS_REGION: 'ru-central1'
AWS_S3_ACCELERATE_URL: ''
AWS_S3_UPLOAD_BUCKET_URL: 'https://storage.yandexcloud.net'
AWS_S3_UPLOAD_BUCKET_NAME: 'av-outline-wiki'
AWS_S3_FORCE_PATH_STYLE: 'true'
AWS_S3_ACL: 'private'
OIDC_CLIENT_ID: '{{ outline.oidc_client_id }}'
OIDC_CLIENT_SECRET: '{{ outline.oidc_client_secret }}'
OIDC_AUTH_URI: 'https://kk.vakhrushev.me/realms/outline/protocol/openid-connect/auth'
OIDC_TOKEN_URI: 'https://kk.vakhrushev.me/realms/outline/protocol/openid-connect/token'
OIDC_USERINFO_URI: 'https://kk.vakhrushev.me/realms/outline/protocol/openid-connect/userinfo'
OIDC_LOGOUT_URI: 'https://kk.vakhrushev.me/realms/outline/protocol/openid-connect/logout'
OIDC_USERNAME_CLAIM: 'email'
OIDC_DISPLAY_NAME: 'KK'
tags:
- apps

View File

@ -1,22 +1,21 @@
 ---
-- name: 'Configure docker parameters'
+- name: "Configure docker parameters"
   hosts: all
   vars_files:
     - vars/ports.yml
-    - vars/vars.yml
+    - vars/secrets.yml
   tasks:
-    - name: 'Install python docker lib from pip'
+    - name: "Install python docker lib from pip"
       ansible.builtin.pip:
         name: docker
-    - name: 'Install docker'
+    - name: "Install docker"
       ansible.builtin.import_role:
         name: geerlingguy.docker
       vars:
-        docker_edition: 'ce'
+        docker_edition: "ce"
         docker_packages:
           - "docker-{{ docker_edition }}"
           - "docker-{{ docker_edition }}-cli"
@ -24,6 +23,11 @@
         docker_users:
           - major
-    - name: 'Login to yandex docker registry.'
+    - name: "Login to yandex docker registry."
       ansible.builtin.script:
-        cmd: 'files/yandex-docker-registry-auth.sh'
+        cmd: "files/yandex-docker-registry-auth.sh"
+    - name: Create a network for web proxy
+      community.docker.docker_network:
+        name: "{{ web_proxy_network }}"
+        driver: "bridge"

View File

@ -1,26 +1,46 @@
 ---
-- name: 'Install eget'
+- name: "Install eget"
   hosts: all
   vars_files:
     - vars/ports.yml
-    - vars/vars.yml
+    - vars/secrets.yml
+  # See: https://github.com/zyedidia/eget/releases
+  vars:
+    eget_install_dir: "{{ bin_prefix }}"
+    eget_bin_path: '{{ (eget_install_dir, "eget") | path_join }}'
   tasks:
-    - name: 'Install eget'
+    - name: "Install eget"
       ansible.builtin.import_role:
         name: eget
       vars:
-        eget_version: '1.3.4'
-        eget_install_path: '/usr/bin/eget'
+        eget_version: "1.3.4"
+        eget_install_path: "{{ eget_bin_path }}"
-    - name: 'Install rclone with eget'
+    - name: "Install rclone"
       ansible.builtin.command:
-        cmd: '/usr/bin/eget rclone/rclone --quiet --upgrade-only --to /usr/bin --tag v1.68.2 --asset zip'
+        cmd: "{{ eget_bin_path }} rclone/rclone --quiet --upgrade-only --to {{ eget_install_dir }} --asset zip --tag v1.69.2"
       changed_when: false
-    - name: 'Install btop with eget'
+    - name: "Install btop"
       ansible.builtin.command:
-        cmd: '/usr/bin/eget aristocratos/btop --quiet --upgrade-only --to /usr/bin --tag v1.4.0'
+        cmd: "{{ eget_bin_path }} aristocratos/btop --quiet --upgrade-only --to {{ eget_install_dir }} --tag v1.4.2"
       changed_when: false
+    - name: "Install restic"
+      ansible.builtin.command:
+        cmd: "{{ eget_bin_path }} restic/restic --quiet --upgrade-only --to {{ eget_install_dir }} --tag v0.18.0"
+      changed_when: false
+    - name: "Install gobackup"
+      ansible.builtin.command:
+        cmd: "{{ eget_bin_path }} gobackup/gobackup --quiet --upgrade-only --to {{ eget_install_dir }} --tag v2.14.0"
+      changed_when: false
+    - name: "Install task"
+      ansible.builtin.command:
+        cmd: "{{ eget_bin_path }} go-task/task --quiet --upgrade-only --to {{ eget_install_dir }} --asset tar.gz --tag v3.43.3"
+      changed_when: false

58
playbook-gitea.yml Normal file
View File

@ -0,0 +1,58 @@
---
- name: "Configure gitea application"
hosts: all
vars_files:
- vars/ports.yml
- vars/secrets.yml
vars:
app_name: "gitea"
app_user: "{{ app_name }}"
base_dir: "/home/{{ app_user }}"
data_dir: "{{ (base_dir, 'data') | path_join }}"
backups_dir: "{{ (base_dir, 'backups') | path_join }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_extra_groups:
- "docker"
owner_ssh_keys:
- "{{ lookup('file', 'files/av_id_rsa.pub') }}"
- name: "Create internal application directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0770"
loop:
- "{{ data_dir }}"
- "{{ backups_dir }}"
- name: "Copy backup script"
ansible.builtin.template:
src: "files/{{ app_name }}/backup.sh.j2"
dest: "{{ base_dir }}/backup.sh"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.yml.j2"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true

67
playbook-gramps.yml Normal file
View File

@ -0,0 +1,67 @@
---
- name: "Configure gramps application"
hosts: all
vars_files:
- vars/ports.yml
- vars/secrets.yml
vars:
app_name: "gramps"
app_user: "{{ app_name }}"
base_dir: "/home/{{ app_user }}"
data_dir: "{{ (base_dir, 'data') | path_join }}"
backups_dir: "{{ (base_dir, 'backups') | path_join }}"
gobackup_config: "{{ (base_dir, 'gobackup.yml') | path_join }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_extra_groups:
- "docker"
owner_ssh_keys:
- "{{ lookup('file', 'files/av_id_rsa.pub') }}"
- name: "Create application internal directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
loop:
- "{{ data_dir }}"
- "{{ backups_dir }}"
- name: "Copy gobackup config"
ansible.builtin.template:
src: "./files/{{ app_name }}/gobackup.yml.j2"
dest: "{{ gobackup_config }}"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Copy backup script"
ansible.builtin.template:
src: "files/{{ app_name }}/backup.sh.j2"
dest: "{{ base_dir }}/backup.sh"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.yml.j2"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true

67
playbook-homepage.yml Normal file
View File

@ -0,0 +1,67 @@
---
# Play 1: Setup environment for the application
- name: "Setup environment for homepage application"
hosts: all
vars_files:
- vars/ports.yml
- vars/secrets.yml
- vars/homepage.yml
tags:
- setup
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_extra_groups:
- "docker"
owner_ssh_keys:
- "{{ lookup('file', 'files/av_id_rsa.pub') }}"
- name: "Login to yandex docker registry."
ansible.builtin.script:
cmd: "files/yandex-docker-registry-auth.sh"
# Play 2: Deploy the application
- name: "Deploy homepage application"
hosts: all
vars_files:
- vars/ports.yml
- vars/secrets.yml
- vars/homepage.yml
tags:
- deploy
tasks:
- name: "Check if web service image is passed"
ansible.builtin.assert:
that:
- "homepage_web_image is defined"
fail_msg: 'You must pass variable "homepage_web_image"'
- name: "Create full image name with container registry"
ansible.builtin.set_fact:
registry_homepage_web_image: "{{ (docker_registry_prefix, homepage_web_image) | path_join }}"
- name: "Push web service image to remote registry"
community.docker.docker_image:
state: present
source: local
name: "{{ homepage_web_image }}"
repository: "{{ registry_homepage_web_image }}"
push: true
delegate_to: 127.0.0.1
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.yml.j2"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true

55
playbook-miniflux.yml Normal file
View File

@ -0,0 +1,55 @@
---
- name: "Configure miniflux application"
hosts: all
vars_files:
- vars/ports.yml
- vars/secrets.yml
vars:
app_name: "miniflux"
app_user: "{{ app_name }}"
base_dir: "/home/{{ app_user }}"
data_dir: "{{ (base_dir, 'data') | path_join }}"
postgres_data_dir: "{{ (base_dir, 'data', 'postgres') | path_join }}"
postgres_backups_dir: "{{ (base_dir, 'backups', 'postgres') | path_join }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_extra_groups: ["docker"]
- name: "Create internal directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0770"
loop:
- "{{ postgres_backups_dir }}"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.yml.j2"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Copy backup script"
ansible.builtin.template:
src: "./files/{{ app_name }}/backup.sh.j2"
dest: "{{ base_dir }}/backup.sh"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true

View File

@ -1,17 +1,87 @@
 ---
-- name: 'Install Netdata monitoring service'
+- name: "Install Netdata monitoring service"
   hosts: all
   vars_files:
     - vars/ports.yml
-    - vars/vars.yml
+    - vars/secrets.yml
+  vars:
+    app_name: "netdata"
+    app_user: "{{ app_name }}"
+    base_dir: "/home/{{ app_user }}"
+    config_dir: "{{ (base_dir, 'config') | path_join }}"
+    config_go_d_dir: "{{ (config_dir, 'go.d') | path_join }}"
+    data_dir: "{{ (base_dir, 'data') | path_join }}"
   tasks:
-    - name: 'Install Netdata from role'
+    - name: "Create user and environment"
       ansible.builtin.import_role:
-        name: netdata
+        name: owner
       vars:
-        netdata_version: 'v2.2.0'
-        netdata_exposed_port: '{{ netdata_port }}'
-      tags:
-        - monitoring
+        owner_name: "{{ app_user }}"
+        owner_extra_groups: ["docker"]
+    - name: "Create internal application directories"
+      ansible.builtin.file:
+        path: "{{ item }}"
+        state: "directory"
+        owner: "{{ app_user }}"
+        group: "{{ app_user }}"
+        mode: "0770"
+      loop:
+        - "{{ config_dir }}"
+        - "{{ config_go_d_dir }}"
+        - "{{ data_dir }}"
+    - name: "Copy netdata config file"
+      ansible.builtin.template:
+        src: "files/{{ app_name }}/netdata.conf.j2"
+        dest: "{{ config_dir }}/netdata.conf"
+        owner: "{{ app_user }}"
+        group: "{{ app_user }}"
+        mode: "0640"
+    - name: "Copy prometheus plugin config file"
+      ansible.builtin.copy:
+        src: "files/{{ app_name }}/go.d/prometheus.conf"
+        dest: "{{ config_go_d_dir }}/prometheus.conf"
+        owner: "{{ app_user }}"
+        group: "{{ app_user }}"
+        mode: "0640"
+    - name: "Copy fail2ban plugin config file"
+      ansible.builtin.copy:
+        src: "files/{{ app_name }}/go.d/fail2ban.conf"
+        dest: "{{ config_go_d_dir }}/fail2ban.conf"
+        owner: "{{ app_user }}"
+        group: "{{ app_user }}"
+        mode: "0640"
+    - name: "Grab docker group id."
+      ansible.builtin.shell:
+        cmd: |
+          set -o pipefail
+          grep docker /etc/group | cut -d ':' -f 3
+        executable: /bin/bash
+      register: netdata_docker_group_output
+      changed_when: netdata_docker_group_output.rc != 0
+    - name: "Copy docker compose file"
+      ansible.builtin.template:
+        src: "./files/{{ app_name }}/docker-compose.yml.j2"
+        dest: "{{ base_dir }}/docker-compose.yml"
+        owner: "{{ app_user }}"
+        group: "{{ app_user }}"
+        mode: "0640"
+    - name: "Run application with docker compose"
+      community.docker.docker_compose_v2:
+        project_src: "{{ base_dir }}"
+        state: "present"
+        remove_orphans: true
+    - name: "Restart application with docker compose"
+      community.docker.docker_compose_v2:
+        project_src: "{{ base_dir }}"
+        state: "restarted"

58
playbook-outline.yml Normal file
View File

@ -0,0 +1,58 @@
---
- name: "Configure outline application"
hosts: all
vars_files:
- vars/ports.yml
- vars/secrets.yml
vars:
app_name: "outline"
app_user: "{{ app_name }}"
base_dir: "/home/{{ app_user }}"
data_dir: "{{ (base_dir, 'data') | path_join }}"
postgres_data_dir: "{{ (base_dir, 'data', 'postgres') | path_join }}"
postgres_backups_dir: "{{ (base_dir, 'backups', 'postgres') | path_join }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_extra_groups:
- "docker"
owner_ssh_keys:
- "{{ lookup('file', 'files/av_id_rsa.pub') }}"
- name: "Create internal directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0770"
loop:
- "{{ postgres_backups_dir }}"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.yml.j2"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Copy backup script"
ansible.builtin.template:
src: "./files/{{ app_name }}/backup.sh.j2"
dest: "{{ base_dir }}/backup.sh"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true

View File

@ -1,27 +1,32 @@
 ---
-- name: 'Update and upgrade system packages'
+- name: "Update and upgrade system packages"
   hosts: all
   vars_files:
     - vars/ports.yml
-    - vars/vars.yml
+    - vars/secrets.yml
   vars:
-    user_name: '<put-name-here>'
+    user_name: "<put-name-here>"
   tasks:
     - name: 'Remove user "{{ user_name }}"'
       ansible.builtin.user:
-        name: '{{ user_name }}'
+        name: "{{ user_name }}"
         state: absent
         remove: true
     - name: 'Remove group "{{ user_name }}"'
       ansible.builtin.group:
-        name: '{{ user_name }}'
+        name: "{{ user_name }}"
         state: absent
-    - name: 'Remove web dir'
+    - name: "Remove web dir"
       ansible.builtin.file:
-        path: '/var/www/{{ user_name }}'
+        path: "/var/www/{{ user_name }}"
+        state: absent
+    - name: "Remove home dir"
+      ansible.builtin.file:
+        path: "/home/{{ user_name }}"
         state: absent

34
playbook-rssbridge.yml Normal file
View File

@ -0,0 +1,34 @@
---
- name: "Configure rssbridge application"
hosts: all
vars_files:
- vars/ports.yml
- vars/secrets.yml
vars:
app_name: "rssbridge"
app_user: "{{ app_name }}"
base_dir: "/home/{{ app_user }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_extra_groups: ["docker"]
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.yml.j2"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true

View File

@ -1,10 +1,10 @@
 ---
-- name: 'Configure base system parameters'
+- name: "Configure base system parameters"
   hosts: all
   vars_files:
     - vars/ports.yml
-    - vars/vars.yml
+    - vars/secrets.yml
   vars:
     apt_packages:
@ -16,26 +16,27 @@
       - jq
       - make
       - python3-pip
+      - sqlite3
+      - tree
   tasks:
-    - name: 'Install additional apt packages'
+    - name: "Install additional apt packages"
       ansible.builtin.apt:
-        name: '{{ apt_packages }}'
+        name: "{{ apt_packages }}"
         update_cache: true
-    - name: 'Configure timezone'
-      ansible.builtin.import_role:
-        name: yatesr.timezone
-      vars:
-        timezone: UTC
-      tags:
-        - skip_ansible_lint
-    - name: 'Configure security settings'
+    - name: "Configure security settings"
       ansible.builtin.import_role:
         name: geerlingguy.security
       vars:
         security_ssh_permit_root_login: "yes"
         security_autoupdate_enabled: "no"
-        security_fail2ban_enabled: "yes"
+        security_fail2ban_enabled: true
+    - name: "Copy keep files script"
+      ansible.builtin.copy:
+        src: "files/keep-files.py"
+        dest: "{{ bin_prefix }}/keep-files.py"
+        owner: root
+        group: root
+        mode: "0755"
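
The `keep-files.py` helper copied above is not included in this diff. Judging by its name and the `keep: 3` retention settings used elsewhere in the repo, it most likely prunes a directory down to the N newest files. A minimal sketch of such a helper (hypothetical; the real script's interface may differ):

```python
from pathlib import Path


def files_to_prune(directory: Path, keep: int = 3) -> list[Path]:
    """Return the files that would be deleted, keeping the `keep` newest
    by modification time. Deletion itself is left to the caller."""
    files = sorted(
        (p for p in directory.iterdir() if p.is_file()),
        key=lambda p: p.stat().st_mtime,
        reverse=True,
    )
    # Everything after the first `keep` entries is a pruning candidate.
    return files[keep:]
```

A caller would then `unlink()` each returned path, which keeps the destructive step explicit and easy to dry-run.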

View File

@ -1,15 +1,15 @@
 ---
-- name: 'Update and upgrade system packages'
+- name: "Update and upgrade system packages"
   hosts: all
   vars_files:
     - vars/ports.yml
-    - vars/vars.yml
+    - vars/secrets.yml
   tasks:
     - name: Perform an upgrade of packages
       ansible.builtin.apt:
-        upgrade: 'yes'
+        upgrade: "yes"
         update_cache: true
     - name: Check if a reboot is required

64
playbook-wakapi.yml Normal file
View File

@ -0,0 +1,64 @@
---
- name: "Configure wakapi application"
hosts: all
vars_files:
- vars/ports.yml
- vars/secrets.yml
vars:
app_name: "wakapi"
app_user: "{{ app_name }}"
base_dir: "/home/{{ app_user }}"
data_dir: "{{ (base_dir, 'data') | path_join }}"
backups_dir: "{{ (base_dir, 'backups') | path_join }}"
gobackup_config: "{{ (base_dir, 'gobackup.yml') | path_join }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_extra_groups: ["docker"]
- name: "Create application internal directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
loop:
- "{{ data_dir }}"
- "{{ backups_dir }}"
- name: "Copy gobackup config"
ansible.builtin.template:
src: "./files/{{ app_name }}/gobackup.yml.j2"
dest: "{{ gobackup_config }}"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Copy backup script"
ansible.builtin.template:
src: "files/{{ app_name }}/backup.sh.j2"
dest: "{{ base_dir }}/backup.sh"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.yml.j2"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true

View File

@ -2,6 +2,6 @@
 ungrouped:
   hosts:
     server:
-      ansible_host: '158.160.46.255'
-      ansible_user: 'major'
+      ansible_host: "158.160.46.255"
+      ansible_user: "major"
       ansible_become: true

View File

@ -3,10 +3,7 @@
   version: 1.2.2
 - src: geerlingguy.security
-  version: 2.4.0
+  version: 3.0.0
 - src: geerlingguy.docker
-  version: 7.4.3
+  version: 7.4.7
-- src: caddy_ansible.caddy_ansible
-  version: v3.2.0

View File

@ -1,18 +0,0 @@
---
- name: 'Create owner.'
import_role:
name: owner
vars:
owner_name: '{{ username }}'
owner_group: '{{ username }}'
owner_extra_groups: '{{ extra_groups | default([]) }}'
owner_ssh_keys: '{{ ssh_keys | default([]) }}'
owner_env: '{{ env | default({}) }}'
- name: 'Create web dir.'
file:
path: '/var/www/{{ username }}'
state: directory
owner: '{{ username }}'
group: '{{ username }}'
recurse: True

View File

@ -1,8 +1,8 @@
 ---
 # defaults file for eget
-eget_version: '1.3.4'
+eget_version: "1.3.4"
-eget_download_url: 'https://github.com/zyedidia/eget/releases/download/v{{ eget_version }}/eget-{{ eget_version }}-linux_amd64.tar.gz'
+eget_download_url: "https://github.com/zyedidia/eget/releases/download/v{{ eget_version }}/eget-{{ eget_version }}-linux_amd64.tar.gz"
-eget_install_path: '/usr/bin/eget'
+eget_install_path: "/usr/bin/eget"
 eget_download_dest: '/tmp/{{ eget_download_url | split("/") | last }}'
 eget_unarchive_dest: '{{ eget_download_dest | regex_replace("(\.tar\.gz|\.zip)$", "") }}'

View File

@ -1,6 +1,7 @@
+---
 galaxy_info:
-  author: 'Anton Vakhrushev'
-  description: 'Role for installation eget utility'
+  author: "Anton Vakhrushev"
+  description: "Role for installation eget utility"
 # If the issue tracker for your role is not on github, uncomment the
 # next line and provide a value
@ -13,9 +14,9 @@ galaxy_info:
 # - GPL-3.0-only
 # - Apache-2.0
 # - CC-BY-4.0
-  license: 'MIT'
-  min_ansible_version: '2.1'
+  license: "MIT"
+  min_ansible_version: "2.1"
 # If this a Container Enabled role, provide the minimum Ansible Container version.
 # min_ansible_container_version:

View File

@ -1,30 +1,30 @@
 ---
 - name: 'Download eget from url "{{ eget_download_url }}"'
   ansible.builtin.get_url:
-    url: '{{ eget_download_url }}'
-    dest: '{{ eget_download_dest }}'
-    mode: '0600'
+    url: "{{ eget_download_url }}"
+    dest: "{{ eget_download_dest }}"
+    mode: "0600"
-- name: 'Unarchive eget'
+- name: "Unarchive eget"
   ansible.builtin.unarchive:
-    src: '{{ eget_download_dest }}'
-    dest: '/tmp'
+    src: "{{ eget_download_dest }}"
+    dest: "/tmp"
     list_files: true
     remote_src: true
-- name: 'Install eget binary'
+- name: "Install eget binary"
   ansible.builtin.copy:
     src: '{{ (eget_unarchive_dest, "eget") | path_join }}'
-    dest: '{{ eget_install_path }}'
-    mode: '0755'
+    dest: "{{ eget_install_path }}"
+    mode: "0755"
     remote_src: true
-- name: 'Remove temporary files'
+- name: "Remove temporary files"
   ansible.builtin.file:
-    path: '{{ eget_download_dest }}'
+    path: "{{ eget_download_dest }}"
     state: absent
-- name: 'Remove temporary directories'
+- name: "Remove temporary directories"
   ansible.builtin.file:
-    path: '{{ eget_unarchive_dest }}'
+    path: "{{ eget_unarchive_dest }}"
     state: absent

View File

@@ -1,24 +1,24 @@
 ---
 # tasks file for eget
-- name: 'Check if eget installed'
+- name: "Check if eget installed"
   ansible.builtin.command:
-    cmd: '{{ eget_install_path }} --version'
+    cmd: "{{ eget_install_path }} --version"
   register: eget_installed_output
   ignore_errors: true
   changed_when: false
-- name: 'Check eget installed version'
+- name: "Check eget installed version"
   ansible.builtin.set_fact:
-    eget_need_install: '{{ not (eget_installed_output.rc == 0 and eget_version in eget_installed_output.stdout) }}'
-- name: 'Assert that installation flag is defined'
+    eget_need_install: "{{ not (eget_installed_output.rc == 0 and eget_version in eget_installed_output.stdout) }}"
+- name: "Assert that installation flag is defined"
   ansible.builtin.assert:
     that:
       - eget_need_install is defined
       - eget_need_install is boolean
-- name: 'Download eget and install eget'
+- name: "Download eget and install eget"
   ansible.builtin.include_tasks:
-    file: 'install.yml'
+    file: "install.yml"
   when: eget_need_install
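The install gate in these tasks is just "run `--version`, then substring-match the desired version". A minimal shell sketch of that decision (the function name and sample strings are illustrative, not part of the role):

```shell
# Decide whether eget needs (re)installation, mirroring the role's logic:
# install unless the binary ran successfully AND its --version output
# contains the desired version string.
need_install() {
  # $1: desired version, $2: output of `eget --version` (empty if not installed)
  case "$2" in
    *"$1"*) return 1 ;;  # version found: no install needed
    *)      return 0 ;;  # missing or mismatched version: install
  esac
}

if need_install "1.3.4" "eget version 1.3.4"; then
  echo "install"
else
  echo "skip"
fi
```

Note that a plain substring check also accepts e.g. "1.3.40"; the role shares that caveat since it uses Jinja's `in` operator the same way.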


@ -1,4 +0,0 @@
---
netdata_version: 'v2.0.0'
netdata_image: 'netdata/netdata:{{ netdata_version }}'
netdata_exposed_port: '19999'


@ -1,36 +0,0 @@
---
- name: 'Grab docker group id.'
ansible.builtin.shell:
cmd: |
set -o pipefail
grep docker /etc/group | cut -d ':' -f 3
executable: /bin/bash
register: netdata_docker_group_output
changed_when: netdata_docker_group_output.rc != 0
- name: 'Create NetData container from {{ netdata_image }}'
community.docker.docker_container:
name: netdata
image: '{{ netdata_image }}'
image_name_mismatch: 'recreate'
restart_policy: 'always'
published_ports:
- '127.0.0.1:{{ netdata_exposed_port }}:19999'
volumes:
- '/:/host/root:ro,rslave'
- '/etc/group:/host/etc/group:ro'
- '/etc/localtime:/etc/localtime:ro'
- '/etc/os-release:/host/etc/os-release:ro'
- '/etc/passwd:/host/etc/passwd:ro'
- '/proc:/host/proc:ro'
- '/run/dbus:/run/dbus:ro'
- '/sys:/host/sys:ro'
- '/var/log:/host/var/log:ro'
- '/var/run/docker.sock:/var/run/docker.sock:ro'
capabilities:
- 'SYS_PTRACE'
- 'SYS_ADMIN'
security_opts:
- 'apparmor:unconfined'
env:
PGID: '{{ netdata_docker_group_output.stdout | default(999) }}'
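The removed task above shelled out to `grep`/`cut` to extract the docker group id from `/etc/group`. The same field extraction, sketched against a sample line so it runs anywhere (the real task read the live `/etc/group`):

```shell
# /etc/group entries are colon-separated: name:password:GID:members.
# The third field is the numeric group id the role passed as PGID.
line="docker:x:998:alice"
gid="$(printf '%s' "$line" | cut -d: -f3)"
echo "$gid"   # prints 998
```

`getent group docker` is the more robust lookup (it also covers non-file NSS sources), but the deleted task's pipeline behaves as shown for local groups.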


@@ -1,5 +1,6 @@
 ---
-owner_name: ''
-owner_group: '{{ owner_name }}'
+owner_name: ""
+owner_group: "{{ owner_name }}"
+owner_extra_groups: []
 owner_ssh_keys: []
 owner_env: {}


@@ -1,60 +1,51 @@
 ---
 - name: 'Check app requirements for user "{{ owner_name }}".'
-  fail:
+  ansible.builtin.fail:
     msg: You must set owner name.
   when: not owner_name
 - name: 'Create group "{{ owner_group }}".'
-  group:
-    name: '{{ owner_group }}'
+  ansible.builtin.group:
+    name: "{{ owner_group }}"
     state: present
 - name: 'Create user "{{ owner_name }}".'
-  user:
-    name: '{{ owner_name }}'
-    group: '{{ owner_group }}'
-    groups: '{{ owner_extra_groups }}'
+  ansible.builtin.user:
+    name: "{{ owner_name }}"
+    group: "{{ owner_group }}"
+    groups: "{{ owner_extra_groups }}"
     shell: /bin/bash
-  register: uc_result
+  register: user_create_result
 - name: 'Set up user ssh keys for user "{{ owner_name }}".'
-  authorized_key:
-    user: '{{ owner_name }}'
-    key: '{{ item }}'
+  ansible.posix.authorized_key:
+    user: "{{ owner_name }}"
+    key: "{{ item }}"
     state: present
-  with_items: '{{ owner_ssh_keys }}'
+  with_items: "{{ owner_ssh_keys }}"
   when: owner_ssh_keys | length > 0
-- name: 'Prepare env variables.'
-  set_fact:
-    env_dict: '{{ owner_env | combine({
-      "CURRENT_UID": uc_result.uid | default(owner_name),
-      "CURRENT_GID": uc_result.group | default(owner_group) }) }}'
-  tags:
-    - env
+- name: "Prepare env variables."
+  ansible.builtin.set_fact:
+    env_dict: '{{ owner_env | combine({"USER_UID": user_create_result.uid, "USER_GID": user_create_result.group}) }}'
 - name: 'Set up environment variables for user "{{ owner_name }}".'
-  template:
+  ansible.builtin.template:
     src: env.j2
-    dest: '/home/{{ owner_name }}/.env'
-    owner: '{{ owner_name }}'
-    group: '{{ owner_group }}'
-  tags:
-    - env
+    dest: "/home/{{ owner_name }}/.env"
+    owner: "{{ owner_name }}"
+    group: "{{ owner_group }}"
+    mode: "0640"
-- name: 'Remove absent environment variables for user "{{ owner_name }}" from bashrc.'
-  lineinfile:
-    path: '/home/{{ owner_name }}/.bashrc'
-    regexp: '^export {{ item.key }}='
+- name: 'Remove from bashrc absent environment variables for user "{{ owner_name }}".'
+  ansible.builtin.lineinfile:
+    path: "/home/{{ owner_name }}/.bashrc"
+    regexp: "^export {{ item.key }}="
     state: absent
-  with_dict: '{{ env_dict }}'
-  tags:
-    - env
+  with_dict: "{{ env_dict }}"
-- name: 'Include environment variables for user "{{ owner_name }}" in bashrc.'
-  lineinfile:
-    path: '/home/{{ owner_name }}/.bashrc'
-    regexp: '^export \$\(grep -v'
+- name: 'Include in bashrc environment variables for user "{{ owner_name }}".'
+  ansible.builtin.lineinfile:
+    path: "/home/{{ owner_name }}/.bashrc"
+    regexp: "^export \\$\\(grep -v"
     line: 'export $(grep -v "^#" "$HOME"/.env | xargs)'
-  tags:
-    - env


@ -1,57 +0,0 @@
import os
import shlex
import fabric
from invoke import task
SERVER_HOST_FILE = "hosts_prod"
DOKER_REGISTRY = "cr.yandex/crplfk0168i4o8kd7ade"
@task(name="deploy:gitea")
def deploy_gitea(context):
deploy("gitea", dirs=["data"])
@task(name="deploy:keycloak")
def deploy_keykloak(context):
deploy("keycloak", compose_file="docker-compose.prod.yml", dirs=["data"])
@task(name="deploy:outline")
def deploy_outline(context):
deploy("outline", compose_file="docker-compose.prod.yml", dirs=["data/postgres"])
def read_host():
with open(SERVER_HOST_FILE) as f:
return f.read().strip()
def ssh_host(app_name):
return f"{app_name}@{read_host()}"
def deploy(app_name: str, compose_file="docker-compose.yml", dirs=None):
docker_compose = os.path.join("app", app_name, compose_file)
assert os.path.exists(docker_compose)
conn_str = ssh_host(app_name)
dirs = dirs or []
print("Deploy app from", docker_compose)
print("Start setup remote host", conn_str)
with fabric.Connection(conn_str) as c:
print("Copy docker compose file to remote host")
c.put(
local=docker_compose,
remote=f"/home/{app_name}/docker-compose.yml",
)
print("Copy environment file")
c.run("cp .env .env.prod")
for d in dirs:
print("Create remote directory", d)
c.run(f"mkdir -p {d}")
print("Up services")
c.run(
f"docker compose --project-name {shlex.quote(app_name)} --env-file=.env.prod up --detach --remove-orphans"
)
c.run(f"docker system prune --all --volumes --force")
print("Done.")


@ -1,67 +0,0 @@
# -------------------------------------------------------------------
# Global options
# -------------------------------------------------------------------
{
grace_period 15s
}
# -------------------------------------------------------------------
# Netdata service
# -------------------------------------------------------------------
status.vakhrushev.me, :29999 {
tls anwinged@ya.ru
reverse_proxy {
to 127.0.0.1:{{ netdata_port }}
}
basicauth / {
{{ netdata.login }} {{ netdata.password_hash }}
}
}
# -------------------------------------------------------------------
# Applications
# -------------------------------------------------------------------
vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to 127.0.0.1:{{ homepage_port }}
}
}
git.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to 127.0.0.1:{{ gitea_port }}
}
}
kk.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to 127.0.0.1:{{ keycloak_port }}
}
}
outline.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to 127.0.0.1:{{ outline_port }}
}
}
gramps.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to 127.0.0.1:{{ gramps_port }}
}
}
}

vars/homepage.yml (new file, +7 lines)

@@ -0,0 +1,7 @@
+---
+app_name: "homepage"
+app_user: "{{ app_name }}"
+base_dir: "/home/{{ app_user }}"
+docker_registry_prefix: "cr.yandex/crplfk0168i4o8kd7ade"
+homepage_web_image: "{{ homepage_web_image | default(omit) }}"


@@ -1,14 +1,7 @@
 ---
 base_port: 41080
-notes_port: "{{ base_port + 1 }}"
-dayoff_port: "{{ base_port + 2 }}"
 homepage_port: "{{ base_port + 3 }}"
 netdata_port: "{{ base_port + 4 }}"
-wiki_port: "{{ base_port + 5 }}"
-nomie_port: "{{ base_port + 6 }}"
-nomie_db_port: "{{ base_port + 7 }}"
 gitea_port: "{{ base_port + 8 }}"
-keycloak_port: "{{ base_port + 9 }}"
 outline_port: "{{ base_port + 10 }}"
-navidrome_port: "{{ base_port + 11 }}"
 gramps_port: "{{ base_port + 12 }}"
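Each service port in this vars file is a fixed offset from `base_port`, so removing a service leaves the remaining offsets (and therefore the published ports) unchanged. The surviving assignments work out to:

```shell
# Same arithmetic as the Jinja expressions above, for the entries this
# revision keeps; offsets are taken directly from the vars file.
base_port=41080
homepage_port=$((base_port + 3))
netdata_port=$((base_port + 4))
gitea_port=$((base_port + 8))
outline_port=$((base_port + 10))
gramps_port=$((base_port + 12))
echo "$homepage_port $gitea_port $gramps_port"   # prints 41083 41088 41092
```

Keeping the old offsets (rather than renumbering 1..N) is what makes deletions like this diff safe for already-deployed reverse-proxy configs.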

vars/secrets.yml (new file, +142 lines)

@@ -0,0 +1,142 @@
$ANSIBLE_VAULT;1.1;AES256
62653431636461623338643536653736633166303934626565363963373637396534303130373035
6565376162653735313737333439633862643366336264650a633265316463323062653032363861
32626536343138663837633334316537373662653262366163633334623764633938323363363962
6230333564643665320a613862653632363363616266336338346539323964383736366235306437
33306363353163383663643062656330313134353836666232616532316264303564336235356661
30653262363866653139646436333036393837383262643537313933613939326433313565393465
31373036353133663337613935343038616164316132303833363338623863633234656537653039
62626436346238636234393939366139363034306432326538656264343733356537393332633836
38636639626665666238656338363633383566616638353235383465623232646537616230626630
63303130316438353934656636393366306566346362356564393661643064323630636463383061
37636461386432323136393739633862313337333261306664323361393835323034643134383461
31313762616538336666656137373631336132383364646163633732323431613239333563653332
65616664333839363834333362626238633833666430653738613636333432333430333861356339
61323865663661383534343964346238383134613532616637346235616139383434623564333361
31636165653261363830623162623738333937316664633434346431626630393837366666643434
61643734653834326434353431393732376266626266313264376235323838313539306463653864
36393461366230643234376161623330326365616539323965633431633238386262373562383161
39323634633166643038356434616461613864303334393932663730303839373530643933323839
66353337326336656635636362356531613634623633303461336565363564393964663430393666
64326439346233346132653230343234653430653239636362616561636166343030303863373337
36363633646432613138313062346164663730313061363432396138323561366430316439343036
32353931393064666231323863656165363066313236613332356161363139616636333963386130
37363030383765613132353161613766633635363033656561343038633839313933646264383730
64336339646264383332373639326164373163383966626363653762643037353636376336626136
33346533303036326531316332306461646361376435316438376161663162336335353938366565
30633133653431393066393961313138383337313731653031323432633766356338316366373432
32373937663961623739633439636661336461346132376533373961666432353937373066643165
61663063363661633938373365393665356665636562646265313834373962336566393835633339
34396666396162613162326331313037303933366564623837386338363063636564656339336639
66346465366233663534373465313930323134313835316464363263383866313563396263616535
63383265623865636162346635613863356266336664343434393437656134353639353535383332
62623934643930313939646466663336633034343534396137333264623263663866663339663266
30343234356536663262616363376663646264353331646164376331376639363135373137396437
37363166386233356434656237373535326162303437346233623263663534383032363638376134
61653939306433393437656465343066613530396265396262373433383637656266303064623234
64333062353435373863636439663561393763333538303836303631666262326430623835656138
37653562353562373935333235316430613737653862303933333062643663333364333966643461
33323335346566363337643161303835356336306232653763346639323265373432376239363566
64373562653238333865326335613133636335373739396335633631313431363061616139303463
37333364393438666532396131343637373833353766396234383739306565646439366438653032
33656330343061636338643465653664326338663233316631303465666632653436633135643664
64616132366632666431653262393035393163343664303961396431666236303864303865343634
35616634613165373637653235323164323666343436646339646637646234306163333462393063
32346534636165656436353036316232303266616135303663343631303565623562616237306365
65303938646239393564333461343238636335336533633265383066653734613332656563666434
31316665613630336263613934316361383332363164323266373565323239343033666663396534
39323739313636616232663535386439363065333766623837336230303334656466656262613363
37386664336436376530373436353235616437333834646563353830626162336261333135383866
64383930316531373366646335306131633166353161336463376530353066356530393665393063
31613636386532623035373866373065633233633135343439616662616232366337313764646436
64626262643532613136373238316561616361393433323066326333663663353236393662396539
31653036303031303462643231333965653536666136313638613832393361666131363435633932
31663864326563663230626237643763333737613239373134626433636564386231383961316162
39383165336433626466393935383363396333636131643733663866356434366664613766396263
34313934626133653361633665323131613736306331373732323434323535346136393964356231
62346136356331393238346333393266613365633563626238353530333931613330663765393936
32333261353634646366323238353238643837633735636662356630373464343330626630656130
36356565356430643133386461313335343436316263303064366139316638663161356332386362
37376431393661386231313763303266313630323362363664336366633035353562303439373630
33343265633630343065363461363064653933303932613761303538393734373962613633386539
66636534333537313135356665633966326430373062346136326532666638303334653263646431
38393131653338316663313265653861663334326635353137623739396636333637343137636339
32303836373535326363396434326233623532633931653039643763326263616232333462616631
36666564623030396134346665386661386433366266363739626161653062323963313365353161
35643530343439326133613939353737653165326538666530366530323963363839373032326462
34666235376263616364656130633637346334353934396132353263313237316366303137386430
64653563333963313361303239666361336136356363306266633833366262326431616161613238
38653538613032386238623839663332613064333031303939363733396635373238666562386536
32316566666435376239386637396334643861643634316338613063656465373164646530363865
34373130636435326130633437303539646535336131393339613139383636333763336530636534
34636666666265373636326666333130623863316465663333653466353063313134386262333739
62626264393362353663303531313061643538663532333164336662343732373463623166396539
39396531376338616538633633343733343765306237656466666232623163303738643431633763
61656335616430653936303831393664653365363764333362373337323364323039363163353461
61336536316466396636306266353830316665343739613033346538333830306263386134613737
64316339613462346438656362346664303762643766373364343931626530626439336634666537
31633964386564663531343764326666666261643464353438353035333665363434646661646663
38636239373331623061343730376632393963303732393533396464633131633435373161303163
66383461343861326665623463636262336562633936623563373136613063356362383862663232
37333331373431393137363735613366656434323065346661366433663464666363343231393863
64633530316230653065356165366135396531663731323866376162306238343962376362633234
61626563306431623336623737353931316236623333623337383366613262346631646330313637
39366239396330303461303666396431663062626533336136643039353034633230353765353334
38613362653963336162326163356662356661386630353664333265373032316531656131376665
37376262363130336161613230333863653662623436666361396561613935323432663665643138
38616564636634613164313666393532396265396135326538336665373232316461326635306131
34343632636637653835653131613161316237346239363830386536363933643532333533373333
39643364306163666366376535653333323435383332633961343930633635383030356463333964
39626130666166313234386439383833616265316265363430343134633730336261383435356138
62373063346238613061363033343366623633373034346531303538396335653938646664303962
31336634623135616237323837623831306535316463613266326262663934303938373132343735
37656335333263326531646162393738653632376164323165393563656138613830633936396433
61353332343134636564333233393863643837353366386234376237623435663765343366363033
63326233383962633266303962613361643464613764303531333930363736323535386632393766
61353666303134663466333330383031333933666137346364656364313965656164303065303530
34616130653061613934393831373130333566363736626261316330303966656162326638333130
66373133613536623566303432356666346535636237616561323063643439616436393666376536
32613830343636393031333737376332396230313034393062663437613838363263333233613439
30623039336339373234326261306435366332656164613439376139346333616331326561383963
30643133376632656564616536323863373237623263366266396264633464373765316164346165
37636233633661643362636630356333333766613036663335613264333439323239633861363034
34663937376530653837653236303839336631313863363239626632646436653638366638366566
39306538353231623434373537313862386335393262633062313432646232623863383731313031
30656366363837366666393933346238363336363030373836386230343062363661306263633163
33626562623935643665626239386133636531393536336661613430343630333961303233343430
63656666346138643163393663316134666336323961626163376461663635633834333337393062
61656163613234633965356133666335343065626137633137333266613561633936386136643134
37383562663031393133326662623136386539633066323336306262346236613161613637626162
36636133666334333636653535623732343233396430653566393165353431303739656239373738
33323939633264303139323162613964306237376461383261646635343036313639626539373238
32336537373436373338386432646139303831383138326564333739353761616336346461356532
38303138656533386231303336336564656135346162376662663962663763353830663237323138
33373331656637363139626132393231313136303936633161636261643264313230356261366165
39666331306262643566663830626663656530303831343231323336306266363735393966613062
63353938386263376166316335656164633233633465303065663565373764343031663866653135
64663766386436653665356265333565323336636539656237303334383636353161643366656637
66356532373130323236313936623964663433333965326662333833316437326461326165376661
66396537653032346666363965313339323331303864616230646361386335663138613433326261
35613430363864336635343434333761656639633863323534653862383936653762646134356664
38326463326239636162333435656561343739366364313738663535636136323439373462643832
62633661663337343538393466613734633531666532353161616231323161646237653736346561
64323063656366373931396639393261643333393333626539663561636661393936316539633263
63343331313464623636353031343232613534663565303538333164306531303438616539386364
30376233333630336431336364663834633734636261353364343564333639623737363538313462
61616233663335303062336635376435643965373039336231346234363436356238356162613138
65326532663461616263626238346535623136633039613939353132313836373962646463333535
65313562346631633435616232366166373763346337303561326130333936346130363431383036
62356435616630396539303633343166646461393030336462366463636138316333633363643636
65376131333731356566333237363266656466376539326438313930376363386231616138336335
65333735653830373035656265336331346562353233663465343935383235303930633831613137
64303130666532303733633133386334613733383562613661643931636136386264396438316366
61653964643135646332343764666134336666336232376465353462356632346533633961636534
32643234396636303135663562656435376561336235303837643932366334616265383639343733
65633833653763643366646232343765306131313465326263623636386131376463356139623334
39343163366439643334646663393434353333316234623530393431643539346435616263303734
61633066653838363933646230623238653431393061646430383537343363643562653831336362
37626630633161653763386663373630306564663339393265663732623434643231326335376562
37663234643466366535326461396631633430613431346134316635653032663033623465346338
61353331393631343365663233376330333730366161353362626166646232313666336333386265
33373761313536326165343339346263316636363362393365663034353964373164643763383037
3666


@ -1,69 +0,0 @@
$ANSIBLE_VAULT;1.1;AES256
31326232656538373331333566386562333865623563313931366165353939633431323062333663
3463313831316433383162383933623262636231356331370a393231393536653164653361663232
31626661623334636263336534323261336237323935336366366161366665336537383634653265
6563386130326531610a626463323739626531653731336366626139396331393531623232623332
32383937656365666239306565666232613361616439633438316636356433393139616663663433
39336631663633313362623263626237313839356562656462323235626433386138313132643732
30656165663362316433306438376266363530313539323338336563333365366465366530396465
30633431643563356131656332313564663963356662616235306333393137383135326665646530
30343131646430626462626563366133386639643033386130383063656434326634346532396636
33393165626535643731616535396431656435343032333539623538663632333462666339323839
66613463366665616165613832333931356139303835323363346564343539643062373963633263
39653533356533393530346263623930396339336630383661316163303232326135663761366534
64383766326134386462666662633965336135313064326536316332646364623430303838333737
33633332616231666434613532363963646433303531343362636232363232353161343735616533
37383064353839336436663134653237303962616132393534366234633338616634396536666338
34633237623662353066613936373862383264393931343830656563316662393133363536363331
65636531663838383538656339386134373762636164343630353232303639393130663035353335
36333461613031663435656465633934613934363238313462656234313833616234663265343436
39336635633232396130623166326662376234663632346437303131366534356439306238343764
30653732313266613430626637636235393831323237653665346238373363303439656438393436
35343464656164303565393430663930653764303761633737636532313964383037313665333064
35393466316462323566626235313637313136323331626134663863636534363666396236663132
31616164316666303465653863656536666331363433363163333566343338386130616333333364
61633331633965383834356630343237653466626237656164643435343433626434366331346531
36343165626664363039663439363236346466343061656137663932363962373639646339363164
32623265303762343833343535333463613138356336643563323435356431616366653539643065
32623639663137356262623131353135333630643435323462643032663061653066636662323233
36636364396465653637396464336161303761633366303339653834323036633666373630343762
34383734346436633962636131353235346462636632373461376265633365383861396262303032
37376434626637616437613364336536666431663434313238373333303362383538623962396262
30323334643064343237663862373034663338356430393935336131663634646130363733393164
33383832633630396434386339313331363035383634616463383363386433643334623331326261
33333463353133306530333937646238633831376165373735363462353263333930396264383039
30666239313237613437363635333863663137633961336235313036306335373166633465303239
31636135323965623836383231396535366263636164363737313761613531613633303461386533
66653230323962626539343338336533333435323565616536643436336534323730373864666366
39356330643562393434363032373338373363383565643934383464363634353435383731636534
36313031373365393236363735636234616134646334306266643336376336343464623534663766
62376330353232376232346337323562306437303631303833383430666638393835663033326135
31333166386632663564383637373266333961353139333662303333636439393835363630363539
61623435396638653937343866373165623530373664633665333962376235646163373762373734
62373262393562643737653965636462323065343530626132393834633361623531333361613337
62653966343863623666326463356130643766346638656436643738613032333462663061346335
33373263303438383236353733343766356338323231663161303830333663366232386461643730
37306663373262633635326338633136313938666230343334353735313731626363316436336130
66643264313239343334323362303165643966383661653239373731306433346465613839616261
33313438623235343366636630373963626664356531313934363035303137613465663434333265
65393665626539623232616336663832346265613934313666616266383537613066343930656237
62653935663234376634316433396631363232396337393165323131633632303330646538613330
30356361336432346435366537386362363630306333386131336663623661376163373039663461
33353936346462633732376132393339363334313137303965313762366439306361383963636130
37366336666330323665653266343662383065396563633238313564363863633165326166666634
61366666383236646161306465376635336334343461656436643161363038323534636632363464
65666133613437303931333534643235393438323138346130333338316233386536306238323463
64613335366430653766343061326361646339613363356563623466343466343930323032303532
61636561383531643664613833376636316364366166653365616336336564353130356564323331
33383934323166633338316265636363343232663033623732636636373437363837643237653464
63393436313836616335373562666532353338313035663632363265653162303233333566333538
61613563636234343433323635303462646362383763346264393734386130313362333736623236
33613237663064616330303733373434386538633463626332633534376465376135336230346366
38306135376539303131663237623764633633653933336162663636346361356664323565396430
38643262326132333832653536663535363136646336333236373661346431326430333161613535
65316438626436336235353765363233663131333063333330323731366266393466313062323539
30393564383430373661613737343634306138393566623830636633616430313531653736303739
30373439626362653639313162306237396330396633303761353635396235666333643339393061
35313535613264366435386338306633396631643838313962643334326236386237363935376531
36626338366136306631623235346138356132666632613466623132353161396464646539376665
31623862316466343435