Compare commits

..

204 Commits

Author SHA1 Message Date
a31a07bd16 Backup: remove old config
Some checks failed
Linting / YAML Lint (push) Failing after 8s
Linting / Ansible Lint (push) Successful in 16s
2025-12-21 10:13:05 +03:00
54a951b96a Backup: refactor notifications 2025-12-21 10:10:12 +03:00
e1379bc480 Backup: roots parameter 2025-12-20 21:33:21 +03:00
037e0cab9b Backup: restic backup refactoring 2025-12-20 21:31:39 +03:00
2655869814 Backup: support for multiple storages
Some checks failed
Linting / YAML Lint (push) Failing after 9s
Linting / Ansible Lint (push) Successful in 19s
2025-12-20 21:19:06 +03:00
0e96b5030d Backup: refactoring 2025-12-20 21:04:54 +03:00
a217c79e7d Backup: extract restic storage into separate class 2025-12-20 21:03:32 +03:00
6a16ebf084 Backup: parse config to dataclasses
Some checks failed
Linting / YAML Lint (push) Failing after 9s
Linting / Ansible Lint (push) Successful in 16s
2025-12-20 17:44:02 +03:00
2617aa2bd2 Backup: support multiple roots 2025-12-20 17:27:29 +03:00
b686e4da4d Backup: change config format to toml
With support of multiple config values
2025-12-20 17:13:35 +03:00
439c239ac8 Lefthook: fix python format hook 2025-12-20 11:55:13 +03:00
acf599f905 Lefthook: check py files with mypy
Some checks failed
Linting / YAML Lint (push) Failing after 8s
Linting / Ansible Lint (push) Successful in 18s
2025-12-20 11:38:14 +03:00
eae4f5e27b Lefthook: format py files on commit 2025-12-20 11:35:54 +03:00
4fbe9bd5de Backups: skip system dir lost+found
Some checks failed
Linting / YAML Lint (push) Failing after 8s
Linting / Ansible Lint (push) Successful in 15s
2025-12-20 11:22:24 +03:00
dcc4970b20 Add owner and group to backup-targets files 2025-12-20 11:18:37 +03:00
2eac1362b5 Wanderer: backup all data with restic 2025-12-20 11:18:11 +03:00
e3d8479397 Memos: exclude media files from gobackup
Backup media files with backup-targets
2025-12-20 11:06:56 +03:00
91c5eab236 Gramps: exclude media files from gobackup
Backup media files with backup-targets
2025-12-20 11:04:50 +03:00
ca7f089fe6 Backups: use dataclass Application for app info 2025-12-20 10:48:40 +03:00
479e256b1e Backups: use constants for file names 2025-12-20 10:36:19 +03:00
11e5b5752e Backups: add backup-targets file support 2025-12-20 10:32:00 +03:00
392938d0fb Gitea: upgrade to 1.25.3
Some checks failed
Linting / YAML Lint (push) Failing after 10s
Linting / Ansible Lint (push) Successful in 19s
2025-12-19 20:35:53 +03:00
2cc059104e Netdata: upgrade to 2.8.4
Some checks failed
Linting / YAML Lint (push) Failing after 10s
Linting / Ansible Lint (push) Successful in 20s
2025-12-17 20:18:12 +03:00
d09a26b73a Gramps: update vars
Some checks failed
Linting / YAML Lint (push) Failing after 9s
Linting / Ansible Lint (push) Successful in 17s
2025-12-16 20:36:51 +03:00
097676f569 Gramps: move assets to local storage 2025-12-16 20:34:25 +03:00
e878661cb3 Specify secret files to ignore 2025-12-16 19:43:40 +03:00
cb50c1c515 Docker: prune images every night
Some checks failed
Linting / YAML Lint (push) Failing after 10s
Linting / Ansible Lint (push) Successful in 18s
2025-12-16 19:34:31 +03:00
33de71a087 Add agents.md file for ai agents 2025-12-16 19:29:52 +03:00
fbd5fa5faa Memos: upgrade to 0.25.3
Some checks failed
Linting / YAML Lint (push) Failing after 10s
Linting / Ansible Lint (push) Successful in 18s
2025-12-16 11:51:21 +03:00
faf7d58f78 Netdata: update config
Some checks failed
Linting / YAML Lint (push) Failing after 9s
Linting / Ansible Lint (push) Successful in 17s
map /etc/hostname config into container
2025-12-14 21:22:00 +03:00
0a75378bbc Remove old ports config
Some checks failed
Linting / YAML Lint (push) Failing after 9s
Linting / Ansible Lint (push) Successful in 16s
2025-12-14 19:25:33 +03:00
bdd74bdf2e Authelia: add backup for storage database
Some checks failed
Linting / YAML Lint (push) Failing after 9s
Linting / Ansible Lint (push) Successful in 19s
2025-12-14 18:10:59 +03:00
78bee84061 Outline: add postgres health check 2025-12-14 17:58:04 +03:00
7b81858af6 Miniflux: change file names 2025-12-14 17:51:06 +03:00
08fda17561 Gramps: move cache to separate dir 2025-12-13 15:40:56 +03:00
841bd38807 Update valkey to 9.0 2025-12-13 15:35:43 +03:00
fb1fd711c2 Dozzle: upgrade to 8.14.11
All checks were successful
Linting / YAML Lint (push) Successful in 9s
Linting / Ansible Lint (push) Successful in 17s
2025-12-13 14:57:41 +03:00
ecf714eda7 Gramps: reduce celery workers to 1
And update valkey to 9
2025-12-13 14:57:23 +03:00
81f693938e Netdata: upgrade to 2.8.2
All checks were successful
Linting / YAML Lint (push) Successful in 9s
Linting / Ansible Lint (push) Successful in 17s
Tune config, setup update every 10s instead of 1s
2025-12-13 14:46:15 +03:00
10d67861a0 Netdata: revert to 2.7.3
All checks were successful
Linting / YAML Lint (push) Successful in 10s
Linting / Ansible Lint (push) Successful in 17s
High cpu usage for dockerd and containerd
2025-12-13 13:29:40 +03:00
3f5befb44d Netdata: upgrade to 2.8.2
All checks were successful
Linting / YAML Lint (push) Successful in 9s
Linting / Ansible Lint (push) Successful in 17s
2025-12-13 09:52:08 +03:00
1b75ddaef2 Disable python docker package 2025-12-13 09:51:49 +03:00
7d6ef77e64 Authelia: fix run-app behavior 2025-12-13 09:51:35 +03:00
ae7c20a7aa Add mount configuration 2025-12-13 09:03:29 +03:00
67df03efca Add combined application playbook
All checks were successful
Linting / YAML Lint (push) Successful in 11s
Linting / Ansible Lint (push) Successful in 21s
2025-12-13 08:46:34 +03:00
48bb8c9d33 Add combined system playbook 2025-12-13 08:38:12 +03:00
5b53cb30ac Add tag 'run-app' for application run
Useful to skip the run when configuring the app
2025-12-13 08:22:11 +03:00
f2bc221663 Outline: change postgres owner
All checks were successful
Linting / YAML Lint (push) Successful in 9s
Linting / Ansible Lint (push) Successful in 16s
2025-12-11 11:26:13 +03:00
b41a50006b Exclude home dir from backups 2025-12-11 10:59:27 +03:00
c2ea2cdb39 Fix app user and group uid and gid
All checks were successful
Linting / YAML Lint (push) Successful in 9s
Linting / Ansible Lint (push) Successful in 16s
Prepare for system upgrade
2025-12-11 10:52:27 +03:00
7e67409393 Applications: move to new base directory
All checks were successful
Linting / YAML Lint (push) Successful in 10s
Linting / Ansible Lint (push) Successful in 21s
2025-12-11 10:14:27 +03:00
6882d61f8e Gitea: move to new app directory
All checks were successful
Linting / YAML Lint (push) Successful in 9s
Linting / Ansible Lint (push) Successful in 17s
2025-12-07 17:53:08 +03:00
47a63202b8 Fix spaces in file 2025-12-07 17:52:48 +03:00
af289f1e28 Configure stateless apps for new storage
Some checks failed
Linting / YAML Lint (push) Failing after 9s
Linting / Ansible Lint (push) Successful in 18s
2025-12-07 17:40:32 +03:00
b08f681c92 Exclude lost+found dir from applications 2025-12-07 17:40:08 +03:00
8dfd061991 Backup apps from /mnt/applications 2025-12-07 17:16:43 +03:00
306d4bf8d0 Install dust for space usage check 2025-12-07 17:13:20 +03:00
dbd679aa8b Gramps: move to new app directory 2025-12-07 16:48:25 +03:00
47ed9c11c1 Add apps data directory (external drive) 2025-12-07 16:47:59 +03:00
f9ad08fd09 Simplify owner setup
All checks were successful
Linting / YAML Lint (push) Successful in 9s
Linting / Ansible Lint (push) Successful in 19s
2025-12-07 15:56:16 +03:00
4c7338f857 Update eget tools
All checks were successful
Linting / YAML Lint (push) Successful in 9s
Linting / Ansible Lint (push) Successful in 21s
2025-12-07 15:39:21 +03:00
a95da35389 Backups: move secrets to config file
Some checks failed
Linting / YAML Lint (push) Has been cancelled
Linting / Ansible Lint (push) Has been cancelled
Allow running the backup script with sudo
2025-12-07 15:14:55 +03:00
c74683cfe7 Gramps: upgrade to 25.11.2
All checks were successful
Linting / YAML Lint (push) Successful in 44s
Linting / Ansible Lint (push) Successful in 1m8s
2025-12-01 09:18:54 +03:00
9dff413867 Fix linting
All checks were successful
Linting / YAML Lint (push) Successful in 9s
Linting / Ansible Lint (push) Successful in 19s
2025-11-28 20:37:39 +03:00
23a2bae7ec Fix linting
Some checks failed
Linting / YAML Lint (push) Failing after 8s
Linting / Ansible Lint (push) Successful in 18s
2025-11-28 20:13:58 +03:00
942bb7d999 Fix editorconfig for yaml files
Some checks failed
Linting / YAML Lint (push) Failing after 8s
Linting / Ansible Lint (push) Failing after 19s
2025-11-28 18:30:23 +03:00
6ff7a7e3b4 Add lint workflow 2025-11-28 18:27:34 +03:00
8ae28e64f4 Gitea: upgrade to 1.25.2 2025-11-27 11:13:14 +03:00
f7e8248cac Dozzle: upgrade to 8.14.8 2025-11-18 11:36:51 +03:00
2af9066dec Gramps: upgrade to 25.11.0 2025-11-18 11:36:35 +03:00
e3b2e064c0 Outline: upgrade to 1.1.0 2025-11-18 11:36:15 +03:00
380a54cb25 Authelia: upgrade to 4.39.14 2025-11-18 11:36:02 +03:00
d5078186e7 Gitea: upgrade to 1.25.1 2025-11-05 09:34:07 +03:00
57bb696e6e Memos: configure local storage 2025-11-04 17:19:29 +03:00
c6cc7d4c6c Wakapi: upgrade to 2.16.1 2025-11-04 14:15:44 +03:00
90abc6c833 Gitea: upgrade to 1.25.0 2025-11-04 14:15:31 +03:00
395203f236 Memos: install 0.25 2025-11-04 14:10:39 +03:00
57cc639cc8 Outline: upgrade to 1.0.1 2025-10-29 09:44:15 +03:00
1405a2364e Netdata: upgrade to 2.7.3 2025-10-29 09:44:01 +03:00
b165899f25 Gramps: upgrade to 2.10.1 2025-10-27 11:20:21 +03:00
86147d0103 Authelia: add redis (valkey) as session storage 2025-10-26 20:13:31 +03:00
2c9ade0a8e Tools: add args to secret generation tasks 2025-10-26 20:05:06 +03:00
35c1f727f6 Wakapi: try to setup oidc, but failed
https://github.com/muety/wakapi/issues/856
2025-10-25 15:13:01 +03:00
b7d2fca2f2 Gitea: upgrade to 1.24.7 2025-10-25 13:37:31 +03:00
725c4c02cc Dozzle: upgrade to 8.14.6 2025-10-25 13:37:09 +03:00
328256c6be Transcriber: release transcriber:2fc5a56-1761210134 2025-10-23 12:03:51 +03:00
b08dc862c9 Transcriber: release transcriber:ec637c0-1761209409 2025-10-23 11:51:45 +03:00
0810c6c099 Transcriber: release transcriber:822e168-1761208842 2025-10-23 11:42:10 +03:00
dd6b34e983 Transcriber: release transcriber:822e168-1761208664 2025-10-23 11:39:35 +03:00
6fd6d76594 Transcriber: rewrite deploy
Same as homepage deploy

Prepare for two-step deploy:
- local build and fix local tag
- deploy to remote server
2025-10-23 11:37:24 +03:00
61e2431975 Homepage: release homepage-nginx:f797e17-1761204003 2025-10-23 10:20:14 +03:00
9a23e35126 Homepage: rewrite deploy
Prepare for two-step deploy:
- local build and fix local tag
- deploy to remote server
2025-10-23 10:10:06 +03:00
f2a9e660ed Wakapi: upgrade to 2.16.0 2025-10-21 12:02:41 +03:00
bd5f5ca452 Dozzle: upgrade to 8.14.5 2025-10-21 12:02:18 +03:00
860cfd0450 Authelia: upgrade to 4.39.13 2025-10-21 12:01:50 +03:00
884553892b Update tools: rclone 2025-10-21 12:01:14 +03:00
3e43c3e44d Wanderer: upgrade to 0.18.3 2025-10-06 09:55:59 +03:00
823533a8cb Netdata: upgrade to 2.7.1 2025-10-06 09:55:40 +03:00
f54cac170e Gramps: upgrade to 25.9.0 2025-10-06 09:55:20 +03:00
17950bcfad Dozzle: upgrade to 8.14.4 2025-10-06 09:54:58 +03:00
5f6891d192 Authelia: upgrade to 4.39.11 2025-10-06 09:54:35 +03:00
9c4ff91ccf Upgrade rclone, btop, restic, gobackup, task 2025-10-06 09:54:16 +03:00
541312b5e9 Gitea: upgrade to 1.24.6 2025-09-18 12:03:10 +03:00
c82168561e Outline: upgrade to 0.87.4 2025-09-18 12:02:48 +03:00
c37a5f0d7d Dozzle: upgrade to 8.13.14 2025-09-18 12:02:29 +03:00
8714f1bd95 Authelia: upgrade to 4.39.10 2025-09-18 12:02:01 +03:00
0e2eba5167 Wanderer: upgrade to 0.18.2 2025-09-18 12:01:28 +03:00
a1a94d29a8 Wanderer: install 0.18.1 2025-09-12 20:17:33 +03:00
1d5ce38922 Wakapi: upgrade to 2.15.0 2025-09-07 11:16:30 +03:00
0b9e66f067 Authelia: upgrade to 4.39.8 2025-09-05 09:29:39 +03:00
379a113b86 Dozzle: upgrade to 8.13.12 2025-09-05 09:29:22 +03:00
8538c00175 Outline: upgrade to 0.87.3 2025-09-05 09:29:05 +03:00
645276018b Authelia: upgrade to 4.39.7 2025-09-01 09:30:59 +03:00
ce5d682842 Dozzle: upgrade to 8.13.11 2025-09-01 09:30:35 +03:00
de5b0f66bd Gramps: upgrade to 25.8.0 2025-09-01 09:30:18 +03:00
64602b1db3 Outline: upgrade to 0.87.0 2025-09-01 09:29:55 +03:00
caecb9b57e Caddy: upgrade to 2.10.2 2025-08-29 08:31:52 +03:00
e8be04d5e1 Dozzle: upgrade to 8.13.10 2025-08-29 08:31:32 +03:00
a7f90da43f Netdata: upgrade to 2.6.3 2025-08-25 09:11:55 +03:00
0f80206c62 Gitea: upgrade to 1.24.5 2025-08-16 08:49:35 +03:00
1daff82cc5 Dozzle: upgrade to 8.13.9 2025-08-16 08:49:16 +03:00
9b4293c624 Add transcriber app 2025-08-14 15:13:29 +03:00
0d93e8094c Authelia: more strict policy 2025-08-13 19:21:14 +03:00
b92ab556e5 Dozzle: fix hostname 2025-08-13 19:21:00 +03:00
8086799c7b Dozzle: install version 8.13.8 2025-08-13 19:08:46 +03:00
6ec5df4b66 Netdata: upgrade to 2.6.2 2025-08-13 18:49:18 +03:00
fb91e45806 Outline: upgrade to 0.86.1 2025-08-11 08:15:06 +03:00
44f82434e7 Authelia: upgrade to 4.39.6 2025-08-11 08:14:37 +03:00
31ca27750e Docker: remove unnecessary call
Login to the Yandex registry is only needed in app playbooks
2025-08-07 15:46:39 +03:00
4be8d297ba Authelia: move secrets to separate file 2025-08-07 15:07:51 +03:00
bcd8e62691 Backups: rewrite backup script
To avoid specifying individual applications
2025-08-07 12:06:07 +03:00
160f4219c5 RSS-Bridge: upgrade to 2025-08-05 2025-08-07 09:57:05 +03:00
c518125bbd Gitea: upgrade to 1.24.4 2025-08-07 09:55:17 +03:00
e16e23d18c Outline: upgrade to 0.86.0 2025-08-07 09:52:44 +03:00
ede37e7fa3 Miniflux: add restart policy 2025-08-04 09:50:43 +03:00
b4cddb337a Miniflux: run postgres as app user 2025-08-04 09:15:37 +03:00
35f1abd718 Miniflux: change secret storage from env to files 2025-08-04 08:10:07 +03:00
21b52a1887 Secrets: add role for secret deploy 2025-08-04 08:06:58 +03:00
af39ca9de8 Security fixes: backups 2025-08-03 15:13:26 +03:00
e10b37a9f6 Secrets: remove unused variables 2025-08-03 15:03:33 +03:00
85627f8931 Authelia: protect secret files
The word "secrets" activates the pre-commit hook
2025-08-03 11:11:50 +03:00
38e2294a65 Security fixes: telegram 2025-08-03 11:06:59 +03:00
dfcb781a90 Security fixes: S3 2025-08-03 11:04:33 +03:00
2c8bd2bb8d Security fixes: postbox 2025-08-03 10:42:35 +03:00
592c3a062b Postbox: refactor smtp tools 2025-08-01 14:04:56 +03:00
04d789f0a4 Secrets: remove unused 2025-08-01 13:43:24 +03:00
16719977c8 Add gitleaks and custom script to check secrets in commits
Additionally add lefthook to manage git hooks
2025-08-01 13:31:44 +03:00
6cd8d3b14b Netdata: add monitoring for postgresql databases 2025-08-01 10:58:07 +03:00
791caab704 Add tasks to clear docker objects 2025-07-20 11:51:10 +03:00
df60296f9d Gitea: upgrade to 1.24.3 2025-07-20 11:35:26 +03:00
232d06a123 Gramps: upgrade to 25.7.2 2025-07-20 11:32:28 +03:00
a3886c8675 Wakapi: upgrade to 2.14.1 2025-07-20 11:31:59 +03:00
db55fcd180 Authelia: upgrade to 4.39.5 2025-07-20 11:31:10 +03:00
53dd462cac Netdata: upgrade to 2.6.0 2025-07-20 11:30:05 +03:00
28faff3c99 Gramps: upgrade to 25.7.1 2025-07-11 10:06:05 +03:00
2619b8f9e6 Outline: upgrade to 0.85.1 2025-07-11 10:02:54 +03:00
8a9b3db287 Gramps: upgrade to 25.7.0 2025-07-02 13:43:33 +03:00
a72c67f070 Wakapi: install 2.14.0
And transfer data from local
2025-07-01 11:21:05 +03:00
47745b7bc9 RSS-Bridge: install version 2025-06-03 2025-06-30 19:18:45 +03:00
c568f00db1 Miniflux: install and configure rss reader 2025-06-28 12:12:19 +03:00
99b6959c84 Tasks: add quick commands for authelia 2025-06-28 11:00:32 +03:00
fa65726096 Authelia: upgrade to 4.39.4 2025-06-28 10:02:57 +03:00
f9eaf7a41e Rename encrypted vars to secrets 2025-06-28 09:59:04 +03:00
d825b1f391 Netdata: upgrade to 2.5.4 2025-06-28 09:57:19 +03:00
b296a3f2fe Netdata: upgrade to 2.5.3 2025-06-22 09:34:57 +03:00
8ff89c9ee1 Gitea: upgrade to 1.24.2 2025-06-22 09:31:46 +03:00
62a4e598bd Gitea: upgrade to v1.24.0 2025-06-11 20:48:51 +03:00
b65aaa5072 Gramps: upgrade to v25.6.0 2025-06-11 20:48:27 +03:00
98b7aff274 Gramps: upgrade to v25.5.2 2025-05-24 12:04:45 +03:00
6eaf7f7390 Netdata: upgrade to 2.5.1 2025-05-21 21:24:22 +03:00
32e80282ef Update ansible roles 2025-05-17 17:17:01 +03:00
c8bd9f4ec3 Netdata: add fail2ban monitoring 2025-05-17 16:58:12 +03:00
d3d189e284 Gitea: upgrade to 1.23.8 2025-05-17 13:51:10 +03:00
71fe688ef8 Caddy: upgrade to 2.10.0 2025-05-17 13:50:47 +03:00
c5d0f96bdf Netdata + Authelia: add monitoring 2025-05-17 13:33:35 +03:00
eea8db6499 Netdata + Caddy: add monitoring for http-server 2025-05-17 11:55:38 +03:00
7893349da4 Netdata: refactoring as docker compose app 2025-05-17 10:27:41 +03:00
a4c61f94e6 Gramps: upgrade to 25.5.1 (with Gramps API 3.0.0) 2025-05-12 15:56:23 +03:00
da0a261ddd Outline: upgrade to 0.84.0 2025-05-12 12:58:21 +03:00
b9954d1bba Authelia: upgrade to 4.39.3 2025-05-12 12:55:41 +03:00
3a23c08f37 Remove keycloak 2025-05-07 12:51:05 +03:00
d1500ea373 Outline: use oidc from authelia 2025-05-07 12:37:07 +03:00
a77fefcded Authelia: introduce to protect system services 2025-05-07 11:23:22 +03:00
41fac2c4f9 Remove caddy system-wide installation 2025-05-06 12:00:32 +03:00
280ea24dea Caddy: web proxy in docker container 2025-05-06 11:50:26 +03:00
855bafee5b Format files with ansible-lint 2025-05-06 11:20:00 +03:00
adde4e32c1 Networks: create internal docker network for proxy server
Prepare to use caddy in docker
2025-05-06 11:11:48 +03:00
527067146f Gramps: refactor app
Move scripts, configs and data to separate user space
2025-05-06 10:25:38 +03:00
93326907d2 Remove unused var 2025-05-06 10:02:39 +03:00
bcad87c6e0 Remove legacy files 2025-05-05 20:57:47 +03:00
5d127d27ef Homepage: refactoring 2025-05-05 20:40:32 +03:00
2d6cb3ffe0 Format files with ansible-lint 2025-05-05 18:04:54 +03:00
e68920c0e2 Netdata as playbook 2025-05-05 18:02:14 +03:00
c5c15341b8 Outline: update to 0.83.0 2025-05-05 17:00:48 +03:00
cd4a7177d7 Outline: configure backups 2025-05-05 16:53:09 +03:00
daeef1bc4b Backups: rewrite backup script 2025-05-05 11:48:49 +03:00
ddae18f8b3 Gitea: configure backups again 2025-05-05 11:39:06 +03:00
8c8657fdd8 Gramps: configure backup again 2025-05-05 11:26:54 +03:00
c4b0200dc6 Outline: configure mailer 2025-05-04 14:02:28 +03:00
38bafd7186 Remove old configs 2025-05-04 11:12:44 +03:00
c6db39b55a Remove old playbooks and configs 2025-05-04 11:05:18 +03:00
528512e665 Refactor outline app: deploy with ansible 2025-05-04 10:59:41 +03:00
0e05d3e066 Make consistent container names 2025-05-04 10:26:17 +03:00
4221fb0009 Refactor keycloak app: deploy with ansible 2025-05-04 10:18:18 +03:00
119 changed files with 6023 additions and 946 deletions

.ansible-lint.yml
View File

@@ -1,3 +1,6 @@
 ---
 exclude_paths:
-  - 'galaxy.roles/'
+  - ".ansible/"
+  - ".gitea/"
+  - "galaxy.roles/"
+  - "Taskfile.yml"

.crushignore Normal file
View File

@@ -0,0 +1,4 @@
ansible-vault-password-file
*secrets.yml
*secrets.toml

.editorconfig
View File

@@ -6,11 +6,9 @@ insert_final_newline = true
 indent_style = space
 indent_size = 4
-[*.yml]
+[*.{yml,yaml,yml.j2}]
 indent_size = 2
+trim_trailing_whitespace = true
 [Vagrantfile]
 indent_size = 2
-[Makefile]
-indent_style = tab

.gitea/workflows/lint.yml Normal file
View File

@@ -0,0 +1,50 @@
name: Linting
on: push
jobs:
yamllint:
name: YAML Lint
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.x'
- name: Install yamllint
run: pip install yamllint
- name: Run yamllint
run: yamllint . --format colored
ansible-lint:
name: Ansible Lint
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.x'
- name: Install dependencies
run: pip install ansible ansible-lint
# Create an empty vault password file if it is referenced in the config but missing
- name: Fix vault issue
run: |
if grep -q "vault_password_file" ansible.cfg && [ ! -f ansible-vault-password-file ]; then
echo "Creating empty vault password file for CI..."
echo "foobar" > ansible-vault-password-file
fi
- name: Run ansible-lint
run: ansible-lint .

.gitignore vendored
View File

@@ -5,6 +5,7 @@
 /galaxy.roles/
 /ansible-vault-password-file
+/temp
 *.retry
-test_smtp.py
+__pycache__

.yamllint.yml Normal file
View File

@@ -0,0 +1,25 @@
extends: default
ignore:
- ".ansible/"
- "galaxy.roles/"
rules:
# Rules required by ansible-lint
comments:
min-spaces-from-content: 1
comments-indentation: false
braces:
max-spaces-inside: 1
octal-values:
forbid-implicit-octal: true
forbid-explicit-octal: true
# Additional settings (optional)
line-length:
max: 120
allow-non-breakable-words: true
allow-non-breakable-inline-mappings: true
document-start: disable # Do not require --- at the beginning of a file
truthy:
level: warning

AGENTS.md Normal file
View File

@@ -0,0 +1,69 @@
# AGENTS GUIDE
## Overview
Ansible-based server automation for personal services. Playbooks provision Dockerized apps (e.g., gitea, authelia, homepage, miniflux, wakapi, memos) via per-app users, Caddy proxy, and Yandex Docker Registry. Secrets are managed with Ansible Vault.
## Project Layout
- Playbooks: `playbook-*.yml` (per service), `playbook-all-*.yml` for grouped actions.
- Inventory: `production.yml` (ungrouped host `server`).
- Variables: `vars/*.yml` (app configs, images), secrets in `vars/secrets.yml` (vault-encrypted).
- Roles: custom roles under `roles/` (e.g., `eget`, `owner`, `secrets`) plus galaxy roles fetched to `galaxy.roles/`.
- Files/templates: service docker-compose and backup templates under `files/`, shared templates under `templates/`.
- Scripts: helper Python scripts in `scripts/` (SMTP utilities) and `files/backups/backup-all.py`.
- CI: `.gitea/workflows/lint.yml` runs yamllint and ansible-lint.
- Hooks: `lefthook.yml` references local hooks in `/home/av/projects/private/git-hooks` (gitleaks, vault check).
- Formatting: `.editorconfig` enforces LF, trailing newline, 4-space indent; YAML/Jinja use 2-space indent.
## Setup
- Copy vault password sample: `cp ansible-vault-password-file.dist ansible-vault-password-file` (needed for ansible and CI).
- Install galaxy roles: `ansible-galaxy role install --role-file requirements.yml --force` (or `task install-roles`).
- Ensure `yq`, `task`, and `ansible` are installed per the README requirements.
## Tasks (taskfile)
- `task install-roles` — install galaxy roles into `galaxy.roles/`.
- `task ssh` — SSH to target using inventory (`production.yml`).
- `task btop` — run `btop` on remote.
- `task encrypt|decrypt -- <files>` — ansible-vault helpers.
- Authelia helpers:
- `task authelia-cli -- <args>` — run authelia CLI in docker.
- `task authelia-validate-config` — render `files/authelia/configuration.template.yml` with secrets and validate via authelia docker image.
- `task authelia-gen-random-string LEN=64` — generate random string.
- `task authelia-gen-secret-and-hash LEN=72` — generate hashed secret.
- `task format-py-files` — run Black via docker (pyfound/black).
## Ansible Usage
- Inventory: `production.yml` with `server` host. `ansible.cfg` points to `./ansible-vault-password-file` and `./galaxy.roles` for roles path.
- Typical deploy example (from README): `ansible-playbook -i production.yml --diff playbook-gitea.yml`.
- Per-app playbooks: `playbook-<app>.yml`; grouped runs: `playbook-all-setup.yml`, `playbook-all-applications.yml`, `playbook-upgrade.yml`, etc.
- Secrets: encrypted `vars/secrets.yml`; additional `files/<app>/secrets.yml` used for templating (e.g., Authelia). Respect `.crushignore` ignoring vault files.
- Templates: many `docker-compose.template.yml` and `*.template.sh` files under `files/*` plus shared `templates/env.j2`. Use `vars/*.yml` to supply values.
- Custom roles:
- `roles/eget`: installs `eget` tool; see defaults/vars for version/source.
- `roles/owner`: manages user/group and env template.
- `roles/secrets`: manages vault-related items.
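To tie the bullets above together, a per-app playbook in this layout might look roughly like the sketch below; the app name, file paths, variable names, and module choices are illustrative assumptions, not code taken from the repository:

```yaml
# Minimal illustrative sketch of a per-app playbook (names are hypothetical).
- name: Deploy example application
  hosts: server
  vars_files:
    - vars/example-app.yml        # hypothetical app vars file
    - vars/secrets.yml            # vault-encrypted secrets
  roles:
    - owner                       # per-app user/group and env template
    - secrets                     # deploy secret files
  tasks:
    - name: Render docker compose file from template
      ansible.builtin.template:
        src: files/example-app/docker-compose.template.yml   # hypothetical path
        dest: "{{ app_dir }}/docker-compose.yml"              # hypothetical variable
    - name: Start the application
      community.docker.docker_compose_v2:
        project_src: "{{ app_dir }}"
      tags:
        - run-app
```

A configuration-only run could then skip starting the app with `--skip-tags run-app`, which is what the `run-app` tag introduced in this change set is for.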
## Linting & CI
- Local lint configs: `.yamllint.yml`, `.ansible-lint.yml` (excludes `.ansible/`, `.gitea/`, `galaxy.roles/`, `Taskfile.yml`).
- CI (.gitea/workflows/lint.yml) installs `yamllint` and `ansible-lint` and runs `yamllint .` then `ansible-lint .`; creates dummy vault file if missing.
- Pre-commit via lefthook (local hooks path): runs `gitleaks git --staged` and secret-file vault check script.
## Coding/Templating Conventions
- Indentation: 2 spaces for YAML/Jinja (`.editorconfig`), 4 spaces default elsewhere.
- End-of-line: LF; ensure final newline.
- Template suffixes `.template.yml`, `.yml.j2`, `.template.sh` are rendered via Ansible `template` module.
- Avoid committing real secrets; `.crushignore` excludes `ansible-vault-password-file` and `*secrets.yml`.
- Service directories under `files/` hold docker-compose and backup templates; ensure per-app users and registry settings align with `vars/*.yml`.
## Testing/Validation
- YAML lint: `yamllint .` (CI default).
- Ansible lint: `ansible-lint .` (CI default).
- Authelia config validation: `task authelia-validate-config` (renders with secrets then validates via docker).
- Black formatting for Python helpers: `task format-py-files`.
- Python types validation with mypy: `mypy <file.py>`.
## Operational Notes
- Deployments rely on `production.yml` inventory and per-app playbooks; run with `--diff` for visibility.
- Yandex Docker Registry auth helper: `files/yandex-docker-registry-auth.sh`.
- Backups: templates and scripts under `files/backups/` per service; `backup-all.py` orchestrates.
- Home network/DNS reference in README (Yandex domains).
- Ensure `ansible-vault-password-file` present for vault operations and CI.

README.md
View File

@@ -3,12 +3,11 @@
 Configuration of a virtual server for home projects.
 > The solutions in this project are not the most optimal.
-> But they have helped me maintain the server for my personal projects for seven years now.
+> But they have helped me maintain the server for my personal projects for many years now.
 ## Requirements
 - [ansible](https://docs.ansible.com/ansible/latest/getting_started/index.html)
-- [invoke](https://www.pyinvoke.org/)
 - [task](https://taskfile.dev/)
 - [yq](https://github.com/mikefarah/yq)
@@ -21,7 +20,7 @@ $ ansible-galaxy install --role-file requirements.yml
 ## Structure
-- A separate user is created for each application.
+- A separate user is created for each application (optional).
 - An SSH key is used for access.
 - Docker is used to run and isolate applications; Yandex Docker Registry is configured for pulling images.
 - Access to the external network goes through the proxy server [Caddy](https://caddyserver.com/).
@@ -32,30 +31,10 @@
 In the Yandex organization: https://admin.yandex.ru/domains/vakhrushev.me?action=set_dns&uid=46045840
-## Frequent commands
-Application configuration (if a new application needs to be added):
-```bash
-$ task configure-apps
-```
-Monitoring configuration (if netdata needs to be updated):
-```bash
-$ task configure-monitoring
-```
 ## Application deployment
-Applications available for deployment:
+Deployment of all applications via ansible:
 ```bash
-invoke --list
+ansible-playbook -i production.yml --diff playbook-gitea.yml
 ```
-Run the deploy command, for example:
-```bash
-invoke deploy:gitea
-```

Taskfile.yml
View File

@@ -12,8 +12,13 @@ vars:
     sh: 'yq .ungrouped.hosts.server.ansible_user {{.HOSTS_FILE}}'
   REMOTE_HOST:
     sh: 'yq .ungrouped.hosts.server.ansible_host {{.HOSTS_FILE}}'
+  AUTHELIA_DOCKER: 'docker run --rm -v $PWD:/data authelia/authelia:4.39.4 authelia'
 tasks:
+  install-roles:
+    cmds:
+      - ansible-galaxy role install --role-file requirements.yml --force
   ssh:
     cmds:
       - ssh {{.REMOTE_USER}}@{{.REMOTE_HOST}}
@@ -22,13 +27,52 @@
     cmds:
       - ssh {{.REMOTE_USER}}@{{.REMOTE_HOST}} -t btop
-  vars-decrypt:
+  encrypt:
     cmds:
-      - ansible-vault decrypt vars/vars.yml
+      - ansible-vault encrypt {{.CLI_ARGS}}
-  vars-encrypt:
+  decrypt:
     cmds:
-      - ansible-vault encrypt vars/vars.yml
+      - ansible-vault decrypt {{.CLI_ARGS}}
authelia-cli:
cmds:
- "{{.AUTHELIA_DOCKER}} {{.CLI_ARGS}}"
authelia-validate-config:
vars:
DEST_FILE: "temp/configuration.yml"
cmds:
- >
ansible localhost
--module-name template
--args "src=files/authelia/configuration.template.yml dest={{.DEST_FILE}}"
--extra-vars "@vars/secrets.yml"
--extra-vars "@files/authelia/secrets.yml"
- defer: rm -f {{.DEST_FILE}}
- >
{{.AUTHELIA_DOCKER}}
validate-config --config /data/{{.DEST_FILE}}
authelia-gen-random-string:
summary: |
Generate random string.
Usage example:
task authelia-gen-random-string LEN=64
vars:
LEN: '{{ .LEN | default 10 }}'
cmds:
- >
{{.AUTHELIA_DOCKER}}
crypto rand --length {{.LEN}} --charset alphanumeric
authelia-gen-secret-and-hash:
vars:
LEN: '{{ .LEN | default 72 }}'
cmds:
- >
{{.AUTHELIA_DOCKER}}
crypto hash generate pbkdf2 --variant sha512 --random --random.length {{.LEN}} --random.charset rfc3986
format-py-files:
cmds:

Vagrantfile vendored
View File

@@ -1,28 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
# This file is intended for running a test virtual machine
# on which the server configuration roles can be tried out.
ENV["LC_ALL"] = "en_US.UTF-8"
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/bionic64"
config.vm.provider "virtualbox" do |v|
v.memory = 2048
v.cpus = 2
end
config.vm.network "private_network", ip: "192.168.50.10"
# Private key for accessing the machine
config.vm.provision "shell" do |s|
ssh_pub_key = File.readlines("#{Dir.home}/.ssh/id_rsa.pub").first.strip
s.inline = <<-SHELL
echo #{ssh_pub_key} >> /home/vagrant/.ssh/authorized_keys
echo #{ssh_pub_key} >> /root/.ssh/authorized_keys
SHELL
end
end

View File

@@ -1,5 +0,0 @@
WEB_SERVER_PORT=9595
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=password
USER_UID=1000
USER_GID=1000

View File

@@ -1 +0,0 @@
data/

View File

@@ -1,22 +0,0 @@
# Images: https://quay.io/repository/keycloak/keycloak?tab=tags&tag=latest
# Configuration: https://www.keycloak.org/server/all-config
# NB
# - In production there were permission problems with the data directory; it had to be set to 777
# - The KC_HOSTNAME_ADMIN_URL variable must be set together with KC_HOSTNAME_URL, otherwise there will be 403 errors
services:
keycloak:
image: quay.io/keycloak/keycloak:24.0.4
command: ["start-dev"]
restart: unless-stopped
environment:
KEYCLOAK_ADMIN: "${KEYCLOAK_ADMIN}"
KEYCLOAK_ADMIN_PASSWORD: "${KEYCLOAK_ADMIN_PASSWORD}"
KC_HOSTNAME_URL: "https://kk.vakhrushev.me"
KC_HOSTNAME_ADMIN_URL: "https://kk.vakhrushev.me"
ports:
- "${WEB_SERVER_PORT}:8080"
volumes:
- "./data:/opt/keycloak/data"

View File

@@ -1,16 +0,0 @@
# Images: https://quay.io/repository/keycloak/keycloak?tab=tags&tag=latest
# Configuration: https://www.keycloak.org/server/all-config
services:
keycloak:
image: quay.io/keycloak/keycloak:24.0.4
command: ["start-dev"]
restart: unless-stopped
environment:
KEYCLOAK_ADMIN: "${KEYCLOAK_ADMIN}"
KEYCLOAK_ADMIN_PASSWORD: "${KEYCLOAK_ADMIN_PASSWORD}"
ports:
- "${WEB_SERVER_PORT}:8080"
volumes:
- "./data:/opt/keycloak/data"

View File

@@ -1,60 +0,0 @@
services:
outline-app:
image: outlinewiki/outline:0.81.1
restart: unless-stopped
ports:
- "${WEB_SERVER_PORT}:3000"
depends_on:
- postgres
- redis
environment:
NODE_ENV: '${NODE_ENV}'
SECRET_KEY: '${SECRET_KEY}'
UTILS_SECRET: '${UTILS_SECRET}'
DATABASE_URL: '${DATABASE_URL}'
PGSSLMODE: '${PGSSLMODE}'
REDIS_URL: '${REDIS_URL}'
URL: '${URL}'
FILE_STORAGE: '${FILE_STORAGE}'
FILE_STORAGE_UPLOAD_MAX_SIZE: '262144000'
AWS_ACCESS_KEY_ID: '${AWS_ACCESS_KEY_ID}'
AWS_SECRET_ACCESS_KEY: '${AWS_SECRET_ACCESS_KEY}'
AWS_REGION: '${AWS_REGION}'
AWS_S3_ACCELERATE_URL: '${AWS_S3_ACCELERATE_URL}'
AWS_S3_UPLOAD_BUCKET_URL: '${AWS_S3_UPLOAD_BUCKET_URL}'
AWS_S3_UPLOAD_BUCKET_NAME: '${AWS_S3_UPLOAD_BUCKET_NAME}'
AWS_S3_FORCE_PATH_STYLE: '${AWS_S3_FORCE_PATH_STYLE}'
AWS_S3_ACL: '${AWS_S3_ACL}'
OIDC_CLIENT_ID: '${OIDC_CLIENT_ID}'
OIDC_CLIENT_SECRET: '${OIDC_CLIENT_SECRET}'
OIDC_AUTH_URI: '${OIDC_AUTH_URI}'
OIDC_TOKEN_URI: '${OIDC_TOKEN_URI}'
OIDC_USERINFO_URI: '${OIDC_USERINFO_URI}'
OIDC_LOGOUT_URI: '${OIDC_LOGOUT_URI}'
OIDC_USERNAME_CLAIM: '${OIDC_USERNAME_CLAIM}'
OIDC_DISPLAY_NAME: '${OIDC_DISPLAY_NAME}'
redis:
image: redis:7.2-bookworm
restart: unless-stopped
ports:
- "6379:6379"
volumes:
- ./redis.conf:/redis.conf
command: ["redis-server", "/redis.conf"]
postgres:
image: postgres:16.3-bookworm
restart: unless-stopped
ports:
- "5432:5432"
volumes:
- ./data/postgres:/var/lib/postgresql/data
environment:
POSTGRES_USER: '${POSTGRES_USER}'
POSTGRES_PASSWORD: '${POSTGRES_PASSWORD}'
POSTGRES_DB: '${POSTGRES_DB}'
volumes:
database-data:

View File

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
echo "{{ app_name }}: backup data with gobackups"
(cd "{{ base_dir }}" && gobackup perform --config "{{ gobackup_config }}")
echo "{{ app_name }}: done."

File diff suppressed because it is too large

View File

@@ -0,0 +1,26 @@
services:
authelia_app:
container_name: 'authelia_app'
image: 'docker.io/authelia/authelia:4.39.14'
user: '{{ user_create_result.uid }}:{{ user_create_result.group }}'
restart: 'unless-stopped'
networks:
- "web_proxy_network"
- "monitoring_network"
volumes:
- "{{ config_dir }}:/config"
- "{{ data_dir }}:/data"
authelia_redis:
image: valkey/valkey:9.0-alpine
container_name: authelia_redis
restart: unless-stopped
networks:
- "monitoring_network"
networks:
web_proxy_network:
external: true
monitoring_network:
external: true

View File

@@ -0,0 +1,16 @@
# https://gobackup.github.io/configuration
models:
authelia:
compress_with:
type: 'tgz'
storages:
local:
type: 'local'
path: '{{ backups_dir }}'
keep: 3
databases:
users:
type: sqlite
path: "{{ (data_dir, 'authelia_storage.sqlite3') | path_join }}"

files/authelia/secrets.yml Normal file
View File

@@ -0,0 +1,147 @@
$ANSIBLE_VAULT;1.1;AES256
65666236353432623164363161633732363265343935316466326332323638626138633130666663
3566393731353063393466663831623537393939663838640a303935383565376430646431613464
66363161373765663665316433393835323236363561663633623533363166323533623137646437
3438383538613832660a373337323633666535656439383438653132646235623834643730323762
64656336323037653937393836623061343563303337633830346164613731633761376138313666
33343265343830306139386164353861633230326333616539313032323935313435623762316163
63353535663533656166333034636462663266343061626637613830313962333531396339393334
39336233623462303137343264373631396464303038346131633333333033343839393438653330
61646166313762633333353233366430333764646261356438336532653463646264643562346262
32653339353861653964663935393638656563373239633234656131633133643366653833376233
34643033393039616363656330323765346336643664393934376138353266303731613066626231
30313362663430343337663663373132383831333662353034316134623639393938306237396230
65633565643861366434336135313465623537616166356637346137323566333239376361633930
66356664633433306134613166353032363234633961623862646537376365383535623033356639
34336433646430323630313464333631663739343832316362666165623039313535306433643761
61376264616462333839326233613764333633643233663236666562306666353934323636353262
32386333623632326332373438373236646138326537343835383766663463643535383534633633
34343166343362343165333837313362666533353939336533326138386538323961323530626663
63633334663561396565656662363830656133323364383536346163316333356661643261633635
37323536653664613564616466633631616465656266373830373964326566623733396130303031
31663166323863353565353235323235363161366534663539343338633434626232336261356438
38373339383361356565366436376266336234356436363562623431666436626562353065356230
37333737623161303431393461383438643137613962613838386537663564646333356538393337
34373136343361333738396664616436363066323030306438666437333839653336316264343139
38656234623363386561393961363262316535373232353132616136633339613533633333376235
35656632643363386133656438616366643630303337383130386330326532623930346561383739
65636138313566363539323663303631663862323034313062353633646638633463653263333035
66646539393639643834353966396132343437666435626537633336393864326630623961303338
36616663383262363138346331386239653634373135616232656462353562373939356631633433
64356461653336353039643065663538373363666339363231373762363163653762363832373238
37386638373261383533633937613561613961353765363864613031356334303138613766383631
36666433636663343931643462383338613662333233666337363038666531633864333436383737
36363562323936313764386336656639633638643465636131373532313238376565353933336133
37663961623937336230636466643531316463323733626134663331323135396637323231383231
33643736316537646264353662643261666165366562393031613565623630336333353961353661
62633362613863303936623436616136616139363861653233313865343532366465373937306139
61316665653234363033396566316331316164346461353438633864333334653730333065376631
66616238663062666139653062383036636366646364346632396239623233356533343038363733
33316165306130616665326364616231613830313334383961633333303261656131333161323237
38616435633334646533653830393739336363653664373235363863396262623736626435313735
39353065643033343062616137346361646136313265653965313133666130393361306430303638
64643364663335343961373865653564366362396138626531613232313461376463336437336666
30373766666231646264356663626335393233333465386164313630613137303066336430653662
35336531633039633938363430363239653065356230313538323630316561643033623833656164
65366435653063616361366666373561663538373363346264386331316531376262663663383266
38346163653439366430656536666631366534663163396230353531396335663638386261613832
31336339336465393333383761623663383563613930663430626166666635393164366562663063
34323031333939656161643139386532666361663630346632383333373261326134636564393233
38363630633435353730383234663536623166373533333639353963613665353339383837626138
38653730316538626662303636363664613566383033323661363533333032306362346562316464
36303730666531396531386331653466396233623138393763653965323239393237636237303237
66323366343036643765646539366261613062646532306265353430636332386330613962666131
33383063343638616338326533646162306438616434316139313433303636366665336364656534
66353634646563373463633637383766343332346530653033663937613135363233386138306565
39386262663939346432383134333661623637396162623336626137316166613035333138653632
34653364333732366231396637353939653262323934366333373130613932366533346632366164
31313663303034306436393763323361616434306134336231383639346261376439643162643539
35323534366435393531613665333337633365353831326534363737396463363666316639373233
38386431336631363831366261373439376231326465323736356331636136393762383331336265
64356363336639356361326539396234626566653334656561386431616139323433376563303132
32363864663062643065643534623933653165636264653461303262313662663165616463353965
36613134343634393066613533343362393137356464303530653964363031653231663962303037
38633730633766306264613865373736346661623531616563316232393235643931353663383066
39386633383930383732343266326161303663646636383735646332623661303433366161363635
31313839396539343063613166616134636134333639363362343566356435663934646263653061
31616137363031333561656134396534333430613637376465363633663861666262616137336337
38396639333465316431303433653338653338303031313566633330656535666531316235393138
64383332323466653065343765346162343532386438326362626637656130613433393339323564
39633262393336383932306632636563313663336337653164306434393661663265306638636265
37326335616666393262356663326665663561633038333237363234303838636135653861373032
62643332376364386537326336663531613164376261653938333165656632613434343063613565
62623262303632633335303430343164343433646238366137623030323233303661373434666234
62626265623239343634363732623739383536646132656564313032663061383939616162366538
61373738626362386265623739333139613531373738333862396430356635626130663633666439
63333836346364313262613331656531623831646165313036393138636162336138613365396331
63323333366232326365333965363734396565343334613733396437393932616637313738633765
61373437313636666566303032343366376166643864333639326564363935623534386164346365
65373538303765386337383964303937343038663832326665356666646336633835653530396337
63316631333231623861373330303033396138613834623265633263653061623132366235336661
31376633396534613161663462343365653262383133636238613166343366653464636137353165
33623539366339353064373238663838303237343737376131306532333134313561323234336530
64653335653262613738623335343361386261633636616663393035633066653735346431643663
63626331633337363231313835376239636136633262393463643539343333373139616330373634
63366439383232363165356166656664633133313533353236393637386535616536613630616137
37316538653839353932363264663934613936346661343835333666303536363332363234653262
33303530343764316462336634613661663532393864626437353764656564343435613131663339
31616236376563623762326566333933303432646465613138373733363263346337633165616563
61613836363231333966336165393961313930313934323536333334363763393438636138396361
61373264626531393165376538336533303861643663333439653732313337613362346162373931
35333330653531623134396134333938616538666661363737633639643462313034356531633033
37613834356563303639356134323231646461356262636237323061386339306462343035336164
63376664386663306135333635323030396639326639656131333564353265316631336261373562
35663639306361373433633530616162636434373533333263303936343539386439303130366439
37656263363839363339333236333835386537343232636461343338356234616332383330373161
66646533393037353030616264623461626339613538306133393337623264326535343836363165
39626263396162336434323437323133653735663136616266376331633665626234386131393433
34663236623636656237666531353763333861646264646538313964663130353836396236336564
31333132396535653064366430306464363634633032383666313738396265343335663664613662
34626331303834346637383932633832343962666633323838383132323239363965313139373762
36306266313337393235303834356435336138373437306661636535333836396366386336613937
37386136623965373439373433353264393363653534306662316132643761353138303538353037
62316636323338353633313461346161343461613465653463393931363661336638346435666134
38303533633530383466623766393138653564373065373261656165313763663361373235356531
66346663343636633961656639316365396265303632626465666532313338373336376135366138
61303865356436616139373064393939356364623461316266316537363963396263613562613363
37386466633934623062333634303335646232613039633839336365613634613561373436393733
61316162623063636365346236323164333933643662366463313138623561633533653932663065
64363734636663613831663630666432396237303630623234376432316532626165373464333134
30646433646438373961333230343430383232316431313465646136643139353937313761333731
66653335343935636530393361306162616232333935393135623235626238323238303339643863
35373366353365613965656562633633303330383631383736306535366137393638313330343636
35383039313866303239623564666461633161373231303534313466353137643666396133363265
36373133323162666363303862313566613132333739333164663166666565303032306633636632
63366539353663666162633832373264306135646266656563373433376438616530626530393131
36376630306539303865613639643538333134666533643261356662386433626265613431613334
35353634663930653537326666653763393831363637613639653862323730633266323234656662
35666433623739323435626536303561386332313838383432303437633731666435393331353139
32376638343939626237346363623236653639356234633366323464663763366339663536386162
62623831336165393630363263396466643563383330353232373435613364316538613835313332
30623136316437333462333864663164356539656436666539313536653065346337353565656138
30663737333035336137313034626339613631396237326163366364366634346438643831376166
33616238643033313662353031306534666161636133653531303932633231326139326161356564
65323231663562323430633561363838663030346432313930323165313835616230663463316161
63326364376461643035333564643964303030306131396233333439393131336435323134663064
31306162383766333636386633393863663035376562633965666635353939653936626631346534
64656462393335653332646562373361613132643034653536303435343833626433613137346232
65643465323733336162366261336636326136616532343939613363663537336365363966373437
62626566303435356237333238353736383262313933656139326634663934343864373131646461
62653662336162313739663961636430346130666130646364393034636464616362636533353262
62346162386561363239666232306432646336346434636335366638633762666634623737663866
32343339646330353837323665626430646432316163656361323139633336646363643434363731
36616534663330376333623631313332616462383936316238363032363762356531393332343430
34613735633732333762306331666332316265333962343935613936613438353164623031346432
66396264366666383835333538616430326161653839663838313764663664316266623762653463
31383235326363646234306636636564366238663965663331623965373139353064363733363339
34383937303033386566633939366331353333373935353263376235623430336236396135303233
61666434613136396338656334306463666535373364373130343161373866333339333436643036
64346162353530353334343438623835363664396265353762663832303366623735636238313039
37353136626563393231353662633031343435636261616131323833613062643834663634353537
30343430396135363466346236373462356366643539363665373663363932316163346665663935
61343434303039623139323265623538366563373633623065353862303935393434663566303232
37653033343230613766306334316464666533326566386633363835373466326263323861636635
36396437373161346636616664313734343565643330376431633238396462633764386531313165
62666538353239396361653035393636633263613639623038383734326564346261666335383234
36333663303365313066616533333336306639363239663339313766356431316562353836303464
3534373436363831616163383134383266636130316433633635

View File

@@ -0,0 +1,37 @@
$ANSIBLE_VAULT;1.1;AES256
33323463653739626134366261626263396338333966376262313263613131343962326432613263
6430616564313432666436376432383539626231616438330a646161313364353566373833353337
64633361306564646564663736663937303435356332316432666135353863393439663235646462
3136303031383835390a396531366636386133656366653835633833633733326561383066656464
31613933333731643065316130303561383563626636346633396266346332653234373732326535
39663765353938333835646563663633393835633163323435303164663261303661666435306239
34353264633736383565306336633565376436646536623835613330393466363935303031346664
63626465656435383162633761333131393934666632336539386435613362353135383538643836
66373261306139353134393839333539366531393163393266386531613732366431663865343134
64363933616338663966353431396133316561653366396130653232636561343739336265386339
38646238653436663531633465616164303633356233363433623038666465326339656238653233
36323162303233633935646132353835336364303833636563346535316166346533636536656665
64323030616665316133363739393364306462316135636630613262646436643062373138656431
35663334616239623534383564643738616264373762663034376332323637626337306639653830
65386339666465343931303933663561643664313364386662656663643336636264636333666435
66366531613538363233346137383462326334306534333564636232393931393433386664363036
39623134636331646536323531653063326231613363366562643561353939633062663132303035
38303265326136303633666566613966636133666336396133333033643434303138303065666463
36643765316134636133333937396332613233383932663265386264623133633364646237346465
32623965653662336335366639643765393636623236323036396538353666646132393636663536
65646638643236313762373135336430643731643961386264303134366633353934366431333430
34313362633836613166336437323835626537653237666139383230663835626630623933383834
32636136663830643661363663303136393733646133626538333836666135653936323832336433
64396234396430326334656561393264366263313730306631383037643135613765373861356561
37363933383238316232336564363364376637626630373963666262376165343838303530653764
64343937666365646666363939383662313334656236326566373565643637313434616261616635
35646131396432623534396133666239613036386332663038353531313935636139363136666562
62616234663935383262626235313337623332333733383035666633393965336535316234323561
37353563623138343339616565653465633633383563636631356333303435376536393634343031
63653062303432366230643333353634383061313135616533643935316263393366653335353964
36363135356365373064613338393261326265396330323930613538326330663532616163666564
39313631633434353938626637626462376139383536306531633733646331303030333238373161
36336364383939663132366461383264346631366566363638333738386235623264623331343738
34316436393363323165396430343163653837623035626236313663643038336666633535666462
33323566353062653964643362363233346264396365336637376661323730336437333031363830
38303962646561346262

files/backups/backup-all.py Normal file
View File

@@ -0,0 +1,488 @@
#!/usr/bin/env python3
"""
Backup script for all applications
Automatically discovers and runs backup scripts for all users,
then creates restic backups and sends notifications.
"""
import itertools
import os
import sys
import subprocess
import logging
import pwd
from abc import ABC
from dataclasses import dataclass
from pathlib import Path
from typing import Dict, List, Optional, Any
import requests
import tomllib
# Default config path
CONFIG_PATH = Path("/etc/backup/config.toml")
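# For reference, a plausible shape for config.toml, inferred only from the field
# names used by Config, ResticStorage and TelegramNotifier below; the exact table
# layout is an assumption, not taken from this script:
#
#   host_name = "my-server"
#   roots = ["/mnt/applications"]
#
#   [storages.main]
#   type = "restic"
#   restic_repository = "s3:https://s3.example.com/my-backups"
#   restic_password = "..."
#   aws_access_key_id = "..."
#   aws_secret_access_key = "..."
#   aws_default_region = "..."
#
#   [notifiers.main]
#   type = "telegram"
#   telegram_bot_token = "..."
#   telegram_chat_id = "..."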
# File name to store directories and files to back up
BACKUP_TARGETS_FILE = "backup-targets"
# Default directory for backups (relative to app dir)
# Used when the backup-targets file does not exist
BACKUP_DEFAULT_DIR = "backups"
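# A backup-targets file lists one path per line; blank lines and lines starting
# with "#" are skipped, and relative paths are resolved against the application
# directory (see get_backup_directories below). Illustrative example:
#
#   # application data
#   backups
#   data/media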
# Configure logging
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(levelname)s - %(message)s",
handlers=[
logging.StreamHandler(sys.stdout),
logging.FileHandler("/var/log/backup-all.log"),
],
)
logger = logging.getLogger(__name__)
@dataclass
class Config:
host_name: str
roots: List[Path]
@dataclass
class Application:
path: Path
owner: str
class Storage(ABC):
def backup(self, backup_dirs: List[str]) -> bool:
"""Backup directories"""
raise NotImplementedError()
class ResticStorage(Storage):
TYPE_NAME = "restic"
def __init__(self, name: str, params: Dict[str, Any]):
self.name = name
self.restic_repository = str(params.get("restic_repository", ""))
self.restic_password = str(params.get("restic_password", ""))
self.aws_access_key_id = str(params.get("aws_access_key_id", ""))
self.aws_secret_access_key = str(params.get("aws_secret_access_key", ""))
self.aws_default_region = str(params.get("aws_default_region", ""))
if not all(
[
self.restic_repository,
self.restic_password,
self.aws_access_key_id,
self.aws_secret_access_key,
self.aws_default_region,
]
):
raise ValueError(
f"Missing storage configuration values for backend ResticStorage: '{self.name}'"
)
def backup(self, backup_dirs: List[str]) -> bool:
if not backup_dirs:
logger.warning("No backup directories found")
return True
try:
return self.__backup_internal(backup_dirs)
except Exception as exc: # noqa: BLE001
logger.error("Restic backup process failed: %s", exc)
return False
def __backup_internal(self, backup_dirs: List[str]) -> bool:
logger.info("Starting restic backup")
logger.info("Destination: %s", self.restic_repository)
env = os.environ.copy()
env.update(
{
"RESTIC_REPOSITORY": self.restic_repository,
"RESTIC_PASSWORD": self.restic_password,
"AWS_ACCESS_KEY_ID": self.aws_access_key_id,
"AWS_SECRET_ACCESS_KEY": self.aws_secret_access_key,
"AWS_DEFAULT_REGION": self.aws_default_region,
}
)
backup_cmd = ["restic", "backup", "--verbose"] + backup_dirs
result = subprocess.run(backup_cmd, env=env, capture_output=True, text=True)
if result.returncode != 0:
logger.error("Restic backup failed: %s", result.stderr)
return False
logger.info("Restic backup completed successfully")
check_cmd = ["restic", "check"]
result = subprocess.run(check_cmd, env=env, capture_output=True, text=True)
if result.returncode != 0:
logger.error("Restic check failed: %s", result.stderr)
return False
logger.info("Restic check completed successfully")
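# Retention policy: keep 90 daily and 36 monthly snapshots; --prune removes
# data no longer referenced by any remaining snapshot.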
forget_cmd = [
"restic",
"forget",
"--compact",
"--prune",
"--keep-daily",
"90",
"--keep-monthly",
"36",
]
result = subprocess.run(forget_cmd, env=env, capture_output=True, text=True)
if result.returncode != 0:
logger.error("Restic forget/prune failed: %s", result.stderr)
return False
logger.info("Restic forget/prune completed successfully")
result = subprocess.run(check_cmd, env=env, capture_output=True, text=True)
if result.returncode != 0:
logger.error("Final restic check failed: %s", result.stderr)
return False
logger.info("Final restic check completed successfully")
return True
class Notifier(ABC):
def send(self, html_message: str):
raise NotImplementedError()
class TelegramNotifier(Notifier):
TYPE_NAME = "telegram"
def __init__(self, name: str, params: Dict[str, Any]):
self.name = name
self.telegram_bot_token = str(params.get("telegram_bot_token", ""))
self.telegram_chat_id = str(params.get("telegram_chat_id", ""))
if not all(
[
self.telegram_bot_token,
self.telegram_chat_id,
]
):
raise ValueError(
f"Missing notification configuration values for backend {name}"
)
def send(self, html_message: str):
url = f"https://api.telegram.org/bot{self.telegram_bot_token}/sendMessage"
data = {
"chat_id": self.telegram_chat_id,
"parse_mode": "HTML",
"text": html_message,
}
response = requests.post(url, data=data, timeout=30)
if response.status_code == 200:
logger.info("Telegram notification sent successfully")
else:
logger.error(
f"Failed to send Telegram notification: {response.status_code} - {response.text}"
)
class BackupManager:
def __init__(
self,
config: Config,
roots: List[Path],
storages: List[Storage],
notifiers: List[Notifier],
):
self.errors: List[str] = []
self.warnings: List[str] = []
self.successful_backups: List[str] = []
self.config = config
self.roots: List[Path] = roots
self.storages = storages
self.notifiers = notifiers
def find_applications(self) -> List[Application]:
"""Get all application directories and their owners."""
applications: List[Application] = []
source_dirs = itertools.chain(*(root.iterdir() for root in self.roots))
for app_dir in source_dirs:
if "lost+found" in str(app_dir):
continue
if app_dir.is_dir():
try:
stat_info = app_dir.stat()
owner = pwd.getpwuid(stat_info.st_uid).pw_name
applications.append(Application(path=app_dir, owner=owner))
except (KeyError, OSError) as e:
logger.warning(f"Could not get owner for {app_dir}: {e}")
return applications
def find_backup_script(self, app_dir: str) -> Optional[str]:
"""Find backup script in user's home directory"""
possible_scripts = [
os.path.join(app_dir, "backup.sh"),
os.path.join(app_dir, "backup"),
]
for script_path in possible_scripts:
if os.path.exists(script_path):
# Check if file is executable
if os.access(script_path, os.X_OK):
return script_path
else:
logger.warning(
f"Backup script {script_path} exists but is not executable"
)
return None
def run_app_backup(self, script_path: str, app_dir: str, username: str) -> bool:
"""Run backup script as the specified user"""
try:
logger.info(f"Running backup script {script_path} (user {username})")
# Use su to run the script as the user
cmd = ["su", "--login", username, "--command", script_path]
result = subprocess.run(
cmd,
cwd=app_dir,
capture_output=True,
text=True,
timeout=3600, # 1 hour timeout
)
if result.returncode == 0:
logger.info(f"Backup script for {username} completed successfully")
self.successful_backups.append(username)
return True
else:
error_msg = f"Backup script {script_path} failed with return code {result.returncode}"
if result.stderr:
error_msg += f": {result.stderr}"
logger.error(error_msg)
self.errors.append(f"App {username}: {error_msg}")
return False
except subprocess.TimeoutExpired:
error_msg = f"Backup script {script_path} timed out"
logger.error(error_msg)
self.errors.append(f"App {username}: {error_msg}")
return False
except Exception as e:
error_msg = f"Failed to run backup script {script_path}: {str(e)}"
logger.error(error_msg)
self.errors.append(f"App {username}: {error_msg}")
return False
def get_backup_directories(self) -> List[str]:
"""Collect backup targets according to backup-targets rules"""
backup_dirs: List[str] = []
applications = self.find_applications()
def parse_targets_file(targets_file: Path) -> List[str]:
"""Parse backup-targets file, skipping comments and empty lines."""
targets: List[str] = []
try:
for raw_line in targets_file.read_text(encoding="utf-8").splitlines():
line = raw_line.strip()
if not line or line.startswith("#"):
continue
targets.append(line)
except OSError as e:
warning_msg = f"Could not read backup targets file {targets_file}: {e}"
logger.warning(warning_msg)
self.warnings.append(warning_msg)
return targets
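        # Hypothetical example of a backup-targets file (illustration only, not
        # taken from the deployed config): one target per line, '#' comments and
        # blank lines are skipped, relative paths resolve against the app dir.
        #
        #   # keep database dumps and uploaded media
        #   backups
        #   data/uploads
        #   /var/lib/someapp/dump.sql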
for app in applications:
app_dir = app.path
targets_file = app_dir / BACKUP_TARGETS_FILE
resolved_targets: List[Path] = []
if targets_file.exists():
# Read custom targets defined by the application.
for target_line in parse_targets_file(targets_file):
target_path = Path(target_line)
if not target_path.is_absolute():
target_path = (app_dir / target_path).resolve()
else:
target_path = target_path.resolve()
if target_path.exists():
resolved_targets.append(target_path)
else:
warning_msg = (
f"Backup target does not exist for {app_dir}: {target_path}"
)
logger.warning(warning_msg)
self.warnings.append(warning_msg)
else:
# Fallback to default backups directory when no list is provided.
default_target = (app_dir / BACKUP_DEFAULT_DIR).resolve()
if default_target.exists():
resolved_targets.append(default_target)
else:
warning_msg = f"Default backup path does not exist for {app_dir}: {default_target}"
logger.warning(warning_msg)
self.warnings.append(warning_msg)
for target in resolved_targets:
target_str = str(target)
if target_str not in backup_dirs:
backup_dirs.append(target_str)
return backup_dirs
def send_notification(self, success: bool) -> None:
"""Send notification to Notifiers"""
if success and not self.errors:
message = f"<b>{self.config.host_name}</b>: бекап успешно завершен!"
if self.successful_backups:
message += f"\n\nУспешные бекапы: {', '.join(self.successful_backups)}"
else:
message = f"<b>{self.config.host_name}</b>: бекап завершен с ошибками!"
if self.successful_backups:
message += (
f"\n\n✅ Успешные бекапы: {', '.join(self.successful_backups)}"
)
if self.warnings:
message += f"\n\n⚠️ Предупреждения:\n" + "\n".join(self.warnings)
if self.errors:
message += f"\n\n❌ Ошибки:\n" + "\n".join(self.errors)
for notificator in self.notifiers:
try:
notificator.send(message)
except Exception as e:
logger.error(f"Failed to send notification: {str(e)}")
def run_backup_process(self) -> bool:
"""Main backup process"""
logger.info("Starting backup process")
# Get all home directories
applications = self.find_applications()
logger.info(f"Found {len(applications)} application directories")
# Process each user's backup
for app in applications:
app_dir = str(app.path)
username = app.owner
logger.info(f"Processing backup for app: {app_dir} (user {username})")
# Find backup script
backup_script = self.find_backup_script(app_dir)
if backup_script is None:
warning_msg = (
f"No backup script found for app: {app_dir} (user {username})"
)
logger.warning(warning_msg)
self.warnings.append(warning_msg)
continue
self.run_app_backup(backup_script, app_dir, username)
# Get backup directories
backup_dirs = self.get_backup_directories()
logger.info(f"Found backup directories: {backup_dirs}")
overall_success = True
for storage in self.storages:
backup_result = storage.backup(backup_dirs)
if not backup_result:
self.errors.append("Restic backup failed")
# Determine overall success
overall_success = overall_success and backup_result
# Send notification
self.send_notification(overall_success)
logger.info("Backup process completed")
if self.errors:
logger.error(f"Backup completed with {len(self.errors)} errors")
return False
elif self.warnings:
logger.warning(f"Backup completed with {len(self.warnings)} warnings")
return True
else:
logger.info("Backup completed successfully")
return True
def initialize(config_path: Path) -> BackupManager:
try:
with config_path.open("rb") as config_file:
raw_config = tomllib.load(config_file)
except OSError as e:
logger.error(f"Failed to read config file {config_path}: {e}")
raise
host_name = str(raw_config.get("host_name", "unknown"))
roots_raw = raw_config.get("roots") or []
if not isinstance(roots_raw, list) or not roots_raw:
raise ValueError("roots must be a non-empty list of paths in config.toml")
roots = [Path(root) for root in roots_raw]
storage_raw = raw_config.get("storage") or {}
storages: List[Storage] = []
for name, params in storage_raw.items():
if not isinstance(params, dict):
raise ValueError(f"Storage config for {name} must be a table")
storage_type = params.get("type", "")
if storage_type == ResticStorage.TYPE_NAME:
storages.append(ResticStorage(name, params))
if not storages:
raise ValueError("At least one storage backend must be configured")
notifications_raw = raw_config.get("notifier") or {}
notifiers: List[Notifier] = []
for name, params in notifications_raw.items():
if not isinstance(params, dict):
raise ValueError(f"Notificator config for {name} must be a table")
notifier_type = params.get("type", "")
if notifier_type == TelegramNotifier.TYPE_NAME:
notifiers.append(TelegramNotifier(name, params))
if not notifiers:
raise ValueError("At least one notification backend must be configured")
config = Config(host_name=host_name, roots=roots)
return BackupManager(
config=config, roots=roots, storages=storages, notifiers=notifiers
)
def main():
try:
backup_manager = initialize(CONFIG_PATH)
success = backup_manager.run_backup_process()
if not success:
sys.exit(1)
except KeyboardInterrupt:
logger.info("Backup process interrupted by user")
sys.exit(130)
except Exception as e:
logger.error(f"Unexpected error in backup process: {str(e)}")
sys.exit(1)
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,18 @@
host_name = "{{ notifications_name }}"
roots = [
"{{ application_dir }}"
]
[storage.yandex_cloud_s3]
type = "restic"
restic_repository = "{{ restic_repository }}"
restic_password = "{{ restic_password }}"
aws_access_key_id = "{{ restic_s3_access_key }}"
aws_secret_access_key = "{{ restic_s3_access_secret }}"
aws_default_region = "{{ restic_s3_region }}"
[notifier.server_notifications_channel]
type = "telegram"
telegram_bot_token = "{{ notifications_tg_bot_token }}"
telegram_chat_id = "{{ notifications_tg_chat_id }}"

View File

@@ -1,32 +0,0 @@
# https://gobackup.github.io/configuration
models:
gramps:
compress_with:
type: 'tgz'
storages:
local:
type: 'local'
path: '{{ (backup_directory, "gramps") | path_join }}'
keep: 2
databases:
users:
type: sqlite
path: /home/major/applications/gramps/data/gramps_users/users.sqlite
search_index:
type: sqlite
path: /home/major/applications/gramps/data/gramps_index/search_index.db
sqlite:
type: sqlite
path: /home/major/applications/gramps/data/gramps_db/59a0f3d6-1c3d-4410-8c1d-1c9c6689659f/sqlite.db
undo:
type: sqlite
path: /home/major/applications/gramps/data/gramps_db/59a0f3d6-1c3d-4410-8c1d-1c9c6689659f/undo.db
archive:
includes:
- /home/major/applications/gramps
excludes:
- /home/major/applications/gramps/data/gramps_cache
- /home/major/applications/gramps/data/gramps_thumb_cache
- /home/major/applications/gramps/data/gramps_tmp

View File

@@ -1,34 +0,0 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
echo "Backup: perform gitea backup"
su --login gitea -c '/home/gitea/gitea-dump.sh'
mkdir -p {{ backup_directory }}/gitea
mv /home/gitea/backups/* {{ backup_directory }}/gitea
echo "Backup: perform backup with gobackup"
gobackup perform --config={{ backup_gobackup_config }}
echo "Backup: send backups to remote storage with retic"
restic-shell.sh backup --verbose {{ backup_directory }} \
&& restic-shell.sh check \
&& restic-shell.sh forget --compact --prune --keep-daily 90 --keep-monthly 36 \
&& restic-shell.sh check
echo "Backup: send notification"
curl -s -X POST 'https://api.telegram.org/bot{{ notifications_tg_bot_token }}/sendMessage' \
    -d 'chat_id={{ notifications_tg_chat_id }}' \
    -d 'parse_mode=HTML' \
    -d 'text=<b>{{ notifications_name }}</b>: backup completed successfully!'
echo -e "\nRemove old files"
keep-files.py {{ backup_directory }}/gitea --keep 2
echo -e "\nBackup: done"

View File

@@ -0,0 +1,136 @@
# -------------------------------------------------------------------
# Global options
# -------------------------------------------------------------------
{
grace_period 15s
admin :2019
# Enable metrics in Prometheus format
# https://caddyserver.com/docs/metrics
metrics
}
# -------------------------------------------------------------------
# Applications
# -------------------------------------------------------------------
vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to homepage_app:80
}
}
auth.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy authelia_app:9091
}
status.vakhrushev.me, :29999 {
tls anwinged@ya.ru
forward_auth authelia_app:9091 {
uri /api/authz/forward-auth
copy_headers Remote-User Remote-Groups Remote-Email Remote-Name
}
reverse_proxy netdata:19999
}
git.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to gitea_app:3000
}
}
outline.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to outline_app:3000
}
}
gramps.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to gramps_app:5000
}
}
miniflux.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to miniflux_app:8080
}
}
wakapi.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to wakapi_app:3000
}
}
wanderer.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to wanderer_web:3000
}
}
memos.vakhrushev.me {
tls anwinged@ya.ru
reverse_proxy {
to memos_app:5230
}
}
wanderbase.vakhrushev.me {
tls anwinged@ya.ru
forward_auth authelia_app:9091 {
uri /api/authz/forward-auth
copy_headers Remote-User Remote-Groups Remote-Email Remote-Name
}
reverse_proxy {
to wanderer_db:8090
}
}
rssbridge.vakhrushev.me {
tls anwinged@ya.ru
forward_auth authelia_app:9091 {
uri /api/authz/forward-auth
copy_headers Remote-User Remote-Groups Remote-Email Remote-Name
}
reverse_proxy {
to rssbridge_app:80
}
}
dozzle.vakhrushev.me {
tls anwinged@ya.ru
forward_auth authelia_app:9091 {
uri /api/authz/forward-auth
copy_headers Remote-User Remote-Groups Remote-Email Remote-Name Remote-Filter
}
reverse_proxy dozzle_app:8080
}

View File

@@ -0,0 +1,22 @@
services:
{{ service_name }}:
image: caddy:2.10.2
restart: unless-stopped
container_name: {{ service_name }}
ports:
- "80:80"
- "443:443"
- "443:443/udp"
cap_add:
- NET_ADMIN
volumes:
- {{ caddy_file_dir }}:/etc/caddy
- {{ data_dir }}:/data
- {{ config_dir }}:/config
networks:
- "web_proxy_network"
networks:
web_proxy_network:
external: true

View File

@@ -1,25 +0,0 @@
$ANSIBLE_VAULT;1.1;AES256
36373937313831396330393762313931643536363765353936333166376465343033376564613538
3235356131646564393664376535646561323435363330660a353632613334633461383562306662
37373439373636383834383464316337656531626663393830323332613136323438313762656435
6338353136306338640a636539363766663030356432663361636438386538323238373235663766
37393035356137653763373364623836346439663062313061346537353634306138376231633635
30363465663836373830366231636265663837646137313764316364623637623333346636363934
33666164343832653536303262663635616632663561633739636561333964653862313131613232
39316239376566633964633064393532613935306161666666323337343130393861306532623666
39653463323532333932646262663862313961393430306663643866623865346666313731366331
32663262636132663238313630373937663936326532643730613161376565653263633935393363
63373163346566363639396432653132646334643031323532613238666531363630353266303139
31613138303131343364343438663762343936393165356235646239343039396637643666653065
31363163623863613533663366303664623134396134393765636435633464373731653563646537
39373766626338646564356463623531373337303861383862613966323132656639326533356533
38346263326361656563386333663531663232623436653866383865393964353363353563653532
65343130383262386262393634636338313732623565666531303636303433333638323230346565
61633837373531343530383238396162373632623135333263323234623833383731336463333063
62656533636237303962653238653934346430366533636436646264306461323639666665623839
32643637623630613863323335666138303538313236343932386461346433656432626433663365
38376666623839393630343637386336623334623064383131316331333564363934636662633630
31363337393339643738306363306538373133626564613765643138666237303330613036666537
61363838353736613531613436313730313936363564303464346661376137303133633062613932
36383631303739306264386663333338666235346339623338333663386663303439363362376239
35626136646634363430

View File

@@ -0,0 +1,23 @@
services:
dozzle_app:
image: amir20/dozzle:v8.14.11
container_name: dozzle_app
restart: unless-stopped
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
networks:
- "web_proxy_network"
environment:
DOZZLE_HOSTNAME: vakhrushev.me
DOZZLE_AUTH_PROVIDER: forward-proxy
healthcheck:
test: ["CMD", "/dozzle", "healthcheck"]
interval: 3s
timeout: 30s
retries: 5
start_period: 30s
networks:
web_proxy_network:
external: true

View File

@@ -1,3 +0,0 @@
WEB_SERVER_PORT=9494
USER_UID=1000
USER_GID=1000

View File

@@ -1 +0,0 @@
data/

files/gitea/backup.sh.j2 Normal file
View File

@@ -0,0 +1,21 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
echo "Gitea: backup data with gitea dump"
(cd "{{ base_dir }}" && \
docker compose exec \
-u "{{ user_create_result.uid }}:{{ user_create_result.group }}" \
-w /backups gitea_app \
gitea dump -c /data/gitea/conf/app.ini \
)
echo "Gitea: remove old backups"
keep-files.py "{{ backups_dir }}" --keep 3
echo "Gitea: done."

View File

@@ -1,20 +1,21 @@
 services:
-  gitea_web_app:
-    image: gitea/gitea:1.23.7
+  gitea_app:
+    image: gitea/gitea:1.25.3
     restart: unless-stopped
-    container_name: gitea_web_app
+    container_name: gitea_app
     ports:
-      - "${WEB_SERVER_PORT}:3000"
       - "2222:22"
     volumes:
-      - ./data:/data
-      - ./backups:/backups
+      - {{ data_dir }}:/data
+      - {{ backups_dir }}:/backups
       - /etc/timezone:/etc/timezone:ro
       - /etc/localtime:/etc/localtime:ro
+    networks:
+      - "web_proxy_network"
     environment:
-      - "USER_UID=${USER_UID}"
-      - "USER_GID=${USER_GID}"
+      - "USER_UID={{ user_create_result.uid }}"
+      - "USER_GID={{ user_create_result.group }}"
       - "GITEA__server__SSH_PORT=2222"
       # Mailer
@@ -25,3 +26,7 @@ services:
       - "GITEA__mailer__USER={{ postbox_user }}"
       - "GITEA__mailer__PASSWD={{ postbox_pass }}"
       - "GITEA__mailer__FROM=gitea@vakhrushev.me"
+
+networks:
+  web_proxy_network:
+    external: true

View File

@@ -1,13 +0,0 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
echo "Gitea: backup data with gitea dump"
(cd {{ base_dir }} && docker compose exec -u "{{ user_create_result.uid }}:{{ user_create_result.group }}" -w /backups gitea_web_app gitea dump -c /data/gitea/conf/app.ini)
echo "Gitea: remove old backups"
keep-files.py {{ backups_dir }} --keep 2

View File

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
echo "Gramps: backup data with gobackups"
(cd "{{ base_dir }}" && gobackup perform --config "{{ gobackup_config }}")
echo "Gramps: done."

View File

@@ -3,13 +3,22 @@
 services:
   gramps_app: &gramps_app
-    image: ghcr.io/gramps-project/grampsweb:v25.4.1
+    image: ghcr.io/gramps-project/grampsweb:25.11.2
     container_name: gramps_app
     depends_on:
       - gramps_redis
     restart: unless-stopped
-    ports:
-      - "127.0.0.1:{{ gramps_port }}:5000" # host:docker
+    networks:
+      - "gramps_network"
+      - "web_proxy_network"
+    volumes:
+      - "{{ (data_dir, 'gramps_db') | path_join }}:/root/.gramps/grampsdb" # persist Gramps database
+      - "{{ (data_dir, 'gramps_users') | path_join }}:/app/users" # persist user database
+      - "{{ (data_dir, 'gramps_index') | path_join }}:/app/indexdir" # persist search index
+      - "{{ (data_dir, 'gramps_secret') | path_join }}:/app/secret" # persist flask secret
+      - "{{ (cache_dir, 'gramps_thumb_cache') | path_join }}:/app/thumbnail_cache" # persist thumbnails
+      - "{{ (cache_dir, 'gramps_cache') | path_join }}:/app/cache" # persist export and report caches
+      - "{{ media_dir }}:/app/media" # persist media files
     environment:
       GRAMPSWEB_TREE: "Gramps" # will create a new tree if not exists
       GRAMPSWEB_SECRET_KEY: "{{ gramps_secret_key }}"
@@ -18,27 +27,18 @@ services:
       GRAMPSWEB_CELERY_CONFIG__broker_url: "redis://gramps_redis:6379/0"
       GRAMPSWEB_CELERY_CONFIG__result_backend: "redis://gramps_redis:6379/0"
       GRAMPSWEB_RATELIMIT_STORAGE_URI: "redis://gramps_redis:6379/1"
+      GUNICORN_NUM_WORKERS: 2
+      # Email options
       GRAMPSWEB_EMAIL_HOST: "{{ postbox_host }}"
       GRAMPSWEB_EMAIL_PORT: "{{ postbox_port }}"
       GRAMPSWEB_EMAIL_HOST_USER: "{{ postbox_user }}"
       GRAMPSWEB_EMAIL_HOST_PASSWORD: "{{ postbox_pass }}"
       GRAMPSWEB_EMAIL_USE_TLS: "false"
       GRAMPSWEB_DEFAULT_FROM_EMAIL: "gramps@vakhrushev.me"
-      GUNICORN_NUM_WORKERS: 2
-      # media storage at s3
-      GRAMPSWEB_MEDIA_BASE_DIR: "s3://av-gramps-media-storage"
-      AWS_ENDPOINT_URL: "{{ gramps_s3_endpoint }}"
-      AWS_ACCESS_KEY_ID: "{{ gramps_s3_access_key_id }}"
-      AWS_SECRET_ACCESS_KEY: "{{ gramps_s3_secret_access_key }}"
-      AWS_DEFAULT_REGION: "{{ gramps_s3_region }}"
-    volumes:
-      - ./data/gramps_users:/app/users # persist user database
-      - ./data/gramps_index:/app/indexdir # persist search index
-      - ./data/gramps_thumb_cache:/app/thumbnail_cache # persist thumbnails
-      - ./data/gramps_cache:/app/cache # persist export and report caches
-      - ./data/gramps_secret:/app/secret # persist flask secret
-      - ./data/gramps_db:/root/.gramps/grampsdb # persist Gramps database
-      - ./data/gramps_media:/app/media # persist media files
+      # media storage
+      GRAMPSWEB_MEDIA_BASE_DIR: "/app/media"

   gramps_celery:
     <<: *gramps_app # YAML merge key copying the entire grampsweb service config
@@ -47,9 +47,22 @@ services:
       - gramps_redis
     restart: unless-stopped
     ports: []
-    command: celery -A gramps_webapi.celery worker --loglevel=INFO --concurrency=2
+    networks:
+      - "gramps_network"
+    command: celery -A gramps_webapi.celery worker --loglevel=INFO --concurrency=1

   gramps_redis:
-    image: valkey/valkey:8.1.1-alpine
+    image: valkey/valkey:9.0-alpine
     container_name: gramps_redis
     restart: unless-stopped
+    networks:
+      - "gramps_network"
+      - "monitoring_network"
+
+networks:
+  gramps_network:
+    driver: bridge
+  web_proxy_network:
+    external: true
+  monitoring_network:
+    external: true

View File

@@ -0,0 +1,25 @@
# https://gobackup.github.io/configuration
models:
gramps:
compress_with:
type: 'tgz'
storages:
local:
type: 'local'
path: '{{ backups_dir }}'
keep: 3
databases:
users:
type: sqlite
path: "{{ (data_dir, 'gramps_users/users.sqlite') | path_join }}"
search_index:
type: sqlite
path: "{{ (data_dir, 'gramps_index/search_index.db') | path_join }}"
sqlite:
type: sqlite
path: "{{ (data_dir, 'gramps_db/59a0f3d6-1c3d-4410-8c1d-1c9c6689659f/sqlite.db') | path_join }}"
undo:
type: sqlite
path: "{{ (data_dir, 'gramps_db/59a0f3d6-1c3d-4410-8c1d-1c9c6689659f/undo.db') | path_join }}"

files/gramps/gramps_rename.py Executable file
View File

@@ -0,0 +1,65 @@
#!/usr/bin/env python3.12
import argparse
import sys
from pathlib import Path
def parse_args() -> argparse.Namespace:
parser = argparse.ArgumentParser(
description="Rename Gramps document files by appending extensions from a list."
)
parser.add_argument("directory", type=Path, help="Directory containing hashed files")
parser.add_argument("names_file", type=Path, help="Text file with target names")
return parser.parse_args()
def read_names(path: Path) -> list[str]:
if not path.is_file():
raise FileNotFoundError(f"Names file not found: {path}")
names = []
for line in path.read_text(encoding="utf-8").splitlines():
name = line.strip()
if name:
names.append(name)
return names
def rename_files(directory: Path, names: list[str]) -> None:
if not directory.is_dir():
raise NotADirectoryError(f"Directory not found: {directory}")
for name in names:
hash_part, dot, _ = name.partition(".")
if not dot:
print(f"Skipping invalid entry (missing extension): {name}", file=sys.stderr)
continue
source = directory / hash_part
target = directory / name
if target.exists():
print(f"Target already exists, skipping: {target}", file=sys.stderr)
continue
if not source.exists():
print(f"Source not found, skipping: {source}", file=sys.stderr)
continue
source.rename(target)
print(f"Renamed {source.name} -> {target.name}")
def main() -> None:
args = parse_args()
try:
names = read_names(args.names_file)
rename_files(args.directory, names)
except Exception as exc: # noqa: BLE001
print(str(exc), file=sys.stderr)
sys.exit(1)
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,13 @@
services:
homepage_app:
# noinspection ComposeUnknownValues
image: "{{ registry_homepage_nginx_image }}"
container_name: homepage_app
restart: unless-stopped
networks:
- "web_proxy_network"
networks:
web_proxy_network:
external: true

View File

@@ -1,6 +0,0 @@
services:
homepage-web:
image: "${WEB_SERVICE_IMAGE}"
ports:
- "127.0.0.1:${WEB_SERVICE_PORT}:80"
restart: unless-stopped

View File

@@ -5,10 +5,13 @@ import argparse
 def main():
-    parser = argparse.ArgumentParser(description='Retain specified number of files in a directory sorted by name, delete others.')
-    parser.add_argument('directory', type=str, help='Path to target directory')
-    parser.add_argument('--keep', type=int, default=2,
-                        help='Number of files to retain (default: 2)')
+    parser = argparse.ArgumentParser(
+        description="Retain specified number of files in a directory sorted by name, delete others."
+    )
+    parser.add_argument("directory", type=str, help="Path to target directory")
+    parser.add_argument(
+        "--keep", type=int, default=2, help="Number of files to retain (default: 2)"
+    )
     args = parser.parse_args()

     # Validate arguments

files/memos/backup.sh.j2 Normal file
View File

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
echo "{{ app_name }}: backup data with gobackups"
(cd "{{ base_dir }}" && gobackup perform --config "{{ gobackup_config }}")
echo "{{ app_name }}: done."

View File

@@ -0,0 +1,23 @@
# See versions: https://hub.docker.com/r/neosmemo/memos/tags
services:
memos_app:
image: neosmemo/memos:0.25.3
container_name: memos_app
restart: unless-stopped
user: "{{ user_create_result.uid }}:{{ user_create_result.group }}"
networks:
- "web_proxy_network"
volumes:
- "{{ data_dir }}:/var/opt/memos"
environment:
- MEMOS_MODE=prod
- MEMOS_PORT=5230
- MEMOS_STORAGE_TYPE=local
- MEMOS_STORAGE_PATH=assets/{uuid}
- MEMOS_MAX_FILE_SIZE=52428800
networks:
web_proxy_network:
external: true

View File

@@ -0,0 +1,16 @@
# https://gobackup.github.io/configuration
models:
memos:
compress_with:
type: 'tgz'
storages:
local:
type: 'local'
path: '{{ backups_dir }}'
keep: 3
databases:
users:
type: sqlite
path: "{{ (data_dir, 'memos_prod.db') | path_join }}"

View File

@@ -0,0 +1,25 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="miniflux_postgres_${TIMESTAMP}.sql.gz"
echo "miniflux: backing up postgresql database"
docker compose --file "{{ base_dir }}/docker-compose.yml" exec \
miniflux_postgres \
pg_dump \
-U "{{ miniflux_postgres_user }}" \
"{{ miniflux_postgres_database }}" \
| gzip > "{{ postgres_backups_dir }}/${BACKUP_FILE}"
echo "miniflux: PostgreSQL backup saved to {{ postgres_backups_dir }}/${BACKUP_FILE}"
echo "miniflux: removing old backups"
# Keep only the 3 most recent backups
keep-files.py "{{ postgres_backups_dir }}" --keep 3
echo "miniflux: backup completed successfully."

View File

@@ -0,0 +1,63 @@
# See sample https://miniflux.app/docs/docker.html#docker-compose
# See env https://miniflux.app/docs/configuration.html
services:
miniflux_app:
image: miniflux/miniflux:2.2.10
container_name: miniflux_app
user: "{{ user_create_result.uid }}:{{ user_create_result.group }}"
depends_on:
miniflux_postgres:
condition: service_healthy
restart: 'unless-stopped'
networks:
- "miniflux_network"
- "web_proxy_network"
volumes:
- "{{ secrets_dir }}:/secrets:ro"
environment:
- DATABASE_URL_FILE=/secrets/miniflux_database_url
- RUN_MIGRATIONS=1
- CREATE_ADMIN=1
- ADMIN_USERNAME_FILE=/secrets/miniflux_admin_user
- ADMIN_PASSWORD_FILE=/secrets/miniflux_admin_password
- BASE_URL=https://miniflux.vakhrushev.me
- DISABLE_LOCAL_AUTH=1
- OAUTH2_OIDC_DISCOVERY_ENDPOINT=https://auth.vakhrushev.me
- OAUTH2_CLIENT_ID_FILE=/secrets/miniflux_oidc_client_id
- OAUTH2_CLIENT_SECRET_FILE=/secrets/miniflux_oidc_client_secret
- OAUTH2_OIDC_PROVIDER_NAME=Authelia
- OAUTH2_PROVIDER=oidc
- OAUTH2_REDIRECT_URL=https://miniflux.vakhrushev.me/oauth2/oidc/callback
- OAUTH2_USER_CREATION=1
- METRICS_COLLECTOR=1
- METRICS_ALLOWED_NETWORKS=0.0.0.0/0
miniflux_postgres:
image: postgres:16.3-bookworm
container_name: miniflux_postgres
user: "{{ user_create_result.uid }}:{{ user_create_result.group }}"
restart: 'unless-stopped'
environment:
- POSTGRES_USER={{ miniflux_postgres_user }}
- POSTGRES_PASSWORD_FILE=/secrets/miniflux_postgres_password
- POSTGRES_DB={{ miniflux_postgres_database }}
networks:
- "miniflux_network"
- "monitoring_network"
volumes:
- "/etc/passwd:/etc/passwd:ro"
- "{{ secrets_dir }}:/secrets:ro"
- "{{ postgres_data_dir }}:/var/lib/postgresql/data"
healthcheck:
test: ["CMD", "pg_isready", "--username={{ miniflux_postgres_user }}", "--dbname={{ miniflux_postgres_database }}"]
interval: 10s
start_period: 30s
networks:
miniflux_network:
driver: bridge
web_proxy_network:
external: true
monitoring_network:
external: true

View File

@@ -0,0 +1,43 @@
services:
netdata:
image: netdata/netdata:v2.8.4
container_name: netdata
restart: unless-stopped
cap_add:
- SYS_PTRACE
- SYS_ADMIN
security_opt:
- apparmor:unconfined
networks:
- "web_proxy_network"
- "monitoring_network"
volumes:
- "{{ config_dir }}:/etc/netdata"
- "{{ (data_dir, 'lib') | path_join }}:/var/lib/netdata"
- "{{ (data_dir, 'cache') | path_join }}:/var/cache/netdata"
# Netdata system volumes
- "/:/host/root:ro,rslave"
- "/etc/group:/host/etc/group:ro"
- "/etc/hostname:/host/etc/hostname:ro"
- "/etc/localtime:/etc/localtime:ro"
- "/etc/os-release:/host/etc/os-release:ro"
- "/etc/passwd:/host/etc/passwd:ro"
- "/proc:/host/proc:ro"
- "/run/dbus:/run/dbus:ro"
- "/sys:/host/sys:ro"
- "/var/log:/host/var/log:ro"
- "/var/run:/host/var/run:ro"
- "/var/run/docker.sock:/var/run/docker.sock:ro"
environment:
PGID: "{{ netdata_docker_group_output.stdout | default(999) }}"
NETDATA_EXTRA_DEB_PACKAGES: "fail2ban"
networks:
web_proxy_network:
external: true
monitoring_network:
external: true

View File

@@ -0,0 +1,3 @@
jobs:
- name: fail2ban
update_every: 60 # Collect Fail2Ban jails statistics every N seconds

View File

@@ -0,0 +1,9 @@
update_every: 60
jobs:
- name: outline_db
dsn: 'postgresql://netdata:{{ netdata_postgres_password }}@outline_postgres:5432/outline'
- name: miniflux_db
dsn: 'postgresql://netdata:{{ netdata_postgres_password }}@miniflux_postgres:5432/miniflux'

View File

@@ -0,0 +1,24 @@
update_every: 15
jobs:
- name: caddyproxy
url: http://caddyproxy:2019/metrics
selector:
allow:
- "caddy_http_*"
- name: authelia
url: http://authelia_app:9959/metrics
selector:
allow:
- "authelia_*"
- name: miniflux
url: http://miniflux_app:8080/metrics
selector:
allow:
- "miniflux_*"
- name: transcriber
url: http://transcriber_app:8080/metrics

View File

@@ -0,0 +1,702 @@
# netdata configuration
#
# You can download the latest version of this file, using:
#
# wget -O /etc/netdata/netdata.conf http://localhost:19999/netdata.conf
# or
# curl -o /etc/netdata/netdata.conf http://localhost:19999/netdata.conf
#
# You can uncomment and change any of the options below.
# The value shown in the commented settings, is the default value.
#
# global netdata configuration
[global]
# run as user = netdata
# host access prefix = /host
# pthread stack size = 8MiB
# cpu cores = 2
# libuv worker threads = 16
# profile = standalone
# hostname = rivendell-v2
# glibc malloc arena max for plugins = 1
# glibc malloc arena max for netdata = 1
# crash reports = all
# timezone = Etc/UTC
# OOM score = 0
# process scheduling policy = keep
# is ephemeral node = no
# has unstable connection = no
[db]
#| >>> [db].update every <<<
#| datatype: duration (seconds), default value: 1s
update every = 10s
# enable replication = yes
# replication period = 1d
# replication step = 1h
# replication threads = 1
# replication prefetch = 10
# db = dbengine
# memory deduplication (ksm) = auto
# cleanup orphan hosts after = 1h
# cleanup ephemeral hosts after = off
# cleanup obsolete charts after = 1h
# gap when lost iterations above = 1
# dbengine page type = gorilla
# dbengine page cache size = 32MiB
# dbengine extent cache size = off
# dbengine enable journal integrity check = no
# dbengine use all ram for caches = no
# dbengine out of memory protection = 391.49MiB
# dbengine use direct io = yes
# dbengine journal v2 unmount time = 2m
# dbengine pages per extent = 109
# storage tiers = 3
# dbengine tier backfill = new
# dbengine tier 1 update every iterations = 60
# dbengine tier 2 update every iterations = 60
# dbengine tier 0 retention size = 1024MiB
# dbengine tier 0 retention time = 14d
# dbengine tier 1 retention size = 1024MiB
# dbengine tier 1 retention time = 3mo
# dbengine tier 2 retention size = 1024MiB
# dbengine tier 2 retention time = 2y
# extreme cardinality protection = yes
# extreme cardinality keep instances = 1000
# extreme cardinality min ephemerality = 50
[directories]
# config = /etc/netdata
# stock config = /usr/lib/netdata/conf.d
# log = /var/log/netdata
# web = /usr/share/netdata/web
# cache = /var/cache/netdata
# lib = /var/lib/netdata
# cloud.d = /var/lib/netdata/cloud.d
# plugins = "/usr/libexec/netdata/plugins.d" "/etc/netdata/custom-plugins.d"
# registry = /var/lib/netdata/registry
# home = /etc/netdata
# stock health config = /usr/lib/netdata/conf.d/health.d
# health config = /etc/netdata/health.d
[logs]
# facility = daemon
# logs flood protection period = 1m
# logs to trigger flood protection = 1000
# level = info
# debug = /var/log/netdata/debug.log
# daemon = /var/log/netdata/daemon.log
# collector = /var/log/netdata/collector.log
# access = /var/log/netdata/access.log
# health = /var/log/netdata/health.log
# debug flags = 0x0000000000000000
[environment variables]
# PATH = /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin
# PYTHONPATH =
# TZ = :/etc/localtime
[cloud]
# conversation log = no
# scope = full
# query threads = 6
# proxy = env
[ml]
# enabled = auto
# training window = 6h
# min training window = 15m
# max training vectors = 1440
# max samples to smooth = 3
# train every = 3h
# number of models per dimension = 18
# delete models older than = 7d
# num samples to diff = 1
# num samples to lag = 5
# maximum number of k-means iterations = 1000
# dimension anomaly score threshold = 0.99000
# host anomaly rate threshold = 1.00000
# anomaly detection grouping method = average
# anomaly detection grouping duration = 5m
# num training threads = 1
# flush models batch size = 256
# dimension anomaly rate suppression window = 15m
# dimension anomaly rate suppression threshold = 450
# enable statistics charts = yes
# hosts to skip from training = !*
# charts to skip from training = netdata.*
# stream anomaly detection charts = yes
[health]
# silencers file = /var/lib/netdata/health.silencers.json
# enabled = yes
# enable stock health configuration = yes
# use summary for notifications = yes
# default repeat warning = off
# default repeat critical = off
# in memory max health log entries = 1000
# health log retention = 5d
# script to execute on alarm = /usr/libexec/netdata/plugins.d/alarm-notify.sh
# enabled alarms = *
# run at least every = 10s
# postpone alarms during hibernation for = 1m
[web]
#| >>> [web].default port <<<
#| migrated from: [global].default port
# default port = 19999
# ssl key = /etc/netdata/ssl/key.pem
# ssl certificate = /etc/netdata/ssl/cert.pem
# tls version = 1.3
# tls ciphers = none
# ses max tg_des_window = 15
# des max tg_des_window = 15
# mode = static-threaded
# listen backlog = 4096
# bind to = *
# bearer token protection = no
# disconnect idle clients after = 1m
# timeout for first request = 1m
# accept a streaming request every = off
# respect do not track policy = no
# x-frame-options response header =
# allow connections from = localhost *
# allow connections by dns = heuristic
# allow dashboard from = localhost *
# allow dashboard by dns = heuristic
# allow badges from = *
# allow badges by dns = heuristic
# allow streaming from = *
# allow streaming by dns = heuristic
# allow netdata.conf from = localhost fd* 10.* 192.168.* 172.16.* 172.17.* 172.18.* 172.19.* 172.20.* 172.21.* 172.22.* 172.23.* 172.24.* 172.25.* 172.26.* 172.27.* 172.28.* 172.29.* 172.30.* 172.31.* UNKNOWN
# allow netdata.conf by dns = no
# allow management from = localhost
# allow management by dns = heuristic
# enable gzip compression = yes
# gzip compression strategy = default
# gzip compression level = 3
# ssl skip certificate verification = no
# web server threads = 6
# web server max sockets = 131072
[registry]
# enabled = no
# registry db file = /var/lib/netdata/registry/registry.db
# registry log file = /var/lib/netdata/registry/registry-log.db
# registry save db every new entries = 1000000
# registry expire idle persons = 1y
# registry domain =
# registry to announce = https://registry.my-netdata.io
# registry hostname = rivendell-v2
# verify browser cookies support = yes
# enable cookies SameSite and Secure = yes
# max URL length = 1024
# max URL name length = 50
# netdata management api key file = /var/lib/netdata/netdata.api.key
# allow from = *
# allow by dns = heuristic
[pulse]
# extended = no
# update every = 10s
[plugins]
#| >>> [plugins].perf <<<
#| datatype: yes or no, default value: yes
perf = no
#| >>> [plugins].python.d <<<
#| datatype: yes or no, default value: yes
python.d = no
#| >>> [plugins].charts.d <<<
#| datatype: yes or no, default value: yes
charts.d = no
#| >>> [plugins].otel <<<
#| datatype: yes or no, default value: yes
otel = no
#| >>> [plugins].statsd <<<
#| datatype: yes or no, default value: yes
statsd = no
# idlejitter = yes
# netdata pulse = yes
# profile = no
# tc = yes
# diskspace = yes
# proc = yes
# cgroups = yes
# timex = yes
# enable running new plugins = yes
# check for new plugins every = 1m
# slabinfo = no
# freeipmi = no
# debugfs = yes
# ioping = yes
# network-viewer = yes
# apps = yes
# go.d = yes
# systemd-units = yes
# systemd-journal = yes
[statsd]
# update every (flushInterval) = 10s
# udp messages to process at once = 10
# create private charts for metrics matching = *
# max private charts hard limit = 1000
# set charts as obsolete after = off
# decimal detail = 1000
# disconnect idle tcp clients after = 10m
# private charts hidden = no
# histograms and timers percentile (percentThreshold) = 95.00000
# dictionaries max unique dimensions = 200
# add dimension for number of events received = no
# gaps on gauges (deleteGauges) = no
# gaps on counters (deleteCounters) = no
# gaps on meters (deleteMeters) = no
# gaps on sets (deleteSets) = no
# gaps on histograms (deleteHistograms) = no
# gaps on timers (deleteTimers) = no
# gaps on dictionaries (deleteDictionaries) = no
# statsd server max TCP sockets = 131072
[plugin:idlejitter]
# loop time = 20ms
[plugin:timex]
# update every = 10s
# clock synchronization state = yes
# time offset = yes
[plugin:proc]
# /proc/net/dev = yes
# /proc/pagetypeinfo = no
# /proc/stat = yes
# /proc/uptime = yes
# /proc/loadavg = yes
# /proc/sys/fs/file-nr = yes
# /proc/sys/kernel/random/entropy_avail = yes
# /run/reboot_required = yes
# /proc/pressure = yes
# /proc/interrupts = yes
# /proc/softirqs = yes
# /proc/vmstat = yes
# /proc/meminfo = yes
# /sys/kernel/mm/ksm = yes
# /sys/block/zram = yes
# /sys/devices/system/edac/mc = yes
# /sys/devices/pci/aer = yes
# /sys/devices/system/node = yes
# /proc/net/wireless = yes
# /proc/net/sockstat = yes
# /proc/net/sockstat6 = yes
# /proc/net/netstat = yes
# /proc/net/sctp/snmp = yes
# /proc/net/softnet_stat = yes
# /proc/net/ip_vs/stats = yes
# /sys/class/infiniband = yes
# /proc/net/stat/conntrack = yes
# /proc/net/stat/synproxy = yes
# /proc/diskstats = yes
# /proc/mdstat = yes
# /proc/net/rpc/nfsd = yes
# /proc/net/rpc/nfs = yes
# /proc/spl/kstat/zfs/arcstats = yes
# /sys/fs/btrfs = yes
# ipc = yes
# /sys/class/power_supply = yes
# /sys/class/drm = yes
[plugin:cgroups]
#| >>> [plugin:cgroups].update every <<<
#| datatype: duration (seconds), default value: 10s
update every = 20s
#| >>> [plugin:cgroups].check for new cgroups every <<<
#| datatype: duration (seconds), default value: 10s
check for new cgroups every = 20s
# use unified cgroups = auto
# max cgroups to allow = 1000
# max cgroups depth to monitor = 0
# enable by default cgroups matching = !*/init.scope !/system.slice/run-*.scope *user.slice/docker-* !*user.slice* *.scope !/machine.slice/*/.control !/machine.slice/*/payload* !/machine.slice/*/supervisor /machine.slice/*.service */kubepods/pod*/* */kubepods/*/pod*/* */*-kubepods-pod*/* */*-kubepods-*-pod*/* !*kubepods* !*kubelet* !*/vcpu* !*/emulator !*.mount !*.partition !*.service !*.service/udev !*.socket !*.slice !*.swap !*.user !/ !/docker !*/libvirt !/lxc !/lxc/*/* !/lxc.monitor* !/lxc.pivot !/lxc.payload !*lxcfs.service/.control !/machine !/qemu !/system !/systemd !/user *
# enable by default cgroups names matching = *
# search for cgroups in subpaths matching = !*/init.scope !*-qemu !*.libvirt-qemu !/init.scope !/system !/systemd !/user !/lxc/*/* !/lxc.monitor !/lxc.payload/*/* !/lxc.payload.* *
# script to get cgroup names = /usr/libexec/netdata/plugins.d/cgroup-name.sh
# script to get cgroup network interfaces = /usr/libexec/netdata/plugins.d/cgroup-network
# run script to rename cgroups matching = !/ !*.mount !*.socket !*.partition /machine.slice/*.service !*.service !*.slice !*.swap !*.user !init.scope !*.scope/vcpu* !*.scope/emulator *.scope *docker* *lxc* *qemu* */kubepods/pod*/* */kubepods/*/pod*/* */*-kubepods-pod*/* */*-kubepods-*-pod*/* !*kubepods* !*kubelet* *.libvirt-qemu *
# cgroups to match as systemd services = !/system.slice/*/*.service /system.slice/*.service
[plugin:proc:diskspace]
#| >>> [plugin:proc:diskspace].update every <<<
#| datatype: duration (seconds), default value: 10s
update every = 1m
# remove charts of unmounted disks = yes
# check for new mount points every = 15s
# exclude space metrics on paths = /dev /dev/shm /proc/* /sys/* /var/run/user/* /run/lock /run/user/* /snap/* /var/lib/docker/* /var/lib/containers/storage/* /run/credentials/* /run/containerd/* /rpool /rpool/*
# exclude space metrics on filesystems = *gvfs *gluster* *s3fs *ipfs *davfs2 *httpfs *sshfs *gdfs *moosefs fusectl autofs cgroup cgroup2 hugetlbfs devtmpfs fuse.lxcfs
# exclude inode metrics on filesystems = msdosfs msdos vfat overlayfs aufs* *unionfs
# space usage for all disks = auto
# inodes usage for all disks = auto
[plugin:tc]
# script to run to get tc values = /usr/libexec/netdata/plugins.d/tc-qos-helper.sh
[plugin:go.d]
# update every = 10s
# command options =
[plugin:apps]
# update every = 10s
# command options =
[plugin:systemd-journal]
# update every = 10s
# command options =
[plugin:network-viewer]
# update every = 10s
# command options =
[plugin:debugfs]
# update every = 10s
# command options =
[plugin:ioping]
# update every = 10s
# command options =
[plugin:proc:/proc/net/dev]
# compressed packets for all interfaces = no
# disable by default interfaces matching = lo fireqos* *-ifb fwpr* fwbr* fwln* ifb4*
[plugin:proc:/proc/stat]
# cpu utilization = yes
# per cpu core utilization = no
# cpu interrupts = yes
# context switches = yes
# processes started = yes
# processes running = yes
# keep per core files open = yes
# keep cpuidle files open = yes
# core_throttle_count = auto
# package_throttle_count = no
# cpu frequency = yes
# cpu idle states = no
# core_throttle_count filename to monitor = /host/sys/devices/system/cpu/%s/thermal_throttle/core_throttle_count
# package_throttle_count filename to monitor = /host/sys/devices/system/cpu/%s/thermal_throttle/package_throttle_count
# scaling_cur_freq filename to monitor = /host/sys/devices/system/cpu/%s/cpufreq/scaling_cur_freq
# time_in_state filename to monitor = /host/sys/devices/system/cpu/%s/cpufreq/stats/time_in_state
# schedstat filename to monitor = /host/proc/schedstat
# cpuidle name filename to monitor = /host/sys/devices/system/cpu/cpu%zu/cpuidle/state%zu/name
# cpuidle time filename to monitor = /host/sys/devices/system/cpu/cpu%zu/cpuidle/state%zu/time
# filename to monitor = /host/proc/stat
[plugin:proc:/proc/uptime]
# filename to monitor = /host/proc/uptime
[plugin:proc:/proc/loadavg]
# filename to monitor = /host/proc/loadavg
# enable load average = yes
# enable total processes = yes
[plugin:proc:/proc/sys/fs/file-nr]
# filename to monitor = /host/proc/sys/fs/file-nr
[plugin:proc:/proc/sys/kernel/random/entropy_avail]
# filename to monitor = /host/proc/sys/kernel/random/entropy_avail
[plugin:proc:/proc/pressure]
# base path of pressure metrics = /proc/pressure
# enable cpu some pressure = yes
# enable cpu full pressure = no
# enable memory some pressure = yes
# enable memory full pressure = yes
# enable io some pressure = yes
# enable io full pressure = yes
# enable irq some pressure = no
# enable irq full pressure = yes
[plugin:proc:/proc/interrupts]
# interrupts per core = no
# filename to monitor = /host/proc/interrupts
[plugin:proc:/proc/softirqs]
# interrupts per core = no
# filename to monitor = /host/proc/softirqs
[plugin:proc:/proc/vmstat]
# filename to monitor = /host/proc/vmstat
# swap i/o = auto
# disk i/o = yes
# memory page faults = yes
# out of memory kills = yes
# system-wide numa metric summary = auto
# transparent huge pages = auto
# zswap i/o = auto
# memory ballooning = auto
# kernel same memory = auto
[plugin:proc:/sys/devices/system/node]
# directory to monitor = /host/sys/devices/system/node
# enable per-node numa metrics = auto
[plugin:proc:/proc/meminfo]
# system ram = yes
# system swap = auto
# hardware corrupted ECC = auto
# committed memory = yes
# writeback memory = yes
# kernel memory = yes
# slab memory = yes
# hugepages = auto
# transparent hugepages = auto
# memory reclaiming = yes
# high low memory = yes
# cma memory = auto
# direct maps = yes
# filename to monitor = /host/proc/meminfo
[plugin:proc:/sys/kernel/mm/ksm]
# /sys/kernel/mm/ksm/pages_shared = /host/sys/kernel/mm/ksm/pages_shared
# /sys/kernel/mm/ksm/pages_sharing = /host/sys/kernel/mm/ksm/pages_sharing
# /sys/kernel/mm/ksm/pages_unshared = /host/sys/kernel/mm/ksm/pages_unshared
# /sys/kernel/mm/ksm/pages_volatile = /host/sys/kernel/mm/ksm/pages_volatile
[plugin:proc:/sys/devices/system/edac/mc]
# directory to monitor = /host/sys/devices/system/edac/mc
[plugin:proc:/sys/class/pci/aer]
# enable root ports = no
# enable pci slots = no
[plugin:proc:/proc/net/wireless]
# filename to monitor = /host/proc/net/wireless
# status for all interfaces = auto
# quality for all interfaces = auto
# discarded packets for all interfaces = auto
# missed beacon for all interface = auto
[plugin:proc:/proc/net/sockstat]
# ipv4 sockets = auto
# ipv4 TCP sockets = auto
# ipv4 TCP memory = auto
# ipv4 UDP sockets = auto
# ipv4 UDP memory = auto
# ipv4 UDPLITE sockets = auto
# ipv4 RAW sockets = auto
# ipv4 FRAG sockets = auto
# ipv4 FRAG memory = auto
# update constants every = 1m
# filename to monitor = /host/proc/net/sockstat
[plugin:proc:/proc/net/sockstat6]
# ipv6 TCP sockets = auto
# ipv6 UDP sockets = auto
# ipv6 UDPLITE sockets = auto
# ipv6 RAW sockets = auto
# ipv6 FRAG sockets = auto
# filename to monitor = /host/proc/net/sockstat6
[plugin:proc:/proc/net/netstat]
# bandwidth = auto
# input errors = auto
# multicast bandwidth = auto
# broadcast bandwidth = auto
# multicast packets = auto
# broadcast packets = auto
# ECN packets = auto
# TCP reorders = auto
# TCP SYN cookies = auto
# TCP out-of-order queue = auto
# TCP connection aborts = auto
# TCP memory pressures = auto
# TCP SYN queue = auto
# TCP accept queue = auto
# filename to monitor = /host/proc/net/netstat
[plugin:proc:/proc/net/snmp]
# ipv4 packets = auto
# ipv4 fragments sent = auto
# ipv4 fragments assembly = auto
# ipv4 errors = auto
# ipv4 TCP connections = auto
# ipv4 TCP packets = auto
# ipv4 TCP errors = auto
# ipv4 TCP opens = auto
# ipv4 TCP handshake issues = auto
# ipv4 UDP packets = auto
# ipv4 UDP errors = auto
# ipv4 ICMP packets = auto
# ipv4 ICMP messages = auto
# ipv4 UDPLite packets = auto
# filename to monitor = /host/proc/net/snmp
[plugin:proc:/proc/net/snmp6]
# ipv6 packets = auto
# ipv6 fragments sent = auto
# ipv6 fragments assembly = auto
# ipv6 errors = auto
# ipv6 UDP packets = auto
# ipv6 UDP errors = auto
# ipv6 UDPlite packets = auto
# ipv6 UDPlite errors = auto
# bandwidth = auto
# multicast bandwidth = auto
# broadcast bandwidth = auto
# multicast packets = auto
# icmp = auto
# icmp redirects = auto
# icmp errors = auto
# icmp echos = auto
# icmp group membership = auto
# icmp router = auto
# icmp neighbor = auto
# icmp mldv2 = auto
# icmp types = auto
# ect = auto
# filename to monitor = /host/proc/net/snmp6
[plugin:proc:/proc/net/sctp/snmp]
# established associations = auto
# association transitions = auto
# fragmentation = auto
# packets = auto
# packet errors = auto
# chunk types = auto
# filename to monitor = /host/proc/net/sctp/snmp
[plugin:proc:/proc/net/softnet_stat]
# softnet_stat per core = no
# filename to monitor = /host/proc/net/softnet_stat
[plugin:proc:/proc/net/ip_vs_stats]
# IPVS bandwidth = yes
# IPVS connections = yes
# IPVS packets = yes
# filename to monitor = /host/proc/net/ip_vs_stats
[plugin:proc:/sys/class/infiniband]
# dirname to monitor = /host/sys/class/infiniband
# bandwidth counters = yes
# packets counters = yes
# errors counters = yes
# hardware packets counters = auto
# hardware errors counters = auto
# monitor only active ports = auto
# disable by default interfaces matching =
# refresh ports state every = 30s
[plugin:proc:/proc/net/stat/nf_conntrack]
# filename to monitor = /host/proc/net/stat/nf_conntrack
# netfilter new connections = no
# netfilter connection changes = no
# netfilter connection expectations = no
# netfilter connection searches = no
# netfilter errors = no
# netfilter connections = yes
[plugin:proc:/proc/sys/net/netfilter/nf_conntrack_max]
# filename to monitor = /host/proc/sys/net/netfilter/nf_conntrack_max
# read every seconds = 10
[plugin:proc:/proc/sys/net/netfilter/nf_conntrack_count]
# filename to monitor = /host/proc/sys/net/netfilter/nf_conntrack_count
[plugin:proc:/proc/net/stat/synproxy]
# SYNPROXY cookies = auto
# SYNPROXY SYN received = auto
# SYNPROXY connections reopened = auto
# filename to monitor = /host/proc/net/stat/synproxy
[plugin:proc:/proc/diskstats]
# enable new disks detected at runtime = yes
# performance metrics for physical disks = auto
# performance metrics for virtual disks = auto
# performance metrics for partitions = no
# bandwidth for all disks = auto
# operations for all disks = auto
# merged operations for all disks = auto
# i/o time for all disks = auto
# queued operations for all disks = auto
# utilization percentage for all disks = auto
# extended operations for all disks = auto
# backlog for all disks = auto
# bcache for all disks = auto
# bcache priority stats update every = off
# remove charts of removed disks = yes
# path to get block device = /host/sys/block/%s
# path to get block device bcache = /host/sys/block/%s/bcache
# path to get virtual block device = /host/sys/devices/virtual/block/%s
# path to get block device infos = /host/sys/dev/block/%lu:%lu/%s
# path to device mapper = /host/dev/mapper
# path to /dev/disk = /host/dev/disk
# path to /sys/block = /host/sys/block
# path to /dev/disk/by-label = /host/dev/disk/by-label
# path to /dev/disk/by-id = /host/dev/disk/by-id
# path to /dev/vx/dsk = /host/dev/vx/dsk
# name disks by id = no
# preferred disk ids = *
# exclude disks = loop* ram*
# filename to monitor = /host/proc/diskstats
# performance metrics for disks with major 253 = yes
[plugin:proc:/proc/mdstat]
# faulty devices = yes
# nonredundant arrays availability = yes
# mismatch count = auto
# disk stats = yes
# operation status = yes
# make charts obsolete = yes
# filename to monitor = /host/proc/mdstat
# mismatch_cnt filename to monitor = /host/sys/block/%s/md/mismatch_cnt
[plugin:proc:/proc/net/rpc/nfsd]
# filename to monitor = /host/proc/net/rpc/nfsd
[plugin:proc:/proc/net/rpc/nfs]
# filename to monitor = /host/proc/net/rpc/nfs
[plugin:proc:/proc/spl/kstat/zfs/arcstats]
# filename to monitor = /host/proc/spl/kstat/zfs/arcstats
[plugin:proc:/sys/fs/btrfs]
# path to monitor = /host/sys/fs/btrfs
# check for btrfs changes every = 1m
# physical disks allocation = auto
# data allocation = auto
# metadata allocation = auto
# system allocation = auto
# commit stats = auto
# error stats = auto
[plugin:proc:ipc]
# message queues = yes
# semaphore totals = yes
# shared memory totals = yes
# msg filename to monitor = /host/proc/sysvipc/msg
# shm filename to monitor = /host/proc/sysvipc/shm
# max dimensions in memory allowed = 50
[plugin:proc:/sys/class/power_supply]
# battery capacity = yes
# battery power = yes
# battery charge = no
# battery energy = no
# power supply voltage = no
# keep files open = auto
# directory to monitor = /host/sys/class/power_supply
[plugin:proc:/sys/class/drm]
# directory to monitor = /host/sys/class/drm
[plugin:systemd-units]
# update every = 10s
# command options =

View File

@@ -0,0 +1,25 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="outline_postgres_${TIMESTAMP}.sql.gz"
echo "Outline: backing up PostgreSQL database"
docker compose --file "{{ base_dir }}/docker-compose.yml" exec \
outline_postgres \
pg_dump \
-U "{{ outline_postgres_user }}" \
"{{ outline_postgres_database }}" \
| gzip > "{{ postgres_backups_dir }}/${BACKUP_FILE}"
echo "Outline: PostgreSQL backup saved to {{ postgres_backups_dir }}/${BACKUP_FILE}"
echo "Outline: removing old backups"
# Keep only the 3 most recent backups
keep-files.py "{{ postgres_backups_dir }}" --keep 3
echo "Outline: backup completed successfully."

View File

@@ -0,0 +1,88 @@
services:
# See sample https://github.com/outline/outline/blob/main/.env.sample
outline_app:
image: outlinewiki/outline:1.1.0
container_name: outline_app
restart: unless-stopped
depends_on:
- outline_postgres
- outline_redis
networks:
- "outline_network"
- "web_proxy_network"
environment:
NODE_ENV: 'production'
URL: 'https://outline.vakhrushev.me'
FORCE_HTTPS: 'true'
SECRET_KEY: '{{ outline_secret_key }}'
UTILS_SECRET: '{{ outline_utils_secret }}'
DATABASE_URL: 'postgres://{{ outline_postgres_user }}:{{ outline_postgres_password }}@outline_postgres:5432/{{ outline_postgres_database }}' # yamllint disable-line rule:line-length
PGSSLMODE: 'disable'
REDIS_URL: 'redis://outline_redis:6379'
FILE_STORAGE: 's3'
FILE_STORAGE_UPLOAD_MAX_SIZE: '262144000'
AWS_ACCESS_KEY_ID: '{{ outline_s3_access_key }}'
AWS_SECRET_ACCESS_KEY: '{{ outline_s3_secret_key }}'
AWS_REGION: '{{ outline_s3_region }}'
AWS_S3_ACCELERATE_URL: ''
AWS_S3_UPLOAD_BUCKET_URL: '{{ outline_s3_url }}'
AWS_S3_UPLOAD_BUCKET_NAME: '{{ outline_s3_bucket }}'
AWS_S3_FORCE_PATH_STYLE: 'true'
AWS_S3_ACL: 'private'
OIDC_CLIENT_ID: '{{ outline_oidc_client_id | replace("$", "$$") }}'
OIDC_CLIENT_SECRET: '{{ outline_oidc_client_secret | replace("$", "$$") }}'
OIDC_AUTH_URI: 'https://auth.vakhrushev.me/api/oidc/authorization'
OIDC_TOKEN_URI: 'https://auth.vakhrushev.me/api/oidc/token'
OIDC_USERINFO_URI: 'https://auth.vakhrushev.me/api/oidc/userinfo'
OIDC_LOGOUT_URI: 'https://auth.vakhrushev.me/logout'
OIDC_USERNAME_CLAIM: 'email'
OIDC_SCOPES: 'openid profile email'
OIDC_DISPLAY_NAME: 'Authelia'
SMTP_HOST: '{{ postbox_host }}'
SMTP_PORT: '{{ postbox_port }}'
SMTP_USERNAME: '{{ postbox_user }}'
SMTP_PASSWORD: '{{ postbox_pass }}'
SMTP_FROM_EMAIL: 'outline@vakhrushev.me'
SMTP_TLS_CIPHERS: 'TLSv1.2'
SMTP_SECURE: 'false'
outline_redis:
image: valkey/valkey:9.0-alpine
container_name: outline_redis
restart: unless-stopped
networks:
- "outline_network"
- "monitoring_network"
outline_postgres:
image: postgres:16.3-bookworm
container_name: outline_postgres
user: "{{ user_create_result.uid }}:{{ user_create_result.group }}"
restart: unless-stopped
volumes:
- "/etc/passwd:/etc/passwd:ro"
- "{{ postgres_data_dir }}:/var/lib/postgresql/data"
environment:
POSTGRES_USER: '{{ outline_postgres_user }}'
POSTGRES_PASSWORD: '{{ outline_postgres_password }}'
POSTGRES_DB: '{{ outline_postgres_database }}'
networks:
- "outline_network"
- "monitoring_network"
healthcheck:
test: ["CMD", "pg_isready", "--username={{ outline_postgres_user }}", "--dbname={{ outline_postgres_database }}"]
interval: 10s
start_period: 30s
networks:
outline_network:
driver: bridge
web_proxy_network:
external: true
monitoring_network:
external: true

View File

@@ -1,26 +0,0 @@
$ANSIBLE_VAULT;1.1;AES256
66626231663733396232343163306138366434663364373937396137313134373033626539356166
3038316664383731623635336233393566636234636532630a393234336561613133373662383161
33653330663364363832346331653037663363643238326334326431336331373936666162363561
3064656630666431330a626430353063313866663730663236343437356661333164653636376538
62303164393766363933336163386663333030336132623661346565333861313537333566346563
32666436383335353866396539663936376134653762613137343035376639376135616334326161
62343366313032306664303030323433666230333665386630383635633863303366313639616462
38643466356666653337383833366565633932613539666563653634643063663166623337303865
64303365373932346233653237626363363964366431663966393937343966633735356563373735
66366464346436303036383161316466323639396162346537653134626663303662326462656563
63343065323636643266396532333331333137303131373633653233333837656665346635373564
62613733613634356335636663336634323463376266373665306232626330363132313362373032
30613366626563383236636262656135613431343639633339336135353362373665326264633438
65306539663166623533336531356639306235346566313764343835643437663963613639326430
36303031346339366561366166386532373838623635663837663466643032653930613635666237
38313235343662623733613637616164366134613635343135646439623464623233303330333361
62623166376337343838636564383633646432653436646236363262316438613333616236656532
37336539343130343133626262616634303561326631363564353064336130613666353531646237
66373036363764653435326638313036653135396362666439623431313930633539613965333263
39383937616165333962366134343936323930386233356662303864643236396562313339313739
64303934336164333563623263323236663531613265383833336239306435333735396666633666
30663566653361343238306133613839333962373838623633363138353331616264363064316433
36663233643134353333623264643238396438366633376530336134313365323832346663316535
66653436323338636565303133316637353338346366633564306230386632373235653836626338
3935

View File

@@ -0,0 +1,12 @@
services:
rssbridge_app:
image: rssbridge/rss-bridge:2025-08-05
container_name: rssbridge_app
restart: unless-stopped
networks:
- "web_proxy_network"
networks:
web_proxy_network:
external: true

View File

@@ -0,0 +1,44 @@
$ANSIBLE_VAULT;1.1;AES256
33396537353265633634336630353330653337623861373731613734663938633837613437366537
3439383366633266623463366530626662346338393165630a663539313066663061353635666366
61393437393131333166626165306563366661353338363138633239666566313330363331666537
3763356535396334380a386362383436363732353234333033613133383264643934306432313335
34646164323664636532663835306230386633316539373564383163346663376666633564326134
30666135626637343963383766383836653135633739636261353666303666633566346562643962
63376165636434343066306539653637343736323437653465656436323533636237643333326438
35626239323530643066363533323039393237333338316135313838643464306161646635313062
36386565626435373333393566393831366538363864313737306565343162316536353539333864
63376264643566613266373665666363366662643262616634333132386535383731396462633430
32343738343039616139343833366661303430383766376139636434616565356161396433643035
37363165383935373937346464343738643430333764336264373931616332393964346566636638
39303434343461326464623363323937396663376335316237373166306134636432376435663033
34346436623435626363636237373965633139343661623135633764303862353465306235666563
66653764666635636462636434663264646665383236343166643133613966366334653030653262
38326437313939616332636638323033346139343732653933356239306132613665376163646164
30316663643666633334653133613764396165646533636534613931663138666366316235396466
61313964396264626339306135376635633133366433303033633363396132303938363638346333
66326466326134313535393831343262363862663065323135643630316431336531373833316363
64376338653366353031333836643137333736363534363164306331313337353663653961623665
64626562366637336637353433303261303964633236356162363139396339396136393237643935
34316266326561663834353762343766363933313463313263393063343562613933393361653861
38363635323231666438366536626435373365323733663139666534636564623666356436346539
63326436386436356636633637373738343032353664323736653939346234643165313461643833
35666439613136396264313033336539313537613238393262306365656238396464373936616538
64316365616464386638313331653030346330393665353539393834346135643434363736323135
37663433326439356663633162616435313061353662373766633731636439636266666466613363
39343930386534376330663230623832643933336235636166626534366664366562356165373764
63343432323864366162376263656565646661633536666336643030363039616666343063386165
37343238303034313832393538313632396261316232376635633732656663396631323261363433
38373738363833323934353739643538376237316535623035383965613965636337646537326537
64663837643632666334393634323264613139353332306263613165383733386662366333316139
63373839346265366166333331353231663763306163323063613138323835313831303666306561
39316666343761303464333535336361333462623363633333383363303134336139356436666165
62616364373030613837353939363636653537373965613531636130383266643637333233316137
39353866366239643265366162663031346439663234363935353138323739393337313835313062
33373263326565383735366364316461323930336437623834356132346633636364313732383661
66346634613762613037386238656334616430633037343066623463313035646339313638653137
65643166316664626236633332326136303235623934306462643636373437373630346435633835
66346364393236393563623032306631396561623263653236393939313333373635303365316638
66373037333565323733656331636337336665363038353635383531386366633632363031623430
31356461663438653736316464363231303938653932613561633139316361633461626361383132
396436303634613135383839396566393135

View File

@@ -0,0 +1,24 @@
services:
transcriber_app:
# noinspection ComposeUnknownValues
image: "{{ registry_transcriber_image }}"
container_name: transcriber_app
user: "{{ user_create_result.uid }}:{{ user_create_result.group }}"
restart: unless-stopped
volumes:
- "{{ config_file }}:/config/config.toml:ro"
- "{{ data_dir }}:/data"
networks:
- "web_proxy_network"
- "monitoring_network"
environment:
- "USER_UID={{ user_create_result.uid }}"
- "USER_GID={{ user_create_result.group }}"
command: ./transcriber --config=/config/config.toml
networks:
web_proxy_network:
external: true
monitoring_network:
external: true

10
files/wakapi/backup.sh.j2 Normal file
View File

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
echo "{{ app_name }}: backup data with gobackups"
(cd "{{ base_dir }}" && gobackup perform --config "{{ gobackup_config }}")
echo "{{ app_name }}: done."

View File

@@ -0,0 +1,36 @@
# See versions: https://github.com/muety/wakapi/releases
services:
wakapi_app:
image: ghcr.io/muety/wakapi:2.16.1
container_name: wakapi_app
restart: unless-stopped
user: '{{ user_create_result.uid }}:{{ user_create_result.group }}'
networks:
- "web_proxy_network"
volumes:
- "{{ data_dir }}:/data"
environment:
WAKAPI_PUBLIC_URL: "https://wakapi.vakhrushev.me"
WAKAPI_PASSWORD_SALT: "{{ wakapi_password_salt }}"
WAKAPI_ALLOW_SIGNUP: "false"
WAKAPI_DISABLE_FRONTPAGE: "true"
WAKAPI_COOKIE_MAX_AGE: 31536000
# OIDC
# WAKAPI_OIDC_PROVIDER_NAME: "authelia"
# WAKAPI_OIDC_PROVIDER_CLIENT_ID: "{{ wakapi_oidc_client_id }}"
# WAKAPI_OIDC_PROVIDER_CLIENT_SECRET: "{{ wakapi_oidc_client_secret }}"
# WAKAPI_OIDC_PROVIDER_ENDPOINT: "https://auth.vakhrushev.me/.well-known/openid-configuration"
# Mail
WAKAPI_MAIL_SENDER: "Wakapi <wakapi@vakhrushev.me>"
WAKAPI_MAIL_PROVIDER: "smtp"
WAKAPI_MAIL_SMTP_HOST: "{{ postbox_host }}"
WAKAPI_MAIL_SMTP_PORT: "{{ postbox_port }}"
WAKAPI_MAIL_SMTP_USER: "{{ postbox_user }}"
WAKAPI_MAIL_SMTP_PASS: "{{ postbox_pass }}"
WAKAPI_MAIL_SMTP_TLS: "false"
networks:
web_proxy_network:
external: true

View File

@@ -0,0 +1,16 @@
# https://gobackup.github.io/configuration
models:
wakapi:
compress_with:
type: 'tgz'
storages:
local:
type: 'local'
path: '{{ backups_dir }}'
keep: 3
databases:
wakapi:
type: sqlite
path: "{{ (data_dir, 'wakapi.db') | path_join }}"

View File

@@ -0,0 +1,10 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
echo "{{ app_name }}: backup data with gobackups"
(cd "{{ base_dir }}" && gobackup perform --config "{{ gobackup_config }}")
echo "{{ app_name }}: done."

View File

@@ -0,0 +1,109 @@
x-common-env: &cenv
MEILI_URL: http://wanderer_search:7700
MEILI_MASTER_KEY: "{{ wanderer_melli_master_key }}"
services:
wanderer_search:
container_name: wanderer_search
image: getmeili/meilisearch:v1.20.0
user: "{{ user_create_result.uid }}:{{ user_create_result.group }}"
environment:
<<: *cenv
MEILI_NO_ANALYTICS: "true"
ports:
- "127.0.0.1:7700:7700"
networks:
- wanderer_network
volumes:
- ./data/ms_data:/meili_data
restart: unless-stopped
healthcheck:
test: curl --fail http://localhost:7700/health || exit 1
interval: 15s
retries: 10
start_period: 20s
timeout: 10s
wanderer_db:
container_name: wanderer_db
image: "flomp/wanderer-db:{{ wanderer_version }}"
user: "{{ user_create_result.uid }}:{{ user_create_result.group }}"
depends_on:
wanderer_search:
condition: service_healthy
environment:
<<: *cenv
POCKETBASE_ENCRYPTION_KEY: "{{ wanderer_pocketbase_enc_key }}"
ORIGIN: "{{ wanderer_origin }}"
ports:
- "127.0.0.1:8090:8090"
networks:
- wanderer_network
- web_proxy_network
restart: unless-stopped
volumes:
- ./data/pb_data:/pb_data
healthcheck:
test: wget --spider -q http://localhost:8090/health || exit 1
interval: 15s
retries: 10
start_period: 20s
timeout: 10s
wanderer_web:
container_name: wanderer_web
image: "flomp/wanderer-web:{{ wanderer_version }}"
user: "{{ user_create_result.uid }}:{{ user_create_result.group }}"
depends_on:
wanderer_search:
condition: service_healthy
wanderer_db:
condition: service_healthy
environment:
<<: *cenv
ORIGIN: "{{ wanderer_origin }}"
BODY_SIZE_LIMIT: Infinity
PUBLIC_POCKETBASE_URL: http://wanderer_db:8090
PUBLIC_DISABLE_SIGNUP: "true"
UPLOAD_FOLDER: /app/uploads
UPLOAD_USER:
UPLOAD_PASSWORD:
PUBLIC_VALHALLA_URL: https://valhalla1.openstreetmap.de
PUBLIC_NOMINATIM_URL: https://nominatim.openstreetmap.org
volumes:
- ./data/uploads:/app/uploads
# - ./data/about.md:/app/build/client/md/about.md
ports:
- "127.0.0.1:3000:3000"
networks:
- wanderer_network
- web_proxy_network
restart: unless-stopped
healthcheck:
test: curl --fail http://localhost:3000/ || exit 1
interval: 15s
retries: 10
start_period: 20s
timeout: 10s
# valhalla:
# image: ghcr.io/gis-ops/docker-valhalla/valhalla:latest
# ports:
# - "8002:8002"
# volumes:
# - ./data/valhalla:/custom_files
# environment:
# - tile_urls=https://download.geofabrik.de/europe/germany/bayern/oberbayern-latest.osm.pbf
# - use_tiles_ignore_pbf=True
# - force_rebuild=False
# - force_rebuild_elevation=False
# - build_elevation=True
# - build_admins=True
# - build_time_zones=True
networks:
wanderer_network:
driver: bridge
web_proxy_network:
external: true

View File

@@ -0,0 +1,32 @@
# https://gobackup.github.io/configuration
models:
application:
compress_with:
type: 'tgz'
storages:
local:
type: 'local'
path: '{{ backups_dir }}'
keep: 3
# databases:
# users:
# type: sqlite
# path: "{{ (data_dir, 'gramps_users/users.sqlite') | path_join }}"
# search_index:
# type: sqlite
# path: "{{ (data_dir, 'gramps_index/search_index.db') | path_join }}"
# sqlite:
# type: sqlite
# path: "{{ (data_dir, 'gramps_db/59a0f3d6-1c3d-4410-8c1d-1c9c6689659f/sqlite.db') | path_join }}"
# undo:
# type: sqlite
# path: "{{ (data_dir, 'gramps_db/59a0f3d6-1c3d-4410-8c1d-1c9c6689659f/undo.db') | path_join }}"
archive:
includes:
- "{{ data_dir }}"
# excludes:
# - "{{ (data_dir, 'gramps_cache') | path_join }}"
# - "{{ (data_dir, 'gramps_thumb_cache') | path_join }}"
# - "{{ (data_dir, 'gramps_tmp') | path_join }}"

View File

@@ -1,5 +1,6 @@
 #!/usr/bin/env sh
+# Must be executed for every user
 # See https://cloud.yandex.ru/docs/container-registry/tutorials/run-docker-on-vm#run
 set -eu

View File

@@ -1 +0,0 @@
192.168.50.10

25
lefthook.yml Normal file
View File

@@ -0,0 +1,25 @@
# Refer to the following link for an explanation:
# https://lefthook.dev/configuration/
glob_matcher: doublestar
templates:
av-hooks-dir: "/home/av/projects/private/git-hooks"
pre-commit:
jobs:
- name: "gitleaks"
run: "gitleaks git --staged"
- name: "check secret files"
run: "python3 {av-hooks-dir}/pre-commit/check-secrets-encrypted-with-ansible-vault.py"
- name: "format python"
glob: "**/*.py"
run: "black --quiet {staged_files}"
stage_fixed: true
- name: "mypy"
glob: "**/*.py"
run: "mypy {staged_files}"

View File

@@ -0,0 +1,48 @@
---
- name: 'Configure netdata'
ansible.builtin.import_playbook: playbook-netdata.yml
#
- name: 'Configure dozzle'
ansible.builtin.import_playbook: playbook-dozzle.yml
- name: 'Configure gitea'
ansible.builtin.import_playbook: playbook-gitea.yml
- name: 'Configure gramps'
ansible.builtin.import_playbook: playbook-gramps.yml
- name: 'Configure memos'
ansible.builtin.import_playbook: playbook-memos.yml
- name: 'Configure miniflux'
ansible.builtin.import_playbook: playbook-miniflux.yml
- name: 'Configure outline'
ansible.builtin.import_playbook: playbook-outline.yml
- name: 'Configure rssbridge'
ansible.builtin.import_playbook: playbook-rssbridge.yml
- name: 'Configure wakapi'
ansible.builtin.import_playbook: playbook-wakapi.yml
- name: 'Configure wanderer'
ansible.builtin.import_playbook: playbook-wanderer.yml
#
- name: 'Configure homepage'
ansible.builtin.import_playbook: playbook-homepage.yml
- name: 'Configure transcriber'
ansible.builtin.import_playbook: playbook-transcriber.yml
#
- name: 'Configure authelia'
ansible.builtin.import_playbook: playbook-authelia.yml
- name: 'Configure caddy proxy'
ansible.builtin.import_playbook: playbook-caddyproxy.yml

12
playbook-all-setup.yml Normal file
View File

@@ -0,0 +1,12 @@
---
- name: 'Configure system'
ansible.builtin.import_playbook: playbook-system.yml
- name: 'Configure docker'
ansible.builtin.import_playbook: playbook-docker.yml
- name: 'Configure eget applications'
ansible.builtin.import_playbook: playbook-eget.yml
- name: 'Configure backups'
ansible.builtin.import_playbook: playbook-backups.yml

View File

@@ -1,64 +0,0 @@
---
- name: "Deploy homepage application"
hosts: all
vars_files:
- vars/ports.yml
- vars/vars.yml
vars:
app_name: "homepage"
base_dir: "/home/major/applications/{{ app_name }}/"
docker_registry_prefix: "cr.yandex/crplfk0168i4o8kd7ade"
homepage_web_image: "{{ homepage_web_image | default(omit) }}"
tasks:
- name: "Check is web service imape passed"
ansible.builtin.assert:
that:
- "homepage_web_image is defined"
fail_msg: 'You must pass variable "homepage_web_image"'
- name: "Create full image name with container registry"
ansible.builtin.set_fact:
registry_homepage_web_image: "{{ (docker_registry_prefix, homepage_web_image) | path_join }}"
- name: "Push web service image to remote registry"
community.docker.docker_image:
state: present
source: local
name: "{{ homepage_web_image }}"
repository: "{{ registry_homepage_web_image }}"
push: true
delegate_to: 127.0.0.1
- name: "Create application directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
mode: "0755"
loop:
- "{{ base_dir }}"
- name: "Copy application files"
ansible.builtin.copy:
src: "{{ item }}"
dest: "{{ base_dir }}"
mode: "0644"
loop:
- "./files/{{ app_name }}/docker-compose.yml"
- name: "Set up environment variables for application"
ansible.builtin.template:
src: "env.j2"
dest: '{{ (base_dir, ".env") | path_join }}'
mode: "0644"
vars:
env_dict:
WEB_SERVICE_IMAGE: "{{ registry_homepage_web_image }}"
WEB_SERVICE_PORT: "{{ homepage_port }}"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"

96
playbook-authelia.yml Normal file
View File

@@ -0,0 +1,96 @@
---
- name: "Configure authelia application"
hosts: all
vars_files:
- vars/secrets.yml
- files/authelia/secrets.yml
vars:
app_name: "authelia"
app_user: "{{ app_name }}"
app_owner_uid: 1011
app_owner_gid: 1012
base_dir: "{{ (application_dir, app_name) | path_join }}"
data_dir: "{{ (base_dir, 'data') | path_join }}"
config_dir: "{{ (base_dir, 'config') | path_join }}"
backups_dir: "{{ (base_dir, 'backups') | path_join }}"
gobackup_config: "{{ (base_dir, 'gobackup.yml') | path_join }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_uid: "{{ app_owner_uid }}"
owner_gid: "{{ app_owner_gid }}"
owner_extra_groups: ["docker"]
- name: "Create internal application directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0700"
loop:
- "{{ base_dir }}"
- "{{ data_dir }}"
- "{{ config_dir }}"
- "{{ backups_dir }}"
- name: "Copy users file"
ansible.builtin.copy:
src: "files/{{ app_name }}/users.secrets.yml"
dest: "{{ (config_dir, 'users.yml') | path_join }}"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0600"
- name: "Copy configuration file"
ansible.builtin.template:
src: "files/{{ app_name }}/configuration.template.yml"
dest: "{{ (config_dir, 'configuration.yml') | path_join }}"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0600"
- name: "Copy gobackup config"
ansible.builtin.template:
src: "files/{{ app_name }}/gobackup.template.yml"
dest: "{{ gobackup_config }}"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Copy backup script"
ansible.builtin.template:
src: "files/{{ app_name }}/backup.template.sh"
dest: "{{ (base_dir, 'backup.sh') | path_join }}"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.template.yml"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true
tags:
- run-app
- name: "Restart application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "restarted"
tags:
- run-app

View File

@@ -3,36 +3,52 @@
   hosts: all
   vars_files:
-    - vars/vars.yml
     - vars/secrets.yml
+  vars:
+    backup_config_dir: "/etc/backup"
+    backup_config_file: "{{ (backup_config_dir, 'config.toml') | path_join }}"
+    restic_shell_script: "{{ (bin_prefix, 'restic-shell.sh') | path_join }}"
+    backup_all_script: "{{ (bin_prefix, 'backup-all.py') | path_join }}"
   tasks:
+    - name: "Create backup config directory"
+      ansible.builtin.file:
+        path: "{{ backup_config_dir }}"
+        state: "directory"
+        owner: root
+        group: root
+        mode: "0755"
+    - name: "Create backup config file"
+      ansible.builtin.template:
+        src: "files/backups/config.template.toml"
+        dest: "{{ backup_config_file }}"
+        owner: root
+        group: root
+        mode: "0640"
+    - name: "Allow user to run the backup script without a password"
+      ansible.builtin.lineinfile:
+        path: /etc/sudoers
+        state: present
+        line: "{{ primary_user }} ALL=(ALL) NOPASSWD: {{ backup_all_script }}"
+        validate: /usr/sbin/visudo -cf %s # IMPORTANT: validate the syntax before saving
+        create: no # the file must already exist
     - name: "Copy restic shell script"
       ansible.builtin.template:
         src: "files/backups/restic-shell.sh.j2"
-        dest: "{{ bin_prefix }}/restic-shell.sh"
+        dest: "{{ restic_shell_script }}"
         owner: root
         group: root
         mode: "0700"
-    - name: "Copy restic backup script"
-      ansible.builtin.template:
-        src: "files/backups/restic-backup.sh.j2"
-        dest: "{{ bin_prefix }}/restic-backup.sh"
-        owner: root
-        group: root
-        mode: "0700"
-    - name: "Create gobackup config directory"
-      ansible.builtin.file:
-        path: "{{ backup_gobackup_config | dirname }}"
-        state: directory
-        mode: "0755"
-    - name: "Copy gobackup config files"
-      ansible.builtin.template:
-        src: "files/backups/gobackup.yml.j2"
-        dest: "{{ backup_gobackup_config }}"
+    - name: "Copy backup all script"
+      ansible.builtin.copy:
+        src: "files/backups/backup-all.py"
+        dest: "{{ backup_all_script }}"
         owner: root
         group: root
         mode: "0700"
@@ -58,6 +74,6 @@
         name: "restic backup"
         minute: "0"
         hour: "1"
-        job: "/usr/local/bin/restic-backup.sh 2>&1 | logger -t backup"
+        job: "{{ backup_all_script }} 2>&1 | logger -t backup"
         cron_file: "ansible_restic_backup"
         user: "root"

View File

@@ -1,26 +0,0 @@
---
- name: "Install and configure Caddy server"
hosts: all
vars_files:
- vars/ports.yml
- vars/vars.yml
tasks:
- name: "Ensure networkd service is started (required by Caddy)."
ansible.builtin.systemd:
name: systemd-networkd
state: started
enabled: true
- name: "Install and configure Caddy server"
ansible.builtin.import_role:
name: caddy_ansible.caddy_ansible
vars:
caddy_github_token: "{{ caddy_vars.github_token }}"
caddy_config: '{{ lookup("template", "templates/Caddyfile.j2") }}'
caddy_setcap: true
caddy_systemd_capabilities_enabled: true
caddy_systemd_capabilities: "CAP_NET_BIND_SERVICE"
# Change to true to update Caddy
caddy_update: false

81
playbook-caddyproxy.yml Normal file
View File

@@ -0,0 +1,81 @@
---
- name: "Configure caddy reverse proxy service"
hosts: all
vars_files:
- vars/secrets.yml
vars:
app_name: "caddyproxy"
app_user: "{{ app_name }}"
app_owner_uid: 1010
app_owner_gid: 1011
base_dir: "{{ (application_dir, app_name) | path_join }}"
data_dir: "{{ (base_dir, 'data') | path_join }}"
config_dir: "{{ (base_dir, 'config') | path_join }}"
caddy_file_dir: "{{ (base_dir, 'caddy_file') | path_join }}"
service_name: "{{ app_name }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_uid: "{{ app_owner_uid }}"
owner_gid: "{{ app_owner_gid }}"
owner_extra_groups: ["docker"]
- name: "Create internal application directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0770"
loop:
- "{{ base_dir }}"
- "{{ data_dir }}"
- "{{ config_dir }}"
- "{{ caddy_file_dir }}"
- name: "Copy caddy file"
ansible.builtin.template:
src: "./files/{{ app_name }}/Caddyfile.j2"
dest: "{{ (caddy_file_dir, 'Caddyfile') | path_join }}"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.yml.j2"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true
tags:
- run-app
# - name: "Reload caddy"
# community.docker.docker_compose_v2_exec:
# project_src: '{{ base_dir }}'
# service: "{{ service_name }}"
# command: caddy reload --config /etc/caddy/Caddyfile
# tags:
# - run-app
- name: "Restart application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "restarted"
tags:
- run-app

View File

@@ -1,79 +0,0 @@
---
- hosts: all
vars_files:
- vars/ports.yml
- vars/vars.yml
tasks:
# Applications
- ansible.builtin.import_role:
name: docker-app
vars:
username: keycloak
extra_groups:
- docker
ssh_keys:
- '{{ lookup("file", "files/av_id_rsa.pub") }}'
env:
PROJECT_NAME: keycloak
DOCKER_PREFIX: keycloak
IMAGE_PREFIX: keycloak
CONTAINER_PREFIX: keycloak
WEB_SERVER_PORT: "127.0.0.1:{{ keycloak_port }}"
KEYCLOAK_ADMIN: "{{ keycloak.admin_login }}"
KEYCLOAK_ADMIN_PASSWORD: "{{ keycloak.admin_password }}"
USER_UID: "{{ uc_result.uid }}"
USER_GID: "{{ uc_result.group }}"
tags:
- apps
- ansible.builtin.import_role:
name: docker-app
vars:
username: outline
extra_groups:
- docker
ssh_keys:
- '{{ lookup("file", "files/av_id_rsa.pub") }}'
env:
PROJECT_NAME: outline
DOCKER_PREFIX: outline
IMAGE_PREFIX: outline
CONTAINER_PREFIX: outline
WEB_SERVER_PORT: "127.0.0.1:{{ outline_port }}"
USER_UID: "{{ uc_result.uid }}"
USER_GID: "{{ uc_result.group }}"
# Postgres
POSTGRES_USER: "{{ outline.postgres_user }}"
POSTGRES_PASSWORD: "{{ outline.postgres_password }}"
POSTGRES_DB: "outline"
# See sample https://github.com/outline/outline/blob/main/.env.sample
NODE_ENV: "production"
SECRET_KEY: "{{ outline.secret_key }}"
UTILS_SECRET: "{{ outline.utils_secret }}"
DATABASE_URL: "postgres://{{ outline.postgres_user }}:{{ outline.postgres_password }}@postgres:5432/outline"
PGSSLMODE: "disable"
REDIS_URL: "redis://redis:6379"
URL: "https://outline.vakhrushev.me"
FILE_STORAGE: "s3"
AWS_ACCESS_KEY_ID: "{{ outline.s3_access_key }}"
AWS_SECRET_ACCESS_KEY: "{{ outline.s3_secret_key }}"
AWS_REGION: "ru-central1"
AWS_S3_ACCELERATE_URL: ""
AWS_S3_UPLOAD_BUCKET_URL: "https://storage.yandexcloud.net"
AWS_S3_UPLOAD_BUCKET_NAME: "av-outline-wiki"
AWS_S3_FORCE_PATH_STYLE: "true"
AWS_S3_ACL: "private"
OIDC_CLIENT_ID: "{{ outline.oidc_client_id }}"
OIDC_CLIENT_SECRET: "{{ outline.oidc_client_secret }}"
OIDC_AUTH_URI: "https://kk.vakhrushev.me/realms/outline/protocol/openid-connect/auth"
OIDC_TOKEN_URI: "https://kk.vakhrushev.me/realms/outline/protocol/openid-connect/token"
OIDC_USERINFO_URI: "https://kk.vakhrushev.me/realms/outline/protocol/openid-connect/userinfo"
OIDC_LOGOUT_URI: "https://kk.vakhrushev.me/realms/outline/protocol/openid-connect/logout"
OIDC_USERNAME_CLAIM: "email"
OIDC_DISPLAY_NAME: "KK"
tags:
- apps

View File

@@ -3,13 +3,12 @@
   hosts: all
   vars_files:
-    - vars/ports.yml
-    - vars/vars.yml
+    - vars/secrets.yml
   tasks:
-    - name: "Install python docker lib from pip"
-      ansible.builtin.pip:
-        name: docker
+    # - name: "Install python docker lib from pip"
+    #   ansible.builtin.pip:
+    #     name: docker
     - name: "Install docker"
       ansible.builtin.import_role:
@@ -21,8 +20,21 @@
           - "docker-{{ docker_edition }}-cli"
           - "docker-{{ docker_edition }}-rootless-extras"
         docker_users:
-          - major
-    - name: "Login to yandex docker registry."
-      ansible.builtin.script:
-        cmd: "files/yandex-docker-registry-auth.sh"
+          - "{{ primary_user }}"
+    - name: Create a network for web proxy
+      community.docker.docker_network:
+        name: "web_proxy_network"
+        driver: "bridge"
+    - name: Create a network for monitoring
+      community.docker.docker_network:
+        name: "monitoring_network"
+        driver: "bridge"
+    - name: "Schedule docker image prune"
+      ansible.builtin.cron:
+        name: "docker image prune"
+        minute: "0"
+        hour: "3"
+        job: "/usr/bin/docker image prune -af"

49
playbook-dozzle.yml Normal file
View File

@@ -0,0 +1,49 @@
---
- name: "Configure dozzle application"
hosts: all
vars_files:
- vars/secrets.yml
vars:
app_name: "dozzle"
app_user: "{{ app_name }}"
app_owner_uid: 1016
app_owner_gid: 1017
base_dir: "{{ (application_dir, app_name) | path_join }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_uid: "{{ app_owner_uid }}"
owner_gid: "{{ app_owner_gid }}"
owner_extra_groups: ["docker"]
- name: "Create internal application directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0770"
loop:
- "{{ base_dir }}"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.template.yml"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true
tags:
- run-app

View File

@@ -3,8 +3,7 @@
   hosts: all
   vars_files:
-    - vars/ports.yml
-    - vars/vars.yml
+    - vars/secrets.yml
   # See: https://github.com/zyedidia/eget/releases
@@ -22,25 +21,42 @@
     - name: "Install rclone"
       ansible.builtin.command:
-        cmd: "{{ eget_bin_path }} rclone/rclone --quiet --upgrade-only --to {{ eget_install_dir }} --asset zip --tag v1.69.2"
+        cmd: >
+          {{ eget_bin_path }} rclone/rclone --quiet --upgrade-only --to {{ eget_install_dir }} --asset zip
+          --tag v1.72.0
       changed_when: false
-    - name: "Install btop"
-      ansible.builtin.command:
-        cmd: "{{ eget_bin_path }} aristocratos/btop --quiet --upgrade-only --to {{ eget_install_dir }} --tag v1.4.2"
-      changed_when: false
     - name: "Install restic"
       ansible.builtin.command:
-        cmd: "{{ eget_bin_path }} restic/restic --quiet --upgrade-only --to {{ eget_install_dir }} --tag v0.18.0"
+        cmd: >
+          {{ eget_bin_path }} restic/restic --quiet --upgrade-only --to {{ eget_install_dir }}
+          --tag v0.18.1
       changed_when: false
+    - name: "Install btop"
+      ansible.builtin.command:
+        cmd: >
+          {{ eget_bin_path }} aristocratos/btop --quiet --upgrade-only --to {{ eget_install_dir }}
+          --tag v1.4.5
+      changed_when: false
     - name: "Install gobackup"
       ansible.builtin.command:
-        cmd: "{{ eget_bin_path }} gobackup/gobackup --quiet --upgrade-only --to {{ eget_install_dir }} --tag v2.14.0"
+        cmd: >
+          {{ eget_bin_path }} gobackup/gobackup --quiet --upgrade-only --to {{ eget_install_dir }}
+          --tag v2.17.0
       changed_when: false
     - name: "Install task"
       ansible.builtin.command:
-        cmd: "{{ eget_bin_path }} go-task/task --quiet --upgrade-only --to {{ eget_install_dir }} --asset tar.gz --tag v3.43.3"
+        cmd: >
+          {{ eget_bin_path }} go-task/task --quiet --upgrade-only --to {{ eget_install_dir }} --asset tar.gz
+          --tag v3.45.5
       changed_when: false
+    - name: 'Install dust'
+      ansible.builtin.command:
+        cmd: >
+          {{ bin_prefix }}/eget bootandy/dust --quiet --upgrade-only --to {{ bin_prefix }} --asset gnu
+          --tag v1.2.3
+      changed_when: false

View File

@@ -3,13 +3,15 @@
   hosts: all
   vars_files:
-    - vars/ports.yml
-    - vars/vars.yml
+    - vars/secrets.yml
   vars:
     app_name: "gitea"
     app_user: "{{ app_name }}"
-    base_dir: "/home/{{ app_name }}"
+    app_owner_uid: 1005
+    app_owner_gid: 1006
+    base_dir: "{{ (application_dir, app_name) | path_join }}"
+    data_dir: "{{ (base_dir, 'data') | path_join }}"
     backups_dir: "{{ (base_dir, 'backups') | path_join }}"
   tasks:
@@ -18,18 +20,9 @@
         name: owner
       vars:
         owner_name: "{{ app_user }}"
-        owner_extra_groups:
-          - "docker"
-        owner_ssh_keys:
-          - "{{ lookup('file', 'files/av_id_rsa.pub') }}"
-        owner_env:
-          PROJECT_NAME: "{{ app_name }}"
-          DOCKER_PREFIX: "{{ app_name }}"
-          IMAGE_PREFIX: "{{ app_name }}"
-          CONTAINER_PREFIX: "{{ app_name }}"
-          WEB_SERVER_PORT: "127.0.0.1:{{ gitea_port }}"
-          USER_UID: "{{ user_create_result.uid }}"
-          USER_GID: "{{ user_create_result.group }}"
+        owner_uid: "{{ app_owner_uid }}"
+        owner_gid: "{{ app_owner_gid }}"
+        owner_extra_groups: ["docker"]
     - name: "Create internal application directories"
       ansible.builtin.file:
@@ -37,15 +30,16 @@
         state: "directory"
         owner: "{{ app_user }}"
         group: "{{ app_user }}"
-        mode: "0775"
+        mode: "0770"
       loop:
-        - "{{ (base_dir, 'data') | path_join }}"
+        - "{{ base_dir }}"
+        - "{{ data_dir }}"
         - "{{ backups_dir }}"
-    - name: "Copy gitea-dump script"
+    - name: "Copy backup script"
       ansible.builtin.template:
-        src: "files/{{ app_name }}/gitea-dump.sh.j2"
-        dest: "{{ base_dir }}/gitea-dump.sh"
+        src: "files/{{ app_name }}/backup.sh.j2"
+        dest: "{{ base_dir }}/backup.sh"
         owner: "{{ app_user }}"
         group: "{{ app_user }}"
         mode: "0750"
@@ -56,10 +50,12 @@
         dest: "{{ base_dir }}/docker-compose.yml"
         owner: "{{ app_user }}"
         group: "{{ app_user }}"
-        mode: "0644"
+        mode: "0640"
     - name: "Run application with docker compose"
       community.docker.docker_compose_v2:
         project_src: "{{ base_dir }}"
         state: "present"
         remove_orphans: true
+      tags:
+        - run-app

View File

@@ -3,35 +3,93 @@
   hosts: all
   vars_files:
-    - vars/ports.yml
-    - vars/vars.yml
+    - vars/secrets.yml
   vars:
     app_name: "gramps"
-    base_dir: "/home/{{ primary_user }}/applications/{{ app_name }}/"
+    app_user: "{{ app_name }}"
+    app_owner_uid: 1009
+    app_owner_gid: 1010
+    base_dir: "{{ (application_dir, app_name) | path_join }}"
+    data_dir: "{{ (base_dir, 'data') | path_join }}"
+    media_dir: "{{ (base_dir, 'media') | path_join }}"
+    cache_dir: "{{ (base_dir, 'cache') | path_join }}"
+    backups_dir: "{{ (base_dir, 'backups') | path_join }}"
+    gobackup_config: "{{ (base_dir, 'gobackup.yml') | path_join }}"
   tasks:
-    - name: "Create application directories"
+    - name: "Create user and environment"
+      ansible.builtin.import_role:
+        name: owner
+      vars:
+        owner_name: "{{ app_user }}"
+        owner_uid: "{{ app_owner_uid }}"
+        owner_gid: "{{ app_owner_gid }}"
+        owner_extra_groups: ["docker"]
+    - name: "Create application internal directories"
       ansible.builtin.file:
         path: "{{ item }}"
         state: "directory"
-        owner: "{{ primary_user }}"
-        group: "{{ primary_user }}"
-        mode: "0755"
+        owner: "{{ app_user }}"
+        group: "{{ app_user }}"
+        mode: "0750"
       loop:
         - "{{ base_dir }}"
-        - '{{ (base_dir, "data") | path_join }}'
+        - "{{ data_dir }}"
+        - "{{ media_dir }}"
+        - "{{ cache_dir }}"
+        - "{{ backups_dir }}"
+    - name: "Copy gobackup config"
+      ansible.builtin.template:
+        src: "./files/{{ app_name }}/gobackup.template.yml"
+        dest: "{{ gobackup_config }}"
+        owner: "{{ app_user }}"
+        group: "{{ app_user }}"
+        mode: "0640"
+    - name: "Copy backup script"
+      ansible.builtin.template:
+        src: "files/{{ app_name }}/backup.template.sh"
+        dest: "{{ base_dir }}/backup.sh"
+        owner: "{{ app_user }}"
+        group: "{{ app_user }}"
+        mode: "0750"
+    - name: "Create backup targets file"
+      ansible.builtin.lineinfile:
+        path: "{{ base_dir }}/backup-targets"
+        line: "{{ item }}"
+        create: true
+        owner: "{{ app_user }}"
+        group: "{{ app_user }}"
+        mode: "0750"
+      loop:
+        - "{{ data_dir }}"
+        - "{{ media_dir }}"
+        - "{{ backups_dir }}"
+    - name: "Copy rename script"
+      ansible.builtin.copy:
+        src: "files/{{ app_name }}/gramps_rename.py"
+        dest: "{{ base_dir }}/gramps_rename.py"
+        owner: "{{ app_user }}"
+        group: "{{ app_user }}"
+        mode: "0750"
     - name: "Copy docker compose file"
       ansible.builtin.template:
-        src: "./files/{{ app_name }}/docker-compose.yml.j2"
+        src: "./files/{{ app_name }}/docker-compose.template.yml"
         dest: "{{ base_dir }}/docker-compose.yml"
-        owner: "{{ primary_user }}"
-        group: "{{ primary_user }}"
-        mode: "0644"
+        owner: "{{ app_user }}"
+        group: "{{ app_user }}"
+        mode: "0640"
     - name: "Run application with docker compose"
       community.docker.docker_compose_v2:
         project_src: "{{ base_dir }}"
         state: "present"
         remove_orphans: true
+      tags:
+        - run-app

View File

@@ -0,0 +1,20 @@
---
- name: "Upload local homepage images to registry"
hosts: all
gather_facts: false
vars_files:
- vars/secrets.yml
- vars/homepage.yml
- vars/homepage.images.yml
tasks:
- name: "Push web service image to remote registry"
community.docker.docker_image:
state: present
source: local
name: "{{ homepage_nginx_image }}"
repository: "{{ registry_homepage_nginx_image }}"
push: true
delegate_to: 127.0.0.1

48
playbook-homepage.yml Normal file
View File

@@ -0,0 +1,48 @@
---
- name: "Setup and deploy homepage service"
hosts: all
vars_files:
- vars/secrets.yml
- vars/homepage.yml
- vars/homepage.images.yml
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_uid: "{{ app_owner_uid }}"
owner_gid: "{{ app_owner_gid }}"
owner_extra_groups: ["docker"]
- name: "Create application internal directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
loop:
- "{{ base_dir }}"
- name: "Login to yandex docker registry."
ansible.builtin.script:
cmd: "files/yandex-docker-registry-auth.sh"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.template.yml"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true
tags:
- run-app

82
playbook-memos.yml Normal file
View File

@@ -0,0 +1,82 @@
---
- name: "Configure memos application"
hosts: all
vars_files:
- vars/secrets.yml
vars:
app_name: "memos"
app_user: "{{ app_name }}"
app_owner_uid: 1019
app_owner_gid: 1020
base_dir: "{{ (application_dir, app_name) | path_join }}"
data_dir: "{{ (base_dir, 'data') | path_join }}"
backups_dir: "{{ (base_dir, 'backups') | path_join }}"
gobackup_config: "{{ (base_dir, 'gobackup.yml') | path_join }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_uid: "{{ app_owner_uid }}"
owner_gid: "{{ app_owner_gid }}"
owner_extra_groups: ["docker"]
- name: "Create application internal directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
loop:
- "{{ base_dir }}"
- "{{ data_dir }}"
- "{{ backups_dir }}"
- name: "Copy gobackup config"
ansible.builtin.template:
src: "./files/{{ app_name }}/gobackup.yml.j2"
dest: "{{ gobackup_config }}"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Copy backup script"
ansible.builtin.template:
src: "files/{{ app_name }}/backup.sh.j2"
dest: "{{ base_dir }}/backup.sh"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
- name: "Create backup targets file"
ansible.builtin.lineinfile:
path: "{{ base_dir }}/backup-targets"
line: "{{ item }}"
create: true
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
loop:
- "{{ data_dir }}"
- "{{ backups_dir }}"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.template.yml"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true
tags:
- run-app

81
playbook-miniflux.yml Normal file
View File

@@ -0,0 +1,81 @@
---
- name: "Configure miniflux application"
hosts: all
vars_files:
- vars/secrets.yml
vars:
app_name: "miniflux"
app_user: "{{ app_name }}"
app_owner_uid: 1013
app_owner_gid: 1014
base_dir: "{{ (application_dir, app_name) | path_join }}"
data_dir: "{{ (base_dir, 'data') | path_join }}"
secrets_dir: "{{ (base_dir, 'secrets') | path_join }}"
postgres_data_dir: "{{ (base_dir, 'data', 'postgres') | path_join }}"
postgres_backups_dir: "{{ (base_dir, 'backups', 'postgres') | path_join }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_uid: "{{ app_owner_uid }}"
owner_gid: "{{ app_owner_gid }}"
owner_extra_groups: ["docker"]
- name: "Create internal directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0770"
loop:
- "{{ base_dir }}"
- "{{ data_dir }}"
- "{{ secrets_dir }}"
- "{{ postgres_data_dir }}"
- "{{ postgres_backups_dir }}"
- name: "Copy secrets"
ansible.builtin.import_role:
name: secrets
vars:
secrets_dest: "{{ secrets_dir }}"
secrets_user: "{{ app_user }}"
secrets_group: "{{ app_user }}"
secrets_vars:
- "miniflux_database_url"
- "miniflux_admin_user"
- "miniflux_admin_password"
- "miniflux_oidc_client_id"
- "miniflux_oidc_client_secret"
- "miniflux_postgres_password"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.template.yml"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Copy backup script"
ansible.builtin.template:
src: "./files/{{ app_name }}/backup.template.sh"
dest: "{{ base_dir }}/backup.sh"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
recreate: "always"
remove_orphans: true
tags:
- run-app

View File

@@ -3,15 +3,106 @@
   hosts: all
   vars_files:
-    - vars/ports.yml
-    - vars/vars.yml
+    - vars/secrets.yml
+  vars:
+    app_name: "netdata"
+    app_user: "{{ app_name }}"
+    app_owner_uid: 1012
+    app_owner_gid: 1013
+    base_dir: "{{ (application_dir, app_name) | path_join }}"
+    config_dir: "{{ (base_dir, 'config') | path_join }}"
+    config_go_d_dir: "{{ (config_dir, 'go.d') | path_join }}"
+    data_dir: "{{ (base_dir, 'data') | path_join }}"
   tasks:
-    - name: "Install Netdata from role"
+    - name: "Create user and environment"
       ansible.builtin.import_role:
-        name: netdata
+        name: owner
       vars:
-        netdata_version: "v2.4.0"
-        netdata_exposed_port: "{{ netdata_port }}"
+        owner_name: "{{ app_user }}"
+        owner_uid: "{{ app_owner_uid }}"
+        owner_gid: "{{ app_owner_gid }}"
+        owner_extra_groups: ["docker"]
+    - name: "Create internal application directories"
+      ansible.builtin.file:
+        path: "{{ item }}"
+        state: "directory"
+        owner: "{{ app_user }}"
+        group: "{{ app_user }}"
+        mode: "0770"
+      loop:
+        - "{{ base_dir }}"
+        - "{{ data_dir }}"
+        - "{{ config_dir }}"
+        - "{{ config_go_d_dir }}"
+    - name: "Copy netdata config file"
+      ansible.builtin.template:
+        src: "files/{{ app_name }}/netdata.template.conf"
+        dest: "{{ config_dir }}/netdata.conf"
+        owner: "{{ app_user }}"
+        group: "{{ app_user }}"
+        mode: "0640"
+    - name: "Find all go.d plugin config files"
+      ansible.builtin.find:
+        paths: "files/{{ app_name }}/go.d"
+        file_type: file
+      delegate_to: localhost
+      register: go_d_source_files
+    - name: "Template all go.d plugin config files"
+      ansible.builtin.template:
+        src: "{{ item.path }}"
+        dest: "{{ config_go_d_dir }}/{{ item.path | basename }}"
+        owner: "{{ app_user }}"
+        group: "{{ app_user }}"
+        mode: "0640"
+      loop: "{{ go_d_source_files.files }}"
+    - name: "Find existing go.d config files on server"
+      ansible.builtin.find:
+        paths: "{{ config_go_d_dir }}"
+        file_type: file
+      register: go_d_existing_files
+    - name: "Remove go.d config files that don't exist in source"
+      ansible.builtin.file:
+        path: "{{ item.path }}"
+        state: absent
+      loop: "{{ go_d_existing_files.files }}"
+      when: (item.path | basename) not in (go_d_source_files.files | map(attribute='path') | map('basename') | list)
+    - name: "Grab docker group id."
+      ansible.builtin.shell:
+        cmd: |
+          set -o pipefail
+          grep docker /etc/group | cut -d ':' -f 3
+        executable: /bin/bash
+      register: netdata_docker_group_output
+      changed_when: netdata_docker_group_output.rc != 0
+    - name: "Copy docker compose file"
+      ansible.builtin.template:
+        src: "./files/{{ app_name }}/docker-compose.template.yml"
+        dest: "{{ base_dir }}/docker-compose.yml"
+        owner: "{{ app_user }}"
+        group: "{{ app_user }}"
+        mode: "0640"
+    - name: "Run application with docker compose"
+      community.docker.docker_compose_v2:
+        project_src: "{{ base_dir }}"
+        state: "present"
+        remove_orphans: true
       tags:
-        - monitoring
+        - run-app
+    - name: "Restart application with docker compose"
+      community.docker.docker_compose_v2:
+        project_src: "{{ base_dir }}"
+        state: "restarted"
+      tags:
+        - run-app

63
playbook-outline.yml Normal file
View File

@@ -0,0 +1,63 @@
---
- name: "Configure outline application"
hosts: all
vars_files:
- vars/secrets.yml
vars:
app_name: "outline"
app_user: "{{ app_name }}"
app_owner_uid: 1007
app_owner_gid: 1008
base_dir: "{{ (application_dir, app_name) | path_join }}"
data_dir: "{{ (base_dir, 'data') | path_join }}"
postgres_data_dir: "{{ (base_dir, 'data', 'postgres') | path_join }}"
postgres_backups_dir: "{{ (base_dir, 'backups', 'postgres') | path_join }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_uid: "{{ app_owner_uid }}"
owner_gid: "{{ app_owner_gid }}"
owner_extra_groups: ["docker"]
- name: "Create internal directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0770"
loop:
- "{{ base_dir }}"
- "{{ data_dir }}"
- "{{ postgres_data_dir }}"
- "{{ postgres_backups_dir }}"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.template.yml"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Copy backup script"
ansible.builtin.template:
src: "./files/{{ app_name }}/backup.template.sh"
dest: "{{ base_dir }}/backup.sh"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true
tags:
- run-app

View File

@@ -3,8 +3,7 @@
   hosts: all
   vars_files:
-    - vars/ports.yml
-    - vars/vars.yml
+    - vars/secrets.yml
   vars:
     user_name: "<put-name-here>"
@@ -25,3 +24,8 @@
       ansible.builtin.file:
         path: "/var/www/{{ user_name }}"
         state: absent
+    - name: "Remove home dir"
+      ansible.builtin.file:
+        path: "/home/{{ user_name }}"
+        state: absent

49
playbook-rssbridge.yml Normal file
View File

@@ -0,0 +1,49 @@
---
- name: "Configure rssbridge application"
hosts: all
vars_files:
- vars/secrets.yml
vars:
app_name: "rssbridge"
app_user: "{{ app_name }}"
app_owner_uid: 1014
app_owner_gid: 1015
base_dir: "{{ (application_dir, app_name) | path_join }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_uid: "{{ app_owner_uid }}"
owner_gid: "{{ app_owner_gid }}"
owner_extra_groups: ["docker"]
- name: "Create internal application directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0770"
loop:
- "{{ base_dir }}"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.yml.j2"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true
tags:
- run-app

View File

@@ -3,8 +3,7 @@
   hosts: all
   vars_files:
-    - vars/ports.yml
-    - vars/vars.yml
+    - vars/secrets.yml
   vars:
     apt_packages:
@@ -25,21 +24,13 @@
         name: "{{ apt_packages }}"
         update_cache: true
-    - name: "Configure timezone"
-      ansible.builtin.import_role:
-        name: yatesr.timezone
-      vars:
-        timezone: UTC
-      tags:
-        - skip_ansible_lint
     - name: "Configure security settings"
       ansible.builtin.import_role:
         name: geerlingguy.security
       vars:
         security_ssh_permit_root_login: "yes"
         security_autoupdate_enabled: "no"
-        security_fail2ban_enabled: "yes"
+        security_fail2ban_enabled: true
     - name: "Copy keep files script"
       ansible.builtin.copy:
@@ -48,3 +39,20 @@
         owner: root
         group: root
         mode: "0755"
+    - name: 'Create directory for mount'
+      ansible.builtin.file:
+        path: '/mnt/applications'
+        state: 'directory'
+        mode: '0755'
+      tags:
+        - mount-storage
+    - name: 'Mount external storages'
+      ansible.posix.mount:
+        path: '/mnt/applications'
+        src: 'UUID=3942bffd-8328-4536-8e88-07926fb17d17'
+        fstype: ext4
+        state: mounted
+      tags:
+        - mount-storage

View File

@@ -0,0 +1,20 @@
---
- name: "Upload local transcriber images to registry"
hosts: all
gather_facts: false
vars_files:
- vars/secrets.yml
- vars/transcriber.yml
- vars/transcriber.images.yml
tasks:
- name: "Push web service image to remote registry"
community.docker.docker_image:
state: present
source: local
name: "{{ transcriber_image }}"
repository: "{{ registry_transcriber_image }}"
push: true
delegate_to: 127.0.0.1

59
playbook-transcriber.yml Normal file
View File

@@ -0,0 +1,59 @@
---
- name: "Deploy transcriber application"
hosts: all
vars_files:
- vars/secrets.yml
- vars/transcriber.yml
- vars/transcriber.images.yml
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_uid: "{{ app_owner_uid }}"
owner_gid: "{{ app_owner_gid }}"
owner_extra_groups: ["docker"]
- name: "Create application internal directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
loop:
- "{{ base_dir }}"
- "{{ config_dir }}"
- "{{ data_dir }}"
- "{{ backups_dir }}"
- name: "Copy configuration files (templates)"
ansible.builtin.copy:
src: "files/{{ app_name }}/config.secrets.toml"
dest: "{{ config_file }}"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0600"
- name: "Login to yandex docker registry."
ansible.builtin.script:
cmd: "files/yandex-docker-registry-auth.sh"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.template.yml"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true
tags:
- run-app

View File

@@ -3,8 +3,7 @@
   hosts: all
   vars_files:
-    - vars/ports.yml
-    - vars/vars.yml
+    - vars/secrets.yml
   tasks:
     - name: Perform an upgrade of packages
@@ -25,3 +24,18 @@
     - name: Remove dependencies that are no longer required
       ansible.builtin.apt:
         autoremove: true
+    - name: Check if Docker is available
+      ansible.builtin.stat:
+        path: /usr/bin/docker
+      register: docker_exists
+    - name: Clean up unnecessary Docker data
+      ansible.builtin.command:
+        cmd: docker system prune --all --force
+      register: docker_prune_result
+      when: docker_exists.stat.exists
+      failed_when:
+        - docker_prune_result.rc is defined
+        - docker_prune_result.rc != 0
+      changed_when: "'Total reclaimed space' in docker_prune_result.stdout"

70
playbook-wakapi.yml Normal file
View File

@@ -0,0 +1,70 @@
---
- name: "Configure wakapi application"
hosts: all
vars_files:
- vars/secrets.yml
vars:
app_name: "wakapi"
app_user: "{{ app_name }}"
app_owner_uid: 1015
app_owner_gid: 1016
base_dir: "{{ (application_dir, app_name) | path_join }}"
data_dir: "{{ (base_dir, 'data') | path_join }}"
backups_dir: "{{ (base_dir, 'backups') | path_join }}"
gobackup_config: "{{ (base_dir, 'gobackup.yml') | path_join }}"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_uid: "{{ app_owner_uid }}"
owner_gid: "{{ app_owner_gid }}"
owner_extra_groups: ["docker"]
- name: "Create application internal directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
loop:
- "{{ base_dir }}"
- "{{ data_dir }}"
- "{{ backups_dir }}"
- name: "Copy gobackup config"
ansible.builtin.template:
src: "./files/{{ app_name }}/gobackup.yml.j2"
dest: "{{ gobackup_config }}"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Copy backup script"
ansible.builtin.template:
src: "files/{{ app_name }}/backup.sh.j2"
dest: "{{ base_dir }}/backup.sh"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.yml.j2"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true
tags:
- run-app

92
playbook-wanderer.yml Normal file
View File

@@ -0,0 +1,92 @@
---
- name: "Configure gramps application"
hosts: all
vars_files:
- vars/secrets.yml
vars:
app_name: "wanderer"
app_user: "{{ app_name }}"
app_owner_uid: 1018
app_owner_gid: 1019
base_dir: "{{ (application_dir, app_name) | path_join }}"
data_dir: "{{ (base_dir, 'data') | path_join }}"
backups_dir: "{{ (base_dir, 'backups') | path_join }}"
gobackup_config: "{{ (base_dir, 'gobackup.yml') | path_join }}"
wanderer_version: "v0.18.3"
wanderer_origin: "https://wanderer.vakhrushev.me"
tasks:
- name: "Create user and environment"
ansible.builtin.import_role:
name: owner
vars:
owner_name: "{{ app_user }}"
owner_uid: "{{ app_owner_uid }}"
owner_gid: "{{ app_owner_gid }}"
owner_extra_groups: ["docker"]
- name: "Create application internal directories"
ansible.builtin.file:
path: "{{ item }}"
state: "directory"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
loop:
- "{{ base_dir }}"
- "{{ data_dir }}"
- "{{ (data_dir, 'pb_data') | path_join }}"
- "{{ (data_dir, 'uploads') | path_join }}"
- "{{ (data_dir, 'ms_data') | path_join }}"
- "{{ backups_dir }}"
- name: "Copy gobackup config"
ansible.builtin.template:
src: "./files/{{ app_name }}/gobackup.template.yml"
dest: "{{ gobackup_config }}"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
# - name: "Copy backup script"
# ansible.builtin.template:
# src: "files/{{ app_name }}/backup.template.sh"
# dest: "{{ base_dir }}/backup.sh"
# owner: "{{ app_user }}"
# group: "{{ app_user }}"
# mode: "0750"
- name: "Disable backup script"
ansible.builtin.file:
dest: "{{ base_dir }}/backup.sh"
state: absent
- name: "Create backup targets file"
ansible.builtin.lineinfile:
path: "{{ base_dir }}/backup-targets"
line: "{{ item }}"
create: true
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0750"
loop:
- "{{ data_dir }}"
- name: "Copy docker compose file"
ansible.builtin.template:
src: "./files/{{ app_name }}/docker-compose.template.yml"
dest: "{{ base_dir }}/docker-compose.yml"
owner: "{{ app_user }}"
group: "{{ app_user }}"
mode: "0640"
- name: "Run application with docker compose"
community.docker.docker_compose_v2:
project_src: "{{ base_dir }}"
state: "present"
remove_orphans: true
tags:
- run-app

View File

@@ -2,6 +2,6 @@
 ungrouped:
   hosts:
     server:
-      ansible_host: '158.160.46.255'
-      ansible_user: 'major'
+      ansible_host: "158.160.46.255"
+      ansible_user: "major"
       ansible_become: true

View File

@@ -3,10 +3,7 @@
   version: 1.2.2
 - src: geerlingguy.security
-  version: 2.4.0
+  version: 3.0.0
 - src: geerlingguy.docker
-  version: 7.4.3
+  version: 7.4.7
-- src: caddy_ansible.caddy_ansible
-  version: v3.2.0

View File

@@ -1,18 +0,0 @@
---
- name: 'Create owner.'
import_role:
name: owner
vars:
owner_name: '{{ username }}'
owner_group: '{{ username }}'
owner_extra_groups: '{{ extra_groups | default([]) }}'
owner_ssh_keys: '{{ ssh_keys | default([]) }}'
owner_env: '{{ env | default({}) }}'
- name: 'Create web dir.'
file:
path: '/var/www/{{ username }}'
state: directory
owner: '{{ username }}'
group: '{{ username }}'
recurse: True

View File

@@ -1,8 +1,8 @@
 ---
 # defaults file for eget
-eget_version: '1.3.4'
-eget_download_url: 'https://github.com/zyedidia/eget/releases/download/v{{ eget_version }}/eget-{{ eget_version }}-linux_amd64.tar.gz'
-eget_install_path: '/usr/bin/eget'
+eget_version: "1.3.4"
+eget_download_url: "https://github.com/zyedidia/eget/releases/download/v{{ eget_version }}/eget-{{ eget_version }}-linux_amd64.tar.gz" # yamllint disable-line rule:line-length
+eget_install_path: "/usr/bin/eget"
 eget_download_dest: '/tmp/{{ eget_download_url | split("/") | last }}'
 eget_unarchive_dest: '{{ eget_download_dest | regex_replace("(\.tar\.gz|\.zip)$", "") }}'

View File

@@ -1,6 +1,7 @@
+---
 galaxy_info:
-  author: 'Anton Vakhrushev'
-  description: 'Role for installation eget utility'
+  author: "Anton Vakhrushev"
+  description: "Role for installation eget utility"
   # If the issue tracker for your role is not on github, uncomment the
   # next line and provide a value
@@ -13,9 +14,9 @@ galaxy_info:
   # - GPL-3.0-only
   # - Apache-2.0
   # - CC-BY-4.0
-  license: 'MIT'
-  min_ansible_version: '2.1'
+  license: "MIT"
+  min_ansible_version: "2.1"
   # If this a Container Enabled role, provide the minimum Ansible Container version.
   # min_ansible_container_version:

View File

@@ -1,30 +1,30 @@
 ---
 - name: 'Download eget from url "{{ eget_download_url }}"'
   ansible.builtin.get_url:
-    url: '{{ eget_download_url }}'
-    dest: '{{ eget_download_dest }}'
-    mode: '0600'
+    url: "{{ eget_download_url }}"
+    dest: "{{ eget_download_dest }}"
+    mode: "0600"
-- name: 'Unarchive eget'
+- name: "Unarchive eget"
   ansible.builtin.unarchive:
-    src: '{{ eget_download_dest }}'
-    dest: '/tmp'
+    src: "{{ eget_download_dest }}"
+    dest: "/tmp"
     list_files: true
     remote_src: true
-- name: 'Install eget binary'
+- name: "Install eget binary"
   ansible.builtin.copy:
     src: '{{ (eget_unarchive_dest, "eget") | path_join }}'
-    dest: '{{ eget_install_path }}'
-    mode: '0755'
+    dest: "{{ eget_install_path }}"
+    mode: "0755"
     remote_src: true
-- name: 'Remove temporary files'
+- name: "Remove temporary files"
   ansible.builtin.file:
-    path: '{{ eget_download_dest }}'
+    path: "{{ eget_download_dest }}"
     state: absent
-- name: 'Remove temporary directories'
+- name: "Remove temporary directories"
   ansible.builtin.file:
-    path: '{{ eget_unarchive_dest }}'
+    path: "{{ eget_unarchive_dest }}"
     state: absent

View File

@@ -1,24 +1,24 @@
 ---
 # tasks file for eget
-- name: 'Check if eget installed'
+- name: "Check if eget installed"
   ansible.builtin.command:
-    cmd: '{{ eget_install_path }} --version'
+    cmd: "{{ eget_install_path }} --version"
   register: eget_installed_output
   ignore_errors: true
   changed_when: false
-- name: 'Check eget installed version'
+- name: "Check eget installed version"
   ansible.builtin.set_fact:
-    eget_need_install: '{{ not (eget_installed_output.rc == 0 and eget_version in eget_installed_output.stdout) }}'
+    eget_need_install: "{{ not (eget_installed_output.rc == 0 and eget_version in eget_installed_output.stdout) }}"
-- name: 'Assert that installation flag is defined'
+- name: "Assert that installation flag is defined"
   ansible.builtin.assert:
     that:
       - eget_need_install is defined
       - eget_need_install is boolean
-- name: 'Download eget and install eget'
+- name: "Download eget and install eget"
   ansible.builtin.include_tasks:
-    file: 'install.yml'
+    file: "install.yml"
   when: eget_need_install

Some files were not shown because too many files have changed in this diff Show More