550 Commits

SHA1 Message Date
563f85bc6b add matrix deployment 2024-10-15 17:52:03 +02:00
804adb989e Merge pull request 'Update Helm release grafana to v8.5.4' (#210) from renovate/grafana-8.x into main
Reviewed-on: #210
2024-10-14 19:09:38 +00:00
721e3e2c72 Update Helm release grafana to v8.5.4 2024-10-14 16:31:25 +00:00
aeb54dd2c5 Merge pull request 'Update ghcr.io/advplyr/audiobookshelf Docker tag to v2.15.0' (#209) from renovate/ghcr.io-advplyr-audiobookshelf-2.x into main
Reviewed-on: #209
2024-10-13 10:07:01 +00:00
36aa358613 Update ghcr.io/advplyr/audiobookshelf Docker tag to v2.15.0 2024-10-12 22:03:48 +00:00
62d03494e6 Merge pull request 'Update Helm release traefik to v32.1.1' (#208) from renovate/traefik-32.x into main
Reviewed-on: #208
2024-10-12 09:27:17 +00:00
645c347667 Update Helm release traefik to v32.1.1 2024-10-11 15:31:22 +00:00
0287c5eb0e Merge pull request 'Update Helm release authelia to v0.9.9' (#205) from renovate/authelia-0.x into main
Reviewed-on: #205
2024-10-08 22:06:32 +00:00
eace4c4f28 Merge pull request 'Update actualbudget/actual-server Docker tag to v24.10.1' (#207) from renovate/actualbudget-actual-server-24.x into main
Reviewed-on: #207
2024-10-08 22:05:39 +00:00
c81bbac2c5 Update actualbudget/actual-server Docker tag to v24.10.1 2024-10-08 18:04:02 +00:00
88e9ebc916 update immich 2024-10-08 11:02:42 +02:00
06b7b25ef7 Update Helm release authelia to v0.9.9 2024-10-08 09:01:22 +00:00
ee9334e753 Merge pull request 'Update actualbudget/actual-server Docker tag to v24.10.0' (#202) from renovate/actualbudget-actual-server-24.x into main
Reviewed-on: #202
2024-10-08 08:56:38 +00:00
dc30937c5e Merge pull request 'Update ghcr.io/advplyr/audiobookshelf Docker tag to v2.14.0' (#206) from renovate/ghcr.io-advplyr-audiobookshelf-2.x into main
Reviewed-on: #206
2024-10-08 08:56:10 +00:00
4a6d126f8e oauth for paperless 2024-10-06 14:57:24 +02:00
968303ea38 oauth for gitea 2024-10-06 13:47:43 +02:00
5148aca7ef Update ghcr.io/advplyr/audiobookshelf Docker tag to v2.14.0 2024-10-06 10:33:51 +00:00
3b4b9ae7c5 Merge pull request 'Update Helm release traefik to v32.1.0' (#204) from renovate/traefik-32.x into main
Reviewed-on: #204
2024-10-06 10:08:59 +00:00
dc59884e66 Merge pull request 'Update homeassistant/home-assistant Docker tag to v2024.10' (#198) from renovate/homeassistant-home-assistant-2024.x into main
Reviewed-on: #198
2024-10-06 10:03:37 +00:00
bb133d1061 Merge pull request 'Update Helm release grafana to v8.5.2' (#196) from renovate/grafana-8.x into main
Reviewed-on: #196
2024-10-06 10:03:13 +00:00
ad9dedb009 Merge pull request 'Update adguard/adguardhome Docker tag to v0.107.53' (#201) from renovate/adguard-adguardhome-0.x into main
Reviewed-on: #201
2024-10-06 10:02:53 +00:00
a44e84b8cb sso for argocd 2024-10-05 17:28:25 +02:00
37532f10ce smtp for authelia 2024-10-05 14:41:46 +02:00
888bd97c97 some more fixes 2024-10-04 18:48:52 +02:00
73feabe55c make gitea use cnpg cluster 2024-10-04 17:45:47 +02:00
8fc72e3164 add cnpg to gitea 2024-10-04 17:27:04 +02:00
7b392ac739 add mealie to sso 2024-10-04 16:06:47 +02:00
a94389bdcc use authelia as login source 2024-10-04 12:46:51 +02:00
3fa6e211fd Update Helm release traefik to v32.1.0 2024-10-04 10:04:30 +00:00
acd8c0e26a Update actualbudget/actual-server Docker tag to v24.10.0 2024-10-03 18:04:23 +00:00
7e989229ce Update adguard/adguardhome Docker tag to v0.107.53 2024-10-03 13:04:47 +00:00
3d4319377e Update homeassistant/home-assistant Docker tag to v2024.10 2024-10-02 18:34:17 +00:00
e1024cadba Update Helm release grafana to v8.5.2 2024-10-02 08:01:23 +00:00
140aca08da allow paperless to process signed documents 2024-10-01 12:11:15 +02:00
39de895f4c Merge pull request 'Update Helm release traefik to v32' (#194) from renovate/traefik-32.x into main
Reviewed-on: #194
2024-10-01 08:23:21 +00:00
0011cdb33a Merge pull request 'Update octodns/octodns Docker tag to v2024.09' (#188) from renovate/octodns-octodns-2024.x into main
Reviewed-on: #188
2024-10-01 08:18:10 +00:00
a85bbd0400 Merge pull request 'Update owncloud/ocis Docker tag to v5.0.8' (#195) from renovate/owncloud-ocis-5.x into main
Reviewed-on: #195
2024-10-01 08:17:50 +00:00
0be3ea17ca Update owncloud/ocis Docker tag to v5.0.8 2024-09-30 16:01:28 +00:00
21cef5b45a more backup fixes 2024-09-30 15:39:34 +02:00
07c3a0f086 update backup naming 2024-09-30 15:31:30 +02:00
4f3e35acf8 add paperless ingress 2024-09-30 15:22:20 +02:00
b81eee425e add paperless deployment 2024-09-30 15:21:24 +02:00
1a8f52cc58 update immich 2024-09-30 15:17:02 +02:00
4fb7234df8 switch to backblaze for backups 2024-09-30 15:15:24 +02:00
ba4900c257 Update Helm release traefik to v32 2024-09-27 09:31:33 +00:00
9f939b16bc update immich 2024-09-26 16:56:36 +02:00
173f7a319c Merge pull request 'Update Helm release immich to v0.7.2' (#181) from renovate/immich-0.x into main
Reviewed-on: #181
2024-09-24 10:32:51 +00:00
284dff3040 Merge pull request 'Update Helm release gitea to v10.4.1' (#189) from renovate/gitea-10.x into main
Reviewed-on: #189
2024-09-24 10:32:39 +00:00
b4529f52fe Merge pull request 'Update Helm release traefik to v31.1.1' (#193) from renovate/traefik-31.x into main
Reviewed-on: #193
2024-09-24 10:32:27 +00:00
6eac191db3 Merge pull request 'Update ghcr.io/advplyr/audiobookshelf Docker tag to v2.13.4' (#192) from renovate/ghcr.io-advplyr-audiobookshelf-2.x into main
Reviewed-on: #192
2024-09-24 10:32:10 +00:00
ed53eeef71 Update Helm release traefik to v31.1.1 2024-09-20 08:33:30 +00:00
b10aced1e1 update grafana sealedsecret 2024-09-19 18:59:12 +02:00
6fcd66ff71 Update ghcr.io/advplyr/audiobookshelf Docker tag to v2.13.4 2024-09-17 12:01:48 +00:00
60077df128 add audiobookshelf 2024-09-17 13:42:33 +02:00
dacb84ee59 allow prune to work with stale locks 2024-09-17 10:24:10 +02:00
40146b69d8 better immich postgres-vectors handling 2024-09-15 19:25:54 +02:00
1a3cd7febd reseal secrets 2024-09-13 15:08:51 +02:00
fad28554bb bump traefik crds 2024-09-13 11:49:13 +02:00
d921738728 Update Helm release gitea to v10.4.1 2024-09-11 13:31:20 +00:00
f012b6979c Update octodns/octodns Docker tag to v2024.09 2024-09-10 16:01:20 +00:00
3bb863dd07 bump immich version 2024-09-09 17:49:39 +02:00
11ab97db50 Merge pull request 'Update jellyfin/jellyfin Docker tag to v10.9.11' (#187) from renovate/jellyfin-jellyfin-10.x into main
Reviewed-on: #187
2024-09-09 10:50:15 +00:00
59bc6540c6 Update jellyfin/jellyfin Docker tag to v10.9.11 2024-09-07 22:31:09 +00:00
fd6e5f50de Merge pull request 'Update Helm release cloudnative-pg to v0.22.0' (#178) from renovate/cloudnative-pg-0.x into main
Reviewed-on: #178
2024-09-07 11:07:05 +00:00
bc0a4186b3 Merge pull request 'Update Helm release traefik to v31' (#182) from renovate/traefik-31.x into main
Reviewed-on: #182
2024-09-05 18:42:17 +00:00
730f8b5121 Merge pull request 'Update actualbudget/actual-server Docker tag to v24.9.0' (#183) from renovate/actualbudget-actual-server-24.x into main
Reviewed-on: #183
2024-09-05 18:41:41 +00:00
86911f133f Merge pull request 'Update Helm release grafana to v8.5.1' (#184) from renovate/grafana-8.x into main
Reviewed-on: #184
2024-09-05 18:41:29 +00:00
de9ac31dbe Update Helm release grafana to v8.5.1 2024-09-05 18:36:19 +00:00
73b9e609dd Merge pull request 'Update owncloud/ocis Docker tag to v5.0.7' (#186) from renovate/owncloud-ocis-5.x into main
Reviewed-on: #186
2024-09-05 18:33:05 +00:00
ae94d3a9a7 Update owncloud/ocis Docker tag to v5.0.7 2024-09-04 21:31:33 +00:00
d077b8fdd8 Merge pull request 'Update homeassistant/home-assistant Docker tag to v2024.9' (#185) from renovate/homeassistant-home-assistant-2024.x into main
Reviewed-on: #185
2024-09-04 20:04:20 +00:00
122e219397 Update homeassistant/home-assistant Docker tag to v2024.9 2024-09-04 18:31:40 +00:00
49073861bc Update actualbudget/actual-server Docker tag to v24.9.0 2024-09-03 17:31:18 +00:00
7ba629e826 Update Helm release traefik to v31 2024-09-03 15:01:23 +00:00
7a872b76f8 bump immich version 2024-09-03 10:35:07 +02:00
e5fa3f2072 Update Helm release immich to v0.7.2 2024-08-30 11:31:07 +00:00
9d1160208f Merge pull request 'Update Helm release grafana to v8.5.0' (#179) from renovate/grafana-8.x into main
Reviewed-on: #179
2024-08-29 09:25:47 +00:00
232952b63e Update Helm release grafana to v8.5.0 2024-08-29 09:25:34 +00:00
79aee6b145 Merge pull request 'Update jellyfin/jellyfin Docker tag to v10.9.10' (#180) from renovate/jellyfin-jellyfin-10.x into main
Reviewed-on: #180
2024-08-29 09:25:20 +00:00
a88968f192 Update jellyfin/jellyfin Docker tag to v10.9.10 2024-08-25 07:01:23 +00:00
8316e39ff7 Merge pull request 'Update ghcr.io/mealie-recipes/mealie Docker tag to v1.12.0' (#177) from renovate/ghcr.io-mealie-recipes-mealie-1.x into main
Reviewed-on: #177
2024-08-23 11:09:01 +00:00
61802b7ec0 Update ghcr.io/mealie-recipes/mealie Docker tag to v1.12.0 2024-08-23 11:08:35 +00:00
87ea82b16d Merge pull request 'Update Helm release grafana to v8.4.7' (#176) from renovate/grafana-8.x into main
Reviewed-on: #176
2024-08-23 11:08:00 +00:00
2596d698d4 Update Helm release cloudnative-pg to v0.22.0 2024-08-22 16:01:28 +00:00
f7b046844e Update Helm release grafana to v8.4.7 2024-08-22 01:31:12 +00:00
b0a802bffc Merge pull request 'Update Helm release cloudnative-pg to v0.21.6' (#161) from renovate/cloudnative-pg-0.x into main
Reviewed-on: #161
2024-08-15 12:12:46 +00:00
b1e3288b94 Update Helm release cloudnative-pg to v0.21.6 2024-08-15 12:11:02 +00:00
02bb4d9f76 Merge pull request 'Update octodns/octodns Docker tag to v2024.08' (#170) from renovate/octodns-octodns-2024.x into main
Reviewed-on: #170
2024-08-15 11:57:32 +00:00
86ac349c5d Update octodns/octodns Docker tag to v2024.08 2024-08-15 11:57:18 +00:00
686525eeff Merge pull request 'Update quay.io/thanos/thanos Docker tag to v0.36.1' (#165) from renovate/quay.io-thanos-thanos-0.x into main
Reviewed-on: #165
2024-08-15 11:57:03 +00:00
39d351e8a1 Update quay.io/thanos/thanos Docker tag to v0.36.1 2024-08-15 11:56:48 +00:00
c152fd117d Merge pull request 'Update Helm release grafana to v8.4.4' (#171) from renovate/grafana-8.x into main
Reviewed-on: #171
2024-08-15 08:27:44 +00:00
6958253c96 Update Helm release grafana to v8.4.4 2024-08-10 06:31:11 +00:00
16074c2026 Merge pull request 'Update docker.io/bitnami/sealed-secrets-controller Docker tag to v0.27.1' (#151) from renovate/docker.io-bitnami-sealed-secrets-controller-0.x into main
Reviewed-on: #151
2024-08-07 22:51:47 +00:00
fd00dbf893 Update docker.io/bitnami/sealed-secrets-controller Docker tag to v0.27.1 2024-08-07 22:51:34 +00:00
513b845de1 Merge pull request 'Update homeassistant/home-assistant Docker tag to v2024.8' (#169) from renovate/homeassistant-home-assistant-2024.x into main
Reviewed-on: #169
2024-08-07 22:51:15 +00:00
a96472553b Update homeassistant/home-assistant Docker tag to v2024.8 2024-08-07 19:01:09 +00:00
55ef4aa6df Merge pull request 'Update actualbudget/actual-server Docker tag to v24.8.0' (#167) from renovate/actualbudget-actual-server-24.x into main
Reviewed-on: #167
2024-08-05 09:58:11 +00:00
b0a6e5fa08 Update actualbudget/actual-server Docker tag to v24.8.0 2024-08-05 09:57:57 +00:00
ab63d1b819 Merge pull request 'Update Helm release grafana to v8.4.1' (#166) from renovate/grafana-8.x into main
Reviewed-on: #166
2024-08-05 09:57:43 +00:00
f3a1e927ff Merge branch 'main' into renovate/grafana-8.x 2024-08-05 09:57:33 +00:00
6f29475d25 Merge pull request 'Update jellyfin/jellyfin Docker tag to v10.9.9' (#168) from renovate/jellyfin-jellyfin-10.x into main
Reviewed-on: #168
2024-08-05 09:57:22 +00:00
e988f55ba8 Update jellyfin/jellyfin Docker tag to v10.9.9 2024-08-05 02:31:25 +00:00
bb259be422 Update Helm release grafana to v8.4.1 2024-08-02 18:01:15 +00:00
ac45bb0958 Merge pull request 'Update ghcr.io/mealie-recipes/mealie Docker tag to v1.11.0' (#164) from renovate/ghcr.io-mealie-recipes-mealie-1.x into main
Reviewed-on: #164
2024-07-31 10:49:04 +00:00
e3580c6170 Update ghcr.io/mealie-recipes/mealie Docker tag to v1.11.0 2024-07-31 10:48:50 +00:00
a801d8ffa8 Merge pull request 'Update Helm release grafana to v8.4.0' (#160) from renovate/grafana-8.x into main
Reviewed-on: #160
2024-07-31 10:48:32 +00:00
53d6029e84 Update Helm release grafana to v8.4.0 2024-07-31 10:01:13 +00:00
239e2fdf49 fix traefik deployment 2024-07-30 18:49:47 +02:00
ae45a87b8a update immich 2024-07-30 17:52:39 +02:00
9cabd42c53 Merge pull request 'Update Helm release metallb to v0.14.8' (#149) from renovate/metallb-0.x into main
Reviewed-on: #149
2024-07-29 09:39:49 +00:00
d45374fe4a Update Helm release metallb to v0.14.8 2024-07-29 09:39:34 +00:00
e350de1a3e Merge pull request 'Update renovate/renovate Docker tag to v38' (#157) from renovate/renovate-renovate-38.x into main
Reviewed-on: #157
2024-07-29 09:39:16 +00:00
8eb64ff444 Merge pull request 'Update Helm release traefik to v30' (#156) from renovate/traefik-30.x into main
Reviewed-on: #156
2024-07-26 08:13:07 +00:00
e8b786e210 Update renovate/renovate Docker tag to v38 2024-07-25 14:01:06 +00:00
37dfd07ea9 Update Helm release traefik to v30 2024-07-24 14:01:26 +00:00
0f872ec949 Merge pull request 'Update owncloud/ocis Docker tag to v5.0.6' (#150) from renovate/owncloud-ocis-5.x into main
Reviewed-on: #150
2024-07-24 08:57:01 +00:00
3b1ab8e595 Update owncloud/ocis Docker tag to v5.0.6 2024-07-24 08:30:56 +00:00
e35da6fc63 Merge pull request 'Update jellyfin/jellyfin Docker tag to v10.9.8' (#154) from renovate/jellyfin-jellyfin-10.x into main
Reviewed-on: #154
2024-07-23 13:12:02 +00:00
da4363262c Merge pull request 'Update Helm release grafana to v8.3.6' (#148) from renovate/grafana-8.x into main
Reviewed-on: #148
2024-07-23 13:11:45 +00:00
ebc787030f Merge pull request 'Update Helm release gitea to v10.4.0' (#155) from renovate/gitea-10.x into main
Reviewed-on: #155
2024-07-23 13:11:28 +00:00
5b2cc939a5 Update Helm release gitea to v10.4.0 2024-07-21 12:01:30 +00:00
f45faf4509 Update jellyfin/jellyfin Docker tag to v10.9.8 2024-07-21 05:31:09 +00:00
7433dd17f4 Update Helm release grafana to v8.3.6 2024-07-20 19:01:02 +00:00
055d091447 redis is required after all 2024-07-18 18:44:04 +02:00
1aa86ef16c better kustomization using remote git refs (instead of git submodules) 2024-07-16 19:08:39 +02:00
dd5e738cab special label for gitea 2024-07-14 12:22:07 +02:00
7e5a1afb90 use nfs-provisioner 2024-07-14 12:11:09 +02:00
175817190c tighter security for deployments, no erroneous submodules 2024-07-14 11:37:47 +02:00
31141c6ef1 Merge pull request 'Update Helm release grafana to v8.3.3' (#147) from renovate/grafana-8.x into main
Reviewed-on: #147
2024-07-13 08:57:57 +00:00
e581c3a488 Update Helm release grafana to v8.3.3 2024-07-12 19:30:46 +00:00
4ce4e816c1 Merge pull request 'Update docker.io/bitnami/sealed-secrets-controller Docker tag to v0.27.0' (#108) from renovate/docker.io-bitnami-sealed-secrets-controller-0.x into main
Reviewed-on: #108
2024-07-12 15:58:46 +00:00
f50a2a61fc Merge pull request 'Update Helm release traefik to v29' (#145) from renovate/traefik-29.x into main
Reviewed-on: #145
2024-07-12 15:58:24 +00:00
ee6e4f1e32 Merge pull request 'Update Helm release gitea to v10.3.0' (#146) from renovate/gitea-10.x into main
Reviewed-on: #146
2024-07-12 15:57:24 +00:00
40454d871f update immich 2024-07-12 17:52:10 +02:00
e503ae6d30 bump immich version 2024-07-11 14:51:37 +02:00
5233956a09 Update Helm release traefik to v29 2024-07-09 09:30:56 +00:00
e7118e9182 Merge pull request 'Update Helm release cloudnative-pg to v0.21.5' (#82) from renovate/cloudnative-pg-0.x into main
Reviewed-on: #82
2024-07-09 07:21:52 +00:00
e79da15d16 home assistant dashboard improvements 2024-07-09 09:20:41 +02:00
1bcaafd14e Update Helm release gitea to v10.3.0 2024-07-07 13:00:59 +00:00
6a10c8a908 Merge pull request 'Update Helm release grafana to v8.3.2' (#138) from renovate/grafana-8.x into main
Reviewed-on: #138
2024-07-05 09:03:47 +00:00
7f61158564 Merge pull request 'Update adguard/adguardhome Docker tag to v0.107.52' (#144) from renovate/adguard-adguardhome-0.x into main
Reviewed-on: #144
2024-07-05 09:03:34 +00:00
2f17e6d47a Merge pull request 'Update homeassistant/home-assistant Docker tag to v2024.7' (#143) from renovate/homeassistant-home-assistant-2024.x into main
Reviewed-on: #143
2024-07-05 09:03:20 +00:00
466d58b26b Merge pull request 'Update ghcr.io/mealie-recipes/mealie Docker tag to v1.10.2' (#142) from renovate/ghcr.io-mealie-recipes-mealie-1.x into main
Reviewed-on: #142
2024-07-05 09:03:03 +00:00
03f873ecf4 Merge pull request 'Update actualbudget/actual-server Docker tag to v24.7.0' (#141) from renovate/actualbudget-actual-server-24.x into main
Reviewed-on: #141
2024-07-05 09:02:52 +00:00
56cca145b4 Update ghcr.io/mealie-recipes/mealie Docker tag to v1.10.2 2024-07-05 06:31:10 +00:00
3ecd55787a Update adguard/adguardhome Docker tag to v0.107.52 2024-07-04 16:01:01 +00:00
45e46cf6e9 Update Helm release grafana to v8.3.2 2024-07-04 13:00:59 +00:00
c19d6d8244 Update homeassistant/home-assistant Docker tag to v2024.7 2024-07-03 18:01:23 +00:00
c5250c5a45 Update actualbudget/actual-server Docker tag to v24.7.0 2024-07-02 21:01:19 +00:00
e70c1c9685 actually, as a job makes more sense. And is reschedulable 2024-07-02 18:48:14 +02:00
b5d6f28178 use a pod that is allowed to stop 2024-07-02 17:03:23 +02:00
14a54e691d add even higher limits for minecraft 2024-07-02 15:15:13 +02:00
d6eb7b8f84 Merge pull request 'Update Helm release grafana to v8.2.1' (#137) from renovate/grafana-8.x into main
Reviewed-on: #137
2024-07-01 12:07:37 +00:00
025e0c4ff1 Update Helm release grafana to v8.2.1 2024-07-01 10:01:05 +00:00
d76455787a more generous limits for minecraft 2024-07-01 10:08:08 +02:00
252b732bd8 remove homepage 2024-07-01 10:00:16 +02:00
93ca89060c misnomer 2024-06-30 22:37:40 +02:00
8e043fdd58 cleanup 2024-06-29 12:45:55 +02:00
d87b8bcff2 Merge pull request 'Update jellyfin/jellyfin Docker tag to v10.9.7' (#136) from renovate/jellyfin-jellyfin-10.x into main
Reviewed-on: #136
2024-06-25 08:36:08 +00:00
4be1c00592 Update jellyfin/jellyfin Docker tag to v10.9.7 2024-06-25 01:01:04 +00:00
9b1303d10e update dashboards 2024-06-20 23:33:41 +02:00
36f2596dfb Update docker.io/bitnami/sealed-secrets-controller Docker tag to v0.27.0 2024-06-20 11:31:01 +00:00
abf59c480f make servicemonitor be discoverable 2024-06-19 18:06:10 +02:00
c521a23a16 Merge pull request 'Update quay.io/thanos/thanos Docker tag to v0.35.1' (#87) from renovate/quay.io-thanos-thanos-0.x into main
Reviewed-on: #87
2024-06-18 18:44:56 +00:00
b646968c16 Update apps/immich/kustomization.yaml
bump immich version
2024-06-18 18:44:22 +00:00
a1afc7d736 Merge pull request 'Update ghcr.io/mealie-recipes/mealie Docker tag to v1.9.0' (#135) from renovate/ghcr.io-mealie-recipes-mealie-1.x into main
Reviewed-on: #135
2024-06-18 18:42:56 +00:00
799d084471 Update ghcr.io/mealie-recipes/mealie Docker tag to v1.9.0 2024-06-18 10:01:00 +00:00
511ed7e78d Update Helm release cloudnative-pg to v0.21.5 2024-06-13 14:30:47 +00:00
0d1d10a103 slim down jellyfin 2024-06-13 13:14:44 +02:00
de667a31ad immich update 2024-06-13 00:21:48 +02:00
ef2b1d393d Merge pull request 'Update Helm release grafana to v8.0.2' (#131) from renovate/grafana-8.x into main
Reviewed-on: #131
2024-06-12 22:09:44 +00:00
0402d54fda Merge pull request 'Update octodns/octodns Docker tag to v2024.06' (#127) from renovate/octodns-octodns-2024.x into main
Reviewed-on: #127
2024-06-12 22:09:28 +00:00
d80dfc35fd Update Helm release grafana to v8.0.2 2024-06-12 08:30:50 +00:00
9d47443573 Update octodns/octodns Docker tag to v2024.06 2024-06-10 17:00:57 +00:00
806b42874c update thanos 2024-06-10 00:34:01 +02:00
3c71ac8411 Merge pull request 'Update Helm release grafana to v8.0.1' (#125) from renovate/grafana-8.x into main
Reviewed-on: #125
2024-06-09 21:42:40 +00:00
c2db5eb712 Merge pull request 'Update alpine/git Docker tag to v2.45.2' (#126) from renovate/alpine-git-2.x into main
Reviewed-on: #126
2024-06-09 21:42:14 +00:00
040771494a Merge pull request 'Update Helm release traefik to v28' (#85) from renovate/traefik-28.x into main
Reviewed-on: #85
2024-06-09 21:38:58 +00:00
57c57b7620 changes according to migration docs 2024-06-09 23:38:34 +02:00
a41ec520a2 Update alpine/git Docker tag to v2.45.2 2024-06-09 04:31:00 +00:00
9057768561 Update Helm release grafana to v8.0.1 2024-06-07 21:00:56 +00:00
db3dc9a8af Merge pull request 'Update Helm release gitea to v10.2.0' (#124) from renovate/gitea-10.x into main
Reviewed-on: #124
2024-06-07 16:59:35 +00:00
31a968ef87 Merge pull request 'Update jellyfin/jellyfin Docker tag to v10.9.6' (#123) from renovate/jellyfin-jellyfin-10.x into main
Reviewed-on: #123
2024-06-07 16:59:00 +00:00
9778d796a9 Update Helm release gitea to v10.2.0 2024-06-06 21:01:06 +00:00
7a44938d6d Update jellyfin/jellyfin Docker tag to v10.9.6 2024-06-06 19:00:59 +00:00
689038a808 Merge pull request 'Update adguard/adguardhome Docker tag to v0.107.51' (#122) from renovate/adguard-adguardhome-0.x into main
Reviewed-on: #122
2024-06-06 17:19:47 +00:00
88ca15d995 Update adguard/adguardhome Docker tag to v0.107.51 2024-06-06 15:00:56 +00:00
249b335ccb Merge pull request 'Update jellyfin/jellyfin Docker tag to v10.9.5' (#120) from renovate/jellyfin-jellyfin-10.x into main
Reviewed-on: #120
2024-06-06 09:22:20 +00:00
8c33c50457 Merge pull request 'Update ghcr.io/gethomepage/homepage Docker tag to v0.9.2' (#121) from renovate/ghcr.io-gethomepage-homepage-0.x into main
Reviewed-on: #121
2024-06-06 09:22:08 +00:00
4f1cbbabe6 Update ghcr.io/gethomepage/homepage Docker tag to v0.9.2 2024-06-06 03:30:44 +00:00
4f4e6bdf13 Update jellyfin/jellyfin Docker tag to v10.9.5 2024-06-05 22:30:46 +00:00
ebbece048e Merge pull request 'Update homeassistant/home-assistant Docker tag to v2024.6' (#119) from renovate/homeassistant-home-assistant-2024.x into main
Reviewed-on: #119
2024-06-05 21:58:04 +00:00
9987aa9d0b Merge pull request 'Update ghcr.io/mealie-recipes/mealie Docker tag to v1.8.0' (#118) from renovate/ghcr.io-mealie-recipes-mealie-1.x into main
Reviewed-on: #118
2024-06-05 21:57:12 +00:00
14cc093e51 Merge pull request 'Update alpine/git Docker tag to v2.45.1' (#110) from renovate/alpine-git-2.x into main
Reviewed-on: #110
2024-06-05 21:56:57 +00:00
18576ff7f2 Update homeassistant/home-assistant Docker tag to v2024.6 2024-06-05 19:31:05 +00:00
bee9243407 Update ghcr.io/mealie-recipes/mealie Docker tag to v1.8.0 2024-06-05 19:31:01 +00:00
8223b336ed Merge pull request 'Update jellyfin/jellyfin Docker tag to v10.9.4' (#112) from renovate/jellyfin-jellyfin-10.x into main
Reviewed-on: #112
2024-06-05 19:28:21 +00:00
1fd0da6778 Merge pull request 'Update Helm release grafana to v8' (#115) from renovate/grafana-8.x into main
Reviewed-on: #115
2024-06-05 19:26:21 +00:00
6be344fc8d Merge pull request 'Update actualbudget/actual-server Docker tag to v24.6.0' (#114) from renovate/actualbudget-actual-server-24.x into main
Reviewed-on: #114
2024-06-05 19:25:58 +00:00
d46ee3894e Merge pull request 'Update ghcr.io/gethomepage/homepage Docker tag to v0.9.1' (#117) from renovate/ghcr.io-gethomepage-homepage-0.x into main
Reviewed-on: #117
2024-06-05 19:25:24 +00:00
b282f363ce Update ghcr.io/gethomepage/homepage Docker tag to v0.9.1 2024-06-03 20:35:37 +00:00
4b494642f5 Update Helm release grafana to v8 2024-06-03 16:01:41 +00:00
08c508862f Update actualbudget/actual-server Docker tag to v24.6.0 2024-06-03 10:31:05 +00:00
3d63498b25 Update jellyfin/jellyfin Docker tag to v10.9.4 2024-06-01 23:01:04 +00:00
4ef6b01a92 Update Helm release traefik to v28 2024-05-31 08:31:11 +00:00
7cf2c9c479 Update quay.io/thanos/thanos Docker tag to v0.35.1 2024-05-28 14:31:04 +00:00
a11f3e24f8 Merge pull request 'Update jellyfin/jellyfin Docker tag to v10.9.3' (#111) from renovate/jellyfin-jellyfin-10.x into main
Reviewed-on: #111
2024-05-27 17:39:48 +00:00
adff6180ea Update jellyfin/jellyfin Docker tag to v10.9.3 2024-05-27 00:30:59 +00:00
99dd81531e Update alpine/git Docker tag to v2.45.1 2024-05-25 23:01:08 +00:00
4f18adf1da try once more 2024-05-25 13:12:23 +02:00
7e3f8a2764 and undo because it doesn't work 2024-05-25 12:39:33 +02:00
3a94d7a7b7 add docker builder using kubernetes natively 2024-05-25 12:32:15 +02:00
9f8ae4b0fa gitea revert to dind runner 2024-05-25 11:24:55 +02:00
d53ee0079e Merge pull request 'Update ghcr.io/mealie-recipes/mealie Docker tag to v1.7.0' (#106) from renovate/ghcr.io-mealie-recipes-mealie-1.x into main
Reviewed-on: #106
2024-05-24 19:18:11 +00:00
f844eb8caa Merge pull request 'Update adguard/adguardhome Docker tag to v0.107.50' (#107) from renovate/adguard-adguardhome-0.x into main
Reviewed-on: #107
2024-05-23 21:38:05 +00:00
fb645058ac Update adguard/adguardhome Docker tag to v0.107.50 2024-05-23 15:31:10 +00:00
261790e329 Update ghcr.io/mealie-recipes/mealie Docker tag to v1.7.0 2024-05-23 11:14:10 +00:00
645c8edde7 Merge pull request 'Update adguard/adguardhome Docker tag to v0.107.49' (#102) from renovate/adguard-adguardhome-0.x into main
Reviewed-on: #102
2024-05-23 11:10:18 +00:00
c7b52155ac allow spindown of minecraft server 2024-05-23 13:08:48 +02:00
46a2c8998e Merge pull request 'Update alpine/git Docker tag to v2.43.4' (#101) from renovate/alpine-git-2.x into main
Reviewed-on: #101
2024-05-23 09:42:19 +00:00
fbba22cb07 Merge pull request 'Update owncloud/ocis Docker tag to v5.0.5' (#103) from renovate/owncloud-ocis-5.x into main
Reviewed-on: #103
2024-05-23 09:42:00 +00:00
f03c76c53b Update owncloud/ocis Docker tag to v5.0.5 2024-05-22 14:30:56 +00:00
c7f5cb8773 Update adguard/adguardhome Docker tag to v0.107.49 2024-05-21 15:30:48 +00:00
206f8e4c50 try k8s-native actions once more 2024-05-21 12:14:48 +02:00
03df5e4663 Merge pull request 'Update jellyfin/jellyfin Docker tag to v10.9.2' (#100) from renovate/jellyfin-jellyfin-10.x into main
Reviewed-on: #100
2024-05-20 19:12:17 +00:00
72906d205b with certs 2024-05-20 12:22:56 +02:00
c6f7471ebb try fixing the labels 2024-05-20 12:15:56 +02:00
a3550d10cb add wireguard 2024-05-19 12:31:50 +02:00
f22d25b101 add minecraft without autosync 2024-05-19 11:22:21 +02:00
b7b9afa1a5 Update alpine/git Docker tag to v2.43.4 2024-05-19 04:30:42 +00:00
835f05866c different gitea runner strategy 2024-05-18 17:19:14 +02:00
1aa2e55f22 try a better gitea actions runner 2024-05-18 13:57:26 +02:00
3c777a92c0 Update jellyfin/jellyfin Docker tag to v10.9.2 2024-05-17 21:00:55 +00:00
7d893d27ec bump immich version 2024-05-16 10:13:19 +02:00
d0fcf951cc bump immich version 2024-05-16 09:51:57 +02:00
1e9959e3d1 better minecraft deployment 2024-05-16 09:51:16 +02:00
ce821b6abe Merge pull request 'Update binwiederhier/ntfy Docker tag to v2.11.0' (#98) from renovate/binwiederhier-ntfy-2.x into main
Reviewed-on: #98
2024-05-16 07:39:12 +00:00
1de224ea77 Merge pull request 'Update jellyfin/jellyfin Docker tag to v10.9.1' (#95) from renovate/jellyfin-jellyfin-10.x into main
Reviewed-on: #95
2024-05-16 07:37:41 +00:00
103f4c8a9f Merge pull request 'Update owncloud/ocis Docker tag to v5.0.4' (#99) from renovate/owncloud-ocis-5.x into main
Reviewed-on: #99
2024-05-16 07:37:20 +00:00
124881d3a8 Update owncloud/ocis Docker tag to v5.0.4 2024-05-14 13:31:01 +00:00
0b5d2a5fe6 Update jellyfin/jellyfin Docker tag to v10.9.1 2024-05-14 09:01:22 +00:00
332082c9fc Update binwiederhier/ntfy Docker tag to v2.11.0 2024-05-13 20:31:02 +00:00
0eaa9fe774 empty line removed 2024-05-13 14:26:53 +02:00
192e2e869f minecraft 2024-05-13 14:25:49 +02:00
0fd9936db5 gitea runner improvements 2024-05-13 14:25:49 +02:00
1a9d0fc00c Merge pull request 'Update jellyfin/jellyfin Docker tag to v10.9.0' (#94) from renovate/jellyfin-jellyfin-10.x into main
Reviewed-on: #94
2024-05-12 11:07:57 +00:00
a8dfca3c43 Update jellyfin/jellyfin Docker tag to v10.9.0 2024-05-11 19:01:08 +00:00
42e2bc35a5 Merge pull request 'Update ghcr.io/gethomepage/homepage Docker tag to v0.8.13' (#90) from renovate/ghcr.io-gethomepage-homepage-0.x into main
Reviewed-on: #90
2024-05-10 08:46:45 +00:00
7e2e5a56db Merge branch 'main' into renovate/ghcr.io-gethomepage-homepage-0.x 2024-05-10 08:45:47 +00:00
01279dd023 Merge pull request 'Update octodns/octodns Docker tag to v2024.05' (#91) from renovate/octodns-octodns-2024.x into main
Reviewed-on: #91
2024-05-08 13:29:51 +00:00
d6ce07a8a0 Merge pull request 'Update ghcr.io/mealie-recipes/mealie Docker tag to v1.6.0' (#92) from renovate/ghcr.io-mealie-recipes-mealie-1.x into main
Reviewed-on: #92
2024-05-08 13:28:59 +00:00
6eb617086a Update ghcr.io/mealie-recipes/mealie Docker tag to v1.6.0 2024-05-07 12:00:58 +00:00
8137bf8f1b Update apps/immich/kustomization.yaml 2024-05-06 17:59:00 +00:00
5f1dcaabba Update octodns/octodns Docker tag to v2024.05 2024-05-06 15:30:45 +00:00
37bdb32f43 Update ghcr.io/gethomepage/homepage Docker tag to v0.8.13 2024-05-06 05:30:44 +00:00
ca15a6497c Add apps/media/ingress.yaml 2024-05-04 12:10:12 +00:00
095d2d6392 remove limits 2024-05-04 12:47:10 +02:00
b2993c9395 Merge pull request 'Update homeassistant/home-assistant Docker tag to v2024.5' (#86) from renovate/homeassistant-home-assistant-2024.x into main
Reviewed-on: #86
2024-05-04 09:06:57 +00:00
d7b0f658de Merge pull request 'Update actualbudget/actual-server Docker tag to v24.5.0' (#89) from renovate/actualbudget-actual-server-24.x into main
Reviewed-on: #89
2024-05-04 09:06:26 +00:00
391c71729b Update actualbudget/actual-server Docker tag to v24.5.0 2024-05-03 17:00:53 +00:00
bee5dd0c0b Update owncloud/ocis Docker tag to v5.0.3 2024-05-02 16:30:46 +00:00
25ab46e69a change base image for k8s conformity 2024-05-02 17:26:36 +02:00
123412e073 small naming mistake 2024-05-02 17:15:30 +02:00
39818887fa add gitea actions 2024-05-02 17:12:43 +02:00
0700609568 Update homeassistant/home-assistant Docker tag to v2024.5 2024-05-01 19:30:44 +00:00
198b24132e Update Helm release metallb to v0.14.5 2024-04-27 09:13:11 +00:00
f6e45d089b Update docker.io/bitnami/sealed-secrets-controller Docker tag to v0.26.2 2024-04-27 09:12:35 +00:00
23eab57208 Update Helm release cloudnative-pg to v0.21.0 2024-04-25 12:01:00 +00:00
a94521f197 update ocis 2024-04-21 12:59:19 +02:00
38f58d86c9 new version 2024-04-21 12:26:44 +02:00
76d1c51157 improve thanos/prometheus retention 2024-04-20 19:04:44 +02:00
7aaeeded89 update and improve grafana 2024-04-20 18:37:19 +02:00
9b93016f93 bump immich version 2024-04-20 18:34:52 +02:00
aaf624bb42 bump immich version 2024-04-20 18:27:01 +02:00
8536d91288 Update Helm release immich to v0.6.0 2024-04-20 16:26:29 +00:00
3f62bee199 reduce gitea load by ditching redis 2024-04-20 18:22:16 +02:00
f9f39818a1 Merge pull request 'Update ghcr.io/mealie-recipes/mealie Docker tag to v1.5.1' (#77) from renovate/ghcr.io-mealie-recipes-mealie-1.x into main
Reviewed-on: #77
2024-04-17 16:29:25 +00:00
a73e6dc4db Update ghcr.io/mealie-recipes/mealie Docker tag to v1.5.1 2024-04-17 14:01:07 +00:00
1df7abf987 Merge pull request 'Update ghcr.io/gethomepage/homepage Docker tag to v0.8.12' (#76) from renovate/ghcr.io-gethomepage-homepage-0.x into main
Reviewed-on: #76
2024-04-17 12:54:36 +00:00
0e1bb58c24 Update ghcr.io/gethomepage/homepage Docker tag to v0.8.12 2024-04-17 12:54:36 +00:00
fcd2d2eaa2 Merge pull request 'Update ghcr.io/mealie-recipes/mealie Docker tag to v1.5.0' (#75) from renovate/ghcr.io-mealie-recipes-mealie-1.x into main
Reviewed-on: #75
2024-04-17 12:54:14 +00:00
455790d3c6 Update ghcr.io/mealie-recipes/mealie Docker tag to v1.5.0 2024-04-16 16:31:00 +00:00
cdbcdba25d Update Helm release gitea to v10.1.4 2024-04-16 10:58:27 +00:00
9dcb06678b remove old filesync deployments (nextcloud) 2024-04-16 12:56:54 +02:00
a4fe0a7fe4 add homepage as a deployment 2024-04-16 12:43:33 +02:00
ece9faa60c Merge pull request 'Update octodns/octodns Docker tag to v2024.04' (#72) from renovate/octodns-octodns-2024.x into main
Reviewed-on: #72
2024-04-16 07:54:27 +00:00
d4bea2994c Merge pull request 'Update Helm release traefik to v27' (#66) from renovate/traefik-27.x into main
Reviewed-on: #66
2024-04-16 07:53:47 +00:00
0ec3bf9ea8 Update Helm release traefik to v27 2024-04-12 08:01:16 +00:00
0c5760b22b Update octodns/octodns Docker tag to v2024.04 2024-04-10 16:30:48 +00:00
e144722d59 fix cnpg syncing issues 2024-04-10 14:01:57 +02:00
bf6e7aa10c mabye like that? 2024-04-06 14:33:57 +02:00
ae53c44428 fix servicemonitors 2024-04-06 14:24:06 +02:00
05d5b02347 Update actualbudget/actual-server Docker tag to v24.4.0 2024-04-06 12:22:05 +00:00
337237a0f8 Update ghcr.io/mealie-recipes/mealie Docker tag to v1.4.0 2024-04-06 12:21:39 +00:00
ccc4b13c35 Update adguard/adguardhome Docker tag to v0.107.48 2024-04-06 12:13:15 +00:00
a6a9c7c217 update home assistant and fix prometheus 2024-04-06 14:12:07 +02:00
bc0f29f028 update immich 2024-04-03 14:11:08 +02:00
e2c9d73728 update to dashboards 2024-04-01 13:23:46 +02:00
442c07f031 bad configmap 2024-04-01 13:11:11 +02:00
8fd9fa6f11 better dashboards 2024-04-01 12:21:50 +02:00
516d7e8e09 like that? 2024-04-01 11:57:06 +02:00
acf9d34b10 Merge branch 'main' of ssh://git.kluster.moll.re:2222/remoll/k3s-infra 2024-04-01 11:47:11 +02:00
3ffead0a14 try fixing grafana 2024-04-01 11:47:01 +02:00
b6bdc09efc Update docker.io/bitnami/sealed-secrets-controller Docker tag to v0.26.1 2024-04-01 09:33:23 +00:00
49b21cde52 proper backup config 2024-03-31 19:37:18 +02:00
deed24aa01 try fixing homeassistant again 2024-03-31 19:28:19 +02:00
9cfb98248d update immich 2024-03-31 19:08:14 +02:00
7bc4beefce Update Helm release cloudnative-pg to v0.20.2 2024-03-31 15:19:09 +00:00
ce9ff68c26 Update binwiederhier/ntfy Docker tag to v2.10.0 2024-03-31 15:18:06 +00:00
8249e7ef01 Update adguard/adguardhome Docker tag to v0.107.46 2024-03-31 15:15:00 +00:00
14e65df483 Update Helm release metallb to v0.14.4 2024-03-31 15:14:18 +00:00
f6fef4278b enable wal for grafana? 2024-03-29 00:55:34 +01:00
ef50df8386 slight mistake 2024-03-28 19:45:27 +01:00
b6df7604ed add missing references 2024-03-28 19:22:59 +01:00
a03d869d0c added dashboards 2024-03-28 19:20:28 +01:00
1063349fbe use sealedsecret 2024-03-28 19:17:19 +01:00
b88c212b57 now with correct secret 2024-03-28 19:10:01 +01:00
38a522a8d6 cleaner monitoring 2024-03-28 19:07:42 +01:00
046936f8f6 fix 2024-03-28 14:04:07 +01:00
309cbc08f5 so? 2024-03-28 13:55:57 +01:00
08b4c7eb5e switch ocis to nfs-provisioner 2024-03-28 13:52:44 +01:00
58e632e0b8 migrate mealie pvc 2024-03-28 13:21:50 +01:00
30d02edebc update rss 2024-03-28 13:19:53 +01:00
e30bfe64ae dum dum 2024-03-28 12:59:51 +01:00
764a3eafb7 switch some apps over to nfs-client 2024-03-28 12:40:48 +01:00
eff07665de add nfs-provisioner with sensible path template 2024-03-28 12:29:16 +01:00
571aebe78d now? 2024-03-27 14:15:13 +01:00
91a2ae5fe8 annoying 2024-03-27 14:13:22 +01:00
f12c21ef18 update vikunja 2024-03-27 14:03:55 +01:00
2a96b288bf or like that? 2024-03-27 09:39:58 +01:00
6f3a5aeab2 okey 2024-03-27 09:37:51 +01:00
b001bd3efc maybe like that? 2024-03-27 09:36:22 +01:00
b54794df35 dum-dum 2024-03-27 09:19:00 +01:00
51c8f7c092 fix the db location 2024-03-27 09:15:25 +01:00
cfb1a87a5b now with correct api path 2024-03-27 09:07:01 +01:00
10483431c6 trying todos like that 2024-03-27 09:04:40 +01:00
3a9450da9d now? 2024-03-27 08:34:48 +01:00
374e23ba1e trying to fix immich 2024-03-27 08:32:46 +01:00
66f703f5e1 update to correct location 2024-03-27 08:25:53 +01:00
4b05b53d72 small fixes 2024-03-27 00:38:34 +01:00
cfbc7fcd0d disable typesense 2024-03-27 00:31:41 +01:00
ffed2aea50 add media back 2024-03-27 00:27:57 +01:00
e674bf5b94 slim down the file sync 2024-03-27 00:12:50 +01:00
133af74ae0 missing namespace resource 2024-03-27 00:05:55 +01:00
f648064304 remove nfs-client 2024-03-26 23:50:27 +01:00
c7180f793a trying like that 2024-03-26 22:58:17 +01:00
4fcdaad297 move prometheus to its own config 2024-03-26 22:13:02 +01:00
f4b99ca037 now perhaps? 2024-03-26 11:16:33 +01:00
588bf774f9 or like that? 2024-03-26 10:58:44 +01:00
e18c661dbd typo 2024-03-26 10:57:18 +01:00
7d65ffea6a remove ocis:// 2024-03-26 10:56:34 +01:00
e460b5324a try differently configured todos 2024-03-26 10:55:25 +01:00
6fe166e60c manage todos 2024-03-24 15:31:59 +01:00
6ceb3816fb cleanup with regards to upcoming migration 2024-03-23 11:45:11 +01:00
19b63263e6 whoopsie 2024-03-22 14:57:17 +01:00
20d46d89d2 also manage ocis 2024-03-22 14:54:30 +01:00
7aee6c7cf0 basic auth maybe? 2024-03-22 14:53:29 +01:00
443da20ff9 steps towards a completely managed cluster 2024-03-20 23:45:08 +01:00
84a47b15b6 increase renovate frequency 2024-03-12 21:28:35 +01:00
40259ee57e Update apps/immich/kustomization.yaml 2024-03-12 14:01:08 +00:00
619368a2fd Merge pull request 'Update homeassistant/home-assistant Docker tag to v2024.3' (#54) from renovate/homeassistant-home-assistant-2024.x into main
Reviewed-on: #54
2024-03-12 09:04:37 +00:00
3288966b95 Merge pull request 'Update octodns/octodns Docker tag to v2024.03' (#55) from renovate/octodns-octodns-2024.x into main
Reviewed-on: #55
2024-03-12 09:04:16 +00:00
d12d50b906 Update apps/immich/kustomization.yaml 2024-03-12 09:03:55 +00:00
c7f0221062 Update octodns/octodns Docker tag to v2024.03 2024-03-12 09:02:04 +00:00
7819867091 Update homeassistant/home-assistant Docker tag to v2024.3 2024-03-12 09:01:41 +00:00
dd4c3d7a36 Update apps/immich/kustomization.yaml 2024-03-12 08:37:11 +00:00
e66905402e Merge pull request 'Update Helm release immich to v0.4.0' (#47) from renovate/immich-0.x into main
Reviewed-on: #47
2024-03-12 08:35:56 +00:00
1bdb4522c3 Merge pull request 'Update ghcr.io/mealie-recipes/mealie Docker tag to v1.3.2' (#53) from renovate/ghcr.io-mealie-recipes-mealie-1.x into main
Reviewed-on: #53
2024-03-12 08:32:10 +00:00
b5845479c2 Update ghcr.io/mealie-recipes/mealie Docker tag to v1.3.2 2024-03-10 19:01:42 +00:00
f2f31c4f4e Merge pull request 'Update binwiederhier/ntfy Docker tag to v2.9.0' (#52) from renovate/binwiederhier-ntfy-2.x into main
Reviewed-on: #52
2024-03-10 09:57:10 +00:00
ded829500c Update binwiederhier/ntfy Docker tag to v2.9.0 2024-03-09 11:04:03 +00:00
f762f5451b Merge pull request 'Update adguard/adguardhome Docker tag to v0.107.45' (#51) from renovate/adguard-adguardhome-0.x into main
Reviewed-on: #51
2024-03-08 07:42:50 +00:00
709f21998e Update adguard/adguardhome Docker tag to v0.107.45 2024-03-07 18:01:21 +00:00
47f091be83 Merge pull request 'Update actualbudget/actual-server Docker tag to v24.3.0' (#48) from renovate/actualbudget-actual-server-24.x into main
Reviewed-on: #48
2024-03-07 17:31:17 +00:00
da8be916bf fix bad naming 2024-03-07 13:21:05 +01:00
ad67acb9e7 again 2024-03-07 13:17:50 +01:00
5a7b5a82d7 maybe the service was misconfigured 2024-03-07 13:16:14 +01:00
2c32db61ec why? 2024-03-07 13:13:54 +01:00
141b80d15c man... 2024-03-07 13:11:08 +01:00
bf1d4badbe or directly use the dns name 2024-03-07 13:08:29 +01:00
be48049e22 fix bad syntax 2024-03-07 13:01:21 +01:00
3a629284f3 perhaps now 2024-03-07 12:59:04 +01:00
28c92e727f last chance 2024-03-06 14:48:14 +01:00
9a65c531f1 now? 2024-03-06 14:37:23 +01:00
52a086df73 come on 2024-03-06 14:34:19 +01:00
b728e21a15 expose grpc of store 2024-03-06 14:31:04 +01:00
da32c9c2ce neew 2024-03-06 14:25:47 +01:00
846390600e let's try with query as well 2024-03-06 14:24:07 +01:00
18d7a6b4cb or maybe like that? 2024-03-06 11:34:15 +01:00
31c8e91502 actually don't specify data 2024-03-06 11:31:15 +01:00
f0adf6b5db change user of prometheus to make thanos happy 2024-03-06 08:14:55 +01:00
b24ae9c698 with correct image 2024-03-05 16:44:42 +01:00
f3c108e362 fix 2024-03-05 16:41:54 +01:00
d2a8d92864 also use thanos object store 2024-03-05 16:39:15 +01:00
10816c4bd9 Update actualbudget/actual-server Docker tag to v24.3.0 2024-03-03 20:01:34 +00:00
aca0d4ba21 Update Helm release immich to v0.4.0 2024-03-03 20:01:27 +00:00
1ad56fd27e Merge pull request 'Update Helm release traefik to v26.1.0' (#42) from renovate/traefik-26.x into main
Reviewed-on: #42
2024-03-03 19:33:13 +00:00
773a155627 Update Helm release traefik to v26.1.0 2024-03-03 19:33:13 +00:00
61945b3507 Merge pull request 'Update Helm release metallb to v0.14.3' (#34) from renovate/metallb-0.x into main
Reviewed-on: #34
2024-03-03 19:32:16 +00:00
4aa21cb0cd Update Helm release metallb to v0.14.3 2024-03-03 19:32:16 +00:00
d233ab96eb Merge pull request 'Update Helm release gitea to v10.1.3' (#46) from renovate/gitea-10.x into main
Reviewed-on: #46
2024-03-03 19:31:04 +00:00
df581e0110 Update Helm release gitea to v10.1.3 2024-03-03 19:31:04 +00:00
8a114b9384 remove homarr 2024-03-03 20:30:06 +01:00
ab6506f4f2 update immich 2024-02-21 18:35:13 +01:00
87242d293a Merge pull request 'Update Helm release homarr to v1.0.6' (#38) from renovate/homarr-1.x into main
Reviewed-on: #38
2024-02-13 10:34:15 +00:00
11d46ec295 Merge pull request 'Update Helm release gitea to v10.1.1' (#35) from renovate/gitea-10.x into main
Reviewed-on: #35
2024-02-13 10:33:42 +00:00
1b3702c4c8 Update Helm release gitea to v10.1.1 2024-02-13 10:33:42 +00:00
9b68b4a915 lets be more generous with memory 2024-02-11 18:15:11 +01:00
18889d7391 add other recipes 2024-02-11 11:28:30 +01:00
a38ad1d7e6 bye bye 2024-02-10 19:35:22 +01:00
edcb9158f5 what now? 2024-02-10 19:21:04 +01:00
71b1c252f3 turns out it was important 2024-02-10 19:17:28 +01:00
b30f44d2c6 last chance 2024-02-10 19:16:08 +01:00
85abf0fda6 with services? 2024-02-10 19:04:08 +01:00
5e21ceaad3 lets try this 2024-02-10 18:58:20 +01:00
3f5c1a5a5c add configmap 2024-02-10 10:56:59 +01:00
0195833fc3 service account not needed 2024-02-10 10:54:41 +01:00
64835e16de slight fix 2024-02-10 10:53:20 +01:00
4e11a33855 correct backend 2024-02-10 10:46:38 +01:00
bad024861a add recipes 2024-02-10 10:45:53 +01:00
fe5d6a9014 Merge pull request 'Update adguard/adguardhome Docker tag to v0.107.44' (#39) from renovate/adguard-adguardhome-0.x into main
Reviewed-on: #39
2024-02-08 09:24:43 +00:00
f2898d7e0b Merge pull request 'Update homeassistant/home-assistant Docker tag to v2024.2' (#40) from renovate/homeassistant-home-assistant-2024.x into main
Reviewed-on: #40
2024-02-08 09:24:05 +00:00
f67f0c8889 Update homeassistant/home-assistant Docker tag to v2024.2 2024-02-07 21:02:14 +00:00
0ccb17d8e1 Update adguard/adguardhome Docker tag to v0.107.44 2024-02-07 11:01:45 +00:00
bb6d417937 Merge pull request 'Update actualbudget/actual-server Docker tag to v24.2.0' (#36) from renovate/actualbudget-actual-server-24.x into main
Reviewed-on: #36
2024-02-07 10:09:46 +00:00
4e2ebe2540 Merge pull request 'Update octodns/octodns Docker tag to v2024' (#37) from renovate/octodns-octodns-2024.x into main
Reviewed-on: #37
2024-02-07 10:09:26 +00:00
c5310b0f00 Update Helm release homarr to v1.0.6 2024-02-04 17:01:35 +00:00
46ef973f70 Update octodns/octodns Docker tag to v2024 2024-02-03 22:02:18 +00:00
c12d2dc7a6 whoopsie 2024-02-03 22:27:29 +01:00
e28c6ffd52 add physics 2024-02-03 22:19:09 +01:00
7ba6860ea0 Update actualbudget/actual-server Docker tag to v24.2.0 2024-02-03 21:01:51 +00:00
33c23ee42b Merge pull request 'Update ghcr.io/immich-app/immich-machine-learning Docker tag to v1.94.1' (#31) from renovate/ghcr.io-immich-app-immich-machine-learning-1.x into main
Reviewed-on: #31
2024-02-03 20:58:07 +00:00
b2f8c8bced Merge branch 'main' into renovate/ghcr.io-immich-app-immich-machine-learning-1.x 2024-02-03 20:57:54 +00:00
d5277d3d6a Merge pull request 'Update ghcr.io/immich-app/immich-server Docker tag to v1.94.1' (#32) from renovate/ghcr.io-immich-app-immich-server-1.x into main
Reviewed-on: #32
2024-02-03 20:56:19 +00:00
e3c90f5ede Merge branch 'main' into renovate/ghcr.io-immich-app-immich-server-1.x 2024-02-03 20:55:47 +00:00
eb5bda63db Merge pull request 'Update Helm release grafana to v7.3.0' (#26) from renovate/grafana-7.x into main
Reviewed-on: #26
2024-02-03 20:54:45 +00:00
a10a216f0e Update ghcr.io/immich-app/immich-server Docker tag to v1.94.1 2024-01-31 20:01:05 +00:00
3cf9fd0b87 Update ghcr.io/immich-app/immich-machine-learning Docker tag to v1.94.1 2024-01-31 20:01:03 +00:00
ea1fa1637f Update Helm release grafana to v7.3.0 2024-01-30 15:00:50 +00:00
96abe2a0f5 auto admin 2024-01-23 18:16:40 +01:00
9623f33b59 Merge pull request 'Update Helm release gitea to v10' (#16) from renovate/gitea-10.x into main
Reviewed-on: #16
2024-01-22 10:30:17 +00:00
b065fc7e59 idioto 2024-01-22 11:27:58 +01:00
617ed5601c allow renovate to fetch release notes 2024-01-22 11:11:34 +01:00
7e21ce4181 Update Helm release gitea to v10 2024-01-22 10:00:35 +00:00
eeaed091ab Merge pull request 'Update Helm release metallb to v0.13.12' (#30) from renovate/metallb-0.x into main
Reviewed-on: #30
2024-01-16 08:59:45 +00:00
ee52d2b777 Update Helm release metallb to v0.13.12 2024-01-15 19:00:31 +00:00
384e9fbaec no service account needed 2024-01-15 19:12:19 +01:00
606aded35f argo manage metallb 2024-01-15 19:03:49 +01:00
a3aa8888e9 or like that? 2024-01-14 17:31:24 +01:00
aaeb43e9c3 let's check if we get ips like that 2024-01-14 17:27:37 +01:00
a9b1d02a7e keeping some ips here 2024-01-14 17:22:57 +01:00
76b49270eb fix type 2024-01-14 12:58:42 +01:00
9b57715f92 bad yaml 2024-01-14 12:56:23 +01:00
85a96cf87b bump version 2024-01-14 12:54:33 +01:00
78b4be8fbd next try 2024-01-14 12:51:14 +01:00
7bc10b57ce lets try adding thanos 2024-01-14 12:41:03 +01:00
de26a052e8 QOL improvements 2024-01-11 22:05:05 +01:00
28ff769757 Deploy full on octodns 2024-01-11 21:57:02 +01:00
6a58ea337e forgot secret 2024-01-11 21:38:24 +01:00
2af279c161 still crashes, now due to auth 2024-01-11 21:37:29 +01:00
c26997ff83 single run only 2024-01-11 18:39:13 +01:00
a354464f6e try with local directory 2024-01-11 18:26:37 +01:00
268a9f3a7a correct env vars and labels 2024-01-11 18:12:12 +01:00
4ddeaf6c99 try this 2024-01-11 18:08:35 +01:00
b6f9a818af Execute 2nd command as well 2024-01-11 18:04:55 +01:00
f4670aa471 Add ddns 2024-01-11 17:59:56 +01:00
72a2914c24 correct git target 2024-01-11 17:52:29 +01:00
1d5bc8a9c1 why? 2024-01-11 17:51:01 +01:00
892c412fd9 let's tune it down 2024-01-11 17:46:25 +01:00
b6f7ead955 whoopsie 2024-01-11 17:44:58 +01:00
f033ba16eb correct version 2024-01-11 17:43:31 +01:00
f3ae2c424b use octodns 2024-01-11 17:42:35 +01:00
36035ee84d bump immich version 2024-01-11 10:08:12 +01:00
50679b400a Merge pull request 'Update actualbudget/actual-server Docker tag to v24' (#28) from renovate/actualbudget-actual-server-24.x into main
Reviewed-on: #28
2024-01-10 16:08:35 +00:00
a68fb5f0a7 Update actualbudget/actual-server Docker tag to v24 2024-01-10 13:00:43 +00:00
5792367b8b Add finance to auto deploy 2024-01-10 13:15:42 +01:00
3699b79f1a let's try these monitorings 2024-01-08 15:48:38 +01:00
e473abda12 Merge pull request 'Update Helm release grafana to v7.0.21' (#25) from renovate/grafana-7.x into main
Reviewed-on: #25
2024-01-08 13:01:14 +00:00
f67f586006 Update Helm release grafana to v7.0.21 2024-01-08 10:00:33 +00:00
61e1276f02 maybe like that 2024-01-07 12:30:51 +01:00
111fd35fc3 needed? 2024-01-07 12:18:06 +01:00
cc4148fb8a correct crds 2024-01-07 12:16:47 +01:00
f1e624985f come on 2024-01-07 12:15:10 +01:00
c8d7d3c854 use traefik 2024-01-07 12:12:46 +01:00
4880503609 Is actually a token 2024-01-07 12:06:53 +01:00
f905ce1611 maybe it wes a token actually? 2024-01-07 12:05:42 +01:00
ecfc65ecdd try like this? 2024-01-07 11:59:41 +01:00
7da1d705a4 update authorization 2024-01-07 11:51:20 +01:00
299cbea97e change ingress slightly 2024-01-07 11:41:05 +01:00
b633d61920 update whoami 2024-01-07 11:39:10 +01:00
bfb8244e59 made a dum dum 2024-01-07 11:37:38 +01:00
33c2df9fa3 add external dns 2024-01-07 11:35:52 +01:00
3d84d6bed1 does servicemonitor accept this? 2024-01-04 18:29:18 +01:00
cf6a931097 fix port names 2024-01-04 18:27:03 +01:00
53c3865072 fix label syntax 2024-01-04 18:23:32 +01:00
d09a3509af trying to monitor syncthing 2024-01-04 18:21:26 +01:00
8c0abc16c4 Merge pull request 'Update homeassistant/home-assistant Docker tag to v2024' (#24) from renovate/homeassistant-home-assistant-2024.x into main
Reviewed-on: #24
2024-01-04 08:45:45 +00:00
399969677f Merge pull request 'Update Helm release immich to v0.3.1' (#22) from renovate/immich-0.x into main
Reviewed-on: #22
2024-01-04 08:44:55 +00:00
762756310a Update homeassistant/home-assistant Docker tag to v2024 2024-01-03 21:00:38 +00:00
ec964be7c3 whoopsie 2023-12-31 18:49:54 +01:00
0603da76b2 update gitea metric collection 2023-12-31 18:40:57 +01:00
a437c4228e update some scraping config 2023-12-31 18:26:45 +01:00
d5aab95186 try as a string 2023-12-31 17:58:15 +01:00
3acb329730 try again 2023-12-31 17:55:22 +01:00
73ce4e340f try again 2023-12-31 17:44:42 +01:00
0d4b6f4605 remove label requiremetns 2023-12-31 17:37:51 +01:00
deeb35bbb6 test monitoring 2023-12-31 17:34:11 +01:00
d4c658a28c match all servicemonitors? 2023-12-31 17:13:58 +01:00
1fcebe033b fix annotations 2023-12-31 17:06:13 +01:00
8fe51863f4 fix tag 2023-12-30 10:48:46 +01:00
c4eda4e75d fix tag 2023-12-30 10:45:23 +01:00
9490015728 maybe like that? 2023-12-30 10:42:23 +01:00
a641df167f remove port names 2023-12-30 10:39:55 +01:00
21d100fb62 update service config 2023-12-30 10:38:59 +01:00
26b06c553a deploy syncthing 2023-12-30 10:30:05 +01:00
d51bfcf7db Merge pull request 'Update Helm release homarr to v1.0.4' (#23) from renovate/homarr-1.x into main
Reviewed-on: #23
2023-12-27 17:27:57 +00:00
788c2436fc Update Helm release homarr to v1.0.4 2023-12-27 17:00:32 +00:00
c9e6d08dcd temporary home page 2023-12-26 14:56:57 +01:00
6b2e9f7165 small updates 2023-12-26 14:54:49 +01:00
8618468534 more ddns verbosity 2023-12-26 14:52:09 +01:00
94d6c0f523 update to match bash syntax 2023-12-26 14:37:43 +01:00
9aca8e9e0b add automatic dns updates 2023-12-26 14:34:57 +01:00
72b7734535 postgres metrics 2023-12-24 14:42:04 +01:00
28f33f8ff7 update misconfigs 2023-12-24 14:09:11 +01:00
4cf26679c6 add prometheus monitoring 2023-12-24 13:44:22 +01:00
1cd4df8b8f update prom cfg 2023-12-24 11:33:32 +01:00
adeb333954 add svc 2023-12-23 20:40:34 +01:00
e6bd080c6e switch to prometheus operator 2023-12-23 20:20:27 +01:00
c9f883eaa6 Update Helm release immich to v0.3.1 2023-12-23 16:00:31 +00:00
014309bad6 add prometheus 2023-12-23 15:39:03 +01:00
c61698fad9 correct vector version 2023-12-22 01:21:35 +01:00
8c21d58529 vectors finally 2023-12-22 00:51:38 +01:00
722b7c3fb6 correct pg version 2023-12-22 00:34:44 +01:00
b852da0321 try bumping the version 2023-12-22 00:19:32 +01:00
9c5affeff6 update immich 2023-12-22 00:06:00 +01:00
b6c2f57acf new db 2023-12-22 00:03:18 +01:00
2e4e033c36 local postgres 2023-12-22 00:00:30 +01:00
285a7541ca fix 2023-12-21 18:02:22 +01:00
dbf58027d8 trying cloudnative postgres 2023-12-21 18:00:20 +01:00
2f9019b6ba fixing pvc 2023-12-21 12:37:29 +01:00
1743ffca74 grafana cleanup 2023-12-21 12:28:48 +01:00
ea7527c143 Merge pull request 'Update Helm release grafana to v7' (#20) from renovate/grafana-7.x into main
Reviewed-on: #20
2023-12-21 11:17:01 +00:00
c27b289866 Update Helm release grafana to v7 2023-12-21 10:00:38 +00:00
4cbd95fd78 Merge pull request 'Update Helm release grafana to v6.61.2' (#19) from renovate/grafana-6.x into main
Reviewed-on: #19
2023-12-21 09:12:50 +00:00
5cfb2a02e3 Merge pull request 'Update Helm release telegraf to v1.8.39' (#18) from renovate/telegraf-1.x into main
Reviewed-on: #18
2023-12-21 09:12:21 +00:00
82559e848a Merge pull request 'Update adguard/adguardhome Docker tag to v0.107.43' (#9) from renovate/adguard-adguardhome-0.x into main
Reviewed-on: #9
2023-12-21 09:11:54 +00:00
2f31cd6934 Update Helm release grafana to v6.61.2 2023-12-18 13:00:30 +00:00
4fdd4a39f5 Update Helm release telegraf to v1.8.39 2023-12-18 12:00:33 +00:00
86d32efc64 Update adguard/adguardhome Docker tag to v0.107.43 2023-12-11 15:00:22 +00:00
238 changed files with 2990 additions and 4157 deletions

.gitignore vendored

@@ -1,2 +1,6 @@
+# Kubernetes secrets
 *.secret.yaml
 charts/
+main.key
+# Helm Chart files
+charts/


@@ -1,11 +1,9 @@
 # Kluster setup and IaaC using argoCD
 ### Initial setup
 #### Requirements:
-- A running k3s instance run:
-- `metalLB` deployed
+- A running k3s instance
 - `sealedsecrets` deployed
 #### Installing argo and the app-of-apps
@@ -29,5 +27,21 @@ The app-of-apps will bootstrap a fully featured cluster with the following compo
 - immich
 - ...
+#### Recap
+- install sealedsecrets see [README](./infrastructure/sealedsecrets/README.md)
+```bash
+kubectl apply -k infrastructure/sealedsecrets
+kubectl apply -f infrastructure/sealedsecrets/main.key
+kubectl delete pod -n kube-system -l name=sealed-secrets-controller
+```
+- install argocd
+```bash
+kubectl apply -k infrastructure/argocd
+```
+- wait...
+### Adding an application
+todo
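The "Adding an application" section of the README above is still a `todo`. In an app-of-apps layout like this one, registering a new app would plausibly mean committing a directory under `apps/` and an ArgoCD `Application` manifest pointing at it. A hypothetical sketch follows (the repo URL, project name, and `myapp` placeholders are assumptions, not taken from this repo):

```yaml
# Hypothetical ArgoCD Application for a new app. The repoURL, project,
# and "myapp" names are placeholders; adapt them to the actual repo layout.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.kluster.moll.re/remoll/k3s-infra.git
    targetRevision: main
    path: apps/myapp
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Once such a manifest is picked up by the app-of-apps, ArgoCD syncs the kustomization under `apps/myapp` automatically.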


@@ -1,4 +1,4 @@
-apiVersion: traefik.containo.us/v1alpha1
+apiVersion: traefik.io/v1alpha1
 kind: IngressRouteTCP
 metadata:
   name: adguard-tls-ingress


@@ -10,7 +10,7 @@ resources:
 images:
   - name: adguard/adguardhome
     newName: adguard/adguardhome
-    newTag: v0.107.42
+    newTag: v0.107.53
 namespace: adguard


@@ -24,6 +24,8 @@ metadata:
 spec:
   allocateLoadBalancerNodePorts: true
   loadBalancerIP: 192.168.3.2
+  externalTrafficPolicy: Local
   ports:
     - name: dns-tcp
       nodePort: 31306
@@ -46,6 +48,7 @@ metadata:
 spec:
   allocateLoadBalancerNodePorts: true
   loadBalancerIP: 192.168.3.2
+  externalTrafficPolicy: Local
   ports:
     - name: dns-udp
       nodePort: 30547


@@ -0,0 +1,42 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: audiobookshelf
spec:
  replicas: 1
  selector:
    matchLabels:
      app: audiobookshelf
  template:
    metadata:
      labels:
        app: audiobookshelf
    spec:
      containers:
        - name: audiobookshelf
          image: audiobookshelf
          ports:
            - containerPort: 80
          env:
            - name: TZ
              value: Europe/Berlin
            - name: CONFIG_PATH
              value: /data/config
            - name: METADATA_PATH
              value: /data/metadata
          volumeMounts:
            - name: data
              mountPath: /data
          resources:
            requests:
              cpu: "100m"
              memory: "200Mi"
            limits:
              cpu: "2"
              memory: "1Gi"
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: audiobookshelf-data


@@ -0,0 +1,17 @@
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: audiobookshelf-ingressroute
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`audiobookshelf.kluster.moll.re`)
      kind: Rule
      services:
        - name: audiobookshelf-web
          port: 80
  tls:
    certResolver: default-tls


@@ -0,0 +1,15 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - pvc.yaml
  - deployment.yaml
  - service.yaml
  - ingress.yaml
namespace: audiobookshelf
images:
  - name: audiobookshelf
    newName: ghcr.io/advplyr/audiobookshelf
    newTag: "2.15.0"


@@ -1,11 +1,9 @@
 kind: PersistentVolumeClaim
 apiVersion: v1
 metadata:
-  name: postgres-backup-claim
+  name: audiobookshelf-data
 spec:
-  storageClassName: nfs-client
+  storageClassName: "nfs-client"
   accessModes:
     - ReadWriteOnce
   resources:

@@ -0,0 +1,10 @@
apiVersion: v1
kind: Service
metadata:
  name: audiobookshelf-web
spec:
  selector:
    app: audiobookshelf
  ports:
    - port: 80
      targetPort: 80


@@ -0,0 +1,18 @@
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: dendrite-ingressroute
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`dendrite.kluster.moll.re`)
      kind: Rule
      services:
        - name: dendrite
          port: 8008
          # scheme: https
  tls:
    certResolver: default-tls


@@ -0,0 +1,16 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - postgres.yaml
  - postgres-user.secret.yaml
  - ingress.yaml
namespace: dendrite
helmCharts:
  - name: dendrite
    releaseName: dendrite
    version: 0.13.5
    valuesFile: values.yaml
    repo: https://matrix-org.github.io/dendrite/


@@ -0,0 +1,25 @@
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: dendrite-postgres
spec:
  instances: 1
  imageName: ghcr.io/cloudnative-pg/postgresql:16.4
  bootstrap:
    initdb:
      owner: dendrite
      database: dendrite
      secret:
        name: postgres-password
  # Persistent storage configuration
  storage:
    size: 2Gi
    pvcTemplate:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
      storageClassName: nfs-client
      volumeMode: Filesystem

apps/dendrite/values.yaml Normal file

@@ -0,0 +1,287 @@
# signing key to use
signing_key:
# -- Create a new signing key, if not exists
create: true
persistence:
jetstream:
# -- PVC Storage Request for the jetstream volume
capacity: "1Gi"
# -- The storage class to use for volume claims.
storageClass: "nfs-client"
media:
# -- PVC Storage Request for the media volume
capacity: "1Gi"
# -- The storage class to use for volume claims.
storageClass: "nfs-client"
search:
# -- PVC Storage Request for the search volume
capacity: "1Gi"
# -- The storage class to use for volume claims.
storageClass: "nfs-client"
dendrite_config:
version: 2
global:
# -- **REQUIRED** Servername for this Dendrite deployment.
server_name: "dendrite.kluster.moll.re"
# -- The server name to delegate server-server communications to, with optional port
# e.g. localhost:443
well_known_server_name: ""
# -- The server name to delegate client-server communications to, with optional port
# e.g. localhost:443
well_known_client_name: ""
# -- Lists of domains that the server will trust as identity servers to verify third
# party identifiers such as phone numbers and email addresses.
trusted_third_party_id_servers:
- matrix.org
- vector.im
# -- The paths and expiry timestamps (as a UNIX timestamp in millisecond precision)
# to old signing keys that were formerly in use on this domain name. These
# keys will not be used for federation request or event signing, but will be
# provided to any other homeserver that asks when trying to verify old events.
old_private_keys:
# If the old private key file is available:
# - private_key: old_matrix_key.pem
# expired_at: 1601024554498
# If only the public key (in base64 format) and key ID are known:
# - public_key: mn59Kxfdq9VziYHSBzI7+EDPDcBS2Xl7jeUdiiQcOnM=
# key_id: ed25519:mykeyid
# expired_at: 1601024554498
# -- Disable federation. Dendrite will not be able to make any outbound HTTP requests
# to other servers and the federation API will not be exposed.
disable_federation: false
key_validity_period: 168h0m0s
database:
# -- The connection string for connections to Postgres.
# This will be set automatically if using the Postgres dependency
connection_string: "postgresql://dendrite:supersecretpassword!@dendrite-postgres-rw/dendrite"
# -- Default database maximum open connections
max_open_conns: 90
# -- Default database maximum idle connections
max_idle_conns: 5
# -- Default database maximum lifetime
conn_max_lifetime: -1
jetstream:
# -- Persistent directory to store JetStream streams in.
storage_path: "/data/jetstream"
# -- NATS JetStream server addresses if not using internal NATS.
addresses: []
# -- The prefix for JetStream streams
topic_prefix: "Dendrite"
# -- Keep all data in memory. (**NOTE**: This is overriden in Helm to `false`)
in_memory: false
# -- Disables TLS validation. This should **NOT** be used in production.
disable_tls_validation: true
cache:
# -- The estimated maximum size for the global cache in bytes, or in terabytes,
# gigabytes, megabytes or kilobytes when the appropriate 'tb', 'gb', 'mb' or
# 'kb' suffix is specified. Note that this is not a hard limit, nor is it a
# memory limit for the entire process. A cache that is too small may ultimately
# provide little or no benefit.
max_size_estimated: 1gb
# -- The maximum amount of time that a cache entry can live for in memory before
# it will be evicted and/or refreshed from the database. Lower values result in
# easier admission of new cache entries but may also increase database load in
# comparison to higher values, so adjust conservatively. Higher values may make
# it harder for new items to make it into the cache, e.g. if new rooms suddenly
# become popular.
max_age: 1h
report_stats:
# -- Configures phone-home statistics reporting. These statistics contain the server
# name, number of active users and some information on your deployment config.
# We use this information to understand how Dendrite is being used in the wild.
enabled: false
presence:
# -- Controls whether we receive presence events from other servers
enable_inbound: false
# -- Controls whether we send presence events for our local users to other servers.
# (_May increase CPU/memory usage_)
enable_outbound: false
server_notices:
# -- Server notices allows server admins to send messages to all users on the server.
enabled: false
# -- The local part for the user sending server notices.
local_part: "_server"
# -- The display name for the user sending server notices.
display_name: "Server Alerts"
# -- The avatar URL (as a mxc:// URL) name for the user sending server notices.
avatar_url: ""
# The room name to be used when sending server notices. This room name will
# appear in user clients.
room_name: "Server Alerts"
# prometheus metrics
metrics:
# -- Whether or not Prometheus metrics are enabled.
enabled: false
# HTTP basic authentication to protect access to monitoring.
basic_auth:
# -- HTTP basic authentication username
user: "metrics"
# -- HTTP basic authentication password
password: metrics
app_service_api:
# -- Disable the validation of TLS certificates of appservices. This is
# not recommended in production since it may allow appservice traffic
# to be sent to an insecure endpoint.
disable_tls_validation: false
# -- Appservice config files to load on startup. (**NOTE**: This is overriden by Helm, if a folder `./appservices/` exists)
config_files: []
client_api:
# -- Prevents new users from being able to register on this homeserver, except when
# using the registration shared secret below.
registration_disabled: true
# Prevents new guest accounts from being created. Guest registration is also
# disabled implicitly by setting 'registration_disabled' above.
guests_disabled: true
# -- If set, allows registration by anyone who knows the shared secret, regardless of
# whether registration is otherwise disabled.
registration_shared_secret: "supersecretpassword"
# TURN server information that this homeserver should send to clients.
turn:
# -- Duration for how long users should be considered valid ([see time.ParseDuration](https://pkg.go.dev/time#ParseDuration) for more)
turn_user_lifetime: "24h"
turn_uris: []
turn_shared_secret: ""
# -- The TURN username
turn_username: ""
# -- The TURN password
turn_password: ""
rate_limiting:
# -- Enable rate limiting
enabled: true
# -- After how many requests a rate limit should be activated
threshold: 20
# -- Cooloff time in milliseconds
cooloff_ms: 500
# -- Users which should be exempt from rate limiting
exempt_user_ids:
federation_api:
# -- Federation failure threshold. How many consecutive failures that we should
# tolerate when sending federation requests to a specific server. The backoff
# is 2**x seconds, so 1 = 2 seconds, 2 = 4 seconds, 3 = 8 seconds, etc.
# The default value is 16 if not specified, which is circa 18 hours.
send_max_retries: 16
# -- Disable TLS validation. This should **NOT** be used in production.
disable_tls_validation: false
prefer_direct_fetch: false
# -- Prevents Dendrite from keeping HTTP connections
# open for reuse for future requests. Connections will be closed quicker
# but we may spend more time on TLS handshakes instead.
disable_http_keepalives: false
# -- Perspective keyservers, to use as a backup when direct key fetch
# requests don't succeed.
# @default -- See value.yaml
key_perspectives:
- server_name: matrix.org
keys:
- key_id: ed25519:auto
public_key: Noi6WqcDj0QmPxCNQqgezwTlBKrfqehY1u2FyWP9uYw
- key_id: ed25519:a_RXGa
public_key: l8Hft5qXKn1vfHrg3p4+W8gELQVo8N13JkluMfmn2sQ
media_api:
# -- The path to store media files (e.g. avatars) in
base_path: "/data/media_store"
# -- The max file size for uploaded media files
max_file_size_bytes: 10485760
# Whether to dynamically generate thumbnails if needed.
dynamic_thumbnails: false
# -- The maximum number of simultaneous thumbnail generators to run.
max_thumbnail_generators: 10
# -- A list of thumbnail sizes to be generated for media content.
# @default -- See value.yaml
thumbnail_sizes:
- width: 32
height: 32
method: crop
- width: 96
height: 96
method: crop
- width: 640
height: 480
method: scale
sync_api:
# -- This option controls which HTTP header to inspect to find the real remote IP
# address of the client. This is likely required if Dendrite is running behind
# a reverse proxy server.
real_ip_header: X-Real-IP
# -- Configuration for the full-text search engine.
search:
# -- Whether fulltext search is enabled.
enabled: true
# -- The path to store the search index in.
index_path: "/data/search"
# -- The language most likely to be used on the server - used when indexing, to
# ensure the returned results match expectations. A full list of possible languages
# can be found [here](https://github.com/matrix-org/dendrite/blob/76db8e90defdfb9e61f6caea8a312c5d60bcc005/internal/fulltext/bleve.go#L25-L46)
language: "en"
user_api:
# -- bcrypt cost to use when hashing passwords.
# (ranges from 4-31; 4 being least secure, 31 being most secure; _NOTE: Using a too high value can cause clients to timeout and uses more CPU._)
bcrypt_cost: 10
# -- OpenID Token lifetime in milliseconds.
openid_token_lifetime_ms: 3600000
# -- Disable TLS validation when hitting push gateways. This should **NOT** be used in production.
push_gateway_disable_tls_validation: false
# -- Rooms to join users to after registration
auto_join_rooms: []
# -- Default logging configuration
logging:
- type: std
level: info
postgresql:
# -- Enable and configure postgres as the database for dendrite.
# @default -- See values.yaml
enabled: false
ingress:
# -- Create an ingress for the deployment
enabled: false
service:
type: ClusterIP
port: 8008
prometheus:
servicemonitor:
# -- Enable ServiceMonitor for Prometheus-Operator for scrape metric-endpoint
enabled: false
# -- Extra Labels on ServiceMonitor for selector of Prometheus Instance
labels: {}
rules:
# -- Enable PrometheusRules for Prometheus-Operator for setup alerting
enabled: false
# -- Extra Labels on PrometheusRules for selector of Prometheus Instance
labels: {}
# -- additional alertrules (no default alertrules are provided)
additionalRules: []
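A side note on the `federation_api` comments above: the retry backoff grows as 2**x seconds, so the default `send_max_retries` of 16 gives a final backoff of 2^16 seconds, which is the "circa 18 hours" the comment mentions. A minimal sketch of that arithmetic (illustration only, not Dendrite's actual implementation):

```python
def federation_backoff_seconds(consecutive_failures: int) -> int:
    # "The backoff is 2**x seconds" per the federation_api comment above
    return 2 ** consecutive_failures

assert federation_backoff_seconds(1) == 2   # 2 seconds
assert federation_backoff_seconds(3) == 8   # 8 seconds
# at the default send_max_retries of 16, the final backoff is ~18 hours:
print(round(federation_backoff_seconds(16) / 3600, 1))  # → 18.2
```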


@@ -0,0 +1,48 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: ocis-statefulset
spec:
selector:
matchLabels:
app: ocis
serviceName: ocis-web
replicas: 1
template:
metadata:
labels:
app: ocis
spec:
containers:
- name: ocis
image: ocis
resources:
limits:
memory: "1Gi"
cpu: "1000m"
env:
- name: OCIS_INSECURE
value: "true"
- name: OCIS_URL
value: "https://ocis.kluster.moll.re"
- name: OCIS_LOG_LEVEL
value: "debug"
ports:
- containerPort: 9200
volumeMounts:
- name: config
mountPath: /etc/ocis
# - name: ocis-config-file
# mountPath: /etc/ocis/config.yaml
- name: data
mountPath: /var/lib/ocis
volumes:
# - name: ocis-config
# persistentVolumeClaim:
# claimName: ocis-config
- name: config
secret:
secretName: ocis-config
- name: data
persistentVolumeClaim:
claimName: ocis

apps/files/ingress.yaml

@@ -0,0 +1,18 @@
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: ocis-ingressroute
spec:
entryPoints:
- websecure
routes:
- match: Host(`ocis.kluster.moll.re`)
kind: Rule
services:
- name: ocis-web
port: 9200
scheme: https
tls:
certResolver: default-tls


@@ -0,0 +1,16 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- ingress.yaml
- service.yaml
- pvc.yaml
- deployment.yaml
- ocis-config.sealedsecret.yaml
namespace: files
images:
- name: ocis
newName: owncloud/ocis
newTag: "5.0.8"

File diff suppressed because one or more lines are too long


@@ -1,13 +1,11 @@
```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim
name: ocis
spec:
storageClassName: nfs-client
storageClassName: "nfs-client"
accessModes:
- ReadWriteMany
- ReadWriteOnce
resources:
requests:
storage: 1Mi
```
storage: 150Gi

apps/files/service.yaml

@@ -0,0 +1,10 @@
apiVersion: v1
kind: Service
metadata:
name: ocis-web
spec:
selector:
app: ocis
ports:
- port: 9200
targetPort: 9200


@@ -1,12 +1,10 @@
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: finance
name: actualbudget
labels:
app: actualbudget
spec:
# deployment running a single container
selector:
matchLabels:
app: actualbudget
@@ -18,83 +16,19 @@ spec:
spec:
containers:
- name: actualbudget
image: actualbudget/actual-server:latest
image: actualbudget
imagePullPolicy: Always
env:
- name: TZ
value: Europe/Berlin
volumeMounts:
- name: actualbudget-data-nfs
- name: data
mountPath: /data
ports:
- containerPort: 5006
name: http
protocol: TCP
volumes:
- name: actualbudget-data-nfs
- name: data
persistentVolumeClaim:
claimName: actualbudget-data-nfs
---
apiVersion: v1
kind: PersistentVolume
metadata:
namespace: finance
name: "actualbudget-data-nfs"
spec:
# storageClassName: fast
capacity:
storage: "5Gi"
accessModes:
- ReadWriteOnce
nfs:
path: /export/kluster/actualbudget
server: 192.168.1.157
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
namespace: finance
name: "actualbudget-data-nfs"
spec:
storageClassName: "fast"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "5Gi"
# selector:
# matchLabels:
# directory: "journal-data"
---
apiVersion: v1
kind: Service
metadata:
namespace: finance
name: actualbudget
spec:
selector:
app: actualbudget
ports:
- protocol: TCP
port: 5006
targetPort: 5006
type: ClusterIP
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
namespace: finance
name: actualbudget
spec:
entryPoints:
- websecure
routes:
- match: Host(`actualbudget.kluster.moll.re`)
kind: Rule
services:
- name: actualbudget
port: 5006
tls:
certResolver: default-tls
claimName: data


@@ -0,0 +1,15 @@
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: actualbudget
spec:
entryPoints:
- websecure
routes:
- match: Host(`actualbudget.kluster.moll.re`)
kind: Rule
services:
- name: actualbudget
port: 5006
tls:
certResolver: default-tls


@@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: "data"
spec:
storageClassName: "nfs-client"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "5Gi"


@@ -0,0 +1,12 @@
apiVersion: v1
kind: Service
metadata:
name: actualbudget
spec:
selector:
app: actualbudget
ports:
- protocol: TCP
port: 5006
targetPort: 5006
type: ClusterIP


@@ -1,66 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: firefly-importer
name: firefly-importer
namespace: finance
spec:
selector:
matchLabels:
app: firefly-importer
template:
metadata:
labels:
app: firefly-importer
spec:
containers:
- image: fireflyiii/data-importer:latest
imagePullPolicy: Always
name: firefly-importer
resources: {}
ports:
- containerPort: 8080
env:
- name: FIREFLY_III_ACCESS_TOKEN
value: redacted
- name: FIREFLY_III_URL
value: firefly-http:8080
# - name: APP_URL
# value: https://finance.kluster.moll.re
- name: TRUSTED_PROXIES
value: "**"
---
apiVersion: v1
kind: Service
metadata:
name: firefly-importer-http
namespace: finance
labels:
app: firefly-importer-http
spec:
type: ClusterIP
ports:
- port: 8080
# name: http
selector:
app: firefly-importer
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: firefly-importer-ingress
namespace: finance
spec:
entryPoints:
- websecure
routes:
- match: Host(`importer.finance.kluster.moll.re`)
kind: Rule
services:
- name: firefly-importer-http
port: 8080
tls:
certResolver: default-tls


@@ -1,79 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: firefly
name: firefly
namespace: finance
spec:
selector:
matchLabels:
app: firefly
template:
metadata:
labels:
app: firefly
spec:
containers:
- image: fireflyiii/core:latest
imagePullPolicy: Always
name: firefly
resources: {}
ports:
- containerPort: 8080
env:
- name: APP_ENV
value: "local"
- name: APP_KEY
value: iKejRAlgwx2Y/fxdosXjABbNxNzEuJdl
- name: DB_CONNECTION
value: sqlite
- name: APP_URL
value: https://finance.kluster.moll.re
- name: TRUSTED_PROXIES
value: "**"
volumeMounts:
- mountPath: /var/www/html/storage/database
name: firefly-database
volumes:
- name: firefly-database
persistentVolumeClaim:
claimName: firefly-database-nfs
---
apiVersion: v1
kind: Service
metadata:
name: firefly-http
namespace: finance
labels:
app: firefly-http
spec:
type: ClusterIP
ports:
- port: 8080
# name: http
selector:
app: firefly
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: firefly-ingress
namespace: finance
spec:
entryPoints:
- websecure
routes:
- match: Host(`finance.kluster.moll.re`)
kind: Rule
services:
- name: firefly-http
port: 8080
tls:
certResolver: default-tls


@@ -1,34 +0,0 @@
---
apiVersion: v1
kind: PersistentVolume
metadata:
namespace: finance
name: firefly-database-nfs
labels:
directory: firefly
spec:
# storageClassName: fast
# volumeMode: Filesystem
accessModes:
- ReadOnlyMany
capacity:
storage: "1G"
nfs:
path: /firefly # inside nfs part.
server: 10.43.239.43 # assigned to nfs-server service. Won't change as long as service is not redeployed
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
namespace: finance
name: firefly-database-nfs
spec:
resources:
requests:
storage: "1G"
# storageClassName: fast
accessModes:
- ReadOnlyMany
volumeName: firefly-database-nfs


@@ -0,0 +1,16 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: finance
resources:
- namespace.yaml
- actualbudget.pvc.yaml
- actualbudget.deployment.yaml
- actualbudget.service.yaml
- actualbudget.ingress.yaml
images:
- name: actualbudget
newName: actualbudget/actual-server
newTag: 24.10.1


@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: placeholder


@@ -1,15 +0,0 @@
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: homarr-ingress
spec:
entryPoints:
- websecure
routes:
- match: Host(`start.kluster.moll.re`)
kind: Rule
services:
- name: homarr
port: 7575
tls:
certResolver: default-tls


@@ -1,17 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: homarr
resources:
- namespace.yaml
- pvc.yaml
- ingress.yaml
helmCharts:
- name: homarr
releaseName: homarr
repo: https://oben01.github.io/charts/
version: 1.0.1
valuesFile: values.yaml


@@ -1,60 +0,0 @@
# -- Default values for homarr
# -- Declare variables to be passed into your templates.
# -- Number of replicas
replicaCount: 1
env:
# -- Your local time zone
TZ: "Europe/Berlin"
# -- Colors and preferences, possible values dark / light
DEFAULT_COLOR_SCHEME: "dark"
# -- Service configuration
service:
# -- Service type
type: ClusterIP
# -- Service port
port: 7575
# -- Service target port
targetPort: 7575
# -- Ingress configuration
ingress:
enabled: false
persistence:
- name: homarr-config
# -- Enable homarr-config persistent storage
enabled: true
# -- homarr-config storage class name
storageClassName: "nfs-client"
# -- homarr-config access mode
accessMode: "ReadWriteOnce"
persistentVolumeReclaimPolicy: Retain
# -- homarr-config storage size
size: "50Mi"
# -- homarr-config mount path inside the pod
mountPath: "/app/data/configs"
- name: homarr-database
# -- Enable homarr-database persistent storage
enabled: true
# -- homarr-database storage class name
storageClassName: "nfs-client"
# -- homarr-database access mode
accessMode: "ReadWriteOnce"
# -- homarr-database storage size
size: "50Mi"
# -- homarr-database mount path inside the pod
mountPath: "/app/database"
- name: homarr-icons
# -- Enable homarr-icons persistent storage
enabled: true
# -- homarr-icons storage class name
storageClassName: "nfs-client"
# -- homarr-icons access mode
accessMode: "ReadWriteOnce"
# -- homarr-icons storage size
size: "50Mi"
# -- homarr-icons mount path inside the pod
mountPath: "/app/public/icons"


@@ -1,4 +1,3 @@
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -22,7 +21,7 @@ spec:
- name: TZ
value: Europe/Berlin
volumeMounts:
- name: config
- name: config-dir
mountPath: /config
resources:
requests:
@@ -32,6 +31,7 @@ spec:
cpu: "2"
memory: "1Gi"
volumes:
- name: config
- name: config-dir
persistentVolumeClaim:
claimName: homeassistant-nfs
claimName: config


@@ -1,4 +1,4 @@
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: homeassistant-ingress
@@ -6,7 +6,7 @@ spec:
entryPoints:
- websecure
routes:
- match: Host(`home.kluster.moll.re`)
- match: Host(`home.kluster.moll.re`) && !Path(`/api/prometheus`)
middlewares:
- name: homeassistant-websocket
kind: Rule
@@ -15,9 +15,8 @@ spec:
port: 8123
tls:
certResolver: default-tls
---
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: homeassistant-websocket
@@ -27,6 +26,3 @@ spec:
X-Forwarded-Proto: "https"
# enable websockets
Upgrade: "websocket"


@@ -9,8 +9,10 @@ resources:
- pvc.yaml
- service.yaml
- deployment.yaml
- servicemonitor.yaml
images:
- name: homeassistant/home-assistant
newName: homeassistant/home-assistant
newTag: "2023.12"
newTag: "2024.10"


@@ -1,28 +1,11 @@
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: homeassistant-nfs
spec:
# storageClassName: slow
capacity:
storage: "1Gi"
# volumeMode: Filesystem
accessModes:
- ReadWriteOnce
nfs:
path: /kluster/homeassistant
server: 192.168.1.157
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: homeassistant-nfs
name: config
spec:
storageClassName: ""
storageClassName: "nfs-client"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "1Gi"
volumeName: homeassistant-nfs


@@ -2,9 +2,12 @@ apiVersion: v1
kind: Service
metadata:
name: homeassistant-web
labels:
app: homeassistant
spec:
selector:
app: homeassistant
ports:
- port: 8123
targetPort: 8123
targetPort: 8123
name: http


@@ -0,0 +1,13 @@
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: homeassistant-servicemonitor
labels:
app: homeassistant
spec:
selector:
matchLabels:
app: homeassistant
endpoints:
- port: http
path: /api/prometheus


@@ -1,4 +1,4 @@
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: stripprefix
@@ -7,7 +7,7 @@ spec:
prefixes:
- /api
---
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: websocket
@@ -18,7 +18,7 @@ spec:
# enable websockets
Upgrade: "websocket"
---
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: immich-ingressroute


@@ -1,16 +1,33 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- ingress.yaml
- pvc.yaml
- postgres.sealedsecret.yaml
- namespace.yaml
- ingress.yaml
- pvc.yaml
- postgres.yaml
- postgres.sealedsecret.yaml
namespace: immich
helmCharts:
- name: immich
releaseName: immich
version: 0.2.0
version: 0.8.1
valuesFile: values.yaml
repo: https://immich-app.github.io/immich-charts
images:
- name: ghcr.io/immich-app/immich-machine-learning
newTag: v1.117.0
- name: ghcr.io/immich-app/immich-server
newTag: v1.117.0
patches:
- path: patch-redis-pvc.yaml
target:
kind: StatefulSet
name: immich-redis-master


@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: immich-redis-master
spec:
volumeClaimTemplates:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: redis-data
spec:
storageClassName: nfs-client
accessModes:
- ReadWriteMany
resources:
requests:
storage: 2Gi

apps/immich/postgres.yaml

@@ -0,0 +1,35 @@
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: immich-postgres
spec:
instances: 1
imageName: ghcr.io/tensorchord/cloudnative-pgvecto.rs:16.2
bootstrap:
initdb:
owner: immich
database: immich
secret:
name: postgres-password
# Enable the VECTORS extension
postInitSQL:
- CREATE EXTENSION IF NOT EXISTS "vectors";
postgresql:
shared_preload_libraries:
- "vectors.so"
# Persistent storage configuration
storage:
size: 2Gi
pvcTemplate:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
storageClassName: nfs-client
volumeMode: Filesystem
monitoring:
enablePodMonitor: true
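One detail worth noting about the Cluster above: CloudNativePG derives Service names from the Cluster name, creating `<name>-rw`, `<name>-ro` and `<name>-r` Services, which is why the Immich chart values point `DB_HOSTNAME` at `immich-postgres-rw`. A small sketch of the convention (the naming rule is CNPG's; the helper function itself is hypothetical):

```python
def cnpg_service_names(cluster_name: str) -> dict:
    # CloudNativePG exposes one Service per role:
    #   -rw : current primary (read-write)
    #   -ro : replicas only (read-only)
    #   -r  : any instance
    return {role: f"{cluster_name}-{role}" for role in ("rw", "ro", "r")}

names = cnpg_service_names("immich-postgres")
assert names["rw"] == "immich-postgres-rw"  # matches DB_HOSTNAME in the Immich values
```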


@@ -1,26 +1,11 @@
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: immich-nfs
spec:
capacity:
storage: "50Gi"
accessModes:
- ReadWriteOnce
nfs:
path: /kluster/immich
server: 192.168.1.157
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: immich-nfs
name: data
spec:
storageClassName: ""
storageClassName: "nfs-client"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "50Gi"
volumeName: immich-nfs
storage: "100Gi"


@@ -2,15 +2,11 @@
## You can find it at https://github.com/bjw-s/helm-charts/tree/main/charts/library/common
## Refer there for more detail about the supported values
image:
tag: v1.90.2
# These entries are shared between all the Immich components
env:
REDIS_HOSTNAME: '{{ printf "%s-redis-master" .Release.Name }}'
DB_HOSTNAME: "postgres-postgresql.postgres"
DB_HOSTNAME: "immich-postgres-rw"
DB_USERNAME:
valueFrom:
secretKeyRef:
@@ -26,20 +22,19 @@ env:
secretKeyRef:
name: postgres-password
key: password
TYPESENSE_ENABLED: "{{ .Values.typesense.enabled }}"
TYPESENSE_API_KEY: "{{ .Values.typesense.env.TYPESENSE_API_KEY }}"
TYPESENSE_HOST: '{{ printf "%s-typesense" .Release.Name }}'
IMMICH_WEB_URL: '{{ printf "http://%s-web:3000" .Release.Name }}'
IMMICH_SERVER_URL: '{{ printf "http://%s-server:3001" .Release.Name }}'
IMMICH_MACHINE_LEARNING_URL: '{{ printf "http://%s-machine-learning:3003" .Release.Name }}'
IMMICH_METRICS: true
immich:
metrics:
# Enabling this will create the service monitors needed to monitor immich with the prometheus operator
enabled: true
persistence:
# Main data store for all photos shared between different components.
library:
# Automatically creating the library volume is not supported by this chart
# You have to specify an existing PVC to use
existingClaim: immich-nfs
existingClaim: data
# Dependencies
@@ -52,18 +47,6 @@ redis:
auth:
enabled: false
typesense:
enabled: true
env:
TYPESENSE_DATA_DIR: /tsdata
TYPESENSE_API_KEY: typesense
persistence:
tsdata:
# Enabling typesense persistence is recommended to avoid slow reindexing
enabled: true
accessMode: ReadWriteOnce
size: 1Gi
# Immich components
server:
@@ -72,16 +55,6 @@ server:
main:
enabled: false
microservices:
enabled: true
persistence:
geodata-cache:
enabled: true
size: 1Gi
# Optional: Set this to pvc to avoid downloading the geodata every start.
type: emptyDir
accessMode: ReadWriteMany
machine-learning:
enabled: true
persistence:


@@ -18,15 +18,19 @@ spec:
limits:
memory: "2Gi"
cpu: "2"
requests:
memory: "128Mi"
cpu: "250m"
ports:
- containerPort: 8096
name: jellyfin
env:
- name: TZ
value: Europe/Berlin
volumeMounts:
- name: jellyfin-config
- name: config
mountPath: /config
- name: jellyfin-data
- name: media
mountPath: /media
livenessProbe:
httpGet:
@@ -35,10 +39,10 @@ spec:
initialDelaySeconds: 100
periodSeconds: 15
volumes:
- name: jellyfin-config
- name: config
persistentVolumeClaim:
claimName: jellyfin-config-nfs
- name: jellyfin-data
claimName: config
- name: media
persistentVolumeClaim:
claimName: jellyfin-data-nfs
claimName: media


@@ -1,23 +1,4 @@
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: jellyfin-vue-ingress
namespace: media
spec:
entryPoints:
- websecure
routes:
- match: Host(`media.kluster.moll.re`)
middlewares:
- name: jellyfin-websocket
kind: Rule
services:
- name: jellyfin-web
port: 80
tls:
certResolver: default-tls
---
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: jellyfin-backend-ingress
@@ -26,7 +7,7 @@ spec:
entryPoints:
- websecure
routes:
- match: Host(`media-backend.kluster.moll.re`)
- match: Host(`media.kluster.moll.re`) && !Path(`/metrics`)
middlewares:
- name: jellyfin-websocket
- name: jellyfin-server-headers
@@ -37,7 +18,7 @@ spec:
tls:
certResolver: default-tls
---
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: jellyfin-websocket
@@ -48,7 +29,7 @@ spec:
Connection: keep-alive, Upgrade
Upgrade: WebSocket
---
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: jellyfin-server-headers
@@ -60,4 +41,4 @@ spec:
accessControlAllowMethods: [ "GET","HEAD","OPTIONS" ] # "POST","PUT"
accessControlAllowOriginList:
- "*"
accessControlMaxAge: 100
accessControlMaxAge: 100


@@ -5,16 +5,11 @@ namespace: media
resources:
- namespace.yaml
- pvc.yaml
- server.deployment.yaml
- server.service.yaml
- web.deployment.yaml
- web.service.yaml
- deployment.yaml
- service.yaml
- ingress.yaml
images:
- name: jellyfin/jellyfin
newName: jellyfin/jellyfin
newTag: 10.8.13
- name: ghcr.io/jellyfin/jellyfin-vue
newName: ghcr.io/jellyfin/jellyfin-vue
newTag: stable-rc.0.3.1
newTag: 10.9.11


@@ -1,39 +1,21 @@
apiVersion: v1
kind: PersistentVolume
metadata:
namespace: media
name: jellyfin-config-nfs
spec:
capacity:
storage: "1Gi"
accessModes:
- ReadWriteOnce
nfs:
path: /export/kluster/jellyfin-config
server: 192.168.1.157
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
namespace: media
name: jellyfin-config-nfs
name: config
spec:
storageClassName: ""
storageClassName: "nfs-client"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "1Gi"
volumeName: jellyfin-config-nfs
---
apiVersion: v1
kind: PersistentVolume
metadata:
namespace: media
name: jellyfin-data-nfs
name: media
spec:
capacity:
storage: "1Ti"
@@ -46,8 +28,7 @@ spec:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
namespace: media
name: jellyfin-data-nfs
name: media
spec:
storageClassName: ""
accessModes:
@@ -55,4 +36,4 @@ spec:
resources:
requests:
storage: "1Ti"
volumeName: jellyfin-data-nfs
volumeName: media


@@ -3,6 +3,8 @@ apiVersion: v1
kind: Service
metadata:
name: jellyfin-server
labels:
app: jellyfin-server-service
spec:
selector:
app: jellyfin-server


@@ -1,27 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: jellyfin-web
spec:
selector:
matchLabels:
app: jellyfin-web
template:
metadata:
labels:
app: jellyfin-web
spec:
containers:
- name: jellyfin-web
image: ghcr.io/jellyfin/jellyfin-vue
resources:
limits:
memory: "128Mi"
cpu: "30m"
ports:
- containerPort: 80
env:
- name: TZ
value: Europe/Berlin
- name: DEFAULT_SERVERS
value: "https://media-backend.kluster.moll.re"


@@ -1,12 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: jellyfin-web
spec:
selector:
app: jellyfin-web
ports:
- protocol: TCP
port: 80
targetPort: 80

apps/minecraft/README.md

@@ -0,0 +1,7 @@
## Sending a command
```
kubectl exec -it -n minecraft deploy/minecraft-server -- /bin/bash
mc-send-to-console /help
# or directly
kubectl exec -it -n minecraft deploy/minecraft-server -- mc-send-to-console /help
```


@@ -0,0 +1,16 @@
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
creationTimestamp: null
name: curseforge-api
namespace: minecraft
spec:
encryptedData:
key: AgBYeAiejdmxDBorvgnxQX5YvUhR3NId2vfWybMKlc27e6D/bKglLNyZMk70xSnFAPjcDmZ20mYjFPYvDOr9T6IU/REJ8QlzoKAn0xW779R4SkIxRToT+dJv+OM2avgQ9uqp7vja29xeXMjYAnQML+QGZKcrT8mE04G/Ty8rdUiv3yUXK5HFAR3SUF35aVLdlthLjpRkv1s0R7GAP4L2pNzBJNV3i37viceUSSjU0zpOa23fsQOkPAs67AIukAJBqh/hyF/hR9H1GeYZNTI3OcHcvC2iNk/XGstvv0Zy6ApzoebsfWGdsbVn+QUI0EBw+mSTPqpl71cbkz0v4S4XAVndosxWpe6AIgm5MBTU0FXIyGyoFDe1aMPq8BXiQikYVwB48oVNh9KF0xXX5AOG0whB/FEsL3OJsiNQvQ3R/Hru43JBn64oxjVtLfM3E7u8v/xr1VQahX8dylDmb4s5EV01U6O4y19Ou4td1eEMlhpJb0fBPDRUYuWxZAEDGmp+U4tAakyPed11VkcZPPn9fKAAcv8sGs3TYAbbF18hqsBnv2Wd+i7ZEvKwmdmfR/T0r1TJGsvKI7jaW0QtH256XrSxQp7a52qMKMVQWOSKw2k27t/IkRhxT2Prw4GfJvaVr4RozUaBf3LV/hfDWlDfmM2zg3X9W8HkzjotGg021OLxsa0Wzmhffvb8h4bvZwxeq3U1xaJocqXui7z0rT2pF4z3wYHR/lPtexHcOA2M8gfBGKb1rBKh+kW+N+/ZfVLNI0mokg5vrTO2nR2rb4c=
template:
metadata:
creationTimestamp: null
name: curseforge-api
namespace: minecraft
type: Opaque

apps/minecraft/job.yaml

@@ -0,0 +1,57 @@
apiVersion: batch/v1
kind: Job
metadata:
name: start-server
spec:
template:
spec:
restartPolicy: OnFailure
containers:
- name: minecraft-server
image: minecraft
resources:
limits:
memory: "10000Mi"
cpu: "5"
requests:
memory: "1500Mi"
cpu: "500m"
ports:
- containerPort: 25565
env:
- name: EULA
value: "TRUE"
- name: TYPE
value: "AUTO_CURSEFORGE"
- name: CF_API_KEY
valueFrom:
secretKeyRef:
name: curseforge-api
key: key
- name: CF_PAGE_URL
value: "https://www.curseforge.com/minecraft/modpacks/vault-hunters-1-18-2/files/5413446"
- name: VERSION
value: "1.18.2"
- name: INIT_MEMORY
value: "1G"
- name: MAX_MEMORY
value: "8G"
- name: MOTD
value: "VaultHunters baby!"
- name: ENABLE_RCON
value: "false"
- name: CREATE_CONSOLE_IN_PIPE
value: "true"
- name: ONLINE_MODE
value: "true"
- name: ENABLE_AUTOSTOP
value: "true"
volumeMounts:
- name: minecraft-data
mountPath: /data
volumes:
- name: minecraft-data
persistentVolumeClaim:
claimName: minecraft-data


@@ -0,0 +1,17 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: minecraft
resources:
- namespace.yaml
- pvc.yaml
- job.yaml
- service.yaml
- curseforge.sealedsecret.yaml
images:
- name: minecraft
newName: itzg/minecraft-server
newTag: java21


@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: placeholder

apps/minecraft/pvc.yaml

@@ -0,0 +1,11 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: minecraft-data
spec:
storageClassName: "nfs-client"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi


@@ -0,0 +1,12 @@
apiVersion: v1
kind: Service
metadata:
name: minecraft-server
spec:
selector:
app: minecraft-server
ports:
- port: 25565
targetPort: 25565
type: LoadBalancer
loadBalancerIP: 192.168.3.4


@@ -0,0 +1,17 @@
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
creationTimestamp: null
name: grafana-admin-secret
namespace: monitoring
spec:
encryptedData:
password: AgAwMLnsYN1y8JQSqgGQbNG/8jKensTDsEw6ogITdkhDRlJcg8HQ5t7a6xLzNCrLHLJiQW8YOoyLT4lvFkBRMOa2EYcrDvBiRD0PjygWLIscKa7dA+jpAUf/icD9zsiDnTym2yf+VUANcmEgE6DiNvlcsrcmYqiR4pKVUTDlKPNOjOpTJ3nXETb3/sbt69E0JSGwtkvusYQSXKLU9KLbciihv+ycdkdlC9xy9myd4+vYZYXSh/eAvyZeb/hsmdSX7yaASmupMvet6Qsdt99PNzFQxtbQH+LQvYalVZ8bjWZQvCN/p0bA4H15otKBfe8rtEwVthgvyEvo6TK0Mg0pFY/b3AOGFmImnT3rDmgG6S8KTZH0Jce17ksFqvELQmHjqHuYpQsPDl44glM8kWRJ9Mf/Z424LRwZlJNVcOkuVl4qFqPUjzd2rWIyF0RaD0BE012C0ThJxKn2l17lVJbNtdUiR3qNpW01ot2m0CgKd2kXbjDmgRgAll4WgrukfCIn9ZnE0gVCFLJuK3MOQAaipFYy/bDO0izwl9T8nldgcI8OfiC3NTk2O+Es5jJRXu0oJGaC3HrTB7wXiwOoELvAsxLTPxKBiN9mCHCMtZX0PEtrio0dFRQ6Pi5xPng0KVT0I9dvGNsPdhPETNOB913WEvbgP8Gt3cj016nCzk51eUsYbXPpNL2B4kmbIhecqW/8kwKQPwYjVlBSXj3NxjzwMY6PvOl1
user: AgBqmjCYGMqy5zBE+vhtsynOvhWdHWDJDyl1D+laBtLjXTJwzRbNTdunHYo1ekwyqQ6Cr5pi4YMiLxAl1LIHF+Lfsp2QlY+ResAGzp9WgSBtNQDX3EmLDQofeWxMUDdMtMsE9wiKLCfNGDkRDsGquXTz+YFq03m1vH9cB8Bp+1ClWOTui+/Ce0MZlWsJZX1W8WXH7XTirtwUo0s53pc4AplUUH97ZEK3KSIxWa3gLCn0sAPDDLPX+JVA2xtpMq1XuVFiFifjzEtG2h0dejiF35FtSAR+rR4YmEfimk3QpRDfOqV5QUxvjCG+dTV49upSevF2mvbHW+o+lB6vEc6l9cZXvlbnMdaep3NmOsJcJ8wQIdFpFK4iVzFOTKSEbzLPlZ/J+sjS5vDXsfthorIO2faMA1iIf+I663zNxQU5btaK4TNYOZQlrFVjAmioRLkDhGZ6tDUPX/zMv+Crt+0HCwyEyhmvFZckDvezTZrxARSXXMKBVcvjHCyUNkz7ubZRiMU0PGM7fYuHr659e+XMRvj+LFA68ZaEIzCQpCFJenWWYAXgUdRG4LQ1LP2MwvRHpkOYSoRkHIpX7jOfhX82A60h/ta/CdbWifqNyL9OecvE3FKsZu/Kr0taw9W6nm6FBhQLgFkOnFrqp9dWnxfHruXuDBgcn0iE8nR7Ht2zS7hfQPeR4a3Y0xK3Plqbzdrb9HKnWQQhf14=
template:
metadata:
creationTimestamp: null
name: grafana-admin-secret
namespace: monitoring
type: Opaque


@@ -0,0 +1,16 @@
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
creationTimestamp: null
name: grafana-auth
namespace: monitoring
spec:
encryptedData:
client_secret: AgCcKsnS3u2eI+fNVC9hAZ3QRFOHFErAzs5aQgX51CSdJwM03SZUoTyrDi5JPcHUVyS3MbevFH5piMhDTARMI3bLOjYlcwMbpf77JCPa7o95Y9asA/FW3lXicYt3biN9xBXJBz7Ws3fVRtEzyf6DmbGedT9gaX8aPwrUVbP19RdyJiuu76oB1A/jdUkX4K+X6kVvmoP/BWdypk/kdQJrzBNt00DIXF4NHfYey36AuhpBtqYZs4faA/tBXMXLE4RxPNtcHwNfVjnRj3v3qzNufD1fnweJvLq2UfLMrQjoR9XDVnM0zkpautylkI7yrvcoEH7ljnf6b1FMogOEZUfH1BIdqTd/WwrrlCqE58OPfJWthIfN+pQ8LvdHsGo3jc9gXvfXS2cStyhP06eTZ4D79kG+RtDQGOsD/Wpx7EcM6hbB3+dIjcs3wEAIGjpIVtY9JayW8YeRnFApMuhDST1+hscm+LdoGvaSTlAuGzv9BbVrPX/Fo9XKeYHlbG/x71Er+vF8WbW0wUa46MHLvbEy376XIdJDYi+vjl4eqznZ6YhvPbawhoKXT8ZcKUcUAjVcMue/O/jCSPZplbn3vdSCeqPTiqVqDw9PTMIeWFUepgPMxiGpFRAqdwIecFBnYItq0dXoGlFrZpo0S6AECgZjxzUR5EgdkdPlDDs2CN+d9yP7f2S+gmL7AIlQr74NW1GrTGw2x/rD4IJhunh7
template:
metadata:
creationTimestamp: null
name: grafana-auth
namespace: monitoring
type: Opaque


@@ -1,5 +1,5 @@
kind: IngressRoute
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
metadata:
name: grafana-ingress
spec:


@@ -1,28 +0,0 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: grafana-nfs
labels:
directory: grafana
spec:
capacity:
storage: "1Gi"
accessModes:
- ReadWriteOnce
nfs:
path: /export/kluster/grafana
server: 192.168.1.157
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: grafana-nfs
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "1Gi"
selector:
matchLabels:
directory: grafana


@@ -1,570 +1,61 @@
rbac:
create: true
## Use an existing ClusterRole/Role (depending on rbac.namespaced false/true)
# useExistingRole: name-of-some-(cluster)role
pspEnabled: true
pspUseAppArmor: true
namespaced: false
extraRoleRules: []
# - apiGroups: []
# resources: []
# verbs: []
extraClusterRoleRules: []
# - apiGroups: []
# resources: []
# verbs: []
serviceAccount:
create: true
name:
nameTest:
## Service account annotations. Can be templated.
# annotations:
# eks.amazonaws.com/role-arn: arn:aws:iam::123456789000:role/iam-role-name-here
autoMount: true
replicas: 1
## Create a headless service for the deployment
headlessService: false
## Create HorizontalPodAutoscaler object for deployment type
#
autoscaling:
enabled: false
# minReplicas: 1
# maxReplicas: 10
# metrics:
# - type: Resource
# resource:
# name: cpu
# targetAverageUtilization: 60
# - type: Resource
# resource:
# name: memory
# targetAverageUtilization: 60
## See `kubectl explain poddisruptionbudget.spec` for more
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
podDisruptionBudget: {}
# minAvailable: 1
# maxUnavailable: 1
## See `kubectl explain deployment.spec.strategy` for more
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
deploymentStrategy:
type: RollingUpdate
readinessProbe:
httpGet:
path: /api/health
port: 3000
livenessProbe:
httpGet:
path: /api/health
port: 3000
initialDelaySeconds: 60
timeoutSeconds: 30
failureThreshold: 10
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName: "default-scheduler"
image:
repository: grafana/grafana
tag: 9.0.2
sha: ""
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
## Can be templated.
##
# pullSecrets:
# - myRegistrKeySecretName
testFramework:
enabled: true
image: "bats/bats"
tag: "v1.4.1"
imagePullPolicy: IfNotPresent
securityContext: {}
securityContext:
runAsUser: 472
runAsGroup: 472
fsGroup: 472
containerSecurityContext:
{}
# Extra configmaps to mount in grafana pods
# Values are templated.
extraConfigmapMounts: []
# - name: certs-configmap
# mountPath: /etc/grafana/ssl/
# subPath: certificates.crt # (optional)
# configMap: certs-configmap
# readOnly: true
extraEmptyDirMounts: []
# - name: provisioning-notifiers
# mountPath: /etc/grafana/provisioning/notifiers
# Apply extra labels to common labels.
extraLabels: {}
## Assign a PriorityClassName to pods if set
# priorityClassName:
downloadDashboardsImage:
repository: curlimages/curl
tag: 7.73.0
sha: ""
pullPolicy: IfNotPresent
downloadDashboards:
env: {}
envFromSecret: ""
resources: {}
## Pod Annotations
# podAnnotations: {}
## Pod Labels
# podLabels: {}
podPortName: grafana
## Deployment annotations
# annotations: {}
## Expose the grafana service to be accessed from outside the cluster (LoadBalancer service).
## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
## ref: http://kubernetes.io/docs/user-guide/services/
##
service:
enabled: true
type: ClusterIP
port: 80
targetPort: 3000
# targetPort: 4181 To be used with a proxy extraContainer
annotations: {}
labels: {}
portName: service
serviceMonitor:
## If true, a ServiceMonitor resource is created for the Prometheus Operator
## https://github.com/coreos/prometheus-operator
##
enabled: false
path: /metrics
# namespace: monitoring (defaults to use the namespace this chart is deployed to)
labels: {}
interval: 1m
scheme: http
tlsConfig: {}
scrapeTimeout: 30s
relabelings: []
extraExposePorts: []
# - name: keycloak
# port: 8080
# targetPort: 8080
# type: ClusterIP
# overrides pod.spec.hostAliases in the grafana deployment's pods
hostAliases: []
# - ip: "1.2.3.4"
# hostnames:
# - "my.host.com"
envValueFrom:
AUTH_GRAFANA_CLIENT_SECRET:
secretKeyRef:
name: grafana-auth
key: client_secret
ingress:
enabled: true
# For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
# See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
# ingressClassName: nginx
# Values can be templated
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: cloudflare-letsencrypt-prod
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
labels: {}
path: /
# pathType is only for k8s >= 1.19
pathType: Prefix
hosts:
- grafana.kluster.moll.re
## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
extraPaths: []
# - path: /*
# backend:
# serviceName: ssl-redirect
# servicePort: use-annotation
## Or for k8s > 1.19
# - path: /*
# pathType: Prefix
# backend:
# service:
# name: ssl-redirect
# port:
# name: use-annotation
tls:
- hosts:
- grafana.kluster.moll.re
secretName: cloudflare-letsencrypt-issuer-account-key
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
## Node labels for pod assignment
## ref: https://kubernetes.io/docs/user-guide/node-selection/
#
nodeSelector: {}
## Tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## Affinity for pod assignment (evaluated as template)
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
## Additional init containers (evaluated as template)
## ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
##
extraInitContainers: []
## Specify additional containers in extraContainers. This is meant to allow adding an authentication proxy to a grafana pod
extraContainers: ""
# extraContainers: |
# - name: proxy
# image: quay.io/gambol99/keycloak-proxy:latest
# args:
# - -provider=github
# - -client-id=
# - -client-secret=
# - -github-org=<ORG_NAME>
# - -email-domain=*
# - -cookie-secret=
# - -http-address=http://0.0.0.0:4181
# - -upstream-url=http://127.0.0.1:3000
# ports:
# - name: proxy-web
# containerPort: 4181
## Volumes that can be used in init containers that will not be mounted to deployment pods
extraContainerVolumes: []
# - name: volume-from-secret
# secret:
# secretName: secret-to-mount
# - name: empty-dir-volume
# emptyDir: {}
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
type: pvc
enabled: true
# storageClassName: default
accessModes:
- ReadWriteOnce
size: 10Gi
# annotations: {}
finalizers:
- kubernetes.io/pvc-protection
# selectorLabels: {}
## Sub-directory of the PV to mount. Can be templated.
# subPath: ""
## Name of an existing PVC. Can be templated.
existingClaim: grafana-nfs
## If persistence is not enabled, this allows mounting the
## local storage in-memory to improve performance
##
inMemory:
enabled: false
## The maximum usage on memory medium EmptyDir would be
## the minimum value between the SizeLimit specified
## here and the sum of memory limits of all containers in a pod
##
# sizeLimit: 300Mi
initChownData:
## If false, data ownership will not be reset at startup
## This allows the grafana server to be run with an arbitrary user
##
enabled: true
## initChownData container image
##
image:
repository: busybox
tag: "1.31.1"
sha: ""
pullPolicy: IfNotPresent
## initChownData resource requests and limits
## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# Administrator credentials when not using an existing secret (see below)
adminUser: admin
# adminPassword: strongpassword
# Use an existing secret for the admin user
# credentials
admin:
## Name of the secret. Can be templated.
existingSecret: grafana-admin-secret
userKey: user
passwordKey: password
## Define command to be executed at startup by grafana container
## Needed if using `vault-env` to manage secrets (ref: https://banzaicloud.com/blog/inject-secrets-into-pods-vault/)
## Default is "run.sh" as defined in grafana's Dockerfile
# command:
# - "sh"
# - "/run.sh"
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
## Extra environment variables that will be passed onto deployment pods
##
## to provide grafana with access to CloudWatch on AWS EKS:
## 1. create an iam role of type "Web identity" with provider oidc.eks.* (note the provider for later)
## 2. edit the "Trust relationships" of the role, add a line inside the StringEquals clause using the
## same oidc eks provider as noted before (same as the existing line)
## also, replace NAMESPACE and prometheus-operator-grafana with the service account namespace and name
##
## "oidc.eks.us-east-1.amazonaws.com/id/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:sub": "system:serviceaccount:NAMESPACE:prometheus-operator-grafana",
##
## 3. attach a policy to the role, you can use a built in policy called CloudWatchReadOnlyAccess
## 4. use the following env: (replace 123456789000 and iam-role-name-here with your aws account number and role name)
##
## env:
## AWS_ROLE_ARN: arn:aws:iam::123456789000:role/iam-role-name-here
## AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
## AWS_REGION: us-east-1
##
## 5. uncomment the EKS section in extraSecretMounts: below
## 6. uncomment the annotation section in the serviceAccount: above
## make sure to replace arn:aws:iam::123456789000:role/iam-role-name-here with your role arn
env: {}
## "valueFrom" environment variable references that will be added to deployment pods. Name is templated.
## ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#envvarsource-v1-core
## Renders in container spec as:
## env:
## ...
## - name: <key>
## valueFrom:
## <value rendered as YAML>
# ENV_NAME:
# configMapKeyRef:
# name: configmap-name
# key: value_key
## The name of a secret in the same kubernetes namespace which contain values to be added to the environment
## This can be useful for auth tokens, etc. Value is templated.
envFromSecret: ""
## Sensitive environment variables that will be rendered as a new secret object
## This can be useful for auth tokens, etc
envRenderSecret: {}
## The names of secrets in the same kubernetes namespace which contain values to be added to the environment
## Each entry should contain a name key, and can optionally specify whether the secret must be defined with an optional key.
## Name is templated.
envFromSecrets: []
## - name: secret-name
## optional: true
## The names of configmaps in the same kubernetes namespace which contain values to be added to the environment
## Each entry should contain a name key, and can optionally specify whether the configmap must be defined with an optional key.
## Name is templated.
## ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#configmapenvsource-v1-core
envFromConfigMaps: []
## - name: configmap-name
## optional: true
# Inject Kubernetes services as environment variables.
# See https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#environment-variables
enableServiceLinks: true
## Additional grafana server secret mounts
# Defines additional mounts with secrets. Secrets must be manually created in the namespace.
extraSecretMounts: []
# - name: secret-files
# mountPath: /etc/secrets
# secretName: grafana-secret-files
# readOnly: true
# subPath: ""
#
# for AWS EKS (cloudwatch) use the following (see also instruction in env: above)
# - name: aws-iam-token
# mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
# readOnly: true
# projected:
# defaultMode: 420
# sources:
# - serviceAccountToken:
# audience: sts.amazonaws.com
# expirationSeconds: 86400
# path: token
#
# for CSI e.g. Azure Key Vault use the following
# - name: secrets-store-inline
# mountPath: /run/secrets
# readOnly: true
# csi:
# driver: secrets-store.csi.k8s.io
# readOnly: true
# volumeAttributes:
# secretProviderClass: "akv-grafana-spc"
# nodePublishSecretRef: # Only required when using service principal mode
# name: grafana-akv-creds # Only required when using service principal mode
## Additional grafana server volume mounts
# Defines additional volume mounts.
extraVolumeMounts: []
# - name: extra-volume-0
# mountPath: /mnt/volume0
# readOnly: true
# existingClaim: volume-claim
# - name: extra-volume-1
# mountPath: /mnt/volume1
# readOnly: true
# hostPath: /usr/shared/
## Container Lifecycle Hooks. Execute a specific bash command or make an HTTP request
lifecycleHooks: {}
# postStart:
# exec:
# command: []
## Pass the plugins you want installed as a list.
##
plugins: []
# - digrich-bubblechart-panel
# - grafana-clock-panel
## Configure grafana datasources
## ref: http://docs.grafana.org/administration/provisioning/#datasources
##
# datasources.yaml:
# apiVersion: 1
# datasources:
# - name: Prometheus
# type: prometheus
# url: http://prometheus-prometheus-server
# access: proxy
# isDefault: true
# - name: CloudWatch
# type: cloudwatch
# access: proxy
# uid: cloudwatch
# editable: false
# jsonData:
# authType: default
# defaultRegion: us-east-1
## Configure notifiers
## ref: http://docs.grafana.org/administration/provisioning/#alert-notification-channels
##
notifiers: {}
# notifiers.yaml:
# notifiers:
# - name: email-notifier
# type: email
# uid: email1
# # either:
# org_id: 1
# # or
# org_name: Main Org.
# is_default: true
# settings:
# addresses: an_email_address@example.com
# delete_notifiers:
## Configure grafana dashboard providers
## ref: http://docs.grafana.org/administration/provisioning/#dashboards
##
## `path` must be /var/lib/grafana/dashboards/<provider_name>
##
# dashboardproviders.yaml:
# apiVersion: 1
# providers:
# - name: 'default'
# orgId: 1
# folder: ''
# type: file
# disableDeletion: false
# editable: true
# options:
# path: /var/lib/grafana/dashboards/default
## Configure grafana dashboard to import
## NOTE: To use dashboards you must also enable/configure dashboardProviders
## ref: https://grafana.com/dashboards
##
## dashboards per provider, use provider name as key.
##
dashboards: {}
# default:
# some-dashboard:
# json: |
# $RAW_JSON
# custom-dashboard:
# file: dashboards/custom-dashboard.json
# prometheus-stats:
# gnetId: 2
# revision: 2
# datasource: Prometheus
# local-dashboard:
# url: https://example.com/repository/test.json
# token: ''
# local-dashboard-base64:
# url: https://example.com/repository/test-b64.json
# token: ''
# b64content: true
datasources:
datasources.yaml:
apiVersion: 1
datasources:
- name: Thanos
type: prometheus
url: http://thanos-querier.prometheus.svc:10902
isDefault: true
- name: Prometheus
type: prometheus
url: http://prometheus.prometheus.svc:9090
isDefault: false
dashboardProviders:
dashboardproviders.yaml:
apiVersion: 1
providers:
- name: 'default'
orgId: 1
folder: ''
type: file
disableDeletion: false
editable: true
options:
path: /var/lib/grafana/dashboards/default
## Reference to external ConfigMap per provider. Use provider name as key and ConfigMap name as value.
## A provider's dashboards must be defined either via external ConfigMaps or in values.yaml, not both.
## ConfigMap data example:
## example-dashboard.json: |
## RAW_JSON
##
dashboardsConfigMaps:
default: grafana-dashboards
## Grafana's primary configuration
## NOTE: values in map will be converted to ini format
## ref: http://docs.grafana.org/installation/configuration/
##
grafana.ini:
paths:
data: /var/lib/grafana/
logs: /var/log/grafana
plugins: /var/lib/grafana/plugins
provisioning: /etc/grafana/provisioning
wal: true
default_theme: dark
unified_alerting:
enabled: false
analytics:
check_for_updates: true
log:
mode: console
grafana_net:
url: https://grafana.net
## grafana Authentication can be enabled with the following values on grafana.ini
# server:
# The full public facing url you use in browser, used for redirects and emails
# root_url:
# https://grafana.com/docs/grafana/latest/auth/github/#enable-github-in-grafana
# auth.github:
# enabled: false
# allow_sign_up: false
# scopes: user:email,read:org
# auth_url: https://github.com/login/oauth/authorize
# token_url: https://github.com/login/oauth/access_token
# api_url: https://api.github.com/user
# team_ids:
# allowed_organizations:
# client_id:
# client_secret:
## LDAP Authentication can be enabled with the following values on grafana.ini
## NOTE: Grafana will fail to start if the value for ldap.toml is invalid
# auth.ldap:
# enabled: true
# allow_sign_up: true
# config_file: /etc/grafana/ldap.toml
## Grafana's LDAP configuration
## Templated by the template in _helpers.tpl
## NOTE: To enable the grafana.ini must be configured with auth.ldap.enabled
## ref: http://docs.grafana.org/installation/configuration/#auth-ldap
## ref: http://docs.grafana.org/installation/ldap/#configuration
ldap:
enabled: false
# `existingSecret` is a reference to an existing secret containing the ldap configuration
# for Grafana in a key `ldap-toml`.
existingSecret: ""
# `config` is the content of `ldap.toml` that will be stored in the created secret
config: ""
# config: |-
# verbose_logging = true
# [[servers]]
# host = "my-ldap-server"
# port = 636
# use_ssl = true
# start_tls = false
# ssl_skip_verify = false
# bind_dn = "uid=%s,ou=users,dc=myorg,dc=com"
## Grafana's SMTP configuration
## NOTE: To enable, grafana.ini must be configured with smtp.enabled
## ref: http://docs.grafana.org/installation/configuration/#smtp
smtp:
# `existingSecret` is a reference to an existing secret containing the smtp configuration
# for Grafana.
existingSecret: ""
userKey: "user"
passwordKey: "password"
## Sidecars that collect the configmaps with the specified label and store the included files into the respective folders
## Requires at least Grafana 5 to work and can't be used together with parameters dashboardProviders, datasources and dashboards
sidecar:
image:
repository: quay.io/kiwigrid/k8s-sidecar
tag: 1.15.6
sha: ""
imagePullPolicy: IfNotPresent
resources: {}
# limits:
# cpu: 100m
# memory: 100Mi
# requests:
# cpu: 50m
# memory: 50Mi
securityContext: {}
# skipTlsVerify Set to true to skip tls verification for kube api calls
# skipTlsVerify: true
enableUniqueFilenames: false
readinessProbe: {}
livenessProbe: {}
dashboards:
enabled: false
SCProvider: true
# label that the configmaps with dashboards are marked with
label: grafana_dashboard
# value of label that the configmaps with dashboards are set to
labelValue: null
# folder in the pod that should hold the collected dashboards (unless `defaultFolderName` is set)
folder: /tmp/dashboards
# The default folder name, it will create a subfolder under the `folder` and put dashboards in there instead
defaultFolderName: null
# Namespaces list. If specified, the sidecar will search for config-maps/secrets inside these namespaces.
# Otherwise the namespace in which the sidecar is running will be used.
# It's also possible to specify ALL to search in all namespaces.
searchNamespace: null
# Method to use to detect ConfigMap changes. With WATCH the sidecar will do WATCH requests; with SLEEP it will list all ConfigMaps, then sleep for 60 seconds.
watchMethod: WATCH
# search in configmap, secret or both
resource: both
# If specified, the sidecar will look for an annotation with this name to create a folder and place dashboards in it.
# You can use this parameter together with `provider.foldersFromFilesStructure` to annotate configmaps and create a folder structure.
folderAnnotation: null
# Absolute path to shell script to execute after a configmap got reloaded
script: null
# watchServerTimeout: request to the server, asking it to cleanly close the connection after that.
# defaults to 60sec; much higher values like 3600 seconds (1h) are feasible for non-Azure K8S
# watchServerTimeout: 3600
#
# watchClientTimeout: is a client-side timeout, configuring your local socket.
# If you have a network outage dropping all packets with no RST/FIN,
# this is how long your client waits before realizing & dropping the connection.
# defaults to 66sec (sic!)
# watchClientTimeout: 60
#
# provider configuration that lets grafana manage the dashboards
provider:
# name of the provider, should be unique
name: sidecarProvider
# orgid as configured in grafana
orgid: 1
# folder in which the dashboards should be imported in grafana
folder: ''
# type of the provider
type: file
# disableDelete to activate an import-only behaviour
disableDelete: false
# allow updating provisioned dashboards from the UI
allowUiUpdates: false
# allow Grafana to replicate dashboard structure from filesystem
foldersFromFilesStructure: false
# Additional dashboard sidecar volume mounts
extraMounts: []
# Sets the size limit of the dashboard sidecar emptyDir volume
sizeLimit: {}
datasources:
enabled: false
# label that the configmaps with datasources are marked with
label: grafana_datasource
# value of label that the configmaps with datasources are set to
labelValue: null
# If specified, the sidecar will search for datasource config-maps inside this namespace.
# Otherwise the namespace in which the sidecar is running will be used.
# It's also possible to specify ALL to search in all namespaces
searchNamespace: null
# Method to use to detect ConfigMap changes. With WATCH the sidecar will do WATCH requests; with SLEEP it will list all ConfigMaps, then sleep for 60 seconds.
watchMethod: WATCH
# search in configmap, secret or both
resource: both
# Endpoint to send request to reload datasources
reloadURL: "http://localhost:3000/api/admin/provisioning/datasources/reload"
skipReload: false
# Deploy the datasource sidecar as an initContainer in addition to a container.
# This is needed if skipReload is true, to load any datasources defined at startup time.
initDatasources: false
# Sets the size limit of the datasource sidecar emptyDir volume
sizeLimit: {}
plugins:
enabled: false
# label that the configmaps with plugins are marked with
label: grafana_plugin
# value of label that the configmaps with plugins are set to
labelValue: null
# If specified, the sidecar will search for plugin config-maps inside this namespace.
# Otherwise the namespace in which the sidecar is running will be used.
# It's also possible to specify ALL to search in all namespaces
searchNamespace: null
# Method to use to detect ConfigMap changes. With WATCH the sidecar will do WATCH requests; with SLEEP it will list all ConfigMaps, then sleep for 60 seconds.
watchMethod: WATCH
# search in configmap, secret or both
resource: both
# Endpoint to send request to reload plugins
reloadURL: "http://localhost:3000/api/admin/provisioning/plugins/reload"
skipReload: false
# Deploy the plugin sidecar as an initContainer in addition to a container.
# This is needed if skipReload is true, to load any plugins defined at startup time.
initPlugins: false
# Sets the size limit of the plugin sidecar emptyDir volume
sizeLimit: {}
notifiers:
enabled: false
# label that the configmaps with notifiers are marked with
label: grafana_notifier
# If specified, the sidecar will search for notifier config-maps inside this namespace.
# Otherwise the namespace in which the sidecar is running will be used.
# It's also possible to specify ALL to search in all namespaces
searchNamespace: null
# search in configmap, secret or both
resource: both
# Sets the size limit of the notifier sidecar emptyDir volume
sizeLimit: {}
## Override the deployment namespace
##
namespaceOverride: ""
## Number of old ReplicaSets to retain
##
revisionHistoryLimit: 10
## Add a separate remote image renderer deployment/service
imageRenderer:
# Enable the image-renderer deployment & service
enabled: false
replicas: 1
image:
# image-renderer Image repository
repository: grafana/grafana-image-renderer
# image-renderer Image tag
tag: latest
# image-renderer Image sha (optional)
sha: ""
# image-renderer ImagePullPolicy
pullPolicy: Always
# extra environment variables
env:
HTTP_HOST: "0.0.0.0"
# RENDERING_ARGS: --no-sandbox,--disable-gpu,--window-size=1280x758
# RENDERING_MODE: clustered
# IGNORE_HTTPS_ERRORS: true
# image-renderer deployment serviceAccount
serviceAccountName: ""
# image-renderer deployment securityContext
securityContext: {}
# image-renderer deployment Host Aliases
hostAliases: []
# image-renderer deployment priority class
priorityClassName: ''
service:
# Enable the image-renderer service
enabled: true
# image-renderer service port name
portName: 'http'
# image-renderer service port used by both service and deployment
port: 8081
targetPort: 8081
# If https is enabled in Grafana, this needs to be set as 'https' to correctly configure the callback used in Grafana
grafanaProtocol: http
# In case a sub_path is used this needs to be added to the image renderer callback
grafanaSubPath: ""
# name of the image-renderer port on the pod
podPortName: http
# number of image-renderer replica sets to keep
revisionHistoryLimit: 10
networkPolicy:
# Enable a NetworkPolicy to limit inbound traffic to only the created grafana pods
limitIngress: true
# Enable a NetworkPolicy to limit outbound traffic to only the created grafana pods
limitEgress: false
resources: {}
# limits:
# cpu: 100m
# memory: 100Mi
# requests:
# cpu: 50m
# memory: 50Mi
## Node labels for pod assignment
## ref: https://kubernetes.io/docs/user-guide/node-selection/
#
nodeSelector: {}
## Tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## Affinity for pod assignment (evaluated as template)
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
# Create dynamic manifests via values:
extraObjects: []
# - apiVersion: "kubernetes-client.io/v1"
# kind: ExternalSecret
# metadata:
# name: grafana-secrets
# spec:
# backendType: gcpSecretsManager
# data:
# - key: grafana-admin-password
# name: adminPassword
grafana.ini:
analytics:
check_for_updates: false
server:
domain: grafana.kluster.moll.re
root_url: https://grafana.kluster.moll.re
auth.generic_oauth:
name: Authelia
enabled: true
allow_sign_up: true
client_id: grafana
client_secret: ${AUTH_GRAFANA_CLIENT_SECRET}
scopes: openid profile email groups
auth_url: https://auth.kluster.moll.re/api/oidc/authorization
token_url: https://auth.kluster.moll.re/api/oidc/token
api_url: https://auth.kluster.moll.re/api/oidc/userinfo
tls_skip_verify_insecure: true
auto_login: true
use_pkce: true


@@ -1,31 +0,0 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: influxdb-nfs
labels:
directory: influxdb
spec:
# storageClassName: slow
capacity:
storage: "10Gi"
# volumeMode: Filesystem
accessModes:
- ReadWriteOnce
nfs:
path: /export/kluster/influxdb
server: 192.168.1.157
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: influxdb-nfs
spec:
storageClassName: ""
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "10Gi"
selector:
matchLabels:
directory: influxdb


@@ -1,26 +0,0 @@
## Create default user through docker entrypoint
## Defaults indicated below
##
adminUser:
organization: "influxdata"
bucket: "default"
user: "admin"
retention_policy: "0s"
## Leave empty to generate a random password and token.
## Or fill any of these values to use fixed values.
password: ""
token: ""
## Persist data to a persistent volume
##
persistence:
enabled: true
## If true will use an existing PVC instead of creating one
useExisting: true
## Name of existing PVC to be used in the influx deployment
name: influxdb-nfs
ingress:
enabled: false


@@ -5,25 +5,17 @@ namespace: monitoring
resources:
- namespace.yaml
- grafana.pvc.yaml
- influxdb.pvc.yaml
- grafana.ingress.yaml
- grafana-admin.sealedsecret.yaml
- grafana-auth.sealedsecret.yaml
# grafana dashboards are provisioned from a git repository
# in the initial bootstrap of the app of apps, the git repo won't be available, so this sync will initially fail
- https://git.kluster.moll.re/remoll/grafana-dashboards//?timeout=10&ref=main
helmCharts:
- releaseName: grafana
name: grafana
repo: https://grafana.github.io/helm-charts
version: 8.5.4
valuesFile: grafana.values.yaml
- releaseName: influxdb
name: influxdb2
repo: https://helm.influxdata.com/
version: 2.1.2
valuesFile: influxdb.values.yaml
- releaseName: telegraf-speedtest
name: telegraf
repo: https://helm.influxdata.com/
version: 1.8.27
valuesFile: telegraf-speedtest.values.yaml


@@ -1,52 +0,0 @@
env:
- name: HOSTNAME
value: "telegraf-speedtest"
service:
enabled: false
rbac:
# Specifies whether RBAC resources should be created
create: false
serviceAccount:
# Specifies whether a ServiceAccount should be created
create: false
## Exposed telegraf configuration
## For full list of possible values see `/docs/all-config-values.yaml` and `/docs/all-config-values.toml`
## ref: https://docs.influxdata.com/telegraf/v1.1/administration/configuration/
config:
agent:
interval: "2h"
round_interval: true
metric_batch_size: 1000
metric_buffer_limit: 10000
collection_jitter: "0s"
flush_interval: "10s"
flush_jitter: "0s"
precision: ""
debug: false
quiet: false
logfile: ""
hostname: "$HOSTNAME"
omit_hostname: false
processors:
- enum:
mapping:
field: "status"
dest: "status_code"
value_mappings:
healthy: 1
problem: 2
critical: 3
outputs:
- influxdb_v2:
urls:
- "http://influxdb-influxdb2.monitoring:80"
token: We64mk4L4bqYCL77x3fAUSYfOse9Kktyf2eBLyrryG9c3-y8PQFiKPIh9EvSWuq78QSQz6hUcsm7XSFR2Zj1MA==
organization: "influxdata"
bucket: "homeassistant"
inputs:
- internet_speed:
enable_file_download: false


@@ -1,5 +0,0 @@
### Running `occ` commands:
```
su -s /bin/bash www-data -c "php occ user:list"
```
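The same `su`-based pattern works for other maintenance tasks; for example, toggling maintenance mode around a backup or manual upgrade (a sketch — `maintenance:mode` is a standard `occ` command, but the `www-data` user and plain `php` path assume the stock Nextcloud apache image used above):

```shell
# Enable maintenance mode before a backup or manual upgrade
su -s /bin/bash www-data -c "php occ maintenance:mode --on"

# ... perform the backup / upgrade ...

# Disable it again afterwards
su -s /bin/bash www-data -c "php occ maintenance:mode --off"
```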


@@ -1,16 +0,0 @@
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: nextcloud-ingressroute
spec:
entryPoints:
- websecure
routes:
- match: Host(`nextcloud.kluster.moll.re`)
kind: Rule
services:
- name: nextcloud
port: 8080
tls:
certResolver: default-tls


@@ -1,16 +0,0 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- ingress.yaml
- pvc.yaml
- postgres.sealedsecret.yaml
namespace: nextcloud
helmCharts:
- name: nextcloud
releaseName: nextcloud
version: 4.5.5
valuesFile: values.yaml
repo: https://nextcloud.github.io/helm/


@@ -1,22 +0,0 @@
{
"kind": "SealedSecret",
"apiVersion": "bitnami.com/v1alpha1",
"metadata": {
"name": "postgres-password",
"namespace": "nextcloud",
"creationTimestamp": null
},
"spec": {
"template": {
"metadata": {
"name": "postgres-password",
"namespace": "nextcloud",
"creationTimestamp": null
}
},
"encryptedData": {
"password": "AgCTmvBe9YFnyWOdz02rxr0hTXnWuVLeUt5dpieWMzl4cVMBj7WcyyODWtNd+eQOLARRssGNZAP4C9gH90iVRFAW1aU+NeA76oceXE5Kiiqoc8T30wE5FC6/UbTjQYRH520NF4wcCQKm//iH8o5uI2+NxZW4goeuShibXK9sijFVNXxUuTeXTmaSJjEPyB+pnmPwjzw+qjhkJJADefh9oryy5+t9ecCwXDiI/2ce2n1Vawm/Nq6/0rZMUSsF8XSiTFczKMunuGMhxGEyyx/I8NZd4XMXGSnBo0YZF7jR9+eRHIjuenPHq1kfEid2Ps4fhFSE8mEecnK7w5xE3r0XeTNHQcTId1yYneK/LQfcRkzInuRddytTwTAmsoSjROcjKjAvtyZSM81pFWJsMQ7bSVXOC0K2wvEz9khDT0RIoR/8tMh2G737F15raTe9Ggbgy3DHst4mYIpoWV/slHrOF0vR9j7X+MRN9R1cVtI1coof/tVSWQsLvv0AJfB4/6dUl+i/yNO/j+4c3WolGwqyXd+oxsZK1VrSwSCBZwBO17BmePJL2QsPVRdutq06TrlvGqP4wXySH9LRuHr3sWgr2VuDV00w+UvuU7ExI+16dWh7jrn/rvIBQSJlHDhl5+VpyM0WTMy5kSfO6nits73ZzT7BAoSU7AeQOMj3t+cUiEq9f9dk7em7QxWMuWg6QIJ+ZZ2+CCBms4rSE4x2glOxanNX/HktQg==",
"username": "AgCxJKzhsF7yNJesK5oLJP62kjFnX4UUNQ2NrHl02Hv6MAzi/AUEV3uJSXXIi3H/uMJSMxRpJQjIDsrznYVI0YHOoz1M8/y1dx8xotFv/i0XByI9sMuGtesop7ncmQbEPMaJ3pqTJyaGkEwcsEMGmwwYiRfJHmEhhCYtzEc5IAnx+nmk//HYsrSWKpJGSWl0LvdMJsnsTxrWoJjaYTW3J0Of3VOOmgkuwIFKyXW9S2cUbAco8xVYchbyiHc8LXbS3izyAidRzg1OWyqvTGMIKJDQZ3ibIiXheon5ZeYjj0fkEkv3TrB7WoKdo0090OY1eHabqAPHT8aP+WG1g6TAzbJEtg+zFfYDKIw5Tp1WkRlsD2me4HycGuZbsaXgP5vWlxF5+rULUzUgxfmTRmYTl0H8kIlmUrusZwxR5ZXnSuBJ3n3AMEjmpmTTALakxEFEPDJJoVbgcViLtANwk72yu15FlOxczT22uyW8FMkj9kYzcq/+2a/EjaTo62SnUYJ3UTQXvgMKML1yJD+zym2+xscPNmwZFBPN5BQ/64ru/Z51nWB20fWFgW3Rw67jEQMajmVclmUcASWOjHzO87feEprHeilTH+224IHzpmC4aLz/JtIP9EEvqfDUr3fRrxcgtT1DgxV37vPj6Pqn47MHr39AA850CxjFmb1VcwfH6ygXABFlxnVByZDn7xCyBNswtKJqtw=="
}
}
}


@@ -1,25 +0,0 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: nextcloud-nfs
spec:
capacity:
storage: "150Gi"
accessModes:
- ReadWriteOnce
nfs:
path: /kluster/nextcloud
server: 192.168.1.157
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nextcloud-nfs
spec:
storageClassName: ""
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "150Gi"
volumeName: nextcloud-nfs


@@ -1,171 +0,0 @@
## Official nextcloud image version
## ref: https://hub.docker.com/r/library/nextcloud/tags/
image:
tag: "28"
ingress:
enabled: false
nextcloud:
host: nextcloud.kluster.moll.re
username: admin
password: changeme
## Use an existing secret
existingSecret:
enabled: false
update: 0
# If web server is not binding default port, you can define it
# containerPort: 8080
datadir: /var/www/html/data
persistence:
subPath:
mail:
enabled: false
# PHP Configuration files
# Will be injected in /usr/local/etc/php/conf.d for apache image and in /usr/local/etc/php-fpm.d when nginx.enabled: true
phpConfigs: {}
# Default config files
# IMPORTANT: Will be used only if you put extra configs, otherwise default will come from nextcloud itself
# Default configurations can be found here: https://github.com/nextcloud/docker/tree/master/16.0/apache/config
defaultConfigs:
# To protect /var/www/html/config
.htaccess: true
# Redis default configuration
redis.config.php: true
# Apache configuration for rewrite urls
apache-pretty-urls.config.php: true
# Define APCu as local cache
apcu.config.php: true
# Apps directory configs
apps.config.php: true
# Used for auto configure database
autoconfig.php: true
# SMTP default configuration
smtp.config.php: true
# Extra config files created in /var/www/html/config/
# ref: https://docs.nextcloud.com/server/15/admin_manual/configuration_server/config_sample_php_parameters.html#multiple-config-php-file
configs: {}
# For example, to use S3 as primary storage
# ref: https://docs.nextcloud.com/server/13/admin_manual/configuration_files/primary_storage.html#simple-storage-service-s3
#
# configs:
# s3.config.php: |-
# <?php
# $CONFIG = array (
# 'objectstore' => array(
# 'class' => '\\OC\\Files\\ObjectStore\\S3',
# 'arguments' => array(
# 'bucket' => 'my-bucket',
# 'autocreate' => true,
# 'key' => 'xxx',
# 'secret' => 'xxx',
# 'region' => 'us-east-1',
# 'use_ssl' => true
# )
# )
# );
nginx:
## You need to set an fpm version of the image for nextcloud if you want to use nginx!
enabled: false
internalDatabase:
enabled: true
name: nextcloud
##
## External database configuration
##
externalDatabase:
enabled: true
## Supported database engines: mysql or postgresql
type: postgresql
## Database host
host: postgres-postgresql.postgres
## Database user
# user: nextcloud
# ## Database password
# password: test
## Database name
database: nextcloud
## Use a existing secret
existingSecret:
enabled: true
secretName: postgres-password
usernameKey: username
passwordKey: password
##
## MariaDB chart configuration
##
mariadb:
enabled: false
postgresql:
enabled: false
redis:
enabled: false
## Cronjob to execute Nextcloud background tasks
## ref: https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html#webcron
##
cronjob:
enabled: false
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
# Nextcloud Data (/var/www/html)
enabled: true
annotations: {}
## If defined, PVC must be created manually before volume will be bound
existingClaim: nextcloud-nfs
## Use an additional pvc for the data directory rather than a subpath of the default PVC
## Useful to store data on a different storageClass (e.g. on slower disks)
nextcloudData:
enabled: false
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits:
cpu: 2000m
memory: 2Gi
requests:
cpu: 100m
memory: 128Mi
livenessProbe:
enabled: true
# disable when upgrading from a previous chart version
## Enable pod autoscaling using HorizontalPodAutoscaler
## ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
##
hpa:
enabled: false
## Prometheus Exporter / Metrics
##
metrics:
enabled: false
rbac:
enabled: false
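The external-database block above only references credentials: the chart expects a Secret named `postgres-password` with `username` and `password` keys to exist before install. A minimal sketch of creating it by hand (the namespace and literal values here are placeholders, not taken from this repo):

```shell
# One-time setup sketch: create the Secret referenced by
# externalDatabase.existingSecret. Namespace and values are placeholders.
kubectl create secret generic postgres-password \
  --namespace nextcloud \
  --from-literal=username=nextcloud \
  --from-literal=password='<db-password>'
```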


@@ -1,4 +1,4 @@
-apiVersion: traefik.containo.us/v1alpha1
+apiVersion: traefik.io/v1alpha1
 kind: Middleware
 metadata:
   name: websocket
@@ -9,7 +9,7 @@ spec:
       # enable websockets
       Upgrade: "websocket"
 ---
-apiVersion: traefik.containo.us/v1alpha1
+apiVersion: traefik.io/v1alpha1
 kind: IngressRoute
 metadata:
   name: ntfy-ingressroute


@@ -13,4 +13,4 @@ resources:
 images:
   - name: binwiederhier/ntfy
     newName: binwiederhier/ntfy
-    newTag: v2.8.0
+    newTag: v2.11.0


@@ -0,0 +1,63 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: paperless
spec:
  replicas: 1
  selector:
    matchLabels:
      app: paperless
  template:
    metadata:
      labels:
        app: paperless
    spec:
      containers:
        - name: paperless
          image: paperless
          ports:
            - containerPort: 8000
          env:
            - name: PAPERLESS_REDIS
              value: redis://redis-master:6379
            - name: PAPERLESS_TIME_ZONE
              value: Europe/Berlin
            - name: PAPERLESS_OCR_LANGUAGE
              value: deu+eng+fra
            - name: PAPERLESS_URL
              value: https://paperless.kluster.moll.re
            - name: PAPERLESS_OCR_USER_ARGS
              value: '{"invalidate_digital_signatures": true}'
            - name: PAPERLESS_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: paperless-secret-key
                  key: key
            - name: PAPERLESS_DATA_DIR
              value: /data
            - name: PAPERLESS_MEDIA_ROOT
              value: /data
            - name: PAPERLESS_APPS
              value: allauth.socialaccount.providers.openid_connect
            - name: PAPERLESS_SOCIALACCOUNT_PROVIDERS
              valueFrom:
                secretKeyRef:
                  name: paperless-oauth
                  key: provider-config
            # - name: PAPERLESS_DISABLE_REGULAR_LOGIN
            #   value: "True"
          volumeMounts:
            - name: data
              mountPath: /data
          resources:
            requests:
              cpu: "100m"
              memory: "200Mi"
            limits:
              cpu: "2"
              memory: "1Gi"
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: paperless-data
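The PAPERLESS_SOCIALACCOUNT_PROVIDERS value injected above is a JSON blob in django-allauth's OpenID Connect provider format, stored in the paperless-oauth SealedSecret further down. A hedged sketch of how such a secret could be (re)sealed with kubeseal; every identifier and URL in the JSON is a placeholder, not the value actually used in this repo:

```shell
# Hypothetical re-creation of the paperless-oauth SealedSecret.
# The provider-config payload follows django-allauth's openid_connect
# SOCIALACCOUNT_PROVIDERS shape; all IDs/URLs below are placeholders.
kubectl create secret generic paperless-oauth \
  --namespace paperless \
  --dry-run=client -o yaml \
  --from-literal=provider-config='{"openid_connect": {"APPS": [{"provider_id": "authelia", "name": "Authelia", "client_id": "<client-id>", "secret": "<client-secret>", "settings": {"server_url": "https://auth.example.com/.well-known/openid-configuration"}}]}}' \
  | kubeseal --format yaml > paperless-oauth.sealedsecret.yaml
```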


@@ -0,0 +1,17 @@
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: paperless-ingressroute
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`paperless.kluster.moll.re`)
      kind: Rule
      services:
        - name: paperless-web
          port: 8000
  tls:
    certResolver: default-tls


@@ -0,0 +1,32 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - pvc.yaml
  - deployment.yaml
  - service.yaml
  - ingress.yaml
  - paperless-secret-key.sealedsecret.yaml
  - paperless-oauth.sealedsecret.yaml
namespace: paperless
images:
  - name: paperless
    newName: ghcr.io/paperless-ngx/paperless-ngx
    newTag: "2.12.1"
helmCharts:
  - name: redis
    releaseName: redis
    repo: https://charts.bitnami.com/bitnami
    version: 20.1.5
    valuesInline:
      auth:
        enabled: false
      replica:
        replicaCount: 0
      master:
        persistence:
          storageClass: "nfs-client"
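Because this kustomization inflates a Helm chart (the redis entry under helmCharts), it only renders when Helm support is explicitly enabled. A sketch, assuming the file lives under apps/paperless/ as the paths on this page suggest:

```shell
# helmCharts entries are only rendered when Helm support is switched on
kustomize build --enable-helm apps/paperless/ | kubectl apply -f -
```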


@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: placeholder


@@ -0,0 +1,15 @@
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
creationTimestamp: null
name: paperless-oauth
namespace: paperless
spec:
encryptedData:
provider-config: AgBI9IcNOfBevjUtIMwNTd0MTnr1WGxMKJ0cPnHzAS3cddmI+LTrkxxdRBuz2PFKTrhJ6/vh/2tiI9FBWMVm/YqTB64drbF3v13GfZMk/9c7W4SFyMoMcoE4xe6gs4SOm1ggTVxWT6O8IQ0gt7+FRUFaiLmwa08dxTkzrT0/zfQfYg+0aV8qS0eCJIrQk/IA1N31RpUoNV5Jl6vF7oE+cKIVyZ6LVMdecmFnuUgU+1qTC7ncgxxhWQekDQXJQnfYpgsQTF5GaHkGV8kvqJOa2Ohnk7MIeQEz5WuiKaXzU1ZMCYq3D8q/kaf/itLLBlL5MQh/hkuksCVG13aWxvulA/zIw3rSDujjZcSrD8LWH6oCMCn8zVcZjYQQBcTKUYEyYNvHLsmm0fOFIkUmFfBOS4WdHhjsBudz+941Wuc2EX5i6eLind7dk6gOlCL1HEyvbQRV6W50T104DQSNHslRG9CIjPh0BueGJ5fiaFoQa/UuM/JI8R/7cv3y5VkCG6j4gax9FVFgZKxPMtOTxB3gKolT25JHPDOqbDo996T4lsmiYRrYShni4JFZ8ALhcr7pKwlg+gVbDVaqMrVaSz1xzTP0MNxPMsojXVLtG3/Zv0/iXSVW4DPY67pFITEbZBWB3bHLvL9MiwKbWNwsDwPUylWkXTTFHsNbuRUnAXhRxcLn43uIv98JFTQcMVl2J9qrYkHm7w0FVImUr3oC+Ny/Z6j89ARposNx27B4FBgW7H7+yWMKRsAObC8cAjBOkBdXue0x5bEl7Al2BRRG7/WUKHXZvTOlvlj7GFTpLOQbPYjnBo8V8h42uOjGbiMLaCeN5sWlMtWD+7mpHV3XGdcGtPAZlIzgpUs2si/XIRNun2oDoUmJhb7YcGmugodcAK9+aYBThIkNU3guXdrM6Vc7CO2RP8PFpKBpcI9pUHgYA8dyYY+TaBqfYGrKFlGgoVcgh6oVkeOuctTX90XkojVFfqkCqab93faMh2pGCGcH4IZ81sdTYeWwNIvz1RGoi9GhUhQU5NfDeUBn2eHdOpfdsf4FkWe0kgE6TBPlQx7GQy56FldIc0G4QA8H8utL3E/MXYrao70ek/GHIxuev1/hzljJDk+5HJz5itBtKiW4s/5j2ZMD7MMBu/voDQW14XEK5pM9EbwmC6kRg6ljXvTlnUVmw1s04iUvIzF/dO6bCgaOEwFPjZj8oZs0dt64Ov+ZPLwTrmezFgHtfh4dyiRgHt4cO/WYFmzzYwd532p2De3JqjHUzT0iQIpkaz4jrF7+fdtDxtj7XgkJIg==
template:
metadata:
creationTimestamp: null
name: paperless-oauth
namespace: paperless


@@ -0,0 +1,15 @@
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
creationTimestamp: null
name: paperless-secret-key
namespace: paperless
spec:
encryptedData:
key: AgAjDxYW+bf+7a+65DR/u5Qrh37JSPQCstUrNWRdxq9We1eGf7qzTnmyke/O5TiE26rHVx3yzyK/Lcp/R+pJUnDauCvL6ja7k82DLzElkoRGhvRg4nr5Iehw488WIdJDXWqAbus4oLFCgnj5axs1B97hEiAN2onCPDsOuk7oSdJfG4mMI47Ass2qFPyQaff9TulLXQQEY5U7LrawCTudUPeiTCYGbOjBadPjEzn5pDwsyAd1G+NrqoPOwkrbNzrwMwbnwB4hLO0f+jrYOh2OMNcdZzMZgM671VH9cRYSVV6uz5iAN4A1NpZ1ZdenQN4pcWvaPmPOcvp14vjZtrYbGeyaNGnob5IycRrO4yaf+0V7DZ4Thwc/vqm6r5y/MR9U4Q9EFoNNHYmfo9VEw7LhivtaDOG8OaZXUnIFoXFLOZ59qfoZdIyK4eByTRQBZFZLK9rVgXOommbqlCgzNuDM7u11OGcYfROJFeiI9pH333x5u7GZsDz0hnAjWKphXzeTglXdaXMsQUeAHusdqKCn0X1cMatGUjkBAXwlOBrqmaDwSRdyc/+J2QIdkyQM9A+88+yoop7q8c5P8oizBikVaL7SojulUTJStH5cv7nRzhmpAY4j15+o3RQKrbEjGB4HVVx3VBFjjOiP9gfjhiYqxznYwkYTpXADPwjhLFf4opOPuhpoUD1M3OKXlQpPK/RvFTWWsh14jbJuL7WJpXbfyYs0+drbVdnYeUsn8OKlnFDoOaACdpNUCr6t9dSFMs7o7Mo8yN0E
template:
metadata:
creationTimestamp: null
name: paperless-secret-key
namespace: paperless

apps/paperless/pvc.yaml

@@ -0,0 +1,11 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: paperless-data
spec:
  storageClassName: "nfs-client"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi


@@ -0,0 +1,10 @@
apiVersion: v1
kind: Service
metadata:
  name: paperless-web
spec:
  selector:
    app: paperless
  ports:
    - port: 8000
      targetPort: 8000


@@ -0,0 +1,40 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mealie
spec:
  selector:
    matchLabels:
      app: mealie
  template:
    metadata:
      labels:
        app: mealie
    spec:
      containers:
        - name: mealie
          image: mealie
          resources:
            limits:
              memory: "500Mi"
              cpu: "500m"
          ports:
            - containerPort: 9000
          env:
            - name: TZ
              value: Europe/Paris
            - name: BASE_URL
              value: https://recipes.kluster.moll.re
            - name: ALLOW_SIGNUP
              value: "true"
          envFrom:
            - secretRef:
                name: mealie-oauth
          volumeMounts:
            - name: mealie-data
              mountPath: /app/data
      volumes:
        - name: mealie-data
          persistentVolumeClaim:
            claimName: mealie

apps/recipes/ingress.yaml

@@ -0,0 +1,16 @@
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: mealie-ingressroute
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`recipes.kluster.moll.re`)
      kind: Rule
      services:
        - name: mealie-web
          port: 9000
  tls:
    certResolver: default-tls


@@ -0,0 +1,17 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: recipes
resources:
  - namespace.yaml
  - deployment.yaml
  - mealie-oauth.sealedsecret.yaml
  - pvc.yaml
  - service.yaml
  - ingress.yaml
images:
  - name: mealie
    newTag: v1.12.0
    newName: ghcr.io/mealie-recipes/mealie


@@ -0,0 +1,26 @@
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
creationTimestamp: null
name: mealie-oauth
namespace: recipes
spec:
encryptedData:
OIDC_ADMIN_GROUP: AgCTE4M54XEyKP1cHcsosZwIp5GYcvZPIzb7sgOpvVJb7mzKT9kYFOqUkwc1eyn4bjqaXu6SJyP3M4ss3K7CgkNWMsSjBVrYpP6yqsREoWqoETwwecSxRS7hrK8pJb6kRKBKyLDd2oaFMxeg0y1lm9+ul4HwNL+uPcNLSX7KRue1Hy5xqjDNlwL2mzU9JVVZGqEW8FyvPCRKMPliQtK7WpTUbadKbltN9RKrRa4PqFW9xyBrg8Al35Xj7HrNfwt8YAUDeq+73QMLtbP3y2orSck2I46h9DKW4foqYWGnVh/272fvQSXDr3tr8Il969o5M/IfVzpRS74f4zU0LeI4POxT9EgBL2sALoKhxil0EZGJ3dsPREvSS7eccCMY8n0AQE8rOT8AHyRDhd8RzsCoyPoDTAJJWf4oGS4aSH/N7f6CvsMF2EdOSCghBWC7klfRFxraWcbTiFRi5ICxlFL4GkiB9svNWpQlCkiduR9RZTop1DU+xJK2XgBmTampbjlEiS+yf9L7tXpCRvp+vRcxHwlvsMUncIxUzY3CPTST6FrVOL0BMtLeKI+MmgaCxgzo6BBAUKBZSCJbJKLTrcnhviLJnxYXKYPah5ozfk5XBzcz4C7dWuhN6N1Uwa08nDtHbuN9Mj0AWElT3HAqf3naowVBeXgLV6vRNTwYOIVoqGhyVG0O65DpwuxUdFZMUBpmhFz1QTpAZ6cbDZbL
OIDC_AUTH_ENABLED: AgAQ48wodtQ11peCbSpBL61n28GMeMJPq6cmb0a39x/n+AJGOZbjcE2e/xdFu06J+ahHTehi2QtwWxlRBi4Op6sbjdn5cXkTZ+8vGB3v/JQZQzJpGraFT1WTPZ9onYXTLl8iGsZSteF3iaxSOMzLswMahDvk/DCNrIY78mkTpqSC3Pggf9GnAWTQKOnuTFYyStBaL9ExyDBiJkbFahm/bC8h7QG9OXD93bDQrApS/W6LF+3CU6TOKj1VXmeVuMpxGdasC4B5ShL9kvBz66i5P7BJYjPfTEJfIACIFzZQmLJwxzfST1kV0urdQ9PZmcvQLapxGI4xUssD275hSPrUE6XXMLyo0BzVEBPq5QsMW5UZB1sOPQMcIajdzWagByFJWcBU1mFMwJXVUHQXlsMDEl1pDTGbSS660XCS1aAtmVb19ToXH/GIEkWlohiNpVqI+D8ypoAwMzF1FnHGvyxF6maHV4vZA4zDE7bT4S+E6dEmXM2cc0essUDlbwgOTLi/3fuqxIar3Dub857H0B9++SeriHrSUJD1A/9IOZQQNsAOwEOylK+9BuyhHYFVzk9l2lKnCRYqif75wusyoly+4yBS69LGzg4mc4Bk6LejZe91VnZLlZB3M6fZzS87qWoSKuEibxJ469KVrCPNc5mn33gfg5nW/05rSYcBnl1VazPmA18sZyX1nWnQUwwErg98jGn7Ft0q
OIDC_AUTO_REDIRECT: AgBuLoXhaCs86FfeiagJQg2bnz99Poebydj212AdXpt66Lx1nciFlJMzZ03ahJiF5T8lalxPLSzkHnc5VHBT8yRSrKe1lCisDmXVv9SBBoZHntu9zQdJeChkkHImxILlJVZk4steD2CyIBDj2xjbjUfRjFYJX4F1WrzSDXG4Vf5bal8tPXNuqbHEGfx7xw9CRetMD4cfDESyKl7FgbXi4vyLgGyNTnKTAvY1w72Y7WKminiScj7S39TZZIVH7wJyvvsNEv7MCgwaPeT+W5pNwpx34oqfumTHTeJUwQWccwCnZknhOl9jwNRJ4cHFAsVArT1yvjLCbMir9p1DvP2w0R/A8CzPgDQxbJno1QKeGnw02YVlsYH5KiOKkQQdT6/kT+kk444Mg0JLPbG7XZSUZXGmVl5n31DuDFvTGum5BQOwFzYfOim1MM1OJyFiM7XuaCZCzSMNUQHtLGv3kBHZFhIS5PXqMjfdV/RAv3QhZgAnJzGYLEDawOwy4KwIXTk9b55OX7AyEwQzwMlJUujg/+IJP28BH71tgVtgQsVDnWVT2GtWW+UMamXhqh+YK6VD2BZ/81D/p0BEGq8m/dweP/qv3MK/CGBjgtIFOHnEcNZkqMH4HhrxFeu6fabUpT6+C5cGNyYJSEFpbUqjJRb7vBPIu/ewb8Y43SNikzaSp3/1A2LtkGDyzcRckQYTuFIiIg5r3jcg
OIDC_CLIENT_ID: AgBQiUtbuXKJABinO8VfjQ+yIj0a4AEvKboHd544CxYdx1OfZd0DZLsKpkowZS+Wl5QiheC/+8HFqsRzzwKas2i4IyOpihhSW/xd6X7XjDtu2Tjwuol1oRel5AmBpBj6Kl6zTvgLbHE+iUaJi7u5f2Pw0w2vXAJWxg9nhOFVEfTVtKaafYqVVXPB1KTqRPzwR5zsTuIZToEx0BvA95yWM5oKRPnjlSLk1vHMBDPqdgUq+7u+7qhj7C+rvhI7nuTOWRk0NzsELBl3AyAg82w0r+I9H7buU8OHPEhVOID//UIc/wL/QuaJNxeFiU+rM6K0CD6MMGQanuUngaEr/ZpEdxe82hGSuaLik2mWfvG4J4328a1bmmTT/JgPPDQW8Xtg68Tsqj9zT4rfaVt/+TttMMgj4oAFkyVLGZkp1sfgIX5zT7gHc0fCusTR5gGNLJ5PmHyqCu9D2jP1ERLRltiyXH6wms8+aACj+wsmo1OKUjmqrtqtAWNwZz2bmOJhhUgkP1mPz4o2LeECjBPPv9uPWkMb67ZUI4+Qp8o7J0SgQt4ZlRqhg04Rh5MxCY9TvbhLxQD5QqTktmGYWo6cDVnIEIKd/XOSG9pBT6bTGsRLicgRxjDwm/0ZU1Y3stI6UhONKYA1HG8rso8ZfRLvDsgp1Zb8tH/GAw+bl2HkOw1DbyFXWeRxau3RPwTym3GYjFgWZhP5ScOu2Rjg
OIDC_CONFIGURATION_URL: AgBiXJGc7WugRqlUE58fVPkkKkka3G39lhajhuR7EQePZ1HdE1D02/XncXgRO0r7C4kPAX3oZ1Uczo0XAMxQxAZye9NNFNei7E4rlc3BTNOOxCc2Uj57DjuqFA74cHcjs0xRCJ1KVMEcGX8zdG85v/qEI9Oru7vQf8YT3giTKNHDZMgeyrbCMFNS0cyXniseMMVIkiX0ky6TfcFdn84bZiqJYwXwnNNzeM/kJe01NvEe3DuvxZTEme8LNkie7CTgaMHfEYeXZWIh+JqkIPW+48uDYe4ojf/Ts9P2jwzf6qW8XyqwlLti0MxoVH6IqbLbJ9JVJZ94WNMm2tcjigD1eSV2klNxXYZefyqqratQpfi7Ou/KXdfs9Reijj7dtfZDOnztVouS7ivBLlufqdxgyJrnQok1MxQloZs602CBPRDE+oSHGh5ean2wbJLqt/Vl9NxcsFZpWI4tazZ0DUhD5pgS4MXoG8D+RXIz4++nfn+XhYhv84vP4VJh6kCF84dKSYDFUKkAMGJSZRDCcs4jaORO/DZLUUnytnOHV8Mo2ypfC9KgETdnbXWT7OcdCQPesABE/l2lN9KDYHxL6DqE3GibmCqE7yk6TOF36pmVeEsECeBkmcl2DpHVu9PRBiwO0YVTGUr9AVmwO3vAR+2LAVHkDmU+jn/e5SFHg2dkEpWDORC7Zn4vZrjy3pmTJdxBthVwTYqmdnWk4uvppyKP2l6MBtaVPqtB35sDuYtLGSFV1FLBJGJueW+Kk2cGbNxJJEEhBUf6ZHCDWLrZEJPN
OIDC_GROUPS_CLAIM: AgBjsoq/VaSx/P7PnODa2TIiSy/noUFrVmPuIPAyjoZP/w62zmwTqy8Ln4yRKywmsy+n9CMGgauUzkEU8HSuWJ0Moxzt+NBRpuA3nL5R8b0hMsdQXCvY3L5zqyvPH7hfY1LRVcM5cVyzTR2CTVUNbO04EeGaFt8Mh8tsmyHk+Cf8VidbkeqgEpee8tNO638F4xQGx9aob7H7UVKOou8CdpOvH3zsNFzGmSbwv9qm1sgcTxkZkjt8cGH/c4k30p8szcMFQmUK7dzrZAma5bDPg5BuwspCnRXoGVWLYN02jHYDg/08qLpL/vL+pPpChf0DMB4j2M+s5EDHnbcfT7S7pf2NkHCnWINCJSKLMUIcBSFXXEkbmSrHo1Ft6aHf/i6JHld4CT0dQs5AyK68mCzkZTWoHU6MM8+/3/J3J/TWkSP8HOyBY3gWPOU4hYEQQQlJp3T+mnnua70mo/vMr4CuZFyxLjz872CDwG5WfZkzJxM69s0XRkHEmsXi7VYjn7NThrqhh2lqbiIIJNpAemjruRl49T3gtfstVdxfgp3dfz/H/4FWRy5KY5XDUjGwYBXDCpaEey42CFSiT1w9yXV67emahUwKekvq1vvuz2bWzaTYGtCW77WzCO1cC26hORPAYbZZxgSeDgWmxMIhJF6tVFNSAu11rMjcMUKErujC5cKWb8N4DuF0H4cQv36SESKBdVCOMPzPxDg=
OIDC_PROVIDER_NAME: AgBHYuK2JYedcaHwDca/c1RKzq8YgO9cJ0lXPY2+ain+lwFJwZ7oMOPevVOq3RoUHznEYbRxKw0CLE+110NFr16bav4bn4pXRD5CeqQDAWPIxxouu+w2NgWijnqfKnVYrtrprQlfGO9mYsZiEGj/sbftX2Wd73L7oXp/6Ab+28rnxtv3KF46wXo8yfqzOGf+QRQKdT+2pv58zjN3UCGrSANirLHvmY8+a73YiVW8/xuJZN4og3JbPW65FwBpgnvQCaHKFTbPyPIpILx7sKz2yxGzT6Jt2PeSjdwou/fT70tmrwOVucbHJkKUsd/jkI0gzRPLr9Fo5LcALAKQsn70hNQIRgTFgCy4ILurJMBKeYtHkNMdnBwKRh4NlTgJB2vNXq604480ta20Z3E2m3DUxW6XPFTQN9uC5/7IT0e/F4UgQai6HENwAqIPGuOM4eKeouw8pr+RoLMI76Z0B6xWhLRqxrZknzwzOngpaku8w9SeJPkifSFwxxFjacAqFrMwLGT1kg/KIJcBa3BUK8gVRLzqJd9hq1hQtavmm6T3W9PW8yCG3IP4b/Kxsm0+B1pXSaFC2a3elFu5lgw7JeJO22NwPmCUw3cX/zfYcn4u1MSsn2Wlr5tLNwjfpeGXb0w9eGStEZNKpj/OLo5W6sxdO1PIdCRd/IdxFuDd3q3G9cJTP7aMjeWSQhr7bzgzx1K78zZ9h4Qiw4YXrw==
OIDC_REMEMBER_ME: AgARqfhGDWYi0iqE//vNsJfgZBYJklJ7KUUHm3GkD27gNk0JtOobt/RlE3UV4cBGkstjGQkEGldkwshtQk/hAuuu8Qb9PpeDxQOUABAFX7NDcr2hWnDosFSShq724ycY6B63vjzahMBYEGo9tAB07VPy6RtvZDH5EGYcubfSe2GgX/P9VcKq64eO2NkOHJWc1A9cPqQGYqFi3YZkN1yaix7kRW7sL627Slv3nU/yLpFpEwUi9ayzDBCDxfiBZoRZh+Gy4VpD2S0XmTNYgr+uRTmFxEm0gL4qvualqahssswG6x+l6S+KizbLOkhNmLtZ12t166mZLWlzUPQKiJwEiWUK5DfJs2LtZCWyqVko9972df+buaQ4yPEBO212OoM1gF0P0JaTJ7TNzo0hhYxDIBZPIx9RnG7XzqI+sccj/OLQMeyB4p9gbbKHP4YJyyYQTSPK3dHG8BFdfUDyyq1tv5IkzKRHmy48xaumN6KCPsCqrpr8j1XL9oLPGPtDgf9VRjfp01FkTg9ujskxjnfAt6fNw8JThMyRUdInagos8Wh/qxPPV33yN+EvQaooXxq1FhvZtLVAXsErzqXnNbLBmQGx87IpBT4KMpXBWmQ7+XaiFCWXQ2pdXRtwL9IT7c6dtyCpUpYtlVjYuh8L64W8GJuRUjzg6pRbcw0205MFUZfWuOObAHK67K+ChNgzzsCCy4GHVtpp
OIDC_SIGNUP_ENABLED: AgBju2S3K1/i7oEsF+KyOIAyduYJMqA5c/spXNVDtFDgrf5+v8dIa25b1LyNZWctBPLZrcuviQtY3uJWgBbY4cKagheBlwN/eCijghdFLR5U9iFNJUv31zkmnVeOAZqkLKAmO1fmHfgqrZ8+UG5YlaRXcGsMjtuCuteKhF+/ss8VxWNwsWVcJc+xcfK11UG/ogrQ3+h/Ii1RCiS5WSMmQB3uNRvyNQEdDJHdtK6gifK5ZHboHN36Rx3DEzyr3eDeIJasyYgfDtepTBTyw5cb/7yFhRMSraHTpGy3uxJwzzNL1YdCi2EHyllJMea1/AvxUQXQAxD3zOSmNwQdroE8NU/Y+Y/vuCOLVrLeveHaTUWvI9PkIlIxbU1Xql7UGf7s+Y1PTnG7aXWNYNuTCgaw03TO5bhAPX1tBubVT9oVUZgjOH+XGY4nImtBOt/1bJ2xmciC6t2nt+8lUxLfNuFS/876vJokYtd22YHGqHuGSLhAdWg7LcwIilx4iCB3OY0ORBtdE0hDOu5KR282vRqGu5vi977k6kD7sL2far0fHjpb8rOaJqfy56UsF1CjClqQWB0F2vPh/ZNjecC87ei0ds3hCEhGtXxYYdJ1u9U3lZHyvA37NxKaagTHZCmhgs4M6Pdqm5YjsO0iDorv+aBnRkZysqzdG8LPuZNy7BkqxejW9icS5+b4TeJgCButl2Cb2RO77tda
OIDC_USER_CLAIM: AgCKYNaYTjJxEFBsijRDPNXRH2nodeUMG6SBa/emB6QKNh9jnQISn9poaZXZ8Tc/xwy1sLJ3m02x+Uh7/PE+A/jiQ26uc8FVvnjvzzUOaily9Hj+xgKZnjQHmNpxPBfj3pzomWZIEoEHybsWGrae1jDzIBxSeKbMYgTGfwJoWt9n4lnQ1Z++Z5/Texp7o+oV5ZJqsEqvjebTUT+nwWaDicKpDA/QbAjD+ysCim+gh8q8FXDCHZKNgD4OcXONTpDR185EpBYV0EKSMytlz5bdBSKHxVUACTATjf3WrFDG5NtckzmJA3bA85Xl39Jxi9pLP9D0SLk5cSOLkwNbLVL1EeQ4Q2FApXcPGhmEAgan11HQo2vJW3Dsn1R3G69zNC2ArMC6Dz6pH5f8nFId5lyTKsQQs25fCf/BV2we3910a9hGKAEPdkp27xGczu0mBdo3LLWeiFaQcDo326N2pqNm3119mdCdu5I7rhjGGvOLZ7z1c8XwS9NdE0Kb3iQZLqu1xr/FSeQZLd9NNF6qYtyFM3CtdqKSgdM4vpsRkqMEiZA84u7hdglXohuaVOjIaQD66bi0He1LemzyzRkRD57zxdVSvubKIQmedNpeG2yOT7pjZqVEk7FwWPaWZZiK61UhBZNvGXUKSMEWVJ1DIr9QBsK2E4/EGdqWKa/A5+75PKvv1TpINTw26z9schtduuPfaGO4Z/2ThfPlYFwD2eoVMZpAYjI=
OIDC_USER_GROUP: AgCTGx9h4At2lcx4tN5YUBN2PRHG9ex3curTTq4kid+kKQXRUOYckYC3LNzC7aIbJ8byhFHtJVF/T37olPkgJjWzPN6C16C9eGDv8bgq6JnG9faveeXr2zjcceZ9O3bSb4sRxlQ5Zke1Asc2olYP/H6br4VPDdKkDsJ5h5/B/W/dd9FDXuPAbTp32bK/l+3YStR4Zpmaldt4hostPu9TXfE+UxJqcLCFMtQuHsHEtFV3Pimt0XIkoNPsodpKKoAhje8vNwk5YYSlhzH13XgZvKcP43z8bfekicOgRNM6T27sVGRrFM4sE635406sOXWXbxJzwlBQJTqajCtX+tAtei3LHdr0l1sjjyMDzlREUq6RYt/6klMZrLW5gsdma769AFA76JX+e+wjekmv72/aqUVn9635IamFM1J6+jIWKdWo76vJwzR/EisO12vkSbocSoAEsUxc3rGMN2aLZEvo0LjsjENKlj8fNxog5i+4jO9Bc0AXEQaFhlQwPdIKlylQPhrSiW/cnDG1WemDn+e77a9NiOkDxMXGequzdC5KyIeIrSjITXpg1MQNa039yIKkjfVL0uMsH7OL7+qzKPSPm5LOABBxKducSHHK4t364YD+8e7KeQStHjaCTpcxgf43at4BKuQ31Ty2bWfpMRofGRBvJPusgjXrdutNEAIVrzFfW11o0Yx06U7CRF5198yXHCig3zKgxgQW
template:
metadata:
creationTimestamp: null
name: mealie-oauth
namespace: recipes
type: Opaque


@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: placeholder

apps/recipes/pvc.yaml

@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mealie
spec:
  resources:
    requests:
      storage: 5Gi
  volumeMode: Filesystem
  storageClassName: "nfs-client"
  accessModes:
    - ReadWriteOnce

apps/recipes/service.yaml

@@ -0,0 +1,10 @@
apiVersion: v1
kind: Service
metadata:
  name: mealie-web
spec:
  selector:
    app: mealie
  ports:
    - port: 9000
      targetPort: 9000


@@ -18,9 +18,9 @@ spec:
           ports:
             - containerPort: 7070
           volumeMounts:
-            - name: rss-data
+            - name: data
               mountPath: /data
       volumes:
-        - name: rss-data
+        - name: data
           persistentVolumeClaim:
-            claimName: rss-claim
+            claimName: data


@@ -1,4 +1,4 @@
-apiVersion: traefik.containo.us/v1alpha1
+apiVersion: traefik.io/v1alpha1
 kind: IngressRoute
 metadata:
   name: rss-ingressroute
@@ -14,4 +14,3 @@ spec:
           port: 80
   tls:
     certResolver: default-tls
-


@@ -1,4 +1,4 @@
 apiVersion: v1
 kind: Namespace
 metadata:
-  name: placeholder
+  name: placeholder


@@ -1,9 +1,9 @@
 kind: PersistentVolumeClaim
 apiVersion: v1
 metadata:
-  name: rss-claim
+  name: data
 spec:
-  storageClassName: nfs-client
+  storageClassName: "nfs-client"
   accessModes:
     - ReadWriteOnce
   resources:

apps/todos/README.md

@@ -0,0 +1,6 @@
### Adding a user
```bash
kubectl exec -it -n todos deployments/todos-vikunja -- /app/vikunja/vikunja user create -u <username> -e "<user-email>"
```
You will be prompted for the password.
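If the image ships the same CLI subcommands as upstream Vikunja, existing accounts can be inspected the same way (this is an assumption about the CLI, not something shown in this repo):

```shell
kubectl exec -it -n todos deployments/todos-vikunja -- /app/vikunja/vikunja user list
```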

apps/todos/ingress.yaml

@@ -0,0 +1,21 @@
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: todos-ingressroute
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`todos.kluster.moll.re`) && PathPrefix(`/api/v1`)
      kind: Rule
      services:
        - name: todos-api
          port: 3456
    - match: Host(`todos.kluster.moll.re`) && PathPrefix(`/`)
      kind: Rule
      services:
        - name: todos-frontend
          port: 80
  tls:
    certResolver: default-tls


@@ -0,0 +1,18 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: todos
resources:
  - namespace.yaml
  - pvc.yaml
  - ingress.yaml
# helmCharts:
#   - name: vikunja
#     version: 0.1.5
#     repo: https://charts.oecis.io
#     valuesFile: values.yaml
#     releaseName: todos
# managed by argocd directly

Some files were not shown because too many files have changed in this diff.