95 Commits

SHA1 Message Date
acf9d34b10 Merge branch 'main' of ssh://git.kluster.moll.re:2222/remoll/k3s-infra 2024-04-01 11:47:11 +02:00
3ffead0a14 try fixing grafana 2024-04-01 11:47:01 +02:00
b6bdc09efc Update docker.io/bitnami/sealed-secrets-controller Docker tag to v0.26.1 2024-04-01 09:33:23 +00:00
49b21cde52 proper backup config 2024-03-31 19:37:18 +02:00
deed24aa01 try fixing homeassistant again 2024-03-31 19:28:19 +02:00
9cfb98248d update immich 2024-03-31 19:08:14 +02:00
7bc4beefce Update Helm release cloudnative-pg to v0.20.2 2024-03-31 15:19:09 +00:00
ce9ff68c26 Update binwiederhier/ntfy Docker tag to v2.10.0 2024-03-31 15:18:06 +00:00
8249e7ef01 Update adguard/adguardhome Docker tag to v0.107.46 2024-03-31 15:15:00 +00:00
14e65df483 Update Helm release metallb to v0.14.4 2024-03-31 15:14:18 +00:00
f6fef4278b enable wal for grafana? 2024-03-29 00:55:34 +01:00
ef50df8386 slight mistake 2024-03-28 19:45:27 +01:00
b6df7604ed add missing references 2024-03-28 19:22:59 +01:00
a03d869d0c added dashboards 2024-03-28 19:20:28 +01:00
1063349fbe use sealedsecret 2024-03-28 19:17:19 +01:00
b88c212b57 now with correct secret 2024-03-28 19:10:01 +01:00
38a522a8d6 cleaner monitoring 2024-03-28 19:07:42 +01:00
046936f8f6 fix 2024-03-28 14:04:07 +01:00
309cbc08f5 so? 2024-03-28 13:55:57 +01:00
08b4c7eb5e switch ocis to nfs-provisioner 2024-03-28 13:52:44 +01:00
58e632e0b8 migrate mealie pvc 2024-03-28 13:21:50 +01:00
30d02edebc update rss 2024-03-28 13:19:53 +01:00
e30bfe64ae dum dum 2024-03-28 12:59:51 +01:00
764a3eafb7 switch some apps over to nfs-client 2024-03-28 12:40:48 +01:00
eff07665de add nfs-provisioner with sensible path template 2024-03-28 12:29:16 +01:00
571aebe78d now? 2024-03-27 14:15:13 +01:00
91a2ae5fe8 annoying 2024-03-27 14:13:22 +01:00
f12c21ef18 update vikunja 2024-03-27 14:03:55 +01:00
2a96b288bf or like that? 2024-03-27 09:39:58 +01:00
6f3a5aeab2 okey 2024-03-27 09:37:51 +01:00
b001bd3efc maybe like that? 2024-03-27 09:36:22 +01:00
b54794df35 dum-dum 2024-03-27 09:19:00 +01:00
51c8f7c092 fix the db location 2024-03-27 09:15:25 +01:00
cfb1a87a5b now with correct api path 2024-03-27 09:07:01 +01:00
10483431c6 trying todos like that 2024-03-27 09:04:40 +01:00
3a9450da9d now? 2024-03-27 08:34:48 +01:00
374e23ba1e trying to fix immich 2024-03-27 08:32:46 +01:00
66f703f5e1 update to correct location 2024-03-27 08:25:53 +01:00
4b05b53d72 small fixes 2024-03-27 00:38:34 +01:00
cfbc7fcd0d disable typesense 2024-03-27 00:31:41 +01:00
ffed2aea50 add media back 2024-03-27 00:27:57 +01:00
e674bf5b94 slim down the file sync 2024-03-27 00:12:50 +01:00
133af74ae0 missing namespace resource 2024-03-27 00:05:55 +01:00
f648064304 remove nfs-client 2024-03-26 23:50:27 +01:00
c7180f793a trying like that 2024-03-26 22:58:17 +01:00
4fcdaad297 move prometheus to its own config 2024-03-26 22:13:02 +01:00
f4b99ca037 now perhaps? 2024-03-26 11:16:33 +01:00
588bf774f9 or like that? 2024-03-26 10:58:44 +01:00
e18c661dbd typo 2024-03-26 10:57:18 +01:00
7d65ffea6a remove ocis:// 2024-03-26 10:56:34 +01:00
e460b5324a try differently configured todos 2024-03-26 10:55:25 +01:00
6fe166e60c manage todos 2024-03-24 15:31:59 +01:00
6ceb3816fb cleanup with regards to upcoming migration 2024-03-23 11:45:11 +01:00
19b63263e6 whoopsie 2024-03-22 14:57:17 +01:00
20d46d89d2 also manage ocis 2024-03-22 14:54:30 +01:00
7aee6c7cf0 basic auth maybe? 2024-03-22 14:53:29 +01:00
443da20ff9 steps towards a completely managed cluster 2024-03-20 23:45:08 +01:00
84a47b15b6 increase renovate frequency 2024-03-12 21:28:35 +01:00
40259ee57e Update apps/immich/kustomization.yaml 2024-03-12 14:01:08 +00:00
619368a2fd Merge pull request 'Update homeassistant/home-assistant Docker tag to v2024.3' (#54) from renovate/homeassistant-home-assistant-2024.x into main
Reviewed-on: #54
2024-03-12 09:04:37 +00:00
3288966b95 Merge pull request 'Update octodns/octodns Docker tag to v2024.03' (#55) from renovate/octodns-octodns-2024.x into main
Reviewed-on: #55
2024-03-12 09:04:16 +00:00
d12d50b906 Update apps/immich/kustomization.yaml 2024-03-12 09:03:55 +00:00
c7f0221062 Update octodns/octodns Docker tag to v2024.03 2024-03-12 09:02:04 +00:00
7819867091 Update homeassistant/home-assistant Docker tag to v2024.3 2024-03-12 09:01:41 +00:00
dd4c3d7a36 Update apps/immich/kustomization.yaml 2024-03-12 08:37:11 +00:00
e66905402e Merge pull request 'Update Helm release immich to v0.4.0' (#47) from renovate/immich-0.x into main
Reviewed-on: #47
2024-03-12 08:35:56 +00:00
1bdb4522c3 Merge pull request 'Update ghcr.io/mealie-recipes/mealie Docker tag to v1.3.2' (#53) from renovate/ghcr.io-mealie-recipes-mealie-1.x into main
Reviewed-on: #53
2024-03-12 08:32:10 +00:00
b5845479c2 Update ghcr.io/mealie-recipes/mealie Docker tag to v1.3.2 2024-03-10 19:01:42 +00:00
f2f31c4f4e Merge pull request 'Update binwiederhier/ntfy Docker tag to v2.9.0' (#52) from renovate/binwiederhier-ntfy-2.x into main
Reviewed-on: #52
2024-03-10 09:57:10 +00:00
ded829500c Update binwiederhier/ntfy Docker tag to v2.9.0 2024-03-09 11:04:03 +00:00
f762f5451b Merge pull request 'Update adguard/adguardhome Docker tag to v0.107.45' (#51) from renovate/adguard-adguardhome-0.x into main
Reviewed-on: #51
2024-03-08 07:42:50 +00:00
709f21998e Update adguard/adguardhome Docker tag to v0.107.45 2024-03-07 18:01:21 +00:00
47f091be83 Merge pull request 'Update actualbudget/actual-server Docker tag to v24.3.0' (#48) from renovate/actualbudget-actual-server-24.x into main
Reviewed-on: #48
2024-03-07 17:31:17 +00:00
da8be916bf fix bad naming 2024-03-07 13:21:05 +01:00
ad67acb9e7 again 2024-03-07 13:17:50 +01:00
5a7b5a82d7 maybe the service was misconfigured 2024-03-07 13:16:14 +01:00
2c32db61ec why? 2024-03-07 13:13:54 +01:00
141b80d15c man... 2024-03-07 13:11:08 +01:00
bf1d4badbe or directly use the dns name 2024-03-07 13:08:29 +01:00
be48049e22 fix bad syntax 2024-03-07 13:01:21 +01:00
3a629284f3 perhaps now 2024-03-07 12:59:04 +01:00
28c92e727f last chance 2024-03-06 14:48:14 +01:00
9a65c531f1 now? 2024-03-06 14:37:23 +01:00
52a086df73 come on 2024-03-06 14:34:19 +01:00
b728e21a15 expose grpc of store 2024-03-06 14:31:04 +01:00
da32c9c2ce neew 2024-03-06 14:25:47 +01:00
846390600e let's try with query as well 2024-03-06 14:24:07 +01:00
18d7a6b4cb or maybe like that? 2024-03-06 11:34:15 +01:00
31c8e91502 actually don't specify data 2024-03-06 11:31:15 +01:00
f0adf6b5db change user of prometheus to make thanos happy 2024-03-06 08:14:55 +01:00
b24ae9c698 with correct image 2024-03-05 16:44:42 +01:00
f3c108e362 fix 2024-03-05 16:41:54 +01:00
d2a8d92864 also use thanos object store 2024-03-05 16:39:15 +01:00
10816c4bd9 Update actualbudget/actual-server Docker tag to v24.3.0 2024-03-03 20:01:34 +00:00
aca0d4ba21 Update Helm release immich to v0.4.0 2024-03-03 20:01:27 +00:00
109 changed files with 834 additions and 281 deletions

.gitignore vendored

@@ -1,2 +1,6 @@
# Kubernetes secrets
*.secret.yaml
charts/
main.key
# Helm Chart files
charts/

.gitmodules vendored

@@ -1,3 +1,6 @@
[submodule "infrastructure/external-dns/octodns"]
path = infrastructure/external-dns/octodns
url = ssh://git@git.kluster.moll.re:2222/remoll/dns.git
[submodule "apps/monitoring/dashboards"]
path = apps/monitoring/dashboards
url = ssh://git@git.kluster.moll.re:2222/remoll/grafana-dashboards.git


@@ -1,7 +1,6 @@
# Kluster setup and IaC using argoCD
### Initial setup
#### Requirements:
- A running k3s instance
@@ -28,5 +27,21 @@ The app-of-apps will bootstrap a fully featured cluster with the following components
- immich
- ...
#### Recap
- install sealedsecrets; see the [README](./infrastructure/sealedsecrets/README.md)
```bash
kubectl apply -k infrastructure/sealedsecrets
kubectl apply -f infrastructure/sealedsecrets/main.key
kubectl delete pod -n kube-system -l name=sealed-secrets-controller
```
- install argocd
```bash
kubectl apply -k infrastructure/argocd
```
- wait...
### Adding an application
todo
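(The section above is still a stub; as a hedged sketch only, the repo's existing manifests suggest the shape a new app takes. The following mirrors the `files-application` manifest further down in this diff, with `example` as a placeholder app name:)
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-application
  namespace: argocd
spec:
  project: apps
  source:
    repoURL: ssh://git@git.kluster.moll.re:2222/remoll/k3s-infra.git
    targetRevision: main
    path: apps/example
  destination:
    server: https://kubernetes.default.svc
    namespace: example
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```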


@@ -1,4 +1,4 @@
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
name: adguard-tls-ingress


@@ -10,7 +10,7 @@ resources:
images:
- name: adguard/adguardhome
newName: adguard/adguardhome
newTag: v0.107.44
newTag: v0.107.46
namespace: adguard


@@ -0,0 +1,48 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: ocis-statefulset
spec:
selector:
matchLabels:
app: ocis
serviceName: ocis-web
replicas: 1
template:
metadata:
labels:
app: ocis
spec:
containers:
- name: ocis
image: ocis
resources:
limits:
memory: "1Gi"
cpu: "1000m"
env:
- name: OCIS_INSECURE
value: "true"
- name: OCIS_URL
value: "https://ocis.kluster.moll.re"
- name: OCIS_LOG_LEVEL
value: "debug"
ports:
- containerPort: 9200
volumeMounts:
- name: config
mountPath: /etc/ocis
# - name: ocis-config-file
# mountPath: /etc/ocis/config.yaml
- name: data
mountPath: /var/lib/ocis
volumes:
# - name: ocis-config
# persistentVolumeClaim:
# claimName: ocis-config
- name: config
secret:
secretName: ocis-config
- name: data
persistentVolumeClaim:
claimName: ocis

apps/files/ingress.yaml Normal file

@@ -0,0 +1,18 @@
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: ocis-ingressroute
spec:
entryPoints:
- websecure
routes:
- match: Host(`ocis.kluster.moll.re`)
kind: Rule
services:
- name: ocis-web
port: 9200
scheme: https
tls:
certResolver: default-tls


@@ -0,0 +1,16 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- ingress.yaml
- service.yaml
- pvc.yaml
- deployment.yaml
- ocis-config.sealedsecret.yaml
namespace: files
images:
- name: ocis
newName: owncloud/ocis
newTag: "5.0"


@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: placeholder

File diff suppressed because one or more lines are too long


@@ -1,13 +1,11 @@
```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim
name: ocis
spec:
storageClassName: nfs-client
storageClassName: "nfs-client"
accessModes:
- ReadWriteMany
- ReadWriteOnce
resources:
requests:
storage: 1Mi
```
storage: 150Gi

apps/files/service.yaml Normal file

@@ -0,0 +1,10 @@
apiVersion: v1
kind: Service
metadata:
name: ocis-web
spec:
selector:
app: ocis
ports:
- port: 9200
targetPort: 9200


@@ -22,13 +22,13 @@ spec:
- name: TZ
value: Europe/Berlin
volumeMounts:
- name: actualbudget-data-nfs
- name: data
mountPath: /data
ports:
- containerPort: 5006
name: http
protocol: TCP
volumes:
- name: actualbudget-data-nfs
- name: data
persistentVolumeClaim:
claimName: actualbudget-data-nfs
claimName: data


@@ -1,4 +1,4 @@
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: actualbudget


@@ -1,27 +1,11 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: "actualbudget-data-nfs"
spec:
capacity:
storage: "5Gi"
accessModes:
- ReadWriteOnce
nfs:
path: /export/kluster/actualbudget
server: 192.168.1.157
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: "actualbudget-data-nfs"
name: "data"
spec:
storageClassName: ""
storageClassName: "nfs-client"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "5Gi"
volumeName: actualbudget-data-nfs


@@ -13,4 +13,4 @@ resources:
images:
- name: actualbudget
newName: actualbudget/actual-server
newTag: 24.2.0
newTag: 24.3.0


@@ -0,0 +1,16 @@
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
creationTimestamp: null
name: config
namespace: homeassistant
spec:
encryptedData:
configuration.yaml: AgBAcghv6dQYajqfwayHhfRIE2awjHYOw1h9vWkeM7GRWtaB61eGLbievEvDhKYYWbLCCwXEfFpk1NodWvf7fdmsIm+uXu+95VbAv7/uRGdO4VDB37fagXLqEceNRkOD4lBQxUaZM1kHkEIbMTSKaLZAUfDwNwqZzbVMNRkTCd15+DFQMVpO7wT8CJ+oRWqB4ISV+0NivxVABzqzp3djQMbLgJzxYAzYWJro+qAJnXUrr4Jmk/mwcoYKD6JEVQKFKnT6TArFU9zK9T2Uoyiye530d6yV64j+qEgRS85rSEfYvooW+8YqIWnVvwaRt0FzFDAy0ONB91TemWlidM55RYKURiOvYa3L2jH5oDfFTDil3zjTCY5/4JSi7WD0ceChRAtq881O+8iK6RiVLZfAL6N0fcm5LpGG0ug3Sn3CZKN85M70/agDhbz11ryGhPCBkuSUuT7HT1rbxzLy3wUFyD5MnTPbLZ+f88zCBUPIZk6UK9IdacOPYcn/dQzfJEeNoL2FouZbC/SWw6gxpNjcmk/ercwg4/g/WpPEeLZGpCt3gt5sbFi91+j0mocogY/DZ4Y6HC//v2KIZPy9nmLu4EDLjVzJnNcqW0t3aD9matxUiBbhOUSHmIfh3oGA4ZPr4e1UXOYRcihviD6OwNLfU337o86MOOTSLGPtUhERnHdPmz/X0n1oGMWffxcKKhS2m3WJtycAKbLwc4N/oeOFLbEAyc7xbfX0FUKlEskNJeH2toWfgOj8w7TunOi7MNCJN6zA1eclXa9axJozw7JTbAxZDPVyBr17yXgDJ0QymGxI7X0z7Vy8Y3yR4URlNaHgh2OQ1BKi4uEkrVz2lFtujPkfS9oiKYOaSJCBTFG4G9bx7QlTezlbNOeFPkilBSWhosCaLPhHKizRDOtDMu60XyvATEdKWeiFqvt69MNtETZPFfmzXYSAcwYIuMYu3twzMzEaFi3mk+wXN0fcrz/DAU3EJ+1zWr3WaDRXzVhesgf0BcfP+tRBm0zRlfS12AP7OHCTxxbR4KJzCegLcjWLqSHF4kRZgrCkfph1UKdwX+hHDNxWkPx6U/e4/4fhCuQZ7nDu62WysbDQO624Gsit4DsXFkfaX6rX98B5sjw8GYIhv53Pw5HetyjBHr/EFDeIKWmrEafrPRkzUgGq/pdDjZZ8FcfIi4SR6Yx7fHg4rKp2EA7NP0mEUEthqqpncHpE47pu0EFy1d9ZwwW8AjvXIWT+CjMEv8EPBNOJrzT36qXYWFzF3DWgf//MyUyO5IGSIhIBP6oy/3yit6Q4lNPlaxTclxATlbbf7T8OLaHc+4Tk4odRVXk6RSyrEHTphGQot5YhodlLYueZfeINOzKZ05+r/dGkoPwVKyLPV/g2YOjjtHNGEzVSjQgUnHk4rkRtaIJvWAiFQtUaiaJVQKGYlq+ARlKRJDAXk9jpuwiKoH/5zBou1jrfqaxTqstEAK8leNNZyIv8biFLipDf6xC6EoV58gMTdx20tFNGgVRgsWFHfuNW39Gn5LDRS+u9PQWVAqKxZXwJ/8pz/smki0qIWNKaa9LeNxPDHf/RPoU1PqxDt7FVzPhE7KjnpgsPxB+sv3qqOJ9H4oBdueMRWnpUdTuO6CzpvUuio19mhX3iLGToUaYyYxH0v29o8QQzLPrWdBjQ4+g3TQoDu/bJCaP/UdEmK/e73pFrZWo/jM2tdGiRi8Z+67k9RPdIAR/dJW0+mJN+w6rizxrdYMUu9iNgUCps9NZX6BB0D6fpcfbas+SzlrIBthi6s5qnJFGS+jJGn9s0KdgRrqQxyyHvYKdjreR0Y+3XcRXdYercKLILmF/LE3jPiFaDWV3rkbYyetjUx/KFO/f6m5er+tXiUZeznNvOIj8ppLivwpPbXbGBGRcYNyuSgRfPxLTDe2XSbSkROuqGPSLQc+XP6x9KeM3p6Gwq54bO2jkDVWGDjfndNz3J64ovV3ljZ1ON0AwB3i+3MTEkFpzzDXFgg1GMviVZPRT7ZWhDOCOKwJI1ylLoGrMkQQgnbm7qImG9fP7GC2J12TRzSYuxKdIjsdG3xg==
template:
metadata:
creationTimestamp: null
name: config
namespace: homeassistant
type: Opaque


@@ -1,4 +1,3 @@
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -22,8 +21,11 @@ spec:
- name: TZ
value: Europe/Berlin
volumeMounts:
- name: config
- name: config-dir
mountPath: /config
- name: configuration
mountPath: /config/configuration.yaml
readOnly: true
resources:
requests:
cpu: "100m"
@@ -32,6 +34,10 @@ spec:
cpu: "2"
memory: "1Gi"
volumes:
- name: config
- name: config-dir
persistentVolumeClaim:
claimName: homeassistant-nfs
claimName: config
- name: configuration
secret:
secretName: config


@@ -1,4 +1,4 @@
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: homeassistant-ingress
@@ -17,7 +17,7 @@ spec:
certResolver: default-tls
---
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: homeassistant-websocket


@@ -9,8 +9,10 @@ resources:
- pvc.yaml
- service.yaml
- deployment.yaml
- servicemonitor.yaml
- config.sealedsecret.yaml
images:
- name: homeassistant/home-assistant
newName: homeassistant/home-assistant
newTag: "2024.2"
newTag: "2024.1"


@@ -1,28 +1,11 @@
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: homeassistant-nfs
spec:
# storageClassName: slow
capacity:
storage: "1Gi"
# volumeMode: Filesystem
accessModes:
- ReadWriteOnce
nfs:
path: /kluster/homeassistant
server: 192.168.1.157
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: homeassistant-nfs
name: config
spec:
storageClassName: ""
storageClassName: "nfs-client"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "1Gi"
volumeName: homeassistant-nfs


@@ -0,0 +1,13 @@
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: homeassistant-servicemonitor
labels:
app: homeassistant
spec:
selector:
matchLabels:
app: homeassistant
endpoints:
- port: homeassistant-web
path: /api/prometheus


@@ -1,4 +1,4 @@
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: stripprefix
@@ -7,7 +7,7 @@ spec:
prefixes:
- /api
---
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: websocket
@@ -18,7 +18,7 @@ spec:
# enable websockets
Upgrade: "websocket"
---
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: immich-ingressroute


@@ -12,14 +12,14 @@ namespace: immich
helmCharts:
- name: immich
releaseName: immich
version: 0.3.1
version: 0.4.0
valuesFile: values.yaml
repo: https://immich-app.github.io/immich-charts
images:
- name: ghcr.io/immich-app/immich-machine-learning
newTag: v1.95.1
newTag: v1.99.0
- name: ghcr.io/immich-app/immich-server
newTag: v1.95.1
newTag: v1.99.0


@@ -10,7 +10,7 @@ spec:
initdb:
owner: immich
database: immich
secret:
secret:
name: postgres-password
postgresql:
@@ -19,7 +19,12 @@ spec:
storage:
size: 1Gi
storageClass: nfs-client
pvcTemplate:
storageClassName: ""
resources:
requests:
storage: "1Gi"
volumeName: immich-postgres
monitoring:
enablePodMonitor: true


@@ -24,3 +24,17 @@ spec:
requests:
storage: "50Gi"
volumeName: immich-nfs
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: immich-postgres
spec:
capacity:
storage: "1Gi"
accessModes:
- ReadWriteOnce
nfs:
path: /kluster/immich-postgres
server: 192.168.1.157
# later used by cnpg


@@ -1,4 +1,4 @@
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: jellyfin-vue-ingress
@@ -17,7 +17,7 @@ spec:
tls:
certResolver: default-tls
---
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: jellyfin-backend-ingress
@@ -37,7 +37,7 @@ spec:
tls:
certResolver: default-tls
---
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: jellyfin-websocket
@@ -48,7 +48,7 @@ spec:
Connection: keep-alive, Upgrade
Upgrade: WebSocket
---
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: jellyfin-server-headers


@@ -1,39 +1,21 @@
apiVersion: v1
kind: PersistentVolume
metadata:
namespace: media
name: jellyfin-config-nfs
spec:
capacity:
storage: "1Gi"
accessModes:
- ReadWriteOnce
nfs:
path: /export/kluster/jellyfin-config
server: 192.168.1.157
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
namespace: media
name: jellyfin-config-nfs
name: config
spec:
storageClassName: ""
storageClassName: "nfs-client"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "1Gi"
volumeName: jellyfin-config-nfs
---
apiVersion: v1
kind: PersistentVolume
metadata:
namespace: media
name: jellyfin-data-nfs
name: media
spec:
capacity:
storage: "1Ti"
@@ -46,8 +28,7 @@ spec:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
namespace: media
name: jellyfin-data-nfs
name: media
spec:
storageClassName: ""
accessModes:
@@ -55,4 +36,4 @@ spec:
resources:
requests:
storage: "1Ti"
volumeName: jellyfin-data-nfs
volumeName: media


@@ -25,9 +25,9 @@ spec:
- name: TZ
value: Europe/Berlin
volumeMounts:
- name: jellyfin-config
- name: config
mountPath: /config
- name: jellyfin-data
- name: media
mountPath: /media
livenessProbe:
httpGet:
@@ -36,10 +36,10 @@ spec:
initialDelaySeconds: 100
periodSeconds: 15
volumes:
- name: jellyfin-config
- name: config
persistentVolumeClaim:
claimName: jellyfin-config-nfs
- name: jellyfin-data
claimName: config
- name: media
persistentVolumeClaim:
claimName: jellyfin-data-nfs
claimName: media


@@ -0,0 +1,17 @@
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
creationTimestamp: null
name: grafana-admin-secret
namespace: monitoring
spec:
encryptedData:
password: AgBe8isrCWd5MuaQq5CpA+P3fDizCCDo23BVauaBJLuMRIYbVwpfahaJW7Ocj3LTXwdeVVPBrOk2D6vESUXu6I0EWc3y/NFN4ZezScxMcjmeaAb+z1zWwdH0FynTPJYOxv1fis1FDTkXDmGy3FXo5NDK9ET899TtulKFkh7UqSxdrRWbD3pegJgqKGPIqDCTAxZN/ssiccfWGS4lHqQBJkXn8DeampcKwjOCvgaBdilF03GoSfpgsqa2Iw2SfTDEobWBWVMMK/RB3/Oi/YJkGwMW3ECUxvTDam8gb0RFA1xjWXoYTLVVP5fK7q7x63ns51HebloxAP1GBrt138N/iDrfbGfjNP8Lx0NFl5y5bTgYN/z8DVTOFf90xxWe+YYERdwllg0Ci1JLNbA+NszXTD4L/HC7a8XuBfjRzxMTeymNjR76jzfPkH6v1EvesOduTfSrahPgS0qS+eGOier1rHxj3EBRhOScY1ut5Bq4oJMNId9nMVbVa6xyq2HyxuJHXV+j6h5FGHmEXn9gIR7wGp8RhtPhKgVGLrHcbHZ5Th2E7eomz1T2NK/ezNP8ZhcwOj/lyGywlW0vhU798zpWhMf57k2OPeuMlfs8Y8y74epBdyBjsrMR4EDctF8RZR3vraxENiMJ6kk1gqKj04ir6HwL7blqwiybIFFnJrp2j7MzgjS4SQ687qMX5Zf5XT03aEE+9W9Epy73tT7zVQKdENCQlcm5
user: AgAdiOivMn0d+nYjYycMZz9QSiS/9QqwHPJQMHkE7/IOou+CJtBknlETNtdv84KZgBQTucufYqu3LR3djOBpdnQsYbIXDxPFgRZQ11pwu/sO2EGifDk218yyzzfZMvx1FL7JL4LI1rKoiHycZowCwsAjEtlICVOOYv1/Plki+6MHXiAGG4r/yUhugGx3VLLX+Poq8oaTeHndgSsFXJege8SfgYR4TsC7pQgsM1UQEFncGIhJYTD2ashmUxFJ+7CJjHqPR0lFRrZXmFvPwTYTCMT+tnSHnCFWtTht8cEi1NxA4kD/eKEX0rOol15EUZnFUws2WqWI634TbyGwZ7km/Yw4XoDxiQR4ar6ulkqb/djcc3cWDYE7PF1m1c+r3iog85S5CSfZ5EvdCHHrbPN9uO2gmoRQWiR5qI70YMxBSnkeLZWN05O1vUuopdXFDTafY7YskxLEdIGHGqFUpUrJZOvBB0zNBdHGgYxFzb5pNmMCC5LPlOuoKjV4yskh9Tgovz06aAvsPxn2WWx6NOJambeziKB5OmSKvPsFofViyGBekVAWSWtt9yJe6lu5OKpBEiA6xhGhQ4ZryTXu9wvVALuPSIwBFITv85sIxjJb80qhJ51wb12QgzLLcPby0HSanyBI1M4jfsXWpK8gIAbDNO+eD7z3PhD9Y/5hPqYKXZ37Geyq23xiyxG8XDj6cL+Ie6k8XipayI4=
template:
metadata:
creationTimestamp: null
name: grafana-admin-secret
namespace: monitoring
type: Opaque


@@ -1,5 +1,5 @@
kind: IngressRoute
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
metadata:
name: grafana-ingress
spec:


@@ -1,4 +1,3 @@
replicas: 1
## Create a headless service for the deployment
@@ -10,13 +9,6 @@ headlessService: false
##
service:
enabled: true
type: ClusterIP
port: 80
targetPort: 3000
# targetPort: 4181 To be used with a proxy extraContainer
annotations: {}
labels: {}
portName: service
serviceMonitor:
## If true, a ServiceMonitor CRD is created for a prometheus operator
@@ -24,42 +16,42 @@ serviceMonitor:
##
enabled: false
ingress:
enabled: false
persistence:
type: pvc
enabled: true
# storageClassName: default
accessModes:
- ReadWriteOnce
size: 10Gi
# annotations: {}
finalizers:
- kubernetes.io/pvc-protection
# selectorLabels: {}
## Sub-directory of the PV to mount. Can be templated.
# subPath: ""
## Name of an existing PVC. Can be templated.
existingClaim: grafana-nfs
## If persistence is not enabled, this allows to mount the
## local storage in-memory to improve performance
##
inMemory:
# credentials
admin:
existingSecret: grafana-admin-secret
userKey: user
passwordKey: password
datasources:
datasources.yaml:
apiVersion: 1
datasources:
- name: Thanos
type: prometheus
url: http://thanos-querier.prometheus.svc:9090
isDefault: true
dashboardProviders:
dashboardproviders.yaml:
## Reference to external ConfigMap per provider. Use provider name as key and ConfigMap name as value.
## A provider dashboards must be defined either by external ConfigMaps or in values.yaml, not in both.
## ConfigMap data example:
##
## data:
## example-dashboard.json: |
## RAW_JSON
##
dashboardsConfigMaps:
home-metrics: dashboard-home-metrics
proxmox: dashboard-proxmox
gitea: dashboard-gitea
grafana.ini:
wal: true
default_theme: dark
unified_alerting:
enabled: false
## The maximum usage on memory medium EmptyDir would be
## the minimum value between the SizeLimit specified
## here and the sum of memory limits of all containers in a pod
##
# sizeLimit: 300Mi
initChownData:
## If false, data ownership will not be reset at startup
## This allows the prometheus-server to be run with an arbitrary user
##
enabled: true
# Administrator credentials when not using an existing secret (see below)
adminUser: admin
# adminPassword: strongpassword


@@ -6,29 +6,27 @@ namespace: monitoring
resources:
- namespace.yaml
- grafana.pvc.yaml
- influxdb.pvc.yaml
# - influxdb.pvc.yaml
- grafana.ingress.yaml
# prometheus-operator crds
- https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.70.0/bundle.yaml
- prometheus.yaml
- thanos-objstore-config.sealedsecret.yaml
- grafana-admin.sealedsecret.yaml
- dashboards/
helmCharts:
- releaseName: grafana
name: grafana
repo: https://grafana.github.io/helm-charts
version: 7.3.0
version: 7.3.7
valuesFile: grafana.values.yaml
- releaseName: influxdb
name: influxdb2
repo: https://helm.influxdata.com/
version: 2.1.2
valuesFile: influxdb.values.yaml
# - releaseName: influxdb
# name: influxdb2
# repo: https://helm.influxdata.com/
# version: 2.1.2
# valuesFile: influxdb.values.yaml
- releaseName: telegraf-speedtest
name: telegraf
repo: https://helm.influxdata.com/
version: 1.8.39
valuesFile: telegraf-speedtest.values.yaml
# - releaseName: telegraf-speedtest
# name: telegraf
# repo: https://helm.influxdata.com/
# version: 1.8.39
# valuesFile: telegraf-speedtest.values.yaml


@@ -1,16 +0,0 @@
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
creationTimestamp: null
name: thanos-objstore-config
namespace: monitoring
spec:
encryptedData:
thanos.yaml: AgCXlr7NO2DoH1R0ngtDFi8rgJaDnW5WSmOMjvXF4GMcEjnn1kwQMLkF0Xz1BUB5GlQkTAg+ZjCWGMlfycBmUnZb+koZK3X1YLsk1BxBxtuSqhj35iQYxKQ7rAlsz7FxUQjK2oiJkFeQmo/rwcw6l6vZJ73+THYSebR9mLQ/H0pnmJM3ldLX4iWL2H8BZ7ftOYdXO7Xv0lk2k2L4O4LgnB1Uedpyk0HLVxAv3VdVU/RFpHm5Q7kudrCMm9ENcJG7qIWuii8GkysvEefbo2phgKn1Zr5XR6SyekuW2e6FyHe9us5Pv5HnJ6Z2+ZyewygaGgHiRqtxRMaLbahICewfSHwyGzeAD2kdgwVyJYXxVPV9qKQvZmj0ZDCDZ5K548mSUq7nNXSI9M9AJBTKUoqb2FXK3pqn4yh9M1l+7Pmno5Fs22blAyGsRqO32GxrYvEXPpdSeqHRjOMYTnbPuteGRKcvmSEUSuHzkeoTzU1Jh4Sg0ygtQUNIKtbwhJm1XpbJ0oaR5ukWMxPfpDv+B5FmrDsU/I+o62+NtCLQLkK6MoRBFiJ1kymtKkM3vQ1CVg4Vtc5Gc2D6mMu5K8kEuUODweBb8qPnYH7ULfTYORldj3d+Fb2mGF5mAU6xHMzbocsdgZpbAzUP/FfJmMMDWf4aW3LJ1mBjUD06KAwPsQvbTm6VInrdXh2QVb4UIp41kbyK8sanHrvh3bprHloxt8OnTZ2HQl+XN+kxYirkVkL34lIlk7KdYCWqO7QqH0ncd9WF0f9mpPGbxo3J
template:
metadata:
creationTimestamp: null
name: thanos-objstore-config
namespace: monitoring
type: Opaque


@@ -1,4 +1,4 @@
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: nextcloud-ingressroute


@@ -1,4 +1,4 @@
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: websocket
@@ -9,7 +9,7 @@ spec:
# enable websockets
Upgrade: "websocket"
---
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: ntfy-ingressroute


@@ -13,4 +13,4 @@ resources:
images:
- name: binwiederhier/ntfy
newName: binwiederhier/ntfy
newTag: v2.8.0
newTag: v2.10.0


@@ -34,4 +34,4 @@ spec:
volumes:
- name: mealie-data
persistentVolumeClaim:
claimName: mealie-data
claimName: mealie


@@ -1,4 +1,4 @@
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: mealie-ingressroute


@@ -12,5 +12,5 @@ resources:
images:
- name: mealie
newTag: v1.2.0
newTag: v1.3.2
newName: ghcr.io/mealie-recipes/mealie


@@ -1,12 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mealie-data
name: mealie
spec:
resources:
requests:
storage: 1Gi
storage: 5Gi
volumeMode: Filesystem
storageClassName: nfs-client
storageClassName: "nfs-client"
accessModes:
- ReadWriteOnce


@@ -18,9 +18,9 @@ spec:
ports:
- containerPort: 7070
volumeMounts:
- name: rss-data
- name: data
mountPath: /data
volumes:
- name: rss-data
- name: data
persistentVolumeClaim:
claimName: rss-claim
claimName: data


@@ -1,4 +1,4 @@
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: rss-ingressroute


@@ -1,9 +1,9 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: rss-claim
name: data
spec:
storageClassName: nfs-client
storageClassName: "nfs-client"
accessModes:
- ReadWriteOnce
resources:


@@ -1,4 +1,4 @@
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: rss-ingressroute


@@ -1,11 +1,25 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: syncthing-data
spec:
capacity:
storage: "50Gi"
accessModes:
- ReadWriteOnce
nfs:
path: /kluster/syncthing
server: 192.168.1.157
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: syncthing-claim
spec:
storageClassName: nfs-client
storageClassName: ""
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
volumeName: syncthing

apps/todos/README.md Normal file

@@ -0,0 +1,6 @@
### Adding a user
```bash
kubectl exec -it -n todos deployments/todos-vikunja -- /app/vikunja/vikunja user create -u <username> -e "<user-email>"
```
The password will be prompted for interactively.
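For illustration, a hypothetical invocation (`alice` and the e-mail address are placeholders):
```bash
kubectl exec -it -n todos deployments/todos-vikunja -- \
  /app/vikunja/vikunja user create -u alice -e "alice@example.com"
# Vikunja then prompts for the new user's password interactively
```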

apps/todos/ingress.yaml Normal file

@@ -0,0 +1,21 @@
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: todos-ingressroute
spec:
entryPoints:
- websecure
routes:
- match: Host(`todos.kluster.moll.re`) && PathPrefix(`/api/v1`)
kind: Rule
services:
- name: todos-api
port: 3456
- match: Host(`todos.kluster.moll.re`) && PathPrefix(`/`)
kind: Rule
services:
- name: todos-frontend
port: 80
tls:
certResolver: default-tls


@@ -0,0 +1,18 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: todos
resources:
- namespace.yaml
- pvc.yaml
- ingress.yaml
# helmCharts:
# - name: vikunja
# version: 0.1.5
# repo: https://charts.oecis.io
# valuesFile: values.yaml
# releaseName: todos
# managed by argocd directly


@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: placeholder

apps/todos/pvc.yaml Normal file

@@ -0,0 +1,12 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: data
spec:
resources:
requests:
storage: 5Gi
volumeMode: Filesystem
storageClassName: "nfs-client"
accessModes:
- ReadWriteOnce

apps/todos/values.yaml Normal file

@@ -0,0 +1,51 @@
######################
# VIKUNJA COMPONENTS #
######################
# You can find the default values that this `values.yaml` overrides in the comment at the top of this file.
api:
enabled: true
image:
tag: 0.22.1
persistence:
# This is where your Vikunja data will live; you can either let
# the chart create a new PVC for you or provide an existing one.
data:
enabled: true
existingClaim: data
accessMode: ReadWriteOnce
size: 10Gi
mountPath: /app/vikunja/files
ingress:
main:
enabled: false
configMaps:
# The configuration for Vikunja's api.
# https://vikunja.io/docs/config-options/
config:
enabled: true
data:
config.yml: |
service:
frontendUrl: https://todos.kluster.moll.re
database:
type: sqlite
path: /app/vikunja/files/vikunja.db
registration: false
env:
frontend:
enabled: true
image:
tag: 0.22.1
ingress:
main:
enabled: false
postgresql:
enabled: false
redis:
enabled: false
typesense:
enabled: false


@@ -1,4 +1,4 @@
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
namespace: whoami


@@ -1,5 +1,5 @@
---
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: argocd-ingressroute


@@ -4,7 +4,7 @@ kind: Kustomization
namespace: backup
# nameSuffix: -backup
resources:
- ../../base
- ../../cronjobs-base
# - ./restic-commands.yaml


@@ -3,7 +3,7 @@ kind: Kustomization
namespace: backup
resources:
- ../../base
- ../../cronjobs-base
# patch the cronjob args field:


@@ -1,6 +1,14 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: backup
resources:
- rclone-config.sealedsecret.yaml
- namespace.yaml
- pvc.yaml
- restic-password.sealedsecret.yaml
- pvc.yaml
- rclone-config.sealedsecret.yaml
- rclone-gcloud.deployment.yaml
- cronjobs-overlays/prune/
- cronjobs-overlays/backup/


@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: placeholder


@@ -1,3 +1,16 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: pg-backup-data
spec:
capacity:
storage: "50Gi"
accessModes:
- ReadWriteOnce
nfs:
path: /kluster/pg-backup
server: 192.168.1.157
---
kind: PersistentVolumeClaim
apiVersion: v1
@@ -5,9 +18,10 @@ metadata:
name: postgres-backup-claim
spec:
storageClassName: nfs-client
storageClassName: ""
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
volumeName: pg-backup-data


@@ -11,7 +11,7 @@ resources:
images:
- name: octodns
newName: octodns/octodns # has all plugins
newTag: "2024.02"
newTag: "2024.03"
- name: git
newName: alpine/git


@@ -0,0 +1,11 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: external
resources:
- namespace.yaml
- omv-s3.ingress.yaml
- openmediavault.ingress.yaml
- proxmox.ingress.yaml


@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: placeholder


@@ -1,8 +1,7 @@
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: omv-s3-ingressroute
namespace: external
spec:
entryPoints:
- websecure
@@ -20,7 +19,6 @@ apiVersion: v1
kind: Endpoints
metadata:
name: omv-s3
namespace: external
subsets:
- addresses:
- ip: 192.168.1.157
@@ -31,7 +29,6 @@ apiVersion: v1
kind: Service
metadata:
name: omv-s3
namespace: external
spec:
ports:
- port: 9000


@@ -1,4 +1,4 @@
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: omv-ingressroute


@@ -1,4 +1,4 @@
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: proxmox-ingressroute
@@ -19,7 +19,7 @@ spec:
tls:
certResolver: default-tls
---
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: proxmox-websocket


@@ -71,7 +71,7 @@ spec:
selector:
app: drone-server
---
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: drone-server-ingress


@@ -1,4 +1,4 @@
apiVersion: traefik.containo.us/v1alpha1
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: gitea-ingress


@@ -10,6 +10,6 @@ namespace: metallb-system
helmCharts:
- name: metallb
repo: https://metallb.github.io/metallb
version: 0.14.3
version: 0.14.4
releaseName: metallb
valuesFile: values.yaml


@@ -0,0 +1,20 @@
## How to use
This deployment exposes a `StorageClass` named `nfs-client` that can be used to create `PersistentVolumeClaim` resources:
```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim
namespace: test-namespace
spec:
storageClassName: nfs-client
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Mi
```
This will create a new folder on the NFS server under `<base-path>/test-namespace/test-claim` and mount it.
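As a minimal sketch of how such a claim is consumed (the pod below is illustrative, not part of this deployment):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: test-namespace
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data   # backed by <base-path>/test-namespace/test-claim on the NFS server
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-claim
```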


@@ -3,8 +3,6 @@ kind: Kustomization
namespace: nfs-provisioner
bases:
resources:
- github.com/kubernetes-sigs/nfs-subdir-external-provisioner//deploy
- namespace.yaml


@@ -18,4 +18,4 @@ spec:
- name: nfs-client-root
nfs:
server: 192.168.1.157
path: /export/kluster/
path: /export/kluster/


@@ -3,4 +3,5 @@ kind: StorageClass
metadata:
name: nfs-client
parameters:
onDelete: "retain"
archiveOnDelete: "true"
pathPattern: "${.PVC.namespace}/${.PVC.name}"


@@ -9,6 +9,6 @@ namespace: pg-ha
helmCharts:
- name: cloudnative-pg
releaseName: pg-controller
version: 0.19.1
version: 0.20.2
valuesFile: values.yaml
repo: https://cloudnative-pg.io/charts/


@@ -0,0 +1,20 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: prometheus
resources:
- namespace.yaml
# prometheus-operator crds
- https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.70.0/bundle.yaml
- prometheus.yaml
- thanos-objstore-config.sealedsecret.yaml
# thanos deployment from kube-thanos project
- thanos-store.statefulset.yaml
- thanos-query.deployment.yaml
images:
- name: thanos
newName: quay.io/thanos/thanos
newTag: v0.34.1


@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: placeholder


@@ -46,6 +46,8 @@ kind: Prometheus
metadata:
name: prometheus
spec:
securityContext:
runAsUser: 65534 # same as the thanos sidecar
resources:
requests:
memory: 400Mi


@@ -0,0 +1,16 @@
---
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
creationTimestamp: null
name: thanos-objstore-config
namespace: prometheus
spec:
encryptedData:
thanos.yaml: AgAvZbqz5bYVzhbpfYXKoVHt+wKKuOI2C8pBCkBxuBcORoEf2f6eGVYS+aaojipQk8LaYP2yw/ffRmucKuAH0mVmZllRkVoZTOP7L1ciDmrOpbfrfPtoMJagFY90HKd1FZcI45C64MUYvqHCvF40Jk6rEii9aSmOPj9WyGqh5GrqCO24tt1Prk3o+ZS6Z2tlPRF1PdBpyZcQYYMnSuTgA4aCVuXQF/H5rR1luP85jzJAESp1VEV+xEOUY98ZLaA071fySndFZEn6YflfpixqNWFkloYENMfGL+x9jc/zbvbLyG4Q4FG5etz331GQe29OW7uIibXXGwdKO+a6Wd7a6I9ygA5+V8UdeT9mvB2AZRV6aY3aHifOgTTDDg/9YkXfUnmEwowhzsx7mtumxHOmHssFsWmRCX04wHz596/8/RyqmQrShCdGjtVkhVcEuJPPacC6awj9AZOQcW86uatZgnvnNJPlHjgSYFGxnHS89AWRgY68IF6h4Xhs8CpIcVoujK8Oi05vL3Ypi3g/r0iJ65tbYVo3LfyTnSk4pUgERJHGCPpBmiSo4DG+K9pA+cYyqjgZZGXaGKzLNCUDrBEWg5nY5gNMCyjTEeoQUurz2uPLD07Jme9KUd0a1TpoEsUOvfyyiM1DKW30XTKvIfm5m3voBatV+3oX1XfqyZEqkS7+VtWzqRqYAr0y5MEnBTpIMXQBa8M5SlVvcpaovKk1A6ZzNZExfF77o71v650axcLT3U+sxV8IIRS7y+BDRJGYiXaX0EFOIlIe+YDGFbf2k1kUi1kbhHBQEPCQkTgt79yTLEyXNAGrpNUsNYivWIQ+8GPM6+wDY9NjS2QLFKBt5sPe0VjqxzCeSoUY6rDgmR8rEtWpqOHWfuD5IuyCerzxmh0kMr9vVKjEjfJoSjES2MQRM5wOdVLHJSc=
template:
metadata:
creationTimestamp: null
name: thanos-objstore-config
namespace: prometheus
type: Opaque


@@ -0,0 +1,52 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: thanos-querier
labels:
app: thanos-querier
spec:
replicas: 1
selector:
matchLabels:
app: thanos-querier
template:
metadata:
labels:
app: thanos-querier
spec:
containers:
- name: thanos
image: thanos
args:
- query
- --log.level=debug
- --query.replica-label=replica
- --endpoint=dnssrv+_grpc._tcp.thanos-store:10901
ports:
- name: http
containerPort: 10902
- name: grpc
containerPort: 10901
livenessProbe:
httpGet:
port: http
path: /-/healthy
readinessProbe:
httpGet:
port: http
path: /-/ready
---
apiVersion: v1
kind: Service
metadata:
labels:
app: thanos-querier
name: thanos-querier
spec:
ports:
- port: 9090
protocol: TCP
targetPort: http
name: http
selector:
app: thanos-querier


@@ -0,0 +1,73 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: thanos-store-gateway
labels:
app: thanos-store-gateway
spec:
replicas: 1
selector:
matchLabels:
app: thanos-store-gateway
serviceName: thanos-store-gateway
template:
metadata:
labels:
app: thanos-store-gateway
thanos-store-api: "true"
spec:
containers:
- name: thanos
image: thanos
args:
- "store"
- "--log.level=debug"
- "--data-dir=/data"
- "--grpc-address=0.0.0.0:10901"
- "--http-address=0.0.0.0:10902"
- "--objstore.config-file=/etc/secret/thanos.yaml"
- "--index-cache-size=500MB"
- "--chunk-pool-size=500MB"
ports:
- name: http
containerPort: 10902
- name: grpc
containerPort: 10901
livenessProbe:
httpGet:
port: 10902
path: /-/healthy
readinessProbe:
httpGet:
port: 10902
path: /-/ready
volumeMounts:
- name: thanos-objstore-config
mountPath: /etc/secret
readOnly: true
- name: thanos-data
mountPath: /data
volumes:
- name: thanos-objstore-config
secret:
secretName: thanos-objstore-config
- name: thanos-data
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/name: thanos-store
name: thanos-store
spec:
clusterIP: None
ports:
- name: grpc
port: 10901
targetPort: 10901
- name: http
port: 10902
targetPort: 10902
selector:
app: thanos-store-gateway


@@ -3,7 +3,7 @@ kind: CronJob
metadata:
name: renovate
spec:
schedule: '@hourly'
schedule: '0,30 * * * *'
concurrencyPolicy: Forbid
jobTemplate:
spec:


@@ -0,0 +1,9 @@
### Restoring sealed secrets
```bash
# install the sealed secrets controller
kubectl kustomize . | kubectl apply -f -
# restore the private key that decrypts the existing sealed secrets
kubectl apply -f main.key
# restart the controller pod so it picks up the restored key
kubectl delete pod -n kube-system -l name=sealed-secrets-controller
```
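Sealing a new secret works the other way around; a sketch using the upstream `kubeseal` CLI (secret name, namespace, and literal value are placeholders; the controller itself runs in `kube-system`, matching the commands above):
```bash
# render a plain Secret locally, encrypt it against the cluster's public key,
# and commit only the resulting SealedSecret manifest
kubectl create secret generic mysecret -n myns \
  --from-literal=password=changeme --dry-run=client -o yaml \
  | kubeseal --controller-namespace kube-system --format yaml \
  > mysecret.sealedsecret.yaml
kubectl apply -f mysecret.sealedsecret.yaml
```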


@@ -6,7 +6,6 @@ metadata:
labels:
name: sealed-secrets-service-proxier
name: sealed-secrets-service-proxier
namespace: kube-system
rules:
- apiGroups:
- ""
@@ -35,7 +34,6 @@ metadata:
labels:
name: sealed-secrets-controller
name: sealed-secrets-controller
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
@@ -43,7 +41,6 @@ roleRef:
subjects:
- kind: ServiceAccount
name: sealed-secrets-controller
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
@@ -52,7 +49,6 @@ metadata:
labels:
name: sealed-secrets-key-admin
name: sealed-secrets-key-admin
namespace: kube-system
rules:
- apiGroups:
- ""
@@ -116,7 +112,6 @@ metadata:
labels:
name: sealed-secrets-service-proxier
name: sealed-secrets-service-proxier
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
@@ -133,7 +128,6 @@ metadata:
labels:
name: sealed-secrets-controller
name: sealed-secrets-controller
namespace: kube-system
spec:
minReadySeconds: 30
replicas: 1
@@ -157,7 +151,7 @@ spec:
command:
- controller
env: []
image: docker.io/bitnami/sealed-secrets-controller:v0.23.1
image: controller
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
@@ -342,7 +336,6 @@ metadata:
labels:
name: sealed-secrets-controller
name: sealed-secrets-controller
namespace: kube-system
spec:
ports:
- port: 8080
@@ -365,7 +358,6 @@ roleRef:
subjects:
- kind: ServiceAccount
name: sealed-secrets-controller
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
@@ -374,4 +366,3 @@ metadata:
labels:
name: sealed-secrets-controller
name: sealed-secrets-controller
namespace: kube-system


@@ -0,0 +1,12 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kube-system
resources:
- controller.yaml
images:
- name: controller
newName: docker.io/bitnami/sealed-secrets-controller
newTag: 0.26.1


@@ -5,6 +5,8 @@ resources:
- pvc.yaml
- configmap.yaml
- servicemonitor.yaml
- https://raw.githubusercontent.com/traefik/traefik/v2.11/docs/content/reference/dynamic-configuration/kubernetes-crd-definition-v1.yml
- https://raw.githubusercontent.com/traefik/traefik/v2.11/docs/content/reference/dynamic-configuration/kubernetes-crd-rbac.yml
namespace: traefik-system
@@ -13,4 +15,4 @@ helmCharts:
releaseName: traefik
version: 26.1.0
valuesFile: values.yaml
repo: https://helm.traefik.io/traefik
repo: https://traefik.github.io/charts


@@ -1,14 +1,14 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: restic-commons-application
name: backup-application
namespace: argocd
spec:
project: infrastructure
source:
repoURL: ssh://git@git.kluster.moll.re:2222/remoll/k3s-infra.git
targetRevision: main
path: infrastructure/backup/common
path: infrastructure/backup/
destination:
server: https://kubernetes.default.svc
namespace: backup


@@ -1,18 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: restic-backup-application
namespace: argocd
spec:
project: infrastructure
source:
repoURL: ssh://git@git.kluster.moll.re:2222/remoll/k3s-infra.git
targetRevision: main
path: infrastructure/backup/overlays/backup
destination:
server: https://kubernetes.default.svc
namespace: backup
syncPolicy:
automated:
prune: true
selfHeal: true


@@ -1,7 +1,4 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- common.application.yaml
- backup.application.yaml
- prune.application.yaml
- postgres.backup.application.yaml
- application.yaml


@@ -1,18 +1,19 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: postgres-backup-application
name: external-application
namespace: argocd
spec:
project: infrastructure
source:
repoURL: ssh://git@git.kluster.moll.re:2222/remoll/k3s-infra.git
targetRevision: main
path: infrastructure/backup/postgres
path: infrastructure/external
destination:
server: https://kubernetes.default.svc
namespace: backup
namespace: external
syncPolicy:
automated:
prune: true
selfHeal: true
selfHeal: true


@@ -1,4 +1,4 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- application.yaml
- application.yaml


@@ -1,18 +1,19 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: restic-prune-application
name: files-application
namespace: argocd
spec:
project: infrastructure
project: apps
source:
repoURL: ssh://git@git.kluster.moll.re:2222/remoll/k3s-infra.git
targetRevision: main
path: infrastructure/backup/overlays/prune
path: apps/files
destination:
server: https://kubernetes.default.svc
namespace: backup
namespace: files
syncPolicy:
automated:
prune: true
selfHeal: true
selfHeal: true

Some files were not shown because too many files have changed in this diff.