# Single node cluster adjustments
!!! danger

    This is not officially supported and I don't regularly test it;
    I highly recommend using multiple nodes.

    Using a single node could lead to data loss unless your backup strategy is rock solid;
    make sure you are **ABSOLUTELY CERTAIN** this is what you want.

Make the following changes, then commit and push.
## Remove storage redundancy
Set pod counts and number of data copies to `1`:
```yaml title="system/rook-ceph/values.yaml" hl_lines="4 6 11 12 18 22 25"
rook-ceph-cluster:
  cephClusterSpec:
    mon:
      count: 1
    mgr:
      count: 1
  cephBlockPools:
    - name: standard-rwo
      spec:
        replicated:
          size: 1
          requireSafeReplicaSize: false
  cephFileSystems:
    - name: standard-rwx
      spec:
        metadataPool:
          replicated:
            size: 1
        dataPools:
          - name: data0
            replicated:
              size: 1
        metadataServer:
          activeCount: 1
          activeStandby: false
```
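After the cluster settles, you can sanity-check that the pools really run with a single replica. A sketch, assuming the Rook toolbox (the `rook-ceph-tools` deployment) is enabled in the `rook-ceph` namespace, and using the `standard-rwo` pool name from the values above:

```sh
# Ask Ceph for the replica count of the block pool; after the change
# is applied it should report a size of 1:
kubectl --namespace rook-ceph exec deploy/rook-ceph-tools -- \
    ceph osd pool get standard-rwo size

# Overall cluster health overview:
kubectl --namespace rook-ceph exec deploy/rook-ceph-tools -- ceph status
```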
## Disable automatic upgrades
Automatic upgrades will try to drain the only node, and the pods will have no place to go.
Remove them entirely:
```sh
rm -rf system/kured
```
Commit and push the change.
You can revert it later when you add more nodes.
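The later revert is a plain `git revert`; a sketch, assuming the kured removal was committed on its own:

```sh
# Find the last commit that touched system/kured (its removal)
# and revert it, restoring the kured manifests:
commit="$(git log --format=%H --max-count=1 -- system/kured)"
git revert --no-edit "$commit"
git push
```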