1. A disk fails.
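
    A failed disk usually shows up as a down OSD and degraded health. A quick way to spot it (run from wherever you normally run `ceph`, e.g. the Rook toolbox pod):

    ```sh
    # cluster health drops to HEALTH_WARN/HEALTH_ERR when a disk dies
    ceph -s
    # the dead OSD is reported as "down" in the tree
    ceph osd tree | grep down
    ```
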
2. Remove the failed disk from the node.
3. Mark the OSD out: `ceph osd out osd.5`
4. Remove it from the CRUSH map: `ceph osd crush remove osd.5`
5. Delete its cephx authentication key (caps): `ceph auth del osd.5`
6. Remove the OSD from the cluster: `ceph osd rm osd.5`
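
    Steps 3–6 can be run together from the Rook toolbox; a minimal sketch, assuming the usual `rook-ceph-tools` deployment name (check your cluster):

    ```sh
    # open a shell in the toolbox pod
    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash

    # inside the toolbox, purge osd.5:
    ceph osd out osd.5           # stop placing data on it
    ceph osd crush remove osd.5  # drop it from the CRUSH map
    ceph auth del osd.5          # delete its cephx key
    ceph osd rm osd.5            # remove the OSD record itself
    ```
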
7. Delete the OSD deployment: `kubectl delete deployment -n rook-ceph rook-ceph-osd-id-5`
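
    A quick check that the deployment is really gone:

    ```sh
    # rook-ceph-osd-id-5 should no longer be listed
    kubectl -n rook-ceph get deployments | grep osd
    ```
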
8. Delete the OSD data directory on the node: `rm -rf /var/lib/rook/osd5`
9. Edit the OSD configmap (`kubectl edit configmap -n rook-ceph rook-ceph-osd-nodename-config`) and remove the config section pertaining to your OSD id and its underlying device.
10. Add the new disk and verify the node sees it.
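
    One way to verify from the node itself (device names vary):

    ```sh
    # the replacement disk should show up, typically with no partitions yet
    lsblk
    # the kernel log usually records the hot-added device
    dmesg | tail -n 20
    ```
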
11. Restart the rook-operator pod by deleting it (its deployment recreates it).
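
    A sketch of the restart, assuming the operator pod carries the usual `app=rook-ceph-operator` label:

    ```sh
    # delete the operator pod; its deployment recreates it immediately
    kubectl -n rook-ceph delete pod -l app=rook-ceph-operator
    ```
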
12. The OSD prepare pods run and provision the new disk.
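
    You can watch them run to completion (pod names are generated; `osd-prepare` is the usual substring):

    ```sh
    # prepare pods run once per node and should reach Completed
    kubectl -n rook-ceph get pods -w | grep osd-prepare
    ```
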
13. A new `rook-ceph-osd-id-5` deployment will be created.
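
    Confirm the replacement OSD came back:

    ```sh
    # the deployment and its pod should be back and Running
    kubectl -n rook-ceph get deployment rook-ceph-osd-id-5
    kubectl -n rook-ceph get pods | grep osd-id-5
    ```
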
14. Check the health of your cluster: `ceph -s; ceph osd tree`
15. Remove the node from the CRUSH map (if you are decommissioning the whole node): `ceph osd crush remove {node-name}`