Rancher / K8s Persistent Storage: Ceph RBD Setup and Configuration

Keywords: Linux Ceph osd yum RPM

1. Configure hosts and install ntp (optional)
2. Configure passwordless SSH
3. Configure the Ceph yum repository:

vim /etc/yum.repos.d/ceph.repo

[ceph]
name=ceph
baseurl=http://mirrors.cloud.tencent.com/ceph/rpm-luminous/el7/x86_64/
gpgcheck=0
priority=1

[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.cloud.tencent.com/ceph/rpm-luminous/el7/noarch/
gpgcheck=0
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.cloud.tencent.com/ceph/rpm-luminous/el7/SRPMS
enabled=0   
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.cloud.tencent.com/ceph/keys/release.asc
priority=1
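After saving the repo file, it can be worth refreshing the yum metadata and checking that the new repositories show up (a quick sanity check):

yum clean all
yum makecache
yum repolist | grep -i ceph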

4. Install ceph-deploy

yum update
yum install ceph-deploy

5. Installation

During installation, if an error occurs, you can use the following commands to clear the configuration:

ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys

The following command also removes the Ceph packages themselves:

ceph-deploy purge {ceph-node} [{ceph-node}]

mkdir -p /root/cluster
cd /root/cluster/
ceph-deploy new yj-ceph1

If an error occurs:

Traceback (most recent call last):
  File "/usr/bin/ceph-deploy", line 18, in <module>
    from ceph_deploy.cli import main
  File "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 1, in <module>
    import pkg_resources
ImportError: No module named pkg_resources

Install python-setuptools to fix it:

yum install python-setuptools

Change the default number of replicas in the Ceph configuration file from 3 to 2, so that the cluster can reach the active + clean state with only two OSDs:

vim ceph.conf 

[global]
fsid = 8764fad7-a8f0-4812-b4db-f1a65af66e4a
mon_initial_members = ceph1,ceph2
mon_host = 192.168.10.211,192.168.10.212
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
mon clock drift allowed = 5
mon clock drift warn backoff = 30

ceph-deploy install yj-ceph1 yj-ceph2
ceph-deploy mon create-initial
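Before creating the OSDs, you can optionally list the disks on each node and wipe the target device if it already contains partitions or data (adjust /dev/vdb to your environment):

ceph-deploy disk list yj-ceph1
ceph-deploy disk zap yj-ceph1 /dev/vdb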

ceph-deploy osd create --data /dev/vdb yj-ceph1
ceph-deploy osd create --data /dev/vdb yj-ceph2

Use ceph-deploy to copy the configuration file and admin keyring to the admin node and the Ceph nodes, so that you do not need to specify the monitor address and ceph.client.admin.keyring every time you run a Ceph command:

ceph-deploy admin yj-ceph1 yj-ceph2

ceph osd tree
ceph-deploy mgr create yj-ceph1

ceph health
ceph -s

A Ceph cluster can have multiple pools; each pool is a logically isolated unit. Different pools can use completely different data-handling settings, such as replica size (number of copies), placement groups, CRUSH rules, snapshots, owner, and so on.
Usually, before creating a pool you need to override the default pg_num. The official recommendation is (see the quick calculation after the settings below):

If there are fewer than 5 OSDs, set pg_num to 128.
With 5-10 OSDs, set pg_num to 512.
With 10-50 OSDs, set pg_num to 4096.
With more than 50 OSDs, calculate the value with pgcalc.

osd pool default pg num = 128
osd pool default pgp num = 128
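For reference, the rule of thumb behind these numbers (and behind pgcalc) is roughly: total PGs ≈ (OSD count × 100) / replica size, rounded up to the next power of two. For this two-OSD cluster with a pool size of 2 that gives 2 × 100 / 2 = 100, which rounds up to 128 and matches the values above.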

ceph osd pool create k8s-pool 128 128
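On Luminous it is also worth tagging the new pool for RBD use and double-checking its PG count, otherwise ceph health may warn about a pool with no application enabled:

ceph osd pool application enable k8s-pool rbd
ceph osd pool get k8s-pool pg_num
ceph osd pool ls detail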

The administrator key needs to be stored in Kubernetes as a Secret, preferably in the default namespace:

ceph auth get-key client.admin | base64

Replace the key value below with the resulting string:

vim ceph-secret-admin.yaml

apiVersion: v1
kind: Secret
metadata:
   name: ceph-secret-admin
type: "kubernetes.io/rbd"
data:
   key: QVFBTHhxxxxxxxxxxFpRQmltbnBDelRkVmc9PQ==

kubectl apply -f ceph-secret-admin.yaml
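The StorageClass used later in the post is not shown; a minimal sketch of what it could look like with the in-tree kubernetes.io/rbd provisioner is below. The monitor addresses, pool and admin secret come from the configuration above, while the names ceph-rbd, ceph-rbd-sc.yaml, test-pvc.yaml and the reuse of the admin secret as the user secret are assumptions for illustration:

vim ceph-rbd-sc.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd                        # assumed name
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.10.211:6789,192.168.10.212:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: default
  pool: k8s-pool
  userId: admin                         # reusing the admin user for simplicity
  userSecretName: ceph-secret-admin     # looked up in the PVC's namespace
  fsType: ext4
  imageFormat: "2"
  imageFeatures: layering

kubectl apply -f ceph-rbd-sc.yaml

vim test-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: ceph-rbd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

kubectl apply -f test-pvc.yaml
kubectl get pvc test-pvc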

rancher error:

MountVolume.SetUp failed for volume "pvc-a2754739-cf6f-11e7-a7a5-02e985942c89" :
rbd: map failed exit status 2
2017-11-22 12:35:53.503224 7f0753c66100 -1 did not load config file, using default settings.
libkmod: ERROR ../libkmod/libkmod.c:586 kmod_search_moddep: could not open moddep file '/lib/modules/4.9.45-rancher/modules.dep.bin'
modinfo: ERROR: Module alias rbd not found.
modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.9.45-rancher/modules.dep.bin'
modprobe: FATAL: Module rbd not found in directory /lib/modules/4.9.45-rancher
rbd: failed to load rbd kernel module (1)
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (2) No such file or directory

Any new node needs the Ceph client installed and the Ceph configuration already in place:

yum install ceph-common

Configure users: copy ceph.client.admin.keyring, ceph.client.kube.keyring, ceph.client.test.keyring and ceph.conf to /etc/ceph/ on the node.
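One possible way to do that from one of the Ceph nodes (the hostname k8s-node1 is only a placeholder):

scp /etc/ceph/ceph.conf \
    /etc/ceph/ceph.client.admin.keyring \
    /etc/ceph/ceph.client.kube.keyring \
    /etc/ceph/ceph.client.test.keyring \
    root@k8s-node1:/etc/ceph/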

Because the kubelet container cannot access /lib/modules, you need to add the following to the RKE configuration:

services:
  etcd:
    backup_config:
      enabled: true
      interval_hours: 6
      retention: 60
  kubelet:
    extra_binds:
      - "/lib/modules:/lib/modules"

Then use:

rke up --config rancher-cluster.yml
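After rke up finishes, you can check on each node that the rbd kernel module loads and that the bind mount is visible from the kubelet container (kubelet is the container name RKE normally creates; adjust if yours differs):

modprobe rbd
lsmod | grep rbd
docker exec kubelet ls /lib/modules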

As for using Ceph from Rancher: the StorageClass was created, then a Deployment using a PVC was created, but it kept failing with an error to the effect of "ceph map failed".

It finally turned out that if you manually map an RBD image once on each node, the error does not come back:

rbd create foo --size 1024 --image-feature=layering -p test
rbd map foo -p test
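Once the module is confirmed to load, the throwaway image can be unmapped and removed again (assuming nothing else is using it):

rbd unmap foo -p test
rbd rm foo -p test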

Ceph RBD expansion:

Find the ID of the image that needs to be expanded and resize it on the Ceph side:

rbd resize --size 2048 kubernetes-dynamic-pvc-572a74e9-db6a-11e9-9b3a-525400e65297 -p test

Then modify the PV configuration to the matching size and restart the corresponding container.
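To see which RBD image backs a given PV and to record the new size on the Kubernetes side, something along these lines can be used (<pv-name> is a placeholder):

kubectl get pv <pv-name> -o jsonpath='{.spec.rbd.image}{"\n"}'
kubectl patch pv <pv-name> -p '{"spec":{"capacity":{"storage":"2Gi"}}}'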
