1. Environment preparation
1.1. Modify the host name
On 172.31.21.135, run: hostnamectl set-hostname ceph1
On 172.31.21.185, run: hostnamectl set-hostname ceph2
On 172.31.21.167, run: hostnamectl set-hostname ceph3
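To confirm the change took effect, you can check the result on each node (an optional verification step, not part of the original instructions):
hostnamectl status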
1.2. Configure the Aliyun repositories
rm -f /etc/yum.repos.d/*
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
sed -i '/aliyuncs.com/d' /etc/yum.repos.d/*.repo
echo '#Aliyun ceph source
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch/
gpgcheck=0
[ceph-source]
name=ceph-source
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS/
gpgcheck=0
#' > /etc/yum.repos.d/ceph.repo
yum clean all && yum makecache
yum install deltarpm -y
yum -y install python2-pip
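As an optional sanity check, confirm that yum now sees the new repositories:
yum repolist | grep -Ei 'ceph|epel|base'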
1.3. Configure time synchronization
yum install ntp ntpdate ntp-doc -y
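Installing the packages does not start the service by itself. A minimal sketch for an initial sync and enabling ntpd on boot (the NTP server below is an assumption; substitute one reachable from your network):
ntpdate -u pool.ntp.org
systemctl enable ntpd
systemctl start ntpd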
1.4. Format the data disk
mkfs.xfs /dev/vdb -f
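Optionally, verify the device and its new filesystem before moving on:
lsblk /dev/vdb
blkid /dev/vdb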
2. Deployment node preparation
2.1. Modify the hosts file of the primary node
echo '172.31.21.135 ceph1' >> /etc/hosts
echo '172.31.21.185 ceph2' >> /etc/hosts
echo '172.31.21.167 ceph3' >> /etc/hosts
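If the other nodes should also resolve each other by name, the same hosts file can be copied to them (a sketch assuming root SSH access; you will be prompted for passwords until section 2.2 is done):
scp /etc/hosts root@ceph2:/etc/hosts
scp /etc/hosts root@ceph3:/etc/hosts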
2.2. Passwordless SSH login setup
Generating public and private keys
ssh-keygen -t rsa
Import public key to authentication file
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/id_rsa.pub root@ceph1:~/.ssh/id_rsa.pub
scp ~/.ssh/id_rsa.pub root@ceph2:~/.ssh/id_rsa.pub
scp ~/.ssh/id_rsa.pub root@ceph3:~/.ssh/id_rsa.pub
Then, on each node that received the key, append it to the authentication file:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Change permissions
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
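As an alternative to the manual scp/cat/chmod steps above, the standard ssh-copy-id utility appends the key and sets the permissions on each remote node in one step:
ssh-copy-id root@ceph1
ssh-copy-id root@ceph2
ssh-copy-id root@ceph3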
2.3. Install ceph-deploy and ceph on the main node
yum install -y ceph
yum install -y ceph-deploy
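Optionally confirm that both tools are installed and that ceph is on the expected Luminous release:
ceph --version
ceph-deploy --version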
2.4. Install ceph on the slave nodes
yum install -y ceph
2.5. Create the monitor nodes
mkdir -p /etc/ceph
cd /etc/ceph
ceph-deploy new ceph1 ceph2 ceph3
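ceph-deploy new writes an initial ceph.conf into this directory. If the monitors should bind only to the 172.31.21.x network, a public_network entry can be appended here; the /24 mask below is an assumption, adjust it to your actual subnet:
echo 'public_network = 172.31.21.0/24' >> /etc/ceph/ceph.conf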
2.6. Install ceph with ceph-deploy
ceph-deploy install ceph1 ceph2 ceph3
2.7. Initialize the monitor nodes
ceph-deploy mon create-initial
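Optionally verify that all three monitors have formed a quorum before continuing:
ceph mon stat
ceph quorum_status --format json-pretty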
2.8. Deploy the MGR (manager) node
ceph-deploy mgr create ceph1
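Only one manager is created here. For failover, standby managers can optionally be added on the other nodes as well (not part of the original steps):
ceph-deploy mgr create ceph2 ceph3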
2.9. Distribute the configuration files
ceph-deploy admin ceph1 ceph2 ceph3
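This pushes ceph.conf and the admin keyring to every node. To run ceph commands there as a non-root user, the keyring usually needs to be made readable (optional; run on each node):
chmod +r /etc/ceph/ceph.client.admin.keyring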
2.10. Add and activate the OSDs
List all disks:
ceph-deploy disk list ceph1 ceph2 ceph3
Create the OSDs on the data disks:
ceph-deploy osd create --data /dev/vdb ceph2
ceph-deploy osd create --data /dev/vdb ceph3
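Optionally check that the new OSDs are up and placed under the expected hosts:
ceph osd tree
ceph osd df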
2.11. Viewing Cluster Status
ceph -s

[root@host-172-31-21-135 ~]# ceph -s
  cluster:
    id:     bd2f46ff-afc8-4d91-8070-a16246f6266c
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph3,ceph2
    mgr: ceph1(active)
    osd: 2 osds: 2 up, 2 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   2.00GiB used, 98.0GiB / 100GiB avail
    pgs:
2.12. Enable the dashboard module
ceph mgr module enable dashboard
Open http://172.31.21.135:7000/ in a browser to view the dashboard.
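If the dashboard URL is in doubt, the address the active mgr is actually serving it on can be read back (Luminous listens on port 7000 by default):
ceph mgr services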