Introduction and Setup of the Ceph (CephFS) File System

Keywords: Linux, Ceph, OSD

1: Pre-installation introduction

A Ceph (CephFS) file system requires at least two RADOS pools, one for data and one for metadata. When configuring these pools, consider the following three points:

  • Use a higher replication level for the metadata pool, because any data loss in this pool can render the entire file system inaccessible
  • Use lower-latency storage (such as SSDs) for the metadata pool, because metadata latency directly affects the latency of file system operations observed on clients
  • The data pool used to create the file system is the "default" data pool and is where all inode backtrace information is stored for hard link management and disaster recovery. Every inode created in CephFS therefore has at least one object in the default data pool. If erasure-coded pools are planned for the file system, it is usually better to use a replicated pool as the default data pool, so that the small-object reads and writes needed to update backtraces perform well. An additional erasure-coded data pool can then be attached and used for whole hierarchies of directories and files, as sketched below
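The sketch below shows roughly what these recommendations can look like as commands. The pool name cephfs_data_ec, the PG count of 32, and the /mnt/cephfs/archive directory are illustrative assumptions, and the last two commands assume the file system and mount created later in this article:

# keep three copies of metadata, since losing it can take the whole file system down
ceph osd pool set cephfs_metadata size 3
# create an erasure-coded pool and allow the partial overwrites CephFS needs
ceph osd pool create cephfs_data_ec 32 32 erasure
ceph osd pool set cephfs_data_ec allow_ec_overwrites true
# attach it as an additional data pool of the file system
ceph fs add_data_pool cephfs cephfs_data_ec
# point a directory tree at the erasure-coded pool through its file layout
setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/archive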

2: Installation and configuration steps

Create the two pools for the file system with default settings, then create an MDS. The 2 below is pg_num; no pgp_num is given, so it defaults to the same value. See my earlier article for an explanation of PG and PGP: https://blog.51cto.com/11093860/2456570

[root@ceph-node1 ~]# ceph osd pool create cephfs_data 2
pool 'cephfs_data' created
[root@ceph-node1 ~]# ceph osd pool create cephfs_metadata 2
pool 'cephfs_metadata' created
[root@ceph-node1 ~]# ceph-deploy mds create ceph-node2
[root@ceph-node1 ~]# ceph mds stat
cephfs-1/1/1 up {0=ceph-node2=up:active}
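If you want to confirm that pgp_num defaulted to the same value as pg_num, you can query the pools (an optional check, not part of the original steps):

ceph osd pool get cephfs_data pg_num
ceph osd pool get cephfs_data pgp_num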

Once the pools are created, you can create the CephFS file system

[root@ceph-node1 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 41 and data pool 40
[root@ceph-node1 ~]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
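Optionally, you can also look at the overall file system status at this point; the exact output varies between Ceph releases:

ceph fs status cephfs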

Client mount (kernel driver)

Create mount directory

[root@ceph-client /]# mkdir -p /mnt/cephfs

On ceph-node2, create user client.cephfs

[root@ceph-node2 ~]# ceph auth get-or-create client.cephfs mon 'allow r' mds 'allow r, allow rw path=/' osd 'allow rw pool=cephfs_data'
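To double-check the capabilities that were granted, you can dump the new user (optional):

ceph auth get client.cephfs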

On ceph-node2, get the key for the client.cephfs user

[root@ceph-node2 ~]# ceph auth get-key client.cephfs
AQCL2d1dj4OgFRAAFeloClm23YTBsPn1qQnfTA==

Save the key obtained from the previous command on the Ceph client

[root@ceph-client ~]# echo AQCL2d1dj4OgFRAAFeloClm23YTBsPn1qQnfTA== > /etc/ceph/cephfskey
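Because this file holds a secret, it is worth tightening its permissions (not in the original steps, but a reasonable precaution):

chmod 600 /etc/ceph/cephfskey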

Mount this file system

[root@ceph-client ~]# mount -t ceph ceph-node2:6789:/ /mnt/cephfs -o name=cephfs,secretfile=/etc/ceph/cephfskey

Add an entry to /etc/fstab so the mount persists across reboots

[root@ceph-client ~]# echo "ceph-node2:6789:/ /mnt/cephfs ceph name=cephfs,secretfile=/etc/ceph/cephfskey,_netdev,noatime 0 0" >> /etc/fstab
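To confirm the fstab entry works without waiting for a reboot, you can unmount the file system and remount everything from fstab (a quick sanity check):

umount /mnt/cephfs
mount -a
df -Th | grep cephfs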

View mount status

[root@ceph-client ~]# df -Th | grep cephfs
172.16.4.79:6789:/ ceph 46G 0 46G 0% /mnt/cephfs
