Welcome to the world of Ceph
3. Configuring Ceph
Configuring ceph-mon
On the admin node, run the following commands to configure the initial monitor(s) and gather all keys:
# cd /root/my-cluster
# ceph-deploy mon create-initial
# ls
ceph1.client.admin.keyring  ceph1.bootstrap-osd.keyring  ceph1.bootstrap-mds.keyring  ceph1.bootstrap-rgw.keyring
Configuring OSDs
Take the physical disks /dev/sda, /dev/sdb, /dev/sdc plus the SSD nvme0n1 as an example. To add 9 OSDs in total, run the disk-prepare step on the admin node with the following commands:
# ceph-deploy osd prepare ceph1:/dev/sda:/dev/nvme0n1p1 ceph1:/dev/sdb:/dev/nvme0n1p2 ceph1:/dev/sdc:/dev/nvme0n1p3
# ceph-deploy osd prepare ceph2:/dev/sda:/dev/nvme0n1p1 ceph2:/dev/sdb:/dev/nvme0n1p2 ceph2:/dev/sdc:/dev/nvme0n1p3
# ceph-deploy osd prepare ceph3:/dev/sda:/dev/nvme0n1p1 ceph3:/dev/sdb:/dev/nvme0n1p2 ceph3:/dev/sdc:/dev/nvme0n1p3
(Note: nvme0n1 is an SSD under /dev; nvme0n1p1, nvme0n1p2 and nvme0n1p3 are three partitions on it, used here as the OSD journals.)
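The three prepare invocations above differ only in the hostname. A small sketch (hostnames and device paths copied from the example above; adjust for your own cluster) can generate the host:data-disk:journal-partition triplets:

```python
# Build the "host:data:journal" triplets that `ceph-deploy osd prepare` takes.
# Device names mirror the example above: three HDDs journaled onto three
# partitions of one NVMe SSD.

DATA_DISKS = ["/dev/sda", "/dev/sdb", "/dev/sdc"]
JOURNALS = ["/dev/nvme0n1p1", "/dev/nvme0n1p2", "/dev/nvme0n1p3"]

def prepare_command(host):
    """Return the full ceph-deploy prepare command line for one host."""
    triplets = [
        f"{host}:{disk}:{journal}"
        for disk, journal in zip(DATA_DISKS, JOURNALS)
    ]
    return "ceph-deploy osd prepare " + " ".join(triplets)

for host in ["ceph1", "ceph2", "ceph3"]:
    print(prepare_command(host))
```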
Give the ceph user ownership of the devices on every node:
# chown ceph:ceph /dev/sda /dev/sdb /dev/sdc /dev/nvme0n1
# ssh ceph2
# chown ceph:ceph /dev/sda /dev/sdb /dev/sdc /dev/nvme0n1
# ssh ceph3
# chown ceph:ceph /dev/sda /dev/sdb /dev/sdc /dev/nvme0n1
Activating the OSDs
On the admin host ceph1, run:
# ceph-deploy osd activate ceph1:/dev/sda:/dev/nvme0n1p1 ceph1:/dev/sdb:/dev/nvme0n1p2 ceph1:/dev/sdc:/dev/nvme0n1p3
# ceph-deploy osd activate ceph2:/dev/sda:/dev/nvme0n1p1 ceph2:/dev/sdb:/dev/nvme0n1p2 ceph2:/dev/sdc:/dev/nvme0n1p3
# ceph-deploy osd activate ceph3:/dev/sda:/dev/nvme0n1p1 ceph3:/dev/sdb:/dev/nvme0n1p2 ceph3:/dev/sdc:/dev/nvme0n1p3
Once all three machines are done, run ceph -s and you should see that the 9 OSDs have joined the cluster.
Setting pg_num and pgp_num for storage pools
In /etc/ceph/ceph.conf, set the default values of pg_num and pgp_num as follows:
[global]
osd_pool_default_pg_num = 64
osd_pool_default_pgp_num = 64
(Note: the pg_num and pgp_num values can be tuned for different workloads; see the RGW object gateway configuration for details.)
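64 is only a starting point. The upstream Ceph documentation gives a rule of thumb of roughly (OSD count × 100) / replica count placement groups per pool, rounded up to a power of two; a minimal sketch of that calculation, not an official sizing tool:

```python
# Rule-of-thumb PG sizing from the Ceph docs: target about 100 PGs per OSD,
# divided by the pool's replica count, rounded up to the next power of two.

def suggest_pg_num(num_osds, pool_size=3, pgs_per_osd=100):
    """Suggest a pg_num for one pool on a cluster of `num_osds` OSDs."""
    target = num_osds * pgs_per_osd / pool_size
    power = 1
    while power < target:  # round up to the next power of two
        power *= 2
    return power

# The example cluster above has 9 OSDs with 3x replication.
print(suggest_pg_num(9))
```

For a small cluster, the smaller default of 64 used above keeps per-OSD PG counts low; revisit the value before creating large pools, since pg_num is hard to shrink later.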
Ceph network configuration
Ceph's network setup uses two physically separate networks: the Public Network (the client-facing network) and the Cluster Network (the network between nodes, carrying replication and recovery traffic). In /etc/ceph/ceph.conf, append the network settings as follows:
[global]
public network = 10.0.0.0/24
cluster network = 192.168.0.0/24
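A quick sanity check on the two CIDRs with Python's stdlib ipaddress module (the monitor address below is hypothetical, chosen only to illustrate that monitors must live on the public network):

```python
# Sanity-check the ceph.conf networks: both must parse as valid CIDRs and
# must not overlap, since they are meant to be physically separate.
import ipaddress

public = ipaddress.ip_network("10.0.0.0/24")
cluster = ipaddress.ip_network("192.168.0.0/24")

assert not public.overlaps(cluster), "public and cluster networks must be separate"

mon_ip = ipaddress.ip_address("10.0.0.11")  # hypothetical monitor address
print(mon_ip in public)  # monitors listen on the public network
```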