
Setting up client access for RBD is a simple process, but it requires coordination between the cluster and the client. On the cluster, make sure you have created any required user IDs and written out their keyring files. Once the keyring files exist and "ceph-common" is installed on the client, you can copy over ceph.conf and the keyrings.


On the Client System


  • Install the "ceph-common" package:

[root@sj19-j16-storage07 ceph]# yum install ceph-common -y

  • Ansible can be helpful here:

[root@sj19-j16-ceph1admin ansible]# ansible j16cephclients -s -m yum -a "name=ceph-common state=installed"


On the Ceph Admin Host


  • Create users and set permissions for the OpenStack services and for libvirt/qemu access (we will need the other users later, so I am creating them all now).

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'

ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'

ceph auth get-or-create client.libvirt mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd'

ceph auth get-or-create client.kernelrbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd'
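
Note: if you need to change a user's permissions later, "ceph auth caps" overwrites the existing capabilities in place. For example, to also grant the kernelrbd user access to the volumes pool (illustrative only, not run as part of this walkthrough):

ceph auth caps client.kernelrbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd, allow rwx pool=volumes'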

  • Create the keyring files:

From the ~/ceph-config directory:

[ceph@sj19-j16-ceph1admin ceph-config]$ sudo ceph auth get client.cinder -o client.cinder.keyring

[ceph@sj19-j16-ceph1admin ceph-config]$ sudo ceph auth get client.glance  -o client.glance.keyring

[ceph@sj19-j16-ceph1admin ceph-config]$ sudo ceph auth get client.cinder-backup  -o client.cinder-backup.keyring

[ceph@sj19-j16-ceph1admin ceph-config]$ sudo ceph auth get client.libvirt -o client.libvirt.keyring

[ceph@sj19-j16-ceph1admin ceph-config]$ sudo ceph auth get client.kernelrbd  -o client.kernelrbd.keyring

You can combine the two previous steps into a single command, but I separated them here to make the steps clearer.
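
For reference, the combined form would look like this for the kernelrbd user (illustrative only, not run as part of this walkthrough):

ceph auth get-or-create client.kernelrbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd' -o client.kernelrbd.keyring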

  • Check the ~/ceph-config directory:

[ceph@sj19-j16-ceph1admin ceph-config]# ls -l *.keyring

-rw-------. 1 root root  71 Jul 15 03:10 ceph.bootstrap-mds.keyring

-rw-------. 1 root root  71 Jul 15 03:10 ceph.bootstrap-osd.keyring

-rw-------. 1 root root  63 Jul 15 03:10 ceph.client.admin.keyring

-rw-------. 1 root root  73 Jul 15 02:06 ceph.mon.keyring

-rw-r--r--  1 root root 175 Jul 26 23:36 client.cinder-backup.keyring

-rw-r--r--  1 root root 210 Jul 26 23:36 client.cinder.keyring

-rw-r--r--  1 root root 167 Jul 26 23:36 client.glance.keyring

-rw-r--r--  1 root root 167 Jul 26 23:38 client.kernelrbd.keyring

-rw-r--r--  1 root root 165 Jul 26 23:37 client.libvirt.keyring

  • Verify that the user IDs are present in the cluster configuration:

[ceph@sj19-j16-ceph1admin ceph-config]$ sudo ceph auth list

<snip> All of the OSD entries will be listed here; omitted for brevity.

........

client.cinder

    key:###########################

    caps: [mon] allow r

    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images

client.cinder-backup

    key:###########################

    caps: [mon] allow r

    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=backups

client.glance

    key:###########################

    caps: [mon] allow r

    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images

client.kernelrbd

    key:###########################

    caps: [mon] allow r

    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=rbd

client.libvirt

    key:###########################

    caps: [mon] allow r

    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=rbd
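
To check a single user instead of listing everything, "ceph auth get" works as well (illustrative; run it from the admin host):

sudo ceph auth get client.kernelrbd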

  • Push out the ceph.conf file and the appropriate keyrings (you can use shorthand brace-expansion syntax here to push to multiple clients):

[ceph@sj19-j16-ceph1admin ceph-config]$ sudo ceph-deploy --overwrite-conf config push root@sj19-j16-storage0{7,8,9}

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/ceph-config/cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.25): /bin/ceph-deploy --overwrite-conf config push root@sj19-j16-storage07 root@sj19-j16-storage08 root@sj19-j16-storage09

[ceph_deploy.config][DEBUG ] Pushing config to root@sj19-j16-storage07

[root@sj19-j16-storage07][DEBUG ] connected to host: root@sj19-j16-storage07

[root@sj19-j16-storage07][DEBUG ] detect platform information from remote host

[root@sj19-j16-storage07][DEBUG ] detect machine type

[root@sj19-j16-storage07][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[ceph_deploy.config][DEBUG ] Pushing config to root@sj19-j16-storage08

[root@sj19-j16-storage08][DEBUG ] connected to host: root@sj19-j16-storage08

[root@sj19-j16-storage08][DEBUG ] detect platform information from remote host

[root@sj19-j16-storage08][DEBUG ] detect machine type

[root@sj19-j16-storage08][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[ceph_deploy.config][DEBUG ] Pushing config to root@sj19-j16-storage09

[root@sj19-j16-storage09][DEBUG ] connected to host: root@sj19-j16-storage09

[root@sj19-j16-storage09][DEBUG ] detect platform information from remote host

[root@sj19-j16-storage09][DEBUG ] detect machine type

[root@sj19-j16-storage09][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

  • You have a decision to make at this point. You can simply push out the admin keyring, which makes commands simpler but masks authentication issues and is a potential security risk, or you can be explicit and push out only the keyring for the client authorized to create RBD volumes. In this example I will use the "kernelrbd" user and keyring. The next section explains how to configure the host to use that keyring and ID.

[ceph@sj19-j16-ceph1admin ceph-config]$ for i in root@sj19-j16-storage09 root@sj19-j16-storage08 root@sj19-j16-storage07; do scp client.kernelrbd.keyring $i:/etc/ceph; done

client.kernelrbd.keyring                                                                               100%  167     0.2KB/s   00:00

client.kernelrbd.keyring                                                                               100%  167     0.2KB/s   00:00

client.kernelrbd.keyring                                                                               100%  167     0.2KB/s   00:00
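
If you had chosen the admin-keyring route instead, ceph-deploy can push ceph.conf and the admin keyring together (illustrative only; adjust the hostnames and ssh user to your environment):

ceph-deploy admin sj19-j16-storage07 sj19-j16-storage08 sj19-j16-storage09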


Setting Up cephx Authentication on the Client System


  • Verify that the appropriate keyring file is present:

[root@sj19-j16-storage07 ceph]# pwd

/etc/ceph

[root@sj19-j16-storage07 ceph]# ll

total 12

-rw-r--r-- 1 root root 1257 Jul 31 00:30 ceph.conf

-rw-r--r-- 1 root root  167 Jul 31 00:43 client.kernelrbd.keyring

-rwxr-xr-x 1 root root   92 Nov 21  2014 rbdmap

-rw------- 1 root root    0 Jul 30 20:54 tmpOx2FQe

  • By default, every Ceph command tries to authenticate as client.admin. In this case we did not copy over the admin keyring, so these commands will fail:

[root@sj19-j16-storage07 ~]# ceph -s

2015-08-02 18:09:53.984571 7fc367f45700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication

2015-08-02 18:09:53.984573 7fc367f45700  0 librados: client.admin initialization error (2) No such file or directory

Error connecting to cluster: ObjectNotFound

[root@sj19-j16-storage07 ~]# rbd list

2015-08-02 18:12:12.617189 7f7fd88ee7c0 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication

2015-08-02 18:12:12.617193 7f7fd88ee7c0  0 librados: client.admin initialization error (2) No such file or directory

rbd: couldn't connect to the cluster!

Explicitly define the keyring and user in the CLI:

[root@sj19-j16-storage07 ceph]# rbd --keyring /etc/ceph/client.kernelrbd.keyring --id kernelrbd list

nova02-client2a-2g.img

nova02-client2b-2g.img

nova02-client2c-2g.img

storage07-1

storage07-2

storage08-1

storage08-2

storage09-1

storage09-2

  • However, specifying the keyring and user on every command quickly becomes awkward. Ceph provides an easy way around this: set the CEPH_ARGS environment variable so you do not have to specify the client and its keyring file on each CLI invocation:

[root@sj19-j16-storage07 ~]# CEPH_ARGS="--keyring /etc/ceph/client.kernelrbd.keyring --id kernelrbd"

[root@sj19-j16-storage07 ~]# export CEPH_ARGS

[root@sj19-j16-storage07 ~]# echo $CEPH_ARGS

--keyring /etc/ceph/client.kernelrbd.keyring --id kernelrbd

- Try again

[root@sj19-j16-storage07 ~]# rbd list

nova02-client2a-2g.img

nova02-client2b-2g.img

nova02-client2c-2g.img

storage08-1

storage08-2

storage09-1

storage09-2
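
Keep in mind that CEPH_ARGS only lasts for the current shell session. An alternative (not used in this walkthrough) is to point the client at its keyring in /etc/ceph/ceph.conf on the client, so that only "--id kernelrbd" is needed on the command line:

[client.kernelrbd]
    keyring = /etc/ceph/client.kernelrbd.keyring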

We are now ready to securely access the Ceph cluster and start creating RBD volumes.


Bare Metal RBD Volume Operations Using the Kernel RBD Client


  • Create a few test RBD images:

[root@sj19-j16-storage07 ceph]# rbd create --size 50000 rbd/testing

[root@sj19-j16-storage07 ceph]# rbd create --size 50000 rbd/testing2

[root@sj19-j16-storage07 ceph]# rbd create --size 50000 rbd/testing3

  • Check that they exist on the cluster:

[root@sj19-j16-storage07 ceph]# rbd list

testing

testing2

testing3
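
If you want to confirm an image's size and layout before doing anything with it, "rbd info" shows the details (illustrative; output omitted since it varies by release):

rbd info rbd/testing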

- Clean up the test volumes before moving forward:

[root@sj19-j16-storage07 ceph]# rbd rm testing3

Removing image: 100% complete...done.

[root@sj19-j16-storage07 ceph]# rbd rm testing2

Removing image: 100% complete...done.

[root@sj19-j16-storage07 ceph]# rbd rm testing

Removing image: 100% complete...done.

[root@sj19-j16-storage07 ceph]# rbd list

<empty list>

  • Create RBD volumes for I/O testing:

[root@sj19-j16-storage07 ceph]# rbd create --size 102400 rbd/storage07-1

[root@sj19-j16-storage07 ceph]# rbd create --size 102400 rbd/storage07-2

[root@sj19-j16-storage07 ceph]# rbd list

storage07-1

storage07-2

  • Map and use them on the host:

[root@sj19-j16-storage07 ceph]# ls /dev/rbd*

ls: cannot access /dev/rbd*: No such file or directory

[root@sj19-j16-storage07 ceph]# rbd map rbd/storage07-1

[root@sj19-j16-storage07 ceph]# rbd map rbd/storage07-2

[root@sj19-j16-storage07 ceph]# ls /dev/rbd*

/dev/rbd0  /dev/rbd1

[root@sj19-j16-storage07 ceph]# rbd showmapped

id pool image       snap device

0  rbd  storage07-1 -    /dev/rbd0

1  rbd  storage07-2 -    /dev/rbd1

[root@sj19-j16-storage07 ceph]# mkfs.xfs /dev/rbd0

log stripe unit (4194304 bytes) is too large (maximum is 256KiB)

log stripe unit adjusted to 32KiB

meta-data=/dev/rbd0              isize=256    agcount=17, agsize=1637376 blks

         =                       sectsz=512   attr=2, projid32bit=1

         =                       crc=0        finobt=0

data     =                       bsize=4096   blocks=26214400, imaxpct=25

         =                       sunit=1024   swidth=1024 blks

naming   =version 2              bsize=4096   ascii-ci=0 ftype=0

log      =internal log           bsize=4096   blocks=12800, version=2

         =                       sectsz=512   sunit=8 blks, lazy-count=1

realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@sj19-j16-storage07 ceph]# mkfs.xfs /dev/rbd1

log stripe unit (4194304 bytes) is too large (maximum is 256KiB)

log stripe unit adjusted to 32KiB

meta-data=/dev/rbd1              isize=256    agcount=17, agsize=1637376 blks

         =                       sectsz=512   attr=2, projid32bit=1

         =                       crc=0        finobt=0

data     =                       bsize=4096   blocks=26214400, imaxpct=25

         =                       sunit=1024   swidth=1024 blks

naming   =version 2              bsize=4096   ascii-ci=0 ftype=0

log      =internal log           bsize=4096   blocks=12800, version=2

         =                       sectsz=512   sunit=8 blks, lazy-count=1

realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@sj19-j16-storage07 ceph]# mkdir -p /rbd/mount1

[root@sj19-j16-storage07 ceph]# mkdir -p /rbd/mount2

[root@sj19-j16-storage07 ceph]# mount /dev/rbd0 /rbd/mount1

[root@sj19-j16-storage07 ceph]# mount /dev/rbd1 /rbd/mount2

[root@sj19-j16-storage07 ceph]# df -kh

Filesystem                         Size  Used Avail Use% Mounted on

/dev/mapper/vg_root-lv_root         35G  1.8G   33G   5% /

devtmpfs                            63G     0   63G   0% /dev

tmpfs                               63G     0   63G   0% /dev/shm

tmpfs                               63G  9.2M   63G   1% /run

tmpfs                               63G     0   63G   0% /sys/fs/cgroup

/dev/mapper/vg_root-lv_home         13G   33M   13G   1% /home

/dev/sdaa1                         497M  205M  292M  42% /boot

sj19-j16-tools01:/export/labshare   98G  161M   98G   1% /mnt/labshare

/dev/rbd0                          100G   33M  100G   1% /rbd/mount1

/dev/rbd1                          100G   33M  100G   1% /rbd/mount2

[root@sj19-j16-storage07 ceph]# dd if=/dev/zero of=/rbd/mount1/foobar1.out bs=100M count=1

1+0 records in

1+0 records out

104857600 bytes (105 MB) copied, 0.101583 s, 1.0 GB/s

[root@sj19-j16-storage07 ceph]# ls -lh /rbd/mount1

total 100M

-rw-r--r-- 1 root root 100M Jul 31 04:15 foobar1.out
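
When you are done testing, unmount the filesystems and unmap the devices in reverse order (illustrative cleanup, run on the client):

umount /rbd/mount1
umount /rbd/mount2
rbd unmap /dev/rbd0
rbd unmap /dev/rbd1

If you want mappings to persist across reboots, the /etc/ceph/rbdmap file we saw earlier can hold them; the entry format is typically pool/image followed by the id and keyring options (an assumption about this setup, so check your distribution's rbdmap service):

rbd/storage07-1 id=kernelrbd,keyring=/etc/ceph/client.kernelrbd.keyring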
