[GH-ISSUE #312] Feature Request: Support for Ceph/rbd storage #242

Open
opened 2026-02-27 16:38:33 +03:00 by kerem · 36 comments

Originally created by @ITBlogger on GitHub (May 19, 2014).
Original GitHub issue: https://github.com/retspen/webvirtmgr/issues/312

Hi, we're currently in the process of moving to Ceph as our VM image storage. What are the chances of getting that added as a supported storage type?

Thanks,

Alex


@retspen commented on GitHub (May 19, 2014):

Hello,

WebVirtMgr doesn't support creating and managing Ceph storage pools, but you can manage VMs that use Ceph images.


@jsknnr commented on GitHub (Jun 11, 2014):

retspen,

It would be awesome if we could get this support. I know that WebVirtMgr currently doesn't support it, but if you could add support for RBD storage pools that would be fantastic! The support is already there in libvirt today. This would allow us to use Ceph storage pools with WebVirtMgr through the RBD support in libvirt.

http://libvirt.org/storage.html#StorageBackendRBD

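For reference, the linked libvirt backend takes a `<pool type='rbd'>` definition. A minimal sketch of building that XML (all concrete values here, such as the pool name, monitor host, and secret UUID, are illustrative placeholders, not anything WebVirtMgr ships):

```python
# Sketch: build the <pool type='rbd'> XML that libvirt's
# storagePoolDefineXML() accepts, per the StorageBackendRBD docs.
# Pool name, monitor host, and secret UUID are placeholders.
def rbd_pool_xml(name, ceph_pool, mon_host, mon_port=6789,
                 auth_user="admin", secret_uuid=None):
    auth = ""
    if secret_uuid:
        auth = ("    <auth type='ceph' username='%s'>\n"
                "      <secret uuid='%s'/>\n"
                "    </auth>\n" % (auth_user, secret_uuid))
    return ("<pool type='rbd'>\n"
            "  <name>%s</name>\n"
            "  <source>\n"
            "    <name>%s</name>\n"
            "    <host name='%s' port='%d'/>\n"
            "%s"
            "  </source>\n"
            "</pool>" % (name, ceph_pool, mon_host, mon_port, auth))

xml = rbd_pool_xml("ceph-images", "libvirt", "mon1.example.com",
                   secret_uuid="2a5b08a4-8f27-4f50-8a2e-6e1f6b0b2e55")
```

Defining the pool would then be a matter of passing this string to the libvirt connection's `storagePoolDefineXML()`.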

@EmbeddedAndroid commented on GitHub (Jun 11, 2014):

I've been working on adding something like this using the ceph-rest-api

![screen shot 2014-06-11 at 10 52 08 am](https://cloud.githubusercontent.com/assets/1061011/3248692/32c33f3a-f191-11e3-9425-36ce37a2e35d.png)

![screen shot 2014-06-11 at 10 52 56 am](https://cloud.githubusercontent.com/assets/1061011/3248703/46ae71d6-f191-11e3-9fe7-2c18b5bfecb9.png)

![screen shot 2014-06-11 at 10 53 05 am](https://cloud.githubusercontent.com/assets/1061011/3248729/797cf538-f191-11e3-8c15-3d19025b9407.png)

Would there be any interest in having features like this upstream in the codebase? It's very basic right now; it can monitor the ceph cluster health. I plan to extend these features for more control.

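A health check against ceph-rest-api could be as small as fetching its health endpoint and summarizing the JSON. The response shape assumed here (an `output` object carrying `overall_status` and a `summary` list) is a guess based on the Firefly-era API, so treat this as a sketch:

```python
# Sketch: summarize a health document such as ceph-rest-api's health
# endpoint returns. The "output"/"overall_status"/"summary" layout is
# an assumption; adjust it to what your cluster actually emits.
def summarize_health(doc):
    out = doc.get("output", doc)  # some versions nest under "output"
    status = out.get("overall_status", "UNKNOWN")
    messages = [item.get("summary", "") for item in out.get("summary", [])]
    return status, messages

status, msgs = summarize_health({
    "output": {
        "overall_status": "HEALTH_WARN",
        "summary": [{"severity": "HEALTH_WARN",
                     "summary": "1 pgs degraded"}],
    }
})
```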

@jsknnr commented on GitHub (Jun 11, 2014):

That would be awesome.


@primechuck commented on GitHub (Jun 12, 2014):

That would be a fantastic add. Mainly the adding and removing of RBDs into libvirt from the UI.


@MACscr commented on GitHub (Jun 14, 2014):

This would be a great feature!


@EmbeddedAndroid commented on GitHub (Jun 14, 2014):

Once I get back from traveling I'll start to focus on these changes. It would be nice if anyone interested could help test. I only have a single ceph cluster, and would want to ensure it works properly with multiple clusters.


@MACscr commented on GitHub (Jun 14, 2014):

I will have a ceph cluster shortly that I can use to help with the testing.


@retspen commented on GitHub (Jun 14, 2014):

Playbook for fast deploy ceph in vagrant - https://github.com/ceph/ceph-ansible


@nlgordon commented on GitHub (Jun 16, 2014):

I'm getting your code setup in my home and work test environments so I can help build out the rbd backed volumes. We have a test rack at work where we have been using cephfs for libvirt, but it has its limitations.


@retspen commented on GitHub (Jun 16, 2014):

I have added support for the rbd storage pool (creating and deleting volumes); after successful testing I'll push it to the master branch.

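The volume side of that support comes down to handing libvirt a `<volume>` definition via the pool's `createXML()`. A hypothetical helper (the name and size are examples; Ceph images should stay raw):

```python
# Sketch: the <volume> XML an RBD-backed libvirt pool's createXML()
# accepts. Format is pinned to raw, as Ceph recommends for rbd images.
def rbd_volume_xml(name, capacity_gb):
    return ("<volume>\n"
            "  <name>%s</name>\n"
            "  <capacity unit='G'>%d</capacity>\n"
            "  <target>\n"
            "    <format type='raw'/>\n"
            "  </target>\n"
            "</volume>" % (name, capacity_gb))

vol = rbd_volume_xml("test1", 80)
```

Deletion would then go through the libvirt bindings, e.g. `pool.storageVolLookupByName(name).delete(0)`.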

@MACscr commented on GitHub (Jun 16, 2014):

And these rbd volumes will be automatically created on kvm instance creation?

- Mark


@retspen commented on GitHub (Jun 17, 2014):

I added the Secrets app - f933d8f2942a7ccb2a79a55d1ecf6541e95073c4. The other upcoming feature (the Ceph storage pool) is being tested.


@retspen commented on GitHub (Jun 18, 2014):

Support for Ceph storage pools - 1d424d77ea86caf111d0f002f9875965633df4f3


@primechuck commented on GitHub (Jun 18, 2014):

Adding the storage pool worked fine. When a VM was created, it didn't generate the correct libvirt settings for the pool. This was the disk entry it made.

<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='secondary-libvirt/RBDTest'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>

@retspen commented on GitHub (Jun 18, 2014):

After successful testing I'll add the part for creating VMs with Ceph.


@EmbeddedAndroid commented on GitHub (Jun 18, 2014):

I've added the storage pool as well. Looks good to me. I will add some health monitoring stats to complement this. Thanks @retspen!


@ITBlogger commented on GitHub (Jun 18, 2014):

On CentOS the current setup won't work as virt-manager/libvirt does not support RBD pools in CentOS 6.5.

To work around this, I am having to create ceph volumes using qemu-img create, build up an xml template for storage image and use virsh attach-device to attach the rbd storage image to the vm.

Also, Ceph images should always be in RAW format, per the Ceph documentation.

What Puppet runs to do this:

qemu-img create -f raw rbd:(rbd-pool-name)/(rbd image name) (image capacity)
ex: qemu-img create -f raw rbd:libvirt/test1 80G

virt-install --name (vm-name) --ram (ram size) --vcpus (# of CPUs) --nodisks --description (desc) --network bridge=(virtnet),mac=(virtmac),model=(virtnic) --graphics vnc,listen=0.0.0.0 --os-type (virtostype) --os-variant (virtosvariant) --virt-type (virttype) --autostart --pxe
ex: virt-install --name test1 --ram 1024 --vcpus 1 --nodisks --description 'test vm' --network bridge=br0,mac=52:54:00:82:a1:a1,model=virtio --graphics vnc,listen=0.0.0.0 --os-type linux --os-variant virtio26 --virt-type kvm --autostart --pxe

ERB template for network disk xml:

<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='<%= @auth_user %>'>
    <secret type='<%= @secret_type %>' usage='<%= @secret_usage %>'/>
  </auth>
  <source protocol='<%= @virtproto %>' name='<%= @pool %>/<%= @vmname %>'>
    <host name='<%= @volhost %>' port='<%= @volport %>'/>
  </source>
  <target dev='<%= @targetdev %>' bus='virtio'/>
</disk>

Example created xml:

<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='admin'>
    <secret type='ceph' usage='ceph_admin'/>
  </auth>
  <source protocol='rbd' name='libvirt/test1'>
    <host name='cephrbd' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>

virsh attach-device (vmname) (path to xml) --persistent
ex: virsh attach-device test1 /tmp/test1_rbd_virtdisk.xml --persistent

virsh start (vmname)
ex: virsh start test1


@ITBlogger commented on GitHub (Jun 18, 2014):

By the way, the error that you get when trying to make an RBD pool on an OS that doesn't support them is "internal error missing backend for pool type 8"

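"Pool type 8" is just the position of rbd in libvirt's internal storage pool type enum, so the error means that libvirt build was compiled without the RBD backend. As an illustration (enum order taken from libvirt sources of that era; later releases append further types):

```python
# Index of each backend in libvirt's virStoragePoolType enum of that
# era; "missing backend for pool type 8" therefore points at rbd.
POOL_TYPES = ["dir", "fs", "netfs", "logical", "disk",
              "iscsi", "scsi", "mpath", "rbd", "sheepdog"]
```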

@retspen commented on GitHub (Jun 18, 2014):

On Ubuntu 14.04, Fedora 20, and RHEL 7 it works fine.


@retspen commented on GitHub (Jun 18, 2014):

Do you have problems when removing an image in the rbd pool?


@primechuck commented on GitHub (Jun 18, 2014):

On Ubuntu 14.04 I was able to create the pool, create a few images using the UI in the pool, delete the images and delete the pool.

Deleting one of the images took about 5 minutes because it was 4TB and the UI was happy waiting for the libvirt command to complete.


@ITBlogger commented on GitHub (Jun 18, 2014):

Unfortunately, we are only certified to use CentOS 6.x.


@primechuck commented on GitHub (Jun 18, 2014):

You'll need to update libvirt/qemu in order to use RBD support. I don't remember the minimal version, but Ceph has install instructions for getting the new packages for RPM distros.

http://ceph.com/docs/master/install/install-vm-cloud/#install-qemu


@ITBlogger commented on GitHub (Jun 18, 2014):

Yes, I have all that. Ceph does not supply updated packages for libvirt, only QEMU.


@retspen commented on GitHub (Jun 19, 2014):

Create VM with rbd storage pool - 1a34115ddd349bce5965c192965a6066bd8f349e


@MACscr commented on GitHub (Jun 27, 2014):

Could someone write up a small article in the wiki about what all we need to set up on the ceph nodes for webvirtmgr to communicate with them, and, while you're at it (it appears to be related), what these "secrets" are all about?


@MACscr commented on GitHub (Jul 3, 2014):

When I try to add a secret, it says "please match requested format", but I have no idea what that format is; I simply pasted in the cephx key that was created for the ceph user. I checked the forms.py file in the secrets folder and the only limitation I see there is 100 characters, and I am only using 40. Suggestions?

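For what it's worth, the "requested format" is most likely plain base64: the value after `key =` in the keyring, with no prefix or trailing whitespace. A quick sanity check (a sketch; the 40-character key mentioned above would pass):

```python
# Sketch: a cephx key (the "key = ..." value from `ceph auth get-key`
# or the keyring file) is base64 text. Pasting the whole "key = ..."
# line, or a trailing newline, is a common reason a form rejects it.
import base64
import re

def looks_like_cephx_key(text):
    key = text.strip()
    if not re.fullmatch(r"[A-Za-z0-9+/]+={0,2}", key):
        return False
    try:
        base64.b64decode(key)
        return True
    except ValueError:
        return False
```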

@retspen commented on GitHub (Jul 3, 2014):

https://ceph.com/docs/master/rbd/libvirt/

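That Ceph document pairs the cephx key with a libvirt secret: you define a secret whose usage names the client, then load the key with `virsh secret-set-value`. A sketch of generating the definition (the usage name follows the doc's example and is otherwise arbitrary):

```python
# Sketch: the libvirt secret definition from the Ceph/libvirt guide.
# After `virsh secret-define` on this XML, the base64 cephx key is
# attached with `virsh secret-set-value --secret <uuid> --base64 <key>`.
def ceph_secret_xml(usage_name="client.libvirt secret"):
    return ("<secret ephemeral='no' private='no'>\n"
            "  <usage type='ceph'>\n"
            "    <name>%s</name>\n"
            "  </usage>\n"
            "</secret>" % usage_name)

secret_xml = ceph_secret_xml()
```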

@elg commented on GitHub (Jul 6, 2014):

@MACscr,
On Debian, I had some issues with the packaged qemu version (without rbd support). I recompiled kvm and qemu using this guide: http://cephnotes.ksperis.com/blog/2013/09/12/using-ceph-rbd-with-libvirt-on-debian-wheezy and debian/rules for qemu. After this, qemu-img is able to use ceph directly (and my secret appears in webvirtmgr).

Anyway, I now have an issue trying to create the storage through rbd. I get this error:
"internal error unknown storage pool type rbd"
and I can't find the source of it. Any clue?


@barryorourke commented on GitHub (Jul 6, 2014):

You'll need to recompile libvirt to support RBD storage pools; it's pretty easy on SL6, so hopefully it is on Debian too.


@elg commented on GitHub (Jul 6, 2014):

Yes, thanks, I figured that out myself, and it was quite easy with the libvirt from backports. Unfortunately that version has a bug and segfaults when you try to access an rbd pool.

I'm done with recompilations and backports: I'll give Ubuntu a try for my host.


@MACscr commented on GitHub (Jul 7, 2014):

I just reprovisioned my Debian cluster with Ubuntu for the same reasons.


@retspen commented on GitHub (Jul 7, 2014):

Ubuntu 14.04 as the host server (its libvirt supports rbd storage); Ubuntu 12.04 or Debian 6 for the ceph cluster.


@primechuck commented on GitHub (Jul 7, 2014):

Has anyone else been running into this bug related to this feature?
http://comments.gmane.org/gmane.comp.emulators.libvirt/96702
It looks like volumes created using libvirt are broken in versions > 1.2.4.

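When checking where a host falls relative to that report, compare the upstream part of the libvirt version numerically rather than as a string (so "1.2.10" sorts after "1.2.4"). A small sketch:

```python
# Sketch: numeric comparison of a dotted libvirt version against the
# 1.2.4 release discussed in the linked report. The distro suffix
# (e.g. "-0ubuntu13.1.5") is dropped before comparing.
def parse_version(version):
    upstream = version.split("-")[0]
    return tuple(int(part) for part in upstream.split("."))

def older_than(version, threshold=(1, 2, 4)):
    return parse_version(version) < threshold
```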

@samuelet commented on GitHub (Oct 11, 2014):

It is not broken with versions > 1.2.4 but with versions < 1.2.4; indeed, I'm running into this bug with Ubuntu 14.04 and libvirt 1.2.2-0ubuntu13.1.5.
