Mirror of https://github.com/retspen/webvirtmgr.git (synced 2026-04-25 23:55:57 +03:00)
[GH-ISSUE #312] Feature Request: Support for Ceph/rbd storage #242
Originally created by @ITBlogger on GitHub (May 19, 2014).
Original GitHub issue: https://github.com/retspen/webvirtmgr/issues/312
Hi, we're currently in the process of moving to Ceph as our VM image storage. What are the chances of getting that added as a supported storage type?
Thanks,
Alex
@retspen commented on GitHub (May 19, 2014):
Hello,
WebVirtMgr doesn't support creating and managing Ceph storage pools, but you can manage VMs that use Ceph images.
@jsknnr commented on GitHub (Jun 11, 2014):
retspen,
It would be awesome if we could get this support. I know WebVirtMgr currently doesn't have it, but libvirt already does: if you could add support for RBD storage pools, that would be fantastic, as it would let us use Ceph storage pools with WebVirtMgr through libvirt's RBD backend.
http://libvirt.org/storage.html#StorageBackendRBD
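For reference, libvirt's RBD backend takes a pool definition along these lines (the pool name, monitor host, and secret UUID below are placeholders, not values from this thread); it can then be loaded with `virsh pool-define` and started with `virsh pool-start`:

```xml
<pool type='rbd'>
  <name>cephpool</name>
  <source>
    <!-- the RBD pool name inside the Ceph cluster -->
    <name>libvirt</name>
    <!-- a Ceph monitor address -->
    <host name='ceph-mon1' port='6789'/>
    <auth type='ceph' username='libvirt'>
      <!-- UUID of a previously defined libvirt secret holding the cephx key -->
      <secret uuid='00000000-0000-0000-0000-000000000000'/>
    </auth>
  </source>
</pool>
```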
@EmbeddedAndroid commented on GitHub (Jun 11, 2014):
I've been working on adding something like this using the ceph-rest-api
Would there be any interest in having features like this upstream in the codebase? It's very basic right now, it can monitor the ceph cluster health. I plan to extend these features for more control.
@jsknnr commented on GitHub (Jun 11, 2014):
That would be awesome.
@primechuck commented on GitHub (Jun 12, 2014):
That would be a fantastic add. Mainly the adding and removing of RBDs into libvirt from the UI.
@MACscr commented on GitHub (Jun 14, 2014):
This would be a great feature!
@EmbeddedAndroid commented on GitHub (Jun 14, 2014):
Once I get back from traveling I'll start to focus on these changes. It would be nice if anyone interested could help test. I only have a single ceph cluster, and would want to ensure it works properly with multiple clusters.
@MACscr commented on GitHub (Jun 14, 2014):
I will have a ceph cluster shortly that I can use to help with the testing.
@retspen commented on GitHub (Jun 14, 2014):
Playbook for fast Ceph deployment in Vagrant - https://github.com/ceph/ceph-ansible
@nlgordon commented on GitHub (Jun 16, 2014):
I'm getting your code setup in my home and work test environments so I can help build out the rbd backed volumes. We have a test rack at work where we have been using cephfs for libvirt, but it has its limitations.
@retspen commented on GitHub (Jun 16, 2014):
I have added support for RBD storage pools (creating and deleting volumes); after successful testing I'll push it to the master branch.
@MACscr commented on GitHub (Jun 16, 2014):
And will these RBD volumes be automatically created on KVM instance creation?
@retspen commented on GitHub (Jun 17, 2014):
I added the Secrets app - f933d8f294. The other upcoming feature (Ceph storage pool support) is being tested.
@retspen commented on GitHub (Jun 18, 2014):
Support for Ceph storage pools - 1d424d77ea
@primechuck commented on GitHub (Jun 18, 2014):
Adding the storage pool worked fine. When a VM was created, it didn't generate the correct libvirt settings for the pool. This was the disk entry it made.
@retspen commented on GitHub (Jun 18, 2014):
After successful testing I'll add the part for creating VMs with Ceph.
@EmbeddedAndroid commented on GitHub (Jun 18, 2014):
I've added the storage pool as well. Looks good to me. I will add some health monitoring stats to complement this. Thanks @retspen!
@ITBlogger commented on GitHub (Jun 18, 2014):
On CentOS the current setup won't work as virt-manager/libvirt does not support RBD pools in CentOS 6.5.
To work around this, I am having to create Ceph volumes using qemu-img create, build an XML template for the storage image, and use virsh attach-device to attach the RBD storage image to the VM.
Also, Ceph images should always be in RAW format, per the Ceph documentation.
What Puppet runs to do this:
qemu-img create -f raw rbd:(rbd-pool-name)/(rbd image name) (image capacity)
ex: qemu-img create -f raw rbd:libvirt/test1 80G
virt-install --name (vm-name) --ram (ram size) --vcpus (# of CPUs) --nodisks --description (desc) --network bridge=(virtnet),mac=(virtmac),model=(virtnic) --graphics vnc,listen=0.0.0.0 --os-type (virtostype) --os-variant (virtosvariant) --virt-type (virttype) --autostart --pxe
ex: virt-install --name test1 --ram 1024 --vcpus 1 --nodisks --description 'test vm' --network bridge=br0,mac=52:54:00:82:a1:a1,model=virtio --graphics vnc,listen=0.0.0.0 --os-type linux --os-variant virtio26 --virt-type kvm --autostart --pxe
ERB template for the network disk XML:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='<%= @auth_user %>'>
    <secret type='<%= @secret_type %>' usage='<%= @secret_usage %>'/>
  </auth>
  <source protocol='<%= @virtproto %>' name='<%= @pool %>/<%= @vmname %>'>
    <host name='<%= @volhost %>' port='<%= @volport %>'/>
  </source>
  <target dev='<%= @targetdev %>' bus='virtio'/>
</disk>
Example generated XML:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='admin'>
    <secret type='ceph' usage='ceph_admin'/>
  </auth>
  <source protocol='rbd' name='libvirt/test1'>
    <host name='cephrbd' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
virsh attach-device (vmname) (path to xml) --persistent
ex: virsh attach-device test1 /tmp/test1_rbd_virtdisk.xml --persistent
virsh start (vmname)
ex: virsh start test1
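The template-rendering step above can be mimicked in plain shell for quick testing. This is a sketch, not the original Puppet code: the variable names below are stand-ins for the ERB variables, and the example values mirror the sample XML above.

```shell
#!/bin/sh
# Sketch: render the RBD disk XML from shell variables (stand-ins for the
# ERB template variables; values mirror the example above).
AUTH_USER=admin
SECRET_TYPE=ceph
SECRET_USAGE=ceph_admin
PROTO=rbd
POOL=libvirt
VMNAME=test1
VOLHOST=cephrbd
VOLPORT=6789
TARGETDEV=vda

# Unquoted EOF so the variables are expanded into the heredoc.
cat > "/tmp/${VMNAME}_rbd_virtdisk.xml" <<EOF
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='${AUTH_USER}'>
    <secret type='${SECRET_TYPE}' usage='${SECRET_USAGE}'/>
  </auth>
  <source protocol='${PROTO}' name='${POOL}/${VMNAME}'>
    <host name='${VOLHOST}' port='${VOLPORT}'/>
  </source>
  <target dev='${TARGETDEV}' bus='virtio'/>
</disk>
EOF

grep -q "name='${POOL}/${VMNAME}'" "/tmp/${VMNAME}_rbd_virtdisk.xml" && echo "rendered"
```

The resulting file is what gets passed to `virsh attach-device test1 /tmp/test1_rbd_virtdisk.xml --persistent` as in the example above.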
@ITBlogger commented on GitHub (Jun 18, 2014):
By the way, the error that you get when trying to make an RBD pool on an OS that doesn't support them is "internal error missing backend for pool type 8"
@retspen commented on GitHub (Jun 18, 2014):
On Ubuntu 14.04, Fedora 20, and RHEL 7 it works fine.
@retspen commented on GitHub (Jun 18, 2014):
Do you have problems when removing an image in an RBD pool?
@primechuck commented on GitHub (Jun 18, 2014):
On Ubuntu 14.04 I was able to create the pool, create a few images using the UI in the pool, delete the images and delete the pool.
Deleting one of the images took about 5 minutes because it was 4TB and the UI was happy waiting for the libvirt command to complete.
@ITBlogger commented on GitHub (Jun 18, 2014):
Unfortunately, we are only certified to use CentOS 6.x.
@primechuck commented on GitHub (Jun 18, 2014):
You'll need to update libvirt/qemu in order to use RBD support. I don't remember the minimum version, but Ceph has install instructions for getting the newer packages on RPM distros.
http://ceph.com/docs/master/install/install-vm-cloud/#install-qemu
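A rough way to tell whether the installed qemu was built with RBD support is to look for 'rbd' in `qemu-img --help`, which ends with a supported-formats line. The check is simulated below on a sample help line (on a real host you would pipe in the actual `qemu-img --help` output instead):

```shell
#!/bin/sh
# Rough check: RBD-enabled qemu builds list 'rbd' among the supported
# formats printed by `qemu-img --help`. Simulated with a sample line here.
sample="Supported formats: raw qcow2 qed vmdk rbd"
if printf '%s\n' "$sample" | grep -qw 'rbd'; then
  echo "rbd supported"
else
  echo "rbd not supported"
fi
```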
@ITBlogger commented on GitHub (Jun 18, 2014):
Yes, I have all that. Ceph does not supply updated packages for libvirt, only QEMU.
@retspen commented on GitHub (Jun 19, 2014):
Create VM with RBD storage pool - 1a34115ddd
@MACscr commented on GitHub (Jun 27, 2014):
Could someone write up a small article in the wiki about what all we need to set up on the Ceph nodes for WebVirtMgr to communicate with them, and, while you're at it (it appears to be related), what these "secrets" are all about?
@MACscr commented on GitHub (Jul 3, 2014):
When I try to add a secret, it says "please match requested format", but I have no idea what that format is; I simply pasted in the cephx key that was created for the Ceph user. I checked the forms.py file in the secrets folder, and the only limitation I see there is 100 characters, while I am only using 40. Suggestions?
@retspen commented on GitHub (Jul 3, 2014):
https://ceph.com/docs/master/rbd/libvirt/
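For context on the format question: a cephx key (as printed by `ceph auth get-key`) is a plain base64 string, typically 40 characters, and that raw value is what the secret expects. A minimal sketch of checking that shape (the key below is made up, not a real credential):

```shell
#!/bin/sh
# Sketch: verify a string looks like a base64 cephx key.
# KEY is a fabricated example value, not a real credential.
KEY='AQCVm2RTkCYgJhAAW6bfjL8ZGrhIbCpc0zkoXw=='
if printf '%s' "$KEY" | grep -Eq '^[A-Za-z0-9+/]+={0,2}$'; then
  echo "looks like a valid base64 cephx key"
else
  echo "not base64"
fi
```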
@elg commented on GitHub (Jul 6, 2014):
MACscr,
On Debian, I had some issues with the packaged qemu version (built without rbd support). I recompiled kvm and qemu using this guide: http://cephnotes.ksperis.com/blog/2013/09/12/using-ceph-rbd-with-libvirt-on-debian-wheezy and debian/rules for qemu. After this, qemu-img is able to use Ceph directly (and my secret appears in WebVirtMgr).
Anyway, I now have an issue trying to create the storage pool through RBD. I get this error:
" internal error unknown storage pool type rbd "
And I can't find the source of it. Any clue?
@barryorourke commented on GitHub (Jul 6, 2014):
You'll need to recompile libvirt to support RBD storage pools; it's pretty easy on SL6, so hopefully it will be on Debian too.
@elg commented on GitHub (Jul 6, 2014):
Yes, thanks, I figured that out myself, and it was quite easy with the libvirt from backports. Unfortunately that version has a bug and segfaults when you try to access an RBD pool.
I'm done with recompilations and backports: I'll give Ubuntu a try for my host.
@MACscr commented on GitHub (Jul 7, 2014):
I just re-provisioned my Debian cluster with Ubuntu for the same reasons.
@retspen commented on GitHub (Jul 7, 2014):
Ubuntu 14.04 as the host server (its libvirt supports RBD storage); Ubuntu 12.04 or Debian 6 for the Ceph cluster.
@primechuck commented on GitHub (Jul 7, 2014):
Has anyone else been running into this bug related to this feature?
http://comments.gmane.org/gmane.comp.emulators.libvirt/96702
It looks like volumes created using libvirt are broken in versions > 1.2.4
@samuelet commented on GitHub (Oct 11, 2014):
It is not broken with versions > 1.2.4 but with versions < 1.2.4; indeed, I'm running into this bug with Ubuntu 14.04 and libvirt 1.2.2-0ubuntu13.1.5.