[GH-ISSUE #42] Adding 'image' to LVM Storage Pool Problems #31

Closed
opened 2026-02-27 16:00:46 +03:00 by kerem · 10 comments

Originally created by @ghost on GitHub (Mar 16, 2013).
Original GitHub issue: https://github.com/retspen/webvirtmgr/issues/42

Hello.

I have a volume group setup for KVM to use called kvmspace. From the UI, I try adding a new image called centos123 and the following is the output from webvirtmgr:

```
libvir:  error : internal error Child process (/sbin/lvchange -aln kvmspace/centos123) unexpected exit status 5:   One or more specified logical volume(s) not found.
```

Any ideas on how to correct this?

kerem closed this issue 2026-02-27 16:00:46 +03:00
@ghost commented on GitHub (Mar 16, 2013):

Some additional information:

```
Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      243202  1953514583+  ee  GPT
```
```
virsh # pool-list
Name                 State      Autostart 
-----------------------------------------
kvmspace             active     no  
```
```
  --- Volume group ---
  VG Name               kvmspace
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  29
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.82 TiB
  PE Size               4.00 MiB
  Total PE              476931
  Alloc PE / Size       0 / 0   
  Free  PE / Size       476931 / 1.82 TiB
  VG UUID               ZPSy5A-dLHa-hxEp-ENvb-76Ti-ukJT-NCNCac
```

Volume Group is at /dev/kvmspace

Sounds like this is simply a matter of the logical volume not being created before lvchange is run on it?

@retspen commented on GitHub (Mar 16, 2013):

Did you add the VG through WebVirtMgr?

@ghost commented on GitHub (Mar 16, 2013):

No, the volume group was created before WebVirtMgr was installed on the server; other data was (and still is) on this VG from before I decided to give this a spin and see how I liked it.

The pool is showing up within virsh and WebVirtManager fine, it's showing the correct space used, free space, etc.

@retspen commented on GitHub (Mar 16, 2013):

What is the output of:

$ virsh pool-dumpxml kvmspace

@ghost commented on GitHub (Mar 16, 2013):

As requested

```
[root@darkstar ~]# virsh pool-dumpxml kvmspace
<pool type='logical'>
  <name>kvmspace</name>
  <uuid>b766a659-ac63-2379-2c7b-b10d4af97d4d</uuid>
  <capacity unit='bytes'>2000393601024</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>2000393601024</available>
  <source>
    <device path='/dev/kvmspace'/>
    <name>kvmspace</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/kvmspace</path>
    <permissions>
      <mode>0755</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>
```
@retspen commented on GitHub (Mar 16, 2013):

It should look like this:

```
<pool type='logical'>
  <name>lvm</name>
  <uuid>d9f1c608-9659-7999-f57a-a04de22671dc</uuid>
  <capacity unit='bytes'>3791826976768</capacity>
  <allocation unit='bytes'>128849018880</allocation>
  <available unit='bytes'>3662977957888</available>
  <source>
     <device path='/dev/sdb'/> <---- PV device 
     <name>lvm</name>
     <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/lvm</path>
    <permissions>
       <mode>0700</mode>
       <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>
```
@ghost commented on GitHub (Mar 16, 2013):

I'm able to manually create the logical volume from the command line and use it in WebVirtMgr, so I guess I'll do it that way.

```
lvcreate -L 20G -n vm0003 kvmspace
```

Then set the new VM to use vm0003 for the disk.
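The manual workaround can be scripted; the sketch below only assembles the lvcreate invocation so it can be reviewed before running (the make_lvcreate_cmd helper is hypothetical, and the follow-up pool-refresh is an assumption about how to make libvirt notice the new volume):

```shell
# Hypothetical helper: assemble the lvcreate command for the manual workaround.
# The names used (20G, vm0003, kvmspace) are the ones from this thread.
make_lvcreate_cmd() {
  local size="$1" name="$2" vg="$3"
  echo "lvcreate -L $size -n $name $vg"
}

make_lvcreate_cmd 20G vm0003 kvmspace
# prints: lvcreate -L 20G -n vm0003 kvmspace
# After running it for real, refresh the pool so libvirt sees the volume:
#   virsh pool-refresh kvmspace
```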

@retspen commented on GitHub (Mar 16, 2013):

OK. Do it manually, or change the XML for the storage pool:

$ virsh pool-edit kvmspace

```
<device path='/dev/kvmspace'/>
```

to

```
<device path='/dev/sdx'/>
```

where sdx is your PV device for LVM.
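That edit can also be done non-interactively: dump the pool XML, rewrite the device element to point at the PV, and re-define the pool. The fix_pool_device helper and the /dev/sdb path below are assumptions for illustration, not part of the thread's recipe:

```shell
# Sketch: rewrite the pool's <device path='...'/> to the PV device.
# fix_pool_device is a hypothetical helper; adjust the PV path to your system.
fix_pool_device() {
  # stdin: pool XML; $1: PV device path
  sed "s|<device path='[^']*'/>|<device path='$1'/>|"
}

# Real-world usage (commented out here, since it needs virsh and the pool):
#   virsh pool-dumpxml kvmspace | fix_pool_device /dev/sdb > kvmspace.xml
#   virsh pool-destroy kvmspace
#   virsh pool-define kvmspace.xml && virsh pool-start kvmspace

echo "<device path='/dev/kvmspace'/>" | fix_pool_device /dev/sdb
# prints: <device path='/dev/sdb'/>
```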

@ghost commented on GitHub (Mar 16, 2013):

Sorry, another update on this.

This actually looks to be a bug that Red Hat has "confirmed", but it doesn't appear to be fully resolved yet.
https://bugzilla.redhat.com/show_bug.cgi?id=888118

Command That Fails:

```
virsh vol-create-as --pool kvmspace vm0006 --capacity 20480M --allocation 0
```

Command That Works:

```
virsh vol-create-as --pool kvmspace vm0006 --capacity 20480M
```
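A defensive way to apply that workaround from a script is to never emit the failing --allocation 0 form. The make_vol_cmd helper below is hypothetical; only the virsh arguments come from the thread:

```shell
# Hypothetical helper: build a vol-create-as command line that sidesteps the
# sparse-allocation bug (--allocation 0, RH bug 888118) by omitting
# --allocation entirely, so the volume is fully allocated.
make_vol_cmd() {
  local pool="$1" name="$2" size="$3"
  echo "virsh vol-create-as --pool $pool $name --capacity $size"
}

make_vol_cmd kvmspace vm0006 20480M
# prints: virsh vol-create-as --pool kvmspace vm0006 --capacity 20480M
```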
@retspen commented on GitHub (Mar 16, 2013):

Resolved, but not for the web UI.
