Hello all,

Following up on the report below, I did a reboot, and now I have a real question.

I added the VG, LV and mount point to this node using the Cockpit web interface on port 9090.

Now the volume group isn't activated at boot, so the filesystem won't mount and the boot hangs.

I am able to do "vgchange -ay data" and then a manual mount in rescue mode.
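For reference, the rescue-mode workaround is roughly this (the mount point here is just a placeholder, since I mount images_main by hand):

   # vgchange -ay data                               # activate the "data" VG
   # mount /dev/data/images_main /path/to/images     # manual mount of the images LV

Adding "nofail" to the fstab entry at least keeps a missing device from hanging the boot, though it doesn't fix the activation itself:

   /dev/mapper/data-images_main  /path/to/images  ext4  defaults,nofail  0 2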

Any feedback on the best way to add a new volume group on an empty disk (sdb) would be appreciated. Before I used the web interface, the manual tools were failing against /dev/sdb with the error "device /dev/sdb excluded by filter", which I suspect is related.
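For what it's worth, this is roughly how I've been inspecting the filter (vdsm-tool config-lvm-filter is, as I understand it, the oVirt-provided way to manage the filter on a node -- corrections welcome):

   # grep filter /etc/lvm/lvm.conf       # show any device filter currently configured
   # vdsm-tool config-lvm-filter         # have vdsm analyze and propose a filter for local devices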

Thanks

Matt





On 09/04/2018 01:23 PM, Matt Simonsen wrote:
Hello,

I'm running oVirt with several data centers, some with NFS storage and some with local storage.

I had problems in the past with a large pool and local storage. nodectl showed the pool as too full (I think >80%), but it was only the images that made the pool "full" -- the storage was carefully set up so that there was no chance it would actually fill. The LVs for oVirt itself were all under 20%, yet nodectl still reported the pool was too full.
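(On this node the equivalent check would be roughly the following -- I believe nodectl reads the same data percentage that lvs reports:

   # nodectl check                                          # node health check, including thin pool usage
   # lvs -o lv_name,data_percent onn_node4-g8-h4/pool00     # raw pool usage percentage
)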

My solution so far has been to use our RAID card tools, so that sda is the oVirt node install and sdb is for images. There are probably other good reasons to handle it this way, for example being able to use different RAID levels, but I'm hoping someone can confirm my partitioning below doesn't have some risk I'm not yet aware of.

I set up a new volume group for images, as below:


[root@node4-g8-h4 multipath]# pvs
  PV                                             VG              Fmt  Attr PSize    PFree
  /dev/mapper/3600508b1001c7e172160824d7b204c3b2 onn_node4-g8-h4 lvm2 a--  <119.00g  <22.85g
  /dev/sdb1                                      data            lvm2 a--     1.13t <361.30g

[root@node4-g8-h4 multipath]# vgs
  VG              #PV #LV #SN Attr   VSize    VFree
  data              1   1   0 wz--n-    1.13t <361.30g
  onn_node4-g8-h4   1  13   0 wz--n- <119.00g  <22.85g

[root@node4-g8-h4 multipath]# lvs
  LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta% Move Log Cpy%Sync Convert
  images_main                          data            -wi-ao---- 800.00g
  home                                 onn_node4-g8-h4 Vwi-aotz--   1.00g pool00                                     4.79
  ovirt-node-ng-4.2.5.1-0.20180816.0   onn_node4-g8-h4 Vwi---tz-k  64.10g pool00 root
  ovirt-node-ng-4.2.5.1-0.20180816.0+1 onn_node4-g8-h4 Vwi---tz--  64.10g pool00 ovirt-node-ng-4.2.5.1-0.20180816.0
  ovirt-node-ng-4.2.6-0.20180903.0     onn_node4-g8-h4 Vri---tz-k  64.10g pool00
  ovirt-node-ng-4.2.6-0.20180903.0+1   onn_node4-g8-h4 Vwi-aotz--  64.10g pool00 ovirt-node-ng-4.2.6-0.20180903.0    4.83
  pool00                               onn_node4-g8-h4 twi-aotz--  91.10g                                            8.94   0.49
  root                                 onn_node4-g8-h4 Vwi---tz--  64.10g pool00
  swap                                 onn_node4-g8-h4 -wi-ao----   4.00g
  tmp                                  onn_node4-g8-h4 Vwi-aotz--   1.00g pool00                                     4.87
  var                                  onn_node4-g8-h4 Vwi-aotz--  15.00g pool00                                     3.31
  var_crash                            onn_node4-g8-h4 Vwi-aotz--  10.00g pool00                                     2.86
  var_log                              onn_node4-g8-h4 Vwi-aotz--   8.00g pool00                                     3.57
  var_log_audit                        onn_node4-g8-h4 Vwi-aotz--   2.00g pool00                                     4.89
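For the record, I believe the equivalent manual steps for this layout would be roughly the standard LVM sequence below (not necessarily exactly what the web interface runs):

   # pvcreate /dev/sdb1                       # initialize the partition as a PV
   # vgcreate data /dev/sdb1                  # create the "data" VG on it
   # lvcreate -n images_main -L 800G data     # carve out the 800G images LV
   # mkfs.ext4 /dev/data/images_main          # ext4 on top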



The images_main LV is set up as a "Block device for filesystems" with ext4. Is there any reason I should consider a pool of thinly provisioned volumes? I don't need to over-allocate storage, and a fixed-size volume seems ideal to me. Please confirm, or let me know if there's anything else I should consider.
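For comparison, my understanding is the thin variant would look roughly like this (sizes are placeholders):

   # lvcreate -L 1T --thinpool images_pool data                  # thin pool inside the "data" VG
   # lvcreate -V 800G --thin -n images_main data/images_pool     # thin LV backed by the pool

but since I don't plan to over-allocate, I don't see what it buys me here.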


Thanks

Matt
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/[email protected]/message/LJINANK6PAGVV22H5OTYTJ3M4WIWPTMV/
