Sahina,
Yesterday I started with a fresh install: I completely wiped all the
disks and recreated the arrays from within the controller on our DL380 Gen 9s.
OS: RAID 1 (2x600GB HDDs): /dev/sda // using the oVirt Node 4.3.3.1 ISO
engine and VMSTORE1: JBOD (1x3TB HDD): /dev/sdb
DATA1: JBOD (1x3TB HDD): /dev/sdc
DATA2: JBOD (1x3TB HDD): /dev/sdd
Caching disk: JBOD (1x440GB SSD): /dev/sde
*After the OS install on the first 3 servers and setting up ssh keys, I
started the Hyperconverged deploy process:*
1.-Logged in to the first server at http://host1.example.com:9090
2.-Selected Hyperconverged and clicked "Run Gluster Wizard"
3.-Followed the wizard steps (Hosts, FQDNs, Packages, Volumes, Bricks,
Review)
*Hosts/FQDNs:*
host1.example.com
host2.example.com
host3.example.com
*Packages:*
*Volumes:*
engine:replicate:/gluster_bricks/engine/engine
vmstore1:replicate:/gluster_bricks/vmstore1/vmstore1
data1:replicate:/gluster_bricks/data1/data1
data2:replicate:/gluster_bricks/data2/data2
*Bricks:*
engine:/dev/sdb:100GB:/gluster_bricks/engine
vmstore1:/dev/sdb:2600GB:/gluster_bricks/vmstore1
data1:/dev/sdc:2700GB:/gluster_bricks/data1
data2:/dev/sdd:2700GB:/gluster_bricks/data2
LV Cache:
/dev/sde:400GB:writethrough
4.-After I hit Deploy on the last step of the wizard, I get the disk filter
error:
TASK [gluster.infra/roles/backend_setup : Create volume groups]
****************
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname':
u'/dev/sdb'}) => {"changed": false, "err": " Device /dev/sdb excluded by a
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"},
"msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname':
u'/dev/sdb'}) => {"changed": false, "err": " Device /dev/sdb excluded by a
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"},
"msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname':
u'/dev/sdb'}) => {"changed": false, "err": " Device /dev/sdb excluded by a
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"},
"msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname':
u'/dev/sdc'}) => {"changed": false, "err": " Device /dev/sdc excluded by a
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"},
"msg": "Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname':
u'/dev/sdc'}) => {"changed": false, "err": " Device /dev/sdc excluded by a
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"},
"msg": "Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname':
u'/dev/sdc'}) => {"changed": false, "err": " Device /dev/sdc excluded by a
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"},
"msg": "Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname':
u'/dev/sdd'}) => {"changed": false, "err": " Device /dev/sdd excluded by a
filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"},
"msg": "Creating physical volume '/dev/sdd' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname':
u'/dev/sdd'}) => {"changed": false, "err": " Device /dev/sdd excluded by a
filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"},
"msg": "Creating physical volume '/dev/sdd' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname':
u'/dev/sdd'}) => {"changed": false, "err": " Device /dev/sdd excluded by a
filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"},
"msg": "Creating physical volume '/dev/sdd' failed", "rc": 5}
Attached are the generated yml file (/etc/ansible/hc_wizard_inventory.yml)
and the "Deployment Failed" file.
I am also wondering if I hit this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1635614
Thanks for looking into this.
*Adrian Quintero*
*[email protected] <[email protected]> |
[email protected] <[email protected]>*
On Mon, May 20, 2019 at 7:56 AM Sahina Bose <[email protected]> wrote:
> To scale existing volumes, you need to add bricks and run rebalance on
> the gluster volume so that data is correctly redistributed, as Alex
> mentioned.
> We do support expanding existing volumes, as the bug
> https://bugzilla.redhat.com/show_bug.cgi?id=1471031 has been fixed.
>
> As to the procedure to expand volumes:
> 1. Create bricks from the UI - select Host -> Storage Devices, select the
> storage device, and click on "Create Brick".
> If the device is shown as locked, make sure there's no signature on the
> device. If multipath entries have been created for local devices, you can
> blacklist those devices in multipath.conf and restart multipath (a minimal
> sketch follows this list).
> (If you still see the device as locked after you do this - please report back.)
> 2. Expand the volume using Volume -> Bricks -> Add Bricks, and select the 3
> bricks created in the previous step.
> 3. Run Rebalance on the volume: Volume -> Rebalance.
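>
> A minimal sketch of that multipath blacklist (the wwid values below are
> placeholders - use the ones multipath -ll reports for your local disks, and
> keep the "# VDSM PRIVATE" marker so VDSM does not overwrite the file):
>
> # VDSM PRIVATE
> blacklist {
>     wwid <wwid-of-local-disk-sdb>
>     wwid <wwid-of-local-disk-sdc>
> }
>
> systemctl restart multipathd   # pick up the new blacklist
> dracut -f                      # rebuild the initramfs so the change survives a reboot
> (the dracut step is per Strahil's note further down in the thread)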
>
>
> On Thu, May 16, 2019 at 2:48 PM Fred Rolland <[email protected]> wrote:
>
>> Sahina,
>> Can someone from your team review the steps done by Adrian?
>> Thanks,
>> Freddy
>>
>> On Thu, Apr 25, 2019 at 5:14 PM Adrian Quintero <[email protected]>
>> wrote:
>>
>>> Ok, I will remove the extra 3 hosts, rebuild them from scratch and
>>> re-attach them to clear any possible issues and try out the suggestions
>>> provided.
>>>
>>> thank you!
>>>
>>> On Thu, Apr 25, 2019 at 9:22 AM Strahil Nikolov <[email protected]>
>>> wrote:
>>>
>>>> I have the same locks, despite having blacklisted all local disks:
>>>>
>>>> # VDSM PRIVATE
>>>> blacklist {
>>>> devnode "*"
>>>> wwid Crucial_CT256MX100SSD1_14390D52DCF5
>>>> wwid WDC_WD5000AZRX-00A8LB0_WD-WCC1U0056126
>>>> wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2335378
>>>> wwid
>>>> nvme.1cc1-324a31313230303131353936-414441544120535838323030504e50-00000001
>>>> }
>>>>
>>>> If you have multipath reconfigured, do not forget to rebuild the
>>>> initramfs (dracut -f). It's a Linux issue, not an oVirt one.
>>>>
>>>> In your case you had something like this:
>>>> /dev/VG/LV
>>>> /dev/disk/by-id/pvuuid
>>>> /dev/mapper/multipath-uuid
>>>> /dev/sdb
>>>>
>>>> Linux will not allow you to work with /dev/sdb when multipath is
>>>> locking the block device.
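>>>>
>>>> A quick way to confirm that stack on an affected host (rough sketch):
>>>>
>>>> lsblk /dev/sdb       # an mpath device listed under sdb means multipath holds it
>>>> dmsetup ls --tree    # shows the device-mapper stack sitting on top of each disk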
>>>>
>>>> Best Regards,
>>>> Strahil Nikolov
>>>>
>>>> On Thursday, April 25, 2019, 8:30:16 AM GMT-4, Adrian Quintero <
>>>> [email protected]> wrote:
>>>>
>>>>
>>>> Under Compute -> Hosts, select the host that has the locks on /dev/sdb,
>>>> /dev/sdc, etc., then select Storage Devices; that is where you see a
>>>> small column with lock icons showing for each row.
>>>>
>>>>
>>>> However, as a workaround, on the newly added hosts (3 total) I had to
>>>> manually modify /etc/multipath.conf and add the following at the end,
>>>> since this is what I noticed on the original 3-node setup.
>>>>
>>>> -------------------------------------------------------------
>>>> # VDSM REVISION 1.3
>>>> # VDSM PRIVATE
>>>> # BEGIN Added by gluster_hci role
>>>>
>>>> blacklist {
>>>> devnode "*"
>>>> }
>>>> # END Added by gluster_hci role
>>>> ----------------------------------------------------------
>>>> After this I restarted multipath and the lock went away, and I was able to
>>>> configure the new bricks through the UI. However, my concern is what will
>>>> happen if I reboot the server - will the disks be read the same way by
>>>> the OS?
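>>>>
>>>> One check I am thinking of running before a reboot (rough sketch, assuming
>>>> the standard dracut tooling on oVirt Node):
>>>> lsinitrd | grep multipath.conf   # see whether the blacklist also made it into the initramfs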
>>>>
>>>> I am also now able to expand the gluster setup with a new replica 3 volume
>>>> if needed, using http://host4.mydomain.com:9090.
>>>>
>>>>
>>>> thanks again
>>>>
>>>> On Thu, Apr 25, 2019 at 8:00 AM Strahil Nikolov <[email protected]>
>>>> wrote:
>>>>
>>>> In which menu do you see it this way?
>>>>
>>>> Best Regards,
>>>> Strahil Nikolov
>>>>
>>>> On Wednesday, April 24, 2019, 8:55:22 AM GMT-4, Adrian Quintero <
>>>> [email protected]> wrote:
>>>>
>>>>
>>>> Strahil,
>>>> this is the issue I am seeing now:
>>>>
>>>> [image: image.png]
>>>>
>>>> This is through the UI when I try to create a new brick.
>>>>
>>>> So my concern is: if I modify the filters on the OS, what impact will
>>>> that have after the server reboots?
>>>>
>>>> thanks,
>>>>
>>>>
>>>>
>>>> On Mon, Apr 22, 2019 at 11:39 PM Strahil <[email protected]> wrote:
>>>>
>>>> I have edited my multipath.conf to exclude local disks, but you need to
>>>> set '# VDSM PRIVATE' as per the comments in the header of the file.
>>>> Otherwise, use the /dev/mapper/multipath-device notation, as you would
>>>> do on any Linux system.
>>>>
>>>> Best Regards,
>>>> Strahil Nikolov
>>>>
>>>> On Apr 23, 2019 01:07, [email protected] wrote:
>>>> >
>>>> > Thanks Alex, that makes more sense now. While trying to follow the
>>>> > instructions provided, I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd
>>>> > are locked and indicating "multipath_member", hence not letting me create
>>>> > new bricks. And in the logs I see:
>>>> >
>>>> > Device /dev/sdb excluded by a filter.\n", "item": {"pvname":
>>>> "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume
>>>> '/dev/sdb' failed", "rc": 5}
>>>> > Same thing for sdc, sdd
>>>> >
>>>> > Should I manually edit the filters inside the OS? What will be the
>>>> > impact?
>>>> >
>>>> > thanks again.
>>>>
>>>>
>>>>
>>>> --
>>>> Adrian Quintero
>>>>
>>>>
>>>>
>>>> --
>>>> Adrian Quintero
>>>>
>>>
>>>
>>> --
>>> Adrian Quintero
>>>
>>
--
Adrian Quintero
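# Attachment: /etc/ansible/hc_wizard_inventory.yml (generated by the wizard)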
hc_nodes:
  hosts:
    host1.example.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
        - vgname: gluster_vg_sdc
          pvname: /dev/sdc
        - vgname: gluster_vg_sdd
          pvname: /dev/sdd
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/vmstore1
          lvname: gluster_lv_vmstore1
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/data1
          lvname: gluster_lv_data1
          vgname: gluster_vg_sdc
        - path: /gluster_bricks/data2
          lvname: gluster_lv_data2
          vgname: gluster_vg_sdd
      gluster_infra_cache_vars:
        - vgname: gluster_vg_sdb
          cachedisk: /dev/sde
          cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
          cachethinpoolname: gluster_thinpool_gluster_vg_sdb
          cachelvsize: 396G
          cachemetalvsize: 44G
          cachemetalvname: cache_gluster_thinpool_gluster_vg_sdb
          cachemode: writethrough
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 100G
      gluster_infra_thinpools:
        - vgname: gluster_vg_sdb
          thinpoolname: gluster_thinpool_gluster_vg_sdb
          poolmetadatasize: 13G
        - vgname: gluster_vg_sdc
          thinpoolname: gluster_thinpool_gluster_vg_sdc
          poolmetadatasize: 14G
        - vgname: gluster_vg_sdd
          thinpoolname: gluster_thinpool_gluster_vg_sdd
          poolmetadatasize: 14G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdb
          thinpool: gluster_thinpool_gluster_vg_sdb
          lvname: gluster_lv_vmstore1
          lvsize: 2600G
        - vgname: gluster_vg_sdc
          thinpool: gluster_thinpool_gluster_vg_sdc
          lvname: gluster_lv_data1
          lvsize: 2700G
        - vgname: gluster_vg_sdd
          thinpool: gluster_thinpool_gluster_vg_sdd
          lvname: gluster_lv_data2
          lvsize: 2700G
    host2.example.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
        - vgname: gluster_vg_sdc
          pvname: /dev/sdc
        - vgname: gluster_vg_sdd
          pvname: /dev/sdd
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/vmstore1
          lvname: gluster_lv_vmstore1
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/data1
          lvname: gluster_lv_data1
          vgname: gluster_vg_sdc
        - path: /gluster_bricks/data2
          lvname: gluster_lv_data2
          vgname: gluster_vg_sdd
      gluster_infra_cache_vars:
        - vgname: gluster_vg_sdb
          cachedisk: /dev/sde
          cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
          cachethinpoolname: gluster_thinpool_gluster_vg_sdb
          cachelvsize: 0.9G
          cachemetalvsize: 0.1G
          cachemetalvname: cache_gluster_thinpool_gluster_vg_sdb
          cachemode: writethrough
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 100G
      gluster_infra_thinpools:
        - vgname: gluster_vg_sdb
          thinpoolname: gluster_thinpool_gluster_vg_sdb
          poolmetadatasize: 13G
        - vgname: gluster_vg_sdc
          thinpoolname: gluster_thinpool_gluster_vg_sdc
          poolmetadatasize: 14G
        - vgname: gluster_vg_sdd
          thinpoolname: gluster_thinpool_gluster_vg_sdd
          poolmetadatasize: 14G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdb
          thinpool: gluster_thinpool_gluster_vg_sdb
          lvname: gluster_lv_vmstore1
          lvsize: 2600G
        - vgname: gluster_vg_sdc
          thinpool: gluster_thinpool_gluster_vg_sdc
          lvname: gluster_lv_data1
          lvsize: 2700G
        - vgname: gluster_vg_sdd
          thinpool: gluster_thinpool_gluster_vg_sdd
          lvname: gluster_lv_data2
          lvsize: 2700G
    host3.example.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
        - vgname: gluster_vg_sdc
          pvname: /dev/sdc
        - vgname: gluster_vg_sdd
          pvname: /dev/sdd
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/vmstore1
          lvname: gluster_lv_vmstore1
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/data1
          lvname: gluster_lv_data1
          vgname: gluster_vg_sdc
        - path: /gluster_bricks/data2
          lvname: gluster_lv_data2
          vgname: gluster_vg_sdd
      gluster_infra_cache_vars:
        - vgname: gluster_vg_sdb
          cachedisk: /dev/sde
          cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
          cachethinpoolname: gluster_thinpool_gluster_vg_sdb
          cachelvsize: 0.9G
          cachemetalvsize: 0.1G
          cachemetalvname: cache_gluster_thinpool_gluster_vg_sdb
          cachemode: writethrough
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 100G
      gluster_infra_thinpools:
        - vgname: gluster_vg_sdb
          thinpoolname: gluster_thinpool_gluster_vg_sdb
          poolmetadatasize: 13G
        - vgname: gluster_vg_sdc
          thinpoolname: gluster_thinpool_gluster_vg_sdc
          poolmetadatasize: 14G
        - vgname: gluster_vg_sdd
          thinpoolname: gluster_thinpool_gluster_vg_sdd
          poolmetadatasize: 14G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdb
          thinpool: gluster_thinpool_gluster_vg_sdb
          lvname: gluster_lv_vmstore1
          lvsize: 2600G
        - vgname: gluster_vg_sdc
          thinpool: gluster_thinpool_gluster_vg_sdc
          lvname: gluster_lv_data1
          lvsize: 2700G
        - vgname: gluster_vg_sdd
          thinpool: gluster_thinpool_gluster_vg_sdd
          lvname: gluster_lv_data2
          lvsize: 2700G
  vars:
    gluster_infra_disktype: JBOD
    gluster_set_selinux_labels: true
    gluster_infra_fw_ports:
      - 2049/tcp
      - 54321/tcp
      - 5900/tcp
      - 5900-6923/tcp
      - 5666/tcp
      - 16514/tcp
    gluster_infra_fw_permanent: true
    gluster_infra_fw_state: enabled
    gluster_infra_fw_zone: public
    gluster_infra_fw_services:
      - glusterfs
    cluster_nodes:
      - host1.example.com
      - host2.example.com
      - host3.example.com
    gluster_features_hci_cluster: '{{ cluster_nodes }}'
    gluster_features_hci_volumes:
      - volname: engine
        brick: /gluster_bricks/engine/engine
        arbiter: 0
      - volname: vmstore1
        brick: /gluster_bricks/vmstore1/vmstore1
        arbiter: 0
      - volname: data1
        brick: /gluster_bricks/data1/data1
        arbiter: 0
      - volname: data2
        brick: /gluster_bricks/data2/data2
        arbiter: false
PLAY [Setup backend] ***********************************************************
TASK [Gathering Facts] *********************************************************
ok: [host3.example.com]
ok: [host1.example.com]
ok: [host2.example.com]
TASK [gluster.infra/roles/firewall_config : Start firewalld if not already
started] ***
ok: [host1.example.com]
ok: [host3.example.com]
ok: [host2.example.com]
TASK [gluster.infra/roles/firewall_config : check if required variables are
set] ***
skipping: [host2.example.com]
skipping: [host3.example.com]
skipping: [host1.example.com]
TASK [gluster.infra/roles/firewall_config : Open/Close firewalld ports] ********
changed: [host1.example.com] => (item=2049/tcp)
changed: [host3.example.com] => (item=2049/tcp)
changed: [host2.example.com] => (item=2049/tcp)
changed: [host1.example.com] => (item=54321/tcp)
changed: [host3.example.com] => (item=54321/tcp)
changed: [host2.example.com] => (item=54321/tcp)
changed: [host1.example.com] => (item=5900/tcp)
changed: [host3.example.com] => (item=5900/tcp)
changed: [host2.example.com] => (item=5900/tcp)
changed: [host1.example.com] => (item=5900-6923/tcp)
changed: [host3.example.com] => (item=5900-6923/tcp)
changed: [host2.example.com] => (item=5900-6923/tcp)
changed: [host1.example.com] => (item=5666/tcp)
changed: [host3.example.com] => (item=5666/tcp)
changed: [host2.example.com] => (item=5666/tcp)
changed: [host1.example.com] => (item=16514/tcp)
changed: [host3.example.com] => (item=16514/tcp)
changed: [host2.example.com] => (item=16514/tcp)
TASK [gluster.infra/roles/firewall_config : Add/Delete services to firewalld
rules] ***
ok: [host2.example.com] => (item=glusterfs)
ok: [host3.example.com] => (item=glusterfs)
ok: [host1.example.com] => (item=glusterfs)
TASK [gluster.infra/roles/backend_setup : Gather facts to determine the OS
distribution] ***
ok: [host3.example.com]
ok: [host2.example.com]
ok: [host1.example.com]
TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for
debian systems.] ***
skipping: [host2.example.com]
skipping: [host3.example.com]
skipping: [host1.example.com]
TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for RHEL
systems.] ***
ok: [host1.example.com]
ok: [host2.example.com]
ok: [host3.example.com]
TASK [gluster.infra/roles/backend_setup : Install python-yaml package for
Debian systems] ***
skipping: [host2.example.com]
skipping: [host3.example.com]
skipping: [host1.example.com]
TASK [gluster.infra/roles/backend_setup : Initialize vdo_devs array] ***********
ok: [host2.example.com]
ok: [host3.example.com]
ok: [host1.example.com]
TASK [gluster.infra/roles/backend_setup : Record VDO devices (if any)] *********
skipping: [host2.example.com] => (item={u'vgname': u'gluster_vg_sdb',
u'pvname': u'/dev/sdb'})
skipping: [host2.example.com] => (item={u'vgname': u'gluster_vg_sdc',
u'pvname': u'/dev/sdc'})
skipping: [host2.example.com] => (item={u'vgname': u'gluster_vg_sdd',
u'pvname': u'/dev/sdd'})
skipping: [host3.example.com] => (item={u'vgname': u'gluster_vg_sdb',
u'pvname': u'/dev/sdb'})
skipping: [host3.example.com] => (item={u'vgname': u'gluster_vg_sdc',
u'pvname': u'/dev/sdc'})
skipping: [host3.example.com] => (item={u'vgname': u'gluster_vg_sdd',
u'pvname': u'/dev/sdd'})
skipping: [host1.example.com] => (item={u'vgname': u'gluster_vg_sdb',
u'pvname': u'/dev/sdb'})
skipping: [host1.example.com] => (item={u'vgname': u'gluster_vg_sdc',
u'pvname': u'/dev/sdc'})
skipping: [host1.example.com] => (item={u'vgname': u'gluster_vg_sdd',
u'pvname': u'/dev/sdd'})
TASK [gluster.infra/roles/backend_setup : Enable and start vdo service] ********
skipping: [host2.example.com]
skipping: [host3.example.com]
skipping: [host1.example.com]
TASK [gluster.infra/roles/backend_setup : Create VDO with specified size] ******
skipping: [host2.example.com]
skipping: [host3.example.com]
skipping: [host1.example.com]
TASK [gluster.infra/roles/backend_setup : Check if valid disktype is provided]
***
skipping: [host2.example.com]
skipping: [host3.example.com]
skipping: [host1.example.com]
TASK [gluster.infra/roles/backend_setup : Set PV data alignment for JBOD] ******
ok: [host2.example.com]
ok: [host3.example.com]
ok: [host1.example.com]
TASK [gluster.infra/roles/backend_setup : Set PV data alignment for RAID] ******
skipping: [host2.example.com]
skipping: [host3.example.com]
skipping: [host1.example.com]
TASK [gluster.infra/roles/backend_setup : Set VG physical extent size for RAID]
***
skipping: [host2.example.com]
skipping: [host3.example.com]
skipping: [host1.example.com]
TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
failed: [host1.example.com] (item={u'vgname': u'gluster_vg_sdb', u'pvname':
u'/dev/sdb'}) => {"changed": false, "err": " Device /dev/sdb excluded by a
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg":
"Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [host3.example.com] (item={u'vgname': u'gluster_vg_sdb', u'pvname':
u'/dev/sdb'}) => {"changed": false, "err": " Device /dev/sdb excluded by a
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg":
"Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [host2.example.com] (item={u'vgname': u'gluster_vg_sdb', u'pvname':
u'/dev/sdb'}) => {"changed": false, "err": " Device /dev/sdb excluded by a
filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg":
"Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [host3.example.com] (item={u'vgname': u'gluster_vg_sdc', u'pvname':
u'/dev/sdc'}) => {"changed": false, "err": " Device /dev/sdc excluded by a
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"}, "msg":
"Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [host1.example.com] (item={u'vgname': u'gluster_vg_sdc', u'pvname':
u'/dev/sdc'}) => {"changed": false, "err": " Device /dev/sdc excluded by a
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"}, "msg":
"Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [host2.example.com] (item={u'vgname': u'gluster_vg_sdc', u'pvname':
u'/dev/sdc'}) => {"changed": false, "err": " Device /dev/sdc excluded by a
filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"}, "msg":
"Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [host1.example.com] (item={u'vgname': u'gluster_vg_sdd', u'pvname':
u'/dev/sdd'}) => {"changed": false, "err": " Device /dev/sdd excluded by a
filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"}, "msg":
"Creating physical volume '/dev/sdd' failed", "rc": 5}
failed: [host3.example.com] (item={u'vgname': u'gluster_vg_sdd', u'pvname':
u'/dev/sdd'}) => {"changed": false, "err": " Device /dev/sdd excluded by a
filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"}, "msg":
"Creating physical volume '/dev/sdd' failed", "rc": 5}
failed: [host2.example.com] (item={u'vgname': u'gluster_vg_sdd', u'pvname':
u'/dev/sdd'}) => {"changed": false, "err": " Device /dev/sdd excluded by a
filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"}, "msg":
"Creating physical volume '/dev/sdd' failed", "rc": 5}
NO MORE HOSTS LEFT *************************************************************
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit
@/usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.retry
PLAY RECAP *********************************************************************
host1.example.com : ok=8 changed=1 unreachable=0 failed=1
host2.example.com : ok=8 changed=1 unreachable=0 failed=1
host3.example.com : ok=8 changed=1 unreachable=0 failed=1
---------------------
host1.example.com
PV VG Fmt Attr PSize PFree
/dev/sda2 onn_host1 lvm2 a-- <557.88g <99.23g
Disk /dev/sda: 600GB
Disk /dev/sdb: 3001GB
Disk /dev/sdc: 3001GB
Disk /dev/sdd: 3001GB
Disk /dev/sde: 480GB
Error: /dev/mapper/onn_host1-pool00_tmeta: unrecognised disk label
---------------------
host2.example.com
PV VG Fmt Attr PSize PFree
/dev/sda2 onn_host2 lvm2 a-- <557.88g <99.23g
Disk /dev/sda: 600GB
Disk /dev/sdb: 3001GB
Disk /dev/sdc: 3001GB
Disk /dev/sdd: 3001GB
Disk /dev/sde: 480GB
Error: /dev/mapper/onn_host2-pool00_tmeta: unrecognised disk label
---------------------
host3.example.com
PV VG Fmt Attr PSize PFree
/dev/sda2 onn_host3 lvm2 a-- <557.88g <99.23g
Error: /dev/mapper/onn_host3-pool00_tmeta: unrecognised disk label
Disk /dev/sda: 600GB
Disk /dev/sdb: 3001GB
Disk /dev/sdc: 3001GB
Disk /dev/sdd: 3001GB
Disk /dev/sde: 480GB
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/[email protected]/message/OJWUV5JJ7TAU7LJZXAOOZJZIGSBQVSUK/