If the disk was previously used, you may need to run 'wipefs -a /dev/sdb' to
clear out any previous partitioning, filesystem signatures, etc.

If the installer can't create the gluster PV, it is often because the drive
needs to be added to the multipath blacklist.

Run lsblk to find the wwid, then add it to the blacklist in /etc/multipath.conf:

[root@ovirtnode2 ~]# lsblk /dev/sdb

NAME                                  MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sdb                                     8:16   0  200G  0 disk
└─3678da6e715b018f01f1abdb887594aae   253:2    0  200G  0 mpath
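As an aside, the map name lsblk prints for that mpath child is the wwid
itself, so the lookup can be scripted. A sketch against saved lsblk output
(the sample file below just mirrors the listing above; on a live host
'multipath -ll' shows the same wwid as the map name):

```shell
# Extract the wwid from saved lsblk output (sample taken from this thread).
cat > lsblk.out <<'EOF'
NAME  MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb    8:16 0 200G 0 disk
`-3678da6e715b018f01f1abdb887594aae 253:2 0 200G 0 mpath
EOF
# First field of the mpath line is the wwid, minus the tree-drawing prefix.
WWID=$(awk '$NF == "mpath" { gsub(/^[^0-9a-f]+/, "", $1); print $1 }' lsblk.out)
echo "$WWID"    # 3678da6e715b018f01f1abdb887594aae
```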

Edit /etc/multipath.conf and append the disk's wwid to the blacklist section:

blacklist {
        wwid 3678da6e715b018f01f1abdb887594aae
}
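If you have several nodes to fix, the same edit can be scripted. A sketch
that writes to a local example file (point CONF at /etc/multipath.conf on a
real host; note an existing config may already contain a blacklist section
you should merge into rather than duplicate):

```shell
WWID=3678da6e715b018f01f1abdb887594aae    # wwid from the lsblk output above
CONF=multipath.conf.example               # use /etc/multipath.conf on a real host
printf 'blacklist {\n    wwid %s\n}\n' "$WWID" >> "$CONF"
cat "$CONF"
```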
Then restart the multipathd service:

service multipathd restart

(on systemd-based hosts, 'systemctl restart multipathd' is equivalent)



On Tue, Feb 9, 2021 at 2:19 PM Strahil Nikolov via Users <[email protected]>
wrote:

> What is the output of 'lsblk -t' ?
>
> Best Regards,
> Strahil Nikolov
>
> Ovirt newbie here - using v 4.4.4
>
> Have been trying for days to get this installed on my HP DL380p G6. I have
> a 2-disk 170GB Raid 0 for the OS and a 6 x 330GB disk Raid 5 for Gluster.
> DNS is all set up (that took some working out), but I just can't fathom out
> what's (not) happening here. Block size is returned as 512.
>
> I've had some help on Reddit where I've been told that oVirt is seeing my
> single local disk as a multipath device, which it is not!? I think I
> removed the flag, but it still fails here.
>
> So, the Gluster install fails quite early on, though it carries on
> creating all the volumes (with default settings) but then gives me the
> 'Deployment Failed' message :( Here is where it fails....
>
> Any help gratefully received!
>
> TASK [fail]
> ********************************************************************
>
> task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:62
>
> skipping: [ovirt-gluster.whichelo.com] => (item=[{'cmd': 'blockdev
> --getss /dev/sdb | grep -Po -q "512" && echo true || echo false\n',
> 'stdout': 'true', 'stderr': '', 'rc': 0, 'start': '2021-02-07
> 13:21:10.237701', 'end': '2021-02-07 13:21:10.243111', 'delta':
> '0:00:00.005410', 'changed': True, 'invocation': {'module_args':
> {'_raw_params': 'blockdev --getss /dev/sdb | grep -Po -q "512" && echo true
> || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline':
> True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable':
> None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines':
> ['true'], 'stderr_lines': [], 'failed': False, 'item': {'vgname':
> 'gluster_vg_sdb', 'pvname': '/dev/sdb'}, 'ansible_loop_var': 'item'},
> {'cmd': 'blockdev --getss /dev/sdb | grep -Po -q "4096" && echo true ||
> echo false\n', 'stdout': 'false', 'stderr': '', 'rc': 0, 'start':
> '2021-02-07 13:21:14.760897', 'end': '2021-02-07 13:21:14.766395', 'delta':
> '0:00:00.005498', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss
> /dev/sdb | grep -Po -q "4096" && echo true || echo false\n', '_uses_shell':
> True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True,
> 'argv': None, 'chdir': None, 'executable': None, 'creates': None,
> 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'],
> 'stderr_lines': [], 'failed': False, 'item': {'vgname': 'gluster_vg_sdb',
> 'pvname': '/dev/sdb'}, 'ansible_loop_var': 'item'}]) =>
> {"ansible_loop_var": "item", "changed": false, "item":
> [{"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss
> /dev/sdb | grep -Po -q \"512\" && echo true || echo false\n", "delta":
> "0:00:00.005410", "end": "2021-02-07 13:21:10.243111", "failed": false,
> "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/sdb |
> grep -Po -q \"512\" && echo true || echo false\n", "_uses_shell": true,
> "argv": null, "chdir": null, "creates": null, "executable": null,
> "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}},
> "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "rc": 0,
> "start": "2021-02-07 13:21:10.237701", "stderr": "", "stderr_lines": [],
> "stdout": "true", "stdout_lines": ["true"]}, {"ansible_loop_var": "item",
> "changed": true, "cmd": "blockdev --getss /dev/sdb | grep -Po -q \"4096\" && echo true || echo
> false\n", "delta": "0:00:00.005498", "end": "2021-02-07 13:21:14.766395",
> "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev
> --getss /dev/sdb | grep -Po -q \"4096\" && echo true || echo false\n",
> "_uses_shell": true, "argv": null, "chdir": null, "creates": null,
> "executable": null, "removes": null, "stdin": null, "stdin_add_newline":
> true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname":
> "/dev/sdb", "vgname": "gluster_vg_sdb"}, "rc": 0, "start": "2021-02-07
> 13:21:14.760897", "stderr": "", "stderr_lines": [], "stdout": "false",
> "stdout_lines": ["false"]}], "skip_reason": "Conditional result was False"}
>
>
> hc_wizard.yml excerpt:
>
>     - name: Check if block device is 4KN
>       shell: >
>         blockdev --getss {{ item.pvname }} | grep -Po -q "4096"  && echo
> true || echo false
>       register: is4KN
>       with_items: "{{ gluster_infra_volume_groups }}"
>
>     - fail:  ################ THIS IS LINE 62 #####################################
>         msg: "Mix of 4K and 512 Block devices are not allowed"
>       with_nested:
>         - "{{ is512.results }}"
>         - "{{ is4KN.results }}"
>       when: item[0].stdout|bool and item[1].stdout|bool
>
>     # logical block size of 512 bytes. To disable the check set
>     # gluster_features_512B_check to false. DELETE the below task once
>     # OVirt limitation is fixed
>     - name: Check if disks have logical block size of 512B
>       command: blockdev --getss {{ item.pvname }}
>       register: logical_blk_size
>       when: gluster_infra_volume_groups is defined and
>             item.pvname is not search("/dev/mapper") and
>             gluster_features_512B_check|default(false)
>
> Can anyone help?
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/[email protected]/message/I6Y5U2R7WIPQB7VHFH62NPD75TLRRSWE/
