Thank you Strahil. I have installed/updated:
dnf install --enablerepo="baseos" --enablerepo="appstream" \
    --enablerepo="extras" --enablerepo="ha" --enablerepo="plus" \
    centos-release-gluster8.noarch centos-release-storage-common.noarch

dnf upgrade --enablerepo="baseos" --enablerepo="appstream" \
    --enablerepo="extras" --enablerepo="ha" --enablerepo="plus"
I then cleaned up and re-ran the Ansible deployment, but I am still receiving
the same failure (below). As always, if you or anyone else has any ideas for
troubleshooting -
Gratefully,
Charles
TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on] **********
task path:
/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'engine', 'brick':
'/gluster_bricks/engine/engine', 'arbiter': 0}) => {"ansible_loop_var": "item",
"changed": true, "cmd": ["gluster", "volume", "heal", "engine",
"granular-entry-heal", "enable"], "delta": "0:00:10.100254", "end": "2021-01-14
18:07:16.192067", "item": {"arbiter": 0, "brick":
"/gluster_bricks/engine/engine", "volname": "engine"}, "msg": "non-zero return
code", "rc": 107, "start": "2021-01-14 18:07:06.091813", "stderr": "",
"stderr_lines": [], "stdout": "One or more bricks could be down. Please execute
the command again after bringing all bricks online and finishing any pending
heals\nVolume heal failed.", "stdout_lines": ["One or more bricks could be
down. Please execute the command again after bringing all bricks online and
finishing any pending heals", "Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'data', 'brick':
'/gluster_bricks/data/data', 'arbiter': 0}) => {"ansible_loop_var": "item",
"changed": true, "cmd": ["gluster", "volume", "heal", "data",
"granular-entry-heal", "enable"], "delta": "0:00:10.103147", "end": "2021-01-14
18:07:31.431419", "item": {"arbiter": 0, "brick": "/gluster_bricks/data/data",
"volname": "data"}, "msg": "non-zero return code", "rc": 107, "start":
"2021-01-14 18:07:21.328272", "stderr": "", "stderr_lines": [], "stdout": "One
or more bricks could be down. Please execute the command again after bringing
all bricks online and finishing any pending heals\nVolume heal failed.",
"stdout_lines": ["One or more bricks could be down. Please execute the command
again after bringing all bricks online and finishing any pending heals",
"Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'vmstore', 'brick':
'/gluster_bricks/vmstore/vmstore', 'arbiter': 0}) => {"ansible_loop_var":
"item", "changed": true, "cmd": ["gluster", "volume", "heal", "vmstore",
"granular-entry-heal", "enable"], "delta": "0:00:10.102582", "end": "2021-01-14
18:07:46.612788", "item": {"arbiter": 0, "brick":
"/gluster_bricks/vmstore/vmstore", "volname": "vmstore"}, "msg": "non-zero
return code", "rc": 107, "start": "2021-01-14 18:07:36.510206", "stderr": "",
"stderr_lines": [], "stdout": "One or more bricks could be down. Please execute
the command again after bringing all bricks online and finishing any pending
heals\nVolume heal failed.", "stdout_lines": ["One or more bricks could be
down. Please execute the command again after bringing all bricks online and
finishing any pending heals", "Volume heal failed."]}
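Before re-running the playbook I plan to check the bricks by hand and retry
the failing command; a sketch of those checks (the volume names come from the
failing task items above, everything else is my guess):

```shell
# Sketch: verify bricks are online, then retry the setting that failed (rc=107).
# Volume names are taken from the failing task items above; run on one node.
recheck_volume() {
    vol="$1"
    gluster volume status "$vol"        # every brick should show Online: Y
    gluster volume heal "$vol" info     # pending heal entries should be 0
    gluster volume heal "$vol" granular-entry-heal enable
}
# Usage: for v in engine data vmstore; do recheck_volume "$v"; done
```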
_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/[email protected]/message/BDLFPRPYPAY3UH2R4PVFL5XG4IKOERYP/