Hi, I tried to deploy Rocky in a multinode setup, but ceph-osd fails with:
failed: [xxxxxxxxxxx-poc2] (item=[0, {u'fs_uuid': u'', u'bs_wal_label': u'', u'external_journal': False, u'bs_blk_label': u'', u'bs_db_partition_num': u'', u'journal_device': u'', u'journal': u'', u'partition': u'/dev/nvme0n1', u'bs_wal_partition_num': u'', u'fs_label': u'', u'journal_num': 0, u'bs_wal_device': u'', u'partition_num': u'1', u'bs_db_label': u'', u'bs_blk_partition_num': u'', u'device': u'/dev/nvme0n1', u'bs_db_device': u'', u'partition_label': u'KOLLA_CEPH_OSD_BOOTSTRAP_BS', u'bs_blk_device': u''}]) => {
    "changed": true,
    "item": [
        0,
        {
            "bs_blk_device": "",
            "bs_blk_label": "",
            "bs_blk_partition_num": "",
            "bs_db_device": "",
            "bs_db_label": "",
            "bs_db_partition_num": "",
            "bs_wal_device": "",
            "bs_wal_label": "",
            "bs_wal_partition_num": "",
            "device": "/dev/nvme0n1",
            "external_journal": false,
            "fs_label": "",
            "fs_uuid": "",
            "journal": "",
            "journal_device": "",
            "journal_num": 0,
            "partition": "/dev/nvme0n1",
            "partition_label": "KOLLA_CEPH_OSD_BOOTSTRAP_BS",
            "partition_num": "1"
        }
    ]
}
MSG:
Container exited with non-zero return code 2
We tried to debug the error by starting the container with a
modified entrypoint, but we are stuck at the following point right now:
docker run \
    -e "HOSTNAME=10.0.153.11" \
    -e "JOURNAL_DEV=" \
    -e "JOURNAL_PARTITION=" \
    -e "JOURNAL_PARTITION_NUM=0" \
    -e "KOLLA_BOOTSTRAP=null" \
    -e "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS" \
    -e "KOLLA_SERVICE_NAME=bootstrap-osd-0" \
    -e "OSD_BS_BLK_DEV=" \
    -e "OSD_BS_BLK_LABEL=" \
    -e "OSD_BS_BLK_PARTNUM=" \
    -e "OSD_BS_DB_DEV=" \
    -e "OSD_BS_DB_LABEL=" \
    -e "OSD_BS_DB_PARTNUM=" \
    -e "OSD_BS_DEV=/dev/nvme0n1" \
    -e "OSD_BS_LABEL=KOLLA_CEPH_OSD_BOOTSTRAP_BS" \
    -e "OSD_BS_PARTNUM=1" \
    -e "OSD_BS_WAL_DEV=" \
    -e "OSD_BS_WAL_LABEL=" \
    -e "OSD_BS_WAL_PARTNUM=" \
    -e "OSD_DEV=/dev/nvme0n1" \
    -e "OSD_FILESYSTEM=xfs" \
    -e "OSD_INITIAL_WEIGHT=1" \
    -e "OSD_PARTITION=/dev/nvme0n1" \
    -e "OSD_PARTITION_NUM=1" \
    -e "OSD_STORETYPE=bluestore" \
    -e "USE_EXTERNAL_JOURNAL=false" \
    -v "/etc/kolla//ceph-osd/:/var/lib/kolla/config_files/:ro" \
    -v "/etc/localtime:/etc/localtime:ro" \
    -v "/dev/:/dev/" \
    -v "kolla_logs:/var/log/kolla/" \
    -ti --privileged=true --entrypoint /bin/bash \
    10.0.128.7:5000/openstack/openstack-kolla-cfg/ubuntu-source-ceph-osd:7.0.0.3
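To see what the container would normally execute before ceph-osd is started, one option is to dump the image's kolla start script. This is only a sketch: the path /usr/local/bin/kolla_extend_start is the usual kolla image layout and may differ in your build.

```shell
# Hypothetical: print the per-image start script baked into the ceph-osd image.
# The script path follows the common kolla layout and may differ in your image.
docker run --rm --entrypoint cat \
    10.0.128.7:5000/openstack/openstack-kolla-cfg/ubuntu-source-ceph-osd:7.0.0.3 \
    /usr/local/bin/kolla_extend_start
```
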
cat /var/lib/kolla/config_files/ceph.client.admin.keyring > /etc/ceph/ceph.client.admin.keyring
cat /var/lib/kolla/config_files/ceph.conf > /etc/ceph/ceph.conf

(bootstrap-osd-0)[root@985e2dee22bc /]# /usr/bin/ceph-osd -d --public-addr 10.0.153.11 --cluster-addr 10.0.153.11
usage: ceph-osd -i <ID> [flags]
--osd-data PATH data directory
--osd-journal PATH
journal file or block device
--mkfs create a [new] data directory
--mkkey generate a new secret key. This is normally used in
combination with --mkfs
--convert-filestore
run any pending upgrade operations
--flush-journal flush all data out of journal
--mkjournal initialize a new journal
--check-wants-journal
check whether a journal is desired
--check-allows-journal
check whether a journal is allowed
--check-needs-journal
check whether a journal is required
--debug_osd <N> set debug level (e.g. 10)
--get-device-fsid PATH
get OSD fsid for the given block device
--conf/-c FILE read configuration from the given configuration file
--id/-i ID set ID portion of my name
--name/-n TYPE.ID set name
--cluster NAME set cluster name (default: ceph)
--setuser USER set uid to user or uid (and gid to user's gid)
--setgroup GROUP set gid to group or gid
--version show version and quit
-d run in foreground, log to stderr.
-f run in foreground, log to usual location.
--debug_ms N set message debug level (e.g. 1)
2018-09-26 12:28:07.801066 7fbda64b4e40  0 ceph version 12.2.4 (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable), process (unknown), pid 46
2018-09-26 12:28:07.801078 7fbda64b4e40 -1 must specify '-i #' where # is the osd number
But it looks like "-i" is not set anywhere?

grep command /opt/stack/kolla-ansible/ansible/roles/ceph/templates/ceph-osd.json.j2
"command": "/usr/bin/ceph-osd -f --public-addr {{ hostvars[inventory_hostname]['ansible_' + storage_interface]['ipv4']['address'] }} --cluster-addr {{ hostvars[inventory_hostname]['ansible_' + cluster_interface]['ipv4']['address'] }}",
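For reference, the error comes from ceph-osd itself: without kolla's start scripts the OSD id has to be passed explicitly. A manual foreground run inside the container would look roughly like this; the id 0 is a placeholder, since during a real bootstrap the id is allocated by the cluster (e.g. via "ceph osd create").

```shell
# Sketch of a manual foreground run; "0" is a placeholder OSD id,
# which in a real bootstrap is allocated by the cluster first.
/usr/bin/ceph-osd -d -i 0 \
    --public-addr 10.0.153.11 \
    --cluster-addr 10.0.153.11
```
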
What's wrong with our setup?

All the best,
Flo

--
EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich
tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: [email protected]
web: http://www.everyware.ch
