> On 26 April 2016 at 16:58, Samuel Just <sj...@redhat.com> wrote:
> 
> 
> Can you attach the OSDMap (ceph osd getmap -o <mapfile>)?
> -Sam
> 

Henrik contacted me to look at this and this is what I found:

0x0000000000b18b81 in crush_choose_firstn (map=map@entry=0x1f00200, bucket=0x0, 
weight=weight@entry=0x1f2b880, weight_max=weight_max@entry=30, 
x=x@entry=1731224833, numrep=2, type=1, out=0x7fffdc036508, outpos=0, 
out_size=2, tries=51, recurse_tries=1, local_retries=0, 
    local_fallback_retries=0, recurse_to_leaf=1, vary_r=0, out2=0x7fffdc036510, 
parent_r=0) at crush/mapper.c:345
345     crush/mapper.c: No such file or directory.

A bit more output from GDB:

#0  0x0000000000b18b81 in crush_choose_firstn (map=map@entry=0x1f00200, 
bucket=0x0, weight=weight@entry=0x1f2b880, weight_max=weight_max@entry=30, 
x=x@entry=1731224833, numrep=2, type=1, out=0x7fffdc036508, outpos=0, 
out_size=2, tries=51, recurse_tries=1, local_retries=0, 
    local_fallback_retries=0, recurse_to_leaf=1, vary_r=0, out2=0x7fffdc036510, 
parent_r=0) at crush/mapper.c:345
#1  0x0000000000b194cb in crush_do_rule (map=0x1f00200, ruleno=<optimized out>, 
x=1731224833, result=0x7fffdc036520, result_max=<optimized out>, 
weight=0x1f2b880, weight_max=30, scratch=<optimized out>) at crush/mapper.c:794
#2  0x0000000000a61680 in do_rule (weight=std::vector of length 30, capacity 30 
= {...}, maxout=2, out=std::vector of length 0, capacity 0, x=1731224833, 
rule=0, this=0x1f72340) at ./crush/CrushWrapper.h:939
#3  OSDMap::_pg_to_osds (this=this@entry=0x1f46800, pool=..., pg=..., 
osds=osds@entry=0x7fffdc036600, primary=primary@entry=0x7fffdc0365ec, 
ppps=0x7fffdc0365f4) at osd/OSDMap.cc:1417

It seems that CRUSH can't find entries it needs in the CRUSHMap: in this case
the root 'default' was removed while the default ruleset still refers to it.
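
To confirm this against the map the cluster is currently serving, something
like this should show it (just a sketch, assuming the mon is still reachable;
the paths are only examples):

$ ceph osd getcrushmap -o /tmp/crush.current
$ crushtool -d /tmp/crush.current -o /tmp/crush.current.txt
$ grep -A2 'step take' /tmp/crush.current.txt
# whatever bucket the 'take' step names should exist as a root in the same file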

The cluster is running 0.80.11.

I extracted the CRUSHMaps from the OSDMaps on osd.0:

$ for i in {1392..1450}; do
    find -name "osdmap*${i}*" -exec osdmaptool --export-crush /tmp/crush.${i} {} \;
    crushtool -d /tmp/crush.${i} -o /tmp/crush.${i}.txt
  done
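
As a quick sanity check one could also run each extracted map through
crushtool's test mode (just a sketch; for the broken epochs this may simply hit
the same crash as the OSDs, which by itself identifies them):

$ for i in {1392..1450}; do
    echo "epoch ${i}:"
    crushtool -i /tmp/crush.${i} --test --rule 0 --num-rep 2 --show-bad-mappings
  done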

Here I see that in map 1433 the root 'default' no longer exists and the crush
ruleset now refers to 'bucket0'. This crushmap is attached.

rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take bucket0
        step chooseleaf firstn 0 type host
        step emit
}

The root bucket0 doesn't exist either.

bucket0 seems like something that was created by Ceph/CRUSH itself and not by
the user.

I'm thinking about injecting a fixed CRUSHMap, in which bucket0 does exist,
into this OSDMap. Does that seem like a sane thing to do?
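
Roughly what I have in mind (only a sketch; the filenames are examples and
"osdmap.1433" stands for the osdmap file found by the find command above):

$ crushtool -d /tmp/crush.1433 -o /tmp/crush.1433.txt
# edit /tmp/crush.1433.txt: add a 'root bucket0' containing the hosts, or point
# the rule's 'step take' back at an existing root
$ crushtool -c /tmp/crush.1433.txt -o /tmp/crush.1433.fixed
$ cp osdmap.1433 osdmap.1433.orig
$ osdmaptool osdmap.1433 --import-crush /tmp/crush.1433.fixed

For the map the mon currently has, the equivalent would be
'ceph osd setcrushmap -i /tmp/crush.1433.fixed'.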

Wido


> On Tue, Apr 26, 2016 at 2:07 AM, Henrik Svensson <henrik.svens...@sectra.com
> > wrote:
> 
> > Hi!
> >
> > We have a three-node Ceph cluster with 10 OSDs each.
> >
> > We bought 3 new machines with an additional 30 disks that will reside in
> > another location.
> > Before adding these machines we modified the default CRUSH map.
> >
> > After modifying the (default) CRUSH map with these commands, the cluster
> > went down:
> >
> > ————————————————
> > ceph osd crush add-bucket dc1 datacenter
> > ceph osd crush add-bucket dc2 datacenter
> > ceph osd crush add-bucket availo datacenter
> > ceph osd crush move dc1 root=default
> > ceph osd crush move lkpsx0120 root=default datacenter=dc1
> > ceph osd crush move lkpsx0130 root=default datacenter=dc1
> > ceph osd crush move lkpsx0140 root=default datacenter=dc1
> > ceph osd crush move dc2 root=default
> > ceph osd crush move availo root=default
> > ceph osd crush add-bucket sectra root
> > ceph osd crush move dc1 root=sectra
> > ceph osd crush move dc2 root=sectra
> > ceph osd crush move dc3 root=sectra
> > ceph osd crush move availo root=sectra
> > ceph osd crush remove default
> > ————————————————
> >
> > I tried to revert the CRUSH map but no luck:
> >
> > ————————————————
> > ceph osd crush add-bucket default root
> > ceph osd crush move lkpsx0120 root=default
> > ceph osd crush move lkpsx0130 root=default
> > ceph osd crush move lkpsx0140 root=default
> > ceph osd crush remove sectra
> > ————————————————
> >
> > After trying to restart the cluster (and even rebooting the machines), no
> > OSD started up again.
> > But ceph osd tree gave this output, stating that certain OSDs are up (even
> > though the processes are not running):
> >
> > ————————————————
> > # id weight type name up/down reweight
> > -1 163.8 root default
> > -2 54.6 host lkpsx0120
> > 0 5.46 osd.0 down 0
> > 1 5.46 osd.1 down 0
> > 2 5.46 osd.2 down 0
> > 3 5.46 osd.3 down 0
> > 4 5.46 osd.4 down 0
> > 5 5.46 osd.5 down 0
> > 6 5.46 osd.6 down 0
> > 7 5.46 osd.7 down 0
> > 8 5.46 osd.8 down 0
> > 9 5.46 osd.9 down 0
> > -3 54.6 host lkpsx0130
> > 10 5.46 osd.10 down 0
> > 11 5.46 osd.11 down 0
> > 12 5.46 osd.12 down 0
> > 13 5.46 osd.13 down 0
> > 14 5.46 osd.14 down 0
> > 15 5.46 osd.15 down 0
> > 16 5.46 osd.16 down 0
> > 17 5.46 osd.17 down 0
> > 18 5.46 osd.18 up 1
> > 19 5.46 osd.19 up 1
> > -4 54.6 host lkpsx0140
> > 20 5.46 osd.20 up 1
> > 21 5.46 osd.21 down 0
> > 22 5.46 osd.22 down 0
> > 23 5.46 osd.23 down 0
> > 24 5.46 osd.24 down 0
> > 25 5.46 osd.25 up 1
> > 26 5.46 osd.26 up 1
> > 27 5.46 osd.27 up 1
> > 28 5.46 osd.28 up 1
> > 29 5.46 osd.29 up 1
> > ————————————————
> >
> > The monitor starts/restarts OK (only one monitor exists).
> > But when starting one OSD, nothing shows up in ceph -w.
> >
> > Here is the ceph mon_status:
> >
> > ————————————————
> > { "name": "lkpsx0120",
> >   "rank": 0,
> >   "state": "leader",
> >   "election_epoch": 1,
> >   "quorum": [
> >         0],
> >   "outside_quorum": [],
> >   "extra_probe_peers": [],
> >   "sync_provider": [],
> >   "monmap": { "epoch": 4,
> >       "fsid": "9244194a-5e10-47ae-9287-507944612f95",
> >       "modified": "0.000000",
> >       "created": "0.000000",
> >       "mons": [
> >             { "rank": 0,
> >               "name": "lkpsx0120",
> >               "addr": "10.15.2.120:6789\/0"}]}}
> > ————————————————
> >
> > Here is the ceph.conf file
> >
> > ————————————————
> > [global]
> > fsid = 9244194a-5e10-47ae-9287-507944612f95
> > mon_initial_members = lkpsx0120
> > mon_host = 10.15.2.120
> > #debug osd = 20
> > #debug ms = 1
> > auth_cluster_required = cephx
> > auth_service_required = cephx
> > auth_client_required = cephx
> > filestore_xattr_use_omap = true
> > osd_crush_chooseleaf_type = 1
> > osd_pool_default_size = 2
> > public_network = 10.15.2.0/24
> > cluster_network = 10.15.4.0/24
> > rbd_cache = true
> > rbd_cache_size = 67108864
> > rbd_cache_max_dirty = 50331648
> > rbd_cache_target_dirty = 33554432
> > rbd_cache_max_dirty_age = 2
> > rbd_cache_writethrough_until_flush = true
> > ————————————————
> >
> > Here is the decompiled crush map:
> >
> > ————————————————
> > # begin crush map
> > tunable choose_local_tries 0
> > tunable choose_local_fallback_tries 0
> > tunable choose_total_tries 50
> > tunable chooseleaf_descend_once 1
> >
> > # devices
> > device 0 osd.0
> > device 1 osd.1
> > device 2 osd.2
> > device 3 osd.3
> > device 4 osd.4
> > device 5 osd.5
> > device 6 osd.6
> > device 7 osd.7
> > device 8 osd.8
> > device 9 osd.9
> > device 10 osd.10
> > device 11 osd.11
> > device 12 osd.12
> > device 13 osd.13
> > device 14 osd.14
> > device 15 osd.15
> > device 16 osd.16
> > device 17 osd.17
> > device 18 osd.18
> > device 19 osd.19
> > device 20 osd.20
> > device 21 osd.21
> > device 22 osd.22
> > device 23 osd.23
> > device 24 osd.24
> > device 25 osd.25
> > device 26 osd.26
> > device 27 osd.27
> > device 28 osd.28
> > device 29 osd.29
> >
> > # types
> > type 0 osd
> > type 1 host
> > type 2 chassis
> > type 3 rack
> > type 4 row
> > type 5 pdu
> > type 6 pod
> > type 7 room
> > type 8 datacenter
> > type 9 region
> > type 10 root
> >
> > # buckets
> > host lkpsx0120 {
> > id -2 # do not change unnecessarily
> > # weight 54.600
> > alg straw
> > hash 0 # rjenkins1
> > item osd.0 weight 5.460
> > item osd.1 weight 5.460
> > item osd.2 weight 5.460
> > item osd.3 weight 5.460
> > item osd.4 weight 5.460
> > item osd.5 weight 5.460
> > item osd.6 weight 5.460
> > item osd.7 weight 5.460
> > item osd.8 weight 5.460
> > item osd.9 weight 5.460
> > }
> > host lkpsx0130 {
> > id -3 # do not change unnecessarily
> > # weight 54.600
> > alg straw
> > hash 0 # rjenkins1
> > item osd.10 weight 5.460
> > item osd.11 weight 5.460
> > item osd.12 weight 5.460
> > item osd.13 weight 5.460
> > item osd.14 weight 5.460
> > item osd.15 weight 5.460
> > item osd.16 weight 5.460
> > item osd.17 weight 5.460
> > item osd.18 weight 5.460
> > item osd.19 weight 5.460
> > }
> > host lkpsx0140 {
> > id -4 # do not change unnecessarily
> > # weight 54.600
> > alg straw
> > hash 0 # rjenkins1
> > item osd.20 weight 5.460
> > item osd.21 weight 5.460
> > item osd.22 weight 5.460
> > item osd.23 weight 5.460
> > item osd.24 weight 5.460
> > item osd.25 weight 5.460
> > item osd.26 weight 5.460
> > item osd.27 weight 5.460
> > item osd.28 weight 5.460
> > item osd.29 weight 5.460
> > }
> > root default {
> > id -1 # do not change unnecessarily
> > # weight 163.800
> > alg straw
> > hash 0 # rjenkins1
> > item lkpsx0120 weight 54.600
> > item lkpsx0130 weight 54.600
> > item lkpsx0140 weight 54.600
> > }
> >
> > # rules
> > rule replicated_ruleset {
> > ruleset 0
> > type replicated
> > min_size 1
> > max_size 10
> > step take default
> > step chooseleaf firstn 0 type host
> > step emit
> > }
> >
> > # end crush map
> > ————————————————
> >
> > The operating system is Debian 8.0 and the Ceph version is 0.80.7, as stated
> > in the crash log.
> >
> > We increased the log level and tried to start osd.1 as an example. All
> > OSDs we tried to start experience the same problem and die.
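> >
> > The log level was raised with settings along these lines (the same ones that
> > are commented out in the ceph.conf above):
> >
> > ————————————————
> > debug osd = 20
> > debug ms = 1
> > ————————————————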
> >
> > The log file from OSD 1 (ceph-osd.1.log) can be found here:
> > https://www.dropbox.com/s/dqunlufh0qtked5/ceph-osd.1.log.zip?dl=0
> >
> > As of now, all systems are down, including the KVM cluster that depends on
> > Ceph.
> >
> > Best regards,
> >
> > Henrik
> > ------------------------------
> > Henrik Svensson
> > OpIT
> > Sectra AB
> > Teknikringen 20, 58330 Linköping, Sweden
> > E-mail: henrik.svens...@sectra.com
> > Phone: +46 (0)13 352 884
> > Cellular: +46 (0)70 395141
> > Web: www.sectra.com <http://www.sectra.com/medical/>
> >
> > ------------------------------
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8
device 9 osd.9
device 10 osd.10
device 11 osd.11
device 12 osd.12
device 13 osd.13
device 14 osd.14
device 15 osd.15
device 16 osd.16
device 17 osd.17
device 18 osd.18
device 19 osd.19
device 20 osd.20
device 21 osd.21
device 22 osd.22
device 23 osd.23
device 24 osd.24
device 25 osd.25
device 26 osd.26
device 27 osd.27
device 28 osd.28
device 29 osd.29

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host lkpsx0120 {
        id -2           # do not change unnecessarily
        # weight 54.600
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 5.460
        item osd.1 weight 5.460
        item osd.2 weight 5.460
        item osd.3 weight 5.460
        item osd.4 weight 5.460
        item osd.5 weight 5.460
        item osd.6 weight 5.460
        item osd.7 weight 5.460
        item osd.8 weight 5.460
        item osd.9 weight 5.460
}
host lkpsx0130 {
        id -3           # do not change unnecessarily
        # weight 54.600
        alg straw
        hash 0  # rjenkins1
        item osd.10 weight 5.460
        item osd.11 weight 5.460
        item osd.12 weight 5.460
        item osd.13 weight 5.460
        item osd.14 weight 5.460
        item osd.15 weight 5.460
        item osd.16 weight 5.460
        item osd.17 weight 5.460
        item osd.18 weight 5.460
        item osd.19 weight 5.460
}
host lkpsx0140 {
        id -4           # do not change unnecessarily
        # weight 54.600
        alg straw
        hash 0  # rjenkins1
        item osd.20 weight 5.460
        item osd.21 weight 5.460
        item osd.22 weight 5.460
        item osd.23 weight 5.460
        item osd.24 weight 5.460
        item osd.25 weight 5.460
        item osd.26 weight 5.460
        item osd.27 weight 5.460
        item osd.28 weight 5.460
        item osd.29 weight 5.460
}
datacenter dc1 {
        id -5           # do not change unnecessarily
        # weight 163.800
        alg straw
        hash 0  # rjenkins1
        item lkpsx0120 weight 54.600
        item lkpsx0130 weight 54.600
        item lkpsx0140 weight 54.600
}
datacenter dc2 {
        id -6           # do not change unnecessarily
        # weight 0.000
        alg straw
        hash 0  # rjenkins1
}
datacenter availo {
        id -7           # do not change unnecessarily
        # weight 0.000
        alg straw
        hash 0  # rjenkins1
}
root sectra {
        id -8           # do not change unnecessarily
        # weight 163.800
        alg straw
        hash 0  # rjenkins1
        item dc1 weight 163.800
        item dc2 weight 0.000
        item availo weight 0.000
}

# rules
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take bucket0
        step chooseleaf firstn 0 type host
        step emit
}

# end crush map
epoch 1467
fsid 9244194a-5e10-47ae-9287-507944612f95
created 2016-02-04 11:25:27.371491
modified 2016-04-26 15:29:35.699882
flags pauserd,pausewr,noout
pool 0 'data' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins 
pg_num 1024 pgp_num 1024 last_change 1143 flags hashpspool 
crash_replay_interval 45 stripe_width 0
pool 1 'metadata' replicated size 2 min_size 1 crush_ruleset 0 object_hash 
rjenkins pg_num 1024 pgp_num 1024 last_change 1145 flags hashpspool 
stripe_width 0
pool 3 'virtual-images' replicated size 3 min_size 1 crush_ruleset 0 
object_hash rjenkins pg_num 4096 pgp_num 4096 last_change 1135 flags hashpspool 
stripe_width 0
pool 4 'bareos' replicated size 2 min_size 1 crush_ruleset 0 object_hash 
rjenkins pg_num 1024 pgp_num 1024 last_change 1358 flags hashpspool 
stripe_width 0
pool 5 'iscsi' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
rjenkins pg_num 1024 pgp_num 1024 last_change 1360 flags hashpspool 
stripe_width 0
pool 6 'data-images' replicated size 3 min_size 1 crush_ruleset 0 object_hash 
rjenkins pg_num 4096 pgp_num 4096 last_change 1381 flags hashpspool 
stripe_width 0
max_osd 30
osd.0 down out weight 0 up_from 1375 up_thru 1416 down_at 1436 
last_clean_interval [1247,1374) 10.15.2.120:6800/11431 10.15.4.120:6811/1011431 
10.15.4.120:6812/1011431 10.15.2.120:6816/1011431 autoout,exists 
e7610d27-4841-46a5-8c20-09dea25a3ae7
osd.1 down out weight 0 up_from 1374 up_thru 1416 down_at 1437 
last_clean_interval [1247,1373) 10.15.2.120:6803/11596 10.15.4.120:6817/1011596 
10.15.4.120:6818/1011596 10.15.2.120:6807/1011596 autoout,exists 
7b8abf90-24e1-4f3c-81ae-b04d0e0e80fd
osd.2 down out weight 0 up_from 1374 up_thru 1415 down_at 1437 
last_clean_interval [1248,1373) 10.15.2.120:6808/11790 10.15.4.120:6807/1011790 
10.15.4.120:6808/1011790 10.15.2.120:6814/1011790 autoout,exists 
76a3100f-b754-4c49-8146-b12924a548d7
osd.3 down out weight 0 up_from 1374 up_thru 1416 down_at 1437 
last_clean_interval [1315,1373) 10.15.2.120:6804/7186 10.15.4.120:6815/1007186 
10.15.4.120:6816/1007186 10.15.2.120:6822/1007186 autoout,exists 
a134c5ee-d761-4128-8f75-f27f363dacfb
osd.4 down out weight 0 up_from 1374 up_thru 1422 down_at 1437 
last_clean_interval [1248,1373) 10.15.2.120:6817/12183 10.15.4.120:6809/1012183 
10.15.4.120:6810/1012183 10.15.2.120:6815/1012183 autoout,exists 
4d43dc0f-7d58-4d0c-a67b-bfeb01340526
osd.5 down out weight 0 up_from 1374 up_thru 1415 down_at 1437 
last_clean_interval [1248,1373) 10.15.2.120:6821/12367 10.15.4.120:6813/1012367 
10.15.4.120:6814/1012367 10.15.2.120:6819/1012367 autoout,exists 
02f12e83-23e0-4c1e-a40e-275a850fc48b
osd.6 down out weight 0 up_from 1373 up_thru 1416 down_at 1437 
last_clean_interval [1249,1372) 10.15.2.120:6826/12543 10.15.4.120:6805/1012543 
10.15.4.120:6806/1012543 10.15.2.120:6813/1012543 autoout,exists 
0b00f685-fb5f-4080-9456-026ba52351dc
osd.7 down out weight 0 up_from 1374 up_thru 1416 down_at 1437 
last_clean_interval [1249,1373) 10.15.2.120:6829/12768 10.15.4.120:6838/1012768 
10.15.4.120:6839/1012768 10.15.2.120:6823/1012768 autoout,exists 
70acf3b3-8685-4ab6-b4a4-63256f64dc96
osd.8 down out weight 0 up_from 1373 up_thru 1416 down_at 1437 
last_clean_interval [1248,1372) 10.15.2.120:6832/13013 10.15.4.120:6803/1013013 
10.15.4.120:6804/1013013 10.15.2.120:6812/1013013 autoout,exists 
f8367251-b9d3-46a7-8e2d-53ab151ed1cb
osd.9 down out weight 0 up_from 1374 up_thru 1417 down_at 1436 
last_clean_interval [1248,1373) 10.15.2.120:6835/13178 10.15.4.120:6824/1013178 
10.15.4.120:6825/1013178 10.15.2.120:6805/1013178 autoout,exists 
deab8ade-49db-4d52-b084-0a76538ff3eb
osd.10 down out weight 0 up_from 1382 up_thru 1416 down_at 1436 
last_clean_interval [1251,1381) 10.15.2.130:6800/30155 10.15.4.130:6822/1030155 
10.15.4.130:6823/1030155 10.15.2.130:6832/1030155 autoout,exists 
c8aba219-f8e5-43ee-afc4-a10a56c868bd
osd.11 down out weight 0 up_from 1382 up_thru 1415 down_at 1435 
last_clean_interval [1251,1381) 10.15.2.130:6803/30319 10.15.4.130:6800/1030319 
10.15.4.130:6824/1030319 10.15.2.130:6833/1030319 autoout,exists 
f88f0614-269c-455a-a8bc-1c8d3f29da94
osd.12 down out weight 0 up_from 1381 up_thru 1416 down_at 1437 
last_clean_interval [1252,1380) 10.15.2.130:6806/30533 10.15.4.130:6813/1030533 
10.15.4.130:6821/1030533 10.15.2.130:6831/1030533 autoout,exists 
83f49445-11dd-46e7-9582-c5956d29c916
osd.13 down out weight 0 up_from 1385 up_thru 1416 down_at 1437 
last_clean_interval [1321,1383) 10.15.2.130:6809/5196 10.15.4.130:6809/1005196 
10.15.4.130:6826/1005196 10.15.2.130:6834/1005196 autoout,exists 
decf1b0a-9988-442a-9eac-6895c86a993d
osd.14 down out weight 0 up_from 1384 up_thru 1416 down_at 1437 
last_clean_interval [1252,1383) 10.15.2.130:6812/31168 10.15.4.130:6811/1031168 
10.15.4.130:6825/1031168 10.15.2.130:6816/1031168 autoout,exists 
f76eb3a0-b57c-4a2c-b438-5171692e9ad2
osd.15 down out weight 0 up_from 1384 up_thru 1416 down_at 1437 
last_clean_interval [1252,1383) 10.15.2.130:6815/31381 10.15.4.130:6819/1031381 
10.15.4.130:6820/1031381 10.15.2.130:6828/1031381 autoout,exists 
843f17d4-0c3c-40ab-81b0-0ed69a76e45d
osd.16 down out weight 0 up_from 1382 up_thru 1415 down_at 1437 
last_clean_interval [1253,1381) 10.15.2.130:6818/31592 10.15.4.130:6805/1031592 
10.15.4.130:6808/1031592 10.15.2.130:6830/1031592 autoout,exists 
0accc058-caf2-49b4-87cf-f07c89d4233f
osd.17 down out weight 0 up_from 1253 up_thru 1415 down_at 1437 
last_clean_interval [1209,1221) 10.15.2.130:6821/31809 10.15.4.130:6815/31809 
10.15.4.130:6816/31809 10.15.2.130:6822/31809 autoout,exists 
6bfb0461-77df-4c05-89a1-eb9b730db9b6
osd.18 up   in  weight 1 up_from 1253 up_thru 1416 down_at 1221 
last_clean_interval [1210,1220) 10.15.2.130:6824/32487 10.15.4.130:6817/32487 
10.15.4.130:6818/32487 10.15.2.130:6825/32487 exists,up 
4162bb4e-2969-469c-8dca-7cd6eb284630
osd.19 up   in  weight 1 up_from 1382 up_thru 1416 down_at 1380 
last_clean_interval [1254,1381) 10.15.2.130:6827/32666 10.15.4.130:6802/1032666 
10.15.4.130:6803/1032666 10.15.2.130:6804/1032666 exists,up 
ff653e49-ef98-44e3-9eab-9b48bfd6708c
osd.20 up   in  weight 1 up_from 1377 up_thru 1416 down_at 1373 
last_clean_interval [1256,1376) 10.15.2.140:6800/9830 10.15.4.140:6800/2009830 
10.15.4.140:6813/2009830 10.15.2.140:6834/2009830 exists,up 
8b2dd5e9-388a-4cad-9887-10f011a0852b
osd.21 down out weight 0 up_from 1377 up_thru 1416 down_at 1436 
last_clean_interval [1257,1376) 10.15.2.140:6803/9993 10.15.4.140:6808/2009993 
10.15.4.140:6823/2009993 10.15.2.140:6837/2009993 autoout,exists 
4ae357c6-9979-4c62-8801-cea9d8cd446a
osd.22 down out weight 0 up_from 1378 up_thru 1419 down_at 1436 
last_clean_interval [1257,1376) 10.15.2.140:6806/10186 10.15.4.140:6828/2010186 
10.15.4.140:6829/2010186 10.15.2.140:6839/2010186 autoout,exists 
bcfc8d9b-0abd-45f4-9935-40324a588350
osd.23 down out weight 0 up_from 1378 up_thru 1416 down_at 1436 
last_clean_interval [1331,1376) 10.15.2.140:6807/23388 10.15.4.140:6804/2023388 
10.15.4.140:6822/2023388 10.15.2.140:6810/2023388 autoout,exists 
e633beb9-ed92-45b8-a567-863f1c7bd07b
osd.24 down out weight 0 up_from 1377 up_thru 1416 down_at 1433 
last_clean_interval [1257,1376) 10.15.2.140:6815/10764 10.15.4.140:6802/2010764 
10.15.4.140:6824/2010764 10.15.2.140:6816/2010764 autoout,exists 
b86adecd-afd5-4d01-9c54-67331e638726
osd.25 up   in  weight 1 up_from 1377 up_thru 1416 down_at 1373 
last_clean_interval [1258,1376) 10.15.2.140:6818/10964 10.15.4.140:6817/2010964 
10.15.4.140:6819/2010964 10.15.2.140:6823/2010964 exists,up 
e3d0c5a6-648d-4e10-bab2-8fedc7e06825
osd.26 up   in  weight 1 up_from 1378 up_thru 1421 down_at 1373 
last_clean_interval [1258,1376) 10.15.2.140:6821/11169 10.15.4.140:6814/2011169 
10.15.4.140:6826/2011169 10.15.2.140:6836/2011169 exists,up 
941aebd2-6054-4857-95c5-9821d286d0ea
osd.27 up   in  weight 1 up_from 1377 up_thru 1416 down_at 1373 
last_clean_interval [1258,1376) 10.15.2.140:6825/12084 10.15.4.140:6811/2012084 
10.15.4.140:6820/2012084 10.15.2.140:6808/2012084 exists,up 
b095e30c-9291-4b70-ad5d-641cb8c75753
osd.28 up   in  weight 1 up_from 1378 up_thru 1416 down_at 1373 
last_clean_interval [1258,1376) 10.15.2.140:6828/12337 10.15.4.140:6825/2012337 
10.15.4.140:6827/2012337 10.15.2.140:6838/2012337 exists,up 
43b3b9c8-3a1d-4d80-b550-a96461ae9833
osd.29 up   in  weight 1 up_from 1377 up_thru 1416 down_at 1373 
last_clean_interval [1259,1376) 10.15.2.140:6831/12563 10.15.4.140:6801/2012563 
10.15.4.140:6803/2012563 10.15.2.140:6835/2012563 exists,up 
1074fe1b-3b07-4f51-bf82-2e5864b5b6d5