Just a note to those following along at home: the discussion on this has moved to
https://github.com/ceph/ceph/pull/16355 and
http://tracker.ceph.com/issues/17445

On Sat, Jul 15, 2017 at 7:32 AM 许雪寒 <xuxue...@360.cn> wrote:

> I debugged a little and found that this might have something to do with
> the "cache evict" and "list_snaps" operations.
>
> I debugged the "core" file of the process with gdb, and confirmed that the
> object that caused the segmentation fault is
> rbd_data.d18d71b948ac7.000000000000062e, just as the following logs
> indicates:
>
> (gdb) f 4
> #4  calc_snap_set_diff (cct=<optimized out>, snap_set=...,
> start=<optimized out>, end=<optimized out>, diff=0x7ffed23a4640,
> end_size=<optimized out>, end_exists=0x7ffed23a461f)
>     at librados/snap_set_diff.cc:41
> 41            a = r->snaps[0];
> (gdb) p r
> $1 = {cloneid = 22, snaps = std::vector of length 0, capacity 0, overlap =
> std::vector of length 2, capacity 2 = {{first = 0, second = 786432}, {first
> = 1523712, second = 2670592}},
>   size = 4194304}
> (gdb) f 5
> #5  0x00007fa87a4359c4 in compute_diffs (diffs=0x7ffed23a4630,
> this=0x7fa88f196820) at librbd/DiffIterate.cc:130
> 130                            &end_exists);
> (gdb) p m_oid
> $2 = "rbd_data.d18d71b948ac7.", '0' <repeats 13 times>, "62e"
>
> Then we checked the cache tier osd's log:
>
> 2017-07-14 18:27:11.122472 7f91a365f700 10 osd.58.objecter ms_dispatch
> 0x7f91e2a9c140 osd_op_reply(2877166 rbd_data.d18d71b948ac7.000000000000062e
> [copy-get max 8388608] v0'0 uv47138 ondisk = 0) v7
> 2017-07-14 18:27:11.122514 7f91b395d700 10 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164121 (2133'161077,2160'164121] local-les=1977 n=81 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164119 lcod
> 2160'164120 mlcod 2160'164120 active+clean] process_copy_chunk
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16 tid 2877166 (0)
> Success
> 2017-07-14 18:27:11.129590 7f91b395d700 10 osd.58.objecter _op_submit oid
> rbd_data.d18d71b948ac7.000000000000062e '@8' '@8' [assert-version
> v47138,copy-get max 8388608] tid 2877168 osd.0
> 2017-07-14 18:27:11.129602 7f91b395d700  1 -- 10.142.121.179:0/24945 -->
> 10.142.121.142:6824/6246 -- osd_op(osd.58.789:2877168 8.ce3acb8b
> rbd_data.d18d71b948ac7.000000000000062e [assert-version v47138,copy-get max
> 8388608] snapc 0=[]
> ack+read+rwordered+ignore_cache+ignore_overlay+map_snap_clone+known_if_redirected
> e2160) v7 -- ?+0 0x7f91ee305180 con 0x7f921046e880
> 2017-07-14 18:27:11.133206 7f91a365f700  1 -- 10.142.121.179:0/24945 <==
> osd.0 10.142.121.142:6824/6246 149 ==== osd_op_reply(2877168
> rbd_data.d18d71b948ac7.000000000000062e [assert-version v47138,copy-get max
> 8388608] v0'0 uv47138 ondisk = 0) v7 ==== 201+0+119 (2793013310 0
> 570526743) 0x7f91fb306680 con 0x7f921046e880
> 2017-07-14 18:27:11.133220 7f91a365f700 10 osd.58.objecter ms_dispatch
> 0x7f91e2a9c140 osd_op_reply(2877168 rbd_data.d18d71b948ac7.000000000000062e
> [assert-version v47138,copy-get max 8388608] v0'0 uv47138 ondisk = 0) v7
> 2017-07-14 18:27:11.133264 7f91b395d700 10 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164121 (2133'161077,2160'164121] local-les=1977 n=81 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164120 lcod
> 2160'164120 mlcod 2160'164120 active+clean] process_copy_chunk
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16 tid 2877168 (0)
> Success
> 2017-07-14 18:27:11.133475 7f91b395d700 10 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164121 (2133'161077,2160'164121] local-les=1977 n=81 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164120 lcod
> 2160'164120 mlcod 2160'164120 active+clean] finish_promote
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16 r=0 uv47138
> 2017-07-14 18:27:11.133495 7f91b395d700 20 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164121 (2133'161077,2160'164121] local-les=1977 n=81 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164120 lcod
> 2160'164120 mlcod 2160'164120 active+clean] simple_opc_create
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16
> 2017-07-14 18:27:11.133529 7f91b395d700 20 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164121 (2133'161077,2160'164121] local-les=1977 n=81 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164120 lcod
> 2160'164120 mlcod 2160'164120 active+clean] finish_ctx
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16 0x7f91eb158000 op
> promote
> 2017-07-14 18:27:11.133612 7f91b395d700  7 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164121 (2133'161077,2160'164121] local-les=1977 n=82 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164120 lcod
> 2160'164120 mlcod 2160'164120 active+clean] issue_repop rep_tid 29670336 o
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16
> 2017-07-14 18:27:11.133722 7f91b395d700  1 -- 10.143.208.51:6802/3024945
> --> 10.143.208.16:6819/4176877 -- osd_repop(osd.58.0:29670336 6.38b
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16 v 2160'164122) v1
> -- ?+676 0x7f91e84e6600 con 0x7f91f58c2580
> 2017-07-14 18:27:11.133770 7f91b395d700  1 -- 10.143.208.51:6802/3024945
> --> 10.143.208.50:6800/2039335 -- osd_repop(osd.58.0:29670336 6.38b
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16 v 2160'164122) v1
> -- ?+676 0x7f91e84e8a00 con 0x7f91ebf70280
> 2017-07-14 18:27:11.133959 7f91b395d700 20 snap_mapper.add_oid
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16 b,16
> .
> .
> .
> .
> .
> .
> .
> .
> .
> .
> .
> 2017-07-14 18:27:14.583134 7f91b215a700 10 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164124 (2133'161077,2160'164124] local-les=1977 n=82 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 luod=2160'164122
> lua=2160'164122 crt=2160'164121 lcod 2160'164121 mlcod 2160'164121
> active+clean] agent_maybe_evict evicting
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16(2160'164122
> osd.58.0:29670336 [16,b] data_digest s 4194304 uv 47138 dd f8fc1a50)
> 2017-07-14 18:27:14.583143 7f91b215a700 20 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164124 (2133'161077,2160'164124] local-les=1977 n=82 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 luod=2160'164122
> lua=2160'164122 crt=2160'164121 lcod 2160'164121 mlcod 2160'164121
> active+clean] simple_opc_create
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16
> 2017-07-14 18:27:14.583209 7f91b215a700 20 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164124 (2133'161077,2160'164124] local-les=1977 n=82 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 luod=2160'164122
> lua=2160'164122 crt=2160'164121 lcod 2160'164121 mlcod 2160'164121
> active+clean] finish_ctx
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16 0x7f91edf66000 op
> delete
> 2017-07-14 18:27:14.583242 7f91b215a700  7 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164124 (2133'161077,2160'164124] local-les=1977 n=81 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 luod=2160'164122
> lua=2160'164122 crt=2160'164121 lcod 2160'164121 mlcod 2160'164121
> active+clean] issue_repop rep_tid 29670402 o
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16
> 2017-07-14 18:27:14.583270 7f91b215a700  1 -- 10.143.208.51:6802/3024945
> --> 10.143.208.16:6819/4176877 -- osd_repop(osd.58.0:29670402 6.38b
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16 v 2160'164125) v1
> -- ?+248 0x7f91e4b6ac00 con 0x7f91f58c2580
> 2017-07-14 18:27:14.583390 7f91b215a700  1 -- 10.143.208.51:6802/3024945
> --> 10.143.208.50:6800/2039335 -- osd_repop(osd.58.0:29670402 6.38b
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16 v 2160'164125) v1
> -- ?+248 0x7f91e98c1000 con 0x7f91ebf70280
> 2017-07-14 18:27:14.583399 7f91b215a700 20 snap_mapper.remove_oid
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16
> 2017-07-14 18:27:14.583562 7f91b215a700 20 snap_mapper.get_snaps
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16 b,16
> .
> .
> .
> .
> .
> .
> .
> .
> .
> .
> 2017-07-14 18:28:24.616859 7f91ba16a700 10 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164136 (2133'161077,2160'164136] local-les=1977 n=76 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164134 lcod
> 2160'164135 mlcod 2160'164135 active+clean] do_op
> osd_op(client.3535109.0:1996 6.ce3acb8b
> rbd_data.d18d71b948ac7.000000000000062e [list-snaps] snapc 0=[]
> ack+read+known_if_redirected e2160) v7 may_read -> read-ordered flags
> ack+read+known_if_redirected
> 2017-07-14 18:28:24.616876 7f91ba16a700 10 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164136 (2133'161077,2160'164136] local-les=1977 n=76 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164134 lcod
> 2160'164135 mlcod 2160'164135 active+clean] get_object_context: found obc
> in cache: 0x7f91f94dc280
> 2017-07-14 18:28:24.616883 7f91ba16a700 10 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164136 (2133'161077,2160'164136] local-les=1977 n=76 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164134 lcod
> 2160'164135 mlcod 2160'164135 active+clean] get_object_context:
> 0x7f91f94dc280 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:head
> rwstate(none n=0 w=0) oi:
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:head(2117'157947
> client.596209.0:3574705 dirty s 4194304 uv 157947) ssc: 0x7f92032ca140
> snapset: 16=[16,b]:[16]+head
> 2017-07-14 18:28:24.616894 7f91ba16a700 10 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164136 (2133'161077,2160'164136] local-les=1977 n=76 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164134 lcod
> 2160'164135 mlcod 2160'164135 active+clean] find_object_context
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:snapdir @snapdir
> oi=6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:head(2117'157947
> client.596209.0:3574705 dirty s 4194304 uv 157947)
> 2017-07-14 18:28:24.616905 7f91ba16a700 10 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164136 (2133'161077,2160'164136] local-les=1977 n=76 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164134 lcod
> 2160'164135 mlcod 2160'164135 active+clean] agent_choose_mode flush_mode:
> idle evict_mode: idle num_objects: 76 num_bytes: 296751474
> num_objects_dirty: 38 num_objects_omap: 0 num_dirty: 38 num_user_objects:
> 72 num_user_bytes: 296751104 pool.info.target_max_bytes: 400000000000
> pool.info.target_max_objects: 1000000
> 2017-07-14 18:28:24.616912 7f91ba16a700 20 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164136 (2133'161077,2160'164136] local-les=1977 n=76 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164134 lcod
> 2160'164135 mlcod 2160'164135 active+clean] agent_choose_mode dirty
> 0.400943 full 0.759682
> 2017-07-14 18:28:24.616925 7f91ba16a700 10 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164136 (2133'161077,2160'164136] local-les=1977 n=76 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164134 lcod
> 2160'164135 mlcod 2160'164135 active+clean] get_object_context: found obc
> in cache: 0x7f91f94dc500
> 2017-07-14 18:28:24.616931 7f91ba16a700 10 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164136 (2133'161077,2160'164136] local-les=1977 n=76 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164134 lcod
> 2160'164135 mlcod 2160'164135 active+clean] get_object_context:
> 0x7f91f94dc500 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16
> rwstate(none n=0 w=0) oi:
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16(0'0 unknown.0.0:0
> [] s 0 uv 0) ssc: 0x7f92032ca140 snapset: 16=[16,b]:[16]+head
> 2017-07-14 18:28:24.616941 7f91ba16a700 10 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164136 (2133'161077,2160'164136] local-les=1977 n=76 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164134 lcod
> 2160'164135 mlcod 2160'164135 active+clean]  clone_oid
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16 obc 0x7f91f94dc500
> 2017-07-14 18:28:24.617015 7f91ba16a700 10 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164136 (2133'161077,2160'164136] local-les=1977 n=76 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164134 lcod
> 2160'164135 mlcod 2160'164135 active+clean] execute_ctx 0x7f91e323ea00
> 2017-07-14 18:28:24.617023 7f91ba16a700 10 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164136 (2133'161077,2160'164136] local-les=1977 n=76 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164134 lcod
> 2160'164135 mlcod 2160'164135 active+clean] do_op
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:head [list-snaps] ov
> 2117'157947
> 2017-07-14 18:28:24.617030 7f91ba16a700 10 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164136 (2133'161077,2160'164136] local-les=1977 n=76 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164134 lcod
> 2160'164135 mlcod 2160'164135 active+clean]  taking ondisk_read_lock
> 2017-07-14 18:28:24.617036 7f91ba16a700 10 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164136 (2133'161077,2160'164136] local-les=1977 n=76 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164134 lcod
> 2160'164135 mlcod 2160'164135 active+clean]  taking ondisk_read_lock for
> src 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16
> 2017-07-14 18:28:24.617053 7f91ba16a700 10 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164136 (2133'161077,2160'164136] local-les=1977 n=76 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164134 lcod
> 2160'164135 mlcod 2160'164135 active+clean] do_osd_op
> 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:head [list-snaps]
> 2017-07-14 18:28:24.617060 7f91ba16a700 10 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164136 (2133'161077,2160'164136] local-les=1977 n=76 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164134 lcod
> 2160'164135 mlcod 2160'164135 active+clean] do_osd_op  list-snaps
> 2017-07-14 18:28:24.617071 7f91ba16a700 20 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164136 (2133'161077,2160'164136] local-les=1977 n=76 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164134 lcod
> 2160'164135 mlcod 2160'164135 active+clean]  clone 16 snaps []
> 2017-07-14 18:28:24.617079 7f91ba16a700 10 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164136 (2133'161077,2160'164136] local-les=1977 n=76 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164134 lcod
> 2160'164135 mlcod 2160'164135 active+clean]  dropping ondisk_read_lock
> 2017-07-14 18:28:24.617182 7f91ba16a700 10 osd.58 pg_epoch: 2160 pg[6.38b(
> v 2160'164136 (2133'161077,2160'164136] local-les=1977 n=76 ec=279 les/c/f
> 1977/1977/0 1975/1976/789) [58,46,35] r=0 lpr=1976 crt=2160'164134 lcod
> 2160'164135 mlcod 2160'164135 active+clean]  dropping ondisk_read_lock for
> src 6:d1d35c73:::rbd_data.d18d71b948ac7.000000000000062e:16
>
> It showed that rbd_data.d18d71b948ac7.000000000000062e:16 got promoted at
> about 2017-07-14 18:27:11, at which time the "snaps" field of its object
> context was still [b,16]. Then, at about 2017-07-14 18:27:14, it got
> evicted.
> Then, at about 2017-07-14 18:28:24, a "list-snaps" request came in and
> got an empty snaps list for rbd_data.d18d71b948ac7.000000000000062e:16.
>
> I read the cache tier source code of the jewel version and found that,
> when an object gets evicted from the cache, its obc isn't removed from
> "object_contexts"; only its obs.oi is emptied. When a "list-snaps"
> request comes in, the "obs.oi" field isn't checked, and the "snaps"
> field of the "list-snaps" reply comes from "obc->obs.oi.snaps". So, when
> a "list-snaps" request arrives just after its target object's clones
> were evicted from the cache, but before their obcs have been removed from
> "object_contexts", an empty "snaps" list for those clones is returned to
> the client.
>
> Could this be right? Thank you:-)
> ________________________________________
> From: Jason Dillaman [jdill...@redhat.com]
> Sent: July 14, 2017 21:43
> To: 许雪寒
> Cc: ceph-users@lists.ceph.com
> Subject: Re: Re: [ceph-users] Re: No "snapset" attribute for clone object
>
> The only people that have experienced it seem to be using cache
> tiering. I don't know if anyone has deeply investigated it yet. You
> could attempt to evict those objects from the cache tier so that the
> snapdir request is proxied down to the base tier to see if that works.
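>
> A rough librados sketch of that (untested against your cluster; "cache-pool"
> is a placeholder for your actual cache tier pool name, and I believe the
> rados CLI's cache-flush / cache-evict subcommands do the same thing):
>
> #include <rados/librados.hpp>
> #include <cstdio>
> #include <string>
>
> // Flush, then evict, one object from the cache tier so the next snapdir /
> // list-snaps request is served from the base tier again.
> static int cache_op(librados::Rados& cluster, librados::IoCtx& cache,
>                     const std::string& oid, bool evict) {
>   librados::ObjectReadOperation op;
>   int flags;
>   if (evict) {
>     op.cache_evict();
>     flags = librados::OPERATION_IGNORE_CACHE;
>   } else {
>     op.cache_flush();
>     flags = librados::OPERATION_IGNORE_OVERLAY;
>   }
>   librados::AioCompletion *c = cluster.aio_create_completion();
>   int r = cache.aio_operate(oid, c, &op, flags, NULL);
>   if (r == 0) {
>     c->wait_for_complete();
>     r = c->get_return_value();
>   }
>   c->release();
>   return r;
> }
>
> int main() {
>   librados::Rados cluster;
>   cluster.init("admin");             // client id; adjust for your keyring
>   cluster.conf_read_file(NULL);      // default ceph.conf search path
>   if (cluster.connect() < 0) {
>     std::fprintf(stderr, "connect failed\n");
>     return 1;
>   }
>   librados::IoCtx cache;
>   if (cluster.ioctx_create("cache-pool", cache) < 0) {  // placeholder name
>     cluster.shutdown();
>     return 1;
>   }
>   const std::string oid = "rbd_data.d18d71b948ac7.000000000000062e";
>   int r = cache_op(cluster, cache, oid, false);  // flush dirty data first
>   std::fprintf(stderr, "cache-flush %s: %d\n", oid.c_str(), r);
>   r = cache_op(cluster, cache, oid, true);       // then evict
>   std::fprintf(stderr, "cache-evict %s: %d\n", oid.c_str(), r);
>   cluster.shutdown();
>   return r < 0 ? 1 : 0;
> }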
>
> On Fri, Jul 14, 2017 at 3:02 AM, 许雪寒 <xuxue...@360.cn> wrote:
> > Yes, I believe so. Are there any workarounds?
> >
> > -----Original Message-----
> > From: Jason Dillaman [mailto:jdill...@redhat.com]
> > Sent: July 13, 2017 21:13
> > To: 许雪寒
> > Cc: ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] Re: No "snapset" attribute for clone object
> >
> > Quite possibly the same as this issue? [1]
> >
> > [1] http://tracker.ceph.com/issues/17445
> >
> > On Thu, Jul 13, 2017 at 8:13 AM, 许雪寒 <xuxue...@360.cn> wrote:
> >> By the way, we are using the hammer version's rbd command to export-diff
> rbd images on a jewel cluster.
> >>
> >> -----Original Message-----
> >> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] on behalf of 许雪寒
> >> Sent: July 13, 2017 19:54
> >> To: ceph-users@lists.ceph.com
> >> Subject: [ceph-users] No "snapset" attribute for clone object
> >>
> >> We are using rbd for VM block devices, and recently we found that,
> after we created snapshots for some rbd images, there were objects whose
> clone objects don't have the "snapset" extended attribute.
> >>
> >> It seems that the lack of "snapset" attributes for clone objects has
> led to segmentation faults when we try to do "export-diff".
> >>
> >> Is this a bug?
> >> We are using jewel, version 10.2.5.
> >>
> >> Thank you:-)
> >>
> >
> >
> >
> > --
> > Jason
>
>
>
> --
> Jason
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
