On Sun, Aug 12, 2018 at 12:13 AM Glen Baars <g...@onsitecomputers.com.au>
wrote:

> Hello Jason,
>
>
>
> Interesting, I used ‘rados ls’ to view the SSDPOOL and can’t see any
> objects. Is this the correct way to view the journal objects?
>

You won't see any journal objects in the SSDPOOL until you issue a write:

$ rbd create --size 1G --image-feature exclusive-lock rbd_hdd/test
$ rbd bench --io-type=write --io-pattern=rand --io-size=4K --io-total=16M \
    rbd_hdd/test --rbd-cache=false
bench  type write io_size 4096 io_threads 16 bytes 16777216 pattern random
  SEC       OPS   OPS/SEC   BYTES/SEC
    1       320    332.01  1359896.98
    2       736    360.83  1477975.96
    3      1040    351.17  1438393.57
    4      1392    350.94  1437437.51
    5      1744    350.24  1434576.94
    6      2080    349.82  1432866.06
    7      2416    341.73  1399731.23
    8      2784    348.37  1426930.69
    9      3152    347.40  1422966.67
   10      3520    356.04  1458356.70
   11      3920    361.34  1480050.97
elapsed:    11  ops:     4096  ops/sec:   353.61  bytes/sec: 1448392.06
$ rbd feature enable rbd_hdd/test journaling --journal-pool rbd_ssd
$ rbd journal info --pool rbd_hdd --image test
rbd journal '10746b8b4567':
        header_oid: journal.10746b8b4567
        object_oid_prefix: journal_data.2.10746b8b4567.
        order: 24 (16 MiB objects)
        splay_width: 4
        object_pool: rbd_ssd
$ rbd bench --io-type=write --io-pattern=rand --io-size=4K --io-total=16M \
    rbd_hdd/test --rbd-cache=false
bench  type write io_size 4096 io_threads 16 bytes 16777216 pattern random
  SEC       OPS   OPS/SEC   BYTES/SEC
    1       240    248.54  1018005.17
    2       512    263.47  1079154.06
    3       768    258.74  1059792.10
    4      1040    258.50  1058812.60
    5      1312    258.06  1057001.34
    6      1536    258.21  1057633.14
    7      1792    253.81  1039604.73
    8      2032    253.66  1038971.01
    9      2256    241.41  988800.93
   10      2480    237.87  974335.65
   11      2752    239.41  980624.20
   12      2992    239.61  981440.94
   13      3200    233.13  954887.84
   14      3440    237.36  972237.80
   15      3680    239.47  980853.37
   16      3920    238.75  977920.70
elapsed:    16  ops:     4096  ops/sec:   245.04  bytes/sec: 1003692.81
$ rados -p rbd_ssd ls | grep journal_data.2.10746b8b4567.
journal_data.2.10746b8b4567.3
journal_data.2.10746b8b4567.0
journal_data.2.10746b8b4567.2
journal_data.2.10746b8b4567.1
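
The journal metadata header object, on the other hand, should stay in the
image's pool, as I noted in my previous reply. A quick way to double-check
that (a sketch based on the example above; your journal id will differ):

$ rados -p rbd_hdd ls | grep journal

Only the journal.<id> header object should show up there, since the
journal_data.* objects now live in rbd_ssd.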


> rbd feature enable SLOWPOOL/RBDImage journaling --journal-pool SSDPOOL
>
> The symptom we are experiencing is a huge decrease in write speed (QD=1
> 128K writes drop from 160MB/s down to 14MB/s). We see no improvement when
> moving the journal to SSDPOOL (but we don't think the journal is really
> moving there).
>

If you are trying to optimize for 128KiB writes, you might need to tweak
the "rbd_journal_max_payload_bytes" setting, since it currently defaults to
splitting journal write events into a maximum 16KiB payload [1] in order to
optimize the worst-case memory usage of the rbd-mirror daemon for
environments with hundreds or thousands of replicated images.
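
For example (a rough sketch only; I haven't benchmarked these exact
values), you could raise the limit to match your 128KiB writes via
ceph.conf on the client:

[client]
rbd journal max payload bytes = 131072

or, to test it on a single image first, via image metadata (the conf_
prefix overrides a config option per image):

$ rbd image-meta set SLOWPOOL/RBDImage conf_rbd_journal_max_payload_bytes 131072

and then re-run the write benchmark to see how much of the slowdown is due
to the event splitting.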


> Kind regards,
>
> *Glen Baars*
>
>
>
> *From:* Jason Dillaman <jdill...@redhat.com>
> *Sent:* Saturday, 11 August 2018 11:28 PM
> *To:* Glen Baars <g...@onsitecomputers.com.au>
> *Cc:* ceph-users <ceph-users@lists.ceph.com>
> *Subject:* Re: [ceph-users] RBD journal feature
>
>
>
> On Fri, Aug 10, 2018 at 3:01 AM Glen Baars <g...@onsitecomputers.com.au>
> wrote:
>
> Hello Ceph Users,
>
>
>
> I am trying to implement image journals for our RBD images (required for
> mirroring).
>
>
>
> rbd feature enable SLOWPOOL/RBDImage journaling --journal-pool SSDPOOL
>
>
>
> When we run the above command, we still find the journal on SLOWPOOL and
> not on SSDPOOL. We are running 12.2.7 and all BlueStore. We have also
> tried the ceph.conf option (rbd journal pool = SSDPOOL).
>
> Has anyone else gotten this working?
>
> Was it the journal header that you found on SLOWPOOL, or the journal data
> objects? I would expect the journal metadata header to be located on
> SLOWPOOL, but all data objects should be created on SSDPOOL as needed.
>
>
>
> Kind regards,
>
> *Glen Baars*
>
>
>
>
>
> --
>
> Jason
>

[1] https://github.com/ceph/ceph/blob/master/src/common/options.cc#L6600

-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
