> The disks were under 60% utilization (not saturated).
>> 60% of bandwidth or IOPS? Only one of the two needs to be saturated.
And which disk, journal or ledgers?
It is the disk busy percentage. Both journal and ledger disks were around 60%
(journal was more consistent).
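As a rough sanity check (a sketch, not from the thread, with an assumed journal group-commit batch size), the request rate and record size determine both the bandwidth and the IOPS demand on the journal disk, so you can estimate which axis is closer to its limit:

```
// Hypothetical back-of-the-envelope check (not from the thread): derive the
// bandwidth and fsync-rate demand that the benchmark puts on the journal disk.
public class DiskDemand {
    public static void main(String[] args) {
        long requestsPerSec = 80_000;   // throttle from the benchmark
        long recordBytes = 1024;        // 1K records

        double mbPerSec = requestsPerSec * recordBytes / (1024.0 * 1024.0);
        System.out.printf("bandwidth demand: %.1f MB/s%n", mbPerSec);

        // The journal groups entries into batched commits, so the fsync IOPS
        // demand is requests/sec divided by the average batch size. The batch
        // size here is an assumption; measure it on your own workload.
        long assumedBatchSize = 100;
        System.out.printf("fsync IOPS demand: ~%d/s%n",
                requestsPerSec / assumedBatchSize);
    }
}
```

At roughly 78MB/s of sequential journal writes, a single SSD should be within range on bandwidth, so the fsync rate is the axis worth checking first.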
> Are there any benchmarks?
> @Ivan, for some reason I did not receive your reply but found it in the
> email archives.
Are you subscribed to the list? I did see one mail from you show up in
moderation.
> At 80K requests/sec throttling for a record size of 1K, I am getting the
> throughput below. The 99th percentile of `bookkee…`
> 1) I wanted to confirm if the final graph values that I am seeing are in
> milliseconds. My understanding is that the above metrics are reported in
> microseconds (from the BK code) and the reporters (we use statsD to collect
> BK metrics via `codahale` and sink them to `InfluxDB`) convert the `rates`
> to seconds and `duration` to `milliseconds`.
> 2) If it's in milliseconds, are these numbers in the expected range (see
> attached image)? To me, 2.5 seconds (2.5K ms) latency for an add entry
> request is very high.
2.5 seconds is very high, but your write rate is also high. 100,000 *
1KB is 100MB/s. An SSD should be able to take it from the journal side.
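If the graph units are in doubt, it may help to check how the reporter is built: with codahale metrics the unit conversion is explicit on the reporter. A minimal sketch (using a console reporter in place of your statsD one, and a hypothetical timer name):

```
import java.util.concurrent.TimeUnit;
import com.codahale.metrics.ConsoleReporter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

// Minimal sketch of the unit conversion described above: codahale Timers
// record durations in nanoseconds internally, and the reporter decides the
// output units. A statsD reporter would typically be configured the same way.
public class ReporterUnits {
    public static void main(String[] args) throws InterruptedException {
        MetricRegistry registry = new MetricRegistry();
        Timer addEntry = registry.timer("bookkeeper.ADD_ENTRY"); // hypothetical name

        try (Timer.Context ctx = addEntry.time()) {
            Thread.sleep(5); // stand-in for an add-entry call
        }

        ConsoleReporter reporter = ConsoleReporter.forRegistry(registry)
                .convertRatesTo(TimeUnit.SECONDS)          // rates -> events/sec
                .convertDurationsTo(TimeUnit.MILLISECONDS) // durations -> ms
                .build();
        reporter.report(); // one-shot report, enough for the sketch
    }
}
```

Whatever units BK records in, the reporter's `convertDurationsTo` setting is what decides the units you see on the graph.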