Hi Ning

From your description, I think you are actually more concerned about overall 
job performance than about high disk IOPs as such. You should first confirm 
whether the performance degradation is really caused by RocksDB.

With that said, let me share some experience on tuning RocksDB performance. 
Since you do not cache index and filter blocks in the block cache, there is no 
need to worry about competition between data blocks and index & filter 
blocks [1]. To improve read performance, you could increase your block cache 
size to 256MB or even 512MB. The write buffer in RocksDB also plays a role in 
reads; in our experience, 4 max write buffers of 32MB each work well, i.e. 
setMaxWriteBufferNumber(4) and setWriteBufferSize(32 * 1024 * 1024).
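Applied to the options factory from your message below, that would look roughly like this (a sketch only; please verify the method names and memory budget against the RocksDB version bundled with your Flink release):

```java
@Override
public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
    // Larger block cache for better read performance (256MB; try 512MB if memory allows).
    final long blockCacheSize = 256 * 1024 * 1024;
    return currentOptions
        // Up to 4 write buffers of 32MB each; write buffers also serve reads.
        .setMaxWriteBufferNumber(4)
        .setWriteBufferSize(32 * 1024 * 1024)
        .setTableFormatConfig(
            new BlockBasedTableConfig()
                .setBlockCacheSize(blockCacheSize));
}
```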

Best
Yun Tang

[1] 
https://github.com/facebook/rocksdb/wiki/Block-Cache#caching-index-and-filter-blocks



________________________________
From: Ning Shi <nings...@gmail.com>
Sent: Wednesday, September 26, 2018 11:25
To: user
Subject: RocksDB Read IOPs

Hi,

I'm benchmarking a job with large state in various window sizes
(hourly, daily). I noticed that it would consistently slow down after
30 minutes into the benchmark due to high disk read IOPs. The first 30
minutes were fine, with close to 0 disk IOPs. Then after 30 minutes,
read IOPs would gradually climb to as high as 10k/s. At this point,
the job was bottlenecked on disk IOPs because I'm using a 2TB
EBS-backed volume.

Another thread on the mailing list mentioned that running out of
burst IOPs credits could be the cause of such a slowdown. That's not
the case here because I'm using a 2TB EBS volume.

Someone also mentioned RocksDB compaction could potentially increase
read IOPs a lot.

I'm currently running the job with these RocksDB settings.

@Override
public DBOptions createDBOptions(DBOptions currentOptions) {
    return currentOptions
        .setIncreaseParallelism(4)
        .setUseFsync(false)
        .setMaxOpenFiles(-1);
}

@Override
public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
    final long blockCacheSize = 64 * 1024 * 1024;
    return currentOptions
        .setTableFormatConfig(
            new BlockBasedTableConfig()
                .setBlockCacheSize(blockCacheSize)
        );
}

Any insights into how I can further diagnose this? Is there any way to
see compaction stats, or any settings I should try?
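For example, if periodic stats dumps are the right knob, I could extend the DBOptions above (this is just a guess on my part; it assumes setStatsDumpPeriodSec is available in the bundled RocksDB version):

```java
@Override
public DBOptions createDBOptions(DBOptions currentOptions) {
    return currentOptions
        .setIncreaseParallelism(4)
        .setUseFsync(false)
        .setMaxOpenFiles(-1)
        // Hypothetical addition: dump compaction/flush stats to the
        // RocksDB LOG file every 5 minutes (300 seconds).
        .setStatsDumpPeriodSec(300);
}
```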

Thanks,

Ning
