zhixinwen commented on PR #3092:
URL: https://github.com/apache/kvrocks/pull/3092#issuecomment-3149113983
The idea is to reduce the number of fsync calls needed.
Another thing I tried was using a `WriteBatchHandler` to combine all the `WriteBatch`es within one call of `incrementBatchLoopCB` and write the merged batch once. However, I did not see much improvement with this setup. I don't understand this behavior, since in my local RocksDB test this method should be even more efficient than what is implemented in this PR. Perhaps the `WriteBatchHandler` overhead is too big.
The handler looks like this, in case anyone is interested:
```cpp
rocksdb::Status WriteBatchMerger::PutCF(uint32_t column_family_id, const rocksdb::Slice &key,
                                        const rocksdb::Slice &value) {
  return write_batch_.Put(storage_->GetCFHandle(static_cast<ColumnFamilyID>(column_family_id)), key, value);
}

rocksdb::Status WriteBatchMerger::DeleteCF(uint32_t column_family_id, const rocksdb::Slice &key) {
  return write_batch_.Delete(storage_->GetCFHandle(static_cast<ColumnFamilyID>(column_family_id)), key);
}

rocksdb::Status WriteBatchMerger::DeleteRangeCF(uint32_t column_family_id, const rocksdb::Slice &begin_key,
                                                const rocksdb::Slice &end_key) {
  return write_batch_.DeleteRange(storage_->GetCFHandle(static_cast<ColumnFamilyID>(column_family_id)),
                                  begin_key, end_key);
}

rocksdb::Status WriteBatchMerger::MergeCF(uint32_t column_family_id, const rocksdb::Slice &key,
                                          const rocksdb::Slice &value) {
  return write_batch_.Merge(storage_->GetCFHandle(static_cast<ColumnFamilyID>(column_family_id)), key, value);
}

void WriteBatchMerger::LogData(const rocksdb::Slice &blob) { write_batch_.PutLogData(blob); }
```
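For context, a handler like this would be driven through `rocksdb::WriteBatch::Iterate`, which replays each record in a batch against the handler. A rough sketch of the merging loop is below; the `batches` container, the `WriteBatchMerger` constructor signature, and `storage_->Write` are assumptions for illustration, not code from the PR:

```cpp
// Hypothetical driver: replay every incoming batch into one merged batch,
// then issue a single Write so the group needs at most one fsync.
rocksdb::WriteBatch merged;
WriteBatchMerger merger(storage_, &merged);  // hypothetical constructor
for (const rocksdb::WriteBatch &batch : batches) {
  // Iterate() calls PutCF/DeleteCF/... on the handler for each record.
  rocksdb::Status s = batch.Iterate(&merger);
  if (!s.ok()) return s;
}
// One Write call for the whole group instead of one per batch.
return storage_->Write(rocksdb::WriteOptions(), &merged);
```

The trade-off is that every record is decoded and re-encoded through the handler, which may be where the overhead mentioned above comes from.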