Hello Kurt Deschler, Yida Wu, Michael Smith, Impala Public Jenkins,

I'd like you to reexamine a change. Please visit

    http://gerrit.cloudera.org:8080/22215

to look at the new patch set (#8).

Change subject: IMPALA-13478: Sync tuple cache files to disk asynchronously
......................................................................

IMPALA-13478: Sync tuple cache files to disk asynchronously

When a tuple cache entry is first being written, we want to
sync the contents to disk. Currently, that happens on the
fast path and delays the query results, sometimes significantly.
This moves the Sync() call off the fast path by passing
the work to a thread pool. The threads in the pool open
the file, sync it to disk, then close the file. If anything
goes wrong, the cache entry is evicted.

The tuple cache can generate writes very quickly, so this needs
a backpressure mechanism to avoid overwhelming the disk. In
particular, it needs to avoid accumulating dirty buffers to
the point that the OS throttles new writes, delaying the query
fast path. This implements a limit on outstanding writes (i.e.
writes that have not been flushed to disk). To enforce it,
writers now call UpdateWriteSize() to reserve space before
writing. UpdateWriteSize() can fail if it hits the limit on
outstanding writes or if this particular cache entry has hit
the maximum size. When it fails, the writer should abort writing
the cache entry.
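The reserve-before-write protocol can be sketched with a shared atomic counter. This is a simplified stand-in, not the patch's code: the class name, the per-entry bookkeeping, and the exact limit semantics are assumptions for illustration.

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical sketch of reservation-based backpressure. A writer calls
// UpdateWriteSize() to reserve space before writing; it fails when the
// global outstanding-write limit or the per-entry maximum size would be
// exceeded, at which point the writer should abort the cache entry.
class WriteReservations {
 public:
  WriteReservations(int64_t outstanding_limit, int64_t max_entry_size)
      : outstanding_limit_(outstanding_limit),
        max_entry_size_(max_entry_size) {}

  // Reserve `bytes` more for an entry whose reserved size so far is
  // `*entry_size`. Returns false if either limit would be exceeded.
  bool UpdateWriteSize(int64_t* entry_size, int64_t bytes) {
    if (*entry_size + bytes > max_entry_size_) return false;
    int64_t prev = outstanding_.fetch_add(bytes);
    if (prev + bytes > outstanding_limit_) {
      outstanding_.fetch_sub(bytes);  // roll back the failed reservation
      return false;
    }
    *entry_size += bytes;
    return true;
  }

  // Called once the asynchronous sync completes (or the entry is evicted),
  // releasing the bytes that are no longer outstanding.
  void ReleaseWrite(int64_t bytes) { outstanding_.fetch_sub(bytes); }

 private:
  const int64_t outstanding_limit_;
  const int64_t max_entry_size_;
  std::atomic<int64_t> outstanding_{0};
};
```

Reserving before writing means a burst of writers is throttled before dirty buffers pile up, rather than after the OS starts blocking writes.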

Since UpdateWriteSize() updates the charge in the cache,
outstanding writes are counted against the cache capacity and
can trigger evictions. This improves the tuple cache's
adherence to its capacity limit.

The outstanding write limit is configured via the
tuple_cache_outstanding_write_limit startup flag, which accepts
either a specific size string (e.g. 1GB) or a percentage of
the process memory limit. To avoid updating the cache charge
too frequently, updates are batched in chunks of
tuple_cache_outstanding_write_chunk_bytes.
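Parsing a flag that is either a size string or a percentage of the process memory limit might look like the following. The function name and the supported suffixes are assumptions for this sketch; the real flag parsing in the patch may differ.

```cpp
#include <cstdint>
#include <string>

// Hypothetical parser for a flag like "1GB" or "10%". A trailing '%'
// is interpreted as a fraction of the process memory limit; otherwise
// a KB/MB/GB suffix scales the numeric value (bytes if no suffix).
int64_t ParseOutstandingWriteLimit(
    const std::string& flag, int64_t process_mem_limit) {
  if (!flag.empty() && flag.back() == '%') {
    double pct = std::stod(flag.substr(0, flag.size() - 1));
    return static_cast<int64_t>(process_mem_limit * (pct / 100.0));
  }
  size_t pos = 0;
  double value = std::stod(flag, &pos);
  std::string suffix = flag.substr(pos);
  int64_t mult = 1;
  if (suffix == "KB") mult = 1LL << 10;
  else if (suffix == "MB") mult = 1LL << 20;
  else if (suffix == "GB") mult = 1LL << 30;
  return static_cast<int64_t>(value * mult);
}
```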

This adds counters at the daemon level:
 - outstanding write bytes
 - number of writes halted due to backpressure
 - number of sync calls that fail (due to IO errors)
 - number of sync calls dropped due to queue backpressure
The runtime profile adds a NumTupleCacheBackpressureHalted
counter that is set when a write hits the outstanding write
limit.

This also adds a startup option that injects randomness into the
tuple cache keys, making it easy to test a scenario with no
cache hits.

Testing:
 - Added unit tests to tuple-cache-mgr-test
 - Testing with TPC-DS on a cluster with fast NVMe SSDs showed
   a significant improvement in the first-run times due to the
   asynchronous syncs.
 - Testing with TPC-H on a system with a slow disk and zero cache
   hits showed improved behavior with the backpressure enabled.

Change-Id: I646bb56300656d8b8ac613cb8fe2f85180b386d3
---
M be/src/exec/tuple-cache-node.cc
M be/src/exec/tuple-cache-node.h
M be/src/exec/tuple-file-read-write-test.cc
M be/src/exec/tuple-file-writer.cc
M be/src/exec/tuple-file-writer.h
M be/src/runtime/exec-env.cc
M be/src/runtime/tuple-cache-mgr-test.cc
M be/src/runtime/tuple-cache-mgr.cc
M be/src/runtime/tuple-cache-mgr.h
M be/src/service/query-options.cc
M common/thrift/generate_error_codes.py
M common/thrift/metrics.json
12 files changed, 624 insertions(+), 68 deletions(-)


  git pull ssh://gerrit.cloudera.org:29418/Impala-ASF refs/changes/15/22215/8
--
To view, visit http://gerrit.cloudera.org:8080/22215
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-Project: Impala-ASF
Gerrit-Branch: master
Gerrit-MessageType: newpatchset
Gerrit-Change-Id: I646bb56300656d8b8ac613cb8fe2f85180b386d3
Gerrit-Change-Number: 22215
Gerrit-PatchSet: 8
Gerrit-Owner: Joe McDonnell <joemcdonn...@cloudera.com>
Gerrit-Reviewer: Impala Public Jenkins <impala-public-jenk...@cloudera.com>
Gerrit-Reviewer: Joe McDonnell <joemcdonn...@cloudera.com>
Gerrit-Reviewer: Kurt Deschler <kdesc...@cloudera.com>
Gerrit-Reviewer: Michael Smith <michael.sm...@cloudera.com>
Gerrit-Reviewer: Yida Wu <wydbaggio...@gmail.com>
