Ahh, OK, I get it now... if _any_ of these thresholds is met _and_ the files
are not active (i.e. they have grown larger than max_file_size), they'll be
merged. Thanks!
- Jeremy
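To make that rule concrete, here is a rough Python sketch of the eligibility check. The function name, argument shapes, and the dead-bytes/small-file default values are illustrative assumptions, not Bitcask's actual Erlang code; only frag_merge_trigger = 60 and the "active file is never merged" rule come from this thread.

```python
# Rough model of the merge-eligibility rule discussed above: a data file
# is a merge candidate only if it is NOT the active (still-growing) file,
# and at least one trigger threshold is met. All names/defaults besides
# frag_merge_trigger=60 are illustrative, not Bitcask's real settings.

def merge_candidate(is_active, frag_pct, dead_bytes, file_bytes,
                    frag_merge_trigger=60,                       # percent
                    dead_bytes_merge_trigger=512 * 1024 * 1024,  # assumed
                    small_file_threshold=10 * 1024 * 1024):      # assumed
    if is_active:
        return False  # the active file is never considered for merging
    return (frag_pct >= frag_merge_trigger
            or dead_bytes >= dead_bytes_merge_trigger
            or file_bytes < small_file_threshold)

# The active file never qualifies, no matter how fragmented:
print(merge_candidate(True, 99, 10**9, 10**9))    # → False
# A closed file past the fragmentation trigger does:
print(merge_candidate(False, 70, 0, 10**9))       # → True
```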
On Wed, Sep 14, 2011 at 3:16 PM, Dan Reverri wrote:
At any point in time Bitcask may have data spread across a number of data
files. Bitcask occasionally runs a merge process which reads the data from
those files and writes a merged set of data to a new file. Once completed
the old files can be removed and the new file is used for future read
operations.
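As a toy illustration of that merge (compaction) step: read the entries from the old files in order, keep only the most recent value for each key, and write the survivors out as one new file. This models file contents as in-memory (key, value) lists and ignores tombstones/deletes and the real on-disk format; it is only a sketch of the idea.

```python
# Toy compaction: later files win, survivors form one merged "file".
# Files are modeled as ordered (key, value) lists, oldest file first.
# This is NOT Bitcask's on-disk format; deletes/tombstones are omitted.

def merge(data_files):
    merged = {}
    for entries in data_files:       # oldest file first
        for key, value in entries:   # later writes overwrite earlier ones
            merged[key] = value
    return list(merged.items())      # the single compacted file

old_files = [
    [("a", 1), ("b", 1)],  # oldest
    [("a", 2)],
    [("b", 3), ("c", 1)],  # newest
]
print(merge(old_files))  # → [('a', 2), ('b', 3), ('c', 1)]
```

Once the merged file is written, the three old files can be deleted, which is where the disk space comes back.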
OK, thanks, I'll give that a try.
What does small_file_threshold do then?
- Jeremy
On Wed, Sep 14, 2011 at 12:48 PM, Dan Reverri wrote:
Hi Jeremy,
The max_file_size parameter controls when Bitcask will close the currently
active data file and start a new data file. The active data file will not be
considered when determining if a merge should occur. The default
max_file_size is 2GB. This means that each partition in the system can ...
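The rotation behavior described here can be sketched as follows. This is a simplified model with a tiny made-up size limit, just to show when a file stops being "active" and becomes merge-eligible; the class and sizes are illustrative, and the real default max_file_size is 2GB as noted above.

```python
# Sketch of active-file rotation: writes go to the active file until it
# would exceed the size cap, then it is closed (becoming immutable and
# merge-eligible) and a fresh active file is opened. Tiny illustrative
# cap; Bitcask's real default max_file_size is 2GB.

MAX_FILE_SIZE = 100  # bytes, deliberately tiny for the example

class Writer:
    def __init__(self):
        self.closed_files = []  # immutable files: candidates for merging
        self.active_size = 0    # bytes in the current active file

    def write(self, nbytes):
        if self.active_size + nbytes > MAX_FILE_SIZE:
            self.closed_files.append(self.active_size)  # close active file
            self.active_size = 0                        # open a new one
        self.active_size += nbytes

w = Writer()
for _ in range(7):
    w.write(40)
print(w.closed_files, w.active_size)  # → [80, 80, 80] 40
```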
If I'm reading the docs correctly, only files smaller
than small_file_threshold will be included in a merge. So
does small_file_threshold need to be bigger than max_file_size for a merge
to happen?
- Jeremy
On Wed, Sep 14, 2011 at 10:23 AM, Jeremy Raymond wrote:
Maybe I just need to tweak the Bitcask parameters to merge more often?
I have approx 17000 keys which get overwritten once an hour. After each
update, the /var/lib/riak/bitcask folder grows by about 20 MB (so about
1200 bytes per key). With the default frag_merge_trigger of 60 I should
get a merge every ...
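The numbers above can be run as back-of-the-envelope arithmetic. One possible explanation, assuming the default 2GB max_file_size mentioned later in the thread: the hourly rewrites accumulate in the active file, and the active file is not considered for merging, so the fragmentation trigger may simply not get a chance to fire for a long time.

```python
# Back-of-the-envelope numbers from the message above: ~17,000 keys
# rewritten hourly at ~1,200 bytes each, against the default 2GB
# max_file_size. The "hours to fill" figure assumes all writes land in
# one active file, which is never merge-eligible while it grows.

keys = 17_000
bytes_per_key = 1_200
max_file_size = 2 * 1024**3  # default 2GB

bytes_per_pass = keys * bytes_per_key
hours_to_fill = max_file_size / bytes_per_pass

print(f"{bytes_per_pass / 1e6:.1f} MB per hourly rewrite")
print(f"~{hours_to_fill:.0f} hours before the active file closes")
```

With these figures, each rewrite adds about 20.4 MB, so the active file would take on the order of 105 hours to reach 2GB and become a merge candidate.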
I would think that the InnoDB backend would be a better backend for the use
case you're describing.
---
Jeremiah Peschka - Founder, Brent Ozar PLF, LLC
Microsoft SQL Server MVP
On Sep 14, 2011, at 8:09 AM, Jeremy Raymond wrote:
Hi,
I store data in Riak whose keys constantly get overwritten with new data.
I'm currently using Bitcask as the back-end and recently noticed the Bitcask
data folder grow to 24GB. After restarting the nodes, which I think
triggered a Bitcask merge, the data went down to 96MB. Today the data dirs
are ...