Hi,

Thanks for engaging in this discussion!
Cameron, regarding the benchmark, I need to spend some time exploring the stress tool options, but I aim to create a stress test that runs for at least 48 hours, and then run it for all strategies (with a 24-hour burst for BHCS). I want this test to be as comprehensive as possible, so I'll need some time to do it properly. The key search is optimised, so I'm curious to see the results of these tests. I'll share them when ready. If the test results are not good enough to push for BHCS, I'll do what Jeff suggested (a schedule wrapper around an existing strategy). I don't plan to push for something just for the sake of contributing. I want to see improvements! :)

However, the workaround from Ed Capriolo's presentation seems to be exactly that. Is this workaround an established industry practice, or a last-resort hack? If it is established practice, do you think users would still prefer to switch to a schedule configured per table (or keyspace)?

A couple of things to note (correct me if I'm wrong):
- The STCS major compaction creates two massive SSTables. This seems undesirable to me, as I've always seen the big SSTables of STCS as its main disadvantage.
- LCS reduces the number of key copies by using levels, but we can still have as many copies of a key as there are levels. Wouldn't it be desirable to have each key in a single SSTable, as BHCS does?

From the above points, and expanding on the idea of a wrapper around the existing in-tree strategies, do you think it would be advantageous to offer this process instead?
- Background compaction: it would run the background compaction of the strategy chosen by the user, during the configured schedule.
- Scheduled major compaction: it would run the BHCS major compaction at the chosen time (e.g. once per day at an off-peak time), which makes keys unique across all SSTables (an improvement over LCS), with the resulting SSTables capped at a configurable maximum size (an improvement over STCS).

This seems to me the optimal situation (a single copy of each key, and SSTables of controllable size) that everyone would like to see on their clusters. What do you think of this approach?
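
To make this concrete, here's a minimal, self-contained sketch of the process above. All names are hypothetical stand-ins (CompactionStrategy here plays the role of the real AbstractCompactionStrategy), and the scheduling is simplified to a single window that doesn't cross midnight; the real integration point would be the strategy's getNextBackgroundCompactionTask(), as Jeff mentions in his reply below:

    import java.time.LocalTime;
    import java.util.Optional;

    // Hypothetical stand-in for a compaction strategy; in Cassandra this role
    // is played by AbstractCompactionStrategy.
    interface CompactionStrategy
    {
        Optional<Runnable> getNextBackgroundCompactionTask();
    }

    public class ScheduledCompactionWrapper implements CompactionStrategy
    {
        private final CompactionStrategy wrapped; // strategy chosen by the user (STCS, LCS, ...)
        private final LocalTime burstStart;       // e.g. 02:00, an off-peak time
        private final LocalTime burstEnd;         // e.g. 03:00

        public ScheduledCompactionWrapper(CompactionStrategy wrapped,
                                          LocalTime burstStart, LocalTime burstEnd)
        {
            this.wrapped = wrapped;
            this.burstStart = burstStart;
            this.burstEnd = burstEnd;
        }

        private boolean inBurstWindow(LocalTime now)
        {
            return !now.isBefore(burstStart) && now.isBefore(burstEnd);
        }

        @Override
        public Optional<Runnable> getNextBackgroundCompactionTask()
        {
            if (inBurstWindow(LocalTime.now()))
            {
                // Burst window: run the BHCS major compaction, which leaves each
                // key in a single SSTable and splits output at sstable_max_size.
                Runnable major = this::runBurstHourMajorCompaction;
                return Optional.of(major);
            }
            // Outside the window: delegate to the wrapped strategy's normal
            // background compaction, untouched.
            return wrapped.getNextBackgroundCompactionTask();
        }

        private void runBurstHourMajorCompaction()
        {
            // Placeholder for the BHCS major compaction described above.
        }
    }

The appeal, to me, is that the wrapper owns only the schedule; everything else stays with whichever strategy the user already trusts.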

Pedro Gordo

On 15 June 2017 at 04:01, Jeff Jirsa <jji...@gmail.com> wrote:

> Hi Pedro,
>
> I did a quick read-through of your strategy, and have a few personal thoughts:
>
> First, writing a compaction strategy is a lot of work, and it's great to see new contributors take on ambitious projects. There are even a handful of ideas in here that may be useful to other strategies.
>
> The overall concept is interesting - many companies have "peak" times and "offpeak" times, and being able to run compaction only during offpeak may be really helpful. This concept actually appears in the old wiki, dating back many years; for example, Ed Capriolo gave a talk (https://www.slideshare.net/edwardcapriolo/m6d-cassandrapresentation - check out slide #28) where he showed how to achieve this with cron and nodetool.
>
> The actual logic you use to select candidates probably isn't perfect, because candidate selection can be pretty nuanced. But rather than focus on that, if we take advantage of the larger concept - that it's useful to be able to turn compaction on/off on a schedule - there may be another opportunity: rather than trying to re-implement some of the concepts of LCS without using LCS, you could just make BurstHourCompactionStrategy a wrapper around any user-specified compaction strategy. That is, it may be much less work, and much more valuable, if you let users specify which underlying compaction strategy to wrap, and then simply use the wrapped strategy's getNextBackgroundCompactionTask().
>
> For the project to be willing to accept a 5th compaction strategy, I imagine committers will want to see some benchmarks and hopefully some concrete examples of how it's beneficial and solves a problem that can't be better solved in other ways. I think at a high level many people can understand how it's useful, but you may want to compare/contrast it to Ed's method in the deck above (in particular, using nodetool and cron, you can have multiple "compaction-enabled" windows during the day, and you can throttle it so that it never fully stops, but slows down enough that it doesn't impact latencies).
>
> Again, that's just my personal thought based on a quick read-through.
>
> Nice work so far!
> - Jeff
>
>
> On Wed, Jun 14, 2017 at 2:49 PM, Pedro Gordo <pedro.gordo1...@gmail.com> wrote:
>
>> Hi
>>
>> I've addressed the issues with Git. I believe this is what Stefan was asking for: https://github.com/sedulam/cassandra/tree/12201
>> I've also added more tests for BHCS, including more for wide rows, following Jeff's suggestion.
>>
>> Thanks for the directions so far! If there's something else you would like to see tested, or some metrics, please let me know what would be relevant.
>>
>> All the best
>>
>>
>> Pedro Gordo
>>
>> On 13 June 2017 at 15:43, Pedro Gordo <pedro.gordo1...@gmail.com> wrote:
>>
>> > Hi all
>> >
>> > Although a couple of people engaged with me directly to talk about BHCS, I would also like to get the community's opinion on this, so I thought I could get the discussion started by laying out what the advantages would be, and for which types of tables BHCS would do a good job. Please keep in mind that all my assumptions are made without any real-world experience with Cassandra, so this is where I expect input from the C* veterans to help me steer the BHCS implementation in the right direction if needed. This is a long email, so there's a TL;DR if you don't want to read everything. This is intended for high-level discussion; for code-level discussion, please refer to the document in JIRA.
>> > I'm aware that some might not like that no compaction occurs outside of the burst hour, but I've thought of solutions for that, so please read the planned improvements below.
>> >
>> > *TL;DR*
>> > BHCS tries to address these issues with the current compaction strategies:
>> > - The need to allocate large amounts of storage during big compactions in STCS -> through the sstable_max_size property of BHCS, we can keep SSTables below a certain size, so we wouldn't have space issues during compaction.
>> > - We might get to a point where, to return the results of a query, we need to read from a large number of SSTables -> BHCS addresses this by making sure that the number of SSTables in which a key exists is consistently kept at a low level after every compaction. That number is configurable, so in the limit, you could set it to 1 for optimal read performance (see the sketch after this list).
>> > - The continuous high I/O of LCS -> addressed by the scheduling feature of BHCS.
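>> >
>> > To illustrate the key-copies point, a simplified, self-contained sketch (hypothetical names; the real selection logic would work on SSTable metadata rather than an in-memory map):
>> >
>> >     import java.util.HashSet;
>> >     import java.util.Map;
>> >     import java.util.Set;
>> >
>> >     class BurstHourSelection
>> >     {
>> >         // Mirrors the configurable limit on how many SSTables may contain a key.
>> >         static final int MAX_KEY_COPIES = 1;
>> >
>> >         // Input: for each key, the set of SSTables that currently contain it.
>> >         // Output: the SSTables the next burst should compact together.
>> >         static Set<String> tablesToCompact(Map<String, Set<String>> keyToSSTables)
>> >         {
>> >             Set<String> candidates = new HashSet<>();
>> >             for (Set<String> tables : keyToSSTables.values())
>> >             {
>> >                 // A key living in more than MAX_KEY_COPIES SSTables marks all
>> >                 // of those SSTables as candidates for the burst compaction.
>> >                 if (tables.size() > MAX_KEY_COPIES)
>> >                     candidates.addAll(tables);
>> >             }
>> >             return candidates;
>> >         }
>> >     }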
>> >
>> > *Longer explanation:*
>> >
>> > *Where would it be advantageous to use BHCS?*
>> > - Read-bound tables: because BHCS keeps the number of key copies at a low level, the read speed would be consistently fast. Since there aren't many writes to this type of table, even if new SSTables containing a given key are produced, the number of SSTables containing that key would be brought back to 1 after the burst hour (BH).
>> > - Write-bound tables: in this scenario, a lot of SSTables are created outside of the BH, but there are few reads, so the issue with the existing strategies would be continuous high I/O dedicated to compaction. With BHCS, during these active hours we would see an increase in disk usage, but I assume this increase outside the BH would be tolerable, since a lot of space would be released during the burst. Still, if that's a big issue, I plan to address it with improvement (1).
>> >
>> > *Where is BHCS NOT recommended, and what improvements can be made to make it viable?*
>> > - Read- and write-heavy tables: because SSTables would accumulate outside the BH until the burst kicks in, reads could get slower and disk usage could grow. This could be solved with improvement (1), (3) or (5).
>> >
>> > *Planned improvements:*
>> > (1) - The user could indicate that they want continuous compaction. This would change the strategy so that outside of the burst hour, STCS would be used to maintain acceptable read speed and disk usage. Then, when the BH kicks in, key copies and disk size would be brought back to optimal levels.
>> > (2) - During table creation, the user might not be aware of the configurable compaction details, so a user-friendly configuration would be provided. If the user marks the table as a write-and-read-heavy table, then improvement (1) would be activated. Otherwise, the strategy would default to its current config to save resources outside the BH.
>> > (3) - Instead of just one burst hour, we could set several periods for BHCS to run during the day (for instance, every 3 hours, or on some other schedule; combined with idea (5) in the sketch below).
>> >
>> > *Ideas:*
>> > (4) - Continuously evaluate how many pending compactions we have and the I/O status, and based on that, start (or don't start) the compaction.
>> > (5) - If, outside the BH, the total size of the SSTables in a family reaches a certain threshold, then background compaction can occur anyway. This threshold should be elevated, due to the high CPU usage of BHCS.
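>> >
>> > As a rough, self-contained sketch of how (3) and (5) could combine (hypothetical names; the windows and threshold are placeholders, not recommendations):
>> >
>> >     import java.time.LocalTime;
>> >     import java.util.List;
>> >
>> >     class CompactionGate
>> >     {
>> >         private final List<LocalTime[]> windows; // {start, end} pairs, none crossing midnight
>> >         private final long sizeThresholdBytes;   // elevated fallback threshold for idea (5)
>> >
>> >         CompactionGate(List<LocalTime[]> windows, long sizeThresholdBytes)
>> >         {
>> >             this.windows = windows;
>> >             this.sizeThresholdBytes = sizeThresholdBytes;
>> >         }
>> >
>> >         boolean shouldCompact(LocalTime now, long familyTotalSizeBytes)
>> >         {
>> >             // Inside any configured burst window, compaction may run (improvement 3).
>> >             for (LocalTime[] w : windows)
>> >                 if (!now.isBefore(w[0]) && now.isBefore(w[1]))
>> >                     return true;
>> >             // Outside all windows, compact only once the family's total size
>> >             // passes the elevated threshold (idea 5).
>> >             return familyTotalSizeBytes >= sizeThresholdBytes;
>> >         }
>> >     }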
>> >
>> > Please let me know your thoughts on this. Thanks!
>> >
>> > Best regards
>> >
>> > Pedro Gordo
>> >
>> > On 10 June 2017 at 22:22, J. D. Jordan <jeremiah.jor...@gmail.com> wrote:
>> >
>> >> GitHub has some good guides on how to use Git and make a pull request for a project.
>> >>
>> >> https://guides.github.com/introduction/flow/
>> >> https://guides.github.com/activities/forking/
>> >>
>> >> On Jun 10, 2017, at 3:17 PM, Pedro Gordo <pedro.gordo1...@gmail.com> wrote:
>> >>
>> >> Hi all
>> >>
>> >> I've added a document to JIRA explaining how BHCS works, with code snippets, and the motivation behind it. Because I'm not sure we can send attachments to the mailing list, please get the document from JIRA: https://issues.apache.org/jira/browse/CASSANDRA-12201
>> >>
>> >> I'll check how to address the Git history in the next few days. Can you please point me to a repo that was merged into C* with a good history, so I can check it out and replicate the format in mine?
>> >>
>> >> Best regards
>> >> Pedro Gordo