From: Vitali Dyachuk [mailto:vdjat...@gmail.com]
Sent: Thursday, September 6, 2018 08:00
To: user@cassandra.apache.org
Subject: Re: Large sstables
What I have done is:
1) added more disks, so that compaction can carry on;
2) when I switched to LCS from STCS, the STCS queues for processing the
big sstables remained, so I stopped those queues with nodetool stop
-id queue_id (see the sketch below),
and LCS compaction started processing the sstables. I'm u…
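(A minimal sketch of that nodetool workflow; the UUID below is a placeholder, the real id comes from the compactionstats output:)

    # List running compactions and their ids
    nodetool compactionstats
    # Stop one specific compaction by id (UUID copied from the output above)
    nodetool stop -id 8371d2d0-b1d9-11e8-a2e0-7b9a5e1b1f5e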
If there are a lot of droppable tombstones, you could also run a User Defined
Compaction on that SSTable (and on others).
This blog post explains it well:
http://thelastpickle.com/blog/2016/10/18/user-defined-compaction.html
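(For reference, a sketch of triggering a user-defined compaction over JMX with jmxterm, as the post describes; the host, port, jar version, and sstable filename here are placeholders:)

    # Connect to the node's JMX port (7199 by default)
    java -jar jmxterm-1.0.2-uber.jar -l localhost:7199
    # At the jmxterm prompt, pass the Data.db filename to CompactionManager
    run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction mc-1234-big-Data.db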
On Fri, Aug 31, 2018 at 12:04 AM Mohamadreza Rostami <mohamadrezarosta.…> wrote:
Hi, dear Vitali,
The best option for you is to migrate the data to a new table and change the
partition key pattern for a better distribution of data, so your sstables become
smaller. But if your data already has a good distribution and your data is really
big, you must add a new server to your datacenter. If yo…
Either of those are options, but there's also sstablesplit to break it up a bit
(see the sketch below).
Switching to LCS can be a problem depending on how many sstables/overlaps you
have.
--
Jeff Jirsa
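(A minimal sketch of the sstablesplit route, assuming the node is stopped first, since the tool must not run against a live node; the path and target size are placeholders:)

    # Offline split into ~50 GB chunks (-s takes a size in MB); run with the node down
    sstablesplit --no-snapshot -s 51200 /var/lib/cassandra/data/myks/mytable-*/mc-*-big-Data.db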
On Aug 30, 2018, at 8:05 AM, Vitali Dyachuk wrote:
Hi,
Some of the sstables got too big, 100 GB and more, so they are not compacting
any more and some of the disks are running out of space. I'm running C*
3.0.17, RF3, with 10 disks/JBOD and STCS.
What are my options? Completely delete all data on this node and rejoin it
to the cluster, or change CS to LCS?
On Sun, May 4, 2014 at 10:26 PM, Yatong Zhang wrote:
> 2. Is there a way to convert these old huge sstables into small 'leveled'
> ones? I tried 'sstablesplit' but always got a Java EOFException.
>
Could you be more specific about how you are attempting to use sstablesplit
and what error you are seeing?
Hi,
I changed the compaction strategy from 'size tiered' to 'leveled' (see the
sketch below), but after running for a few days C* still tries to compact
some old large sstables, say:
1. I have 6 disks per node and 6 data directories per disk
2. There are some old huge sstables generated when using 'size tiered'…
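(For context, a strategy switch like that is done with a CQL ALTER; a minimal sketch with placeholder keyspace/table names and an assumed target size:)

    # At the cqlsh prompt: switch the table to LCS (160 MB is a common sstable_size_in_mb target)
    cqlsh> ALTER TABLE myks.mytable WITH compaction = {'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 160};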
On Sat, Dec 1, 2012 at 9:29 AM, Radim Kolar wrote:
> from time to time people ask here for splitting large sstables, here is
> patch doing that
>
> https://issues.apache.org/jira/browse/CASSANDRA-4897
Interesting, thanks for the contribution! :D
For those who might find this thread…
Apply the patch and recompile.
Define the "max_sstable_size" compaction strategy property on the CF you want
to split, then run compaction (see the sketch below).
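(A hypothetical sketch of that, assuming the patched build from CASSANDRA-4897 exposes max_sstable_size in MB as a strategy option; the exact option name and units may differ, and myks/mycf are placeholders:)

    # At the cassandra-cli prompt, set the option on the column family
    cassandra-cli -h localhost
    [default@unknown] use myks;
    [default@myks] update column family mycf with compaction_strategy_options = {max_sstable_size: 512};
    # Then trigger a compaction so the splitting takes effect
    nodetool compact myks mycf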
Could you provide more details on how to use it? Let's say I already have a
huge sstable; what am I supposed to do to split it?
Thank you,
Andrey
On Sat, Dec 1, 2012 at 11:29 AM, Radim Kolar wrote:
From time to time people ask here about splitting large sstables; here is a
patch doing that:
https://issues.apache.org/jira/browse/CASSANDRA-4897