Hi Ali,
The best practice is to use the noop scheduler on an array of SSDs behind
your block device (hardware RAID controller), since the controller does
its own request reordering.
If you are using a single SSD, the deadline scheduler is the best choice
to reduce IO latency.
Using cfq on SSDs is not recommended.
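For example, assuming your device is sdb (adjust to your system), you can
check and switch the scheduler as root like this:
cat /sys/block/sdb/queue/scheduler              # the active scheduler is shown in [brackets]
echo noop > /sys/block/sdb/queue/scheduler      # SSD array behind a hardware RAID controller
echo deadline > /sys/block/sdb/queue/scheduler  # single SSD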
Regards,
Roni
Hi there,
What is the best way to downgrade a C* 2.1.3 cluster to the stable 2.0.12?
I know it's not supported, but we are running into too many issues with
2.1.x, which leads us to think that the best solution is to move to the
stable version.
Is there a safe way to do that?
Cheers,
Roni
Hi there,
We are running a C* 2.1.3 cluster with 2 datacenters: DC1 has 30 servers
and DC2 has 10 servers.
DC1 servers have 32GB of RAM and a 10GB heap. DC2 machines have 16GB of
RAM and a 5GB heap.
DC1 nodes hold about 1.4TB of data each and DC2 nodes about 2.3TB.
DC2 is used only for backup purposes. There are no re
r your nodes to be sure that the value is not too
high. You may get too much IO if you increase concurrent compactors
when using spinning disks.
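For example, you can check the configured value and watch the IO impact
like this (the yaml path is an assumption; adjust it to your install):
grep concurrent_compactors /etc/cassandra/cassandra.yaml
nodetool compactionstats     # pending compaction tasks on the node
iostat -x 5                  # disk utilisation while compactions run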
Regards,
Roni Balthazar
On 25 February 2015 at 16:37, Ja Sam wrote:
> Hi,
> One more thing. Hinted Handoff for last week for all nodes was less t
Hi Piotr,
Are your repairs finishing without errors?
Regards,
Roni Balthazar
On 25 February 2015 at 15:43, Ja Sam wrote:
> Hi, Roni,
> They aren't exactly balanced, but as I wrote before they are in the range
> of 2500-6000.
> If you need exact numbers, I will check them tomorrow.
> Piotrek.
>
> p.s. I don't know why my mail client displays my name as Ja Sam instead of
> Piotr Stapp, but this doesn't change anything :)
>
>
> On Wed, Feb 25, 2015 at 5:45 PM, Roni Balthazar
> wrote:
>>
>> Hi Ja,
>>
>> How are the pendi
ool cfstats" on your nodes.
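For example (the keyspace name is just a placeholder):
nodetool cfstats MyKeyspace      # per-table SSTable counts and latencies
nodetool compactionstats         # pending compaction tasks on the node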
Cheers,
Roni Balthazar
On 25 February 2015 at 13:29, Ja Sam wrote:
> I do NOT have SSDs. I have normal HDDs grouped as JBOD.
> My CFs use SizeTieredCompactionStrategy.
> I am using local quorum for reads and writes. To be precise, I have a lot of
> writes
Try running repair -pr on all nodes.
If you still have issues after that, you can try rebuilding the SSTables
with nodetool upgradesstables or scrub.
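For example (keyspace and table names are placeholders):
nodetool repair -pr MyKeyspace           # repairs only this node's primary ranges
nodetool upgradesstables -a MyKeyspace   # rewrites every SSTable on disk
nodetool scrub MyKeyspace MyTable        # rebuilds SSTables, discarding corrupted rows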
Regards,
Roni Balthazar
> On 18/02/2015, at 14:13, Ja Sam wrote:
>
> re 3) I already did this yesterday (setcompactionthroughput as well).
you getting when running repairs.
Regards,
Roni Balthazar
On Wed, Feb 18, 2015 at 1:31 PM, Ja Sam wrote:
> Can you explain to me what the correlation is between growing SSTables
> and repair?
> I was sure, until your mail, that repair only makes data consistent
> between nodes.
pactions must decrease as well...
Cheers,
Roni Balthazar
On Wed, Feb 18, 2015 at 12:39 PM, Ja Sam wrote:
> 1) we tried to run repairs but they usually do not succeed. But we had
> Leveled compaction before. Last week we ALTERed the tables to STCS, because
> the guys from DataStax suggested
/dml_config_consistency_c.html
Cheers,
Roni Balthazar
On Wed, Feb 18, 2015 at 11:07 AM, Ja Sam wrote:
> I don't have problems with DC_B (the replica); only in DC_A (my system
> writes only to it) do I have read timeouts.
>
> I checked in OpsCenter SSTable count and I have:
> 1) in DC_A same +-
(eg: driver's timeout, concurrent reads and
so on)
Regards,
Roni Balthazar
On Wed, Feb 18, 2015 at 9:51 AM, Ja Sam wrote:
> Hi,
> Thanks for your "tip"; it looks like something changed - I still don't
> know if it is ok.
>
> My nodes started to do more compac
Hi,
Yes... I had the same issue, and setting cold_reads_to_omit to 0.0 was
the solution...
The number of SSTables decreased from many thousands to a number below
a hundred, and the SSTables are now much bigger, most of them several
gigabytes.
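The setting is applied per table; a sketch of the ALTER, with placeholder
keyspace and table names:
cqlsh -e "ALTER TABLE MyKeyspace.MyTable WITH compaction = {'class': 'SizeTieredCompactionStrategy', 'cold_reads_to_omit': 0.0};"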
Cheers,
Roni Balthazar
On Tue, Feb 17, 2015
ht or when your IO is not busy.
From http://wiki.apache.org/cassandra/NodeTool:
# raise the compaction throughput cap at midnight, when IO is quiet
0 0 * * * root nodetool -h `hostname` setcompactionthroughput 999
# drop back to the 16 MB/s default at 6 AM
0 6 * * * root nodetool -h `hostname` setcompactionthroughput 16
Cheers,
Roni Balthazar
On Mon, Feb 16, 2015 at 7:47 PM, Ja Sam wrote:
> On
://pastebin.com/jbAgDzVK
Thanks,
Roni Balthazar
On Fri, Jan 9, 2015 at 12:03 PM, datastax wrote:
> Hello
>
> You may not be experiencing versioning issues. Do you know if compaction
> is keeping up with your workload? The behavior described in the subject is
> typically
me know if I need to provide more information.
Thanks,
Roni Balthazar
On Thu, Jan 8, 2015 at 5:23 PM, Robert Coli wrote:
> On Thu, Jan 8, 2015 at 11:14 AM, Roni Balthazar
> wrote:
>
>> We are using C* 2.1.2 with 2 DCs: 30 nodes in DC1 and 10 nodes in DC2.
>>
>
> https:/
moryError:
Java heap space"
Any hints?
Regards,
Roni Balthazar
Hi,
We use Puppet to manage our Cassandra configuration. (http://puppetlabs.com)
You can use Cluster SSH to send commands to all of the servers at once as well.
Another good choice is Saltstack.
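For example, a hypothetical Cluster SSH session that sends the same
keystrokes to three nodes at once (hostnames are made up):
cssh cass01 cass02 cass03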
Regards,
Roni
On Thu, Oct 23, 2014 at 5:18 AM, Alain RODRIGUEZ wrote:
> Hi,
>
> I was wondering about how do you