Hi all,
so after deep investigation, we found out that we are hitting this problem:
https://issues.apache.org/jira/browse/CASSANDRA-8058
Jiri Horky
On 10/20/2015 12:00 PM, Jiri Horky wrote:
> Hi all,
>
> we are experiencing a strange behavior when we are trying to bootstrap a
> new node.
Hi all,
we are experiencing a strange behavior when we are trying to bootstrap a
new node. The problem is that the Recent Write Latency goes to 2s on all
the other Cassandra nodes (which are receiving user traffic), which
corresponds to our setting of "write_request_timeout_in_ms: 2000".
We use C
ase that you feel is
> worth the extra treatment?
>
> If you are having problems with the driver balancing requests and
> properly detecting available nodes or see some room for
> improvement, make sure to report the issues so that they can be fixed.
>
>
> --
Hi all,
we are thinking about how to best proceed with availability testing of
Cassandra nodes. It is becoming more and more apparent that it is a rather
complex task. We thought that we should try to read from and write to each
Cassandra node, using a "monitoring" keyspace with a unique value with a low
TTL. This
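The probe described above can be sketched as a small function; `write` and `read` are hypothetical callables wrapping the actual CQL statements executed against one specific node, and the TTL value is an assumption:

```python
import uuid

def check_node(write, read, ttl=30):
    """Availability probe for one node: write a unique value with a low TTL,
    then read it back and compare. `write`/`read` are hypothetical callables
    wrapping the real CQL statements (e.g. INSERT ... USING TTL / SELECT)."""
    value = str(uuid.uuid4())
    write("monitoring", "probe", value, ttl)
    return read("monitoring", "probe") == value
```

In practice you would open one session per node (e.g. via a whitelist load-balancing policy in the driver) so that each probe really exercises that specific node rather than whichever coordinator the driver picks.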
Hi,
thanks for the reference, I really appreciate that you shared your
experience.
Could you please share how much data you store on the cluster and what
is the HW configuration of the nodes? I am really impressed that you are
able to read 100M records in ~4 minutes on 4 nodes. It makes something
like
> running multiple instances on this hardware - best practices have one
> instance per host no matter the hardware size.
>
> On Thu, Feb 12, 2015 at 12:36 AM, Jiri Horky <ho...@avast.com> wrote:
>
> Hi Chris,
>
> On 02/09/2015 04:22 PM, Chris Lohfin
load and you have over 8gb heap your GCs
> could use tuning. The bigger the nodes the more manual tweaking it
> will require to get the most out of
> them https://issues.apache.org/jira/browse/CASSANDRA-8150 also has
> some ideas.
>
> Chris
>
> On Mon, Feb 9, 2015 at 2:00 AM,
r3 extends Partitioner {
  override val min = BigDecimal(-2).pow(63)
  override val max = BigDecimal(2).pow(63) - 1
}
case object Random extends Partitioner {
  override val min = BigDecimal(0)
  override val max = BigDecimal(2).pow(127) - 1
}
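Given the Murmur3 bounds above (-2^63 to 2^63 - 1), splitting the full token range evenly across a ring is plain arithmetic; a minimal Python sketch, where the 4-node count in the test is just an arbitrary example:

```python
# Evenly split the Murmur3 token range across a ring of nodes.
MIN_TOKEN = -(2 ** 63)
MAX_TOKEN = 2 ** 63 - 1

def token_ranges(num_nodes):
    """Return one (start, end) inclusive token range per node."""
    total = MAX_TOKEN - MIN_TOKEN + 1  # 2**64 tokens in total
    step = total // num_nodes
    ranges = []
    for i in range(num_nodes):
        start = MIN_TOKEN + i * step
        # The last range absorbs any remainder so the ring is fully covered.
        end = MAX_TOKEN if i == num_nodes - 1 else start + step - 1
        ranges.append((start, end))
    return ranges
```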
On 02/11/2015 02:21 PM, Ja Sam wrote:
> Your answer looks very promisi
Well, I always wondered how Cassandra can be used in a Hadoop-like
environment where you basically need to do a full table scan.
I have to say that our experience is that Cassandra is perfect for
writing and for reading specific values by key, but definitely not for reading
all of the data out of it. Some of
The fastest way I am aware of is to run the queries in parallel against
multiple Cassandra nodes and to make sure that you only ask each node for keys
it is responsible for. Otherwise, the coordinator node needs to forward your
query, which is much slower and creates unnecessary objects (and thus GC pressure).
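As an illustration of grouping keys by the node that owns them, here is a minimal sketch; the ring tokens and node addresses are made-up examples, and a real client would normally let a token-aware driver policy do this routing instead:

```python
import bisect
from collections import defaultdict

# Hypothetical 3-node ring: (last token of the node's range, node address).
RING = [(-3074457345618258603, "10.0.0.1"),
        (3074457345618258601, "10.0.0.2"),
        (9223372036854775807, "10.0.0.3")]

def owner_of(token):
    """A node owns the range ending at its token: find the first
    ring token that is >= the query token (wrapping around the ring)."""
    tokens = [t for t, _ in RING]
    idx = bisect.bisect_left(tokens, token)
    return RING[idx % len(RING)][1]

def group_by_owner(key_tokens):
    """Group (key, token) pairs by owning node so each node can be
    queried in parallel for exactly the keys it is responsible for."""
    groups = defaultdict(list)
    for key, token in key_tokens:
        groups[owner_of(token)].append(key)
    return dict(groups)
```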
You can man
needs for
bookkeeping this amount of data and that this was slightly above the 75%
limit which triggered the CMS again and again.
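The 75% threshold corresponds to the CMS occupancy setting; a sketch of the relevant JVM flags as they typically appear in cassandra-env.sh (the values here are illustrative, not a tuning recommendation):

```shell
# cassandra-env.sh (illustrative excerpt, assuming the CMS collector)
JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
# Start a CMS cycle once old-gen occupancy crosses 75%
JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
```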
I will definitely have a look at the presentation.
Regards
Jiri Horky
On 02/08/2015 10:32 PM, Mark Reddy wrote:
> Hey Jiri,
>
> While I don't have any experienc
or it is possible to
tune it better?
Regards
Jiri Horky
over. I am not sure what causes that, but it is reproducible. Restart
of the affected node helps.
We have 3 datacenters (RF=1 for each datacenter) where we are moving the
tokens. This happens only in one of them.
Regards
Jiri Horky
On 12/19/2014 08:20 PM, Jiri Horky wrote:
> Hi list,
>
>
Hi list,
we added a new node to an existing 8-node cluster with C* 1.2.9 without
vnodes and, because we are almost totally out of space, we are shuffling
the tokens of one node after another (not in parallel). During one of these
move operations, the receiving node died and thus the streaming failed:
W
OK, ticket 7696 [1] created.
Jiri Horky
https://issues.apache.org/jira/browse/CASSANDRA-7696
On 08/05/2014 07:57 PM, Robert Coli wrote:
>
> On Tue, Aug 5, 2014 at 5:48 AM, Jiri Horky <ho...@avast.com> wrote:
>
> What puzzles me is the fact that the authentizati
ot.
I would like to understand what could have caused the problems and how to
avoid them in the future.
Any pointers would be appreciated.
Regards
Jiri Horky
Thank you both for the answers!
Jiri Horky
On 01/10/2014 02:52 AM, Aaron Morton wrote:
> We avoid mixing versions for a long time, but we always upgrade one
> node and check the application is happy before proceeding. e.g. wait
> for 30 minutes before upgrading the others.
>
>
.
Thank you in advance
Jiri Horky
/or wide
> rows.
>
> How big are the rows ? use nodetool cfstats and nodetool cfhistograms.
I will get in touch with the developers and take the data from cf*
commands in a few days (I am out of office for some days).
Thanks for the pointers, will get in touch.
Cheers
Jiri Horky
't really care about the extra memory overhead of the cache - to
be able to actually point to it with objects, but I don't really see the
reason why it should create/delete that many objects so quickly.
>
>
>> prg01.visual.vm.png
> Shows the heap growing very quickly. This could be due to wide reads
> or a high write throughput.
Well, both prg01 and prg02 receive the same load, which is about ~150-250
(during peak) read requests per second and 100-160 write requests per
second. The only nodes with the heap growing rapidly and GC kicking in are
the ones with the row cache enabled.
>
> Hope that helps.
Thank you!
Jiri Horky
between 2.0.0 and 1.2.9 causing the described problem in the
official documentation "Changes impacting upgrade".
Thank you
Jiri Horky
Hi,
On 11/01/2013 09:15 PM, Robert Coli wrote:
> On Fri, Nov 1, 2013 at 12:47 PM, Jiri Horky <ho...@avast.com> wrote:
>
> since we upgraded half of our Cassandra cluster to 2.0.0 and we
> use LCS,
> we hit CASSANDRA-6284 bug.
>
>
> 1) Why upgr
a range of ~100 tokens).
Based on the documentation, I can only think of switching to SizeTiered
compaction, doing a major compaction and then switching back to LCS.
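The workaround above could look roughly like the following, assuming a hypothetical keyspace `ks` and table `tbl`; treat it as a sketch of the procedure, not a tested recipe:

```shell
# Switch the table to SizeTiered compaction (hypothetical ks/tbl names)
cqlsh -e "ALTER TABLE ks.tbl WITH compaction = {'class': 'SizeTieredCompactionStrategy'};"

# Run a major compaction on that table
nodetool compact ks tbl

# Switch back to Leveled compaction
cqlsh -e "ALTER TABLE ks.tbl WITH compaction = {'class': 'LeveledCompactionStrategy'};"
```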
Thanks in advance
Jiri Horky