>> replicated in the local DC, as specified in its replication strategy.
>>
>> Perhaps I should ask the node.js client authors about this.
>>
>>
>>> On Monday, March 28, 2016 07:47 PM, Eric Stevens wrote:
>>> > Local quorum works in the same data
> Local quorum works in the same data center as the coordinator node,
> but when an app server executes the write query, how is the coordinator
> node chosen?

It typically depends on the driver, and decent drivers offer you several
options for this, usually called load balancing strategy.
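For what it's worth, that "load balancing strategy" maps to a pluggable policy
object in most drivers. A minimal sketch with the Python driver (the Node.js
driver exposes a similar DC-aware policy); the contact point and the DC name
"dc1" are placeholders, not values from this thread:

# Pin coordinators to the client's local data center.
from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
from cassandra.policies import DCAwareRoundRobinPolicy

profile = ExecutionProfile(
    load_balancing_policy=DCAwareRoundRobinPolicy(local_dc="dc1"))
cluster = Cluster(["10.0.0.1"],
                  execution_profiles={EXEC_PROFILE_DEFAULT: profile})
session = cluster.connect()
# Each request now goes to a coordinator whose data center (learned from
# system.local/system.peers) matches "dc1".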
On Fri, Mar 25, 2016 at 1:04 PM, X. F. Li wrote:
> Suppose I have replication factor 3. If one of the nodes fails, will
> queries with ALL consistency fail if the queried partition is on the failed
> node? Or would they continue to work with 2 replicas during the time while
> cassandra is replicat
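At replication factor 3, ALL needs all three replicas to respond, so a query
whose partition has a replica on the dead node fails (typically with an
Unavailable error) until the node comes back or is replaced, while QUORUM
(2 of 3) keeps working. A hedged sketch of what that looks like from the
Python driver; the contact point, keyspace and table are made up:

from cassandra import ConsistencyLevel, Unavailable
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

session = Cluster(["10.0.0.1"]).connect("my_keyspace")

# ALL means 3 of 3 replicas at RF=3; one replica down is enough to fail this.
read_all = SimpleStatement("SELECT * FROM users WHERE id = %s",
                           consistency_level=ConsistencyLevel.ALL)
try:
    session.execute(read_all, ["user-42"])
except Unavailable as exc:
    # The coordinator rejects the request up front: only 2 of the 3 required
    # replicas are alive.
    print("required:", exc.required_replicas, "alive:", exc.alive_replicas)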
Hello,
Local quorum works in the same data center as the coordinator node, but
when an app server executes the write query, how is the coordinator node
chosen?
I use the node.js driver. How does the driver client determine which
Cassandra nodes are in the same DC as the client node? Does it
Hi, I'm running Cassandra 2.1.5 (single datacenter, 4 nodes, 16 GB VPS each
node); I have given my configuration below. I'm using the Python driver on my
clients; when I tried to insert 1049067 items I got an error:

cassandra.WriteTimeout: code=1100 [Coordinator node timed out waiting for
replica nodes' responses] message="Operation timed out - received only 0
responses." info={'receive
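"Received only 0 responses" on a small single-DC cluster usually means the
nodes were flooded by unthrottled writes rather than that a replica was dead.
A hedged sketch, not the original poster's code (the keyspace, table and
column names are assumptions), of throttling the insert load with the
driver's concurrency helper:

from cassandra import WriteTimeout
from cassandra.cluster import Cluster
from cassandra.concurrent import execute_concurrent_with_args

session = Cluster(["10.0.0.1"]).connect("my_keyspace")
insert = session.prepare("INSERT INTO items (id, payload) VALUES (?, ?)")

params = ((i, "payload-%d" % i) for i in range(1049067))
try:
    # concurrency=50 keeps about 50 requests in flight instead of firing a
    # million writes at a 4-node cluster at once.
    execute_concurrent_with_args(session, insert, params, concurrency=50)
except WriteTimeout as exc:
    print("write timed out:", exc)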
I would guess Hector is not using them.

> These logs are stored on other machines, which then replay the mutation if
> they have not been removed by a certain time.

>> I am writing data to Cassandra by thrift client (not hector) and
>> wonder what happens if the coordinator node fails.
These logs are stored on other machines, which then replay the mutation if
they have not been removed by a certain time.

> I am writing data to Cassandra by thrift client (not hector) and
> wonder what happens if the coordinator node fails.

How and when it fails is important.
But let's say there was an OS level OOM situation and
Hi there,
I am writing data to Cassandra by thrift client (not hector) and
wonder what happens if the coordinator node fails. The same question
applies for bulk loader which uses gossip protocol instead of thrift
protocol. In my understanding, the HintedHandoff only takes care of
the case where a replica node fails.
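As that last point says, hinted handoff covers a replica that misses a write,
not the coordinator itself dying mid-request; in that case the client has to
retry through another coordinator, and most drivers help with that. A hedged
sketch with the Python driver (the hosts, keyspace and CQL are illustrative,
and a retried write can be applied twice, which is harmless only for
idempotent inserts):

from cassandra import OperationTimedOut
from cassandra.cluster import Cluster, NoHostAvailable

# Several contact points, so losing one node does not cut off the client.
session = Cluster(["10.0.0.1", "10.0.0.2", "10.0.0.3"]).connect("my_keyspace")

def write_with_retry(query, params, attempts=3):
    for attempt in range(attempts):
        try:
            return session.execute(query, params)
        except (OperationTimedOut, NoHostAvailable):
            # The coordinator died or never answered; the next execute() is
            # routed to another live node by the driver's load balancing policy.
            if attempt == attempts - 1:
                raise

write_with_retry("INSERT INTO events (id, body) VALUES (%s, %s)", ("e1", "hello"))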
Hi,
Are there any blogs/writeups anyone is aware of that talk about using the
primary replica as the coordinator node (rather than a random coordinator
node) in production scenarios?
Thank you.
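Most drivers can approximate this client-side with token-aware routing: each
request is sent to a node that owns a replica of the partition, so a replica
acts as the coordinator. A hedged sketch with the Python driver; the contact
point and the DC name "dc1" are placeholders:

from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
from cassandra.policies import DCAwareRoundRobinPolicy, TokenAwarePolicy

# TokenAwarePolicy prefers nodes holding a replica of the statement's
# partition key; the wrapped DC-aware policy provides the fallback ordering.
profile = ExecutionProfile(load_balancing_policy=TokenAwarePolicy(
    DCAwareRoundRobinPolicy(local_dc="dc1")))
cluster = Cluster(["10.0.0.1"],
                  execution_profiles={EXEC_PROFILE_DEFAULT: profile})
session = cluster.connect()
# Prepared statements carry the partition key, so the driver can compute the
# token and pick a replica as the coordinator.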
On Wed, Feb 16, 2011 at 10:53 AM, A J wrote:
> Thanks for the confirmation. Interesting alternatives to av
There is a single point of failure for sure as there is a single proxy
in front but that pays off as the load is even between nodes. Another
plus is when a machine is out of the cluster for maintenance the proxy
handles that automatically. Originally I started it as an experiment,
there is a large
You have a single HAProxy node in front of the cluster or you have a HAProxy
node on each machine that is a client of Cassandra that points at all the
nodes in the cluster?
The former has a SPOF and bottleneck (the HAProxy instance), the latter does
not (and is somewhat common, especially for thin
We are using haproxy in TCP mode for round-robin with great success.
It's a bit unorthodox but has some real added value, like logging.
Here is the relevant config for haproxy:
#
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    maxconn 4096
    user haproxy
    group haproxy
Makes sense! Thanks.
Just a quick follow-up:
Now I understand the write is not made to the coordinator (unless it is part of
the replica set for that key). But does the write column traffic 'flow' through
the coordinator node? For a 2G column write, will I see 2G network traffic
on the coordinator node?
It doesn't write anything to the coordinator node, it just forwards it to
nodes in the replica set for that row key.

The write goes to some node (the coordinator, i.e. whatever node you connected
to). The coordinator looks at the key and determines which nodes are
responsible for it; in parallel it forwards the write to them.
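A toy, self-contained sketch of that flow (this is not Cassandra's internal
code; the ring, node names and replication factor are made up) just to make
the forwarding explicit:

import hashlib

NODES = ["A", "B", "C", "D", "E"]   # hypothetical ring order
RF = 3                              # replication factor

def replicas_for(key):
    # Hash the key to a ring position, then take RF consecutive nodes.
    token = int(hashlib.md5(key.encode()).hexdigest(), 16)
    start = token % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(RF)]

def send_mutation(node, key, value):
    # Stand-in for the network call to a replica; always acks in this toy model.
    return True

def coordinate_write(key, value, required_acks):
    # The coordinator stores nothing itself (unless it happens to be a replica);
    # it forwards to every replica and succeeds once enough of them ack.
    replicas = replicas_for(key)
    acks = sum(1 for node in replicas if send_mutation(node, key, value))
    return acks >= required_acks

print(replicas_for("row-1"), coordinate_write("row-1", "v", required_acks=2))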
Thanks.
1. That is somewhat disappointing. Wish the redundancy of the write on the
coordinator node could have been avoided somehow.
Does the write on the coordinator node (in case it is not part of the N
replica nodes for that key) get deleted before the response of the write is
returned back to the
1. Yes, the coordinator node propagates requests to the correct nodes.
2. Most (all?) higher level clients (pycassa, hector, etc) load balance for
you. In general your client and/or the caller of the client needs to catch
exceptions and retry. If you're using RRDNS and some of the node
From my reading it seems like the node that the client connects to becomes
the coordinator node. Questions:
1. Is it true that the write first happens on the coordinator node and then
the coordinator node propagates it to the right primary node and the
replicas ? In other words if I have a
The first option: the coordinator node takes care of sending the work to the
other nodes. It will return to you when the write has been acknowledged by the
number of nodes you specify in the consistency level.

Aaron

On 07 Jul, 2010, at 06:06 PM, ChingShen wrote:
> Thanks aaron morton, I have an
to C node, finally,
C node forwards it to D node?
Thanks.
Shen

On Tue, Jul 6, 2010 at 8:04 PM, aaron morton wrote:
> Whichever node the client connects to is the coordinator node. It will
> take care of sending the messages to the nodes who will do the actual work.
>
> If you have multiple nodes try a DNS round robin to distribute client
> connections around.
Whichever node the client connects to is the coordinator node. It will take
care of sending the messages to the nodes who will do the actual work.
If you have multiple nodes try a DNS round robin to distribute client
connections around.
Aaron
On 6 Jul 2010, at 22:10, ChingShen wrote:
>
Hi all,
I'm a newbie in Cassandra, and I have a question about which node is the
coordinator node in my cluster.
I have A, B, C, D, E, F and G nodes. If I run a write operation on the "A"
node, and the key range is between A and B, is the A node responsible for
writing the key to the B, C and D nodes(R