Are you absolutely sure that a lock is required? I could imagine that
multiple Paxos rounds could be played for different partitions, and these
rounds would be dependent on each other.
Performance aside, can you please elaborate on where you see the need for a
lock?
On 8 Sep 2015 00:05, "DuyHai Doan"
Hi Tom,
While reading data (even at CL LOCAL_QUORUM), if the data on the different nodes
required to meet the CL in your local cluster doesn't match, data will be read from
the remote DC for read repair if read_repair_chance is not 0.
Important points:
1. If you are reading and writing at LOCAL_QUORUM you can set r
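(For illustration only, not from the original message: these read repair knobs are per-table options, so a hypothetical table could be tuned along these lines; myks.mytable is a made-up name.)

  ALTER TABLE myks.mytable
    WITH read_repair_chance = 0.0            -- cross-DC read repair disabled
    AND dclocal_read_repair_chance = 0.1;    -- keep read repair within the local DC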
That is the state of data modeling with 2.1 and it's worked quite well for
a lot of people (especially for those using batches or streaming to
maintain the different views of the same data).
However, you should be interested in the upcoming Materialized Views in 3.0
http://www.datastax.com/dev/blo
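(A rough sketch of what the 3.0 materialized view syntax looks like; the table and column names here are invented for the example.)

  CREATE TABLE users (user_id int PRIMARY KEY, email text, name text);

  CREATE MATERIALIZED VIEW users_by_email AS
    SELECT * FROM users
    WHERE email IS NOT NULL AND user_id IS NOT NULL
    PRIMARY KEY (email, user_id);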
Multi-partition LWT is currently not supported, on purpose. To support it,
we would have to emulate a distributed lock, which is pretty bad for
performance.
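(For contrast, a conditional batch confined to a single partition is supported today; the schema and values below are made up for the sketch.)

  CREATE TABLE user_items (user_id int, item text, qty int, PRIMARY KEY (user_id, item));

  -- both statements touch partition user_id = 42, so Paxos runs for that one partition
  BEGIN BATCH
    UPDATE user_items SET qty = 5 WHERE user_id = 42 AND item = 'a' IF qty = 4;
    INSERT INTO user_items (user_id, item, qty) VALUES (42, 'b', 1) IF NOT EXISTS;
  APPLY BATCH;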
On Mon, Sep 7, 2015 at 10:38 PM, Marek Lewandowski <
marekmlewandow...@gmail.com> wrote:
> Hello there,
>
> would you be interested in having
Hi Tom,
"one drawback: the node joins the cluster as soon as the bootstrapping
begins."
I am not sure I understand this correctly. It will get tokens, but not load
data, if you combine it with auto_bootstrap=false.
How I see it: you should be able to start all the new nodes in the new DC
with autob
Hi Christian,
No, I never tried survey mode. I didn't know about it until now, but from the
info I was able to find, it looks like it is meant for a different purpose.
Maybe it can be used to bootstrap a new DC, though.
On the other hand, the auto_bootstrap=false + rebuild scenario seems to be
designed
Hello there,
would you be interested in having multi-partition support for lightweight
transactions, in other words, to have the ability to express something like:
INSERT INTO … IF NOT EXISTS AND
UPDATE … IF EXISTS AND
UPDATE … IF colX = ‘xyz’
where each statement refers to a row living potentially o
Hi Tom,
this sounds very much like my thread: "auto_bootstrap=false broken?"
Did you try booting the new node with survey mode? I wanted to try this,
but I am waiting for 2.0.17 to come out (survey mode is broken in earlier
versions). IMHO survey mode is what you (and I) want: start a node,
Running nodetool rebuild on a node that was started with join_ring=false
does not work, unfortunately. The nodetool command returns immediately,
after a message appears in the log that the streaming of data has started.
After that, nothing happens.
Tom
On Fri, Sep 12, 2014 at 5:47 PM, Robert Coli
NetworkTopologyStrategy
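(For reference, a NetworkTopologyStrategy keyspace is declared roughly like this; the keyspace name, DC names and replication factors are examples only.)

  CREATE KEYSPACE myks WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'DC1': 3,
    'DC2': 3
  };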
On Mon, Sep 7, 2015 at 4:39 PM, Ryan Svihla wrote:
> What's your keyspace replication strategy?
>
> On Thu, Sep 3, 2015 at 3:16 PM Tom van den Berge <
> tom.vandenbe...@gmail.com> wrote:
>
>> Thanks for your help so far!
>>
>> I have some problems trying to understand the
Thanks. Based on what I know about the architecture, it seems like it should be
pretty easy to support. Thanks for the confirmation and the ticket.
On Sep 4, 2015, at 3:30 PM, Tyler Hobbs
<ty...@datastax.com> wrote:
This query would be reasonable to support, so I've opened
https://issue
Glad to hear that.
About the proxy:
Putting an HAProxy in front of Cassandra is an anti-pattern, as you create
a single point of failure. You can multiply them, but why would you do that
anyway when most (all?) of the modern Cassandra clients handle this for
you through TokenAware / RoundRobin /
I think I know where my problem is coming from. I took a look at the log of
Cassandra on each node and I saw something related to bootstrap. It says
that the node is a seed, so there will be no bootstrapping. Actually, I made a
mistake: in the cassandra.yaml file, each node has two IPs as seeds. The i
Thank you all for your answers.
@Alain:
Can you detail the actions performed,
>>like how you load data
>>>I have an HAProxy in front of my Cassandra database, so I'm sure that my
application queries a different coordinator each time
>>what scaleup / scaledown are and specify if you let it decommission
Hi,
Could you show us how you measure CPU & network?
Just to be sure we're not missing something obvious
Thanks,
From: Alain RODRIGUEZ [mailto:arodr...@gmail.com]
Sent: Monday, September 07, 2015 5:19 PM
To: user@cassandra.apache.org
Subject: Re: Effect of adding/removing nodes from cassand
Yes, you are right, I should not have linked my scenario with Strong
consistency. Thank you
Ibrahim
On Mon, Sep 7, 2015 at 2:01 PM, Ryan Svihla wrote:
> The condition you bring up is a misconfigured cluster period, and no
> matter how you look at it, that's the case. In other words, the scena
What's your keyspace replication strategy?
On Thu, Sep 3, 2015 at 3:16 PM Tom van den Berge
wrote:
> Thanks for your help so far!
>
> I have some problems trying to understand the jira mentioned by Rob :(
>
> I'm currently trying to set up the first node in the new DC with
> auto_bootstrap = tru
Huge differences in ability to handle compaction and read contention. I've
taken spindle servers struggling at 7k TPS for the whole cluster, with 9-node
data centers (stupidly big writes, not my app), to doing that per node just by
swapping out to SSDs. This says nothing about the 100x change in latency on
If that's what tracing is telling you, then it's fine and just a product of
data distribution (note that your token counts aren't identical anyway).
If you're doing CL ONE queries directly against particular nodes and
getting different results, it sounds like dropped mutations, streaming
errors and/or time
The normal approach is denormalization to a materialized view (in the
traditional sense, not the new 3.0 feature coming out), which is
also true of using an RDBMS at scale (joins across all data sets get
expensive once you start having to shard across different servers).
The simplistic idea is
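(A toy illustration of that manual denormalization, with an invented schema: the same data is written to two tables, each keyed for one query pattern, often in a single logged batch.)

  CREATE TABLE users_by_id    (user_id int PRIMARY KEY, email text, name text);
  CREATE TABLE users_by_email (email text PRIMARY KEY, user_id int, name text);

  BEGIN BATCH
    INSERT INTO users_by_id    (user_id, email, name) VALUES (42, 'ann@example.com', 'Ann');
    INSERT INTO users_by_email (email, user_id, name) VALUES ('ann@example.com', 42, 'Ann');
  APPLY BATCH;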
Hi Sara,
Can you detail the actions performed, like how you load data, what scaleup /
scaledown are, and specify whether you let it decommission fully (streams
finished, node removed from nodetool status), etc.?
This would help us to help you :).
Also, what happens if you query using "CONSISTENCY LOCAL_QUO
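(Presumably this refers to setting the consistency level in cqlsh before running the query, along these lines; the keyspace, table and key are placeholders.)

  CONSISTENCY LOCAL_QUORUM;
  SELECT * FROM myks.mytable WHERE id = 42;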
The condition you bring up is a misconfigured cluster, period, and no matter
how you look at it, that's the case. In other words, the scenario you're
bringing up does not get to the heart of the matter of whether Cassandra has
"Strong Consistency" or not; your example, I'm sorry to say, fails in this
regar
""It you need strong consistency and don't mind lower transaction rate,
you're better off with base""
I wish you can explain more how this statment relate to the my post?
Regards,
If you need strong consistency and don't mind a lower transaction rate, you're
better off with HBase
Sent from my iPhone
> On Sep 7, 2015, at 5:55 AM, ibrahim El-sanosi
> wrote:
>
> Ok,
>
>
>
> With LWT, I completely understand that it will achieve total
> order/linearizability, theref
Please don't mail me directly.
I read your answer, but I cannot help any further.
And answering with "Sorry, I can't help" is pointless :)
Wait for the community to answer.
From: ICHIBA Sara [mailto:ichi.s...@gmail.com]
Sent: Monday, September 07, 2015 11:34 AM
To: user@cassandra.apache.org
Subject
Ok,
With LWT, I completely understand that it will achieve total
order/linearizability, therefore the above scenario cannot occur.
However, when you said "the scenario will occur if your clocks are not
sync’d", this is an ambiguous statement, because both client and server sides
are likely to hav
When there's a scaledown action, I make sure to decommission the node
first. But still, I don't understand why I'm having this behaviour. Is it
normal? What do you normally do to remove a node? Is it related to tokens?
I'm assigning each of my nodes a different token based on their IP
address (t
At the beginning it looks like this:
[root@demo-server-seed-k6g62qr57nok ~]# nodetool status
Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load       Tokens  Owns   Host ID   Rack
UN  40.0.0.208  128.73 KB  248
Could you provide the result of:
- nodetool status
- nodetool status YOURKEYSPACE
Hey there,
I'm trying to scale a Cassandra cluster with OpenStack. But I'm seeing strange
behavior when there is a scaleup (a new node is added) or a scaledown (a node is
removed). (Don't worry, the seeds are stable.)
I start my cluster with 2 machines, one seed and one server, then create
the database wi