I will keep an eye out for that if it happens again. Times at this point are
synchronized.
On Wed, Jan 7, 2015 at 10:37 PM, Duncan Sands wrote:
> Hi Anand,
>
>
> On 08/01/15 02:02, Anand Somani wrote:
>
>> Hi,
>>
>> We have a 3 node cluster (on VM). Eg. host1, host2,
Hi,
We have a 3 node cluster (on VMs), e.g. host1, host2, host3. One of the VMs
(host1) rebooted, and when host1 came up it would see the others as down,
and the others (host2 and host3) saw it as down. So we restarted host2 and
now the ring seems fine (everybody sees everybody as up).
But now the clie
Hi,
It seems like it should be possible to have a keyspace replicated to only a
subset of DCs in a cluster that spans multiple DCs. Is there anything bad
about this approach?
Scenario
Cluster spanning 4 DCs => CA, TX, NY, UT
Has multiple keyspaces such that
* "keyspace_CA_TX" - repli
Correction: credentials are stored in the system_auth keyspace, so is it
OK/recommended to change the replication factor of that keyspace?
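For reference, a hedged sketch of the usual change (DC names and counts are
illustrative):

  ALTER KEYSPACE system_auth
    WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};

followed by nodetool repair system_auth on each node, so existing
credentials get copied to the new replicas.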
On Tue, Apr 29, 2014 at 10:41 PM, Anand Somani wrote:
> Hi
>
> We have enabled cassandra client authentication and have set new user/pass
> per ke
Hi
We have enabled Cassandra client authentication and have set a new
user/pass per keyspace. As I understand it, the user/pass is stored in a
system table; do we need to change the replication factor of that system
keyspace so this data is replicated? The cluster is going to be multi-DC.
Thanks
Anand
Have you tried nodetool rebuild for that node? I have seen that work when
repair failed.
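(For reference, the usual invocation is run on the node being rebuilt,
optionally naming the source datacenter, e.g. nodetool -h host1 rebuild DC1;
the host and DC name here are illustrative.)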
On Wed, Apr 2, 2014 at 11:44 AM, Redmumba wrote:
> Cassandra 1.2.15, using commodity hardware.
>
>
> On Tue, Apr 1, 2014 at 6:37 PM, Robert Coli wrote:
>
>> On Tue, Apr 1, 2014 at 3:24 PM, Redmumba wrote:
Hi,
The scenario is a cluster spanning datacenters; we use LOCAL_QUORUM and
want to know when things are not getting replicated across data centers.
What is the best way to track/alert on that?
I was planning on using the HintedHandOffManager (JMX)
=> org.apache.cassandra.db:type=HintedHandOffManager
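A rough sketch of how that MBean could be polled from code, using the
standard JMX remote API (the host, port, and the listEndpointsPendingHints
operation name are assumptions; the operation set varies by Cassandra
version):

  import javax.management.MBeanServerConnection;
  import javax.management.ObjectName;
  import javax.management.remote.JMXConnector;
  import javax.management.remote.JMXConnectorFactory;
  import javax.management.remote.JMXServiceURL;

  public class PendingHintsCheck {
      public static void main(String[] args) throws Exception {
          // Cassandra exposes JMX on port 7199 by default.
          JMXServiceURL url = new JMXServiceURL(
                  "service:jmx:rmi:///jndi/rmi://host1:7199/jmxrmi");
          try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
              MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
              ObjectName hhm = new ObjectName(
                      "org.apache.cassandra.db:type=HintedHandOffManager");
              // Assumed operation: returns the endpoints that still have
              // hints queued, i.e. writes not yet delivered to a replica.
              Object pending = mbs.invoke(
                      hhm, "listEndpointsPendingHints", null, null);
              System.out.println("Endpoints with pending hints: " + pending);
          }
      }
  }

A non-empty result for a remote DC's nodes would be one signal that
cross-DC replication is lagging.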
RF=3.
On Thu, Apr 4, 2013 at 7:08 AM, Cem Cayiroglu wrote:
> What was the RF before adding nodes?
>
> Sent from my iPhone
>
> On 04 Apr 2013, at 15:12, Anand Somani wrote:
>
> We are using a single process with multiple threads; will look at
> client-side delays.
>
> ... processes?
>
>
> On Wed, Apr 3, 2013 at 8:49 AM, Anand Somani wrote:
>
>> Hi,
>>
>> I am running some tests trying to scale out our application from using a
>> 3 node cluster to 6 node cluster. The thing I observed is that when using a
>> 3 node cluster I was abl
Hi,
I am running some tests trying to scale our application out from a 3-node
cluster to a 6-node cluster. The thing I observed is that with the 3-node
cluster I was able to handle about 41 req/second, so I added 3 more nodes
thinking it should come close to doubling, but instead it only goes up to
> ... in
> /var/lib/cassandra/data/KS_NAME/CF_NAME/SSTable.data. Is the data in the
> right place?
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 13/12/2012, at 6:
Hi,
We have a service which uses in-vm Cassandra and programmatically creates
the schema if it does not exist; this has worked for us for some time
(including upgrades to the service) and we have been using 0.8.5.
Now we are testing the upgrade to 1.1.6 and noticed that on upgrade the
cassandra fai
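For context, a minimal sketch of the in-vm pattern being described.
EmbeddedCassandraService is the class the Cassandra tree ships for this,
though its exact API changed across versions; the schema-creation helper is
hypothetical:

  import org.apache.cassandra.service.EmbeddedCassandraService;

  public class InVmCassandra {
      public static void main(String[] args) throws Exception {
          // DatabaseDescriptor locates the config through this property.
          System.setProperty("cassandra.config", "file:conf/cassandra.yaml");
          EmbeddedCassandraService cassandra = new EmbeddedCassandraService();
          cassandra.start(); // Cassandra now runs inside this JVM
          // createSchemaIfAbsent() would be the application's hypothetical
          // helper that checks for the keyspace and creates it if missing.
      }
  }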
Hi,
Have a requirement to do a multi-DC, low-latency application. This will be
in an active/standby setup, so I am planning on using LOCAL_QUORUM for
writes. Now, there is a hard requirement that the maximum loss of data (on
a DC destruction) be bounded to some minutes,
- In cassandra what is the recommended
In my tests I have seen repair sometimes take a lot of space (2-3x);
cleanup did not reclaim it, and the only way I could clean it up was a
major compaction.
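(For reference, a major compaction can be triggered per column family with,
e.g., nodetool -h host1 compact MyKeyspace MyColumnFamily; the keyspace and
CF names are illustrative. Note it rewrites the data into a single large
SSTable.)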
On Sun, Sep 18, 2011 at 6:51 PM, Yan Chunlu wrote:
> while doing repair on node3, the "Load" keep increasing, suddenly cassandra
> has e
So I should be able to do a rolling upgrade from 0.7 to 1.0 (it is not in
the release notes, but I assume that is work in progress).
Thanks
On Thu, Sep 15, 2011 at 1:36 PM, amulya rattan wrote:
> Isn't this levelDB implementation for Google's LevelDB?
> http://code.google.com/p/leveldb/
> From wha
... Last time I checked the Partitioner did not take the DC
> into consideration https://issues.apache.org/jira/browse/CASSANDRA-3047
>
>
> Good luck.
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
On Tue, Sep 13, 2011 at 3:57 PM, Peter Schuller wrote:
> > I think it is a serious problem since I cannot "repair". I am
> > using cassandra on production servers. Is there some way to fix it
> > without upgrading? I have heard that 0.8.x is still not quite ready for
> > production environments.
Hi,
Just trying to set up a cluster of 4 nodes for a multi-DC scenario, with 2
nodes in each DC. This is all on the same box, just for testing the
configuration aspect. I have configured things as:
- PropertyFile
- 127.0.0.4=SC:rack1
127.0.0.5=SC:rack2
127.0.0.6=AT:rack1
127
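(Reminder, in case it helps: for these mappings, which live in
cassandra-topology.properties, to be consulted, each node's cassandra.yaml
also needs endpoint_snitch: PropertyFileSnitch; older versions want the
fully qualified org.apache.cassandra.locator.PropertyFileSnitch.)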
>
> On Tue, Sep 13, 2011 at 11:53 AM, Anand Somani wrote:
>
>> Hi,
>>
>> Upgraded from 0.7.4 to 0.7.8 and noticed that StorageProxy (under
>> cassandra.db) is no longer exposed; is that intentional? So the question
>> is: are these covered somewhere else?
>>
>> Thanks
>> Anand
>
Hi,
Upgraded from 0.7.4 to 0.7.8 and noticed that StorageProxy (under
cassandra.db) is no longer exposed; is that intentional? So the question is:
are these covered somewhere else?
Thanks
Anand
>
> On Thu, Sep 8, 2011 at 3:14 PM, Anand Somani wrote:
> > Hi,
> >
> > Have a requirement, where data is spread across multiple DC for disaster
> > recovery. So I would use the NTS, that is clear, but I have some
> questions
> > with this scenario
>
Hi
Currently we are using 0.7.4 and I was wondering whether I should upgrade
to 0.7.8/9 or move to 0.8. Is anybody using 0.8 in production, and what is
their experience?
Thanks
Hi,
Have a requirement where data is spread across multiple DCs for disaster
recovery. So I would use NTS, that is clear, but I have some questions
about this scenario:
- I have 2 data centers
- RF: 2 (active DC), 2 (passive DC)
- with NTS, the consistency level options are LOCAL_QUORUM
Hi,
If I have a cluster with 15-20T nodes, some things that I know will be
potential problems are:
- Compactions taking longer
- Higher read latencies
- Long time for adding/removing nodes
What are other things that can be problematic with big nodes?
Regards
Anand
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 25/08/2011, at 3:22 AM, Anand Somani wrote:
>
> So I have looked at the cluster from
>
> - Cassandra-client: describe cluster => shows correctly, 3 nodes
> - used the StorageService JMX bean => Unrea
... find out about "phantom" nodes.
On Wed, Aug 24, 2011 at 8:01 AM, Anand Somani wrote:
> So, I restarted the cluster (not rolling), but it is still maintaining
> hints for IPs that are no longer part of the ring. nodetool ring shows
> things correctly (as only 3 nodes
result in log files not been deleted
> https://issues.apache.org/jira/browse/CASSANDRA-2829
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 22/08/2011, at 4:56 AM, Anand Somani wrot
We have a lot of space on /data, and it looks like it was flushing data
fine, judging from the file timestamps.
We did have a bit of a goof-up with IPs when bringing up a down node (and
the commit files have been around since then). Wonder if that is what
triggered it and we have a bunch of hinted handoffs being b
oduce node" exercise I went thru
on Saturday. Somehow can this get worse with cleanup, hinted hand off,
etc
When does the actual commit-data file get deleted.
The flush interval on all my memtables is 60 minutes
Thanks
On Sun, Aug 21, 2011 at 8:43 AM, Anand Somani wrote:
> Hi,
>
Hi,
0.7.4, 3-node cluster, RF=3
Load has not changed much; on 2 of the 3 nodes the commit log filled up in
less than a minute (which did not give it a chance to recover). We have
been running this cluster for about 2-3 months without any problem. At this
point I do not see any unusual load (continue to inves
.33%
113427455640312821154458202477256070485
What are my choices here, how do I clean up the ring? The other 2 nodes show
the ring fine (not even aware of 189)
Thanks
Anand
On Fri, Aug 19, 2011 at 11:53 AM, Anand Somani wrote:
> ok I will go with the IP change strategy and keep you posted. Not going to
> manually copy
0.7.4 / 3-node cluster / RF=3 / quorum read/write
After I re-introduced a corrupted node, I followed the process listed on
the operations wiki for handling failures (thanks to folks on the mailing
list for helping me).
Still doing a cleanup on one node at this point. But I noticed that I am
seeing thi
OK, I will go with the IP change strategy and keep you posted. Not going to
manually copy any data; I'll just bring up the node and let it bootstrap.
Thanks
On Fri, Aug 19, 2011 at 11:46 AM, Peter Schuller
<peter.schul...@infidyne.com> wrote:
> > (Yes, this should definitely be easier. Maybe the most
Let me be specific on "lost data": we lost a replica; the other 2 nodes
have replicas.
I am running reads/writes at quorum. At this point I have turned off my
clients from talking to this node. So if that is the case, I can
potentially just run nodetool repair (without changing the IP). But would
it be better
> ...You may want to disable Hinted Handoff while you are doing this, as you are
> going to run repair anyway when the node comes back.
>
> Cheers
>
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 19/08/2011, at 11:57 A
Hi
I am using 0.7.4 and am seeing this exception in my logs a few times a day;
should I be worried? Or is this just an intermittent network disconnect?
ERROR [RequestResponseStage:257] 2011-08-19 03:05:30,706
AbstractCassandraDaemon.java (line 112) Fatal exception in thread
Thread[RequestResponseSta
Hi,
version: 0.7.4
cluster size: 3
RF: 3
data size on a node: ~500G
I want to do some disk maintenance on a Cassandra node, so the process that
I came up with is:
- drain this node
- back up the system data space
- rebuild the disk partition
- copy data from another node
- copy
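(For reference, the drain step is nodetool -h host1 drain: it flushes
memtables and stops the node from accepting writes, after which the process
can be stopped safely; the host name is illustrative.)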
Hi,
Using Thrift and the get_range_slices call with a TokenRange, with the
RandomPartitioner. Have only tried this on > 0.7.5.
This used to work in 0.6.4 or earlier for me, but I notice that it does not
work anymore. The need is to iterate over a token range to do some
bookkeeping.
The logic is
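A minimal sketch of the kind of token-range call being described, using the
0.7-era Thrift API (the host, keyspace, column family, and tokens are all
illustrative):

  import java.nio.ByteBuffer;
  import java.util.List;

  import org.apache.cassandra.thrift.Cassandra;
  import org.apache.cassandra.thrift.ColumnParent;
  import org.apache.cassandra.thrift.ConsistencyLevel;
  import org.apache.cassandra.thrift.KeyRange;
  import org.apache.cassandra.thrift.KeySlice;
  import org.apache.cassandra.thrift.SlicePredicate;
  import org.apache.cassandra.thrift.SliceRange;
  import org.apache.thrift.protocol.TBinaryProtocol;
  import org.apache.thrift.transport.TFramedTransport;
  import org.apache.thrift.transport.TSocket;

  public class TokenRangeScan {
      public static void main(String[] args) throws Exception {
          TFramedTransport transport =
                  new TFramedTransport(new TSocket("127.0.0.1", 9160));
          transport.open();
          Cassandra.Client client =
                  new Cassandra.Client(new TBinaryProtocol(transport));
          client.set_keyspace("MyKeyspace"); // illustrative

          // Post-0.6.5 the range must be non-wrapping: start token < end
          // token, or the whole ring expressed as (mintoken, mintoken).
          KeyRange range = new KeyRange();
          range.setStart_token("0");
          range.setEnd_token("85070591730234615865843651857942052864");
          range.setCount(100);

          // Fetch at most 10 columns per row, over the whole column range.
          SlicePredicate predicate = new SlicePredicate();
          predicate.setSlice_range(new SliceRange(
                  ByteBuffer.allocate(0), ByteBuffer.allocate(0), false, 10));

          List<KeySlice> rows = client.get_range_slices(
                  new ColumnParent("MyColumnFamily"), predicate, range,
                  ConsistencyLevel.ONE);
          System.out.println("rows in range: " + rows.size());
          transport.close();
      }
  }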
Not sure if it is that simple: a quorum write can fail with the writes
having happened on some nodes (there is no rollback). Also, there is no
concept of an atomic compare-and-swap.
On Tue, Jun 21, 2011 at 2:03 PM, AJ wrote:
>
> On 6/21/2011 2:50 PM, Stephen Connolly wrote:
>
> how important are things like tran
From what I know, you cannot create secondary indexes on an SCF. You should
have gotten this => https://issues.apache.org/jira/browse/CASSANDRA-1813 on
index creation.
On Fri, May 20, 2011 at 6:56 AM, Monkey me wrote:
> Hi,
> I have a SCF, Key is string, super column is TimeUUID, and several
>
I should have clarified: we have 3 copies, so in that case, as long as 2
match we should be OK?
Even if there were checksumming at the SSTable level, I assume it would
have to check and report these errors on compaction (or node repair)?
I have seen some JIRAs open on these issues (47 and 1717), but if I
Hi,
Our application space is such that there is data that might not be read for
a long time. The data is mostly immutable. How should I approach
detecting/solving the bitrot problem? One approach is to read the data and
let read repair do the detection, but given the size of the data, that does
not look very
> ... size of nodes (in terms of data)?
> - How long have you been running?
> - How's compaction treating you?
>
> Thanks,
> Naren
>
>
> On Thu, Jan 27, 2011 at 12:13 PM, Anand Somani wrote:
>
>> Using it for storing large immutable objects, like Aaron was suggesting we
>
Using it for storing large immutable objects; as Aaron was suggesting, we
are splitting the blob across multiple columns. We are also reading it a
few columns at a time (for memory considerations). Currently we have only
gone up to about 300-400KB objects.
We do have machines with 32GB of memory
It is a little slow, though not to the point where it concerns me (we only
have a few tests for now), but it keeps things very clean, with no surprise
effects.
On Thu, Jan 20, 2011 at 6:33 PM, Roshan Dawrani wrote:
> On Fri, Jan 21, 2011 at 5:14 AM, Anand Somani wrote:
>
>> Here is what worked f
Here is what worked for me. I use TestNG, and initialize and create the
schema in the @BeforeClass for each test (see the skeleton below):
- In the @AfterClass, I had to drop the schema; otherwise I was getting the
same exception.
- After this I started getting a port conflict with the second test, so I
added my own versi
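For what it's worth, a bare TestNG skeleton of that setup/teardown shape
(the commented helper names are hypothetical placeholders for the in-vm
start and schema calls described above):

  import org.testng.annotations.AfterClass;
  import org.testng.annotations.BeforeClass;

  public class CassandraBackedTest {
      @BeforeClass
      public void setUp() throws Exception {
          // startInVmCassandra();  // hypothetical: boot embedded Cassandra
          // createSchema();        // hypothetical: create the test keyspace
      }

      @AfterClass
      public void tearDown() throws Exception {
          // dropSchema();  // hypothetical: drop the schema so the next test
          //                // class can recreate it without the exception
      }
  }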
One approach is to ask yourself questions about how you would use this
information, for example:
- how often do you go from user to tags
- how often would you want to go from tag -> users
- what kind of reporting would you want to do on tags, and how often
- can multiple people add the sa
Interesting idea.
It is like dividing the entire load on the system by 6, so if the effective
load is still the same and we used SSDs for the commit volume, we could get
away with 1 commit-log SSD. Even if these 6 instances can handle 80% of the
load (compared to 1 instance on this machine), that might be acc
l query; token-based queries have to be on
> non-wrapping ranges (left token < right token), or a wrapping range of
> (mintoken, mintoken). This was changed as part of the range scan
> fixes post-0.6.5.
>
> On Mon, Nov 15, 2010 at 6:32 PM, Anand Somani
> wro
Hi
Problem:
The call client.get_range_slices(), using tokens (not keys), fails with a
TimedOutException, which I think is misleading (read on).
Server: works with 0.6.5, but not with 0.6.6 or 0.6.8.
Client: have tried both 0.6.5 and 0.6.6.
I am getting a TimedOutException when I do a get_ran
Hi,
I am trying to iterate over the entire dataset to calculate some
information. The way I am trying to do this is by going directly to the
node that has a data range, so here is the route I am following:
- get the TokenRanges using describe_ring
- then for each TokenRange pick a node and
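(For reference: in the Thrift API, describe_ring(keyspace) returns a
List<TokenRange>, each carrying start_token, end_token, and the endpoints
that own that range; the token-range scan sketch earlier in this archive
shows how a single range can then be read with get_range_slices.)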