On 07/28/2015 07:54 PM, Anuj Wadehra wrote:
Any more thoughts? Anyone?
You could help others try to help you by including details, as
previously asked:
From: "sean_r_dur...@homedepot.com"
Date: Fri, 24 Jul, 2015 at 5:39 pm
It is a bit hard to follow. Perhaps you could include your prop
I know this is an old thread, but just FYI for others having the same
problem (OpsCenter trying to connect to a node that is already removed): the
solution is to ssh into the OpsCenter node and run `sudo service opscenterd
restart`.
On Thu, Jul 9, 2015 at 3:52 PM, Sid Tantia
wrote:
> Found my mista
Any more thoughts? Anyone?
Thanks
Anuj
Sent from Yahoo Mail on Android
From: "Anuj Wadehra"
Date: Sat, 25 Jul, 2015 at 5:14 pm
Subject:Re: RE: Manual Indexing With Buckets
We are in product development, and batch size depends on the customer base of
the customer buying our product. Huge customers
I did already set that to the number of cores of the machines (24), but it
made no difference.
On Tue, Jul 28, 2015 at 4:44 PM, Bharatendra Boddu
wrote:
> Increase memtable_flush_writers. In cassandra.yaml, it is recommended to
> increase this setting when SSDs are used for storing data.
>
> On Fri
Increase memtable_flush_writers. In cassandra.yaml, it is recommended to
increase this setting when SSDs are used for storing data.
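For reference, a minimal cassandra.yaml fragment showing the setting in question; the value here is purely illustrative and should be tuned for your hardware:

```yaml
# cassandra.yaml -- illustrative value, not a blanket recommendation.
# On SSDs the flush path is rarely I/O-bound, so raising the number of
# flush writers can help memtables drain under heavy write load.
memtable_flush_writers: 8
```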
On Fri, Jul 24, 2015 at 1:55 PM, Soerian Lieve wrote:
> I was on CFQ so I changed it to noop. The problem still persisted however.
> Do you have any other ideas?
>
> O
Are you using light weight transactions anywhere?
On Wed, Jul 15, 2015 at 7:40 AM, Michael Shuler
wrote:
> On 07/15/2015 02:28 AM, Amlan Roy wrote:
>
>> Hi,
>>
>> I get the following error intermittently while writing to Cassandra.
>> I am using version 2.1.7. Not sure how to fix the actual issu
Thanks. Hmmm, somehow I had the impression that until B's streamingIn
finished, it does not advertise itself to other servers for receiving fresh
replications. Looks like I'm wrong here; let me check the code..
On Jul 28, 2015 2:07 PM, "Robert Coli" wrote:
> On Tue, Jul 28, 2015 at 1:01 PM, Yang wr
On Tue, Jul 28, 2015 at 1:01 PM, Yang wrote:
> Thanks. but I don't think having more nodes in the example changes the
> issue I outlined.
>
> say you have just key "X", RF = 3; nodes A, B, D are responsible for "X".
>
> in stable mode, the updates X=1, 2, 3 go to all 3 servers.
>
> then at thi
Thanks. but I don't think having more nodes in the example changes the
issue I outlined.
say you have just key "X", RF = 3; nodes A, B, D are responsible for "X".
in stable mode, the updates X=1, 2, 3 go to all 3 servers.
then at this time, node C joins, bootstraps, gets the sstables from B.
On Tue, Jul 28, 2015 at 1:31 AM, Yang wrote:
> I'm wondering how the Cassandra protocol brings a newly bootstrapped node
> "up to speed".
>
Bootstrapping nodes get "extra" replicated copies of data for the range
they are joining.
So if before the bootstrap the nodes responsible for Key "X" are
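The behaviour described above can be sketched as follows. This is a hedged illustration, not Cassandra's actual code: while a node is joining, the coordinator sends writes for the affected range to the natural replicas and to the joining ("pending") node, so the new node misses nothing that arrives after its streamed snapshot was taken. The function and names here are hypothetical.

```python
# Illustrative sketch (not Cassandra source) of the write path during
# bootstrap: writes go to the natural replicas plus any pending endpoints
# that are joining the range, with duplicates removed.

def write_targets(natural: list[str], pending: list[str]) -> list[str]:
    """All endpoints a coordinator sends a write to for one key."""
    return natural + [p for p in pending if p not in natural]

# Stable state: RF=3, nodes A, B, D own key "X".
print(write_targets(["A", "B", "D"], []))     # ['A', 'B', 'D']
# While C bootstraps the range, it also receives every new write:
print(write_targets(["A", "B", "D"], ["C"]))  # ['A', 'B', 'D', 'C']
```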
I'm running a benchmark on a 2-node C* 2.1.8 cluster using cassandra-stress,
with the default of CL=1.
Stress runs fine for some time, and then starts throwing:
java.io.IOException: Operation x10 on key(s) [36333635504d4b343130]: Error
executing: (UnavailableException): Not enough replica available
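For readers hitting the same error: UnavailableException is raised up front by the coordinator when fewer live replicas exist than the consistency level requires. A minimal sketch of that rule (this is an illustration of the concept, not Cassandra's implementation; the function names are mine):

```python
# Sketch of the replica-availability check behind UnavailableException:
# the coordinator refuses a request before attempting it when there are
# fewer live replicas than the consistency level needs.

def replicas_required(cl: str, rf: int) -> int:
    """Live replicas needed for a given consistency level."""
    if cl == "ONE":
        return 1
    if cl == "QUORUM":
        return rf // 2 + 1
    if cl == "ALL":
        return rf
    raise ValueError(f"unsupported consistency level: {cl}")

def can_serve(live_replicas: int, cl: str, rf: int) -> bool:
    return live_replicas >= replicas_required(cl, rf)

# Even at CL=ONE, a request fails if no replica for that key is alive,
# e.g. RF=1 on a 2-node cluster with one node flapping under load:
print(can_serve(live_replicas=0, cl="ONE", rf=1))  # False
```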
I'm still struggling to find the root cause of such CPU
utilisation patterns.
http://i58.tinypic.com/24pifcy.jpg
Three weeks after a C* restart, CPU utilisation goes through the
roof; this doesn't happen shortly after the restart (which
is visible in the graph).
C* is runni
I'm wondering how the Cassandra protocol brings a newly bootstrapped node
"up to speed".
for ease of illustration, let's say we just have one key, K, whose value
is continually updated: 1, 2, 3, 4
originally we have 1 node, A, now node B joins, and needs to bootstrap and
get its newly assig
Hi,
Thanks, that was the issue. I got distracted by too much debugging, when I
hunted down the Usergrid username/password instead of the Cassandra username/password.
That way I overlooked the error of simply copying the config file from the link
that Nate pointed to. One closer look would have suf