: 1000
range_request_timeout_in_ms: 1000
truncate_request_timeout_in_ms: 1000
I still have no luck. Any advice on how to achieve this? I am NOT limited to
the COPY command. What is the best way to achieve this? Thanks in advance for
the help.
ng
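If the timeouts are coming from the client side rather than the server, raising cqlsh's own request timeout may help in addition to the cassandra.yaml settings above. A minimal sketch, assuming a cqlsh version that supports --request-timeout (keyspace, table, and file names here are placeholders, not from the original message):
# give the COPY an hour of client-side timeout instead of the default
cqlsh --request-timeout=3600 -e "COPY myks.mytable FROM '/tmp/data.csv' WITH HEADER = TRUE;"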
sstable2json tomcat-t5-ic-1-Data.db -e
gives me
0021
001f
0020
How do I convert this (hex) to the actual value of the column so I can run the query below?
select * from tomcat.t5 where c1='converted value';
Thanks in advance for the help.
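If the column holds plain text, the hex strings that sstable2json -e prints can be decoded back to characters with a standard tool. A minimal sketch using xxd (note the values shown above are only two bytes each, so they may actually be serialized integers or composite-type components rather than text, in which case the raw bytes have to be interpreted according to the column type):
# decode a hex column key/value back to raw bytes/characters
echo '48656c6c6f' | xxd -r -p        # prints: Hello
echo '0021' | xxd -r -p | xxd        # inspect the raw bytes when they are not printable text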
If I have a configuration of two data centers with one node each, and the
replication factor is also 1, will these 2 nodes be mirrored/replicated?
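For what it's worth, whether the two nodes mirror each other depends on the keyspace's replication settings, not just on the data-center layout: with NetworkTopologyStrategy you ask for one replica in each DC explicitly. A minimal sketch (keyspace and DC names are placeholders and must match the DC names your snitch reports):
cqlsh -e "CREATE KEYSPACE myks WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 1, 'DC2': 1};"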
I want to discuss the question asked by Rene last year again.
http://www.mail-archive.com/user%40cassandra.apache.org/msg28465.html
Is the following a good backup solution?
Create two data-centers:
- A live data-center with multiple nodes (commodity hardware) (6 nodes with
replication factor of
I need to make sure that all the data is in SSTables before taking the snapshot.
I am thinking of
nodetool cleanup
nodetool repair
nodetool flush
nodetool snapshot
Am I missing anything else?
Thanks in advance for the responses/suggestions.
ng
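As a rough sketch of that sequence (the keyspace name and snapshot tag are placeholders; nodetool flush is what actually pushes memtables to SSTables on disk, and in most versions nodetool snapshot flushes first anyway, so cleanup and repair are about data correctness rather than about getting data into SSTables):
nodetool cleanup myks                # only needed after topology changes; drops ranges this node no longer owns
nodetool repair myks                 # optional for a pure backup, but makes replicas consistent first
nodetool flush myks                  # force memtables to SSTables on disk
nodetool snapshot -t pre_backup myks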
2. copy the snapshotted files to the node 1 data directory of the CF
3. perform nodetool refresh on node 1
Any suggestions/advice?
ng
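A minimal sketch of that copy-then-refresh step, assuming the default data directory layout of that era (paths, host, keyspace, and table names are placeholders; newer versions add a UUID suffix to the table directory):
# on the source node: locate the snapshot files
ls /var/lib/cassandra/data/myks/mytable/snapshots/mytag/
# copy them into the same CF directory on node 1, then load them without a restart
scp /var/lib/cassandra/data/myks/mytable/snapshots/mytag/* node1:/var/lib/cassandra/data/myks/mytable/
nodetool -h node1 refresh myks mytable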
I am not worried about eventually consistent data. I just want rough data
that is a close approximation.
ng
On Wed, Jun 4, 2014 at 2:49 PM, Robert Coli wrote:
> On Wed, Jun 4, 2014 at 1:26 PM, ng wrote:
>
>> Is there any reason you would like to take snapshot of column family on
I just wanted to verify the procedures to add and remove nodes in my
environment; please feel free to comment or advise.
I have a 3 node cluster (N1, N2, N3) with vnodes configured (num_tokens: 256) on each
node. All are in one data center.
1. Procedure to change node hardware or replace a node with a new machine
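For the hardware-replacement case, one common approach (not necessarily what the original poster had in mind) is to bring the new machine up with the replace_address option so it takes over the old node's tokens and streams its data; this works with vnodes as well. A minimal sketch, assuming the old node is down and its IP was 10.0.0.1 (placeholder):
# in cassandra-env.sh on the NEW node, before its first start
JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.1"
# start Cassandra, wait for bootstrap/streaming to finish, then remove the option again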
All,
We've been having intermittent long application pauses (version 1.2.8) and are
not sure if it's a Cassandra bug. During these pauses, there are dropped
messages in the cassandra log file along with the node seeing other nodes
as down. We've turned on GC logging, and the following is an example of
the safepoint pause.
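For reference, the kind of GC and safepoint logging discussed further down in this thread can be enabled with standard HotSpot flags (Java 7-era names); a sketch of what might be added to cassandra-env.sh, with the log path as a placeholder:
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/cassandra/gc.log"
# safepoint statistics (the Count flag controls how many safepoints are aggregated per log message)
JVM_OPTS="$JVM_OPTS -XX:+PrintSafepointStatistics -XX:PrintSafepointStatisticsCount=1"
# LogVMOutput requires diagnostic options to be unlocked
JVM_OPTS="$JVM_OPTS -XX:+UnlockDiagnosticVMOptions -XX:+LogVMOutput"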
On Wed, Jan 29, 2014 at 1:20 PM, Shao-Chuan Wang <
shaochuan.w...@bloomreach.com> wrote:
> We had similar latency spikes when pending compactions couldn't keep up or
> repair/streaming was taking too many cycles.
>
>
> On Wed, Jan 29, 2014 at 10:07 AM, Frank N
s; the count is the number of safepoints to aggregate into one log
> message)
>
>
> 52s is a very extreme pause, and I would be surprised if revoke bias could
> cause this. I wonder if the VM is swapping out.
>
>
>
> On 29 January 2014 19:02, Frank Ng wrote:
>
>
+LogVMOutput
>
> I never figured out what kills stdout for C*. It's a library we depend on,
> didn't try too hard to figure out which one.
>
>
> On 29 January 2014 21:07, Frank Ng wrote:
>
>> Benedict,
>> Thanks for the advice. I've tried turning on
Sorry, I have not had a chance to file a JIRA ticket. We have not been
able to resolve the issue. But since Joel mentioned that upgrading to
Cassandra 2.0.X solved it for them, we may need to upgrade. We are
currently on Java 1.7 and Cassandra 1.2.8
On Thu, Feb 13, 2014 at 12:40 PM, Keith Wri
>>>>>> [GC Worker Total (ms): Min: 163.8, Avg: 163.8, Max: 163.8, Diff:
>>>>>> 0.0, Sum: 327.6]
>>>>>> [GC Worker End (ms): Min: 222346382.1, Avg: 222346382.1, Max:
>>>>>> 222346382.1, Diff: 0.0]
>>>>>> [Code Root Fixup: 0.0 ms]
>>
We have swap disabled. Can death by paging still happen?
On Thu, Feb 27, 2014 at 11:32 AM, Benedict Elliott Smith <
belliottsm...@datastax.com> wrote:
> That sounds a lot like death by paging.
>
>
> On 27 February 2014 16:29, Frank Ng wrote:
>
>> I just caught th
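Following up on the swap question above: even with swap disabled, page-cache pressure and major page faults can still stall a process. A quick, generic Linux check (not from the original thread; the PID is a placeholder):
free -m                          # confirm swap really is 0 / disabled
vmstat 1 10                      # watch si/so (swap-in/out) and b (blocked processes)
ps -o min_flt,maj_flt -p <cassandra_pid>   # major page faults accumulated by the Cassandra JVM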
Hello,
I am on Cassandra 1.0.7. My repair processes are taking over 30 hours to
complete. Is it normal for the repair process to take this long? I wonder
if it's because I am using the ext3 file system.
thanks
which part of the repair process is slow -
> network streams or verify compactions. use nodetool netstats or
> compactionstats.
>
>
> On 04/10/2012 05:16 PM, Frank Ng wrote:
>
>> Hello,
>>
>> I am on Cassandra 1.0.7. My repair processes are taking over 30 hours to
>>
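A small sketch of the check suggested above, to see which phase of the repair is the slow one (the host name is a placeholder):
nodetool -h node1 netstats          # streaming sections show ranges still being exchanged
nodetool -h node1 compactionstats   # "Validation" compactions are the merkle-tree builds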
ompaction on any of the column families that
> hold a lot of your data?
>
> Do your cassandra logs say you are streaming a lot of ranges?
> zgrep -E "(Performing streaming repair|out of sync)"
>
>
> On Tue, Apr 10, 2012 at 9:45 AM, Igor wrote:
>
>> On 04/10/
streaming a lot of ranges?
> zgrep -E "(Performing streaming repair|out of sync)"
>
>
> On Tue, Apr 10, 2012 at 9:45 AM, Igor wrote:
>
>> On 04/10/2012 07:16 PM, Frank Ng wrote:
>>
>> Short answer - yes.
>> But you are asking wrong question.
>>
>
when you think you should be balanced and repair never ends (I think there
> is a 48 hour timeout).
>
>
> On Tuesday, April 10, 2012, Frank Ng wrote:
>
>> I am not using size-tiered compaction.
>>
>>
>> On Tue, Apr 10, 2012 at 12:56 PM, Jonathan Rhone wrote:
>
still building the merkle hash tree.
>
> Look at nodetool netstats. Is it streaming data? If so, all hash trees
> have been calculated.
>
> Cheers
>
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
>
I also noticed that if I use the -pr option, the repair process went down
from 30 hours to 9 hours. Is the -pr option safe to use if I want to run
repair processes in parallel on nodes that are not replication peers?
thanks
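For context on -pr: it restricts each run to the node's primary token ranges, so the full ring is only covered if every node eventually runs it. A minimal sketch of scheduling it across the cluster (host names are placeholders):
# run on each node in turn (or in parallel on nodes that do not share replicas)
for h in node1 node2 node3; do
  nodetool -h "$h" repair -pr
done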
On Thu, Apr 12, 2012 at 12:06 AM, Frank Ng wrote:
> Thank you
Thanks for the clarification. I'm running repairs as in case 2 (to avoid
deleted data coming back).
On Thu, Apr 12, 2012 at 10:59 AM, Sylvain Lebresne wrote:
> On Thu, Apr 12, 2012 at 4:06 PM, Frank Ng wrote:
> > I also noticed that if I use the -pr option, the repair pro
I am having the same issue in 1.0.7 with leveled compaction. It seems that
the repair is flaky. It either completes relatively fast in a TEST
environment (7 minutes) or gets stuck trying to receive a merkle tree from
a peer that is already sending it the merkle tree.
Only solution is to restart Cassandra.
Hi All,
We are using the Fat Client and notice that there are files written to the
commit log directory on the Fat Client. Does anyone know what these files
are storing? Are these hinted handoff data? The Fat Client has no files in
the data directory, as expected.
thanks
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 23/06/2012, at 8:07 AM, Frank Ng wrote:
>
> Hi All,
>
> We are using the Fat Client and notice that there are files written to the
> commit log directory on the Fat Client. Does anyone know what these files
> are stori