n TTL)
>
>
> On Mon, Jun 30, 2014 at 8:43 AM, Jason Tang wrote:
>
>> Our application will use Cassandra to persist asynchronous tasks, so in
>> one time period, lots of records will be created in Cassandra (more than
>> 10M). Later they will be executed.
>>
>
Our application will use Cassandra to persist asynchronous tasks, so in one
time period, lots of records will be created in Cassandra (more than 10M).
Later they will be executed.
Due to disk space limitations, the executed records will be deleted.
After gc_grace_seconds, they are expected to be
What is the configuration of the following parameters?
memtable_flush_queue_size:
concurrent_compactors:
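As an aside on the task lifecycle described above, here is a minimal CQL
sketch of such a table; the table, column names and values are hypothetical
and are only meant to illustrate where gc_grace_seconds comes into play.

   -- Hypothetical task table.
   CREATE TABLE tasks (
       task_id uuid PRIMARY KEY,
       status  text,
       request text
   ) WITH gc_grace_seconds = 864000;   -- 10 days, the default

   -- Executed records are deleted to reclaim disk space; the resulting
   -- tombstones can only be purged by compaction after gc_grace_seconds.
   DELETE FROM tasks WHERE task_id = 62c36092-82a1-3a00-93d1-46196ee77204;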
2013/10/30 Piavlo
> Hi,
>
> Below I try to give a full picture to the problem I'm facing.
>
> This is a 12 node cluster, running on ec2 with m2.xlarge instances (17G
> ram , 2 cpus).
> Cassandra versi
y
configuration.
And when I changed GC grace seconds to 10 days, our problem was solved, but
it is still strange behavior when using the index query.
2013/10/8 Jason Tang
> I have a 3-node cluster, and the replication_factor is 3 also. The
> consistency level is write QUORUM, read QUORUM.
> Traf
sues.apache.org/jira/browse/CASSANDRA
>
> Thanks
>
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 2/07/2012, at 1:49 AM, Jason Tang wrote:
>
> For the create/update/deleteColumn/deleteRow test ca
I have a 3-node cluster, and the replication_factor is 3 also. The
consistency level is write QUORUM, read QUORUM.
Traffic has three major steps (a minimal CQL sketch follows below):
Create:
Rowkey:
Column: status=new, requests="x"
Update:
Rowkey:
Column: status=executing, requests="x"
Delete:
Rowkey:
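For readers more used to CQL, here is a minimal sketch of those three steps.
The schema, table and column names are hypothetical; the original application
appears to have used the Thrift API (Hector), so this is only an
approximation.

   -- Hypothetical CQL schema for the workflow above.
   CREATE TABLE requests (
       rowkey   text PRIMARY KEY,
       status   text,
       requests text
   );

   -- Create
   INSERT INTO requests (rowkey, status, requests) VALUES ('001', 'new', 'x');
   -- Update
   UPDATE requests SET status = 'executing' WHERE rowkey = '001';
   -- Delete
   DELETE FROM requests WHERE rowkey = '001';

The QUORUM consistency level itself is set on the client connection (or with
the CONSISTENCY command in cqlsh), not in the statements themselves.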
The following case may be logically correct for Cassandra, but it is
difficult for the user.
Let's say:
Cassandra consistency level: write ALL, read ONE
replication_factor: 3
For one record, rowkey: 001, column: status
Client 1 inserts a value for rowkey 001, status: True, timestamp 11:00:05
Client 2 runs a Slice Query, get
Hi
We are considering using Cassandra in a virtualization environment. I
wonder whether Cassandra uses unicast/broadcast/multicast for node discovery
or communication?
From the code, I find the broadcast address is used for the heartbeat in
Gossiper.java, but I don't know how it actually works when nod
8/18/Sorting-Lists-For-Humans/
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 4/03/2013, at 4:30 PM, Jason Tang wrote:
>
> Hi
>
> The timestamp provided by my c
ra lets you use that: you can provide your own timestamp (using the unix
> timestamp is just the default). The point being, the unix timestamp is the
> better approximation we have in practice.
>
> --
> Sylvain
>
>
> On Mon, Mar 4, 2013 at 9:26 AM, Jason Tang wrote:
>
>> Hi
>>
hat latter case,
> it's a convenience and you can force a timestamp client side if you really
> wish. In other words, Cassandra's dependency on time synchronization is not
> a strong one even in that case. But again, that doesn't seem at all to be
> the problem you are trying to solve.
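To make the "force a timestamp client side" point concrete, a minimal CQL
sketch (table and values are hypothetical):

   -- The value after USING TIMESTAMP is the write timestamp, by convention
   -- microseconds since the epoch.
   INSERT INTO requests (rowkey, status) VALUES ('001', 'new')
   USING TIMESTAMP 1362385230000000;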
equests for 1 second before reading up to the moment that
> the request was received.
>
> In either of these approaches you can tune the time offset based on how
> closely synchronized you believe you can keep your clocks. The tradeoff, of
> course, will be increased latency.
>
>
setMaxCompactionThreshold(0)
setMinCompactionThreshold(0)
2012/7/27 Илья Шипицин
> Hello!
>
> if we are dealing with an append-only data model, what if I disable
> compaction on a certain CF?
> any side effects?
>
> can I do it with
>
> "update column family with compaction_strategy = null "
Hi
For a consistency problem, we cannot use a direct delete to remove one
row, so we use a TTL on each column of the row instead (a minimal sketch
below).
We are using Cassandra as the central storage of a stateful system.
All requests will be stored in Cassandra and marked with status NEW, and
then we change it
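A minimal CQL sketch of the TTL approach, assuming a hypothetical requests
table (the original system used its own schema):

   -- Every write carries a TTL, so the columns expire on their own instead
   -- of being removed with an explicit delete.
   INSERT INTO requests (rowkey, status, requests)
   VALUES ('001', 'NEW', 'x')
   USING TTL 86400;                    -- expire after 24 hours

   UPDATE requests USING TTL 86400
   SET status = 'EXECUTING'
   WHERE rowkey = '001';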
g of
> QUORUM.
>
> From: Jason Tang [mailto:ares.t...@gmail.com]
> Sent: Tuesday, July 17, 2012 8:24 PM
> To: user@cassandra.apache.org
> Subject: Re: Replication factor - Consistency Questions
>
> Hi
>
> I a
Hi
I have not been using Cassandra for very long, and I also have problems
with consistency.
Here is some thinking.
If you have Write: ANY / Read: ONE, you will have consistency problems, and
if you want to repair, check your schema and check the parameter "Read
repair chance:"
http://wiki.apache.o
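For illustration, a minimal sketch of raising that chance in CQL (keyspace
and table names are hypothetical; on very old releases the property may have
to be changed through cassandra-cli instead):

   -- read_repair_chance is a per-table property in this era of Cassandra
   -- (it was removed in 4.0); 1.0 means always attempt read repair.
   ALTER TABLE mykeyspace.requests WITH read_repair_chance = 1.0;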
Hi
I have a 4-node Cassandra cluster, the replication factor is 3, and the
write consistency level is ALL, so each write is supposed to go to at least
3 nodes, right?
I checked the schema and found the parameter "Replicate on write: false".
What does this parameter mean?
How does it impact the writ
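For reference, a hypothetical sketch of setting that property back in CQL
(names are invented; replicate_on_write mainly matters for counter column
families and was removed in later releases):

   -- Re-enable replicate on write for a counter column family.
   ALTER TABLE mykeyspace.counters WITH replicate_on_write = true;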
Worker.run() @bci=28, line=908
(Compiled frame)
- java.lang.Thread.run() @bci=11, line=662 (Interpreted frame)
BRs
//Jason
2012/7/11 Jason Tang
> Hi
>
> I encounter the High CPU problem, Cassandra 1.0.3, happened on both
> sized and leveled compaction, 6G heap, 64bit Oracle java
Hi
I encountered a high CPU problem: Cassandra 1.0.3, happening with both
size-tiered and leveled compaction, 6G heap, 64-bit Oracle Java. For normal
traffic, Cassandra uses about 15% CPU.
But every half an hour, Cassandra uses almost 100% of the total CPU (SUSE,
12 cores).
And here is the top inform
ng" the
> row?
>
> On Thu, Jun 28, 2012 at 7:24 AM, Jason Tang wrote:
> > Hi
> >
> >First I delete one column, then I delete one row. Then try to read all
> > columns from the same row, all operations from same client app.
> >
> >The consist
Hi
First I delete one column, then I delete one row. Then I try to read all
columns from the same row; all operations are from the same client app.
The consistency level is read/write QUORUM.
Checking the Cassandra log, the local node doesn't perform the delete
operation but sends the mutation to other
888639933581556,
33323130537570657254616e6730) (b20ac6ec0d29393d70e200027c094d13 vs
d41d8cd98f00b204e9800998ecf8427e)
2012/6/25 Jason Tang
> Hi
>
> I met the consistency problem when we have Quorum for both read and
> write.
>
> I use MultigetSubSliceQuery to query rows from super
Hi
I met a consistency problem when we have QUORUM for both read and
write.
I use MultigetSubSliceQuery to query rows from a super column, limit size
100, then read them, then delete them, and start another round.
But I found that a row which should have been deleted by the last query
still sh
ur test with a clean (no files on disk) database ?
>
>
> Cheers
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 18/06/2012, at 12:36 AM, Jason Tang wrote:
>
> Hi
>
>After I change log level to DEBU
5000:
7fff0137f6340e396cfdc9fa:true:4@133986545195
BRs
//Ares
2012/6/17 Jason Tang
> Hi
>
>After running load testing for 24 hours(insert, update and delete), now
> no new traffic to Cassandra, but Cassnadra shows still have high load(CPU
> usage), from the system.log
Hi
After running load testing for 24 hours (insert, update and delete), there
is now no new traffic to Cassandra, but Cassandra still shows a high load
(CPU usage). From the system.log, it shows that it is always performing GC.
I don't know why it works like that; memory does not seem to be low.
Here is some configuration
>
> Cheers
>
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 12/06/2012, at 5:52 PM, Jason Tang wrote:
>
> Hi
>
> I found some information of this issue
> And seems we can have other strategy
//comments.gmane.org/gmane.comp.db.cassandra.user/7390
2012/6/12 Jason Tang
> See my post: I limit the JVM heap to 6G, but actually Cassandra will use
> more memory, which is not counted in the JVM heap.
>
> I use top to monitor total memory used by Cassandra.
>
> =
> -Xms
ole as it will give you much more detail
> on what your VM is actually doing.
>
>
>
> On Mon, Jun 11, 2012 at 9:14 PM, Jason Tang wrote:
>
>> Hi
>>
>> We have some problem with Cassandra memory usage, we configure the JVM
>> HEAP 6G, but after runin
Hi
We have a problem with Cassandra memory usage. We configure the JVM heap to
6G, but after running Cassandra for several hours (insert, update, delete),
the total memory used by Cassandra goes up to 15G, which causes the OS to
run low on memory.
So I wonder if it is normal to have so much memory used by cass
Hi
My system is a 4-node 64-bit Cassandra cluster, 6G heap per node, default
configuration (which means 1/3 of the heap for memtables), replication
factor 3, write ALL, read ONE.
When I run stress load testing, I got this TimedOutException, some
operations failed, and all traffic hung for a while.
And when
I try to search one column; this column stores the time as the type Long,
with 1,000,000 records equally distributed across 24 hours. I only want to
search a certain time range, e.g. from 01:30 to 01:50 or 08:00 to 12:00, but
something strange happened.
Search 00:00 to 23:59 limit 100:
It took less than 1 second sc
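For context, a hypothetical CQL sketch of the kind of query being described
(schema, names and timestamps are invented; in this era a secondary-index
query needs an equality clause on an indexed column, and newer versions also
require ALLOW FILTERING for the extra range clause):

   SELECT * FROM requests
   WHERE status = 'NEW'                -- indexed, equality clause
     AND event_time > 1335317400000    -- e.g. 01:30 as a Long timestamp
     AND event_time < 1335318600000    -- e.g. 01:50
   LIMIT 100
   ALLOW FILTERING;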
ope that helps.
>
> 2012/4/25 Jason Tang
>
>> And I found, if I only have the search condition "status", it only scans
>> 200 records.
>>
>> But if I combine another condition "partition", then it scans all records
>> because "partitio
", even though all "userName" values are the
same in the 1,000,000 records, it only scans 200 records.
So it is affected by the scan execution plan. If we have several search
conditions, how does it work? Do we have a similar execution plan in
Cassandra?
On 25 April 2012 at 9:18 PM, Jason Tang wrote:
> Hi
>
>
Hi
We have such a CF, and use a secondary index to search on a simple field
"status"; among 1,000,000 row records, we have 200 records with the status
we want.
But when we start to search, the performance is very poor, and checking with
the command "./bin/nodetool -h localhost -p 8199 cfstats",
Hi
Here is the case: if we have only two nodes, which share the data (write
one, read one):
           node One          node Two
  |        stopped           continues working and updates the data
  |        stopped           stopped
  v        starts working