I've noticed that when defining KeysCached="50%" (or KeysCached="100%"; I
didn't test other percentage values), cfstats reports "Key cache capacity: 1".
This looks weird... is this expected? (version 0.6.1)
For example, in the default configuration:
Keyspace: Keyspace1
If you check the other nodes you will probably see that one of them
thinks it is still trying to send to node 3. You will probably need
to restart that node, and then retry the bootstrap from 3.
Alternatively you could force 3 into the ring by restarting w/
autobootstrap off (be sure to set Initi
Hi all,
I'm currently working on translating the Cassandra wiki into Japanese.
Cassandra is gaining attention in Japan, too. :)
I noticed that for those whose browser locale is 'ja', accessing the
top page of the Cassandra wiki (http://wiki.apache.org/cassandra) displays
the Japanese default front page
(http:
hi,
Have you checked the load-balancing of your 20 nodes? I have the similar
experience that 3 nodes' performance is worse
than 2 nodes'. The reason was bad load-balance; after reallocating data, the
performance becomes expected.
regards,
Cao Jiguang
2010-05-24
casablinca126.com
Hi, Jonathan
I am sorry, there was something wrong with the CL I mentioned last time; I
checked the test code again. The version we used is 0.6.0-beta3, and both
the write and read CL are ConsistencyLevel.ONE.
2010/5/24 Jonathan Ellis
> ZERO hasn't been the default CL for a long time. You shoul
ZERO hasn't been the default CL for a long time. You should upgrade
to 0.6.1. (Read NEWS first to see what has changed.)
2010/5/23 史英杰 :
> The replication factor is 3, and the consistency level is default, zero.
>
> On May 24, 2010 at 7:25 AM, Jonathan Shook wrote:
>>
>> It would be helpful to know the rep
The replication factor is 3, and the consistency level is default, zero.
On May 24, 2010 at 7:25 AM, Jonathan Shook wrote:
> It would be helpful to know the replication factor and consistency
> levels of your reads and writes.
>
>
> 2010/5/23 史英杰 :
> > Thanks for your reply!
> > //Were all of those 20 nod
It would be helpful to know the replication factor and consistency
levels of your reads and writes.
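For reference, the consistency level is whatever you pass on each Thrift call;
a minimal 0.6 sketch showing where it goes (everything here, including the
keyspace/CF/key/column names and localhost:9160, is just a placeholder and
untested):

import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.ColumnOrSuperColumn;
import org.apache.cassandra.thrift.ColumnPath;
import org.apache.cassandra.thrift.ConsistencyLevel;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class ClSketch {
    public static void main(String[] args) throws Exception {
        // placeholder host/keyspace/CF: adjust for your cluster
        TTransport tr = new TSocket("localhost", 9160);
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(tr));
        tr.open();

        ColumnPath path = new ColumnPath("Standard1");
        path.setColumn("name".getBytes("UTF-8"));

        // the write CL is the last argument of insert()...
        client.insert("Keyspace1", "key1", path, "value".getBytes("UTF-8"),
                      System.currentTimeMillis(), ConsistencyLevel.ONE);

        // ...and the read CL is the last argument of get()
        ColumnOrSuperColumn result =
            client.get("Keyspace1", "key1", path, ConsistencyLevel.ONE);
        System.out.println(new String(result.getColumn().getValue(), "UTF-8"));

        tr.close();
    }
}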
2010/5/23 史英杰 :
> Thanks for your reply!
> //Were all of those 20 nodes running real hardware (i.e. NOT VMs)?
> Yes, there are 20 real servers running in the cluster, and one Cassandra
> instance
Every system has its limits. When you say to imagine there are
billions of users without providing any other real data, it limits the
discussion strictly to the hypothetical (and hyperbolic, usually).
The only reasonable answer we could provide would be about the types
of limitations we know about
I am planning on setting up a Cassandra cluster on a small 16 node cluster
(possibly 32-way). Each machine has 8 cores, 32 GB of RAM, and 8 HDs. My
first thought is to set up one of those HDs for the commit log, 6 for data,
and leave one for the OS. However I do have a concern about best utilizing
No, the cache does not use soft references, since they pretty much suck for
caching (the javadoc is not always right :).
You're OOMing because you're making requests faster than they can be
satisfied. Increasing the amount of memory available there will just
make it take longer before it OOMs, it won't f
I am disk bound, certainly. I'll try adding more key and row caching, but I
suspect it's a short blanket: if I add more caching I'll have less free
memory, so more chance to OOM again. (Is the cache using soft refs so it
won't take memory from real objects?)
On Sun, May 23, 2010 at 8:15 PM, Jonathan E
On Sun, May 23, 2010 at 10:59 AM, Ran Tavory wrote:
> Is there another solution except adding capacity?
Either you need to get more performance/node or increase node count. :)
> How does the ConcurrentReads (default 8) affect that? If I expect to have
> similar number of reads and writes should
Is there another solution except adding capacity?
How does ConcurrentReads (default 8) affect that? If I expect to have a
similar number of reads and writes, should I set ConcurrentReads equal
to ConcurrentWrites (default 32)?
thanks
On Sun, May 23, 2010 at 5:43 PM, Jonathan Ellis wrote:
Looks like reads are backing up, which in turn is making deserialization back up
On Sun, May 23, 2010 at 4:25 AM, Ran Tavory wrote:
> Here's tpstats on a server with traffic that I think will get OOM shortly.
> We have 4k pending reads and 123k pending at MESSAGE-DESERIALIZER-POOL
> Is there somethin
No, it's really not designed to be a "leave the nodes down while I do
a ton of inserts" mechanism.
(a) HH schema creates a column per hinted row, so you'll hit the 2GB
row limit sooner or later
(b) it goes through the hints hourly in case it missed a gossip Up notification
On Sat, May 22, 2010 at 9:07 PM,
the TTL (expiring columns) feature in 0.7 is the easiest way to do
this. Until then you'd have to delete them manually.
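Roughly, if I'm reading the current 0.7 trunk Thrift API right, it would look
something like this (untested sketch; the keyspace, CF, key, and column names
are made up):

import java.nio.ByteBuffer;
import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.Column;
import org.apache.cassandra.thrift.ColumnParent;
import org.apache.cassandra.thrift.ConsistencyLevel;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class TtlSketch {
    public static void main(String[] args) throws Exception {
        // 0.7 uses the framed transport by default
        TTransport tr = new TFramedTransport(new TSocket("localhost", 9160));
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(tr));
        tr.open();
        client.set_keyspace("Keyspace1");

        Column col = new Column();
        col.setName(ByteBuffer.wrap("session".getBytes("UTF-8")));
        col.setValue(ByteBuffer.wrap("abc123".getBytes("UTF-8")));
        col.setTimestamp(System.currentTimeMillis() * 1000);
        col.setTtl(86400); // the column expires automatically after 24 hours

        client.insert(ByteBuffer.wrap("user42".getBytes("UTF-8")),
                      new ColumnParent("Standard1"),
                      col, ConsistencyLevel.ONE);
        tr.close();
    }
}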
On Sun, May 23, 2010 at 3:35 AM, Yan Virin wrote:
> Hi
> I want to use Cassandra for storing some data which becomes irrelevant with
> time. There will be a lot of data and I want
Thanks for your reply!
//Were all of those 20 nodes running real hardware (i.e. NOT VMs)?
Yes, there are 20 real servers running in the cluster, and one Cassandra
instance runs on each server.
//Did your driver application(s) run on "real" hardware and how many threads
did you use?
The clients ru
On 23 May 2010 13:42, 史英杰 wrote:
> Hi, All
> I am now doing some tests on Cassandra, and I found that both writes and
> reads are faster on 15 nodes than on 20 nodes. How many servers does
> one Cassandra system contain in real applications?
>Thanks a lot !
>
> Yingjie
>
I'd
Hi, All
I am now doing some tests on Cassandra, and I found that both writes and
reads are faster on 15 nodes than on 20 nodes. How many servers does
one Cassandra system contain in real applications?
Thanks a lot !
Yingjie
I am trying to find out if Cassandra will fill my needs.
I have a data model similar to below.
Users = {                  // ColumnFamily
    user1 = {              // Key for Users ColumnFamily
        message1 = {       // Supercolumn
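For concreteness, I imagine a write into that model would look roughly like
this with the 0.6 Thrift API (just a sketch; it assumes Users is declared as a
super column family, and the subcolumn name and value below are invented):

import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.ColumnPath;
import org.apache.cassandra.thrift.ConsistencyLevel;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class UsersSketch {
    public static void main(String[] args) throws Exception {
        TTransport tr = new TSocket("localhost", 9160);
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(tr));
        tr.open();

        // one subcolumn ("body") inside super column "message1" of row "user1"
        ColumnPath path = new ColumnPath("Users");
        path.setSuper_column("message1".getBytes("UTF-8"));
        path.setColumn("body".getBytes("UTF-8"));

        client.insert("Keyspace1", "user1", path,
                      "hello world".getBytes("UTF-8"),
                      System.currentTimeMillis(), ConsistencyLevel.ONE);
        tr.close();
    }
}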
Here's tpstats on a server with traffic that I think will get OOM shortly.
We have 4k pending reads and 123k pending at MESSAGE-DESERIALIZER-POOL
Is there something I can do to prevent that? (other than adding RAM...)
Pool Name                    Active   Pending      Completed
FILEUTILS-DELETE-P
Hello!
I have a 2 node cluster:
[r...@cas2 bin]# sh nodetool -h localhost ring
Address       Status     Load          Range                                      Ring
                                       47311629213338587668692978196312911227
172.19.0.32   Up         80.06 GB      1517934153089153249729554474051162        |<--|
172.19.0.30   Up         169.42