Hello all:
Does Cassandra use swap? Why is the physical memory 1200932k free, but
31472k of swap used? My Cassandra server is 32-bit and
DiskAccessMode is standard.
top - 09:58:58 up 17 days, 17:54, 10 users, load average: 0.05, 0.08, 0.08
Tasks: 76 total, 2 running, 74 sleeping
Hello all:
Cassandra uses an MD5 hash, and we can get the token range, such as:
Address       Status   Load       Range                                      Ring
                                  142717619497548302610133488236729667990
10.63.59.71   Up       140.93 KB  38183556207930269037140083379529366016    |<--|
10.63.59.72   Up       133
If my application is running in production and I change the structure of my
data (e.g. the type of the column names),
I will need to reprocess all my stored data.
As one option, I could create a new column family and import the legacy data.
I think this is a typical task, so a tool for doing it should exist, but I
can't find one.
I think I will follow the advice of better balancing and I will split
the index into several pieces. Thanks everybody for your input!
Hi, I'm running a test node with 0.8, and every time I try to do a major
compaction on one of the column families this message pops up. I have
plenty of space on disk for it, and the sum of all the sstables is
smaller than the free capacity. Is there any way to force the
compaction?
This is a bug in the 0.8.0 release version.
Cassandra splits the sstables into buckets by size and tries to find (by
default) at least 4 files of similar size.
If it cannot find 4 files of similar size, it logs that message in 0.8.0.
You can try to reduce the minimum number of files required for compaction,
and it will compact.
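To illustrate, here is a minimal sketch of that bucketing idea (assumed
behavior with made-up sizes, not Cassandra's actual implementation):

import java.util.ArrayList;
import java.util.List;

// Sketch of size-tiered bucketing: group sstables whose sizes fall within
// ~50% of a bucket's running average, then compact only buckets holding at
// least min_compaction_threshold files.
public class BucketingSketch {
    public static void main(String[] args) {
        long[] sstableSizes = {100, 110, 90, 4000, 4200, 3900, 4100};
        int minThreshold = 4; // the default mentioned above

        List<List<Long>> buckets = new ArrayList<List<Long>>();
        for (long size : sstableSizes) {
            boolean placed = false;
            for (List<Long> bucket : buckets) {
                long sum = 0;
                for (long s : bucket) sum += s;
                double avg = (double) sum / bucket.size();
                if (size > avg * 0.5 && size < avg * 1.5) { // "similar size"
                    bucket.add(size);
                    placed = true;
                    break;
                }
            }
            if (!placed) {
                List<Long> fresh = new ArrayList<Long>();
                fresh.add(size);
                buckets.add(fresh);
            }
        }

        for (List<Long> bucket : buckets) {
            System.out.println(bucket + " -> "
                    + (bucket.size() >= minThreshold ? "compact" : "skip, too few files"));
        }
    }
}

Only the bucket that gathers 4 similar-sized files would compact; a column
family whose sstables are spread thin across size buckets logs the message
instead.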
But decreasing min_compaction_threshold will affect minor
compaction frequency, won't it?
maki
2011/6/10 Terje Marthinussen :
> This is a bug in the 0.8.0 release version.
> Cassandra splits the sstables into buckets by size and tries to find (by
> default) at least 4 files of similar size.
> If it cannot find 4 files of similar size, it logs that message in 0.8.0.
Hi Terje,
There are 12 SSTables, so I don't think that's the problem. I will try
anyway and see what happens.
On Fri, 2011-06-10 at 20:21 +0900, Terje Marthinussen wrote:
> This is a bug in the 0.8.0 release version.
>
> Cassandra splits the sstables into buckets by size and tries to find (by
> default) at least 4 files of similar size.
>
> If it cannot find 4 files of similar size, it logs that message
On Jun 9, 2011, at 10:04 PM, aaron morton wrote:
> I may be missing something, but could you use a column for each of the last 48
> hours, all in the same row for a url?
>
> e.g.
> {
>   "/url.com/hourly" : {
>     "20110609T01:00:00" : 456,
>     "20110609T02:00:00" : 4
You should always disable swap on server machines.
On Fri, Jun 10, 2011 at 2:28 AM, Donna Li wrote:
> Hello all:
>
> Does Cassandra use swap? Why is the physical memory 1200932k free, but
> 31472k of swap used? My Cassandra server is 32-bit and DiskAccessMode
> is standard.
That would be correct for SimpleStrategy.
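For reference, RandomPartitioner places keys on the ring by the MD5 hash of
the row key; roughly like this sketch (the idea, not the exact Cassandra
code path):

import java.math.BigInteger;
import java.security.MessageDigest;

// Sketch: map a row key to a token in [0, 2^127] via MD5, which is what
// determines a key's position on the ring under RandomPartitioner.
public class Md5Token {
    public static BigInteger token(byte[] key) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(key);
        return new BigInteger(digest).abs();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(token("some-row-key".getBytes("UTF-8")));
    }
}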
On Fri, Jun 10, 2011 at 3:09 AM, Donna Li wrote:
> Hello all:
>
> Cassandra uses an MD5 hash, and we can get the token range, such as:
>
> Address       Status   Load       Range                                      Ring
>                                   142717619497548302610133488236729667990
-----Original Message-----
From: kundera-disc...@googlegroups.com
[mailto:kundera-disc...@googlegroups.com] On Behalf Of Vivek Mishra
Sent: Friday, June 10, 2011 7:09 PM
To: kundera-disc...@googlegroups.com
Subject: RE: Kundera for Cassandra 0.7.0 and 0.8.0
Hi,
Kundera code base and tests have
12 sounds perfectly fine in this case.
4 buckets with 3 sstables in each, while the default minimum threshold per
bucket is 4, so no bucket reaches the threshold.
Terje
2011/6/10 Héctor Izquierdo Seliva
>
> On Fri, 2011-06-10 at 20:21 +0900, Terje Marthinussen wrote:
> > This is a bug in the 0.8.0 release version.
> >
> > Cassandra splits the s
Yes, which is perfectly fine for a short time if all you want is to compact
to one file for some reason.
I run min_compaction_threshold = 2 on one system here with SSDs. No problems
with the more aggressive disk utilization on the SSDs from the extra
compactions; reducing disk space is much more important.
On Fri, 2011-06-10 at 23:40 +0900, Terje Marthinussen wrote:
> Yes, which is perfectly fine for a short time if all you want is to
> compact to one file for some reason.
>
> I run min_compaction_threshold = 2 on one system here with SSDs. No
> problems with the more aggressive disk utiliza
The O'Reilly book on Cassandra says this about READ consistency level ALL:
"Query all nodes. Wait for all nodes to respond, and return to the
client the record with the most recent timestamp. Then, if necessary,
perform a read repair in the background. If any nodes fail to respond,
fail the read operation."
I can't find anything that gives an overview of their purpose/benefits/etc.,
only how to code them. I can only guess that they are more efficient
for some reason, but I don't know exactly why, or under exactly what
conditions I would choose to use them over a regular column.
Thanks!
On Fri, Jun 10, 2011 at 1:09 PM, AJ wrote:
> It says "all nodes". Shouldn't it say "replication_factor nodes"?
My preferred phrasing would be "all replicas."
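For example, with replication_factor = 3 in a 10-node cluster, a read at
CL=ALL waits on the 3 replicas that hold the key, not on all 10 nodes.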
> I can understand this if the given row already exists from a previous write
> and one of the nodes that contains a replica is down. Bu
Hi AJ.
Counters are really cool for certain things.
The main benefit (from a high-level perspective) is that you don't have to read
the record in to find the old value (or stick a lock on the record to prevent
it from changing underneath you).
What I use them for is to increment page-views
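A hypothetical sketch of the difference (the KVClient interface below is
purely illustrative, not a real client API):

// Illustrative interface, not a real API: contrasts read-modify-write
// against a server-side counter increment.
interface KVClient {
    long read(String row, String column);
    void write(String row, String column, long value);
    void increment(String row, String column, long delta); // counter path
}

class PageViews {
    // Without counters: read the old value first (and you'd need some
    // locking scheme, or concurrent updaters clobber each other).
    static void bumpWithoutCounters(KVClient c, String url) {
        long old = c.read(url, "views");
        c.write(url, "views", old + 1);
    }

    // With counters: the server applies the delta itself; no read, no
    // lock, safe under concurrency.
    static void bumpWithCounters(KVClient c, String url) {
        c.increment(url, "views", 1);
    }
}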
My Cassandra used to work with no problems.
I was able to connect without issue, but now for some reason it doesn't
work anymore.
[default@unknown] connect localhost/9160;
Exception connecting to localhost/9160. Reason: Connection refused.
and
root# ./bin/cassandra-cli -host localhost -port 9160
netstat -an | grep 9160
See anything? Maybe the Cassandra service isn't running?
Look for hints in the log files; these are defined in
$CASSANDRA_HOME/conf/log4j-server.properties ...
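If you want to probe the port from code instead, here is a trivial
connectivity check, equivalent to "telnet localhost 9160" (the class name
is made up):

import java.net.InetSocketAddress;
import java.net.Socket;

// Connection refused here means nothing is listening on the Thrift port,
// i.e. the Cassandra process is most likely not running.
public class PortCheck {
    public static void main(String[] args) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress("localhost", 9160), 2000);
            System.out.println("something is listening on 9160");
        } catch (Exception e) {
            System.out.println("no listener on 9160: " + e.getMessage());
        }
    }
}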
On Fri, Jun 10, 2011 at 9:23 PM, Jean-Nicolas Boulay Desjardins <
jnbdzjn...@gmail.com> wrote:
> My Cass
If I have a really short TTL and the column expires before a flush happens,
then if I query that column, will Cassandra
recognize that it has lived past its TTL? Or do I need to filter that out in
application logic?
Thanks
Yang
> I'd check you are reading the data you expect, then wind back the number of
> requests and rows/columns requested. Get to a stable baseline and then add
> pressure to see when/how things go wrong.

I just loaded 4.8GB of similar data in another keyspace and ran the same
process as in my p
Ian,
Have you been able to measure the performance penalty of running at CL=ALL?
Right now I'm spreading updates to such counter columns across workers so
they don't overlap keys, and that way I don't have to go to CL=ALL, but
maybe that's not worth it? Any input?
Thanks
Philippe
2011/6/10 Ian Holsman
These are filtered out server side (see
o.a.c.db.filter.QueryFilter#isRelevant and o.a.c.db.ExpiringColumn for
specifics).
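The gist of the check is just a comparison of the column's expiration time
against the query time; a simplified sketch (not the actual ExpiringColumn
code):

// Simplified sketch: a column whose expiration time has passed is treated
// as dead at read time, even if it still physically sits in a memtable or
// sstable awaiting compaction.
public class ExpirationCheck {
    static boolean isLive(int localExpirationSeconds) {
        int nowSeconds = (int) (System.currentTimeMillis() / 1000);
        return nowSeconds < localExpirationSeconds;
    }

    public static void main(String[] args) {
        int expiresAt = (int) (System.currentTimeMillis() / 1000) + 5; // 5s TTL
        System.out.println(isLive(expiresAt)); // true until the TTL elapses
    }
}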
On Fri, Jun 10, 2011 at 5:08 PM, Yang wrote:
> If I have a really short TTL and the column expires before a flush happens,
> then if I query that column, will Cassandra
>
thanks Nate
On Fri, Jun 10, 2011 at 3:54 PM, Nate McCall wrote:
> These are filtered out server side (see
> o.a.c.db.filter.QueryFilter#isRelevant and o.a.c.db.ExpiringColumn for
> specifics).
>
> On Fri, Jun 10, 2011 at 5:08 PM, Yang wrote:
> > If I have a really short TTL and the column expi
Problem:
I am attempting to compare a SuperColumn family data model with a normal
column family with secondary indexes. I did not have insert issues with the
SuperColumn family. The problem I am having seems to be inserting into the
column family with indexes. It seems to be very slow and getti
I'm using thrift.CassandraServer directly within the same Cassandra JVM to
accomplish my application tasks.
(I understand that this is not the normal usage mode, but the error here
may also appear in Cassandra server code development, so I thought it could
be of some value to look at.)
I
Please take a look at this thread over in the hector-users mailing list:
http://groups.google.com/group/hector-users/browse_thread/thread/99835159b9ea1766
It looks as if the deleted columns are coming back to life when they shouldn't
be.
I don't want to open a bug on something if it's already g
Don't use destructive operations on the ByteBuffer; always use, e.g.,
getLong(buffer.position()).
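For example (plain java.nio, nothing Cassandra-specific):

import java.nio.ByteBuffer;

// Relative reads advance the buffer's position (dangerous when the buffer
// is shared); absolute reads leave the buffer's state untouched.
public class ByteBufferReads {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(8);
        buf.putLong(42L);
        buf.flip();

        // Absolute read: no side effect on position.
        long safe = buf.getLong(buf.position());

        // Relative read: moves position forward by 8 bytes; anyone reading
        // this buffer afterwards sees it already consumed.
        long destructive = buf.getLong();

        System.out.println(safe + " " + destructive); // prints "42 42"
    }
}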
On Fri, Jun 10, 2011 at 8:47 PM, Yang wrote:
> I'm using thrift.CassandraServer directly within the same cassandra JVM to
> accomplish my application tasks.
> (I understand that this is not the normal usa
All,
I was wondering if there are Cassandra python clients and which one would be
the best to use
Thanks a lot,
Carlos
pycassa:
http://pycassa.github.com/pycassa/
On Sat, Jun 11, 2011 at 4:58 AM, Carlos Sanchez wrote:
> All,
>
> I was wondering if there are Cassandra python clients and which one would
> be the best to use
>
> Thanks a lot,
>
> Carlos
>
Hi, all:
When the disk is full, why must ddb be rebooted even after I clear the
disk?
Best Regards
Donna Li
I would take a look at pycassa - https://github.com/pycassa/pycassa - though
there is also a Twisted client named Telephus -
http://github.com/driftx/Telephus.
The complete list of current client language options is found here:
http://wiki.apache.org/cassandra/ClientOptions
On Jun 10, 2011, at