Hi!
I have a 16 node cluster with two keyspaces and several column families
within each, all with RF=2.
Reads/writes work on all column families except one, which gives me an
UnavailableException, even with CL.ANY consistency. nodetool ring
shows that three of the nodes of t
I have a cluster with three nodes, version 0.7.0 RC2. Each node has dual NICs,
eth0 to the internet and eth1 to a private network (192.168.1.xxx). The outside
NIC on each node is firewalled using iptables; only port 22 is allowed through.
My cassandra.yaml configuration file refers only to the
The first thing you are going to need is some log messages, from the machine
the client was connected to when it returned the UnavailableException.
At CL ANY, even Hinted Handoff counts towards meeting the CL for a write
(http://wiki.apache.org/cassandra/HintedHandoff). So as long as the client can
c
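To illustrate, a minimal sketch of a write at CL.ANY against the 0.7 thrift
API (the client setup and the key/CF/column names here are assumptions, not
from this thread):

    // assumes a connected org.apache.cassandra.thrift.Cassandra.Client
    // with set_keyspace() already called
    Column col = new Column(
        ByteBuffer.wrap("name".getBytes()),
        ByteBuffer.wrap("value".getBytes()),
        System.currentTimeMillis() * 1000); // microsecond timestamp
    // ANY succeeds as long as any node - or a hint - can take the write
    client.insert(ByteBuffer.wrap("rowkey".getBytes()),
        new ColumnParent("MyCF"), col, ConsistencyLevel.ANY);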
Okay. Got that. Thanks.
Bringing back just one of the nodes solved it.
What I was keen on knowing was whether there exists a way to know which keys
reside on which node. Like a 'nodetool column.path' that prints the node list
the column path resides on :). I have HH disabled for some other benevolent
reason.
The code for nodetool appears to just pass the host value through to the
NodeProbe. Was there anything else in the stack trace?
If you use the host name of the machine rather than the IP, what happens?
(See the example below.)
cassandra-env.sh includes a link to this page about getting JMX running with
firewalls
http://bl
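For example, the two invocations to compare would be something like this (the
host names and the default 0.7 JMX port are assumptions):

    nodetool -h 192.168.1.101 -p 8080 ring
    nodetool -h cass-node-1 -p 8080 ring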
On Dec 14, 2010, at 2:29 AM, Brandon Williams wrote:
> On Mon, Dec 13, 2010 at 6:43 PM, Daniel Doubleday wrote:
> Oh, well, but I see that the coordinator is actually using its own score for
> ordering. I was only concerned that dropped messages are ignored when
> calculating latencies but
Peter, Jonathon - thank you for your replies.
I should probably have repeated myself in the body, but as I
mentioned in the subject line, we're running Sun Java 1.6.
On 10 December 2010 18:37, Peter Schuller wrote:
> Memory-mapped files will account for both virtual and, to the extent
> that
On 10 December 2010 18:37, Peter Schuller wrote:
> Memory-mapped files will account for both virtual and, to the extent
> that they are resident in memory, to the resident size of the process.
To clarify - in our storage-conf we have:
<DiskAccessMode>mmap_index_only</DiskAccessMode>
I know it's a matter of degree, but
On Dec 12, 2010, at 17:21, Jonathan Ellis wrote:
> http://www.riptano.com/docs/0.6/troubleshooting/index#nodes-are-dying-with-oom-errors
I can rule out the first 3. I was running cassandra with default settings, i.e.
1GB heap and 256M memtable. So, with 3 memtables+1GB the JVM should run with
>1.75G.
Pig seems to think my keyspace doesn't exist. I'm connecting to a remote
cassandra instance configured in the environment variables
PIG_RPC_PORT and PIG_INITIAL_ADDRESS
(an IP address)
I get the following backend logged output...
org.apache.pig.backend.executionengine.E
Thanks for your answers. I have checked out the 0.7 branch but am still having
troubles:
__init__() takes at least 3 arguments (2 given)
2010-12-14 14:08:36+0100 [Uninitialized] will retry in 2 seconds
2010-12-14 14:08:36+0100 [Uninitialized] Stopping factory
2010-12-14 14:08:37+0100 [Uninitialized] Un
Hi,
I'm using apache-cassandra-0.7.0-rc1 with Java 1.6.0_17.
The node collapsed because of java.lang.OutOfMemoryError: Java heap
space, and now it can't be restarted, because every time it replays
the commit logs it collapses with an OOM error.
I have 4G memory and I tried to set the binary_memtable_throughput_in_mb
to 128M and 64M; it still doesn't work.
This is "A row has grown too large" section from that troubleshooting guide.
On Tue, Dec 14, 2010 at 5:27 AM, Timo Nentwig wrote:
>
> On Dec 12, 2010, at 17:21, Jonathan Ellis wrote:
>
> >
> http://www.riptano.com/docs/0.6/troubleshooting/index#nodes-are-dying-with-oom-errors
>
> I can rule out t
On Tue, Dec 7, 2010 at 20:25, Edward Capriolo wrote:
> I am quite ready to be stoned for this thread but I have been thinking
> about this for a while and I just wanted to bounce these ideas off some
> gurus.
>
> ...
>
> The upsides?
> 1) Since disk/instance failure only degrades the overall perf
On Dec 14, 2010, at 14:41, Jonathan Ellis wrote:
> This is "A row has grown too large" section from that troubleshooting guide.
Why? This is what a typical "row" (?) looks like:
[default@test] list tracking limit 1;
---
RowKey: 123
=> (column=key, value=foo, timestamp=129223800
On Tue, 2010-12-14 at 11:06 +, Jedd Rashbrooke wrote:
> JNA is something I'd read briefly about a while back, but now
> it might be something I need to explore further. We're using
> Cassandra 0.6.6, and our Ubuntu version offers a packaged
> release of libjna 3.2.3-1 .. rumours on the Int
Hi, has anyone noticed that the documentation for the Cassandra class is gone
from the website?
http://blog.evanweaver.com/2010/12/06/cassandra-0-8/
I was wondering if there's a way for me to count how many rows exist inside a
Column Family, and a way to erase the contents of that Column Family?
On Dec 14, 2010, at 15:31, Timo Nentwig wrote:
> On Dec 14, 2010, at 14:41, Jonathan Ellis wrote:
>
>> This is "A row has grown too large" section from that troubleshooting guide.
>
> Why? This is what a typical "row" (?) looks like:
>
> [default@test] list tracking limit 1;
> -
Thanks.
It is very helpful.
I think I'd like to write to the same column.
Would you please give me more details about your last sentence? For example,
why can't I use a locking mechanism inside of Cassandra?
Thanks.
Alvin
2010/12/13 Aaron Morton
> Your example is a little unclear.
>
> If yo
On Tue, Dec 14, 2010 at 1:08 AM, Rajat Chopra wrote:
> What I was keen on knowing was if there exists a way to know which keys
> reside on which node. Like a ‘nodetool column.path’ and it prints the
> nodelist that the column path resides on :). I have HH disabled for some
> other benevolent reason
> I have 4G memory and I tried to set the binary_memtable_throughput_in_mb to
> 128M and 64M; it still doesn't work.
> I do not have big rows, no row cache, and use the default consistency level...
> Any ideas?
Assuming you truly don't have large rows: what are your memtable
thresholds set to? Make sure it
On Tue, Dec 14, 2010 at 8:52 AM, Gary Dusbabek wrote:
> On Tue, Dec 7, 2010 at 20:25, Edward Capriolo wrote:
>> I am quite ready to be stoned for this thread but I have been thinking
>> about this for a while and I just wanted to bounce these ideas off some
>> gurus.
>>
>> ...
>>
>> The upsides?
Timo,
Apologies if I missed it above, but how big is the batch size you are
sending to batch_mutate?
On Tue, Dec 14, 2010 at 9:15 AM, Timo Nentwig wrote:
> On Dec 14, 2010, at 15:31, Timo Nentwig wrote:
>
>> On Dec 14, 2010, at 14:41, Jonathan Ellis wrote:
>>
>>> This is "A row has grown too larg
Could you give more details on what you're trying to do? This sounds like a
case where a UUID will give you what you need without needing to lock.
- Tyler
On Tue, Dec 14, 2010 at 10:24 AM, Alvin UW wrote:
> Thanks.
> It is very helpful.
>
> I think I'd like to write to the same column.
>
> Wou
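As a rough illustration of the UUID approach Tyler mentions (the names here
are examples, not from the thread): each writer generates its own identifier,
so concurrent writers never collide and no lock is needed:

    import java.util.UUID;

    public class UniqueName {
        public static void main(String[] args) {
            // every writer gets a distinct name without any coordination
            String columnName = UUID.randomUUID().toString();
            System.out.println(columnName);
        }
    }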
>> Memory-mapped files will account for both virtual and, to the extent
>> that they are resident in memory, to the resident size of the process.
>
> This bears further investigation. Would you consider a 3GB overhead
> on a 4GB heap a possibility? (From a position of some naivety, this
> seem
> I can rule out the first 3. I was running cassandra with default settings,
> i.e. 1GB heap and 256M memtable. So, with 3 memtables+1GB the JVM should run
> with >1.75G (although http://wiki.apache.org/cassandra/MemtableThresholds
> suggests increasing heap size only gently). Did so. 4GB mach
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:39)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
> at
> org.apache.cassandra.utils.FBUtilities.readByteArray(FBUtilities.java:261)
> at
> org.apache.cassandra.db.C
On Dec 14, 2010, at 18:45, Nate McCall wrote:
> Timo,
> Apologies if I missed it above, but how big is the batch size you are
> sending to batch_mutate?
Actually only a single row at a time for now (hector 0.7-21):
final Cluster cluster = HFactory.getOrCreateCluster("Test",
cassandraHo
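For context, a single-row insert like that typically looks something like the
following with hector 0.7 (a minimal sketch; the host, keyspace, CF, and
column names are assumptions, not taken from the snippet above):

    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.cassandra.service.CassandraHostConfigurator;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.mutation.Mutator;

    public class SingleRowInsert {
        public static void main(String[] args) {
            Cluster cluster = HFactory.getOrCreateCluster("Test",
                new CassandraHostConfigurator("localhost:9160"));
            Keyspace ksp = HFactory.createKeyspace("Keyspace1", cluster);
            Mutator<String> m = HFactory.createMutator(ksp, StringSerializer.get());
            // one row, one column - mirrors the single-row batch described above
            m.insert("rowkey", "MyCF", HFactory.createStringColumn("name", "value"));
        }
    }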
On Dec 14, 2010, at 19:45, Peter Schuller wrote:
>> java.lang.OutOfMemoryError: Java heap space
>>at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:39)
>>at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
>>at
>> org.apache.cassandra.utils.FBUtilities.readByteArray(FBUtil
On Dec 14, 2010, at 19:38, Peter Schuller wrote:
> For debugging purposes you may want to switch Cassandra to "standard"
> IO mode instead of mmap. This will have a performance-penalty, but the
> virtual/resident sizes won't be polluted with mmap():ed data.
Already did so. It *seems* to run more
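For anyone following along: in 0.7 that switch is the disk_access_mode setting
in cassandra.yaml, for example:

    disk_access_mode: standard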
Hi
I'm sorry - don't want to be a pain in the neck with source questions. So
please just ignore me if this is stupid:
Isn't org.apache.cassandra.service.ReadResponseResolver supposed to throw a
DigestMismatchException if it receives a digest which does not match the digest
of a read message?
If
>> The stack trace doesn't make sense relative to what I get checking out
>> 0.6.6. Are you *sure* this is 0.6.6, without patches or other changes?
>
> Oh, sorry, the original poster of this thread was/is actually using 0.6, I am
> (as mentioned in other posts) actually on 0.7rc2. Sorry that I did
> I just uncommented the GC JVMOPTS from the shipped cassandra start script and
> use Sun JVM 1.6.0_23. Hmm, but these "GC tuning options" are also
> uncommented. I'll comment them again and try again.
Maybe I was just too quick trying to mentally parse it, given the
jumbled line endings. You
>> For debugging purposes you may want to switch Cassandra to "standard"
>> IO mode instead of mmap. This will have a performance-penalty, but the
>> virtual/resident sizes won't be polluted with mmap():ed data.
>
> Already did so. It *seems* to run more stably, but it's still far off from
> being
> I posted mostly as a heads up for others using similar profiles (4GB
> heap on ~8GB boxes) to keep an eye out for. I expect a few people,
> particularly if they're on Amazon EC2, are running this type of setup.
>
> On the other hand, mum always said I was unique. ;)
So, now that I get that
Correct. https://issues.apache.org/jira/browse/CASSANDRA-1830 is open to
fix that. If you'd like to review the patch there, that would be very
helpful. :)
On Tue, Dec 14, 2010 at 1:55 PM, Daniel Doubleday wrote:
> Hi
>
> I'm sorry - don't want to be a pain in the neck with source questions. So
> We are entirely IO-bound. What killed us last week were too many reads
> combined with flushes and compactions. Reducing compaction priority helped
> but it was not enough. The main problem why we could not add nodes, though,
> had to do with the quorum reads we are doing:
I'm going to respond
On Tue, Dec 14, 2010 at 7:11 AM, Amin Sakka, Novapost <amin.sa...@novapost.fr> wrote:
>
> Thanks for your answers. I have checked out the 0.7 branch but am still
> having troubles:
> __init__() takes at least 3 arguments (2 given)
Are you using the 0.7 branch of telephus too?
-Brandon
Hi, I'm using Cassandra 0.6.8 and Fauna. I'm running a batch to populate my db,
and for some reason every time it gets to 100 records it stops, with no error
report or anything. Apparently it keeps storing, but every time I count the
number of records it stays at a hundred. It is updating two Column Fam
> Hi, I'm using Cassandra 0.6.8 and Fauna. I'm running a batch to populate my db,
> and for some reason every time it gets to 100 records it stops, with no error
> report or anything. Apparently it keeps storing, but every time I count the
> number of records it stays at a hundred. It is updating two Co
Makes perfect sense, thanks. How can I set the count limit for a specific
Column Family?
On Dec 14, 2010, at 3:47 PM, Peter Schuller wrote:
>> Hi, I'm using Cassandra 0.6.8 and Fauna. I'm running a batch to populate my
>> db, and for some reason every time it gets to 100 records it stops, with no er
(Btw I said "row count" in my response; that was a poor choice of
words given that "row" has a specific meaning in Cassandra. I meant
column count.)
> Makes perfect sense, thanks. How can I set the count limit for a specific
> Column Family?
Looks like you can pass a :count option to get() (I ju
>> Makes perfect sense, thanks. How can I set the count limit for a specific
>> Column Family?
>
> Looks like you can pass a :count option to get() (I just did a quick
> check, I haven't used the client myself). See cassandra.rb's
> documentation at the top.
... and that seems to apply to count_c
> Memory-mapped files will account for both virtual and, to the extent
> that they are resident in memory, to the resident size of the process.
> However, your graph:
Correcting myself in the interest of providing correct information,
this doesn't seem to be true - at least not always. I don't kno
On Dec 14, 2010, at 21:07, Peter Schuller wrote:
> In that case, based on the strack trace, I wonder if you're hitting
> what I was hitting just yesterday/earlier today:
>
> https://issues.apache.org/jira/browse/CASSANDRA-1860
>
> Which is suspected (currently being tested that it's gone with
> If it helps, I also found quite a few of these in the logs
>
> org.apache.cassandra.db.UnserializableColumnFamilyException: Couldn't find
> cfId=224101
>
> However a single cassandra instance locally (OSX, 1.6.0_22, mmap) runs just
> fine for
> hours. No exceptions, no OOM.
Given that these
There is a truncate() function in the Ruby API; if you require Cassandra 0.7,
this can truncate all the data in a CF. It will call the truncate function on
the thrift API.
I do not know of a precise way to get a count of rows. There is a function to
count the number of columns, see count_columns
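For reference, the underlying call is a one-liner against the 0.7 thrift API
(assuming a connected Cassandra.Client; the CF name is an example):

    client.truncate("MyCF");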
There's an estimateKeys() function exposed via JMX that will give you an
approximate row count for the node. In jconsole this shows up under
o.a.c.db -> ColumnFamilies -> Keyspace -> CF -> Operations.
There's not a "precise" way to count rows other than to do a
get_range_slices() over the entire
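To make that concrete, a rough sketch of counting rows by paging
get_range_slices with the 0.7 thrift API (the CF name, page size, and client
setup are assumptions; the last key of each page is re-fetched as the first
key of the next page, so it must not be double-counted):

    import java.nio.ByteBuffer;
    import java.util.Arrays;
    import java.util.List;
    import org.apache.cassandra.thrift.*;

    public static int countRows(Cassandra.Client client) throws Exception {
        SlicePredicate pred = new SlicePredicate();
        // fetch at most one column per row; we only care about the keys
        pred.setSlice_range(new SliceRange(
            ByteBuffer.allocate(0), ByteBuffer.allocate(0), false, 1));
        KeyRange range = new KeyRange(1000); // page size
        range.setStart_key(new byte[0]);
        range.setEnd_key(new byte[0]);
        int count = 0;
        byte[] lastKey = null;
        while (true) {
            List<KeySlice> page = client.get_range_slices(
                new ColumnParent("MyCF"), pred, range, ConsistencyLevel.ONE);
            for (KeySlice ks : page) {
                // skip the boundary key repeated from the previous page
                if (lastKey != null && Arrays.equals(ks.getKey(), lastKey))
                    continue;
                count++; // note: may include range ghosts (deleted rows)
            }
            if (page.size() < 1000)
                break; // a short page means we reached the end
            lastKey = page.get(page.size() - 1).getKey();
            range.setStart_key(lastKey);
        }
        return count;
    }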
Hello,
I'm using cassandra 0.7.0-rc2. When I tried to get column contents in a
super column of a Super CF like below:
] get myCF['key']['scName'];
the client replies:
supercolumn parameter is not optional for super CF user
It seemed to work in cassandra-0.7.0-beta2, if I remember correctly.
The clu
This is probably because the RMI code that JMX uses to listen detected the
wrong address. To fix this, add the following to each Cassandra node's
startup script:
-Djava.rmi.server.hostname=127.0.0.1
(change 127.0.0.1 to actual internal address of cassandra node)
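In cassandra-env.sh this is typically appended to the JVM_OPTS list, for
example (the address shown is a placeholder):

    JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=192.168.1.10"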