RE: Upgrade Cassandra Cluster to 1.0.3

2011-11-22 Thread Michael Vaknine
Hi Jonathan, You are right, I had one node on 1.0.2 for some reason, so I did the upgrade again. I now have a 4-node cluster upgraded to 1.0.3, but now I get the following error on 2 nodes of the cluster: ERROR [HintedHandoff:3] 2011-11-23 06:39:31,250 AbstractCassandraDaemon.java (line 133) Fatal exception i

Re: 7199

2011-11-22 Thread Maxim Potekhin
Thanks. I'm trying to look up HttpAdaptor and what it does; can you give any pointers? I didn't find much useful info just yet. Maxim On 11/22/2011 9:52 PM, Jeremiah Jordan wrote: Yes, that is the port nodetool needs to access. On Nov 22, 2011, at 8:43 PM, Maxim Potekhin wrote: Hel

Re: 7199

2011-11-22 Thread Jeremiah Jordan
Yes, that is the port nodetool needs to access. On Nov 22, 2011, at 8:43 PM, Maxim Potekhin wrote: > Hello, > > I have this in my cassandra-env.sh > > JMX_PORT="7199" > > Does this mean that if I use nodetool from another node, it will try to > connect to that > particular port? > > Thanks,
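To illustrate the answer above: nodetool talks to a node over JMX/RMI, and the port from JMX_PORT in cassandra-env.sh is the one it must reach. A minimal sketch of the service URL form involved (the host address here is hypothetical):

```java
import javax.management.remote.JMXServiceURL;

public class JmxUrlDemo {
    public static void main(String[] args) throws Exception {
        String host = "10.0.0.1";   // hypothetical node address (nodetool -h)
        int port = 7199;            // must match JMX_PORT in cassandra-env.sh
        // nodetool connects via a JMX/RMI URL of this general form
        JMXServiceURL url = new JMXServiceURL(
            String.format("service:jmx:rmi:///jndi/rmi://%s:%d/jmxrmi", host, port));
        System.out.println(url);
    }
}
```

So yes: running nodetool from another node attempts a JMX connection to the target node on that port, which therefore has to be open between the machines.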

Re: DataCenters each with their own local data source

2011-11-22 Thread Jeremiah Jordan
Oops, I was thinking all in the same keyspace. If you made a new keyspace for each DC you could specify where to put the data and have them only be in one place. -Jeremiah On Nov 22, 2011, at 8:49 PM, Jeremiah Jordan wrote: > Cassandra's Multiple Data Center Support is meant for replicating a

Re: DataCenters each with their own local data source

2011-11-22 Thread Jeremiah Jordan
Cassandra's multiple data center support is meant for replicating all data across multiple datacenters efficiently. You could use the ByteOrderedPartitioner to prefix data with a key and assign those keys to nodes in specific data centers, though the edge nodes would get tricky as those would
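The key-prefixing idea mentioned above can be sketched as follows: with the ByteOrderedPartitioner, rows sort by raw key bytes, so a fixed-width DC prefix clusters each data center's rows into one contiguous token range that can be assigned to that DC's nodes. The names below are illustrative, not from any real schema:

```java
// Sketch, assuming ByteOrderedPartitioner: keys sharing a prefix fall into
// a contiguous token range, which can be owned by nodes in one DC.
public class DcPrefixedKeys {
    static String keyFor(String dcCode, String rowKey) {
        // fixed-width DC code keeps each DC's range contiguous
        return dcCode + ":" + rowKey;
    }

    public static void main(String[] args) {
        System.out.println(keyFor("NYC", "sensor-42"));  // NYC:sensor-42
        System.out.println(keyFor("LON", "sensor-42"));  // LON:sensor-42
    }
}
```

As the thread notes, the tricky part is at the edges: the nodes whose token ranges straddle two prefixes end up holding data for both DCs.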

7199

2011-11-22 Thread Maxim Potekhin
Hello, I have this in my cassandra-env.sh JMX_PORT="7199" Does this mean that if I use nodetool from another node, it will try to connect to that particular port? Thanks, Maxim

RE: DataCenters each with their own local data source

2011-11-22 Thread Mathieu Lalonde
Hi, Thanks for the quick reply.  Sorry if my question was not clear.  I tried to provide more info. > Date: Tue, 22 Nov 2011 20:43:33 -0500 > Subject: Re: DataCenters each with their own local data source > From: md.jahangi...@gmail.com > To: user@cassandra.a

Re: DataCenters each with their own local data source

2011-11-22 Thread Jahangir Mohammed
Distributing writes to all D.C.s, or reads? If each D.C. has data specific to that particular geo, why do you have to read from a remote D.C.? You can easily incorporate logic to redirect an operation (either write or read) to the appropriate (local) D.C. Still wondering why you want to do so. Am assuming

DataCenters each with their own local data source

2011-11-22 Thread Mathieu Lalonde
Hi, I am wondering if Cassandra's features and datacenter awareness can help me with my scalability problems. Suppose that I have 10-20 data centers, each with their own local (massive) source of time series data.  I would like: - to avoid replication across data centers (this seems doable

Re: Issues with JMX monitoring -- v0.8.7

2011-11-22 Thread Nick Bailey
There are quite a few attributes in the org.apache.cassandra.db.StorageServiceMBean that could serve that purpose: Initialized, RPCServerRunning, OperationMode, Joined, and perhaps others. Note that some of those may not exist depending on your version of Cassandra; pick one appropriate for your ve
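A minimal sketch of how a monitoring check might address that MBean. The object name is the standard one Cassandra registers for StorageService; the attribute list is taken from the message above (availability varies by version), and actually reading a value would require a live JMX connection, which is only hinted at in comments:

```java
import javax.management.ObjectName;

public class StorageServiceCheck {
    public static void main(String[] args) throws Exception {
        // MBean name under which Cassandra registers StorageService
        ObjectName name = new ObjectName("org.apache.cassandra.db:type=StorageService");
        // Candidate status attributes from the thread; not all exist in every version
        String[] candidates = {"Initialized", "RPCServerRunning", "OperationMode", "Joined"};
        // Against a live node you would connect with JMXConnectorFactory and call
        // MBeanServerConnection.getAttribute(name, attr) for one of these,
        // handling InstanceNotFoundException if the MBean is not registered yet.
        System.out.println(name.getDomain());  // org.apache.cassandra.db
        for (String attr : candidates) System.out.println(attr);
    }
}
```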

Re: Issues with JMX monitoring -- v0.8.7

2011-11-22 Thread David McNelis
Is there a better bean to look at, one that is created when the server starts up, to ascertain the node's status? On Tue, Nov 22, 2011 at 11:47 AM, Nick Bailey wrote: > The StorageServiceMBean is only created once some reads/writes > actually go to that node. Do a couple reads/writes from the CLI

Re: Compaction -> CPU load 100% -> time out

2011-11-22 Thread Alain RODRIGUEZ
This is already a lot better. While compacting, the CPU load remains quite low. However, I still have some spikes of overload generating timeouts. Are there other tunings I can apply to make this compaction more stable? 2011/11/22 Jonathan Ellis > m1.small is still... small. start by turning >

Re: Issues with JMX monitoring -- v0.8.7

2011-11-22 Thread Nick Bailey
The StorageServiceMBean is only created once some reads/writes actually go to that node. Do a couple reads/writes from the CLI and you should see the MBean afterwards. This also means your monitoring application should handle this error in the case of nodes restarting. On Tue, Nov 22, 2011 at 7:5

Re: Added column does not sort as the last column

2011-11-22 Thread huyle
We don't use a subcomparator. We double-checked the comparator, and it looks fine. There was a typo in my original post; I meant to say upgrade to 1.0.2 from 0.6.13. This issue is now resolved after we upgraded Cassandra to 1.0.3. Thanks! Huy -- View this message in context: http://cassandra-user-incubato

Assertion error during bootstraping cassandra 1.0.2

2011-11-22 Thread Ramesh Natarajan
Hi, I have a 3 node cassandra cluster. I have RF set to 3 and do reads and writes using QUORUM. Here is my initial ring configuration: [root@CAP4-CNode1 ~]# /root/cassandra/bin/nodetool -h localhost ring Address DC Rack Status State Load Owns Token 11342745
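For context on the RF=3/QUORUM setup described above, the quorum size Cassandra requires is floor(RF/2) + 1. A quick worked sketch:

```java
public class QuorumMath {
    // Cassandra's quorum for a replication factor: floor(RF/2) + 1
    static int quorum(int rf) { return rf / 2 + 1; }

    public static void main(String[] args) {
        // With RF=3, QUORUM reads and writes each need 2 replicas, so a
        // 3-node cluster like the one above tolerates one node being down.
        System.out.println(quorum(3));  // 2
    }
}
```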

Re: Compaction -> CPU load 100% -> time out

2011-11-22 Thread Jonathan Ellis
m1.small is still... small. start by turning compaction_throughput_mb_per_sec all the way down to 1MB/s. On Tue, Nov 22, 2011 at 9:58 AM, Alain RODRIGUEZ wrote: > I followed your advice and install a 3 m1.small instance cluster. The > problem is still there. I've got less timeouts because I have
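The knob Jonathan names lives in cassandra.yaml. A minimal fragment with the value he suggests (the comment on the default reflects my understanding of Cassandra of this era and should be checked against your own yaml):

```yaml
# cassandra.yaml -- throttle compaction I/O to reduce CPU/disk pressure
# (default is considerably higher; 1 slows compaction to a crawl)
compaction_throughput_mb_per_sec: 1
```

Editing the yaml and restarting the node is the safe way to apply it.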

Re: Compaction -> CPU load 100% -> time out

2011-11-22 Thread Alain RODRIGUEZ
I followed your advice and installed a 3-node m1.small cluster. The problem is still there. I get fewer timeouts because I have fewer compactions, due to a bigger amount of memory usable before flushing, but when a compaction starts, CPU usage can reach 95%, which produces timeouts. The co

experience with 1.0 branch

2011-11-22 Thread Radim Kolar
The 1.0 branch is less stable than 0.8 for production. We discovered the following problems: 1. memory leak in scrub (also reported on this list) 2. problem with saving key caches for a super column family - CASSANDRA-3511 3. in 1.0.3 some hints are stuck in system tables. Hints to other nodes seems t

Issues with JMX monitoring -- v0.8.7

2011-11-22 Thread David McNelis
Good morning, I'm trying to set up a simple monitoring application (that is a plugin to Nagios), code can be found here: https://github.com/so-net-developer/Cassandra/blob/master/nagios/CheckNode.java However, when I try to run the CheckNode.java program I get an error that: Exception in thread "

Upgrade cassandra to 1.0.0

2011-11-22 Thread Michael Vaknine
I am upgrading Cassandra 0.7.8 to 1.0.0 and got this error: ERROR [SSTableBatchOpen:2] 2011-11-22 09:48:00,000 AbstractCassandraDaemon.java (line 133) Fatal exception in thread Thread[SSTableBatchOpen:2,5,main] java.lang.AssertionError at org.apache.cassandra.io.sstable.SSTable.<init>(SSTable.java:99) at org.apac