Hi Jonathan,
You are right, I had one node on 1.0.2 for some reason, so I did the upgrade again.
I now have a 4-node cluster upgraded to 1.0.3, but I get the following error
on 2 nodes in the cluster:
ERROR [HintedHandoff:3] 2011-11-23 06:39:31,250 AbstractCassandraDaemon.java
(line 133) Fatal exception i
Thanks. I'm trying to look up HttpAdaptor and what it does;
can you give any pointers? I didn't find much useful
info just yet.
Maxim
On 11/22/2011 9:52 PM, Jeremiah Jordan wrote:
Yes, that is the port nodetool needs to access.
On Nov 22, 2011, at 8:43 PM, Maxim Potekhin wrote:
Hel
Yes, that is the port nodetool needs to access.
On Nov 22, 2011, at 8:43 PM, Maxim Potekhin wrote:
> Hello,
>
> I have this in my cassandra-env.sh
>
> JMX_PORT="7199"
>
> Does this mean that if I use nodetool from another node, it will try to
> connect to that
> particular port?
>
> Thanks,
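For reference, a minimal sketch of the kind of JMX connection nodetool makes to that
port, assuming the JMX_PORT of 7199 from the cassandra-env.sh quoted above; the
JmxPortCheck class name and the host-argument handling are just for illustration:

import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxPortCheck {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "localhost";
        // 7199 is the JMX_PORT value set in cassandra-env.sh above
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url, null);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            System.out.println("Connected, " + mbs.getMBeanCount() + " MBeans registered");
        } finally {
            connector.close();
        }
    }
}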
Oops, I was thinking all in the same keyspace. If you made a new keyspace for
each DC, you could specify where to put the data and have it live in only one
place.
-Jeremiah
On Nov 22, 2011, at 8:49 PM, Jeremiah Jordan wrote:
> Cassandra's Multiple Data Center Support is meant for replicating a
Cassandra's Multiple Data Center Support is meant for replicating all data
across multiple datacenters efficiently.
You could use the ByteOrderedPartitioner to prefix data with a key and assign
those keys to nodes in specific data centers, though the edge nodes would get
tricky as those would
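A rough sketch of that key-prefixing idea, assuming a hypothetical dcTag and a
simple "dc:naturalKey" byte layout; it only shows how keys would sort contiguously
per DC under the ByteOrderedPartitioner, not how the resulting token ranges get
assigned to each data center's nodes (the tricky edge-node part):

import java.nio.ByteBuffer;
import java.nio.charset.Charset;

public class DcPrefixedKey {
    private static final Charset UTF8 = Charset.forName("UTF-8");

    // Under the ByteOrderedPartitioner, keys sort by their raw bytes, so placing
    // a data-center tag first keeps each DC's rows in one contiguous token range.
    static ByteBuffer rowKey(String dcTag, String naturalKey) {
        return ByteBuffer.wrap((dcTag + ":" + naturalKey).getBytes(UTF8));
    }

    public static void main(String[] args) {
        // Hypothetical example: a time-series row key local to "dc1"
        ByteBuffer key = rowKey("dc1", "sensor-42:2011-11-22T09:00");
        System.out.println(key.remaining() + " byte key");
    }
}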
Hello,
I have this in my cassandra-env.sh
JMX_PORT="7199"
Does this mean that if I use nodetool from another node, it will try to
connect to that
particular port?
Thanks,
Maxim
Hi,
Thanks for the quick reply. Sorry if my question was not clear. I tried to
provide more info.
> Date: Tue, 22 Nov 2011 20:43:33 -0500
> Subject: Re: DataCenters each with their own local data source
> From: md.jahangi...@gmail.com
> To: user@cassandra.a
Distributing writes to all D.C.s, or reads?
If each D.C. has data specific to that particular geo, why do you have to
read from a remote D.C.?
You can easily incorporate logic to redirect an operation (either write or read)
to the appropriate (local) D.C.
Still wondering why you want to do so. I am assuming
Hi,
I am wondering if Cassandra's features and datacenter awareness can help me
with my scalability problems.
Suppose that I have 10-20 data centers, each with its own local (massive)
source of time series data. I would like:
- to avoid replication across data centers (this seems doable
There are quite a few attributes in
org.apache.cassandra.db.StorageServiceMBean that could serve that
purpose: Initialized, RPCServerRunning, OperationMode, Joined, and perhaps others.
Note that some of those may not exist depending on your version of
Cassandra; pick one appropriate for your ve
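A small sketch of reading those attributes over JMX, assuming the StorageService
MBean name Cassandra registers and an MBeanServerConnection obtained as in the
earlier JMX example; the NodeStatusCheck class name is made up, and any attribute
missing in your version will throw an AttributeNotFoundException:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class NodeStatusCheck {
    // Prints the status attributes named above from the StorageService MBean.
    // Pass in an MBeanServerConnection obtained as in the earlier JMX sketch.
    static void printStatus(MBeanServerConnection mbs) throws Exception {
        ObjectName storageService =
                new ObjectName("org.apache.cassandra.db:type=StorageService");
        String[] attrs = {"OperationMode", "Initialized", "Joined", "RPCServerRunning"};
        for (String attr : attrs) {
            System.out.println(attr + " = " + mbs.getAttribute(storageService, attr));
        }
    }
}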
Would there be a better bean to look at to ascertain the status, one that
would be created when the server starts up?
On Tue, Nov 22, 2011 at 11:47 AM, Nick Bailey wrote:
> The StorageServiceMBean is only created once some reads/writes
> actually go to that node. Do a couple reads/writes from the CLI
This is already a lot better. While compacting, the CPU load remains quite
low. However, I still have some spikes of overload generating timeouts. Are
there any other tunings I can do to make this compaction more stable?
2011/11/22 Jonathan Ellis
> m1.small is still... small. start by turning
>
The StorageServiceMBean is only created once some reads/writes
actually go to that node. Do a couple reads/writes from the CLI and
you should see the MBean afterwards.
This also means your monitoring application should handle this error
in the case of nodes restarting.
On Tue, Nov 22, 2011 at 7:5
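One way a check could tolerate that, sketched under the behavior described above
(the MBean simply is not registered yet); the NodeStatusProbe class and method
names are made up for illustration:

import javax.management.InstanceNotFoundException;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class NodeStatusProbe {
    // Returns the node's OperationMode, or null if the StorageService MBean
    // has not been registered yet (e.g. right after a restart, before any
    // reads/writes have hit the node), so the check can report "starting up"
    // instead of failing hard.
    static String operationModeOrNull(MBeanServerConnection mbs) throws Exception {
        ObjectName storageService =
                new ObjectName("org.apache.cassandra.db:type=StorageService");
        try {
            return (String) mbs.getAttribute(storageService, "OperationMode");
        } catch (InstanceNotFoundException notRegisteredYet) {
            return null;
        }
    }
}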
We don't use a subcomparator. We double-checked the comparator, and it looks fine.
There was a typo in my original post; I meant to say upgrade to 1.0.2 from
0.6.13. This issue is now resolved after we upgraded Cassandra to 1.0.3.
Thanks!
Huy
Hi,
I have a 3-node Cassandra cluster. I have RF set to 3 and do reads
and writes using QUORUM.
Here is my initial ring configuration
[root@CAP4-CNode1 ~]# /root/cassandra/bin/nodetool -h localhost ring
Address         DC          Rack        Status State   Load            Owns    Token
11342745
m1.small is still... small. Start by turning
compaction_throughput_mb_per_sec all the way down to 1MB/s.
On Tue, Nov 22, 2011 at 9:58 AM, Alain RODRIGUEZ wrote:
> I followed your advice and install a 3 m1.small instance cluster. The
> problem is still there. I've got less timeouts because I have
I followed your advice and installed a 3 m1.small instance cluster. The
problem is still there. I get fewer timeouts because I have less
compaction, due to a bigger amount of memory usable before flushing, but
when a compaction starts, I can reach 95% CPU usage, which produces
timeouts. The co
The 1.0 branch is less stable than 0.8 for production. We discovered the
following problems:
1. memory leak in scrub (also reported on this list)
2. problem with saving key caches for super column family - CASSANDRA-3511
3. in 1.0.3 some hints are stuck in system tables. Hints to other nodes
seems t
Good morning,
I'm trying to set up a simple monitoring application (a plugin for
Nagios); the code can be found here:
https://github.com/so-net-developer/Cassandra/blob/master/nagios/CheckNode.java
However, when I try to run the CheckNode.java program I get an error that:
Exception in thread "
I am upgrading Cassandra from 0.7.8 to 1.0.0 and got this error:
ERROR [SSTableBatchOpen:2] 2011-11-22 09:48:00,000
AbstractCassandraDaemon.java (line 133) Fatal exception in thread
Thread[SSTableBatchOpen:2,5,main]
java.lang.AssertionError
        at org.apache.cassandra.io.sstable.SSTable.<init>(SSTable.java:99)
        at org.apac