Does nodetool removetoken not work?
On Thu, Oct 13, 2011 at 12:59 AM, Eric Czech wrote:
> Not sure if anyone has seen this before but it's really killing me right
> now. Perhaps that was too long of a description of the issue so here's a
> more succinct question -- How do I remove nodes associated with a cluster
> that contain no data and have no reason to be associated with the cluster?
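For reference, the usual removetoken sequence looks like this (host and token
are placeholders; syntax per the 0.7/1.0-era nodetool):

nodetool -h any-live-node ring                   # find the stale token
nodetool -h any-live-node removetoken <token>    # remove the dead node
nodetool -h any-live-node removetoken status     # check progress
nodetool -h any-live-node removetoken force      # last resort if it stalls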
Not sure if anyone has seen this before but it's really killing me right
now. Perhaps that was too long of a description of the issue so here's a
more succinct question -- How do I remove nodes associated with a cluster
that contain no data and have no reason to be associated with the cluster
what
Hi Aaron,
I didn't see that I can set up an account. Cool, I'll do it.
Jérémy
2011/10/12 aaron morton
> Thanks, feel free to add it if you want to.
>
> Otherwise I'll put it up.
>
> A
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
I am right now evaluating Cassandra 1.0.0-rc2 with a super column family with
one super column and variable (from 7 to 18) number of sub columns inside that
one super column. The column family has about 20 million rows (keys) and about
200,000 million sub columns (records) in total. I use a 3 n
strace -F -f -c java is what I use for other related issues. I haven't
used it with Cassandra though.
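If it helps, a sketch of attaching to the already-running daemon (the pgrep
pattern is an assumption; adjust for your install):

CASS_PID=$(pgrep -f CassandraDaemon | head -n 1)
strace -f -ff -tt -o /tmp/cassandra.strace -p "$CASS_PID"  # one trace file per thread
strace -f -c -p "$CASS_PID"                                # or a syscall summary on detach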
On Wed, Oct 12, 2011 at 3:22 PM, Ashley Martens wrote:
> This is a production node on real hardware. I like the strace idea; do you
> have a workable command line for that?
>
> On Wed, Oct 12, 2011 at
Hi,
I'd like to release version 1.0.0-1 of Mojo's Cassandra Maven Plugin
to sync up with the pending 1.0.0 release of Apache Cassandra.
This version needs to be tested in conjunction with the current
staging repo for Cassandra 1.0.0
We solved 1 issue:
http://jira.codehaus.org/secure/ReleaseNote.
This is a production node on real hardware. I like the strace idea; do you
have a workable command line for that?
On Wed, Oct 12, 2011 at 1:13 PM, Mohit Anchlia wrote:
> Yes. If you have exhausted all the options I think it will be good to
> see if this issue persists across other nodes after yo
I've never seen a JVM crash that was polite enough to run shutdown
hooks first, but it's worth a try.
On Wed, Oct 12, 2011 at 3:27 PM, Erik Forkalsrud wrote:
>
> My suggestion would be to put a recent Sun JVM on the problematic node and
> see if that eliminates the crashes.
>
> The Sun JVM appear
Thanks, feel free to add it if you want to.
Otherwise I'll put it up.
A
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 13/10/2011, at 1:01 AM, Jérémy SEVELLEC wrote:
> Hi Aaron,
>
> I think it's possible to complete the debian packag
Storage proxy will give you the total writes through the server, for all CFs.
CommitLog thread pool is not what you want. It's not designed to measure the
column or row throughput; it's just a count of how many tasks have run through
the thread pool.
The closest thing to recording the number of columns i
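If it's useful, the per-CF counts are also visible from the shell, and the
server-wide totals live in JMX (attribute names vary by version, so treat
this as a sketch):

nodetool -h localhost -p 7199 cfstats   # per-CF write count and latency
# Server-wide totals: the org.apache.cassandra.db:type=StorageProxy MBean,
# e.g. viewed in jconsole.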
My suggestion would be to put a recent Sun JVM on the problematic node
and see if that eliminates the crashes.
The Sun JVM appears to be the mainstream choice when running Cassandra,
so that's a more well tested configuration. You can search the list
archives for OpenJDK related bugs to see
Yes. If you have exhausted all the options I think it will be good to
see if this issue persists across other nodes after you decommission
that node.
If this is not production and issue is reproducible easily you can
also try using strace with fork option to see if it gets killed at the
same plac
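For the decommission route, a minimal sketch (host names hypothetical):

nodetool -h problem-node decommission   # streams its ranges to the neighbours
nodetool -h any-other-node ring         # confirm it has left the ring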
I guess it could be an option but I can't puppet the Oracle JDK install so I
would rather not.
On Wed, Oct 12, 2011 at 12:35 PM, Erik Forkalsrud wrote:
> On 10/12/2011 11:33 AM, Ashley Martens wrote:
>
> java version "1.6.0_20"
> OpenJDK Runtime Environment (IcedTea6 1.9.9) (6b20-1.9.9-0ubuntu1~
We have 20 nodes in this cluster.
Yes; however, are you recommending that I decommission the node?
I noted the compaction because it is commonly the last line in the log
file. For reference:
INFO [FlushWriter:12] 2011-10-12 18:10:09,823 Memtable.java (line 157)
Writing Memtable-HintsColumnFamily
On 10/12/2011 11:33 AM, Ashley Martens wrote:
java version "1.6.0_20"
OpenJDK Runtime Environment (IcedTea6 1.9.9) (6b20-1.9.9-0ubuntu1~10.10.2)
OpenJDK 64-Bit Server VM (build 19.0-b09, mixed mode)
This may have been mentioned before, but is it an option to use the
Sun/Oracle JDK?
- Erik -
You mentioned this happens only on one node? How many nodes do you
have? Is it possible to turn off this node completely and run
compactions on other nodes and see if this happens there too?
Also, you mentioned this happens after compaction. Did you mean during
compaction or right after it? What l
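For the compaction test, something like this on each of the other nodes
(keyspace and CF names hypothetical):

nodetool -h other-node compact MyKeyspace MyColumnFamily
# or just: nodetool -h other-node compact   (all keyspaces)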
No.
On Wed, Oct 12, 2011 at 11:46 AM, Brandon Williams wrote:
> Anything from the OOM killer in the last few lines from dmesg?
>
>
Anything from the OOM killer in the last few lines from dmesg?
On Wed, Oct 12, 2011 at 1:33 PM, Ashley Martens wrote:
> Ubuntu 10.10
>
> java version "1.6.0_20"
> OpenJDK Runtime Environment (IcedTea6 1.9.9) (6b20-1.9.9-0ubuntu1~10.10.2)
> OpenJDK 64-Bit Server VM (build 19.0-b09, mixed mode)
>
>
Ubuntu 10.10
java version "1.6.0_20"
OpenJDK Runtime Environment (IcedTea6 1.9.9) (6b20-1.9.9-0ubuntu1~10.10.2)
OpenJDK 64-Bit Server VM (build 19.0-b09, mixed mode)
Always the same node. No other node in this cluster (they all have the
same hardware and OS) has this issue.
I don't see any re
What OS? JVM version? Is it always on the same node or all nodes? I had a
similar problem in the past in that the OS killed Cassandra because it felt
threatened and needed more resources.
On Wed, Oct 12, 2011 at 7:47 PM, Ashley Martens wrote:
> The thing is we only see that error once every s
The thing is we only see that error once every so often. Additionally, since
Cassandra is not logging a shutdown message, it must be a violent
termination, which leaves no traces in the system logs. It's possible that
there is something wrong with the hardware, but on the OS side I don't see what
wo
Could you prefix the data with 3-4 bytes of a linear hash of the unencrypted
data? It wouldn't be a perfect sort, but you'd have less of a range to query
to get the sorted values.
- Stephen
---
Sent from my Android phone, so random spelling mistakes, random nonsense
words and other nonsense are a
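A minimal sketch of that idea, where the "hash" is simply the first few
plaintext bytes (order-preserving for ASCII data; not production crypto):

plaintext="alice@example.com"
prefix=$(printf '%s' "$plaintext" | head -c 4)   # "alic" sorts roughly right
cipher=$(printf '%s' "$plaintext" | openssl enc -aes-128-cbc -k secret -base64 -A)
echo "${prefix}:${cipher}"   # range-query on the prefix, then decrypt and
                             # fine-sort the matches client-side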
I heard Cassandra may be going in the direction of removing super columns, and
users are starting to just use prefixes in front of the column names.
The reason I ask is that I was going the way of only using supercolumns, and
then many tables were fixed with just one supercolumn per row as the structure
for that t
I've used Traverse which seems OK (http://www.zyrion.com/products/). Nice
historical graphs, alerting, etc.
On Wed, Oct 12, 2011 at 9:27 AM, Brian Fleming wrote:
> Hi,
>
> ** **
>
> Has anybody used any solutions for harvesting and storing Cassandra JMX
> metrics for monitoring, trend analysi
I set gc_grace to 2 hours for testing, then compacted the sstables using
nodecmd, but the resulting sstables contained many Deletion records older
than 2 hours:
"0d5e32633036663463310001":
[["0132f8820139303030303030303030303030303030303030303030303030303030
Unfortunately, that is not an option as we have to store the data in a
compressed and encrypted, and therefore binary and non-sortable, form.
On 10/12/2011 06:39 PM, David McNelis wrote:
Is it an option to not convert the data to binary prior to inserting
into Cassandra? Also, how large are the
Is it an option to not convert the data to binary prior to inserting into
Cassandra? Also, how large are the strings you're sorting? If it's viable
to not convert to binary before writing to Cassandra, and you use one of the
string-based column ordering techniques (utf8 or ascii, for example), then
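For example, from cassandra-cli (names hypothetical), a CF whose column names
sort as UTF-8 strings:

cassandra-cli -h localhost <<'EOF'
use MyKeyspace;
create column family SortedStrings with comparator = UTF8Type;
EOF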
I personally use Munin with the jmx2munin plugin. Works perfectly for me.
On 10/12/2011 06:35 PM, David McNelis wrote:
Brian,
Have you looked at Datastax OpsCenter? I think it would do a lot of
what you're looking for.
On Wed, Oct 12, 2011 at 11:27 AM, Brian Fleming
mailto:bigbrianflem...
Brian,
Have you looked at Datastax OpsCenter? I think it would do a lot of what
you're looking for.
On Wed, Oct 12, 2011 at 11:27 AM, Brian Fleming
wrote:
> Hi,
>
> ** **
>
> Has anybody used any solutions for harvesting and storing Cassandra JMX
> metrics for monitoring, trend analysis, e
Hi there,
we are currently building a prototype based on Cassandra and ran into
problems implementing sorted lists containing millions of items.
The special thing about the items of our lists is that Cassandra is not
able to sort them, as the data is stored in a binary format which is not
Hi Aaron,
I cannot read the column with a slice query.
The slice query only returns data up to a certain column, and after that I
only get empty results.
I added log output to QueryFilter.isRelevant to see if the filter is
dropping the column(s), but it doesn't even show up there.
Next thing I will
> Hi,
>
> Has anybody used any solutions for harvesting and storing Cassandra JMX
> metrics for monitoring, trend analysis, etc.?
>
> JConsole is useful for single node monitoring/etc but not scalable & data
> obviously doesn't persist between sessions...
>
> Many thanks,
>
> Brian
I'm comfortable with saying that there's some problem with your
environment that hasn't been identified yet, because we see many many
people running 0.7.9 and it just does not die randomly.
Again, the exception you see in the log is consistent with being
killed externally (and not consistent with
Tue Oct 11 21:34:10 UTC 2011 - Fuck this Cassandra bullshit... it died again
Tue Oct 11 22:06:10 UTC 2011 - Fuck this Cassandra bullshit... it died again
Tue Oct 11 22:36:10 UTC 2011 - Fuck this Cassandra bullshit... it died again
Wed Oct 12 00:40:10 UTC 2011 - Fuck this Cassandra bullshit... it di
deploy@mobage-prod-cassandra150:~$ grep -i 'killed process'
/var/log/messages
deploy@mobage-prod-cassandra150:~$
On Tue, Oct 11, 2011 at 5:57 PM, Jonathan Ellis wrote:
> grep -i 'killed process' /var/log/messages
>
>
On Wed, Oct 12, 2011 at 3:51 PM, Jonathan Ellis wrote:
> Well, the reason you'd want to run repair is to get the tombstone on
> nodes that missed the insert. And that would only be important if you
> sometimes generate inserts that would be otherwise shadowed by the
> tombstone, right?
The initi
The disk full bug is fixed in the -final artifacts and the 1.0.0 svn branch.
On Wed, Oct 12, 2011 at 10:16 AM, Günter Ladwig wrote:
> Hi,
>
> I tried running cfstats on other nodes. It works on all except two nodes.
> Then I tried scrubbing the OSP CF on one of the nodes where it fails
> (actua
Hi,
I tried running cfstats on other nodes. It works on all except two nodes. Then
I tried scrubbing the OSP CF on one of the nodes where it fails (actually the
node where the first exception I reported happened), but got this exception in
the log:
[...]
INFO 14:58:00,604 Scrub of
SSTableRead
On 10/11/2011 12:05 PM, Eric Evans wrote:
> Let's do it. We can organize an official one, and still grab food
> together if that's not enough. :)
Great! Thanks for putting this together.
Try scrubbing the CF ("nodetool scrub") and see if that fixes it.
If not, then at least we have a reproducible problem. :)
On Tue, Oct 11, 2011 at 4:43 PM, Günter Ladwig wrote:
> Hi all,
>
> I'm seeing the same problem on my 1.0.0-rc2 cluster. However, I do not have
> 5000, but just three (comp
Well, the reason you'd want to run repair is to get the tombstone on
nodes that missed the insert. And that would only be important if you
sometimes generate inserts that would be otherwise shadowed by the
tombstone, right?
On Wed, Oct 12, 2011 at 2:17 AM, Sylvain Lebresne wrote:
> Unfortunately
On 10/11/2011 09:49 PM, Terry Cumaranatunge wrote:
Hello,
If you set a ttl and expire a column, I've read that this eventually
turns into a tombstone and will be cleaned out by the GC. Are
expirations considered a form of delete that still requires a node
repair to be run in gc_grace_period se
On Wed, Oct 12, 2011 at 9:37 AM, Amit Chavan wrote:
> Hi,
> Looking at this talk
> (http://www.datastax.com/wp-content/uploads/2011/07/cassandra_sf_counters.pdf)
> by Sylvain Lebresne at DataStax, I had a few questions related to my
> understanding of Cassandra's architecture.
> Assuming that we have
Hi Aaron,
I think it's possible to complete the Debian packaging wiki too (for a better
user experience):
When servers connect to the Internet through a proxy, it's possible to get
another error when adding the public key
(gpg --keyserver pgp.mit.edu --recv-keys F758CE318D77295D).
The error is like
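(For reference, the usual workaround is to pass the proxy explicitly and go
over port 80; the proxy host below is a placeholder:)

gpg --keyserver hkp://pgp.mit.edu:80 \
    --keyserver-options http-proxy=http://your-proxy:3128 \
    --recv-keys F758CE318D77295D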
> - Serializing data is not an option, because I would like to have the
> possibility of accessing the data using the console
Fair enough, but I would do some tests to see the difference in performance and
disk space
> - Using Cassandra to build something like "HTTP session store" with short TTL
> is not an a
Yes, all three use SnappyCompressor.
On 12.10.2011, at 02:58, Jonathan Ellis wrote:
> Are all 3 CFs using compression?
>
> On Tue, Oct 11, 2011 at 4:43 PM, Günter Ladwig wrote:
>> Hi all,
>>
>> I'm seeing the same problem on my 1.0.0-rc2 cluster. However, I do not have
>> 5000, but just three
I only saw this error message when all Cassandra nodes are down.
How do you get the Cluster and how do you set the hosts?
From: CASSANDRA learner [mailto:cassandralear...@gmail.com]
Sent: 12 October 2011 14:30
To: user@cassandra.apache.org
Subject: Re: Hector Problem Basic one
Thanks for the reply, Ben.
Actuall
Hi,
Looking at this talk (
http://www.datastax.com/wp-content/uploads/2011/07/cassandra_sf_counters.pdf)
by Sylvain Lebresne at DataStax, I had a few questions related to my
understanding of Cassandra's architecture.
Assuming that we have a keyspace in Cassandra with:
1. Replication Factor (RF) = 1.
Thanks for the quick replies, guys!
Just to explain why I wanted to understand these two measures: I do
batch inserts to Cassandra, but the batches are not fixed in size, i.e. the
number of columns in a batch varies, and the data type of the values
placed in the columns varies too (the name of
> earlier: http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/how-does-compaction-throughput-kb-per-sec-affect-disk-io-td6831711.html
> might not directly throttle the disk I/O?
Again: Compaction throttling will throttle compaction, which affects
both CPU and I/O for fundamental reas
Unfortunately, expiring columns are no magic bullet. If you insert columns
with ttl=1, you're roughly doing the same thing as deleting, so the exact
same rule concerning repair applies.
What can be said about repair and expiring columns (and that may or may not
be helpful) is that if you have a colu
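For concreteness, a TTL insert from cassandra-cli looks like this (names
hypothetical; "with ttl" needs 0.8 or later):

cassandra-cli -h localhost <<'EOF'
use MyKeyspace;
set MyCF['row1']['col1'] = 'value' with ttl = 7200;
EOF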