Aaron,
thanks for your help.
I ran 'nodetool scrub' and it finished after a couple of hours, but there is
no information about out-of-order rows in the logs, and compaction on the
column family still raises the same exception.
With the row key I could identify some of the errant SSTables and remov
On Tue, Feb 12, 2013 at 6:13 PM, Edward Capriolo wrote:
>
> Are vnodes on by default? It seems that many on the list are using this feature
> with small clusters.
They are not.
-Brandon
Are there any docs or examples available for the DataStax java-driver besides
what's in the GitHub repo?
-- Drew
I take that back. vnodes are useful for a cluster of any size, but I do not see
them as a day-one requirement. It seems like many people are stumbling over
this.
On Tuesday, February 12, 2013, Edward Capriolo
wrote:
>
> Are vnodes on by default? It seems that many on the list are using this
feature with s
Are vnodes on by default? It seems that many on the list are using this feature
with small clusters.
I know these days anything named "virtual" is sexy, but they are not useful
for small clusters, are they? I do not see why people are using them.
On Monday, February 11, 2013, aaron morton wrote:
> So
You always want to run 'nodetool upgradesstables' as soon as possible. Sooner
or later, as the tables compact, they will upgrade anyway. I hit a bug once with
large bloom filters where, after an upgrade, reading old bloom filters caused 0
rows to be returned when there was data.
Once bitten, twice shy. Upgrade one node, disable gossip, di
It can also happen if you have an older or non-Sun JVM.
On Tuesday, February 12, 2013, aaron morton wrote:
> This looks like a bug in 1.2 beta
https://issues.apache.org/jira/browse/CASSANDRA-4553
> Can you confirm you are running 1.2.1 and if you can re-create this with
a clean install please create
Yes. You need to run nodetool repair on each node. Repair calculates and
transmits the differences.
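For example, something along these lines (a sketch only; "MyKeyspace" and
SimpleStrategy are assumptions, adjust to your setup):
  # 1) raise the RF, e.g. from the cassandra-cli:
  #      update keyspace MyKeyspace with strategy_options = {replication_factor:3};
  # 2) then on every node, one at a time:
  nodetool repair MyKeyspace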
On Tuesday, February 12, 2013, S C wrote:
> I have some data in my keyspaces. When I increase the replication factor of an
existing keyspace, say from 2 to 3, will a nodetool repair create a new
repli
>
> num_tokens is only used at bootstrap
I think it's also used in this case (already bootstrapped with num_tokens =
1 and now num_tokens > 1). Cassandra will split a node's current range into
*num_tokens* parts and there should be no change to the amount of ring a
node holds before shuffling.
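Roughly, the relevant cassandra.yaml lines on such an already-bootstrapped node
would look like this (a sketch; 256 is just the commonly used example value):
  # conf/cassandra.yaml
  num_tokens: 256    # the node's existing range is split into 256 contiguous slices
  # initial_token:   # leave as it was; ownership only moves when you run the shuffle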
O
We are open sourcing the system so I don't mind at all.
We are using patterns from this web page
https://github.com/deanhiller/playorm/wiki/Patterns-Page
Realize PlayOrm is doing a huge amount of heavy lifting for us with its
virtual tables and partitioning. This will be hard to explain, but I wi
Would you mind sharing your schema on the list? It would be useful to see
how you modeled your data. Or you could email me privately if you want.
Thanks
Boris
On Tue, Feb 12, 2013 at 4:11 PM, Hiller, Dean wrote:
> Yes, the limit of the width of a row is approximately in the millions,
> perhaps
I have some data in my keyspaces. When I increase the replication factor of an
existing keyspace, say from 2 to 3, will a nodetool repair create a new
replica on one of the other nodes in the cluster? Can somebody explain?
Thanks,
SC
Yes, the limit of the width of a row is approximately in the millions, perhaps
lower than 10 million. We plan to go well above that in our use case ;). Our
widest row for indexing right now is only around 200,000 columns and we have
been in production one month (at 10 years that would be about
Restore the settings for num_tokens and initial_token to what they were before
you upgraded.
They should not be changed just because you are upgrading to 1.2; they are used
to enable virtual nodes, which are not necessary to run 1.2.
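In other words, something like this in cassandra.yaml (a sketch; the actual
token is whatever the node had before):
  # conf/cassandra.yaml -- non-vnode settings, as before the upgrade
  num_tokens: 1
  initial_token: <the token this node already owns, per 'nodetool ring'>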
Cheers
-
Aaron Morton
Freelance Cassandra D
This looks like a bug in the 1.2 beta:
https://issues.apache.org/jira/browse/CASSANDRA-4553
Can you confirm you are running 1.2.1? If you can re-create this with a
clean install, please create a ticket on
https://issues.apache.org/jira/browse/CASSANDRA
Thanks
-
Aaron Morton
Free
No, I did not run shuffle since the upgrade was not successful.
What do you mean by "reverting the changes to num_tokens and initial_token"?
Set num_tokens=1? initial_token should be ignored since the node is not
bootstrapping, right?
Thanks,
Daning
On Tue, Feb 12, 2013 at 10:52 AM, aaron morton wrote:
> We
Were you upgrading to 1.2 AND running the shuffle, or just upgrading to 1.2?
If you have not run shuffle I would suggest reverting the changes to num_tokens
and initial_token. This is a guess because num_tokens is only used at bootstrap.
Just get upgraded to 1.2 first, then do the shuffle when t
Thanks Sylvain. BTW, what's the status of the java-driver? When will it be GA?
On Feb 12, 2013, at 1:19 AM, Sylvain Lebresne wrote:
> Yes, it's called atomic_batch_mutate and is used like batch_mutate. If you
> don't use thrift directly (which would qualify as a very good idea), you'll
> need
On Feb 12, 2013, at 18:58 , aaron morton wrote:
> Just checking if this sorted itself out?
Well, partially. :) Situation at the moment:
>> 1) Is the new node not showing up in "nodetool status" expected behavior?
Still not showing; I don't know if it should. :)
>> 2) Do I have to wait fo
When my EC2 instance failed I restarted it, and added the new private IP
address to the list of seed nodes (was this my error?).
Nodetool then showed 4 live nodes and one dead one (corresponding to the
old private IP address).
I'm guessing that what I should have done on the restarted node is star
> Does anyone know the impact of not running upgradesstables? Or possibly not
> running it for several days?
nodetool repair will not work.
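So it is worth running on each node once the binary upgrade is done; a rough
sketch (keyspace/CF names are placeholders, and arguments can vary a bit by
version):
  nodetool upgradesstables                    # rewrite sstables for all keyspaces
  nodetool upgradesstables MyKeyspace MyCF    # or limit it to a single CF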
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 12/02/2013, at 11:54 AM, Mik
You have linked to the 1.2 news file, which branched from 1.1 at some point.
Look at the news file in the distribution you are installing or here
https://github.com/apache/cassandra/blob/cassandra-1.1/NEWS.txt
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aa
Cassandra handles nodes changing IP. The important thing to Cassandra is the
token, not the IP.
In your case did the replacement node have the same token as the failed one?
You can normally work around these issues using commands like nodetool
removetoken.
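A sketch of that workaround (the token value comes from 'nodetool ring' for the
dead IP; sub-commands may differ slightly by version):
  nodetool removetoken <token-of-the-dead-node>
  nodetool removetoken status     # check progress
  nodetool removetoken force      # only if the removal gets stuck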
Cheers
-
Aaron Morton
> I see the same keys in both nodes. Replication is not enabled.
Why do you say that?
Check the schema for Keyspace1 using the cassandra-cli.
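For example (a sketch; run it against any node):
  cassandra-cli -h localhost
  [default@unknown] show keyspaces;
The output lists each keyspace with its placement strategy and strategy options
(including the replication factor).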
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 12/02/2013, at 9:31 AM
Snapshot all nodes so you have a backup: nodetool snapshot -t corrupt
Then run nodetool scrub on the errant CF and look in the logs for messages such as:
"Out of order row detected…"
"%d out of order rows found while scrubbing %s; Those have been written (in
order) to a new sstable (%s)"
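Put together, something like this on each node (a sketch; the keyspace/CF names
and the log path are placeholders):
  nodetool snapshot -t corrupt
  nodetool scrub MyKeyspace MyColumnFamily
  grep -i "out of order" /var/log/cassandra/system.log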
Cheers
We are using Cassandra for time series as well, with PlayOrm. A guess is
we will be doing equal reads and writes on all the data going back 10
years (currently in production we are write heavy). We have
60,000 virtual tables (one table per sensor we read from, and yes, we have
that many sen
> So is it possible to delete all the data inserted in some CF between 2 dates
> or data older than 1 month?
No.
You need to issue row-level deletes. If you don't know the row keys you'll need
to do range scans to locate them.
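Per row it is then just a normal delete; a sketch assuming CQL3 via cqlsh and a
column family called "events" keyed by "id" (adapt to your schema, or use the
equivalent row delete in the cassandra-cli or your client library):
  cqlsh> USE MyKeyspace;
  cqlsh:mykeyspace> DELETE FROM events WHERE id = 'some-row-key';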
If you are deleting parts of wide rows consider reducing the
min_
Just checking if this sorted itself out?
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 10/02/2013, at 1:15 AM, Jouni Hartikainen
wrote:
> Hello all,
>
> I have a cluster of three nodes running 1.2.1 and I'd l
> -Xms8049M
> -Xmx8049M
> -Xmn800M
That's a healthy amount of memory for the JVM.
If you are using Row Caches, reduce their size and/or ensure you are using
Serializing (off heap) caches.
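That is, something like this in cassandra.yaml (a sketch; the size is only an
example, and the exact setting names depend on your version):
  # conf/cassandra.yaml
  row_cache_size_in_mb: 128                      # shrink it, or 0 to disable
  row_cache_provider: SerializingCacheProvider   # keeps the row cache off-heap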
Also consider changing the yaml conf flush_largest_memtables_at from 0.75 to
0.80 so it is different to
Your use case is 100% on the money for Cassandra. But let me take the
chance to slam the other NoSQLs (not really slam, but you know).
Riak is a key-value store. It is not a column family store where a
row key has a map of sorted values. This makes the time series more
awkward, as the time series has t
Hi
I am currently evaluating Cassandra on a single node. Running the node
seems fine; it responds to Thrift (via Hector) and CQL3 requests to create
and delete keyspaces. I have not yet tested any data operations.
However, I get the following each time the node is started. This is using
the latest
Would you mind creating a new one, since that former issue was closed a long
time ago? I highly suspect that the use of vnodes is not irrelevant here, so
this is kind of new (it may not be a new bug per se, but vnodes may make it
way more likely to reproduce).
Thanks,
Sylvain
On Tue, F
Row-level isolation is always there. It works for both atomic and non-atomic
batch_mutate, but only within the same row (let's recall that even the
non-atomic batch_mutate *is* atomic within a single row).
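To illustrate in CQL3 terms (a sketch only; the "users" table and its columns
are made up, and this is the cqlsh view rather than the raw Thrift calls):
updates to the same row become visible together, whether or not they are batched:
  BEGIN BATCH
    UPDATE users SET email = 'a@example.com' WHERE user_id = 'u1';
    UPDATE users SET name = 'Alice' WHERE user_id = 'u1';
  APPLY BATCH;
A reader of row 'u1' sees either neither update or both; a batch touching
several different rows gets no such guarantee across rows.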
--
Sylvain
On Tue, Feb 12, 2013 at 11:13 AM, DE VITO Dominique <
dominique.dev...@thalesgro
Hi,
we are hitting an AssertionError in MerkleTree as described here in Jira:
https://issues.apache.org/jira/browse/CASSANDRA-3014. We added a comment;
however, this issue has already been resolved as "cannot reproduce". Is
someone looking at the comments of resolved issues, or is it better to open
a n
Is Cassandra 1.1 row-level isolation (a kind of batch-like behavior) related to
the "traditional" batch_mutate or the atomic_batch_mutate Thrift API?
Thanks for the answer.
Dominique
From: Sylvain Lebresne [mailto:sylv...@datastax.com]
Sent: Tuesday, 12 February 2013 10:19
To: user@cassandra.apache.org
Subject:
Yes, it's called atomic_batch_mutate and is used like batch_mutate. If you
don't use Thrift directly (which would qualify as a very good idea), you'll
need to refer to whatever client library you are using to see 1) if support
for that new call has been added and 2) how to use it. If you are not su