Thanks Maki.
If you come across any other book covering the latest Cassandra versions,
please let me know.
On Thu, Sep 22, 2011 at 12:03 PM, Maki Watanabe wrote:
> The book is a bit outdated now.
> You would be better off using cassandra-cli to define your application schema.
> Please refer to con
The book is a bit outdated now.
You would be better off using cassandra-cli to define your application schema.
Please refer to conf/schema-sample.txt and the help in cassandra-cli.
% cassandra-cli
[default@unknown] help;
[default@unknown] help create keyspace;
[default@unknown] help create column family;
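For example, a minimal schema definition in the 0.8 CLI looks roughly like this
("Demo" and "Users" are placeholder names, and you may need to set
placement_strategy / strategy_options explicitly for your cluster):
% cassandra-cli -h localhost
[default@unknown] create keyspace Demo;
[default@unknown] use Demo;
[default@Demo] create column family Users with comparator = UTF8Type;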
Any suggestions on this, folks?
On Tue, Sep 6, 2011 at 12:59 AM, tushar pal wrote:
> Hi,
> I am facing some problems using Thrift 7.
> I downloaded the tar file. I downloaded the windows exe too for . I created a
> thrift jar from the lib/java path and then generated the Java classes from the
> tutor
Hi all,
I'm referring to the book by Eben Hewitt, Cassandra: The Definitive Guide.
There, in the Sample Application chapter (Chapter 4), example 4.1, a sample
schema is given in a file named "cassandra.yaml". (I have mentioned it
below.)
I'm using Cassandra version 0.8.6.
My question
Thanks Jonathan. Where is the data directory configured? I ask because I did
not find any permission problem.
Daning
On 09/21/2011 11:01 AM, Jonathan Ellis wrote:
Means Cassandra couldn't create an empty file in the data directory
designating a sstable as compacted. I'd look for permissions
problems.
Sho
Okay, this is leaning more towards getting Brisk into our environment and
making sure we can get it all working.
We plan on deploying Cassandra 0.8.5/6 to production; however, Brisk only
works on 0.8.1 (in the 0.8.x release line).
Can we have a Brisk node operating as part of the ring to still do our d
How much data is on the nodes in cluster 1, and how much disk space is there on
cluster 2? Be aware that Cassandra 0.8 has an issue where repair can go crazy
and use a lot of space.
If you are not regularly running repair I would also repair before the move.
The repair after the copy is a good idea but
What are the token assignments and what is the RF?
Without knowing the details I would guess…
Make the RF changes you want for DC 1 and 2, and repair.
Decommission the nodes in DC3 one at a time.
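In nodetool terms that is roughly (host names are placeholders):
% nodetool -h dc3-node1 decommission
% nodetool -h dc1-node1 ring    # check the node has left before doing the next one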
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thela
It all depends on what sort of disasters you are planning for and how valuable
your data is.
The cheap and cheerful approach is to snapshot and then rsync / copy off the
node. Or you can do something like https://github.com/synack/tablesnap .
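A rough sketch of the snapshot-and-copy route (the snapshot name, paths and
backup host are placeholders; the exact snapshot directory layout depends on
your version and data_file_directories):
% nodetool -h localhost snapshot mybackup
% rsync -av /var/lib/cassandra/data/MyKeyspace/snapshots/ backuphost:/backups/node1/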
Cheers
-
Aaron Morton
Freelance Ca
Hello,
We're currently running on a 3-node RF=3 cluster. Now that we have a better
grip on things, we want to replace it with a 12-node RF=3 cluster of
"smaller" servers. So I wonder what the best way to move the data to the new
cluster would be. I can afford to stop writing to the current cluster
Thanks guys, that was precisely the problem. I also upped
the thrift_max_message_length_in_mb setting, since I guessed the default of
16MB would also block larger messages.
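For reference, the related settings in conf/cassandra.yaml look something like
this (the values below are only examples, not recommendations):
# conf/cassandra.yaml -- example values only
thrift_framed_transport_size_in_mb: 60
thrift_max_message_length_in_mb: 64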
On Tue, Sep 20, 2011 at 6:47 PM, Jim Ancona wrote:
> Pete,
>
> See this thread
>
> http://groups.google.com/group/hector-us
+1 (non binding but lgtm)
On Wed, Sep 21, 2011 at 2:27 AM, Stephen Connolly
wrote:
> Hi,
>
> I'd like to release version 0.8.6-1 of Mojo's Cassandra Maven Plugin
> to sync up with the recent 0.8.6 release of Apache Cassandra.
>
>
> We solved 2 issues:
> http://jira.codehaus.org/secure/ReleaseNote
Means Cassandra couldn't create an empty file in the data directory
designating a sstable as compacted. I'd look for permissions
problems.
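A quick way to check along those lines (the path and the "cassandra" user are
assumptions; use your configured data_file_directories and whatever user the
daemon actually runs as):
% ls -ld /var/lib/cassandra/data/MyKeyspace
% sudo -u cassandra touch /var/lib/cassandra/data/MyKeyspace/permission-test
% sudo -u cassandra rm /var/lib/cassandra/data/MyKeyspace/permission-test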
Short term there is no dire consequence, although it will keep
re-compacting that sstable. Longer term you'll run out of disk space
since nothing gets delete
I got this exception in system.log several times a day. Do you have any idea
what caused this problem and what the consequences are?
ERROR [CompactionExecutor:12] 2011-09-15 14:29:19,578
AbstractCassandraDaemon.java (line 139) Fatal exception in thread
Thread[CompactionExecutor:12,1,main]
java.io.
Hi,
We have Cassandra in production in three data centers with five nodes each. We
are getting rid of one of the data centers, so I want to reconfigure my cluster
to two data centers with seven nodes each. I first need to shut down DC3. I have
already shut down traffic to DC3 and am running repairs. M
On Wed, Sep 21, 2011 at 6:19 PM, Radim Kolar wrote:
> Sam,
>
> thank you for your detailed problem description. What is the reason why a
> delete can't remove the old counter value from the memtable?
The reason is that even if it did, incrementing after a deletion would still
not work correctly. So we've
Sam,
thank you for your detailed problem description. What is the reason why a
delete can't remove the old counter value from the memtable? I ask because
currently we need to code a workaround in our applications.
It would be nice to copy your description of this problem to:
http://wiki.apache.org/cassandra/Counter
On Wed, Sep 21, 2011 at 5:49 PM, Radim Kolar wrote:
> On 21.9.2011 14:44, David Boxenhorn wrote:
>>
>> The reason why counters work is that addition is commutative, but deletes
>> are not commutative
>
> This is not my case, if you look at my 2 posts.
> The 1st post seems to be a CLI bug because
Hi Radim,
This is the current behaviour you will see if you are inserting a row
tombstone in a counter CF, as the row tombstone does not remove the old
counter value from the memtable. When you increment the counter after
deleting the row, the counter now has a timestamp later than the row
tombsto
On 21.9.2011 14:44, David Boxenhorn wrote:
The reason why counters work is that addition is commutative, but
deletes are not commutative
This is not my case, if you look at my 2 posts.
The 1st post seems to be a CLI bug, because the same operation from a program
works fine.
In the 2nd post I alrea
You can snapshot individual CFs. sstable2json is primarily for debugging.
On Wed, Sep 21, 2011 at 9:17 AM, David McNelis
wrote:
> When planning a DR strategy, which option is going to, most consistently,
> take the least amount of disk space, be fastest to recover from, least
> complicated recov
https://issues.apache.org/jira/browse/CASSANDRA-3235
On Wed, Sep 21, 2011 at 4:34 AM, liangfeng wrote:
>> Do you mind opening a ticket on JIRA for this?
>
> Yes, if it is a mistake. Thanks!
--
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for profe
When planning a DR strategy, which option is going to, most consistently,
take the least amount of disk space, be fastest to recover from, and have the
least complicated recovery, etc.?
I've read through the Operations documents and my take is this so far. If I
have specific column families I want to snapshot
Yeah, looks like it should just pass through to where writeConnected
can handle it more gracefully like it did pre-1788.
On Wed, Sep 21, 2011 at 3:48 AM, Sylvain Lebresne wrote:
> On Wed, Sep 21, 2011 at 9:59 AM, liangfeng
> wrote:
>> In cassandra1.0.0, I found that OutboundTcpConnection will t
Looks like I have same problem as here:
https://issues.apache.org/jira/browse/CASSANDRA-2868
But, it's been fixed in 0.8.5 and I'm using 0.8.5 ...
Evgeny.
The reason why counters work is that addition is commutative, i.e.
x + y = y + x
but deletes are not commutative, i.e.
x + delete ≠ delete + x
so the result depends on the order in which the messages arrive.
2011/9/21 Radim Kolar
> On 21.9.2011 12:07, aaron morton wrote:
>
> see techn
On 21.9.2011 12:07, aaron morton wrote:
see technical limitations for deleting counters
http://wiki.apache.org/cassandra/Counters
For instance, if you issue very quickly the sequence "increment, remove,
increment" it is possible for the removal to be lost (if for some reason
the remove hap
see technical limitations for deleting counters
http://wiki.apache.org/cassandra/Counters
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 21/09/2011, at 9:42 PM, Radim Kolar wrote:
> Another problem with counters: Counter delete + incr d
Another problem with counters: counter delete + incr does not set the value
to 1 but to the old value before deletion + 1.
[default@whois] list ipbans;
Using default limit of 100
---
RowKey: 10.0.0.7
1 Row Returned.
[default@whois] incr ipbans['10.0.0.7']['hits'];
Value incremented.
[def
> Do you mind opening a ticket on JIRA for this?
Yes, if it is a mistake. Thanks!
Initial state: 3 nodes, RF=3, version = 0.7.8, some queries are with
CL=QUORUM
1. Add node with correct token for 4 nodes, repair
2. Move first node to balance 4 nodes, repair
3. Move second node
===> Start getting timeouts, Hector warning: WARNING - Error:
me.prettyprint.hector.api.exceptio
If you have node 1 with token 0 and a total token range of 0 to 10, then…
* add node 2 with token 3 and it will only take ownership of the range 0 to 3.
Node 1 will own 3 to 0
* add node 3 with token 6 and it will take ownership of the range 3 to 6. Node
1 will now own 6 to 0
Unless you have an urge
I can't get the counter deleted from the CLI:
[default@whois] get ipbans['10.0.0.7'];
=> (counter=hits, value=18)
Returned 1 results.
[default@whois] del ipbans['10.0.0.7'];
null
[default@whois] get ipbans['10.0.0.7'];
=> (counter=hits, value=18)
Returned 1 results.
[default@whois]
On Wed, Sep 21, 2011 at 9:59 AM, liangfeng wrote:
> In Cassandra 1.0.0, I found that OutboundTcpConnection will throw a
> RuntimeException when it encounters an IOException (in write()). In this case,
> the OutboundTcpConnection thread will stop working and will not send any
> messages to other nodes.
>
In Cassandra 1.0.0, I found that OutboundTcpConnection will throw a
RuntimeException when it encounters an IOException (in write()). In this case,
the OutboundTcpConnection thread will stop working and will not send any
messages to other nodes.
This is not reasonable, is it?