Thanks! Is the load info also a bug? node1 is supposed to have 80 MB.
bash-3.2$ bin/nodetool -h localhost ring
Address  DC           Rack   Status  State   Load       Owns   Token
                                                                93798607613553124915572813490354413064
node2    datacenter1  rack1  Up      Normal  86.03 MB
There is no direct way to do that, but reading a CSV and inserting
rows in Java is really easy.
But you may want to have a look at the new bulk loading tool,
sstableloader, described here:
http://www.datastax.com/dev/blog/bulk-loading
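If you go the Java route, here is a rough sketch of what I mean. This assumes the Hector client, a simple "rowkey,columnName,columnValue" CSV layout, and made-up cluster/keyspace/column family names, so adjust to your own schema:

import java.io.BufferedReader;
import java.io.FileReader;

import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;

public class CsvLoader {
    public static void main(String[] args) throws Exception {
        // Placeholder cluster/keyspace/CF names - replace with your own.
        Cluster cluster = HFactory.getOrCreateCluster("Test Cluster", "localhost:9160");
        Keyspace keyspace = HFactory.createKeyspace("MyKeyspace", cluster);
        Mutator<String> mutator = HFactory.createMutator(keyspace, StringSerializer.get());

        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        String line;
        while ((line = in.readLine()) != null) {
            // Assumes each CSV line is: rowkey,columnName,columnValue
            String[] parts = line.split(",", 3);
            mutator.addInsertion(parts[0], "MyCF",
                    HFactory.createStringColumn(parts[1], parts[2]));
        }
        in.close();
        mutator.execute();   // send all the queued inserts in one batch
    }
}

For a large amount of data, sstableloader is the better option since it streams sstables directly to the cluster instead of pushing every row through Thrift.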
Small detail: it seems you are still sending mail to the incubator ML.
Hi everyone,
I noticed this line in the API docs,
The method is not O(1). It takes all the columns from disk to calculate the
answer. The only benefit of the method is that you do not need to pull all
the columns over Thrift interface to count them.
Does this mean that if a row has a large number of columns, calling it will be slow?
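For reference, the call through a higher-level client looks roughly like this. This is only a sketch using Hector, with made-up keyspace, column family and row names; the same trade-off applies to the underlying Thrift get_count call:

import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.query.CountQuery;

public class CountExample {
    public static void main(String[] args) {
        // Placeholder cluster/keyspace/row names for illustration only.
        Cluster cluster = HFactory.getOrCreateCluster("Test Cluster", "localhost:9160");
        Keyspace keyspace = HFactory.createKeyspace("MyKeyspace", cluster);

        // Server-side count: only an integer comes back over Thrift,
        // but Cassandra still reads the row's columns from disk to compute it.
        CountQuery<String, String> count =
                HFactory.createCountQuery(keyspace, StringSerializer.get(), StringSerializer.get());
        int n = count.setColumnFamily("MyCF")
                     .setKey("some-row-key")
                     .setRange("", "", Integer.MAX_VALUE)
                     .execute()
                     .get();
        System.out.println("column count: " + n);
    }
}

The alternative is to pull the columns back with a slice query and count them client side, which does the same disk reads plus shipping every column over the wire.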
There is no real performance difference between the two partitions.
Yes and no. Yes, each replica will have the same data load if you set RF to the
same number of nodes. No, it's still not a good idea to have an unbalanced key
range; you can still have throughput hot spots.
Cheers
--
yes.
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 19/09/2011, at 7:16 AM, Tharindu Mathew wrote:
> Hi everyone,
>
> I noticed this line in the API docs,
> The method is not O(1). It takes all the columns from disk to calculate the
> answer. The only benefit of the method is that you do not need to pull all
> the columns over Thrift interface to count them.
This is fixed in 1.0
https://issues.apache.org/jira/browse/CASSANDRA-2894
On Sun, Sep 18, 2011 at 2:16 PM, Tharindu Mathew wrote:
> Hi everyone,
>
> I noticed this line in the API docs,
>
> The method is not O(1). It takes all the columns from disk to calculate the
> answer. The only benefit of the method is that you do not need to pull all
> the columns over Thrift interface to count them.
Cool
Thanks, A
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 19/09/2011, at 9:55 AM, Jake Luciani wrote:
> This is fixed in 1.0
> https://issues.apache.org/jira/browse/CASSANDRA-2894
>
>
> On Sun, Sep 18, 2011 at 2:16 PM, Tharindu Mathew wrote:
While doing repair on node3, the "Load" kept increasing; suddenly Cassandra
encountered an OOM, and the "Load" stopped at 140GB. After Cassandra came
back I tried nodetool cleanup, but it does not seem to be working.
Does nodetool repair generate many temp sstables? How do I get rid of them?
thanks!
This comment in JIRA mentions it:
https://issues.apache.org/jira/browse/CASSANDRA-1969?focusedCommentId=12985038&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12985038
But in the end it's not immediately clear; could someone give a
summary of its advantages?
(if it
In my tests I have seen repair sometimes take a lot of space (2-3x);
cleanup did not reclaim it, and the only way I could clean that up was
with a major compaction.
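If you want to try that, the commands are just the following (the keyspace and column family names are placeholders, same style as the nodetool ring example above):

bin/nodetool -h localhost compact MyKeyspace MyCF
bin/nodetool -h localhost cleanup MyKeyspace

Keep in mind that a major compaction rewrites all the sstables for the column family into a single one; that is how it ends up reclaiming the space the repair left behind.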
On Sun, Sep 18, 2011 at 6:51 PM, Yan Chunlu wrote:
> While doing repair on node3, the "Load" kept increasing; suddenly Cassandra
> encountered an OOM, and the "Load" stopped at 140GB.
So does major compaction actually "clean it" or "merge it"? I am afraid it
will give me a single large file.
On Mon, Sep 19, 2011 at 10:26 AM, Anand Somani wrote:
> In my tests I have seen repair sometimes take a lot of space (2-3 times),
> cleanup did not clean it, the only way I could clean that
Thanks Aaron and Jake for the replies.
Any chance of a workaround I could use with Cassandra 0.7?
On Mon, Sep 19, 2011 at 3:48 AM, aaron morton wrote:
> Cool
>
> Thanks, A
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On