>
> It splits into a contiguous range, because truly upgrading to vnode
> functionality is another step.
That confuses me. As I understand it, there is no point in having 256
tokens on the same node if I don't commit the shuffle.
On Fri, Nov 2, 2012 at 11:10 AM, Brandon Williams wrote:
> On Thu, Nov 1, 2012 at 10:05 PM, Manu Zhang wrote:
On Thu, Nov 1, 2012 at 10:05 PM, Manu Zhang wrote:
>
>> it will migrate you to virtual nodes by splitting the existing partition
>> 256 ways.
>
>
> Out of curiosity, is it for the purpose of avoiding streaming?
It splits into a contiguous range, because truly upgrading to vnode
functionality is another step.
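To make that "contiguous split" concrete, here is a minimal sketch (illustrative Java only, not Cassandra's actual migration code) of turning one node's token range into 256 contiguous sub-ranges by inserting evenly spaced tokens, all still owned by the same node:

    import java.math.BigInteger;
    import java.util.ArrayList;
    import java.util.List;

    public class ContiguousSplit {
        // Split the range (prevToken, myToken] into n contiguous sub-ranges by
        // computing n evenly spaced end tokens (wrap-around is ignored here).
        static List<BigInteger> split(BigInteger prevToken, BigInteger myToken, int n) {
            BigInteger width = myToken.subtract(prevToken);
            List<BigInteger> tokens = new ArrayList<BigInteger>();
            for (int i = 1; i <= n; i++) {
                tokens.add(prevToken.add(width.multiply(BigInteger.valueOf(i))
                                              .divide(BigInteger.valueOf(n))));
            }
            return tokens; // all 256 tokens still belong to the same node
        }

        public static void main(String[] args) {
            BigInteger prev = BigInteger.ZERO;
            BigInteger mine = BigInteger.valueOf(2).pow(127); // RandomPartitioner range end
            System.out.println(split(prev, mine, 256).size()); // 256
        }
    }

Because the 256 tokens are adjacent, data placement is unchanged; only a shuffle (which moves ranges between nodes) delivers the real vnode benefit.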
> it will migrate you to virtual nodes by splitting the existing partition
> 256 ways.
Out of curiosity, is it for the purpose of avoiding streaming?
The former would require you to perform a shuffle to achieve that.
Is there a nodetool option, or are there other ways "shuffle" could be done?
Thoughts, please?
On Thu, Nov 1, 2012 at 7:12 PM, Ertio Lew wrote:
> Would it do any harm, or are there any downsides, if I store columns with
> composite names or Integer-type names in a column family with a bytesType
> comparator & validator? I have observed that a bytesType comparator would
>
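One thing worth noting about bytesType: it compares names as raw unsigned bytes, so integer semantics are lost. A small illustrative sketch (plain Java, hypothetical values) of where byte order and numeric order diverge:

    import java.nio.ByteBuffer;

    public class ByteOrderDemo {
        // BytesType-style comparison: unsigned bytes, left to right,
        // with the shorter array first on a shared prefix.
        static int compareUnsigned(byte[] a, byte[] b) {
            int len = Math.min(a.length, b.length);
            for (int i = 0; i < len; i++) {
                int cmp = (a[i] & 0xFF) - (b[i] & 0xFF);
                if (cmp != 0) return cmp;
            }
            return a.length - b.length;
        }

        public static void main(String[] args) {
            byte[] one      = ByteBuffer.allocate(4).putInt(1).array();  // 00 00 00 01
            byte[] minusOne = ByteBuffer.allocate(4).putInt(-1).array(); // ff ff ff ff
            // Numerically -1 < 1, but as unsigned bytes -1 sorts last:
            System.out.println(compareUnsigned(minusOne, one) > 0); // true

            byte[] two  = new byte[] { 0x02 };                        // 1-byte "2"
            byte[] n300 = ByteBuffer.allocate(4).putInt(300).array(); // 00 00 01 2c
            // Mixed-width encodings also break numeric order: 2 sorts after 300.
            System.out.println(compareUnsigned(two, n300) > 0); // true
        }
    }

So fixed-width, big-endian, non-negative values happen to sort correctly under bytesType, but negative or variable-width integer encodings (such as IntegerType's varint form) will not.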
Note that 1.0.7 came out before 1.1 and I know there were
some compatibility issues that were fixed in later 1.0.x releases which
could affect your upgrade. I think it would be best to first upgrade to
the latest 1.0.x release, and then upgrade to 1.1.x from there.
-Bryan
On Thu, Nov 1, 2012 a
It seems like CASSANDRA-3442 might be an effective fix for this issue,
assuming that I'm reading it correctly. It sounds like the intent is to
automatically compact an SSTable once a certain percentage of its columns
are gcable, i.e. deleted or covered by expired tombstones. Is my
understanding correct?
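For reference, the idea in CASSANDRA-3442 can be sketched like this (illustrative Java only, not the actual patch; the threshold value is hypothetical):

    public class TombstoneCompactionCheck {
        static final double TOMBSTONE_THRESHOLD = 0.2; // hypothetical example value

        // Estimated fraction of the sstable's columns that are gcable tombstones.
        static double droppableRatio(long droppableTombstones, long totalColumns) {
            return totalColumns == 0 ? 0.0 : (double) droppableTombstones / totalColumns;
        }

        // The single-sstable compaction trigger: compact an sstable on its own
        // once enough of it is droppable garbage.
        static boolean shouldCompactAlone(long droppableTombstones, long totalColumns) {
            return droppableRatio(droppableTombstones, totalColumns) > TOMBSTONE_THRESHOLD;
        }

        public static void main(String[] args) {
            System.out.println(shouldCompactAlone(300, 1000)); // 30% gcable -> true
        }
    }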
On Thu, Nov 1, 2012 at 1:43 AM, Sylvain Lebresne wrote:
> on all your columns), you may want to force a compaction (using the
> JMX call forceUserDefinedCompaction()) of that sstable. The goal being
> to get rid of as many outdated tombstones as possible before running the
> repair (you could also alter
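For anyone looking for the mechanics of that JMX call: nodetool doesn't expose it, but it can be invoked with a plain JMX client. A sketch, assuming the 1.1-era CompactionManagerMBean signature forceUserDefinedCompaction(keyspace, dataFiles) and a hypothetical keyspace and sstable filename:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class ForceCompaction {
        public static void main(String[] args) throws Exception {
            // Cassandra's default JMX port is 7199.
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
            JMXConnector jmxc = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
                ObjectName mgr = new ObjectName(
                    "org.apache.cassandra.db:type=CompactionManager");
                // Hypothetical keyspace and sstable name; substitute your own.
                mbs.invoke(mgr, "forceUserDefinedCompaction",
                           new Object[] { "my_keyspace", "my_cf-hd-42-Data.db" },
                           new String[] { "java.lang.String", "java.lang.String" });
            } finally {
                jmxc.close();
            }
        }
    }

Double-check the operation's parameters against your version's MBean in jconsole before relying on this.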
The other nodes all have copies of the same data. To optimize performance,
all of them stream different parts of the data, even though 102 has all the
data that 108 needs. (I think. I'm not an expert.) -Brennan
On Thu, Nov 1, 2012 at 9:31 AM, Ramesh Natarajan wrote:
> I am trying to bootstrap c
> Hello,
>
> My name is Davor Vuković. I am a student in a "Specialist Professional
> Graduate Study of Information Science and Technology in Business Systems"
> in Croatia. I was wondering if you could help me a bit regarding Database
> Management in Cassandra? I would be very happy if you could e
I'm having trouble diagnosing an issue with row caching. It seems like row
caching is not working (very few items stored), despite it being enabled,
JNA being in use, and the key cache being very hot. I assume I'm missing
something obvious, but I would expect to have more items stored in the row cache
Bryce, did you resolve this? I'm interested in the outcome.
When you write, does it help to use CL = LOCAL_QUORUM?
On Mon, Oct 29, 2012 at 12:52 AM, aaron morton wrote:
> Outbound messages for other DCs are grouped and a single instance is sent
> to a single node in the remote DC. The remote no
2 questions:
1. What are people using for logging servers for their web tier logging?
2. Would anyone be interested in a new logging server (any programming
language) for the web tier to log to your existing cassandra (it uses up disk
space in proportion to the number of web servers and just has a roll
"Can you try it thought, or run a repair ?"
Repairing didn't help
"My first thought is to use QUOURM"
This fix the problem. However, my data is probably still inconsistent, even
if I read now always the same value. The point is that I can't handle a
crash with CL.QUORUM, I can't even restart a n
> Is this a feature or a bug?
Neither, really. Repair doesn't do any gcable tombstone collection, and
it would be really hard to change that (besides, it's not its job). So
if, when you run repair, there are sstables with tombstones that could
be collected but have not been yet, then yes, they will be streamed.
> "What CL are you using ?"
>
> I think this can be what causes the issue. I'm writing and reading at CL ONE.
> I didn't drain before stopping Cassandra and this may have produce a fail in
> the current counters (those which were being written when I stopped a server).
My first thought is to use
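The reason QUORUM fixes the reads is the standard overlap rule: a read is guaranteed to see the latest write whenever read replicas + write replicas > replication factor. A tiny sketch of the arithmetic (not Cassandra code):

    public class OverlapRule {
        // Strong consistency holds when the read and write replica sets must overlap.
        static boolean overlaps(int readReplicas, int writeReplicas, int rf) {
            return readReplicas + writeReplicas > rf;
        }

        public static void main(String[] args) {
            int rf = 3, quorum = rf / 2 + 1; // 2
            System.out.println(overlaps(1, 1, rf));           // ONE/ONE: false
            System.out.println(overlaps(quorum, quorum, rf)); // QUORUM/QUORUM: true
        }
    }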
I've not run it myself, but upgrading is part of the design.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 1/11/2012, at 10:43 AM, Wei Zhu wrote:
> I heard about virtual nodes, but they don't come out until 1.2. Is it easy to
> convert
Hi Sylvain,
Simple as that!!! Using the 1.1.5 nodetool version works as expected. My
mistake.
Many thanks,
Brian
On Thu, Nov 1, 2012 at 8:24 AM, Sylvain Lebresne wrote:
> The first thing I would check is whether nodetool is using the right jar. It
> sounds a lot like the server has been corre
The first thing I would check is whether nodetool is using the right jar. It
sounds a lot like the server has been correctly updated but nodetool
hasn't and still uses the old classes.
Check the nodetool executable (it's a shell script) and try echoing
the CLASSPATH in there to check that it correctly points to the new jar.
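Another quick check along the same lines (plain JDK API; NodeProbe is the real nodetool class in 1.x): print which jar the class is actually loaded from:

    public class WhichJar {
        public static void main(String[] args) throws Exception {
            // Run this with the same CLASSPATH the nodetool script builds.
            Class<?> c = Class.forName("org.apache.cassandra.tools.NodeProbe");
            System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
            // If this prints the old apache-cassandra jar, nodetool is on stale classes.
        }
    }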
Thanks. Yep, I think OS + CL (2-drive RAID1) will provide the best balance
of reduced headaches vs. performance. I'll also be pondering 1 drive OS, 1
drive CL.
On Wed, Oct 31, 2012 at 9:27 PM, aaron morton wrote:
> Good question.
>
> There is a comment on the DS blog or docs somewhere that s
Hi Rob,
Thank you for your reply.
Our scenario is like this: we have 3 clusters, each with 1 or 2 keyspaces,
and each cluster has 3 nodes.
Now we're considering consolidating these 3 clusters (9 nodes in total) into a
single cluster of 9 nodes.
This new cluster will contain all keyspaces and their data.
The following comment in the code describes them very clearly:
* LOCAL_QUORUM Returns the record with the most recent timestamp once a
majority of replicas within the local datacenter have replied.
* EACH_QUORUM Returns the record with the most recent timestamp once a
majority of replicas within each datacenter have replied.
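The arithmetic behind both levels is quorum = floor(RF / 2) + 1; LOCAL_QUORUM requires that many replies in the coordinator's datacenter only, while EACH_QUORUM requires it in every datacenter. A minimal illustration (not Cassandra code; the per-DC replication factors are hypothetical):

    public class QuorumMath {
        static int quorum(int rf) { return rf / 2 + 1; }

        public static void main(String[] args) {
            int rfDc1 = 3, rfDc2 = 3; // hypothetical per-DC replication factors
            System.out.println("LOCAL_QUORUM needs " + quorum(rfDc1)
                    + " replies in the local DC");                    // 2
            System.out.println("EACH_QUORUM needs " + quorum(rfDc1)
                    + " in DC1 and " + quorum(rfDc2) + " in DC2");    // 2 and 2
        }
    }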