…row cache is not recommended.
-JD
On Fri, Nov 5, 2010 at 11:49 AM, Brandon Williams wrote:
> On Fri, Nov 5, 2010 at 1:41 PM, Jeremy Davis wrote:
>
>> I saw in the Riptano "Tuning Cassandra" slide deck that the row cache can
>> be detrimental if there are a
I saw in the Riptano "Tuning Cassandra" slide deck that the row cache can be
detrimental if there are a lot of updates to the cached row. Is this because
the cache is not write through, and every update necessitates creation of a
new row?
I see there is an open issue:
https://issues.apache.org/jira
Thanks for this reply. I'm wondering about the same issue... Should I bucket
things into wide rows (say 10M), or narrow (say 10K or 100K)?
Of course it depends on my access patterns, right...
Does anyone know if a partial row cache is a feasible feature to implement?
My use case is something
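For what it's worth, the wide-vs-narrow trade-off above can be sketched as a simple bucketing scheme (all names and the bucket size below are illustrative, not from any Cassandra client library): deriving the physical row key from a bucket index keeps any single row's column count bounded, which also bounds the cost of re-populating a cached row after an update.

```python
# Hypothetical sketch: split one wide logical row into bounded physical rows.
# COLUMNS_PER_BUCKET is a tuning knob, not a Cassandra constant.
COLUMNS_PER_BUCKET = 10_000

def bucket_row_key(logical_key: str, sequence_number: int) -> str:
    """Map (logical row, column sequence) to a bounded physical row key."""
    bucket = sequence_number // COLUMNS_PER_BUCKET
    return f"{logical_key}:{bucket}"

def buckets_for_range(logical_key: str, start_seq: int, end_seq: int):
    """Physical row keys a range read over [start_seq, end_seq] must touch."""
    first = start_seq // COLUMNS_PER_BUCKET
    last = end_seq // COLUMNS_PER_BUCKET
    return [f"{logical_key}:{b}" for b in range(first, last + 1)]
```

For example, `bucket_row_key("events", 25_000)` yields `"events:2"`; a range read then fans out over the handful of buckets it spans instead of one unbounded row.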
Ok, cool. I thought all the nodetool commands were non-blocking.
Thanks,
-JD
On Fri, Oct 1, 2010 at 2:02 PM, Jonathan Ellis wrote:
> snapshot blocks until it's finished. it's just fast. :)
>
> On Fri, Oct 1, 2010 at 3:52 PM, Jeremy Davis
> wrote:
> >
> >
Since "nodetool snapshot" returns immediately, how do I know when the
snapshot process has completed? When is it safe to start copying/tarring the
files to back them up elsewhere?
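Given the answer above (snapshot blocks until the hard links are created, it's just fast), it is safe to archive as soon as the command returns. A minimal sketch, assuming a default-ish data directory layout; the glob pattern and the `-t` flag syntax vary across Cassandra versions, so treat both as assumptions:

```python
# Sketch: snapshot, then archive the snapshot directories.
# "nodetool snapshot" blocks until the hard links exist, so archiving
# immediately after it returns is safe.
import pathlib
import subprocess
import tarfile

def snapshot_dirs(data_dir: pathlib.Path, tag: str):
    """Snapshot directories for a tag; this layout is an assumption
    (it differs between Cassandra versions)."""
    return sorted(data_dir.glob(f"*/snapshots/{tag}"))

def snapshot_and_archive(data_dir: pathlib.Path, tag: str, archive_path: str):
    # Flag syntax varies by version; older nodetool takes the tag positionally.
    subprocess.run(["nodetool", "snapshot", "-t", tag], check=True)
    with tarfile.open(archive_path, "w:gz") as tar:
        for snap in snapshot_dirs(data_dir, tag):
            tar.add(snap, arcname=str(snap.relative_to(data_dir)))
```

Since snapshots are hard links, the tarball is the only full copy made, so this stays cheap on the node itself.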
arts of the JVM <https://issues.apache.org/jira/browse/CASSANDRA-1214>"
> on following link:
> http://www.riptano.com/blog/whats-new-cassandra-065
>
> -Naren
>
>
> On Wed, Sep 29, 2010 at 5:21 PM, Jeremy Davis <
> jerdavis.cassan...@gmail.com> wrote:
Did anyone else see this article on preventing swapping? Seems like it would
also apply to Cassandra.
http://jcole.us/blog/archives/2010/09/28/mysql-swap-insanity-and-the-numa-architecture/
-JD
Currently multiget_slice allows you to specify multiple Keys but only one
slice. In my specific scenario it becomes difficult/impossible to iterate
across the data set unless I can also specify the slice per key. This is
because if one of the Keys doesn't have the same amount of data, then the
con
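The usual client-side workaround for this limitation is to keep a separate cursor per key and issue per-key reads, each with its own start column. A sketch using an in-memory stand-in for the store (in a real client each fetch would be a `get_slice` call with that key's own predicate; all names here are illustrative):

```python
# Client-side workaround sketch: one slice predicate per key.
# "store" stands in for Cassandra: key -> sorted list of (column, value).
from typing import Dict, List, Tuple

def per_key_slices(
    store: Dict[str, List[Tuple[str, str]]],
    starts: Dict[str, str],   # each key advances from its own start column
    count: int,
) -> Dict[str, List[Tuple[str, str]]]:
    out = {}
    for key, start in starts.items():
        cols = store.get(key, [])
        # Take up to `count` columns at or after this key's cursor.
        out[key] = [c for c in cols if c[0] >= start][:count]
    return out
```

Because each key advances independently, a key that runs out of data early doesn't force the others to re-read columns they have already seen.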
Thanks for all the feedback, I'll be back in 2 weeks and pick up then.
-JD
On Tue, Aug 10, 2010 at 3:45 PM, Peter Schuller wrote:
> > Yeah, it has a BBU, and it is charged and on..
> > Very odd behavior, I'm stumped.
>
> I advise double-checking raid volume settings and ensuring that policy
> is
Yeah, it has a BBU, and it is charged and on..
Very odd behavior, I'm stumped.
-JD
On Tue, Aug 10, 2010 at 12:28 AM, Peter Schuller <
peter.schul...@infidyne.com> wrote:
> I have no explanation for the slower reads, but I have a hypothesis
> on the writes.
>
> Your iostat shows:
>
> > Device:
I have a weird one to share with the list: using a separate commit log drive
dropped my performance a lot more than I would expect...
I'm doing perf tests on 3 identical machines but with 3 different drive
sets. (SAS 15K,10K, and SATA 7.5K)
Each system has a single system disk (Same as the data se
I've been wondering about this question as well, but from a different angle.
More along the lines of should I bother to compress myself? Specifically in
cases where I might want to take several small columns and compress into 1
more compact column. Each column by itself is pretty spartan and won't
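The pack-it-yourself approach described above can be sketched in a few lines. The format choices (json + zlib) are illustrative, not from the thread; the point is that many tiny values compress poorly on their own but well when packed together:

```python
# Sketch: pack several small columns into one compressed column value.
# Individually the values are too small to compress well; packed together
# they share a compression dictionary. json/zlib are illustrative choices.
import json
import zlib

def pack_columns(columns: dict) -> bytes:
    """Serialize and compress a group of small columns into one value."""
    return zlib.compress(json.dumps(columns, sort_keys=True).encode("utf-8"))

def unpack_columns(blob: bytes) -> dict:
    """Inverse of pack_columns."""
    return json.loads(zlib.decompress(blob).decode("utf-8"))
```

The trade-off: you lose per-column reads and updates; touching any one field means reading, rewriting, and re-storing the whole packed value.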
That is an interesting statistic. 1 TB per node?
Care to share any more info on the specs of this cluster? Drive types/Cores
per node/etc...
-JD
On Tue, Jul 6, 2010 at 12:01 PM, Prashant Malik wrote:
> This is a ridiculous statement by some newbie, I guess. We today have a 150
> node Cassandra
The approach that I took incorporates some of the ideas listed here...
Basically each message in the queue was assigned a sequence number (needs to
be unique and increasing per queue), and then read out in sequence number
order.
The Message CF is logically one row per queue, with each column being
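The layout described above can be modeled in memory like this (a sketch, not any Cassandra client code): one row per queue, one column per message, with the column name a zero-padded sequence number so that a comparator sorting by name yields read-in-order:

```python
# In-memory model of the "one row per queue" layout: column name is the
# zero-padded sequence number, so lexicographic name order == numeric order.
import bisect

class QueueRow:
    SEQ_WIDTH = 12  # illustrative; wide enough that padding never overflows

    def __init__(self):
        self._names = []    # sorted column names
        self._values = {}   # column name -> message payload

    def insert(self, seq: int, message: str) -> None:
        name = str(seq).zfill(self.SEQ_WIDTH)
        if name not in self._values:
            bisect.insort(self._names, name)
        self._values[name] = message

    def read_from(self, seq: int, count: int):
        """Read up to `count` messages in sequence order, starting at seq."""
        start = str(seq).zfill(self.SEQ_WIDTH)
        i = bisect.bisect_left(self._names, start)
        return [(int(n), self._values[n]) for n in self._names[i : i + count]]
```

In real Cassandra the sorting would come from the column comparator rather than `bisect`, and the reader would just issue successive slices from its last-seen sequence number.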
…reads from really big rows will be
> slower (since the row index will take longer to read) and this patch
> does not change this.
>
> On Fri, Jun 4, 2010 at 5:47 PM, Jeremy Davis
> wrote:
> >
> > https://issues.apache.org/jira/browse/CASSANDRA-16
> >
> >
https://issues.apache.org/jira/browse/CASSANDRA-16
Can someone (Jonathan?) help me understand the performance characteristics
of this patch?
Specifically: If I have an open ended CF, and I keep inserting with ever
increasing column names (for example current Time), will things generally
work out
>
>
>
> On Thu, May 27, 2010 at 8:36 PM, Jeremy Davis <
> jerdavis.cassan...@gmail.com> wrote:
>
>>
>> I agree, I had more than filter results in mind.
>> Though I had envisioned the results to continue to use the
>> List (and not JSON). You coul
I agree, I had more than filter results in mind.
Though I had envisioned the results to continue to use the
List (and not JSON). You could still create new result
columns that do not in any way exist in Cassandra, and you could still stuff
JSON in to any of result columns.
I had envisioned:
list g
Are there any thoughts on adding a more complex query to Cassandra?
At a high level what I'm wondering is: Would it be possible/desirable/in
keeping with the Cassandra plan, to add something like a javascript blob on
to a get range slice etc, that does some further filtering on the results
before
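What such a server-side filter would buy can be seen by sketching it client-side (all names illustrative): today the client must pull the whole range slice over the wire and then discard rows; pushing the predicate to the server would avoid transferring the filtered-out rows at all.

```python
# Client-side stand-in for the proposed server-side filter: fetch a range
# slice, then apply an arbitrary predicate before returning rows.
from typing import Callable, Dict, List, Tuple

Row = Tuple[str, Dict[str, str]]  # (row key, columns)

def filtered_range(rows: List[Row], predicate: Callable[[Row], bool]) -> List[Row]:
    """Keep only rows the predicate accepts, preserving order."""
    return [row for row in rows if predicate(row)]
```

The predicate can reference any column value, which is exactly the kind of logic a scripting blob attached to `get_range_slice` would run node-side.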
Quick question:
There is an open issue with ColumnFamilies growing too large to fit in
memory when compacting.
Does this same limit also apply to SCF? As long as each sub CF is
sufficiently small, etc.
-JD
age broker/pub-sub/topic/ etc...
-JD
On Thu, Apr 1, 2010 at 9:43 PM, Jeremy Davis
wrote:
>
> You are correct, it is not a queue in the classic sense... I'm storing the
> entire "conversation" with a client in perpetuity, and then playing it back
> in the order receive
…am I misunderstanding your needs here?
>
> -keith
>
> On Thu, Apr 1, 2010 at 6:32 PM, Jeremy Davis
> wrote:
> > I'm in the process of implementing a Totally Ordered Queue in Cassandra,
> and
> > wanted to bounce my ideas off the list and also see if there are any
&g
I'm in the process of implementing a Totally Ordered Queue in Cassandra, and
wanted to bounce my ideas off the list and also see if there are any other
suggestions.
I've come up with an external source of IDs that are always increasing (but
not monotonic), and I've also used external synchronizat
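One common shape for such an "always increasing, not gap-free" ID source is clock-plus-counter (a sketch under assumptions; the bit widths and the single-process lock are illustrative, and the thread's actual external synchronization mechanism is not shown here):

```python
# Sketch: IDs that always increase but are not contiguous. High bits come
# from the clock; if the clock hasn't advanced past the last ID issued,
# fall back to last + 1 so ordering still holds within one millisecond.
import threading
import time

class IncreasingIds:
    def __init__(self):
        self._lock = threading.Lock()
        self._last = 0

    def next_id(self) -> int:
        with self._lock:
            candidate = int(time.time() * 1000) << 20  # ms clock in high bits
            if candidate <= self._last:
                candidate = self._last + 1
            self._last = candidate
            return candidate
```

For a multi-writer queue the same idea needs a shared arbiter (or per-writer ID spaces), since two processes running this independently could still interleave.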