> Is anyone using it with Cassandra?
Yes, we use it with Cassandra 0.6. I had to implement a Tanuki-style service
wrapper for Cassandra myself to make it shut down correctly.
This is probably because of the mmapped I/O access mode, which is enabled by
default on 64-bit JVMs - RAM is occupied by the mapped data files.
If you have such tight memory requirements, you can turn on the standard access
mode in storage-conf.xml, but don't expect it to be fast then:
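For reference, this is the relevant setting in the 0.6-era storage-conf.xml
(the default of "auto" picks mmap on 64-bit JVMs):

  <!-- "standard" avoids mapping data files into the address space,
       trading read speed for a smaller resident footprint -->
  <DiskAccessMode>standard</DiskAccessMode>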
David Strauss davidstrauss.net> writes:
>
> You can actually already perform "manual conflict resolution" in
> Cassandra by naming your columns so that they don't squash each other in
> Cassandra's internal replication. Then, ensure the code that accesses
> Cassandra reads all columns with data
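A minimal sketch of that idea against the 0.6 Thrift API (the keyspace, column
family and clientId below are placeholders): each writer appends its own suffix
to the column name, so concurrent updates land in separate columns instead of
squashing each other, and the reader merges them in application code.

  import org.apache.cassandra.thrift.*;

  // clientId distinguishes writers; "Keyspace1"/"Updates" are made up
  void writeWithoutSquashing(Cassandra.Client client, String rowKey,
                             String clientId, byte[] value) throws Exception {
      ColumnPath path = new ColumnPath("Updates");
      path.setColumn(("balance:" + clientId).getBytes("UTF-8"));
      client.insert("Keyspace1", rowKey, path, value,
                    System.currentTimeMillis() * 1000,  // microsecond timestamps
                    ConsistencyLevel.QUORUM);
  }
  // the reader then slices all "balance:*" columns and resolves them itself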
mcasandra gmail.com> writes:
>
> Thanks! I think it still is a good idea to enable HugePages and use the
> -XX:+UseLargePages option in the JVM. What do you think?
I experimented with it. It was about a 10% performance improvement. But this
was at a 100% row cache hit rate; at smaller cache hit ratios the performance …
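For reference, enabling large pages takes both a JVM flag and an OS-side
reservation (the page count below is an arbitrary example):

  # HotSpot flag in the startup script
  JVM_OPTS="$JVM_OPTS -XX:+UseLargePages"
  # Linux side: reserve huge pages first
  sysctl -w vm.nr_hugepages=2048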
Does your test client talk to a single node or to both?
Sylvain Lebresne datastax.com> writes:
> However, if that simple conflict detection/resolution mechanism is not good
> enough for some of your use cases and you need to keep two concurrent
> updates, it is easy enough. Just make sure that the updates don't end up in
> the same column. This is easily
> From the article I linked:
>
> "But wait, some might say, you can avoid all this by using vectors in
> a different way – to prevent update conflicts by issuing conditional
> writes which specify a version (vector) and only succeed if that
> version is still current. Sorry, but no, or at least no
> Basically: vector clocks tell you there was a conflict, but not how to
> resolve it (that is, you simply don't have enough information to
> resolve it even if you push that back to the client a la Dynamo).
> What dynamo-like systems mostly VC for is the trivial case of "client
> X updated field 1
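To make the "detect but don't resolve" point concrete, here is a minimal
vector-clock comparison sketch (illustrative only - Cassandra itself has no
vector clocks, and the names are invented):

  import java.util.*;

  // Returns true when neither clock dominates the other, i.e. the writes
  // were concurrent. All the clocks can tell you is "there is a conflict";
  // picking the winning value is left entirely to the application.
  static boolean concurrent(Map<String, Integer> a, Map<String, Integer> b) {
      boolean aAhead = false, bAhead = false;
      Set<String> nodes = new HashSet<>(a.keySet());
      nodes.addAll(b.keySet());
      for (String node : nodes) {
          int ca = a.getOrDefault(node, 0);
          int cb = b.getOrDefault(node, 0);
          if (ca > cb) aAhead = true;
          if (cb > ca) bAhead = true;
      }
      return aAhead && bAhead;
  }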
A J gmail.com> writes:
>
>
> Makes sense! Thanks.
> Just a quick follow-up:
> Now I understand the write is not made to the coordinator (unless it is part
> of the replica set for that key). But does the write column traffic 'flow'
> through the coordinator node? For a 2G column write, will I see 2G of network …
Huy Le springpartners.com> writes:
> Our CMS settings are:
>   -XX:CMSInitiatingOccupancyFraction=35 \
>   -XX:+UseCMSInitiatingOccupancyOnly \
>
An occupancy fraction of 35 is a very low value. You instructed the GC to start
a collection as soon as memory usage reaches 35% - i.e. about 1G. This seems …
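Something closer to the usual starting point would be (values are illustrative,
not a recommendation tuned for this cluster):

  -XX:+UseConcMarkSweepGC
  -XX:+UseCMSInitiatingOccupancyOnly
  -XX:CMSInitiatingOccupancyFraction=75   # start CMS at 75% occupancy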
> Is there a possibility that my read operation may miss the data that just
> got inserted?
If the write operation did not result in an exception and no other client was
writing to the same row/column concurrently, you will read exactly what you
just wrote (assuming your read and write consistency levels overlap, e.g.
QUORUM for both).
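A minimal 0.6 Thrift sketch of that pattern, assuming QUORUM on both sides so
the read replicas overlap the write replicas (keyspace and column names are
placeholders):

  import org.apache.cassandra.thrift.*;

  void writeThenRead(Cassandra.Client client, String key, byte[] value)
          throws Exception {
      ColumnPath path = new ColumnPath("Standard1");
      path.setColumn("col".getBytes("UTF-8"));
      // W = QUORUM ...
      client.insert("Keyspace1", key, path, value,
                    System.currentTimeMillis() * 1000, ConsistencyLevel.QUORUM);
      // ... R = QUORUM, so W + R > N and the read must see the write
      ColumnOrSuperColumn cosc =
          client.get("Keyspace1", key, path, ConsistencyLevel.QUORUM);
  }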
>
> Since there are no DB transactions …
>
> You're right when you say it's unlikely that 2 threads have the same
> timestamp, but it can happen. So it could work for user creation, but maybe
> not on a more write-intensive problem.
Um, sorry, I thought you were solving the exact case of duplicate user
creation. If you're trying to solve the concurre…
Benoit Perroud noisette.ch> writes:
>
> My idea to solve such a use case is to have both threads write the
> username, but with a column like "lock-<thread id>", and then read
> the row and find out if the first lock column appearing belongs to the
> thread. If this is the case, it can continue the process,
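Roughly, the scheme looks like this (a sketch only - insertColumn and
readLockColumnNames are hypothetical helpers, and it assumes QUORUM reads and
writes so every contender sees every lock column):

  // Each contender writes its own lock column into the username's row,
  // then reads the row back; the lock column that sorts first (by the CF
  // comparator) decides the winner deterministically on every client.
  String myLock = "lock-" + myNodeId;              // e.g. "lock-node1-42"
  insertColumn(usernameRow, myLock);               // write at QUORUM
  List<String> locks = readLockColumnNames(usernameRow);  // sorted read at QUORUM
  boolean iWon = locks.get(0).equals(myLock);      // first lock column wins
  if (iWon) {
      // safe to proceed with creating the user
  }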
This is probably because the RMI code that JMX uses to listen detected the
wrong address. To fix this, add the following to each Cassandra node's startup
script:
-Djava.rmi.server.hostname=127.0.0.1
(change 127.0.0.1 to the actual internal address of the Cassandra node)
Mark gmail.com> writes:
> Caused by: java.lang.RuntimeException: Insufficient disk space to flush
>   at …
>
> On 12/7/10 8:44 PM, Mark wrote:
> > 3 node cluster and I just ran a nodetool cleanup on node #3. 1 and 2
> > are now at 100% disk space. What should I do?

Are there files w…
>
> The goal is actually getting the rows in the range of "start","end". The
> order is not important at all. But from what I can see, this does not seem
> to be possible at all using RP. Am I wrong?
A simpler solution is to just compare the MD5s of both keys and set start to
the one with the lesser MD5 and end to the key with the greater one.
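A sketch of that comparison (RandomPartitioner orders keys by their MD5 token,
roughly the digest read as a non-negative integer):

  import java.math.BigInteger;
  import java.security.MessageDigest;

  // Order two keys the way RandomPartitioner does: by MD5 token.
  static BigInteger md5Token(String key) throws Exception {
      MessageDigest md = MessageDigest.getInstance("MD5");
      return new BigInteger(1, md.digest(key.getBytes("UTF-8")));
  }

  static String[] rangeEndpoints(String a, String b) throws Exception {
      return md5Token(a).compareTo(md5Token(b)) <= 0
          ? new String[] { a, b }   // a has the lesser token: start=a, end=b
          : new String[] { b, a };
  }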
kannan chandrasekaran yahoo.com> writes:
> Hi All, I have a query regarding the insert operation. The insert operation
> by default inserts a new row or updates an existing row. Is it possible to
> prevent an update and allow only inserts automatically (especially when
> multiple clients are writing …
Matthew Dennis riptano.com> writes:
> Yes, please file it in Jira. It seems like it would be pretty useful for
> various things, and it would be fairly easy to change the code to move it to
> another directory whenever C* thinks it should be deleted...
Here it is for the 0.6.4 version. It should work on 0.6.5 as well.
> Is it possible to retain the commit logs?
In off-the-shelf Cassandra 0.6.5 this is not possible, AFAIK.
I developed a patch that we use internally at our company for commit
log archiving and replay.
I can share the patch with you, if you dare to patch the Cassandra
sources yourself ;-)
PS. Are o…
>
> Hi All, We're currently starting to get OOM exceptions in our cluster. I'm
> trying to push the limits of our machines. Currently we have 1.7G of memory
> (ec2-medium). I'm wondering whether, by tweaking some of Cassandra's
> configuration settings, it is possible to make it live in peace with less
> memory.
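The usual 0.6-era knobs for shrinking the footprint live in storage-conf.xml:
the global memtable sizes plus the per-column-family caches (the values below
are illustrative, not tuned for an ec2-medium):

  <MemtableThroughputInMB>32</MemtableThroughputInMB>
  <MemtableOperationsInMillions>0.1</MemtableOperationsInMillions>
  <!-- per column family: keep the caches small or off -->
  <ColumnFamily Name="Standard1" KeysCached="10000" RowsCached="0"/>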
Rana Aich gmail.com> writes:
>
> Yet my nodetool shows the following:
>
> 192.168.202.202  Down  319.94 GB  7200044730783885730400843868815072654   |<--|
> 192.168.202.4    Up    382.39 GB  23719654286404067863958492664769598669  |  ^
> 192.168.202.2    Up    106.81 GB  3…
You can change these attributes at runtime via the JMX interface. Take a look
at the org.apache.cassandra.tools.NodeProbe setCacheCapacities method.
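For example (a sketch; host, port and capacities are placeholders - 8080 was
the default JMX port in 0.6):

  import org.apache.cassandra.tools.NodeProbe;

  public class SetCaches {
      public static void main(String[] args) throws Exception {
          NodeProbe probe = new NodeProbe("127.0.0.1", 8080);
          // key cache to 200k entries, row cache off; note this is a
          // runtime-only change, it is not persisted to storage-conf.xml
          probe.setCacheCapacities("Keyspace1", "Standard1", 200000, 0);
      }
  }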
Kamil Gorlo gmail.com> writes:
>
> So I've got more reads from a single MySQL with 400GB of data than from
> 8 machines storing about 266GB. This doesn't look good. What am I
> doing wrong? :)
The worst case for Cassandra is random reads. You should ask yourself: do you
really have this …