… JDK on ARM yet, but it sounds
promising). You could probably do ok on Solaris, too, with a custom Snappy jar
and some JNA concessions.
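For what it's worth, if the native Snappy library won't load at all on a given platform, one fallback (a sketch against Cassandra 1.1's cassandra-cli; the column family name is a placeholder) is to switch sstable compression to the pure-Java DeflateCompressor, which needs no native code:

    update column family <cf> with
        compression_options = {sstable_compression: DeflateCompressor};

JNA is similarly optional; without it Cassandra just logs a warning and skips mlockall.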
- .Dustin
On Sep 5, 2012, at 10:36 PM, Rob Coli wrote:
> On Sun, Jul 29, 2012 at 7:40 PM, Dustin Wenz wrote:
>> We've just set up a new …
… a week. That
issue alone makes this cluster configuration unsuitable for production use.
- .Dustin
On Jul 30, 2012, at 2:04 PM, Dustin Wenz wrote:
> Thanks for the pointer! It sounds likely that's what I'm seeing. CFStats
> reports that the bloom filter size is currently …
> … ratio of RAM to
> disk is where I think most people want to be, unless their system is
> carrying SSD disks.
>
> Again, you have to keep your bloom filters in Java heap memory, so any
> design that tries to create a quadrillion small rows is going to have
> memory issues as well.
>
>
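One related knob from that era (worth verifying against 1.1.x; the column family name and the 0.1 value are just illustrative): you can shrink bloom filter heap usage by raising the per-CF false-positive chance, then rebuilding existing sstables (nodetool scrub or upgradesstables) so the smaller filters take effect:

    update column family <cf> with bloom_filter_fp_chance = 0.1;
    nodetool -h <host> upgradesstables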
I'm trying to determine if there are any practical limits on the amount of data
that a single node can handle efficiently, and if so, whether I've hit that
limit or not.
We've just set up a new 7-node cluster with Cassandra 1.1.2 running under
OpenJDK6. Each node is a 12-core Xeon with 24GB of RAM …
… "describe cluster" shows that all node
schemas are consistent.
Are there any other ways that I could potentially force Cassandra to accept
these changes?
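If this turns out to be a schema disagreement, one option (a sketch; I believe nodetool gained this command in 1.1) is to drop the node's local schema and re-pull it from the ring:

    nodetool -h <host> resetlocalschema

On earlier versions, the procedure from the Cassandra wiki FAQ was to drain the node, stop it, remove the system keyspace's Schema* and Migrations* sstables from the data directory, and restart so the node fetches the current schema from the rest of the cluster.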
- .Dustin
On Jul 13, 2012, at 10:02 AM, Dustin Wenz wrote:
> It sounds plausible that's what we are running into. All of …
>> > -
>> > Aaron Morton
>> > Freelance Developer
>> > @aaronmorton
>> > http://www.thelastpickle.com
>> > On 13/07/2012, at 7:39 AM, Dustin Wenz wrote:
>> >
>> > We recently increased the replication factor of a keyspace in our …
We recently increased the replication factor of a keyspace in our Cassandra
1.1.1 cluster from 2 to 4. This was done by setting the replication factor to 4
in cassandra-cli, and then running a repair on each node.
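Concretely, the change amounted to something like this (keyspace name illustrative), followed by a repair on every node in turn:

    update keyspace <ks> with strategy_options = {replication_factor: 4};

    nodetool -h <host> repair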
Everything seems to have worked; the commands completed successfully and disk
usage …
… is taking significant time without it
being reported?
- .Dustin
On Jun 27, 2012, at 1:31 AM, Igor wrote:
> Hello
>
> Too much GC? Check JVM heap settings and real usage.
>
> On 06/27/2012 01:37 AM, Dustin Wenz wrote:
>> We occasionally see fairly poor compaction …
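A quick way to confirm or rule out GC is to sample the slow node with jstat (substitute the Cassandra JVM's pid; 5000 is the sample interval in ms):

    jstat -gcutil <cassandra_pid> 5000

Steadily climbing FGC/FGCT counters, or an old generation (the O column) pinned near 100%, would point at the heap rather than compaction itself.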
We occasionally see fairly poor compaction performance on random nodes in our
7-node cluster, and I have no idea why. This is one example from the log:
[CompactionExecutor:45] 2012-06-26 13:40:18,721 CompactionTask.java
(line 221) Compacted to
[/raid00/cassandra_data/main/basic/main-bas…
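Two things worth checking while one of these slow compactions is running (a sketch; nodetool commands as in 1.1): what compaction is actually doing, and whether it is being throttled. The default compaction_throughput_mb_per_sec in cassandra.yaml is 16, which can make large compactions look pathologically slow:

    nodetool -h <host> compactionstats
    nodetool -h <host> setcompactionthroughput 0    # 0 = unthrottled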
We observed a JRE crash on one node in a seven node cluster about a half hour
after upgrading to version 1.1.1 yesterday. Immediately after the upgrade,
everything seemed to be working fine. The last item in the Cassandra log was an
info-level notification that compaction had started on a data file …