You can use tools like Chef alongside Vagrant to bring up Cassandra. I
personally prefer LXC containers, as they mimic full-blown VMs, alongside
chef-lxc, which provides Chef's awesome DSL for container customization
(similar to a Dockerfile, and you won't install Chef inside the container). For
our scen
Hi Kevin
We are using a similar solution to horschi's. In the past we used CassandraUnit
(https://github.com/jsevellec/cassandra-unit), but truncating the tables before
and after each test works better for us. We also set gc_grace_seconds to zero.
From: horschi [mailto:hors...@gmail.com]
Sent: maa
Thanks, Alex, for the inputs. As you said, choosing the right consistency
levels will be critical here.
On Mon, Oct 13, 2014 at 8:37 PM, Alex Major wrote:
> Just make sure you understand the effect of consistency levels on your
> performance. You (probably) don't want to be going over the WAN for reads
> e
FWIW we run a 3 node cluster with ccm on Travis to regression test the
gocql driver - here's the descriptor:
https://github.com/gocql/gocql/blob/master/.travis.yml
On Mon, Oct 13, 2014 at 9:04 PM, Philip Thompson <
philip.thomp...@datastax.com> wrote:
> Kevin,
>
> Have you looked at the Cassandra
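For context, a ccm-based CI descriptor of the sort linked above might look roughly like this. This is an illustrative sketch only, not taken from the actual gocql file; the Cassandra version, node count, and install steps are assumptions:

```yaml
language: go

install:
  # ccm (Cassandra Cluster Manager) drives a local multi-node cluster
  - pip install --user ccm

before_script:
  # spin up a throwaway 3-node cluster before the test suite runs
  # (-s starts the nodes immediately after creation)
  - ccm create test -v 2.1.3 -n 3 -s

script:
  - go test -v ./...

after_script:
  # tear the cluster down and delete its data directories
  - ccm remove test
```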
On Mon, Oct 13, 2014 at 1:04 PM, S C wrote:
> I have started repairing a 10 node cluster, with one of the tables having
> >1TB of data. I notice that the validation compaction actually shows >3 TB
> in the "nodetool compactionstats" bytes total. However, I have less than
> 1TB of data on the machine.
On Mon, Oct 13, 2014 at 12:26 PM, Sholes, Joshua <
joshua_sho...@cable.comcast.com> wrote:
> I thought setcompactionthroughput just adjusted the compaction speed
> when the server is online? I'm looking for something like a scrub (which
> as far as I know does not do this) that will compact the t
Hi Kevin,
I run my tests against my locally running Cassandra instance. I am not
using any framework; I simply truncate all my tables before/after each
test. I am quite happy with this approach.
You have to enable the unsafeSystem property, disable durable writes on the
CFs, and disable auto-snapshot in
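The truncate-between-tests approach described above can be sketched roughly as follows. The keyspace, table names, and helper functions are illustrative, not from the thread; the driver session and the server-side settings (the unsafe-system flag, auto-snapshot) are assumed to be handled elsewhere:

```python
# Sketch of the truncate-between-tests setup (names are illustrative).
# gc_grace_seconds = 0 and durable_writes = false keep truncation cheap
# in a single-node test environment.

KEYSPACE = "test_ks"          # hypothetical test keyspace
TABLES = ["users", "events"]  # hypothetical tables under test

def schema_statements(keyspace, tables):
    """CQL to create a throwaway test keyspace with cheap-to-truncate tables."""
    stmts = [
        f"CREATE KEYSPACE {keyspace} "
        "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1} "
        "AND durable_writes = false"
    ]
    for t in tables:
        stmts.append(
            f"CREATE TABLE {keyspace}.{t} (id int PRIMARY KEY, val text) "
            "WITH gc_grace_seconds = 0"
        )
    return stmts

def truncate_statements(keyspace, tables):
    """CQL to wipe every table between tests instead of restarting Cassandra."""
    return [f"TRUNCATE {keyspace}.{t}" for t in tables]

# In a real test fixture you would run these through your driver session, e.g.
#   for stmt in truncate_statements(KEYSPACE, TABLES):
#       session.execute(stmt)
```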
Kevin,
Have you looked at the Cassandra integration tests used by the Cassandra
development team: https://github.com/riptano/cassandra-dtest ? They make
use of CCM for integration testing: https://github.com/pcmanus/ccm
Philip Thompson
On Mon, Oct 13, 2014 at 2:50 PM, Kevin Burton wrote:
> Cur
I have started repairing a 10 node cluster, with one of the tables having >1TB
of data. I notice that the validation compaction actually shows >3 TB in the
"nodetool compactionstats" bytes total. However, I have less than 1TB of data
on the machine. If I take the 3 replicas into consideration, then 3T
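The poster's own hunch about the 3 replicas lines up as a back-of-the-envelope check: a repair without -pr validates every range the node replicates, so each range's data can be counted once per replica. The numbers below are illustrative, rounded from the thread:

```python
# Rough sanity check of the repair numbers from the thread
# (illustrative values: ~1 TB on the node, replication factor 3).

data_on_node_tb = 1.0       # "less than 1TB data on the machine"
replication_factor = 3      # 3 replicas of each range

# Each range can be validated once per replica, so the bytes total
# reported by validation compaction can approach data * RF:
validated_tb = data_on_node_tb * replication_factor

print(validated_tb)  # prints 3.0
```

which is consistent with the ">3 TB" seen in "nodetool compactionstats".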
Curious to see if any of you have an elegant solution here.
Right now I'm using cassandra-unit:
https://github.com/jsevellec/cassandra-unit
for my integration tests.
The biggest problem is that it doesn't support shutdown, so I can't stop
or clean up after Cassandra between tests.
I have other
I thought setcompactionthroughput just adjusted the compaction speed when the
server is online? I'm looking for something like a scrub (which as far as I
know does not do this) that will compact the tables appropriately while the
Cassandra daemon is down.
And I know why I've got heap pressure--
On Sun, Oct 12, 2014 at 8:34 AM, Timmy Turner wrote:
> It would be great if there was some way to be able to load trigger code
> from the client-side without having to manipulate the code and/or libraries
> on the server-side for scenarios where Cassandra is being used as a hosted
> service (PaaS
On Mon, Oct 13, 2014 at 8:27 AM, Sholes, Joshua <
joshua_sho...@cable.comcast.com> wrote:
> My question is this: Is there a command that I'm missing that I could
> use to force that node to do compaction on those tables and clean up some
> of the thousands of 100-500 byte tables while Cassandra
I feel like similar questions have been asked recently but not in this specific
way:
I have a cluster that has some I/O capacity issues, which I know is what's
really causing this, but I've got the case where (using leveled compaction
strategy) my SSTables are piling up at the lowest level. I
Just make sure you understand the effect of consistency levels on your
performance. You (probably) don't want to be going over the WAN for reads
etc.
We run across the US/EU AWS regions and don't have any problems with higher
RTT.
On Mon, Oct 13, 2014 at 2:51 AM, Siddharth Karandikar <
siddharth.
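A quick way to see why consistency level matters for WAN reads: with, say, RF 3 in each of two datacenters, QUORUM counts replicas across all DCs and so can force a round-trip to the remote region, while LOCAL_QUORUM is satisfied inside the coordinator's DC. A sketch of that arithmetic (the RF values and DC names are illustrative):

```python
# Replicas required by quorum-style consistency levels, given per-DC
# replication factors (illustrative: RF 3 in each of two DCs).

rf_per_dc = {"us-east": 3, "eu-west": 3}

def quorum(n):
    """Classic quorum: a strict majority of n replicas."""
    return n // 2 + 1

# QUORUM counts replicas across ALL datacenters:
total_rf = sum(rf_per_dc.values())           # 6
quorum_needed = quorum(total_rf)             # 4 -> must touch the remote DC

# LOCAL_QUORUM only counts replicas in the coordinator's DC:
local_needed = quorum(rf_per_dc["us-east"])  # 2 -> stays on the local LAN

print(quorum_needed, local_needed)  # prints: 4 2
```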