Hi,
you should check the "snapshots" directories on your nodes - it is very
likely there are some old ones left over from failed operations that are
taking up space.
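If it helps, here is a quick way to check from the shell (only a sketch; the
data path below assumes the default /var/lib/cassandra layout, adjust it to
your installation):

    # list the snapshots on this node and the space they hold
    nodetool listsnapshots

    # see how much the snapshot directories actually occupy on disk
    du -sh /var/lib/cassandra/data/*/*/snapshots

    # once you are sure none of them are still needed, drop them on this node
    nodetool clearsnapshot

clearsnapshot can also be limited to specific keyspaces or snapshot tags; see
nodetool help clearsnapshot on your version.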
On 15.04.2016 at 01:28, kavya wrote:
Hi,
We are running a 6-node Cassandra 2.2.4 cluster and we are seeing a
spike in the disk load as per
Thank you Jack.
Jean
On 14 Apr 2016, at 22:00, Jack Krupansky <jack.krupan...@gmail.com> wrote:
Normally, since 3.5 just came out, it would be wise to see if people report any
problems over the next few weeks.
But... the new tick-tock release process is designed to assure that these
od
Hi Jan,
were you able to resolve your problem?
We are trying the same and also see a lot of WriteTimeouts:
WriteTimeoutException: Cassandra timeout during write query at consistency
SERIAL (2 replica were required but only 1 acknowledged the write)
How many clients were competing for a lock in y
Also, what type of data were you reading/writing?
Regards,
Denise
Sent from my iPad
> On Apr 15, 2016, at 8:29 AM, horschi wrote:
>
> Hi Jan,
>
> were you able to resolve your problem?
>
> We are trying the same and also see a lot of WriteTimeouts:
> WriteTimeoutException: Cassandra timeout
Hi Denise,
in my case it's a small blob I am writing (it should be around 100 bytes):
CREATE TABLE "Lock" (
    lockname varchar,
    id varchar,
    value blob,
    PRIMARY KEY (lockname, id)
) WITH COMPACT STORAGE
  AND COMPRESSION = { 'sstable_compression' : 'S
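For context: a lock table like this is normally driven by conditional writes
(lightweight transactions), which is what puts the statements on the SERIAL
path and runs the Paxos round that is timing out here. The statements below
are only a sketch of that pattern; the lock name, client id, and blob value
are placeholders, not taken from the messages above:

    -- acquire: the conditional insert runs through Paxos at SERIAL consistency,
    -- which is where the "2 replica were required but only 1 acknowledged" comes from
    INSERT INTO "Lock" (lockname, id, value)
    VALUES ('some-lock', 'client-a', 0x00)
    IF NOT EXISTS;

    -- release: also conditional, so it takes the same Paxos/SERIAL path
    DELETE FROM "Lock"
    WHERE lockname = 'some-lock' AND id = 'client-a'
    IF EXISTS;

Each conditional statement needs a quorum of replicas to answer within the
Paxos timeouts, so heavy contention on the same lockname partition, or a
single slow replica, is enough to produce exactly these WriteTimeoutExceptions
at SERIAL.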
Thanks for that, that helps a lot. The next thing to check might be
whether your application actually has access to the other nodes.
With that topology, and assuming all the nodes you included in your
original graph are in the 'WDC' data center, I'd be inclined to look for a
network issue o
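One quick way to rule the network in or out (again only a sketch; the node IPs
and the default native transport port 9042 below are assumptions, adjust them
to your cluster):

    # from the application host, check that each node's CQL port is reachable
    for ip in 10.1.0.1 10.1.0.2 10.1.0.3; do
        nc -zv -w 5 "$ip" 9042
    done

    # on any node, confirm which data center each node is actually reported in
    nodetool status

If some nodes are unreachable from the client, or show up in a different DC
than you expected, that would point to the kind of network issue mentioned
above.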
My thinking was that, given the size of the data, there might be I/O issues.
But it sounds more like you're competing for locks and have hit a deadlock issue.
Regards,
Denise
Cell - (860)989-3431
Sent from my iPhone
> On Apr 15, 2016, at 9:00 AM, horschi wrote:
>
> Hi Denise,
>
> in my case it's
Hi--
It's trivial to do this in Kubernetes, even without Ubernetes. Please feel
free to send me a note and I'll walk you through it.
Disclosure: I work at Google on Kubernetes.
On Thu, Apr 14, 2016 at 9:10 AM Joe Stein wrote:
> You can do that with the Mesos scheduler
> https://github.com/elod
It sounds as if Kubernetes is oriented towards a single data center (a DC in
Cassandra parlance), and maybe Ubernetes attempts to address that. If there
is indeed some simple, obvious way to trick Kubernetes into
Cassandra-style multi-DC operation, it sure would be helpful if it were
documented more o