Hi Jean,
Glad to hear it worked this way.
Other people provided (and continue to provide) similar help to me;
I am just trying to give back to the community as much as I received from it.
See you around.
Alain
2015-06-26 8:44 GMT+02:00 Jean Tremblay :
> Good morning,
> Alain, thank you so much
Hi,
I am trying to write into Cassandra via the CqlBulkOutputFormat from an
Apache Flink program. The program successfully writes into a Cassandra
cluster while running locally on my PC.
However, when I try to run the program on the cluster, it seems to get
stuck at SSTableSimple
Regards, Susanne.
Which version of Java are you using here?
Have you tested this with more recent versions of Cassandra?
These newer versions have many improvements related to SSTable reading
and writing, and much more.
I recommend using at least a 2.1.x version.
Best,
--
M
Hi,
I am using Java 7.
The cassandra version I use is actually 2.1.5, not 1.5. Sorry for the
confusion.
I also tried cassandra 2.1.6, but the problem stays the same.
Best regards,
Susanne
Von: Marcos Ortiz [mailto:mlor...@uci.cu]
Gesendet: Freitag, 26. Juni 2015 15:34
An: susanne..
I strongly disagree with recommending version 2.1.x. It only very
recently became more or less stable; anything before 2.1.5 was unusable.
You might be better off with a recent 2.0.x version.
Best regards,
Nathan
On Fri, Jun 26, 2015 at 3:36 PM Marcos Ortiz wrote:
> Regards, Susanne.
>
Dear colleagues,
We are using incremental repair and have noticed that every few repairs,
the cluster experiences pauses.
We run the repair with the following command: nodetool repair -par -inc
I have tried to run it not in parallel, but get the following error:
"It is not possible to mix sequen
"It is not possible to mix sequential repair and incremental repairs."
I guess that is a system limitation, even if I am not sure of it (I
haven't used C* 2.1 yet).
I would focus on tuning your repair by:
- Monitoring performance / logs (see why the cluster hangs)
- Using range repairs (as a work
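To make the range-repair suggestion concrete: with the Murmur3 partitioner you can split the full token ring into slices and repair each slice separately with `nodetool repair -st <start> -et <end>`. A minimal Python sketch of that splitting arithmetic (the splitting logic here is an illustration, not a Cassandra tool; keyspace name is hypothetical):

```python
# Split the full Murmur3 token ring into equal subranges so each slice
# can be repaired separately with `nodetool repair -st <start> -et <end>`.
# This splitting logic is an illustration, not part of Cassandra itself.

MIN_TOKEN = -2**63        # Murmur3Partitioner ring bounds
MAX_TOKEN = 2**63 - 1

def subranges(n):
    """Yield (start, end) token pairs covering the whole ring in n slices."""
    span = (MAX_TOKEN - MIN_TOKEN) // n
    start = MIN_TOKEN
    for i in range(n):
        # Last slice absorbs any rounding remainder so the ring is covered.
        end = MAX_TOKEN if i == n - 1 else start + span
        yield (start, end)
        start = end

for st, et in subranges(4):
    print(f"nodetool repair -st {st} -et {et} my_keyspace")
```

Running many small subrange repairs instead of one full repair tends to bound the amount of streaming and anticompaction work triggered at once, which is exactly the kind of pause the thread describes.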
We are using the Spark Cassandra driver, version 1.2.0 (Spark 1.2.1),
connecting to a 6-node bare-metal Cassandra cluster (16 GB RAM, Xeon
E3-1270 (8 cores), 4x 7.2k SATA disks per node). Spark runs on a separate
Mesos cluster.
We are running a transformation job, where we read the complete contents of
a table
> We notice incredibly slow reads, 600 MB in an hour; we are using
LOCAL_ONE reads.
> The load_one of Cassandra increases from <1 to 60! There is no CPU wait,
only user & nice.
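For scale, 600 MB over one hour works out to well under a megabyte per second across the whole job, far below what even a single spinning disk can stream:

```python
# Rough arithmetic for the observed read rate: 600 MB over one hour.
mb = 600
seconds = 3600
rate = mb / seconds          # aggregate MB/s
print(f"{rate:.3f} MB/s")    # roughly 0.167 MB/s
```

That gap between hardware capability and observed throughput is usually a sign the bottleneck is in the query pattern or client configuration rather than raw disk speed.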
Without seeing the code and query, it's hard to tell, but I noticed
something similar when we had a client incorrec
I want to follow up on this thread to describe what I was able to get
working. My goal was to switch a cluster to vnodes, in the process
preserving the data for a single table, endpoints.endpoint_messages.
Otherwise, I could afford to start from a clean slate. As should be
apparent, I could also af
Thanks for the suggestion, will take a look.
Our code looks like this:
val rdd = sc.cassandraTable[EventV0](keyspace, "test")
val transformed = rdd.map { e =>
  EventV1(e.testId, e.ts, e.channel, e.groups, e.event)
}
transformed.saveToCassandra(keyspace, "test_v1")
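The map step above is a straight per-row reshaping. A plain-Python stand-in for that transformation (field names come from the snippet; the classes here are illustrative, not the connector's API):

```python
from dataclasses import dataclass

# Stand-in for the Spark map step: each EventV0 row is reshaped
# field-by-field into an EventV1 row. Field names are taken from the
# Scala snippet; these classes are illustrative, not the real schema.

@dataclass
class EventV0:
    testId: str
    ts: int
    channel: str
    groups: str
    event: str

@dataclass
class EventV1:
    testId: str
    ts: int
    channel: str
    groups: str
    event: str

def transform(e: EventV0) -> EventV1:
    return EventV1(e.testId, e.ts, e.channel, e.groups, e.event)
```

Since the transformation itself is trivial, the slow throughput is more likely to come from how the table is read (partition layout, fetch sizes) than from the map itself.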
Not sure if this code might transl
Thank you, Alain, for the response. We're indeed using 2.1. I've lowered
the compaction throughput threshold from 18 to 10 MB/s. We'll see what
happens.
> I hope you have a monitoring tool up and running and an easy way to
detect errors on your logs.
We do not have this. What do you use for this?
Thank you,
Ca
Hello, I haven’t been able to find any documentation for best practices on
this… is it okay to set up OpsCenter on a smaller node than the rest of the
cluster?
For instance, on AWS can I have 3 m3.medium nodes for Cassandra and 1 t2.micro
node for OpsCenter?
It doesn't need to be the same size. It's not part of the cluster.
On Fri, Jun 26, 2015 at 1:34 PM Sid Tantia
wrote:
> Hello, I haven’t been able to find any documentation for best practices
> on this…is it okay to set up opscenter as a smaller node than the rest of
> the cluster.
>
> For instan
Hi Sid,
I would recommend using either c3 or m3 instances for OpsCenter; for
Cassandra nodes it depends on your use case.
You can go with either c3 or i2 instances for Cassandra nodes, but I would
recommend running performance tests before selecting the instance type.
If your use case require
On Fri, Jun 26, 2015 at 1:20 PM, Sid Tantia
wrote:
> For instance, on AWS can I have 3 m3.medium nodes for Cassandra and 1
> t2.micro node for OpsCenter?
>
m3.medium is below the minimum size I would use for Cassandra doing
anything meaningful, for the record.
=Rob
Here is something I wrote some time ago:
http://planetcassandra.org/blog/interview/video-advertising-platform-teads-chose-cassandra-spm-and-opscenter-to-monitor-a-personalized-ad-experience/
Monitoring is absolutely necessary to understand what is happening in the
system. There is no magic in there.
Alain,
The reduction of compaction throughput is having a significant impact in
lowering response times for us, especially at the 90th percentile.
For the record, we are using AWS i2.2xl instance types (these are SSD-backed).
We were running compaction_throughput_mb_per_sec at 18; now we are running
at 10. Late
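For readers following along, the setting discussed here lives in cassandra.yaml (a config sketch, matching the value mentioned above):

```
# cassandra.yaml -- throttle total compaction I/O across the node
compaction_throughput_mb_per_sec: 10
```

The same value can also be applied live, without a restart, via `nodetool setcompactionthroughput 10`; the yaml setting makes it persist across restarts.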