Still looking for help! We have stopped almost ALL traffic to the cluster, and
some nodes are still showing almost 1000% CPU for cassandra with no iostat
activity. We were running cleanup on one of the nodes that was not showing
load spikes; however, now when I attempt to stop cleanup there via
Thrift will allow for larger, free-form batch construction. The improvement
comes from doing a lot more in the same payload message. Otherwise CQL is
more efficient.
If you do build those giant strings, yes, you should see a performance
improvement.
On Tue, Aug 20, 2013 at 8:03 PM, Keith Freeman <8
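A rough sketch of the "giant string" batch construction discussed above,
assuming the DataStax Java driver and an invented ks.events(id int, payload
text) table; this is illustrative only, not the actual code from this thread:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class GiantStringBatch {
        public static void main(String[] args) {
            // Contact point, keyspace and table are assumptions for illustration.
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect();

            // Concatenate many INSERTs into one large CQL batch string so a
            // single request carries a much bigger payload.
            StringBuilder batch = new StringBuilder("BEGIN BATCH\n");
            for (int i = 0; i < 1000; i++) {
                batch.append(String.format(
                        "INSERT INTO ks.events (id, payload) VALUES (%d, 'row-%d');\n", i, i));
            }
            batch.append("APPLY BATCH;");

            session.execute(batch.toString());
            cluster.shutdown();
        }
    }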
Thanks. Can you tell me why using thrift would improve performance?
Also, if I do try to build those giant strings for a prepared batch
statement, should I expect another performance improvement?
On 08/20/2013 05:06 PM, Nate McCall wrote:
Ugh - sorry, I knew Sylvain and Michaël had wor
Hi - I was reading some blogs on the implementation of secondary indexes in
Cassandra, and they say that "the read requests are sent sequentially to all the
nodes"?
So if I have a query to fetch ALL records with the secondary index filter, will
the co-ordinator node send the requests to nodes one b
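For context, a sketch of the kind of secondary-index query being asked about,
using an invented users(id, country) table with an index on the non-key
column; the schema and driver calls are assumptions, not from the original post:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    public class SecondaryIndexQuerySketch {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("ks");   // hypothetical keyspace

            // Hypothetical table plus a secondary index on a non-key column.
            session.execute("CREATE TABLE users (id int PRIMARY KEY, country text)");
            session.execute("CREATE INDEX ON users (country)");

            // The query filters only on the indexed column, so the coordinator
            // has to consult the index on other nodes to collect every match.
            ResultSet rs = session.execute("SELECT * FROM users WHERE country = 'US'");
            for (Row row : rs) {
                System.out.println(row.getInt("id"));
            }
            cluster.shutdown();
        }
    }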
Hi all,
We are using C* 1.2.4 with Vnodes and SSD. We have seen behavior recently
where 3 of our nodes get locked up in high load in what appears to be a GC
spiral while the rest of the cluster (7 total nodes) appears fine. When I run
tpstats, I see the following (assuming tpstats retur
Ugh - sorry, I knew Sylvain and Michaël had worked on this recently but it
is only in 2.0 - I could have sworn it got marked for inclusion back into
1.2 but I was wrong:
https://issues.apache.org/jira/browse/CASSANDRA-4693
This is indeed an issue if you don't know the column count beforehand (or
Thanks Audrey for your reply.
Is anyone using Cassandra 1.2.* (1.2.5 in particular) in production? If
yes, what is the Priam version that you use?
I read at https://github.com/Netflix/Priam/wiki/Compatibility
that Priam support for vnodes is a work in progress, and vnodes are a major
feature of Cassandra 1.2.*.
So I tried inserting prepared statements separately (no batch), and my
server nodes' load definitely dropped significantly. Throughput from my
client improved a bit, but only a few %. I was able to *almost* get
5000 rows/sec (sort of) by also reducing the rows/insert-thread to 20-50
and elimin
The latest versions of Priam use the default properties defined in this file:
https://github.com/Netflix/Priam/blob/master/priam/src/main/resources/Priam.properties
You can override all of them in SDB. I had the problem with
priam.cass.startscript,
which points to /mnt/cassandra.
Also check tomcat process p
Hi,
This is more of a Priam question, but asking it in the Cassandra forum
since many of you may be using Priam to backup data from Cassandra.
We are planning to migrate to Cassandra 1.2.5 in production. Which is the
most stable version of Priam that is compatible with Cassandra 1.2.5 and
is pr
The Cassandra team is pleased to announce the release of the second release
candidate for the future Apache Cassandra 2.0.0.
As a release candidate, we're working to establish that this is relatively free
of obvious bugs before calling it Done. Bottom line, if you aren't willing to
test pre-produc
AFAIK, batch prepared statements were added just recently:
https://issues.apache.org/jira/browse/CASSANDRA-4693 and many client
libraries do not support it yet. (And I believe that the problem is
related to batch operations).
On Tue, Aug 20, 2013 at 4:43 PM, Nate McCall wrote:
> Thanks for
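As an illustration of what a batch of prepared statements looks like once both
driver and server support it (for example, BatchStatement in the 2.0-era
DataStax Java driver), with an invented events table; a sketch of the feature,
not code from this thread:

    import com.datastax.driver.core.BatchStatement;
    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.Session;

    public class PreparedBatchSketch {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("ks");   // hypothetical keyspace

            // Prepare once; each batched statement then ships only its bound values.
            PreparedStatement insert =
                    session.prepare("INSERT INTO events (id, payload) VALUES (?, ?)");

            BatchStatement batch = new BatchStatement();
            for (int i = 0; i < 100; i++) {
                batch.add(insert.bind(i, "row-" + i));
            }
            session.execute(batch);
            cluster.close();
        }
    }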
Thanks for putting this up - sorry I missed your post the other week. I
would be real curious as to your results if you added a prepared statement
for those inserts.
On Tue, Aug 20, 2013 at 9:14 AM, Przemek Maciolek wrote:
> I had similar issues (sent a note on the list few weeks ago but nobody
I had similar issues (I sent a note to the list a few weeks ago but nobody
responded). I think there's a serious bottleneck with using wide rows and
composite keys. I made a trivial benchmark, which you can check here:
http://pastebin.com/qAcRcqbF - it's written in cql-rb, but I ran the test
using astyana
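For readers unfamiliar with the terms, a sketch of a wide-row table with a
composite primary key of the sort such a benchmark exercises; the schema is
invented, not taken from the pastebin:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class WideRowSchemaSketch {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("ks");   // hypothetical keyspace

            // Composite primary key: sensor_id is the partition key and ts the
            // clustering column, so each partition becomes one wide storage row
            // holding many (ts, value) cells.
            session.execute(
                    "CREATE TABLE readings (" +
                    "  sensor_id int," +
                    "  ts timestamp," +
                    "  value double," +
                    "  PRIMARY KEY (sensor_id, ts))");

            cluster.shutdown();
        }
    }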
John makes a good point re: prepared statements (I'd increase batch sizes
again once you did this as well - separate, incremental runs of course so
you can gauge the effect of each). That should take out some of the
processing overhead of statement validation in the server (some - that load
spike st
Ok, I'll try prepared statements. But while sending my statements
async might speed up my client, it wouldn't improve throughput on the
cassandra nodes, would it? They're running at pretty high loads and only
about 10% idle, so my concern is that they can't handle the data any
faster, so some
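A sketch of what sending the statements async could look like with the
DataStax Java driver's executeAsync, again with an invented events table; it
overlaps round trips on the client but, as noted above, the nodes still do the
same work per row:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.PreparedStatement;
    import com.datastax.driver.core.ResultSetFuture;
    import com.datastax.driver.core.Session;
    import java.util.ArrayList;
    import java.util.List;

    public class AsyncInsertSketch {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("ks");   // hypothetical keyspace

            PreparedStatement insert =
                    session.prepare("INSERT INTO events (id, payload) VALUES (?, ?)");

            // Fire a window of inserts without waiting on each round trip.
            List<ResultSetFuture> inFlight = new ArrayList<ResultSetFuture>();
            for (int i = 0; i < 100; i++) {
                inFlight.add(session.executeAsync(insert.bind(i, "row-" + i)));
            }
            for (ResultSetFuture f : inFlight) {
                f.getUninterruptibly();   // block until each write is acknowledged
            }
            cluster.shutdown();
        }
    }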