I am trying to run the code below, but it gives this error, even though it
compiles without any errors. Kindly help me.
(source of the code :
http://posulliv.github.io/2011/02/27/libcassandra-sec-indexes/)
terminate called after throwing an instance of
'org::apache::cassandra::InvalidRequestException'
what():
In our case we have a continuous flow of data to be cached. Every second
we receive tens of PUT requests. Each request has a payload of about 500 KB
on average and a TTL of about 20 minutes.
On the other side we have a similar flow of GET requests. Every GET
request is transformed to a "get by key" query f
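For rough sizing of this workload, the steady-state amount of live data is roughly ingest rate times payload size times TTL. A minimal sketch; the rate of 30 PUTs/sec is an assumption standing in for "tens of requests", and 500 KB is read as kilobytes:

```java
public class CacheSizing {
    // Live data at steady state ~= ingest rate * payload size * time-to-live.
    public static long steadyStateBytes(long putsPerSec, long payloadBytes, long ttlSeconds) {
        return putsPerSec * payloadBytes * ttlSeconds;
    }

    public static void main(String[] args) {
        // Assumed figures: 30 PUTs/sec, 500 KB payload, 20 minute TTL.
        long bytes = steadyStateBytes(30, 500 * 1024, 20 * 60);
        System.out.println(bytes); // 18432000000, i.e. roughly 18 GB live, before replication
    }
}
```

So even this modest request rate keeps on the order of 18 GB of live (not yet expired) data per replica, which is relevant when reasoning about tombstone and expiration load.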
On Mon, Jul 1, 2013 at 10:06 PM, Mike Heffner wrote:
>
> The only changes we've made to the config (aside from dirs/hosts) are:
>
Forgot to include we've changed this as well:
-partitioner: org.apache.cassandra.dht.Murmur3Partitioner
+partitioner: org.apache.cassandra.dht.RandomPartitioner
Ch
I've seen the same thing
From: Sylvain Lebresne
Reply-To:
Date: Tue, 2 Jul 2013 08:32:06 +0200
To: "user@cassandra.apache.org"
Subject: Re: very inefficient operation with tombstones
This is https://issues.apache.org/jira/browse/CASSANDRA-5677.
--
Sylvain
On Tue, Jul 2, 2013 at 6:04 AM
Franc,
We manage our schema through the Astyanax driver. It runs in a listener at
application startup. We read a self-defined schema version, update the
schema if needed based on the version number, and then write the new schema
version number. There is a chance two or more app servers will try to
On Tue, Jul 2, 2013 at 7:18 AM, Eric Marshall wrote:
>
> My query: Should a Cassandra node be able to recover from too many writes
> on its own? And if it can, what do I need to do to reach such a blissful
> state?
>
In general, applications running within the JVM are unable to recover when
the J
This was a problem pre-vnodes. I had filed several JIRAs for that, but some
of them were voted down on the grounds that performance would improve with
vnodes.
The main problem is that it streams one sstable at a time rather than in
parallel.
JIRA 4784 can speed up bootstrap performance. You can also do a zero
copy
Makes sense - I will confirm.
Thanks again for the help.
Cheers,
Eric
From: Robert Coli [mailto:rc...@eventbrite.com]
Sent: Tuesday, July 02, 2013 12:53 PM
To: user@cassandra.apache.org
Subject: Re: Does cassandra recover from too many writes?
On Tue, Jul 2, 2013 at 7:18 AM, Eric Marshall
mai
Sankalp,
Parallel sstableloader streaming would definitely be valuable.
However, this ring is currently using vnodes and I was surprised to see
that a bootstrapping node only streamed from one node in the ring. My
understanding was that a bootstrapping node would stream from multiple
nodes in the
As a test, adding a 7th node in the first AZ streams from both of the two
existing nodes in the same AZ.
Aggregate streaming bandwidth at the 7th node is approximately 12 MB/sec
when all limits are set at 800 MB/sec, or about double what I saw streaming
from a single node. This would seem to indi
Have you tried running your code in GDB to find which line is causing the
error? That would be what I'd do first.
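For a C++ exception like the `InvalidRequestException` above, a session along these lines would stop at the throw site (the binary name here is hypothetical):

```
gdb ./client_program       # hypothetical name of the compiled example
(gdb) catch throw          # break whenever a C++ exception is thrown
(gdb) run
(gdb) backtrace            # shows which line threw the exception
```

The backtrace should point at the exact libcassandra call that triggered the `InvalidRequestException`, which usually also reveals what the server objected to.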
Aaron Turner
http://synfin.net/ Twitter: @synfinatic
https://github.com/synfinatic/tcpreplay - Pcap editing and replay tools for
Unix & Windows
I dont know much about streaming in vnodes but you might be hitting this
https://issues.apache.org/jira/browse/CASSANDRA-4650
On Tue, Jul 2, 2013 at 12:43 PM, Mike Heffner wrote:
> As a test, adding a 7th node in the first AZ will stream from both the two
> existing nodes in the same AZ.
>
> Ag
If this is a tombstone problem, as suggested by some, and it is OK to turn off
replication, as suggested by others, it may be an idea to do an optimization in
Cassandra where:
    if replication_factor < 1:
        do not create tombstones
Terje
On Jul 2, 2013, at 11:11 PM, Dmitry Olshansky
wrote:
>
If you are using 1.2, I would checkout https://github.com/mstump/libcql
-Jeremiah
On Jul 2, 2013, at 5:18 AM, Shubham Mittal wrote:
> I am trying to run below code, but it gives this error. It compiles without
> any errors. Kindly help me.
> (source of the code :
> http://posulliv.github.io/
Hi All,
Using a JDBC PreparedStatement: when I query on a secondary-indexed column and
bind the value through the prepared statement, I get no rows back. If I replace
the ? with an integer literal, I get the rows I expect. If I use setObject()
instead of setInt(), I get the following exception:
encountered object of class: class java.la
Hi All,
We're having a problem with our Cassandra cluster and are at a loss as to the
cause.
We have what appear to be columns that disappear for a little while, then
reappear. The rest of the row is returned normally during this time. This is,
of course, very disturbing, and is wreaking havoc