Good catch, since that bug also would have shut us down.
The original problem is that, prior to Cassandra 1.1.10, it looks like the
cassandra.yaml values
* thrift_framed_transport_size_in_mb
* thrift_max_message_length_in_mb
were ignored (in favor of effectively no limits). We went from 1.1.5 to 1.2.3.
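For reference, here's roughly what those two settings look like in cassandra.yaml. The numbers are the shipped defaults as far as I recall, so treat them as an assumption and check your own config:

# cassandra.yaml (1.1.x/1.2.x era); values shown are assumed defaults
thrift_framed_transport_size_in_mb: 15
thrift_max_message_length_in_mb: 16   # should be >= the frame size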
I've submitted a patch that fixes the issue for 1.2.3:
https://issues.apache.org/jira/browse/CASSANDRA-5504
Maybe you guys know a better way to fix it, but that helped me in the meantime.
On Mon, Apr 22, 2013 at 1:44 AM, Oleksandr Petrov <
oleksandr.pet...@gmail.com> wrote:
If you're using Cassandra 1.2.3 and the new Hadoop interface that makes a
call to next(), you'll hit an eternal loop reading the same things over and
over from your Cassandra nodes (you can see it if you enable debug output).
next() is clearing key(), which is required for wide-row iteration.
Set
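As a rough illustration of why clearing the key loops forever (hypothetical names, not the actual Cassandra source): wide-row paging resumes from the last key returned, so wiping that key on every next() restarts the scan from the beginning:

import java.util.Arrays;
import java.util.List;

public class WideRowLoopSketch {
    static final List<String> KEYS = Arrays.asList("a", "b", "c", "d");

    // Returns the page of keys strictly after startKey ("" means start of range).
    static List<String> nextPage(String startKey, int pageSize) {
        int from = startKey.isEmpty() ? 0 : KEYS.indexOf(startKey) + 1;
        return KEYS.subList(from, Math.min(from + pageSize, KEYS.size()));
    }

    public static void main(String[] args) {
        String lastKey = "";
        for (int i = 0; i < 6; i++) {               // bounded here; a real job is not
            List<String> page = nextPage(lastKey, 2);
            if (page.isEmpty()) break;
            // lastKey = page.get(page.size() - 1); // correct: resume after the last key
            lastKey = "";                           // the bug: the key is cleared,
                                                    // so every pass re-reads [a, b]
            System.out.println(page);               // prints [a, b] forever
        }
    }
}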
I tried to isolate the issue in a testing environment. Here's what I
currently have. This is the setup for the test:
CREATE KEYSPACE cascading_cassandra WITH replication = {'class' :
'SimpleStrategy', 'replication_factor' : 1};
USE cascading_cassandra;
CREATE TABLE libraries (emitted_at timestamp, additional_info
I can confirm running into the same problem.
I tried ConfigHelper.setThriftMaxMessageLengthInMb(), and tuning the server
side, reducing/increasing the batch size.
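For the record, this is roughly how I was setting those knobs on the job; a sketch assuming the Cassandra 1.2.x org.apache.cassandra.hadoop.ConfigHelper API, with illustrative sizes rather than recommendations:

import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class JobSetupSketch {
    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "cassandra-scan");
        Configuration conf = job.getConfiguration();
        ConfigHelper.setThriftMaxMessageLengthInMb(conf, 64);    // client-side message cap
        ConfigHelper.setThriftFramedTransportSizeInMb(conf, 32); // client-side frame size
        ConfigHelper.setRangeBatchSize(conf, 1024);              // rows fetched per range request
    }
}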
Here's a stacktrace from Hadoop/Cassandra; maybe it could give a hint:
Caused by: org.apache.thrift.protocol.TProtocolException: Message length
exceeded:
It's slow going finding the time to do so, but I'm working on that.
We do have another table that has one or sometimes two columns per row. We can
run jobs on it without issue. I looked through the org.apache.cassandra.hadoop
code and don't see anything that's really changed since 1.1.5 (which was
Can you reproduce this in a simple way?
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 18/04/2013, at 5:50 AM, Lanny Ripple wrote:
That was our first thought. Using Maven's dependency tree info we verified
that we're using the expected (Cassandra 1.2.3) jars:
$ mvn dependency:tree | grep thrift
[INFO] | +- org.apache.thrift:libthrift:jar:0.7.0:compile
[INFO] | \- org.apache.cassandra:cassandra-thrift:jar:1.2.3:compile
I've also
Can you confirm that you are using the same Thrift version that ships with 1.2.3?
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 16/04/2013, at 10:17 AM, Lanny Ripple wrote:
A bump to say I found this:
http://stackoverflow.com/questions/15487540/pig-cassandra-message-length-exceeded
so others are seeing similar behavior.
From what I can see of org.apache.cassandra.hadoop, nothing has changed since
1.1.5, when we didn't see such things, but it sure looks like there's a
Maybe you should enable the wide-row support that uses get_paged_slice
instead of get_range_slices; it possibly will not have the same issue.
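If it helps, enabling it should be just the widerows flag on the input column family; a sketch assuming the 1.2.x ConfigHelper API and the keyspace/table from the test setup above:

import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.hadoop.conf.Configuration;

public class WideRowEnableSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // The fourth argument selects the wide-row reader (get_paged_slice).
        ConfigHelper.setInputColumnFamily(conf,
                "cascading_cassandra",  // keyspace from the test setup above
                "libraries",            // column family from the test setup above
                true);                  // widerows = true
    }
}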
On Wed, Apr 10, 2013 at 7:29 PM, Lanny Ripple wrote:
We are using Astyanax in production, but I cut back to just Hadoop and Cassandra
to confirm it's a Cassandra (or our use of Cassandra) problem.
We do have some extremely large rows, but we went from everything working with
1.1.5 to almost everything carping with 1.2.3. Something has changed. Per
I also saw this when upgrading from C* 1.0 to 1.2.2, and from Hector 0.6 to 0.8.
Turns out the Thrift message really was too long.
The mystery to me: why no complaints in previous versions? Were some checks
added in Thrift or Hector?