Hector version 1.0-3
What is the reason for the second exception, BTW?
Thanks,
Dushyant
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Wednesday, March 21, 2012 10:46 PM
To: user@cassandra.apache.org
Subject: Re: Exceptions related to thrift transport
1. org.apache.thrift.TExcep
Are you perhaps trying to send a large batch mutate? I've seen broken
pipes etc. in Cassandra 0.7 (currently in the process of upgrading to
1.0.8) when a large batch mutate is sent.
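If that is the cause, one workaround is to cap the batch size on the client. Below is a minimal sketch with Hector 1.0-3; the cluster, keyspace, and column family names and the CHUNK_SIZE value are illustrative assumptions, not from this thread:

import java.util.Map;
import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;

public class ChunkedBatchMutate {
    private static final int CHUNK_SIZE = 500; // tune to your column sizes

    public static void insertAll(Map<String, String> rows) {
        Cluster cluster = HFactory.getOrCreateCluster("TestCluster", "localhost:9160");
        Keyspace keyspace = HFactory.createKeyspace("MyKeyspace", cluster);
        Mutator<String> mutator = HFactory.createMutator(keyspace, StringSerializer.get());

        int pending = 0;
        for (Map.Entry<String, String> e : rows.entrySet()) {
            // one column per row key here; a real batch may add many columns per key
            mutator.addInsertion(e.getKey(), "MyCF",
                    HFactory.createStringColumn("value", e.getValue()));
            if (++pending >= CHUNK_SIZE) {
                mutator.execute(); // sends and clears the pending mutations
                pending = 0;
            }
        }
        if (pending > 0) {
            mutator.execute(); // send the remainder
        }
    }
}

Each execute() then maps to one Thrift batch_mutate call, keeping every call well under the server's framed transport limit.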
On 22/03/2012 07:09, Tiwari, Dushyant wrote:
Hector version 1.0-3
What is the reason for the second exception,
Yes, the queries are batch mutates. I can understand this for exception #1, but I am not
sure what is causing exception #2 (server side).
From: Guy Incognito [mailto:dnd1...@gmail.com]
Sent: Thursday, March 22, 2012 1:06 PM
To: user@cassandra.apache.org
Subject: Re: Exceptions related to thrift transport
Hi *,
My Cassandra installation runs on the following system:
- Linux with Kernel 2.6.32.22
- jna-3.3.0
- Java 1.7.0-b147
Sometimes we are getting the following error:
*** glibc detected *** /var/opt/java1.7/bin/java: free(): invalid
pointer: 0x7f66088a6000 ***
=== Backtrace: ===
[...]
On 3/21/12 10:47 PM, Viktor Jevdokimov wrote:
This is a known issue to be fixed (I can't find the exact tickets on
the tracker).
The controller that receives the truncate command checks that all nodes are up and
Hi guys,
Based on what you are saying, there seems to be a tradeoff that developers
have to handle between:
"keep your rows under a certain size" vs
"keep data that's queried together, on disk together"
How would you handle this tradeoff in my case:
I monitor about
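A common middle ground for exactly this tradeoff (not from this thread; all names below are illustrative) is to bucket each series by time, so every row stays bounded in size while one bucket's data is still stored contiguously on disk. A minimal Java sketch of such a row key:

import java.text.SimpleDateFormat;
import java.util.Date;

public class TimeBucketKey {
    // One row per (seriesId, day); columns inside the row are keyed by timestamp.
    // SimpleDateFormat is not thread-safe; use one instance per thread in real code.
    private static final SimpleDateFormat DAY = new SimpleDateFormat("yyyyMMdd");

    public static String rowKey(String seriesId, Date sampleTime) {
        return seriesId + ":" + DAY.format(sampleTime);
    }
}

Shrinking the bucket (day to hour, say) trades smaller rows for more rows to read per query, which is exactly the tension described above.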
Hi,
I have tried a few experiments with Composite (first as columns, and next as
rows).
I have followed the paths described by
http://www.datastax.com/dev/blog/introduction-to-composite-columns-part-1
My composite is (UTF8, UTF8): (folderId, filename)
And I have inserted for all tests, the foll
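For reference, a minimal Hector 1.0-3 sketch of inserting one such (folderId, filename) composite column; the column family name "Files" and the pre-built Keyspace are assumptions:

import me.prettyprint.cassandra.serializers.CompositeSerializer;
import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.beans.Composite;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;

public class CompositeInsert {
    // Assumes a CF whose comparator is CompositeType(UTF8Type, UTF8Type).
    public static void insert(Keyspace keyspace, String rowKey,
                              String folderId, String filename, String value) {
        Composite name = new Composite();
        name.addComponent(folderId, StringSerializer.get());
        name.addComponent(filename, StringSerializer.get());

        Mutator<String> m = HFactory.createMutator(keyspace, StringSerializer.get());
        m.insert(rowKey, "Files",
                HFactory.createColumn(name, value,
                        CompositeSerializer.get(), StringSerializer.get()));
    }
}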
Just tested 1.0.8 before upgrading from 1.0.7: tombstones created by TTL or by
delete operation are perfectly deleted after either compaction or cleanup.
I have no idea about any settings other than gc_grace_seconds; check your schema
from cassandra-cli.
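For example (keyspace and column family names are placeholders, output abbreviated):

$ cassandra-cli -h localhost
[default@unknown] describe keyspace MyKeyspace;
...
    ColumnFamily: MyCF
      ...
      GC grace seconds: 864000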
Best regards/ Pagarbiai
Viktor Jevdokimov
Sounds like a race condition in the off-heap caching while calling
Unsafe.free().
Do you use the cache? What is your use case when you encounter this error?
Are you able to reproduce it?
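If the off-heap (serializing) row cache is the suspect, one way to test it, assuming the affected CF has a row cache (names here are placeholders), is to switch that CF to the on-heap provider from cassandra-cli and see whether the crashes stop:

[default@MyKeyspace] update column family MyCF
    with row_cache_provider = 'ConcurrentLinkedHashCacheProvider';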
2012/3/22 Maciej Miklas :
> Hi *,
>
> My Cassandra installation runs on the following system:
>
> Linux with Kernel
Hi,
How do I find a column family from a cfId? I got a bunch of exceptions and want
to find out which CF has the problem.
java.io.IOError:
org.apache.cassandra.db.UnserializableColumnFamilyException: Couldn't find
cfId=1744830464
at
org.apache.cassandra.service.AbstractRowResolver.preprocess(Abstra
Could you create a bug report here
https://issues.apache.org/jira/browse/CASSANDRA and attached the DEBUG level
log from the startup to when the error happens.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 22/03/2012, at 5:17 AM, Wenjun
> java.io.IOError: org.apache.cassandra.db.UnserializableColumnFamilyException:
> Couldn't find cfId=-387130991
Schema may have diverged between nodes.
Use cassandra-cli and run describe cluster; to see how many schema versions you
have.
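Illustrative output when the schema has diverged (UUIDs and addresses made up); a healthy cluster lists every node under a single schema version:

[default@unknown] describe cluster;
Cluster Information:
   Snitch: org.apache.cassandra.locator.SimpleSnitch
   Partitioner: org.apache.cassandra.dht.RandomPartitioner
   Schema versions:
        75eece10-bf48-11e0-0000-4d205df954a7: [10.1.1.1, 10.1.1.2]
        5a54ebd0-bd90-11e0-0000-9510c23fceff: [10.1.1.3]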
Cheers
-
Aaron Morton
Freelance Developer
Can you provide the full error message ?
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 22/03/2012, at 9:54 PM, Tiwari, Dushyant wrote:
> Yes, the queries are batch mutates. I can understand this for exception #1 but
> not sure what is caus
> Will adding a few tens of wide rows like this every day cause me problems in
> the long term? Should I consider lowering the time bucket?
IMHO yeah, yup, ya and yes.
> From experience I am a bit reluctant to create too many rows because I see
> that reading across multiple rows seriously affe
Use a version of the Java 6 runtime; Cassandra hasn't been tested at all
with the Java 7 runtime.
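A quick way to confirm which runtime the Cassandra JVM is (version string illustrative):

$ java -version
java version "1.6.0_31"
Java(TM) SE Runtime Environment (build 1.6.0_31-b04)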
On Thu, Mar 22, 2012 at 1:27 PM, Benoit Perroud wrote:
> Sounds like a race condition in the off heap caching while calling
> Unsafe.free().
>
> Do you use cache ? What is your use case when you enc
Thanks Aaron, I'll lower the time bucket, see how it goes.
Cheers,
Alex
On Thu, Mar 22, 2012 at 10:07 PM, aaron morton wrote:
> Will adding a few tens of wide rows like this every day cause me problems
> in the long term? Should I consider lowering the time bucket?
>
> IMHO yeah, yup, ya and yes.
Thanks Aaron. When I do describe cluster, there are always "UNREACHABLE"
entries, but nodetool ring is fine. It is a pretty busy cluster, reading 3K/sec.
$ cassandra-cli -h localhost -u root -pw cassy
Connected to: "Production Cluster" on localhost/9160
Welcome to the Cassandra CLI.
Type 'help;' or '?' for help.
Hi Viktor,
Thanks for your response.
Is there a possibility that continual deletions during compaction could be
blocking removal of the tombstones? A full manual compaction takes about 4
hours per node for our data, so a large number of deletes occur
during that time.
This is the descr