Thanks again for the help. I upgraded my JVM to update 22 but I'm still
getting the same error as before, just as frequently if not more so. I'm
thinking that the best course of action at this point is to
replace the hardware. I would try the test builds, but I can't imagine they
wouldn'
Thanks for the advice. Follow up questions:
a) Is 0.6.6 compatible with 0.6.1? Do we need to change the config? How about
the data in the current system?
b) Should we wait for 0.7? If so, same questions above.
Thanks.
Henry
-----Original Message-----
From: Jonathan Ellis [mailto:jbel...@gmai
I'm guessing, but it looks like Cassandra returned an error and the client then had trouble reading that error. However, if I look at the Beta 2 Java Thrift interface in Cassandra.java, line 544 is not in recv_get_slice. It may be nothing. Perhaps check the server for an error and double-check your client
Clock was present in beta 1 and then removed. The beta 2 Thrift client does not have this check in it. Double-check your install and make sure it's all beta 2.
Aaron
On 15 Oct, 2010, at 11:49 AM, Michael Moores wrote: My Hadoop TaskTracker is using the Cassandra ColumnFamilyInputFormat, and appears to
On 10/14/10 12:44 PM, B. Todd Burruss wrote:
> >> INFO 16:46:06,875 DiskAccessMode 'auto' determined to be mmap,
>> indexAccessMode is mmap
thx, it does say that in the log, but that is probably just a reflection
of whatever is read from cassandra.yaml.
Having read the relevant code, the log me
10:10:21,787 ERROR ~ Error getting Sensor
org.apache.thrift.TApplicationException: Internal error processing get_slice
at org.apache.thrift.TApplicationException.read(
TApplicationException.java:108)
at org.apache.cassandra.thrift.Cassandra$Client.recv_get_slice(
Cassandra.java:544)
at org.apac
I bet it's the deserializing of responses into Row objects that is
taking most of the time. (This would also fit the resolve time
growing proportionately with the cfstats time -- larger results take
longer to serialize/deserialize.)
Attached is a patch that adds extra debug information to verify
Dynamic endpoint snitch only works with one keyspace in 0.6. (This
was true in 0.6.5 as well, so if you are only seeing it now, you were
running into the 0.6.5 bug that left the dynamic snitch disabled
unless you added an extra option,
https://issues.apache.org/jira/browse/CASSANDRA-1543.)
On Thu
My Hadoop TaskTracker is using the Cassandra ColumnFamilyInputFormat, and
appears to be finding records (the data is serialized below in the log output),
but the cassandra Column class is throwing a validation exception indicating
"Required field 'clock' was not present!".
My Cassandra cluster v
We have upgraded from 0.6.5 to 0.6.6 and our nodes will not come up. See
error below. Did something change that we need to update in the config
files?
Thanks.
INFO 22:13:37,761 JNA not found. Native methods will be disabled.
INFO 22:13:38,083 DiskAccessMode is standard, indexAccessMode is mmap
E
Yes, on Linux at least, lsof would show you that: lsof -d mem -p <pid>. You
can also look at /proc/<pid>/maps, again Linux centric.
Sridhar
On Thu, Oct 14, 2010 at 3:44 PM, B. Todd Burruss wrote:
> thx, it does say that in the log, but that is probably just a reflection
> of whatever is read from cassan
Ah, I see code in thrift/ThriftValidation.java: throw new InvalidRequestException("Deletion does not yet support SliceRange predicates."); Sorry about that, I did not fully understand what you were saying. I've done something similar where I did a get_slice then sent a single batch_mutate to
Have you read NEWS? Framed mode is on by default in 0.7.
On Thu, Oct 14, 2010 at 3:44 PM, Brayton Thompson wrote:
> Ok, I made the changes; now I'm running into a thrift exception on
> set_keyspace().
>
> $VAR1 = bless( {
> 'code' => 0,
> 'message' => 'TSocket: Co
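For reference, a minimal Java sketch of opening a framed connection the way 0.7 expects by default (this is not the poster's Perl client; the host, port, and keyspace name are placeholders):

import org.apache.cassandra.thrift.Cassandra;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class FramedConnect {
    public static void main(String[] args) throws Exception {
        // 0.7 defaults to the framed transport, so the raw socket is wrapped in
        // TFramedTransport; an unframed client talking to a framed server tends
        // to fail with "could not read N bytes"-style errors like the one above.
        TTransport transport = new TFramedTransport(new TSocket("127.0.0.1", 9160));
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
        transport.open();

        // In 0.7 the keyspace is no longer passed on every call; set it per connection.
        client.set_keyspace("Keyspace1");

        transport.close();
    }
}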
Aaron, Thanks for your response.
I use a custom UUID generator so that the second part is randomly generated (no
MAC address). I actually want this to be random since I could potentially have
multiple values for the same ticker, measure and time and I do not want to
overwrite.
I didn't realize
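Not the poster's generator, but a sketch of what a version 1 style UUID with random clock/node bits (instead of the MAC address) can look like in Java; the field layout follows RFC 4122:

import java.security.SecureRandom;
import java.util.UUID;

public class RandomNodeTimeUuid {
    private static final SecureRandom RANDOM = new SecureRandom();
    // Offset in milliseconds between the UUID epoch (1582-10-15) and the Unix epoch.
    private static final long UUID_EPOCH_OFFSET_MS = 12219292800000L;

    public static UUID fromMillis(long unixMillis) {
        // 60-bit timestamp in 100ns units since the UUID epoch.
        long ts = (unixMillis + UUID_EPOCH_OFFSET_MS) * 10000L;

        long timeLow = ts & 0xFFFFFFFFL;
        long timeMid = (ts >>> 32) & 0xFFFFL;
        long timeHi  = (ts >>> 48) & 0x0FFFL;
        long msb = (timeLow << 32) | (timeMid << 16) | (0x1L << 12) | timeHi; // version 1

        // Random clock sequence and node bits instead of the MAC address,
        // with the RFC 4122 variant bits set to '10'.
        long lsb = (RANDOM.nextLong() & 0x3FFFFFFFFFFFFFFFL) | 0x8000000000000000L;

        return new UUID(msb, lsb);
    }

    public static void main(String[] args) {
        System.out.println(fromMillis(System.currentTimeMillis()));
    }
}

Under a TimeUUIDType comparator such UUIDs still sort by the time portion, while the random low bits keep two values written for the same ticker/measure/time distinct.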
Ok, I made the changes; now I'm running into a thrift exception on set_keyspace().
$VAR1 = bless( {
'code' => 0,
'message' => 'TSocket: Could not read 4 bytes from
xxx.xxx.xxx.xxx:9160'
}, 'Thrift::TException' );
xxx.xxx.xxx.xxx is the IP of the machin
a) 0.6.1 is ancient, upgrade to 0.6.6 (see
http://www.riptano.com/blog/whats-new-cassandra-066 for links to all
the improvements since 0.6.1 -- the links to older versions are at the
bottom)
b) increase the memtable flush thresholds to reduce the need for
compaction (8x the defaults is a decent st
Hello,
PayPal constantly seeks to ensure security by regularly reviewing
the accounts on its system. We have recently reviewed your account and we
need more
We have a five node cluster, using replication factor of 3. The application is
only sending write requests at this point - we'd like to gain some operational
experience with it first before we start reading from it.
We are seeing over a hundred compaction activities on each server, some of them
are fo
I SOLVED the problem.
It was my misunderstanding of how the cassandra connection is being used for
calling getSlices().
On Oct 14, 2010, at 10:06 AM, Michael Moores wrote:
Ok I moved back to hadoop 20.2 and the WordCount example is doing better.
But I am still seeing a problem that may be due t
Can someone help us determine the anatomy of a quorum read? We are trying to
understand why cfstats reports one time and the client actually gets data back
almost 4x slower. Below are the debug logs from a read that all 3 nodes
reported < 2.5secs response time in cfstats but the client did not get da
You have to call the Ghostbusters!!!
On Oct 14, 2010, at 2:44 PM, Jesse McConnell wrote:
> you have to call set_keyspace on the connection now
>
> cheers,
> jesse
>
> --
> jesse mcconnell
> jesse.mcconn...@gmail.com
>
>
> On Thu, Oct 14, 2010 at 14:41, Brayton Thompson wrote:
> Was there a
Hi all,
I'm trying to figure out whether I should migrate from 0.6.5 to 0.6.6 or go
directly to 0.7 when it's production-ready.
Any new word on when the 0.7 stable release will be?
Also, once 0.7 is officially released, will 0.6 still be maintained (sort of
like Ubuntu's long-term releases), or
Hi!
I've been reading the wiki and some posts to this mailing list and writing
some tests to discover if Cassandra can be made to fit my needs.
For the most part, things are looking good. However, I have one issue I am
currently running into, and it's making me think that maybe
Cassa
awesome thank you.
On Oct 14, 2010, at 3:44 PM, Brandon Williams wrote:
>
>
> On Thu, Oct 14, 2010 at 2:41 PM, Brayton Thompson
> wrote:
> Was there a change to the API in 0.7?
>
> Yes, many.
>
> example...
> from the api wiki
>
>
> Use http://wiki.apache.org/cassandra/API07 for 0.7.
>
I would recommend using epoch time for your timestamp and comparing as LongType. The version 1 UUID includes the MAC of the machine that generated it, so two different machines will create different UUIDs for the same time. They are meant to be unique after all: http://en.wikipedia.org/wiki/Univers
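A small sketch of that packing in Java, on the assumption that a LongType comparator compares column names as 8-byte big-endian longs (epoch milliseconds used as the example timestamp):

import java.nio.ByteBuffer;

public class LongTypeColumnName {
    // LongType compares column names as 8-byte big-endian longs,
    // so an epoch-millis timestamp just needs to be packed that way.
    public static byte[] pack(long timestampMillis) {
        return ByteBuffer.allocate(8).putLong(timestampMillis).array();
    }

    public static long unpack(byte[] name) {
        return ByteBuffer.wrap(name).getLong();
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        byte[] columnName = pack(now);
        System.out.println(unpack(columnName) == now); // prints true
    }
}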
you have to call set_keyspace on the connection now
cheers,
jesse
--
jesse mcconnell
jesse.mcconn...@gmail.com
On Thu, Oct 14, 2010 at 14:41, Brayton Thompson wrote:
> Was there a change to the API in 0.7?
>
> example...
> from the api wiki
>
> insert
>
>-
>
>
>void insert(string keys
On Thu, Oct 14, 2010 at 2:41 PM, Brayton Thompson wrote:
> Was there a change to the API in 0.7?
>
Yes, many.
> example...
> from the api wiki
>
>
Use http://wiki.apache.org/cassandra/API07 for 0.7.
> This is not a huge issue, I can look at the module to determine the new
> ordering of argum
thx, it does say that in the log, but that is probably just a
reflection of whatever is read from cassandra.yaml.
i am wondering if some unix tool can tell me if my process is mmap'ing
files. maybe lsof?
On 10/14/2010 12:07 PM, Rob Coli wrote:
On 10/14/10 10:59 AM, B. Todd Burruss wrote:
Was there a change to the API in 0.7?
example...
from the api wiki
insert
void insert(string keyspace, string key, ColumnPath column_path, binary value,
i64 timestamp, ConsistencyLevel consistency_level)
Now from the thrift generated perl library for the 0.7 beta 2 download.
sub insert{
m
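For comparison, a sketch of the same call against the 0.7-generated Java client, assuming the beta 2 signature matches the final 0.7 API (keyspace set once on the connection, and a ColumnParent plus a Column object instead of the old keyspace/ColumnPath/value/timestamp arguments); the column family, key, and column names are placeholders:

import java.nio.ByteBuffer;
import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.Column;
import org.apache.cassandra.thrift.ColumnParent;
import org.apache.cassandra.thrift.ConsistencyLevel;

public class InsertExample07 {
    // Assumes `client` is an already-connected (framed) Cassandra.Client
    // on which set_keyspace() has been called.
    static void writeColumn(Cassandra.Client client) throws Exception {
        ByteBuffer key = ByteBuffer.wrap("row-key".getBytes("UTF-8"));

        // A Column object now carries name, value, and timestamp, and the
        // column family is named via a ColumnParent.
        Column column = new Column(
                ByteBuffer.wrap("col-name".getBytes("UTF-8")),
                ByteBuffer.wrap("col-value".getBytes("UTF-8")),
                System.currentTimeMillis());

        client.insert(key, new ColumnParent("Standard1"), column, ConsistencyLevel.ONE);
    }
}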
I have not benchmarked this. I suggest trying both and letting us know. :)
On Thu, Oct 14, 2010 at 2:03 PM, Narendra Sharma
wrote:
> Thanks Jonathan.
>
> Another related question is if I need to fetch only 1 row then what will be
> the difference between the performance of get_slice vs get_range
On 10/14/10 10:59 AM, B. Todd Burruss wrote:
0.7.0-beta2
top is reporting my cassandra process as using 11g. i have set
"disk_access_mode: standard" and Xmx8G (verified via JMX)
i have only noticed using more RAM than Xmx when using mmap i/o. this
leads me to believe that disk_access_mode was n
Thanks Jonathan.
Another related question: if I need to fetch only 1 row, what will be
the difference in performance between get_slice and get_range_slices?
The reason for this question is that we are using some code that uses
get_range_slices. We have the option of forcing it to use count=1
We've had plenty of Good Stuff[1] go into the 0.6 branch since the
release of 0.6.5, so I'm pleased to announce the release of 0.6.6. And,
for a more detailed breakdown of what's changed, I encourage you to
check out the excellent Riptano writeup at
http://www.riptano.com/blog/whats-new-cassandra-
Hello All,
I am testing Cassandra 0.7 with the Avro api on a single machine as a financial
time series server, so my setup looks something like this:
keyspace = timeseries, column family = tickdata, key = ticker, super column =
field (price, volume, high, low), column = timestamp.
So a single v
On Thu, 2010-10-14 at 13:42 -0500, Eric Evans wrote:
> This list is for the development of Cassandra directly, your question is
> better posed on user@cassandra.apache.org (moving it there).
>
> Before following up though, you might want to check the wiki and list
> archives, questions about creat
This list is for the development of Cassandra directly, your question is
better posed on user@cassandra.apache.org (moving it there).
Before following up though, you might want to check the wiki and list
archives, questions about creating TimeUUIDs from Java have been pretty
common. For example:
get_range_slices never does "searching."
the performance of those two predicates is equivalent, assuming a row
"start key" actually exists.
On Thu, Oct 14, 2010 at 1:09 PM, Narendra Sharma
wrote:
> Hi,
>
> I am using Cassandra 0.6.5. Our application uses the get_range_slices to get
> rows in the
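To make that concrete, a sketch of both single-row fetches against the 0.6 Java Thrift client, assuming the 0.6 signatures where the keyspace is passed per call and row keys are strings; the keyspace, column family, and key are placeholders:

import java.util.List;

import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.ColumnOrSuperColumn;
import org.apache.cassandra.thrift.ColumnParent;
import org.apache.cassandra.thrift.ConsistencyLevel;
import org.apache.cassandra.thrift.KeyRange;
import org.apache.cassandra.thrift.KeySlice;
import org.apache.cassandra.thrift.SlicePredicate;
import org.apache.cassandra.thrift.SliceRange;

public class SingleRowFetch06 {
    static void fetchBothWays(Cassandra.Client client) throws Exception {
        // Ask for all columns of the row (empty start/finish, capped count).
        SlicePredicate predicate = new SlicePredicate();
        predicate.setSlice_range(new SliceRange(new byte[0], new byte[0], false, 1000));
        ColumnParent parent = new ColumnParent("Standard1");

        // Option 1: get_slice -- a point lookup on one known key.
        List<ColumnOrSuperColumn> columns =
                client.get_slice("Keyspace1", "row-key", parent, predicate, ConsistencyLevel.QUORUM);

        // Option 2: get_range_slices with count=1 -- a range scan that starts at
        // the same key and stops after one row; no extra "searching" is involved.
        KeyRange range = new KeyRange(1);
        range.setStart_key("row-key");
        range.setEnd_key("");
        List<KeySlice> rows =
                client.get_range_slices("Keyspace1", parent, predicate, range, ConsistencyLevel.QUORUM);

        System.out.println(columns.size() + " columns via get_slice, "
                + rows.size() + " row(s) via get_range_slices");
    }
}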
Thanks, that is a good set of data points!
On Thu, Oct 7, 2010 at 6:27 PM, Corey Hulen wrote:
>
> I recently posted a blog article about Cassandra and EC2 performance testing
> for small vs large, EBS vs ephemeral storage, compared to real HW with and
> without an SSD. Hope people find it intere
Hi,
I am using Cassandra 0.6.5. Our application uses the get_range_slices to get
rows in the given range.
Could someone please explain how get_range_slices works internally, especially when
a count parameter (value = 1) is also specified in the SlicePredicate? Does
Cassandra first search all in the given
what does it report when you do allow mmap'd i/o to be used? (which
you should always do anyway if you care about performance.)
On Thu, Oct 14, 2010 at 12:59 PM, B. Todd Burruss wrote:
> 0.7.0-beta2
>
> top is reporting my cassandra process as using 11g. i have set
> "disk_access_mode: standar
0.7.0-beta2
top is reporting my cassandra process as using 11g. i have set
"disk_access_mode: standard" and Xmx8G (verified via JMX)
i have only noticed using more RAM than Xmx when using mmap i/o. this
leads me to believe that disk_access_mode was not set properly, even
though it is in t
Ok I moved back to hadoop 20.2 and the WordCount example is doing better.
But I am still seeing a problem that may be due to my lack of experience w/
hadoop.
I am running "hadoop jar..." on my JobTracker/NameNode machine, which is not
running Cassandra.
I have DataNode/TaskTracker running on all
On Oct 14, 2010, at 10:37 PM, Eric Evans wrote:
>> sorry to say, your best bet is to upgrade
>
> I would actually start with some large test builds, kernels work well
> for this. Use a high concurrency (> 4).
Whether or not those fail, assuming x86, download memtest86+ and boot it.
Symptoms li
On Wed, 2010-10-13 at 22:41 -0700, B. Todd Burruss wrote:
> that type of error report indicates a bug in the JVM. something
> that
> should *never* occur if the JVM is operating properly. corrupt
> cassandra data, auto-bootstrapping should never cause that kind of
> crash.
>
> the SIGSEGV in
Pycassa should just take your long and do the right thing with it
(packing it into a binary string) before passing it off to thrift.
The system tests in the source (test/system/test_thrift_server.py)
will give you a very good indication of how to do this. The long is
packed into a string using st
How do I insert a value into a column family with CompareWith LongType in Python? I'm
using pycassa.
What should the type of the column be? Thx