The root cause was as I described. System tables were created while
running OpenJDK. Files were written to disk using snappy compression.
Cassandra was later restarted with IBM Java. With the IBM JRE on a 32 bit
arch, the native snappy library is not found; consequently, Cassandra is
not able to read the compressed SSTables.
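A quick way to confirm which tables were written with snappy is to check the
schema tables from cqlsh (a sketch; it assumes the 1.2 schema tables expose a
compression_parameters column, which the DESCRIBE output later in this thread
suggests they do):

    SELECT columnfamily_name, compression_parameters
    FROM system.schema_columnfamilies
    WHERE keyspace_name = 'system';

Any table reporting SnappyCompressor there would be unreadable once the native
snappy library fails to load.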
On Fri, May 3, 2013 at 11:07 AM, John Sanda wrote:
> The machine where this error occurred had both OpenJDK and IBM's Java
> installed. The only way I have been able to reproduce it is by installing
> Cassandra with OpenJDK, shutting it down, then starting it back up with IBM
> Java.
Maybe the root c
Unfortunately not, I've moved on to trying to add the nodes to the current
cluster and then decommission the "old" ones.
But even that is not working. This is the strangest of things: while
trying to add a new node, I (yaml sketch below):
- set its token to an existing value+1
- ensure the yaml (cluster name, partiti
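For reference, the yaml settings I am double-checking look roughly like this
(a sketch with placeholder values, not the real ones from our cluster):

    cluster_name: 'MyCluster'             # must match the existing cluster
    initial_token: 100000000000000000001  # existing token + 1
    partitioner: org.apache.cassandra.dht.RandomPartitioner  # same as the cluster
    listen_address: 10.0.0.5
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "10.0.0.1,10.0.0.2"   # existing nodes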
What constitutes an "extreme write"?
On 2013-05-03 15:45:33, Edward Capriolo said:
If your writes are so extreme that memtables are flushing all the time,
the best you can do is turn off all caches, do bloom filters off heap,
and then instruct cassandra to use large portions of the heap as memtables.
Hi
I was wondering if anyone has used or evaluated Cassandra on Joyent
(either SmartOS or Linux). The price/performance, data transfer and
availability are so promising that I was wondering if it is too good to be true.
Thanks in advance
Shahryar
Sorry Sri, I've never used Hector. However, it's straightforward in
Astyanax. There are examples on the GitHub page.
On 3 May 2013 18:50, "Sri Ramya" wrote:
> Can you tell me how to do this in Hector? Can you give me an example?
>
> On Fri, May 3, 2013 at 10:29 AM, Sri Ramya wrote:
>
>> than
The machine where this error occurred had both OpenJDK and IBM's Java
installed. The only way I have been able to reproduce it is by installing
Cassandra with OpenJDK, shutting it down, then starting it back up with IBM
Java. Snappy compression is enabled with OpenJDK so SSTables, including for
system
Can you tell me how to do this in Hector? Can you give me an example?
On Fri, May 3, 2013 at 10:29 AM, Sri Ramya wrote:
> Thank you very much. I will try and let you know whether it's working or not.
>
>
>
> On Thu, May 2, 2013 at 7:04 PM, Jabbar Azam wrote:
>
>> Hello Sri,
>>
>> As far as I kn
Thanks Jabbar and Aaron.
Aaron - for broadcast_address, it looks like it only works with
EC2MultiRegionSnitch.
But in our case we will have one data center in a colo and one data center in
EC2 (sorry, I did not make that clear; we'd like to replicate data from the colo
to EC2).
So can we still use broadcast_address?
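What I had in mind is something like this on the EC2 side, assuming
broadcast_address also works outside of EC2MultiRegionSnitch (addresses are
made up):

    listen_address: 10.0.1.5                       # private/internal interface
    broadcast_address: 54.200.1.2                  # public IP the colo nodes would see
    endpoint_snitch: GossipingPropertyFileSnitch   # one DC in the colo, one in EC2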
I am still trying to sort this out. When I run with Oracle's JRE, it does
in fact look like compression is enabled for system tables.
cqlsh> DESCRIBE TABLE system.schema_columnfamilies ;
CREATE TABLE schema_columnfamilies (
  keyspace_name text,
  columnfamily_name text,
  bloom_filter_fp_chance
If your writes are so extreme that memtables are flushing all the time, the
best you can do is turn off all caches, do bloom filters off heap, and then
instruct cassandra to use large portions of the heap as memtables.
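Roughly, in cassandra.yaml terms, that amounts to something like this (a
sketch; the memtable number is a placeholder, and on 1.2 bloom filters are
already off heap by default):

    key_cache_size_in_mb: 0
    row_cache_size_in_mb: 0
    memtable_total_space_in_mb: 4096   # give memtables a large slice of the heap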
On Fri, May 3, 2013 at 11:40 AM, Bryan Talbot wrote:
> It's true that a 16GB h
It's true that a 16GB heap is generally not a good idea; however, it's not
clear from the data provided what problem you're trying to solve.
What is it that you don't like about the default settings?
-Bryan
On Fri, May 3, 2013 at 4:27 AM, Oleg Dulin wrote:
> Here is my question. It can't pos
Hi,
I'm using Pig to calculate the sum of a column from a columnfamily (scan of
all rows) and I've read that input data locality is supported at
http://wiki.apache.org/cassandra/HadoopSupport
However, when I execute my Pig script, Hadoop assigns only one mapper to the task
and not one mapper on
I did not know the system tables were compressed. That would seem like an
odd decision; you would think that the system tables are small and would not
benefit from compression much. Is it a static object that
requires initialization even though it is not used?
On Fri, May 3, 2013 at
Is there a way to change the sstable_compression for system tables? I am
trying to deploy Cassandra 1.2.2 on a platform with IBM Java and a 32 bit
arch where the snappy-java native library fails to load. The error I get
looks like,
ERROR [SSTableBatchOpen:1] 2013-05-02 14:42:42,485
CassandraDaemon.j
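For a regular table I would change compression like this; whether the same is
allowed on the system keyspace is exactly what I am unsure about (the table
name here is made up):

    ALTER TABLE my_keyspace.my_table
      WITH compression = {'sstable_compression': ''};  -- empty string disables compression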
Hi - I have 2 data centres (DC1 and DC2) and I have local_quorum set as the CL
for reads. Say the RF = 2 (so 2 copies in each DC).
If both nodes which own the data in DC1 are down and I do a read with CL as
"local_quorum", will I get an error back to the application? Or will
Ca
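(For the arithmetic: local_quorum should need floor(2/2) + 1 = 2 live replicas
in DC1, i.e. both of them, which is why I am asking what happens when both are
down.)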
Hi,
I have created a 2 node test cluster in Cassandra version 1.2.3
with SimpleStrategy, replication factor 2 and
ByteOrderedPartitioner (so as to get range query functionality).
When I am using a range query on a secondary index in CQLSH, I am
getting the error:
"Bad Request: No in
I get this:
Running rpm_check_debug
ERROR with rpm_check_debug vs depsolve:
apache-cassandra11 conflicts with apache-cassandra11-1.1.11-1.noarch
I'm using CentOS. Problem with my OS, or problem with the package? (And
how can it conflict with itself??)
will
Thanks!
The creation of the new CF worked pretty well and fast! Unfortunately, I was
unable to trace the request made using secondary indexes:
cqlsh:Sessions> select * from "Items" where key = '687474703a2f2f6573706f7';
key| mahoutItemid
+---
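In case it matters, this is how I enabled tracing before running the query
(plain cqlsh tracing, nothing custom):

    cqlsh:Sessions> TRACING ON;
    cqlsh:Sessions> select * from "Items" where key = '687474703a2f2f6573706f7';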
Here is my question. It can't possibly be a good setup to use a 16 GB
heap, but this is the best I can do. Setting it to the default never
worked well for me, and setting it to 8 GB doesn't work well either. It can't
keep up with flushing memtables. It is possible that someone at some
point may have
Sure, I can do that.
My main concern is write latency and the write timeouts we are experiencing.
Read latency is secondary, as long as we do not introduce timeouts on read and
do not exceed our sampling intervals (see below).
We are running Cassandra 1.2.1 on Ubuntu 12.04 with JDK 1.7.0_17 (64
Hi Aaron,
We're running 1.2.4, so with vnodes.
We ran scrub but saw the issue again when repairing.
nodetool status -
Datacenter: DC01
================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load  Tokens  Owns  Host ID  Rack
UN  10.70.48.23