You can configure the sstable size with the sstable_size_in_mb parameter for LCS.
The default value is 5MB.
You should also check that you don't have many pending compaction tasks,
using nodetool tpstats and compactionstats.
If you have enough IO throughput, you can increase
compaction_throughput_mb_per_sec.
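For example, sstable_size_in_mb is set per column family through compaction_strategy_options, while the throughput limit lives in cassandra.yaml or can be changed live with nodetool. A rough sketch (the column family name and values are only examples, and the exact cassandra-cli syntax may vary slightly by version):

update column family MyCF with
  compaction_strategy = 'LeveledCompactionStrategy' and
  compaction_strategy_options = {sstable_size_in_mb: 10};

# cassandra.yaml -- value is only an example
compaction_throughput_mb_per_sec: 32

# or adjust at runtime without a restart
nodetool -h 127.0.0.1 setcompactionthroughput 32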
I had this happen when I had really poorly generated tokens for the ring.
Cassandra seems to accept numbers that are too big. You get hot spots
when you think you should be balanced and repair never ends (I think there
is a 48 hour timeout).
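For reference, a quick sketch of the usual way to generate evenly spaced tokens for the RandomPartitioner (the node count is just an example):

# Evenly spaced initial_token values for a RandomPartitioner ring.
# Valid tokens fall in [0, 2**127); larger values are the kind of
# "too big" numbers that cause the imbalance described above.
def generate_tokens(node_count):
    return [i * (2 ** 127) // node_count for i in range(node_count)]

for i, token in enumerate(generate_tokens(6)):
    print("node %d: initial_token = %d" % (i, token))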
On Tuesday, April 10, 2012, Frank Ng wrote:
> I am no
Hi Aaron,
Thanks for the quick answer, I'll build a prototype to benchmark each
approach next week.
Here are more questions based on your reply:
a) "These queries are not easily supported on standard Cassandra"
select * from book where price < 992 order by price descending limit 30;
This is
There doesn't seem to be an open JIRA ticket for it - can you please make one
at https://issues.apache.org/jira/browse/CASSANDRA? That ensures that at some
point someone will take a look at it and it just won't be forgotten in the
endless barrage of emails...
Yup, I did the composite columns s
Some time back, I created the account cassandra_jobs on twitter. If you email
the user list or, better yet, just cc cassandra_jobs on twitter, I'll retweet it
there so that the information can get out to more people.
https://twitter.com/#!/cassandra_jobs
cheers,
Jeremy
I don't understand the config for sstableloader. I thought sstableloader just takes
the cassandra.yaml file and does it. Please shed some more light on this.
From: aaron morton [aa...@thelastpickle.com]
Sent: 10 April 2012 11:37 PM
To: user@cassandra.apache.org
Subject: Re
I am not using size-tiered compaction.
On Tue, Apr 10, 2012 at 12:56 PM, Jonathan Rhone wrote:
> Data size, number of nodes, RF?
>
> Are you using size-tiered compaction on any of the column families that
> hold a lot of your data?
>
> Do your cassandra logs say you are streaming a lot of ranges
Thanks Aaron, will seek help from hector team.
On Tue, Apr 10, 2012 at 3:41 AM, aaron morton wrote:
> Caused by: java.net.SocketTimeoutException: Read timed out
> at java.net.SocketInputStream.socketRead0(Native Method)
> at java.net.SocketInputStream.read(SocketInputStream.java:1
I have 12 nodes with approximately 1TB load per node. The RF is 3. I am
considering moving to ext4.
I checked the ranges and the numbers go from 1 to the 9000s.
On Tue, Apr 10, 2012 at 12:56 PM, Jonathan Rhone wrote:
> Data size, number of nodes, RF?
>
> Are you using size-tiered compaction
also - JVM heap size, and anything related to memory pressure
On 04/10/2012 07:56 PM, Jonathan Rhone wrote:
Data size, number of nodes, RF?
Are you using size-tiered compaction on any of the column families
that hold a lot of your data?
Do your cassandra logs say you are streaming a lot of r
Data size, number of nodes, RF?
Are you using size-tiered compaction on any of the column families that
hold a lot of your data?
Do your cassandra logs say you are streaming a lot of ranges?
zgrep -E "(Performing streaming repair|out of sync)"
On Tue, Apr 10, 2012 at 9:45 AM, Igor wrote:
> O
On 04/10/2012 07:16 PM, Frank Ng wrote:
Short answer - yes.
But you are asking the wrong question.
I think both processes are taking a while. When it starts up,
netstats and compactionstats show nothing. Anyone out there
successfully using ext3 and their repair processes are faster than this?
I think both processes are taking a while. When it starts up, netstats and
compactionstats show nothing. Anyone out there successfully using ext3 and
their repair processes are faster than this?
On Tue, Apr 10, 2012 at 10:42 AM, Igor wrote:
> Hi
>
> You can check with nodetool which part of r
LCS explicitly tries to keep sstables under 5MB to minimize extra work
done by compacting data that didn't really overlap across different
levels.
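As a rough back-of-the-envelope illustration against the numbers quoted below (the files-per-sstable count is an approximation, since each sstable is split into several component files such as Data, Index, Filter and Statistics):

data_per_node_mb = 35 * 1024   # ~35 GB per node, from the question below
sstable_size_mb = 5            # LCS default target size
files_per_sstable = 6          # Data/Index/Filter/Statistics/... -- approximate
sstables_per_node = data_per_node_mb // sstable_size_mb   # ~7,000 sstables
print(sstables_per_node * files_per_sstable)              # ~43,000 files, the same order as the ~45,000 observed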
On Tue, Apr 10, 2012 at 9:24 AM, Romain HARDOUIN
wrote:
>
> Hi,
>
> We are surprised by the number of files generated by Cassandra.
> Our cluster cons
Turns out if you read to the bottom of the tutorial you find answers.
Please disregard this mail, I found my answer.
:)
On 4/10/12 10:41 AM, "Mucklow, Blaine (GE Energy)"
wrote:
>Hi all,
>
>I had a lot of success using Java+Hector, but was trying to migrate to
>pycassa and was having some 'simp
Also, I suggest setting disk_access_mode: mmap_index_only.
2012/4/9 Omid Aladini
> Thanks. Yes, it's due to mmapped SSTable pages that count as resident size.
>
> Jeremiah: mmap isn't through JNA, it's via java.nio.MappedByteBuffer I
> think.
>
> -- Omid
>
> On Mon, Apr 9, 2012 at 4:15 PM, Jeremia
mmap doesn't depend on jna
2012/4/9 Jeremiah Jordan
> He says he disabled JNA. You can't mmap without JNA, can you?
>
> On Apr 9, 2012, at 4:52 AM, aaron morton wrote:
>
> see http://wiki.apache.org/cassandra/FAQ#mmap
>
> Cheers
>
> -
> Aaron Morton
> Freelance Developer
Hi
You can check with nodetool which part of the repair process is slow -
network streams or validation compactions. Use nodetool netstats or
compactionstats.
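For example (the host is a placeholder), while the repair is running:

nodetool -h 127.0.0.1 netstats          # streaming: pending/active streams to other nodes
nodetool -h 127.0.0.1 compactionstats   # long-running Validation compactions and pending tasks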
On 04/10/2012 05:16 PM, Frank Ng wrote:
Hello,
I am on Cassandra 1.0.7. My repair processes are taking over 30 hours
to complete. Is it
Hi all,
I had a lot of success using Java+Hector, but was trying to migrate to pycassa
and was having some 'simple' issues. What I am trying to do is create a column
family where the following occurs:
KEY -> String
ColumnName -> LongType
ColumnValue -> DoubleType
Basically this is time series dat
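For what it's worth, a minimal pycassa sketch of such a column family (keyspace, column family, and host names are placeholders):

from pycassa.system_manager import (SystemManager, UTF8_TYPE,
                                    LONG_TYPE, DOUBLE_TYPE)
import pycassa

sys_mgr = SystemManager('localhost:9160')
sys_mgr.create_column_family('MyKeyspace', 'TimeSeries',
                             key_validation_class=UTF8_TYPE,        # string row keys
                             comparator_type=LONG_TYPE,             # long column names (timestamps)
                             default_validation_class=DOUBLE_TYPE)  # double column values

pool = pycassa.ConnectionPool('MyKeyspace', ['localhost:9160'])
cf = pycassa.ColumnFamily(pool, 'TimeSeries')
cf.insert('sensor-1', {1334102400000: 21.5})   # timestamp -> reading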
By the way, I am using Cassandra 1.0.7, CL = ONE (R/W), RF = 2, on a cluster of
2 EC2 c1.medium nodes.
Alain
2012/4/10 Alain RODRIGUEZ
> Hi, I'm experiencing a strange and very annoying phenomenon.
>
> I had a problem with the commit log, which grew too large and filled one
> of the hard disks in all m
Hi, I'm experiencing a strange and very annoying phenomenon.
I had a problem with the commit log, which grew too large and filled one
of the hard disks on all my nodes at almost the same time (2 nodes only,
RF=2, so the 2 nodes are behaving in exactly the same way)
My data are mounted in an othe
Hi,
We are surprised by the number of files generated by Cassandra.
Our cluster consists of 9 nodes and each node handles about 35 GB.
We're using Cassandra 1.0.6 with LeveledCompactionStrategy.
We have 30 CF.
We've got roughly 45,000 files under the keyspace directory on each node:
ls -l /var/l
Dear All,
I am new to Cassandra 1.0.8, and I use the tool json2sstable for bulk
insert, but I still have the error:
java.lang.RuntimeException: Can not write to the Standard columns Super
Column Family.
Has org.apache.cassandra.tools.SSTableImport.importSorted
(SSTableImport.java
Hello,
I am on Cassandra 1.0.7. My repair processes are taking over 30 hours to
complete. Is it normal for the repair process to take this long? I wonder
if it's because I am using the ext3 file system.
thanks
Dear All,
I am new to Cassandra 1.0.8, and I use the tool json2sstable for bulk insert, but I
still have the error:
You can also look at using a .net client wrapper like
https://github.com/managedfusion/fluentcassandra
On Tue, Apr 10, 2012 at 8:06 AM, puneet loya wrote:
> thankk :) :) it works :)
>
>
> On Tue, Apr 10, 2012 at 3:07 PM, Henrik Schröder wrote:
>
>> In your code you are using BufferedTranspo
Thanks :) :) it works :)
On Tue, Apr 10, 2012 at 3:07 PM, Henrik Schröder wrote:
> In your code you are using BufferedTransport, but in the Cassandra logs
> you're getting errors when it tries to use FramedTransport. If I remember
> correctly, BufferedTransport is gone, so you should only u
Please refer to the following thread:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/cleanup-crashing-with-quot-java-util-concurrent-ExecutionException-java-lang-ArrayIndexOutOfBoundsEx-td7371682.html
maki
From iPhone
On 2012/04/10, at 17:21, Radim Kolar wrote:
> what is metho
> Caused by: java.net.SocketTimeoutException: Read timed out
> at java.net.SocketInputStream.socketRead0(Native Method)
> at java.net.SocketInputStream.read(SocketInputStream.java:129)
> at
> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>
Super Columns: a top-level column that holds a list of sub columns.
e.g.
row key: foo
column: bar
sub columns:
baz = qux
Composite columns: the data type is defined by combining multiple types;
instances of the type are compared by comparing each component in turn.
e.g.
row key: foo
column: =
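For illustration, a rough pycassa sketch of a composite comparator (keyspace, column family, and host names are made up):

from pycassa.system_manager import SystemManager
from pycassa.types import CompositeType, LongType, UTF8Type
import pycassa

sys_mgr = SystemManager('localhost:9160')
sys_mgr.create_column_family('MyKeyspace', 'Events',
                             comparator_type=CompositeType(LongType(), UTF8Type()))

pool = pycassa.ConnectionPool('MyKeyspace', ['localhost:9160'])
cf = pycassa.ColumnFamily(pool, 'Events')
# Column names are tuples; they sort by the first component, then the second.
cf.insert('foo', {(1, 'bar'): 'qux'})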
Schema may not be fully propagated.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 9/04/2012, at 10:18 PM, Rishabh Agrawal wrote:
> Thanks, it just worked. Though I am able to load sstables but I get following
> error:
>
> ERROR 15:44:
In your code you are using BufferedTransport, but in the Cassandra logs
you're getting errors when it tries to use FramedTransport. If I remember
correctly, BufferedTransport is gone, so you should only use
FramedTransport. Like this:
TTransport transport = new TFramedTransport(new TSocket(host, p
The log is showing the following exception:
DEBUG [ScheduledTasks:1] 2012-04-10 14:49:29,654 LoadBroadcaster.java (line
86) Disseminating load info ...
DEBUG [Thrift:7] 2012-04-10 14:50:00,820 CustomTThreadPoolServer.java (line
197) Thrift transport error occurred during processing of message.
org.apac
I checked the Cassandra logs in debug mode.
I got this response:
DEBUG [ScheduledTasks:1] 2012-04-10 14:49:29,654 LoadBroadcaster.java (line
86) Disseminating load info ...
DEBUG [Thrift:7] 2012-04-10 14:50:00,820 CustomTThreadPoolServer.java (line
197) Thrift transport error occurred durin
What is the method for undoing the effect of CASSANDRA-3989 (too many unnecessary
levels)? Running a major compaction or cleanup does nothing.