Our Young size=800MB, SurvivorRatio=8, edenSize=640MB. All objects/bytes
generated during compaction are garbage, right?
During compaction, with in_memory_compaction_limit=64MB and
concurrent_compactors=8, there is a lot of pressure on ParNew sweeps.
I was thinking of decreasing concurrent_compact
@ravi, you can increase the young gen size, keep a high tenuring threshold, or
increase the survivor ratio.
On Fri, Jul 6, 2012 at 4:03 AM, aaron morton wrote:
> Ideally we would like to collect maximum garbage from ParNew itself, during
> compactions. What are the steps to take towards to achieving this?
>
>
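For reference, the knobs being discussed here map onto the GC settings that cassandra-env.sh appends to JVM_OPTS. A minimal sketch with placeholder values (illustrative, not tuning recommendations):
-Xmn800M                      # total young generation size
-XX:SurvivorRatio=8           # eden size relative to each survivor space
-XX:MaxTenuringThreshold=4    # ParNew cycles an object survives before promotion
A larger -Xmn gives ParNew more room to collect short-lived compaction garbage in the young generation, and a higher MaxTenuringThreshold keeps surviving objects in the survivor spaces for more young collections before they are promoted to the old generation.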
Good evening,
I have read in a few discussions that multiple keyspaces are bad, but to
what extent?
We have some reasonably powerful machines and are looking to host
an additional 2 keyspaces (currently we have 1) within our Cassandra
cluster (3 nodes, using RF=3).
At what point does adding extr
On Fri, Jul 6, 2012 at 9:44 AM, rohit bhatia wrote:
> On Fri, Jul 6, 2012 at 4:47 AM, aaron morton wrote:
>> 12G Heap,
>> 1600Mb Young gen,
>>
>> Is a bit higher than the normal recommendation. 1600MB young gen can cause
>> some extra ParNew pauses.
> Thanks for the heads up, I'll try tinkering on th
On Fri, Jul 6, 2012 at 4:47 AM, aaron morton wrote:
> 12G Heap,
> 1600Mb Young gen,
>
> Is a bit higher than the normal recommendation. 1600MB young gen can cause
> some extra ParNew pauses.
Thanks for the heads up, I'll try tinkering on this
>
> 128 Concurrent writer
> threads
>
> Unless you are on
Hi Aaron,
It is:
create column family CF
with comparator =
'CompositeType(org.apache.cassandra.db.marshal.Int32Type,org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type)'
and key_validation_class = UTF8Type
and default_validation_class = UTF8Type;
> #2 has the Composite Column and #1 does not.
They are both strings.
All column names *must* be of the same type. What was your CF definition?
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 6/07/2012, at 7:26 AM, Sunit Randhawa wrote:
> Does it mean that the popular use case is when we need to update multiple
> column families using the same key?
Yes.
> Shouldn’t we design our space in such a way that those columns live in the
> same column family?
Design a model where the data for common queries is stored in one row+cf. Yo
> 12G Heap,
> 1600Mb Young gen,
Is a bit higher than the normal recommendation. 1600MB young gen can cause some
extra ParNew pauses.
> 128 Concurrent writer
> threads
Unless you are on SSD this is too many.
> 1) Is using JDK 1.7 any way detrimental to cassandra?
as far as I know it's not full
> I would really prefer to do it in Cassandra itself,
See
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/marshal/CompositeType.java
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 6/07/2012, at 10:40 AM, L
Consult the NEWS.txt file for help on upgrading
https://github.com/apache/cassandra/blob/trunk/NEWS.txt
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 6/07/2012, at 2:52 AM, rohit bhatia wrote:
> http://cassandra.apache.org/ says 1.1.2
>
Sounds like this problem in 1.1.0:
https://issues.apache.org/jira/browse/CASSANDRA-4219
Upgrade if you are on 1.1.0.
If not please paste the entire exception.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 6/07/2012, at 1:32 AM, puneet lo
> But I don't understand, how was all the available space taken away.
Take a look on disk at /var/lib/cassandra/data/ and
/var/lib/cassandra/commitlog to see what is taking up a lot of space.
Cassandra stores the column names as well as the values, so that can take up
some space.
> it says t
I need to create a ByteBuffer instance containing the proper composite key,
based on the values of the components of the key. I am going to use it for
an update operation.
I tried to simply concatenate the buffers corresponding to the components, but
I am not sure this is correct, because I am gett
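As far as I can tell from CompositeType (linked elsewhere in this digest), plain concatenation is not enough: each component is encoded as a 2-byte big-endian length, the component bytes, and a 1-byte end-of-component marker. A rough Java sketch under that assumption; the class and method names here are mine, not a Cassandra or Hector API:
import java.nio.ByteBuffer;
import java.nio.charset.Charset;

public class CompositePacker {

    // Pack already-serialized components into CompositeType's layout:
    // <2-byte length><component bytes><1-byte end-of-component> per component.
    // End-of-component 0 means "this exact value"; slice bounds use -1 / 1 for before / after.
    public static ByteBuffer pack(ByteBuffer... components) {
        int size = 0;
        for (ByteBuffer c : components) {
            size += 2 + c.remaining() + 1;
        }
        ByteBuffer out = ByteBuffer.allocate(size);
        for (ByteBuffer c : components) {
            out.putShort((short) c.remaining());
            out.put(c.duplicate());              // duplicate() leaves the caller's position untouched
            out.put((byte) 0);
        }
        out.flip();
        return out;
    }

    public static void main(String[] args) {
        // e.g. an (Int32Type, UTF8Type) composite such as 1000:"C1"
        ByteBuffer composite = pack(
                ByteBuffer.allocate(4).putInt(0, 1000),
                ByteBuffer.wrap("C1".getBytes(Charset.forName("UTF-8"))));
        System.out.println(composite.remaining() + " bytes");
    }
}
The same packing applies whether the composite is used as a column name or, with a CompositeType key validator, as a row key.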
The 1.1 docs for the same:
http://www.datastax.com/docs/1.1/operations/cluster_management
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 5/07/2012, at 9:17 PM, prasenjit mukherjee wrote:
> I am using cassandra version 1.1.2. I got the docume
> Ideally we would like to collect maximum garbage from ParNew itself, during
> compactions. What are the steps to take towards to achieving this?
I'm not sure what you are asking.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 5/07/2012,
agree.
It's a good idea to remove as many variables as possible and get to a
stable/known state. Use a clean install and a well-known client and see if the
problems persist.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 5/07/2012, a
Can you provide an example ?
select * should return all the columns from the CF.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 5/07/2012, at 4:31 AM, Thierry Templier wrote:
> Thanks Aaron.
>
> I wonder if it's possible to obtain colu
Hi
I am new to Cassandra; we started with 1.1, modeled everything
with composite columns and wide rows, and chose CQL 3 even though it is beta.
Since I could not find a way in Hector to set CQL 3, I started with Thrift
and prototyped all my scenarios with Thrift including retrieving all row
ke
Hello,
I have 2 Columns for a 'RowKey' as below:
#1: set CF['RowKey']['1000']='A=1,B=2';
#2: set CF['RowKey']['1000:C1']='A=2,B=3';
#2 has the Composite Column and #1 does not.
Now when I execute the Composite Slice query by 1000 and C1, I do get
both the columns above.
I am hoping to get #2
Hello.
I have a question regarding JNA and Windows.
I read about the problem that taking snapshots might require 2x the
process space due to how hard links are created.
Is JNA for Windows supported?
Looking at the JIRA issue
https://issues.apache.org/jira/browse/CASSANDRA-1371 it looks like it, but
ch
From what I understand, wide rows have quite a bit of overhead, especially
if you are picking columns that are far apart from each other for a given
row.
This post by Aaron Morton was quite good at explaining this issue
http://thelastpickle.com/2011/07/04/Cassandra-Query-Plans/
-Phil
On Thu, Ju
Here is my flow:
One process writes a really wide row (250K+ supercolumns, each with
5 subcolumns, for a total of 1KB or so per supercolumn).
Second process comes in literally 2-3 seconds later and starts reading from it.
My observation is that nothing good happens. It is ridiculously slow
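One thing that usually helps here is not asking for the whole row in a single slice but paging through it. A rough sketch using the raw Thrift API (the CF name, page size and error handling are placeholders, and a real version would skip the duplicated first element of each page):
import java.nio.ByteBuffer;
import java.util.List;
import org.apache.cassandra.thrift.*;

public class WideRowReader {

    // Page through one wide super column row instead of slicing it all in one call.
    public static void read(Cassandra.Client client, ByteBuffer rowKey) throws Exception {
        ColumnParent parent = new ColumnParent("WideCF");   // hypothetical CF name
        ByteBuffer start = ByteBuffer.wrap(new byte[0]);    // empty start = beginning of the row
        ByteBuffer finish = ByteBuffer.wrap(new byte[0]);   // empty finish = end of the row
        int pageSize = 1000;

        while (true) {
            SlicePredicate predicate = new SlicePredicate();
            predicate.setSlice_range(new SliceRange(start, finish, false, pageSize));
            List<ColumnOrSuperColumn> page =
                    client.get_slice(rowKey, parent, predicate, ConsistencyLevel.ONE);

            for (ColumnOrSuperColumn cosc : page) {
                SuperColumn sc = cosc.super_column;
                // ... process sc.name and sc.columns ...
            }

            if (page.size() < pageSize) {
                break;                                       // last page
            }
            // The next page restarts at the last super column seen; it is returned again,
            // so skip it when processing the following page.
            start = page.get(page.size() - 1).super_column.name;
        }
    }
}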
I actually found an answer to my first question at
http://wiki.apache.org/cassandra/API. So I got it wrong: actually the outer key
is the key in the table, and the inner key is the table name (this was somewhat
counter-intuitive). Does it mean that the popular use case is when we need to
update
http://cassandra.apache.org/ says 1.1.2
On Thu, Jul 5, 2012 at 7:46 PM, Raj N wrote:
> Hi experts,
> I am planning to upgrade from 0.8.4 to 1.+. What's the latest stable
> version?
>
> Thanks
> -Rajesh
My current way of inserting rows one by one is too slow (I use CQL 3 prepared
statements), so I want to try batch_mutate.
Could anybody give me more details about the interface? In the javadoc it says:
public void batch_mutate(java.util.Map<ByteBuffer, Map<String, List<Mutation>>> mutation_map,
Consistenc
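To make the shape of that map concrete, mutation_map goes from row key, to column family name, to the list of mutations for that row in that CF. A rough sketch against the raw Thrift API (the CF name, column names and consistency level are placeholders):
import java.nio.ByteBuffer;
import java.util.*;
import org.apache.cassandra.thrift.*;

public class BatchWriter {

    // Insert many columns into one row of one CF with a single batch_mutate call.
    public static void write(Cassandra.Client client, ByteBuffer rowKey) throws Exception {
        long timestamp = System.currentTimeMillis() * 1000;          // microseconds

        List<Mutation> mutations = new ArrayList<Mutation>();
        for (int i = 0; i < 100; i++) {
            Column col = new Column();
            col.setName(ByteBuffer.wrap(("col" + i).getBytes("UTF-8")));
            col.setValue(ByteBuffer.wrap(("val" + i).getBytes("UTF-8")));
            col.setTimestamp(timestamp);

            ColumnOrSuperColumn cosc = new ColumnOrSuperColumn();
            cosc.setColumn(col);

            Mutation mutation = new Mutation();
            mutation.setColumn_or_supercolumn(cosc);
            mutations.add(mutation);
        }

        // row key -> (column family -> mutations); more CFs and more rows can share one call
        Map<String, List<Mutation>> byCf = new HashMap<String, List<Mutation>>();
        byCf.put("MyCF", mutations);                                  // hypothetical CF name

        Map<ByteBuffer, Map<String, List<Mutation>>> mutationMap =
                new HashMap<ByteBuffer, Map<String, List<Mutation>>>();
        mutationMap.put(rowKey, byCf);

        client.batch_mutate(mutationMap, ConsistencyLevel.QUORUM);
    }
}
Batching applies the writes in one round trip per batch, which is the usual first fix when row-at-a-time inserts are the bottleneck.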
Hi experts,
I am planning to upgrade from 0.8.4 to 1.+. What's the latest stable
version?
Thanks
-Rajesh
-- Forwarded message --
From: Rob Coli
Date: Mon, Jul 2, 2012 at 11:19 PM
Subject: Re: cassandra on re-Start
To: user@cassandra.apache.org
On Mon, Jul 2, 2012 at 5:43 AM, puneet loya wrote:
> When I restarted the system, it is showing the keyspace does not exist.
>
> Not even l
Also,
Looking at the GC log, I see messages like this across different servers
before they start dropping messages:
"2012-07-04T10:48:20.336+: 96771.117: [GC 96771.118: [ParNew:
1367297K->57371K(1474560K), 0.0617350 secs]
6641571K->5340088K(12419072K), 0.0634460 secs] [Times: user=0.56
sys=0.01,
Hello to all,
I have a Cassandra instance I'm trying to use to store millions of files with
size ~3MB. The data structure is simple: 1 row per file, with the row key being
the id of the file.
I loaded 1GB of data, and the total available space is 10GB. And after a few
hours, all the available space was taken
The next London meetup is coming up on 16th July.
We've got two speakers - Richard Churchill talking about his
experiences rolling out Cassandra at ServiceTick and Tom Wilkie
talking about real time analytics on top of Cassandra.
http://www.meetup.com/Cassandra-London/events/69791362/
Dave
I am using Cassandra version 1.1.2. I got the document to add a node for
version 0.7: http://www.datastax.com/docs/0.7/getting_started/configuring
Is it still valid? Is there documentation on this topic in the
Cassandra wiki/docs?
-Prasenjit
I did, with no luck. I got my fire put out.
For some reason one of my nodes upgraded itself after rebooting to fix
the leap second bug. I used apt-get to put on 1.0.8. Seeing that my
cluster was running 1.0.7, I had to upgrade the rest of the nodes.
Upgrading was very simple: stop, apt-get in