ding the columns. What version are you
> on?
>
> https://issues.apache.org/jira/browse/CASSANDRA-2894
>
> Cheers
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 26/09/2012, at 8:26 PM, Віталій Ти
I suppose the way is to convert all SSTables to JSON, then install the
previous version, convert back, and load.
2012/9/24 Arend-Jan Wijtzes
> On Thu, Sep 20, 2012 at 10:13:49AM +1200, aaron morton wrote:
> > No.
> > They use different minor file versions which are not backwards
> compatible.
>
> Thanks Aa
Actually, an easy way to bring Cassandra down is
select count(*) from A limit 1000
CQL will read everything into a List to produce the count afterwards.
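The danger here is the materialization, not the counting itself. A minimal, driver-agnostic sketch (the row generator is hypothetical, standing in for rows streamed back from a column family) contrasts the list-building count with a constant-memory one:

```python
# Hypothetical generator standing in for rows streamed from a column family.
def rows(n):
    for i in range(n):
        yield {"key": i}

# What a naive count(*) effectively does: build the whole list first,
# holding every row in memory at once.
def count_by_materializing(row_iter):
    return len(list(row_iter))

# Constant-memory alternative: count rows as they stream past.
def count_streaming(row_iter):
    return sum(1 for _ in row_iter)

print(count_by_materializing(rows(1000)))  # 1000
print(count_streaming(rows(1000)))         # 1000
```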
2012/9/26 aaron morton
> Can you provide some information on the queries and the size of the data
> they traversed?
>
> The default maximum size for a s
See my comments inline
2012/9/25 Aaron Turner
> On Mon, Sep 24, 2012 at 10:02 AM, Віталій Тимчишин
> wrote:
> > Why so?
> > What are the pluses and minuses?
> > As for me, I'm looking at the number of files in the directory:
> > 700GB/512MB*5 (files per SST) = 7000
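The arithmetic above checks out; a quick sketch (assuming, as in the mail, 5 on-disk files per SSTable):

```python
# File-count estimate: total data divided by SSTable size, times the
# number of on-disk files each SSTable contributes (5 assumed here).
total_gb = 700
sstable_mb = 512
files_per_sstable = 5

sstables = total_gb * 1024 // sstable_mb  # 1400 SSTables
files = sstables * files_per_sstable      # 7000 files in the directory
print(sstables, files)  # 1400 7000
```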
leads to strange
pauses, I suppose because the compactor is deciding what to compact next),...
2012/9/23 Aaron Turner
> On Sun, Sep 23, 2012 at 8:18 PM, Віталій Тимчишин
> wrote:
> > If you think about space, use Leveled compaction! This won't only allow you
> > to fill
If you think about space, use Leveled compaction! It will not only allow you
to fill more space, but will also shrink your data much faster in the case of
updates. Size-tiered compaction can use 3x-4x more space than there is live
data. Consider the following (oversimplified) scenario:
1) The data is
> versions of the schema?
>
>
>
>
>
> On 9/18/12 11:38 AM, "Michael Kjellman" wrote:
>
> >Thanks, I just modified the schema on the worst offending column family
> >(as determined by the .json) from 10MB to 200MB.
> >
> >Should I kick off a c
Network also matters. It would take a lot of time to send 6TB over a 1Gb
link, even fully saturating it. IMHO you can try 10Gb, but you will need to
raise your streaming/compaction limits a lot.
Also, you will need to ensure that your compaction can keep up. It is often
done in one thread and I a
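The 6TB-over-1Gb point is easy to quantify. A back-of-the-envelope sketch (decimal units, protocol overhead and compaction time ignored):

```python
# Transfer time for a fully saturated link, ignoring protocol overhead.
def transfer_hours(terabytes, link_gbps):
    bits = terabytes * 1e12 * 8
    seconds = bits / (link_gbps * 1e9)
    return seconds / 3600

print(round(transfer_hours(6, 1), 1))   # 13.3 hours on 1 Gb/s
print(round(transfer_hours(6, 10), 1))  # 1.3 hours on 10 Gb/s
```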
I started to use LeveledCompaction some time ago, and from my experience
this indicates some SSTables at lower levels than they should be. The
compaction is going on, moving them up level by level, but the total count
does not change as new data comes in.
The numbers look pretty high to me. Such numbers me
You can try increasing the streaming throttle.
2012/9/4 Dustin Wenz
> I'm following up on this issue, which I've been monitoring for the last
> several weeks. I thought people might find my observations interesting.
>
> Ever since increasing the heap size to 64GB, we've had no OOM conditions
> that
is also essential for distributing Tombstones before they are purged by
> compaction.
>
> P.S. If some points apply only to some cassandra versions, I will be happy
> to know this too.
>
> Assume everything is for version 1.X
>
> Thanks
>
> -
> Aaron Morton
Hello.
I am giving some Cassandra presentations in Kyiv and would like to check
that I am telling people the truth :)
Could the community tell me if the following points are true:
1) An operation that failed (from the client-side view) may still be applied
to the cluster
2) The coordinator does not attempt to "roll back" operati
You should read multiple "batches", specifying the last key received from
the previous batch as the first key for the next one.
For large databases I'd recommend a statistical approach (if it's feasible).
With the random partitioner it works well.
Don't read the whole DB. Knowing the whole keyspace you can read pa
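The batch iteration above can be sketched against a hypothetical key-ordered fetch function (the names are illustrative, not a real driver API; with the random partitioner you would iterate in token order rather than key order):

```python
def fetch_batch(keys, start_key, limit):
    """Stand-in for a range query: at most `limit` keys >= start_key."""
    return [k for k in keys if k >= start_key][:limit]

def scan_all(keys, batch_size=3):
    """Read the whole key range in batches; batch_size must be >= 2."""
    seen = []
    start = min(keys)
    while True:
        batch = fetch_batch(keys, start, batch_size)
        # The first key of every batch after the first is a duplicate:
        # it was the last key of the previous batch.
        new = batch if not seen else batch[1:]
        if not new:
            break
        seen.extend(new)
        start = batch[-1]  # last key received becomes the next start key
    return seen

keys = sorted(["a", "b", "c", "d", "e", "f", "g"])
print(scan_all(keys))  # ['a', 'b', 'c', 'd', 'e', 'f', 'g']
```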
Thanks a lot. It seems that a fix is committed now and will appear in the
next release, so I won't need my own patched Cassandra :)
Best regards, Vitalii Tymchyshyn.
2012/5/3 Andrey Kolyadenko
> Hi Vitalii,
>
> I sent patch.
>
>
> 2012/4/24 Віталій Тимчишин
>
>
olumnFamily='countersCF') liveRatio is 64.0
>> (just-counted was 64.0). calculation took 63355ms for 0 columns
>>
>> Looking at the comments in the code: "If it gets higher than 64
>> something is probably broken.", looks like it's probably the p
See https://issues.apache.org/jira/browse/CASSANDRA-3741
I did post a fix there that helped me.
2012/4/24 crypto five
> Hi,
>
> I have 50 million rows in a column family on a 4G RAM box. I allocated
> 2GB to cassandra.
> I have a program which traverses this CF and cleans some data there,
>
BTW: Are you sure the system is doing anything wrong? The system may save
some pages to swap without removing them from RAM, simply to keep the option
of evicting them quickly later if needed.
2012/4/14 ruslan usifov
> Hello
>
> We have a 6 node cluster (cassandra 0.8.10). On one node I increased the
> java heap size to 6GB, and n
Is the on-disk format already settled? I thought about trying the betas, but
the impossibility of upgrading to the 1.1 release stopped me.
2012/4/13 Sylvain Lebresne
> The Cassandra team is pleased to announce the release of the first release
> candidate for the future Apache Cassandra 1.1.
>
>
--
Best regards,
Hello.
We are using java async thrift client.
As for Ruby, it seems you need to use something like
http://www.mikeperham.com/2010/02/09/cassandra-and-eventmachine/
(not sure, as I know nothing about Ruby).
Best regards, Vitalii Tymchyshyn
2012/4/3 Jeff Williams
> Vitalii,
>
> Yep, that sounds l
We are using client-side compression because of the following points. Can
you confirm they are valid?
1) Server-side compression uses replication-factor times more CPU (3 times
more with a replication factor of 3).
2) The network is used compression-factor times more (as you are sending
uncompressed data over the wire).
4
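For illustration, client-side compression amounts to compressing once on the client and sending/storing the result as an opaque blob; a minimal zlib sketch (not tied to any particular driver):

```python
import zlib

def compress_value(value: bytes) -> bytes:
    # Compress once on the client; the server only sees an opaque blob.
    return zlib.compress(value, 6)

def decompress_value(blob: bytes) -> bytes:
    return zlib.decompress(blob)

payload = b"some highly repetitive column value " * 100
blob = compress_value(payload)

print(len(payload), len(blob))  # compressed blob is much smaller
assert decompress_value(blob) == payload  # lossless round trip
```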
Yep, I think I can. Here you are: https://github.com/tivv/cassandra-balancer
2012/1/15 Carlos Pérez Miguel
> If you can share it, that would be great
>
> Carlos Pérez Miguel
>
>
>
> 2012/1/15 Віталій Тимчишин :
> > Yep. Have written groovy script this friday to perf
Yep. I wrote a Groovy script this Friday to perform autobalancing :) I am
going to add it to my Jenkins soon.
2012/1/15 Maxim Potekhin
> I see. Sure, that's a bit more complicated and you'd have to move tokens
> after adding a machine.
>
> Maxim
>
>
>
There's nothing wrong with it for 3 nodes. It's a problem for a growing
cluster of 20+ nodes.
2012/1/14 Maxim Potekhin
> I'm just wondering -- what's wrong with manual specification of tokens?
> I'm so glad I did it and have not had problems with balancing and all.
>
> Before I was indeed stuck with 25/25/
Actually, it seems to me that "largest" means the node with the most data,
not the largest range, which, with replication involved, makes the feature
useless.
2012/1/13 David McNelis
> The documentation for that section needs to be updated...
>
> What happens is that if you just autobootstrap without setting a token it
> wil
2012/1/4 Vitalii Tymchyshyn
> On 04.01.12 at 14:25, Radim Kolar wrote:
>
> > So, what are Cassandra's memory requirements? Is it 1% or 2% of disk data?
>> It depends on the number of rows you have. If you have a lot of rows then
>> the primary memory eaters are index sampling data and bloom filters. I use
>>
2012/1/5 Michael Cetrulo
> in a traditional database it's not a good idea to have hundreds of
> tables, but is it also bad to have hundreds of column families in cassandra?
> thank you.
>
As far as I can see, this may raise memory requirements for you, since you
need to have index/bloom filter
Hello.
We have been using Cassandra for some time in our project. Currently we are
on the 1.1 trunk (it was an accidental migration, but since it's hard to
migrate back and it's performing nicely enough, we are staying on 1.1).
During the New Year holidays one of the servers produced a number of OOM
messages in