When I upgraded my system from 1.2.x to 2.0.x there was a simple hint:
never upgrade until the target release has at least a 5 in the third
position; versions before x.x.5 are unstable and aren't ready for
production use. I don't know if it's still true, but be careful ;)
Regards
Olek
2014-09-17 20:1
ed
>
> no real benefit I can think of for doing the delete first.
>
> On Sep 10, 2014, at 2:25 PM, olek.stas...@gmail.com wrote:
>
>> I think so.
>> this is how I see it:
>> at the very beginning you have a line like this in the data file:
>> {key: [col_name, col_value, d
in the second scenario you have only one line; in the first, two.
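Roughly what I mean, shown as sstable2json-style output (just a sketch;
the key, column names and timestamps are placeholders and the exact
format depends on the Cassandra version):

    # scenario 1: DELETE then INSERT, each flushed to its own SSTable --
    # until compaction both cells sit on disk: the tombstone (marked "d")
    # and the new value
    {"key": "6b6579", "columns": [["col_name", "530081a0", 1410366000000000, "d"]]}
    {"key": "6b6579", "columns": [["col_name", "col_value", 1410366100000000]]}

    # scenario 2: plain INSERT -- a single cell
    {"key": "6b6579", "columns": [["col_name", "col_value", 1410366100000000]]}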
I hope my post is correct ;)
regards,
Olek
2014-09-10 18:56 GMT+02:00 Michal Budzyn :
> Would the factor before compaction always be 2?
>
> On Wed, Sep 10, 2014 at 6:38 PM, olek.stas...@gmail.com
> wrote:
>>
>>
IMHO, delete then insert will take twice as much disk space as a
single insert, but after compaction the difference will disappear.
This was true in versions prior to 2.0, and it should still work this
way. But maybe someone will correct me if I'm wrong.
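You can see it for yourself with something like this (a rough sketch;
keyspace, table and data paths are placeholders, and the directory
layout may differ between versions):

    cqlsh> DELETE FROM myks.mytab WHERE key = 'k1';
    $ nodetool flush myks mytab        # tombstone lands in its own SSTable
    cqlsh> INSERT INTO myks.mytab (key, val) VALUES ('k1', 'v1');
    $ nodetool flush myks mytab        # new value lands in a second SSTable
    $ du -sh /var/lib/cassandra/data/myks/mytab/   # roughly twice the size
    $ nodetool compact myks mytab      # major compaction merges the SSTables
    $ du -sh /var/lib/cassandra/data/myks/mytab/   # difference mostly gone
    # note: the tombstone itself is only purged once gc_grace_seconds has passed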
Cheers,
Olek
2014-09-10 18:30 GMT+02:00 Mi
Bump one more time, could anybody help me?
regards
Olek
2014-03-19 16:44 GMT+01:00 olek.stas...@gmail.com :
> Bump, could anyone comment on this behaviour? Is it correct, or should I
> create a Jira task for these problems?
> regards
> Olek
>
> 2014-03-18 16:49 GMT+01:00 olek.stas...
Bump, could anyone comment on this behaviour? Is it correct, or should I
create a Jira task for these problems?
regards
Olek
2014-03-18 16:49 GMT+01:00 olek.stas...@gmail.com :
> Oh, one more question: what should the configuration be for storing the
> system_traces keyspace? Should it be replicated or
Oh, one more question: what should the configuration be for storing the
system_traces keyspace? Should it be replicated or stored locally?
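For reference, this is what it looks like on one of the nodes right now
(a sketch; as far as I know the default is SimpleStrategy with RF 2, and
the DC names below are placeholders):

    cqlsh> DESCRIBE KEYSPACE system_traces;
    CREATE KEYSPACE system_traces WITH replication =
      {'class': 'SimpleStrategy', 'replication_factor': '2'};

    -- if it should follow the data keyspaces into both DCs, something like:
    cqlsh> ALTER KEYSPACE system_traces WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': 2, 'DC2': 2};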
Regards
Olek
2014-03-18 16:47 GMT+01:00 olek.stas...@gmail.com :
> OK, I've dropped all the system keyspaces, rebuilt the cluster and recovered the
> schema; now every
Could you help me: how can I safely add a new DC to the cluster?
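My rough understanding of the procedure so far is the following (please
correct me if it's wrong; snitch settings, keyspace and DC names are just
placeholders):

    # on each new node, before starting it:
    #   cassandra.yaml:                auto_bootstrap: false
    #   cassandra-rackdc.properties:   dc=DC2
    # start the new nodes, then make the keyspaces span both DCs:
    cqlsh> ALTER KEYSPACE myks WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};
    # stream the existing data into the new DC, one node at a time:
    $ nodetool rebuild DC1
    # and finish with a repair once the rebuilds are done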
Regards
Aleksander
2014-03-14 18:28 GMT+01:00 olek.stas...@gmail.com :
> OK, I'll do this during the weekend; I'll give you feedback on Monday.
> Regards
> Aleksander
>
> 14 mar 2014 18:15 "Robert Coli
OK, I'll do this during the weekend; I'll give you feedback on Monday.
Regards
Aleksander
14 Mar 2014 18:15 "Robert Coli" wrote:
> On Fri, Mar 14, 2014 at 12:40 AM, olek.stas...@gmail.com <
> olek.stas...@gmail.com> wrote:
>
>> OK, I see, so the
system_traces? Should it be
removed and recreated? What data is it holding?
best regards
Aleksander
2014-03-14 0:14 GMT+01:00 Robert Coli :
> On Thu, Mar 13, 2014 at 1:20 PM, olek.stas...@gmail.com
> wrote:
>>
>> Huh,
>> you mean json dump?
>
>
> If you'r
Huh,
you mean json dump?
Regards
Aleksander
2014-03-13 18:59 GMT+01:00 Robert Coli :
> On Thu, Mar 13, 2014 at 2:05 AM, olek.stas...@gmail.com
> wrote:
>>
>> Bump, are there any solutions to bring my cluster back to schema
>> consistency?
>> I've 6 node c
Bump, are there any solutions to bring my cluster back to schema consistency?
I have a 6-node cluster with exactly six versions of the schema; how do I deal with it?
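This is roughly what nodetool describecluster reports (output trimmed;
IPs and version UUIDs are placeholders). I'm also wondering whether
running nodetool resetlocalschema node by node is the right way out:

    $ nodetool describecluster
    Cluster Information:
            ...
            Schema versions:
                    1aaa...: [192.168.1.1]
                    2bbb...: [192.168.1.2]
                    ...  (six different versions, one per node)

    $ nodetool resetlocalschema   # makes the local node re-pull the schema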
regards
Aleksander
2014-03-11 14:36 GMT+01:00 olek.stas...@gmail.com :
> Didn't help :)
> thanks and regards
> Aleksande
Didn't help :)
thanks and regards
Aleksander
2014-03-11 14:14 GMT+01:00 Duncan Sands :
> On 11/03/14 14:00, olek.stas...@gmail.com wrote:
>>
>> I plan to install 2.0.6 as soon as it is available in the DataStax rpm
>> repo.
>> But how do I deal with the schema inconsi
sed by CASSANDRA-6700 then you are in luck: it is fixed in 2.0.6).
>
> Best wishes, Duncan.
>
>
> On 11/03/14 13:30, olek.stas...@gmail.com wrote:
>>
>> Hi All,
>> I've faced an issue with Cassandra 2.0.5.
>> I have a 6-node cluster with the random partiti
Hi All,
I've faced an issue with Cassandra 2.0.5.
I have a 6-node cluster with the random partitioner, still using tokens
instead of vnodes.
Because we're changing hardware, we decided to migrate the cluster to 6 new
machines and to change the partitioning to vnodes rather than
token-based.
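For the new machines that means roughly these cassandra.yaml settings (a
sketch; 256 is the usual default for vnodes):

    # cassandra.yaml on the new (vnode) machines:
    num_tokens: 256
    # initial_token: left unset when using vnodes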
I've followed instructi
Seems good. I'll discuss it with the data owners and we'll choose the best method.
Best regards,
Aleksander
4 Feb 2014 19:40 "Robert Coli" wrote:
> On Tue, Feb 4, 2014 at 12:21 AM, olek.stas...@gmail.com <
> olek.stas...@gmail.com> wrote:
>
>> I don't know
/writes/repairs.
Could you please briefly describe how to recover the data? I have a
problem with the scenario described at this link:
http://thelastpickle.com/blog/2011/12/15/Anatomy-of-a-Cassandra-Partition.html ;
I can't apply that solution to my case.
regards
Olek
2014-02-03 Robert Coli :
> On Mon,
2014-02-03 Robert Coli :
> On Mon, Feb 3, 2014 at 1:02 PM, olek.stas...@gmail.com
> wrote:
>>
>> Today I've noticed that the oldest files with broken values appear during
>> repair (we run a repair once a week on each node). Maybe it's the repair
>> operation, w
nk how to re-gather them?
best regards
Aleksander
PS: I like your link Rob, I'll pin it over my desk ;) In Oracle there
was a rule: never deploy an RDBMS before release 2 ;)
2014-02-03 Robert Coli :
> On Mon, Feb 3, 2014 at 12:51 AM, olek.stas...@gmail.com
> wrote:
>>
>> We
che.org/jira/browse/CASSANDRA-6527
>
>
> On Mon, Feb 3, 2014 at 2:51 AM, olek.stas...@gmail.com
> wrote:
> > Hi All,
> > We've faced a very similar effect after upgrading from 1.1.7 to 2.0 (via
> > 1.2.10). Probably after upgradesstables (but it's only a gue
Hi All,
We've faced a very similar effect after upgrading from 1.1.7 to 2.0 (via
1.2.10). Probably after upgradesstables (but it's only a guess,
because we noticed the problem a few weeks later): some rows became
tombstoned. They just disappeared from query results. After
investigation I noticed that
Hello,
I'm facing bug https://issues.apache.org/jira/browse/CASSANDRA-6277.
After migrating to 2.0.2 I can't run repair on my cluster (six
nodes). Repair on the biggest CF breaks with the error described in the Jira.
I know there is probably a fix in the repository, but it's not
included in any
Yes, as I wrote in the first e-mail: when I removed the key cache file,
Cassandra started without further problems.
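For anyone hitting the same thing: removing the saved cache looked
roughly like this (paths and service commands are the defaults on my
install), and capping the key cache in cassandra.yaml should keep it
from growing that big again:

    $ sudo service cassandra stop
    $ rm /var/lib/cassandra/saved_caches/*KeyCache*
    $ sudo service cassandra start
    # optional, in cassandra.yaml:
    #   key_cache_size_in_mb: 100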
regards
Olek
2013/11/13 Robert Coli :
>
> On Wed, Nov 13, 2013 at 12:35 AM, Tom van den Berge
> wrote:
>>
>> I'm having the same problem, after upgrading from 1.2.3 to 1.2.10.
>>
>> I can r
Hello,
I'm facing an OOM on reading the key cache.
The cluster configuration is as follows:
- 6 machines with 8 GB RAM each and three 150 GB disks each
- default heap configuration
- default key cache configuration
- the biggest keyspace is about 500 GB in size (RF: 2, so in fact there is
250 GB of raw data).
After upgrading fir