Bumping one more time: could anybody help me?
regards
Olek

2014-03-19 16:44 GMT+01:00 olek.stas...@gmail.com <olek.stas...@gmail.com>:
> Bump, could anyone comment on this behaviour? Is it correct, or
> should I create a Jira issue for these problems?
> regards
> Olek
>
> 2014-03-18 16:49 GMT+01:00 olek.stas...@gmail.com <olek.stas...@gmail.com>:
>> Oh, one more question: what should the configuration be for the
>> system_traces keyspace? Should it be replicated or stored locally?
>> Regards
>> Olek
>>
>> 2014-03-18 16:47 GMT+01:00 olek.stas...@gmail.com <olek.stas...@gmail.com>:
>>> OK, I've dropped all system keyspaces, rebuilt the cluster, and
>>> recovered the schema; now everything looks OK.
>>> But the main goal of the operation was to add a new datacenter to
>>> the cluster. After starting the node in the new datacenter, two
>>> schema versions appear: one version is held by the 6 nodes of the
>>> first datacenter, the second by the newly added node in the new
>>> datacenter. Something like this:
>>> nodetool status
>>> Datacenter: datacenter1
>>> =======================
>>> Status=Up/Down
>>> |/ State=Normal/Leaving/Joining/Moving
>>> --  Address      Load       Tokens  Owns   Host ID                               Rack
>>> UN  192.168.1.1  50.19 GB   1       0.5%   c9323f38-d9c4-4a69-96e3-76cd4e1a204e  rack1
>>> UN  192.168.1.2  54.83 GB   1       0.3%   ad1de2a9-2149-4f4a-aec6-5087d9d3acbb  rack1
>>> UN  192.168.1.3  51.14 GB   1       0.6%   0ceef523-93fe-4684-ba4b-4383106fe3d1  rack1
>>> UN  192.168.1.4  54.31 GB   1       0.7%   39d15471-456d-44da-bdc8-221f3c212c78  rack1
>>> UN  192.168.1.5  53.36 GB   1       0.3%   7fed25a5-e018-43df-b234-47c2f118879b  rack1
>>> UN  192.168.1.6  39.89 GB   1       0.1%   9f54fad6-949a-4fa9-80da-87efd62f3260  rack1
>>> Datacenter: DC1
>>> ===============
>>> Status=Up/Down
>>> |/ State=Normal/Leaving/Joining/Moving
>>> --  Address      Load       Tokens  Owns   Host ID                               Rack
>>> UN  192.168.1.7  100.77 KB  256     97.4%  ddb1f913-d075-4840-9665-3ba64eda0558  RAC1
>>>
>>> describe cluster;
>>> Cluster Information:
>>>    Name: Metadata Cluster
>>>    Snitch: org.apache.cassandra.locator.GossipingPropertyFileSnitch
>>>    Partitioner: org.apache.cassandra.dht.RandomPartitioner
>>>    Schema versions:
>>> 8fe34841-4f2a-3c05-97f2-15dd413d71dc: [192.168.1.7]
>>>
>>> 4ad381b6-df5a-3cbc-ba5a-0234b74d2383: [192.168.1.1, 192.168.1.2,
>>> 192.168.1.3, 192.168.1.4, 192.168.1.5, 192.168.1.6]
>>>
>>> All keyspaces are now configured to keep data in datacenter1.
>>> I assume that this is not correct behaviour, is that right?
>>> Could you help me: how can I safely add the new DC to the cluster?
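>>>
>>> I guess the fix involves something like the following (just a sketch:
>>> my_ks and the replication factors are placeholders, and I'm not sure
>>> this is the safe order):
>>>
>>> ALTER KEYSPACE my_ks WITH replication =
>>>   {'class': 'NetworkTopologyStrategy', 'datacenter1': 3, 'DC1': 1};
>>>
>>> and then, once the schema agrees, on the new node:
>>>
>>> nodetool rebuild datacenter1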
>>>
>>> Regards
>>> Aleksander
>>>
>>>
>>> 2014-03-14 18:28 GMT+01:00 olek.stas...@gmail.com <olek.stas...@gmail.com>:
>>>> OK, I'll do this over the weekend and give you feedback on Monday.
>>>> Regards
>>>> Aleksander
>>>>
>>>> On 14 Mar 2014 18:15, "Robert Coli" <rc...@eventbrite.com> wrote:
>>>>
>>>>> On Fri, Mar 14, 2014 at 12:40 AM, olek.stas...@gmail.com
>>>>> <olek.stas...@gmail.com> wrote:
>>>>>>
>>>>>> OK, I see. So the data files stay in place; I just have to stop
>>>>>> Cassandra on the whole cluster, remove the system schema, then
>>>>>> start the cluster and recreate all keyspaces with all column
>>>>>> families? The data will then be loaded automatically from the
>>>>>> existing SSTables, right?
>>>>>
>>>>>
>>>>> Right. If you have clients reading while loading the schema, they may get
>>>>> exceptions.
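>>>>>
>>>>> Roughly, on every node, something like this (a sketch, not a tested
>>>>> runbook; paths assume the default /var/lib/cassandra layout, and
>>>>> schema.cql stands in for wherever you keep your schema DDL):
>>>>>
>>>>> # stop Cassandra on all nodes first
>>>>> sudo service cassandra stop
>>>>> # remove only the schema tables, not the rest of the system keyspace
>>>>> rm -rf /var/lib/cassandra/data/system/schema_keyspaces
>>>>> rm -rf /var/lib/cassandra/data/system/schema_columnfamilies
>>>>> rm -rf /var/lib/cassandra/data/system/schema_columns
>>>>> sudo service cassandra start
>>>>>
>>>>> and then, from a single node:
>>>>>
>>>>> cqlsh -f schema.cql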
>>>>>
>>>>>>
>>>>>> So one more question: what about the system_traces keyspace?
>>>>>> Should it be removed and recreated? What data does it hold?
>>>>>
>>>>>
>>>>> It holds data about tracing, a profiling feature. It's safe to nuke.
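>>>>>
>>>>> If you'd rather empty it than remove its files, truncating its two
>>>>> tables from cqlsh should do it (a sketch):
>>>>>
>>>>> TRUNCATE system_traces.events;
>>>>> TRUNCATE system_traces.sessions;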
>>>>>
>>>>> =Rob
>>>>>
