Ohh, I see now. That makes sense. Thanks a lot.

On Fri, Jun 29, 2018 at 9:17 PM, Randy Lynn <rl...@getavail.com> wrote:

> Data is only lost if you stop (or terminate) the instance; the storage
> survives ordinary reboots just fine.
>
> On Fri, Jun 29, 2018 at 10:39 AM, Pradeep Chhetri <prad...@stashaway.com>
> wrote:
>
>> Isn't NVMe storage instance storage, i.e. the data will be lost if the
>> instance restarts? How are you going to make sure that there is no data
>> loss in case an instance gets rebooted?
>>
>> On Fri, 29 Jun 2018 at 7:00 PM, Randy Lynn <rl...@getavail.com> wrote:
>>
>>> GPFS - Rahul FTW! Thank you for your help!
>>>
>>> Yes, Pradeep - migrating from r3 to i3 for the NVMe storage. I did not
>>> have the benefit of doing benchmarks, but we're moving from 1,500 IOPS,
>>> so I'm confident we'll get better throughput.
>>>
>>> On Fri, Jun 29, 2018 at 7:21 AM, Rahul Singh <
>>> rahul.xavier.si...@gmail.com> wrote:
>>>
>>>> Totally agree. GPFS for the win. The EC2 multi-region snitch is really
>>>> an automation convenience, like Ansible or Puppet: it infers DC and
>>>> rack for you. Unless you have two orders of magnitude more servers than
>>>> you do now, you don’t need it.
>>>>
>>>> Rahul
>>>> On Jun 29, 2018, 6:18 AM -0400, kurt greaves <k...@instaclustr.com>,
>>>> wrote:
>>>>
>>>> Yes. You would just end up with a rack named differently from the AZ.
>>>> That's not a problem, as racks are just logical. I would recommend
>>>> migrating all your DCs to GPFS, though, for consistency.
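>>>>
>>>> For reference, a minimal GPFS setup would look something like this (DC
>>>> and rack names here are illustrative):
>>>>
>>>>   # cassandra.yaml
>>>>   endpoint_snitch: GossipingPropertyFileSnitch
>>>>
>>>>   # cassandra-rackdc.properties, per node
>>>>   dc=us-east
>>>>   rack=1a   # keep the rack name Ec2Snitch was already reporting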
>>>>
>>>> On Fri., 29 Jun. 2018, 09:04 Randy Lynn, <rl...@getavail.com> wrote:
>>>>
>>>>> So we have two data centers already running:
>>>>>
>>>>> AP-SYDNEY and US-EAST. I'm using Ec2Snitch over a site-to-site
>>>>> tunnel, and I want to move the current US-EAST from AZ 1a to 1e.
>>>>> I know all the docs say to use Ec2MultiRegionSnitch for multi-DC.
>>>>>
>>>>> I like the GPFS idea. Would that work with multi-DC too? What's the
>>>>> downside? nodetool status would report a rack of 1a even though the
>>>>> node is in 1e?
>>>>>
>>>>> Thanks in advance for the help/thoughts!!
>>>>>
>>>>>
>>>>> On Thu, Jun 28, 2018 at 6:20 PM, kurt greaves <k...@instaclustr.com>
>>>>> wrote:
>>>>>
>>>>>> You will need a repair across both DCs, as rebuild will not stream
>>>>>> all replicas; so unless you can guarantee you were perfectly
>>>>>> consistent at the time of the rebuild, you'll want to run a repair
>>>>>> after the rebuild.
>>>>>>
>>>>>> On another note, you could just replace the nodes but use GPFS
>>>>>> instead of Ec2Snitch, keeping the same rack name.
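>>>>>>
>>>>>> Roughly, on each node in the new DC (the source DC name here is
>>>>>> illustrative):
>>>>>>
>>>>>>   # stream data from the existing DC
>>>>>>   nodetool rebuild -- us_east_old
>>>>>>
>>>>>>   # then a full repair to catch replicas rebuild didn't stream
>>>>>>   nodetool repair -full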
>>>>>>
>>>>>> On Fri., 29 Jun. 2018, 00:19 Rahul Singh, <
>>>>>> rahul.xavier.si...@gmail.com> wrote:
>>>>>>
>>>>>>> Parallel load is the best approach; then switch your data access
>>>>>>> code to only hit the new hardware. After you verify that there are
>>>>>>> no local reads/writes on the old DC and that updates are arriving
>>>>>>> only via replication, go ahead and change the replication factor on
>>>>>>> the keyspace to have zero replicas in the old DC. Then you can
>>>>>>> decommission it.
>>>>>>>
>>>>>>> This way you are a hundred percent sure that you aren’t missing any
>>>>>>> new data. There's no need for a DC-to-DC repair, but a repair is
>>>>>>> always healthy.
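>>>>>>>
>>>>>>> A sketch of that last step (keyspace and DC names illustrative;
>>>>>>> leaving the old DC out of the map gives it zero replicas):
>>>>>>>
>>>>>>>   ALTER KEYSPACE my_keyspace WITH replication =
>>>>>>>     {'class': 'NetworkTopologyStrategy', 'new_dc': 3};
>>>>>>>
>>>>>>>   # then, on each node in the old DC:
>>>>>>>   nodetool decommission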
>>>>>>>
>>>>>>> Rahul
>>>>>>> On Jun 28, 2018, 9:15 AM -0500, Randy Lynn <rl...@getavail.com>,
>>>>>>> wrote:
>>>>>>>
>>>>>>> Already running with Ec2Snitch.
>>>>>>>
>>>>>>> My original thought was a new DC parallel to the current one, and
>>>>>>> then decommissioning the old DC.
>>>>>>>
>>>>>>> Also, my data load is small right now. I know "small" is a relative
>>>>>>> term; each node is carrying about 6GB.
>>>>>>>
>>>>>>> So given the data size, would you go with a parallel DC, or let the
>>>>>>> new AZ carry a heavy load until the others are migrated over, and
>>>>>>> then run a repair to clean up the replicas?
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Jun 28, 2018 at 10:09 AM, Rahul Singh <
>>>>>>> rahul.xavier.si...@gmail.com> wrote:
>>>>>>>
>>>>>>>> You don’t have to use Ec2Snitch on AWS, but if you have already
>>>>>>>> started with it, switching snitches may put a node in a different
>>>>>>>> DC.
>>>>>>>>
>>>>>>>> If your data density won’t be ridiculous, you could add three nodes
>>>>>>>> to a different DC/region and then sync up. After the new DC is
>>>>>>>> operational, you can remove one node at a time from the old DC and
>>>>>>>> at the same time add one to the new one.
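>>>>>>>>
>>>>>>>> The "sync up" would be roughly (keyspace and DC names
>>>>>>>> illustrative):
>>>>>>>>
>>>>>>>>   -- add the new DC to the keyspace's replication first
>>>>>>>>   ALTER KEYSPACE my_keyspace WITH replication =
>>>>>>>>     {'class': 'NetworkTopologyStrategy', 'old_dc': 3, 'new_dc': 3};
>>>>>>>>
>>>>>>>>   # then, on each node in the new DC, stream from the old one:
>>>>>>>>   nodetool rebuild -- old_dc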
>>>>>>>>
>>>>>>>> Rahul
>>>>>>>> On Jun 28, 2018, 9:03 AM -0500, Randy Lynn <rl...@getavail.com>,
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>> I have a 6-node cluster I'm migrating to the new i3 types.
>>>>>>>> But at the same time I want to migrate to a different AZ.
>>>>>>>>
>>>>>>>> What happens if I do the "running node replace method" with one
>>>>>>>> node at a time moving to the new AZ? Meaning, temporarily I'll
>>>>>>>> have:
>>>>>>>>
>>>>>>>> 5 nodes in AZ 1c
>>>>>>>> 1 new node in AZ 1e
>>>>>>>>
>>>>>>>> I'll wash-rinse-repeat till all 6 are on the new machine type and
>>>>>>>> in the new AZ.
>>>>>>>>
>>>>>>>> Any thoughts about whether this gets weird with Ec2Snitch and an
>>>>>>>> RF of 3?
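>>>>>>>>
>>>>>>>> By "running node replace method" I mean starting each new i3 node
>>>>>>>> with the replace flag, e.g. in cassandra-env.sh (the old node's IP
>>>>>>>> here is illustrative):
>>>>>>>>
>>>>>>>>   JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=10.0.1.23"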
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>
>
>
> --
> Randy Lynn
> rl...@getavail.com
>
> office: 859.963.1616 ext 202
> 163 East Main Street - Lexington, KY 40507 - USA
> getavail.com <https://www.getavail.com/>
>
