Hi all,

We currently have an 8TB, 4-brick replicated volume across our 4 servers, and
we're at 80% capacity. The maximum disk size on our host is 10TB. I'm starting
to think about what happens as we approach 100%, and I see two options.

One option is to create a second new 4-brick replicated volume and start
dealing with symlinks in our webapp so it knows which volume each piece of
data is on. That's a bit of a pain (though not too much) on the sysops side.
Right now the whole volume mount is symlinked to a single location in the
webapps (an uploads/ directory) and life is good. After such a split, I'd
have to break uploads/ into per-year symlinks ("yeardir" symlinks), make sure
future yeardir symlinks are created ahead of time and point to the right
volume, and so on.
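To make the yeardir-symlink idea concrete, here's a minimal sketch. The paths and volume names are hypothetical placeholders (the demo runs under a temp directory; in production these would be the real webapp root and the two GlusterFS mount points):

```shell
# Demo under a temp dir; in production, VOL1/VOL2 would be the two
# GlusterFS volume mounts and UPLOADS the webapp's uploads directory.
ROOT=$(mktemp -d)
VOL1="$ROOT/mnt/gvol1"        # existing replicated volume (hypothetical)
VOL2="$ROOT/mnt/gvol2"        # new second volume (hypothetical)
UPLOADS="$ROOT/webapp/uploads"
mkdir -p "$VOL1" "$VOL2" "$UPLOADS"

# Older years stay on the original volume...
for year in 2019 2020; do
  mkdir -p "$VOL1/$year"
  ln -sfn "$VOL1/$year" "$UPLOADS/$year"
done

# ...while future yeardirs are created ahead of time on the new volume.
for year in 2021 2022; do
  mkdir -p "$VOL2/$year"
  ln -sfn "$VOL2/$year" "$UPLOADS/$year"
done

readlink "$UPLOADS/2020"   # resolves to "$VOL1/2020"
readlink "$UPLOADS/2022"   # resolves to "$VOL2/2022"
```

The webapp keeps writing to uploads/<year>/ as before; only the symlink targets decide which volume the data lands on.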

The other direction would be converting the replicated volume to a
distributed replicated one
https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#creating-distributed-replicated-volumes,
but I'm a bit scared to do that with production data (even after testing, of
course), since I've never dealt with a distributed replicated volume.

   1. Is it possible to convert our existing volume on the fly by adding 4
   bricks but keeping the replica count at 4?
   2. What happens if bricks 5-8 (the second replica set) go down for
   whatever reason or lose quorum, while replica set #1 (bricks 1-4) is
   still up? Does the whole combined volume become unavailable, or only the
   portion whose data resides on replica set #2?
   3. Any other gotchas?
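For reference, my understanding of what question 1 would look like on the CLI (a sketch only; volume, server, and brick names below are hypothetical placeholders, and I haven't run this against a real cluster):

```
# Adding 4 new bricks while keeping "replica 4" should turn the volume
# into a 2 x 4 distributed-replicate volume.
gluster volume add-brick myvol replica 4 \
    server5:/data/brick1 server6:/data/brick1 \
    server7:/data/brick1 server8:/data/brick1

# Then spread existing files across both replica sets:
gluster volume rebalance myvol start
gluster volume rebalance myvol status
```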

Thank you very much in advance.

Sincerely,
Artem

--
Founder, Android Police <http://www.androidpolice.com>, APK Mirror
<http://www.apkmirror.com/>, Illogical Robot LLC
beerpla.net | @ArtemR
<http://twitter.com/ArtemR>