Re: [ceph-users] question on reusing OSD

2015-09-16 Thread Robert LeBlanc
My understanding of growing file systems is the same as yours: they can only grow at the end, not the beginning. In addition to that, having partition 2 before partition 1 just cries to me to have it fixed, but that is just aesthetic. Because the weight
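For the numbering point, GPT entries can be re-sorted without moving any data. A minimal sketch, assuming the OSD disk is /dev/sdb (the device name is illustrative, not from the thread):

    # Sort the GPT entries so partition numbers follow on-disk order;
    # only the table is rewritten, partition contents are untouched.
    sgdisk --sort /dev/sdb

    # Have the kernel re-read the table (watch out for anything that
    # references partitions by number, e.g. fstab entries or ceph.conf).
    partprobe /dev/sdb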

Re: [ceph-users] question on reusing OSD

2015-09-16 Thread John-Paul Robinson
Christian, Thanks for the feedback. I guess I'm wondering about step 4, "clobber partition, leaving data intact, and grow partition and the file system as needed". My understanding of xfs_growfs is that the free space must be at the end of the existing file system. In this case the existing partition
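To illustrate the constraint: xfs_growfs only extends a filesystem into space beyond its current end, so the underlying partition must first be enlarged at its end. A minimal sketch, with /dev/sdb2 as the data partition and /var/lib/ceph/osd/ceph-12 as its mount point (both placeholders):

    # First enlarge /dev/sdb2 itself, which is only possible if the free
    # space sits directly after it on the disk (e.g. delete and re-create
    # the partition with the same start sector and a later end sector).

    # Then grow XFS while mounted; it always grows at the end.
    xfs_growfs /var/lib/ceph/osd/ceph-12
    df -h /var/lib/ceph/osd/ceph-12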

Re: [ceph-users] question on reusing OSD

2015-09-16 Thread Christian Balzer
Hello, On Wed, 16 Sep 2015 07:21:26 -0500 John-Paul Robinson wrote: > The move journal, partition resize, grow file system approach would > work nicely if the spare capacity were at the end of the disk. That shouldn't matter; you can "safely" lose your journal in controlled circumstances.
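A sketch of those "controlled circumstances", i.e. flushing and re-creating an OSD journal so nothing in flight is lost. OSD id 12 and the sysvinit service syntax are assumptions, not taken from the thread:

    # Keep the cluster from rebalancing while the OSD is down.
    ceph osd set noout

    # Stop the OSD cleanly and flush its journal into the object store.
    service ceph stop osd.12
    ceph-osd -i 12 --flush-journal

    # Re-point the journal (new partition, file, symlink, ...), then
    # initialise it and bring the OSD back.
    ceph-osd -i 12 --mkjournal
    service ceph start osd.12

    ceph osd unset noout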

Re: [ceph-users] question on reusing OSD

2015-09-16 Thread John-Paul Robinson (Campus)
So I just realized I had described the partition error incorrectly in my initial post. The journal was placed at the 800GB mark, leaving the 2TB data partition at the end of the disk. (See my follow-up to Lionel for details.) I'm working to correct that so I have a single large partition the size

Re: [ceph-users] question on reusing OSD

2015-09-16 Thread John-Paul Robinson
The move journal, partition resize, grow file system approach would work nicely if the spare capacity were at the end of the disk. Unfortunately, the gdisk (0.8.1) end-of-disk location bug caused the journal placement to be at the 800GB mark, leaving the largest remaining partition at the end of the disk.
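A quick way to check whether a given disk was hit by that misplacement, assuming /dev/sdb with the journal as partition 1 and the data partition as partition 2 (device and partition numbers are placeholders):

    # Print the GPT: the start/end sectors show the journal sitting near
    # the 800GB mark and the data partition running to the end of the disk.
    sgdisk -p /dev/sdb

    # Per-partition detail if needed.
    sgdisk -i 1 /dev/sdb
    sgdisk -i 2 /dev/sdb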

Re: [ceph-users] question on reusing OSD

2015-09-15 Thread Lionel Bouton
On 16/09/2015 01:21, John-Paul Robinson wrote: > Hi, > > I'm working to correct a partitioning error from when our cluster was > first installed (ceph 0.56.4, ubuntu 12.04). This left us with 2TB > partitions for our OSDs, instead of the 2.8TB actually available on > disk, a 29% space hit. (The

[ceph-users] question on reusing OSD

2015-09-15 Thread John-Paul Robinson
Hi, I'm working to correct a partitioning error from when our cluster was first installed (ceph 0.56.4, ubuntu 12.04). This left us with 2TB partitions for our OSDs, instead of the 2.8TB actually available on disk, a 29% space hit. (The error was due to a gdisk bug that mis-computed the end of the disk.)
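For the 29% figure: (2.8 TB - 2.0 TB) / 2.8 TB is roughly 0.29. A short sketch for confirming the gap on an OSD host, with /dev/sdb and the mount point as placeholders:

    # Raw capacity of the drive (bytes) versus what the OSD filesystem sees.
    blockdev --getsize64 /dev/sdb
    df -h /var/lib/ceph/osd/ceph-12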