Hi,

I am trying to find a way to identify sparse files properly and quickly, and then a way to rectify the situation.

Any trick to do this?

The problem is that over time I seem to end up with lots of them, and because I have to keep multiple servers in sync, the sparse files make the sync painful, huge, and slow. I am talking multiple GBs here.

So far the only way I have done it is with rsync and the -S option, but then the sync takes a lot of time, and when you need to sync multiple boxes several times per hour, it can no longer keep up: one run is not finished when the next one is supposed to start.
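For reference, this is roughly the invocation I have been using (hostnames and paths are made up):

    # -a = archive mode; -S = recreate the holes on the
    # destination instead of writing the zero runs out in full.
    rsync -aS /data/ backup1:/data/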

The other way I found is to use dump and then restore, but that is obviously painful to do on live systems. I need a way to clean up the source so that the sync systems can do their thing easily. Sure, I could simply sync the sparse files as-is, but then the destinations run out of space as the sparse files get too big over time.
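For what it's worth, this is roughly the dump/restore pipeline I meant, from memory (device and directory names are made up, and you would normally do this onto a freshly prepared filesystem); restore does recreate the holes, but it really wants a quiet source:

    # Dump the filesystem to stdout and restore it elsewhere;
    # restore writes holes rather than the stored zero blocks.
    cd /newdisk && dump -0af - /dev/rsd0a | restore -rf -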

Google also suggested that the FIBMAP ioctl might have done the job, but that was killed by Theo on 2007/06/02 09:14:36. I assume for many good reasons, so I didn't pursue it any further.

Then maybe filefrag -v might work, but I had no luck there either.

So I am running out of ideas. Maybe there isn't any way to do this, but I hope there is.

If it is not possible to correct the problem in a cron job or something similar, how could I at least find sparse files efficiently?
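In case it clarifies what I'm after, the closest I've come to a detector is comparing allocated blocks against the apparent size, something like this sketch (it assumes the BSD stat(1) with -f; the path is made up, and filesystems that pack or compress data could fool it):

    # Print files whose allocated space (st_blocks, in 512-byte
    # units) is smaller than their apparent size, i.e. files
    # that contain holes.
    find /data -type f -exec sh -c '
        for f do
            blocks=$(stat -f %b "$f")
            size=$(stat -f %z "$f")
            if [ $((blocks * 512)) -lt "$size" ]; then
                echo "$f"
            fi
        done
    ' sh {} +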

At a minimum, if I could find the files getting out of control, I could delete them and copy them from the source again, which would reduce the sparse-file problem.
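Or, rather than delete-and-recopy, I've been wondering if I could rewrite each offender in place with a local rsync -S, along these lines (a hypothetical helper; it assumes the file is not being written to and that there is room for the temporary copy on the same filesystem):

    # Copy the file through rsync -S so the holes come back,
    # then swap the copy into place. -a keeps owner/perms/times.
    resparse() {
        rsync -aS "$1" "$1.tmp.$$" && mv "$1.tmp.$$" "$1"
    }

That could then be run from cron over whatever the finder above spits out.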

Any clue as to how to tackle this problem, or any trick around it?

Best,

Daniel
