Kai Krakow wrote:
> On Fri, 2 Sep 2016 17:42:13 -0500,
> Dale <rdalek1...@gmail.com> wrote:
>
>> Mick wrote:
>>> On Thursday 01 Sep 2016 22:57:12 Kai Krakow wrote:
>>>  
>>>> Regarding performance:
>>>>
>>>> I wish Linux had options to relocate files (not just defragment)
>>>> back into logical groups for nearby access. Fragmentation is less
>>>> of a problem; the bigger problem is data block dislocation over
>>>> time due to updates. In Windows, there's the wonderful tool
>>>> MyDefrag, which works magic and puts your aging Windows
>>>> installation back into the state of an almost fresh install by
>>>> relocating files to sane positions.
>>>>
>>>> Is there anything similar for Linux?  
>>> Dale will pop in soon to mention the defrag application he was
>>> running on reiserfs, but a potentially more effective defrag method,
>>> irrespective of fs (we're talking about spinning disks, where this
>>> issue applies), is to tar your data off and back on.
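
(For anyone who hasn't done the tar off/tar on dance: roughly the below,
with made-up device names and paths, and only after a verified backup.)

  # pack everything off to a spare disk, preserving permissions
  tar -C /home -cpf /mnt/spare/home.tar .
  # recreate the filesystem so the data gets written back fresh
  umount /home && mkfs.ext4 /dev/sdb1 && mount /dev/sdb1 /home
  # unpack; files come back about as contiguous as the fs can manage
  tar -C /home -xpf /mnt/spare/home.tar
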
>> Now someone is asking for me to post something.  ROFL 
>>
>> Script should be attached.  Be forewarned, I have not used this script
>> in ages.  I have no clue if it works or not or if it will totally
>> screw up anything and everything.  I would recommend trying it on
>> something that doesn't matter or maybe a directory full of copied
>> files to be sure.  If it hoses your system, it's not my script and
>> you've been warned.  I'm not even sure where I got it from.  Might be
>> the forums but could be anywhere. 
>>
>> By the way, I switched to ext4 and it has a defrag command of its
>> own. Just man e4defrag for details, assuming you have the ext
>> utilities package installed.  That would be sys-fs/e2fsprogs by the
>> way.  I *think* it works on ext3 as well but not sure.  Everything
>> here is ext4 except /boot which is ext2. 
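
Concretely, if I remember the man page right, that goes something like
this (/home is just an example target; it takes a file, directory, or
device):

  # report a fragmentation score first, without changing anything
  e4defrag -c /home
  # then actually defragment
  e4defrag /home
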
>>
>> I guess this is the benefit of large hard drives.  I don't have to
>> delete stuff even if I don't use it for a long time.  lol 
>>
>> Y'all have fun. 
> Well, this is not exactly what I was asking for. I think defragmenting
> files is really not that important as long as the fragments have some
> sane minimum size. Something like contiguous chunks of 4 MB is
> probably enough for performance; SuSE seems to suggest 32 MB, judging
> by their btrfs maintenance script (it doesn't consider extents larger
> than 32 MB for defragmentation).
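
That 32 MB cutoff looks like it maps to the -t option of btrfs defrag;
as a sketch, with /path standing in for a real mount point:

  # recurse (-r), be verbose (-v), skip extents already 32M or larger
  btrfs filesystem defragment -r -v -t 32M /path
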
>
> Much more important is to have executables, libs and data files that
> are typically loaded at the same time located near each other. The
> preload application (an adaptive readahead daemon) already does the
> right analysis: it records which files are needed and uses Markov
> chains to predict which files you are going to need next, so it can
> preload them into the page cache. I think this data could also be
> used to rearrange files into better on-disk locations. Also, I think
> exploiting the page cache for this may not always be the best idea,
> because in the end you may not need that data and it will push other
> important data out of the cache.
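
The page cache half of that can be poked at by hand with vmtouch, if
you have it installed (the binary path below is just an example):

  # show how much of a file is currently resident in the page cache
  vmtouch -v /usr/bin/somebigapp
  # pull it into the cache ahead of time
  vmtouch -t /usr/bin/somebigapp
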
>
> I think there's e4rat, which already rearranges boot-related files to
> the start of the disk, but it's ext[34] only. I think this technology
> could be developed further by clustering together the files needed by
> the applications you start.
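
From what I remember of the e4rat project, the flow is roughly the
below; the log path is only what I recall the default being:

  # boot once with init=/sbin/e4rat-collect on the kernel command line
  # to record which files are read during startup, then:
  e4rat-realloc /var/lib/e4rat/startup.log  # physically relocate them
  e4rat-preload /var/lib/e4rat/startup.log  # preload on later boots
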
>

All that sounds good, but I don't know of any tool that does it.  Some
people wanted a way to defrag things, so someone wrote the script I
posted.  It worked back then, but to be honest, I don't think defragging
is even really needed on Linux with any reasonably modern file system,
excluding the windozish ones that Linux can access, like FAT etc.
Whenever I run some tool that shows fragmentation, it is either
non-existent or so small that it doesn't matter.  Then there are always
those files that, because of their size, will always be fragmented.
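
For the curious, the quick check is filefrag, also from e2fsprogs (the
path here is made up):

  # one extent means the file isn't fragmented at all
  filefrag /home/dale/some-big-file.iso
  # -v lists every extent with its physical location
  filefrag -v /home/dale/some-big-file.iso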

I guess no matter how fast hard drives get, someone will always want to
squeeze out just a tiny bit more speed.  ;-)

Dale

:-)  :-) 
