On Thu, Apr 1, 2010 at 10:09 PM, <meino.cra...@gmx.de> wrote:
> [ ... snip ... ]
>
> So I have a lot of docs (specs of microcontrollers, howtos, program
> and source code docs... etc) on my disk.
> This is one part.
I've seen that nobody mentioned JFS yet... :)

In some benchmarks the best FS for most tasks is either XFS or JFS, but JFS seems to have lower CPU and memory usage, so for small and medium files I would say it's the best choice. (I think the benchmarks were on Tom's Hardware?)

I'll also describe my history with this issue: initially I used only ReiserFS, until something (not the hard drive) just snapped and I almost lost all my data. At that moment I migrated to Ext3. But Ext3 has the problem of needing periodic (usually once a month) checking (I know this is optional, or at least tunable -- see the tune2fs sketch below -- but it seems to be recommended), which for large file systems takes incredibly long: a 60GB HDD takes about 2 or 3 minutes, so imagine what that would do to 1TB... So I got angry again and moved to JFS... And I've been using JFS for about two years now without major incidents. (Only once did I lose the contents of a configuration file due to a power interruption, but that was because of the editor.)

So as a conclusion for this task I would recommend JFS. (I also have 200GB of documentation, which covers about 100 thousand files, I guess.) Also see my notes on journaled file systems at the end.

> Then: I often transfer videos from my DVB-T-receiver/recorder to my
> harddisk to cut out the advertising and to transcode the videos to
> something better than "ts" (transport streams),
> This is another part.

Although JFS could handle this, maybe a file system specially designed for it would do better: Ext4, with its extent feature. (But be aware that just picking a file system is not enough... The software also has to be specially crafted if you want high performance. See the `fallocate` and `fadvise` system calls; there is a small sketch below.)

> Then I plan to have two roots this time: One to experiment with and
> one "good and stable"-version which is used/updated/... "strictly as
> recommended". Filesizes and usage do vary here... take a look at your
> own roots ;)))

:) This sounds like my setup. The 160GB HDD in my laptop has the following layout (a partitioning sketch follows below):
* GPT partition table (not MBR) -- this gives me more partitions without needing the "extended" partition feature of MBR;
* 2 boot partitions of 512MB (maybe 1GB would have been better) -- one for current usage (Grub 0.97 with GPT patches) and one for experimentation; these are Ext2 for safety and compatibility;
* 3 root partitions of 4GB (I should have made them 8GB) -- one for the current operating system, and two for future upgrades / experimentation; currently JFS, and probably the same in the future;
* 1 swap partition of 8GB (encrypted with a random password, with the help of dm-crypt);
* the rest of the HDD as one big partition with LVM (with large extents, 256MB);
* from the LVM, partitions for personal data (/home) and other things -- everything is JFS;

> Then I want something encrypted, either as a partition or as a file
> (carrying an encrypted fs), which I can copy to dvd and will be able
> to mount this dvd and use it without having to copy the whole dvd
> first to harddisk before using it...
> Currently I am using encfs... (outdated?). What can I use instead?
> This is for personal things like letters, photos, texts ... etc.
> Files vary from some kb up to about 2GByte (guessed). Most of them
> smaller than 200MByte.

As someone noted, maybe eCryptfs (the in-kernel one) would be better... (It's an install option in Ubuntu, so I would say it's mature enough.) But for this encrypted purpose I would use dm-crypt with `aes-xts-plain` (or `aes-cbc-essiv:sha256`) encryption; a sketch of an encrypted container you can burn to DVD is below.
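Some quick sketches for the points above (every device and path here is hypothetical -- adapt before running anything!). First, the Ext3 check intervals, which tune2fs can inspect and change (I'm not saying you should disable the checks, just that the knobs exist):

    # show the current mount-count and time-based check intervals
    tune2fs -l /dev/sda3 | grep -iE 'mount count|check'
    # run the check every 50 mounts instead of the default
    tune2fs -c 50 /dev/sda3
    # ...or at most once every 3 months
    tune2fs -i 3m /dev/sda3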
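About `fallocate` on Ext4: the idea is to pre-allocate the extents of a big transcode target before writing it, so the file ends up (mostly) unfragmented. util-linux ships a small fallocate(1) wrapper around the system call (the file name is made up; `fadvise` has no shell wrapper, so that one only helps if the transcoder itself calls posix_fadvise):

    # reserve 4GB of extents for the output file in one go
    fallocate -l 4G output.avi
    # inspect how many fragments the file actually got (tool from e2fsprogs)
    filefrag output.avi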
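For the GPT + LVM layout, this is roughly how I built it (a sketch from memory, with the sizes from my list above; parted will happily destroy a disk, so double-check the device name):

    parted /dev/sda mklabel gpt
    parted /dev/sda mkpart boot1 1MiB 513MiB
    parted /dev/sda mkpart root1 513MiB 4609MiB
    # ... and so on for the other boot / root / swap partitions ...
    # the remainder becomes one big physical volume for LVM,
    # created with the 256MB extents I mentioned:
    pvcreate /dev/sda6
    vgcreate -s 256M vg0 /dev/sda6
    lvcreate -L 40G -n home vg0
    mkfs.jfs -q /dev/vg0/home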
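And the encrypted container for the DVD: you create a LUKS container inside a plain file, fill it, burn the file, and later mount it read-only straight from the DVD through a loop device, without copying it to disk first. A sketch:

    # a container that fits on a single-layer DVD
    dd if=/dev/zero of=vault.img bs=1M count=4400
    losetup /dev/loop0 vault.img
    cryptsetup luksFormat /dev/loop0        # choose the passphrase here
    cryptsetup luksOpen /dev/loop0 vault
    mkfs.ext2 /dev/mapper/vault             # ext2 is enough for read-only media
    mount /dev/mapper/vault /mnt/vault      # ...copy your private files in...
    umount /mnt/vault
    cryptsetup luksClose vault
    losetup -d /dev/loop0
    # burn vault.img to the DVD; later, with the DVD mounted on /mnt/dvd:
    losetup -r /dev/loop1 /mnt/dvd/vault.img
    cryptsetup -r luksOpen /dev/loop1 vault
    mount -o ro /dev/mapper/vault /mnt/vault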
(In the past I used loop-AES, but I had some minor issues with kernel building, as it's not in the vanilla kernel...)

> Last thing: I have a lot of copies of code from svn repositories because
> I like to have the "bleeding edge" of some projects (do you know the
> new Blender 2.50??? :O)

I also have a lot of repositories on JFS and everything works nicely.

> This implies a lot of compile work. This will be the only case where
> files are created as often as read.

For temporary folders while compiling I would recommend instructing your build scripts to build inside /tmp, with tmpfs mounted there... It's blazingly fast... (Mount sketch at the end of this mail.)

And some notes about journaled file systems: they journal meta-data (that is, file creation, deletion, rename, etc.), not data (that is, the contents)... (Of course, a few file systems -- Ext3, for example -- have the option to also journal data; see the mount-option sketch at the end.) What does this mean? Well, when you edit a file, save it, and then cut the power, the file still exists (the meta-data), but its contents could be (and usually are) wrong: either no content at all (as I encountered once with JFS), or mixed content (old and new)... So the fine print here is: no journaled file system is safe by itself... They are all safe only if you also use `fsync` (which forces everything to go to disk)...

This is why I say that the fault for the lost file contents lies with the editor:
* it opened the file by truncating it => 0 length;
* it wrote to it and closed it;
* it DIDN'T `fsync` it, which means the data still remained in the buffer cache;
* and when the power was lost, so was the data in the buffer cache;
(The safe-save pattern an editor should use instead is sketched at the end.)

Another fine print about file system performance: memory helps a lot... I upgraded my laptop from 2GB to 4GB of RAM, and with some fine-tuning the file operations are much snappier... The fine-tuning consisted of delaying the write-back to about 3 minutes, or until 1GB of data is dirty (sysctl sketch at the end)... Which means that if my laptop loses power, or I need to hard power-off it, I'll lose a great deal of data...

Hope I was of some help,
Ciprian.

P.S.: The following link could give you some insight into the problems with journaled file systems (not only ReiserFS):
http://zork.net/~nick/mail/why-reiserfs-is-teh-sukc
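P.P.S.: And here are the remaining sketches I promised. The tmpfs mount for fast builds (2g is just an example size; make sure RAM plus swap can hold your build):

    # one-off mount:
    mount -t tmpfs -o size=2g tmpfs /tmp
    # or the equivalent /etc/fstab line:
    # tmpfs   /tmp   tmpfs   size=2g   0 0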
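The Ext3 full data-journaling option (safer, but noticeably slower, since every block is written twice: once to the journal, once in place):

    # mount so that data, not only meta-data, goes through the journal
    mount -t ext3 -o data=journal /dev/sda3 /mnt/safe
    # fstab variant:
    # /dev/sda3   /mnt/safe   ext3   data=journal   0 2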
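The safe-save pattern the editor should have used, sketched in shell (a real editor would do write + fsync + rename from code; the `sync` below flushes everything on the system, which is cruder, but it illustrates the point):

    # write the new contents to a temporary file first...
    printf '%s\n' "my new configuration" > config.tmp
    # ...force it out of the buffer cache onto the disk...
    sync
    # ...and only then atomically replace the old file; a crash
    # at any point leaves either the old or the new contents,
    # never a 0-length file
    mv config.tmp config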
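And the write-back fine-tuning I described, expressed as sysctls (these are my values; the trade-off is exactly the data loss I mentioned):

    # only consider dirty data "old enough" to write back after 3 minutes
    sysctl -w vm.dirty_expire_centisecs=18000
    # ...but start writing earlier if about 1GB of dirty data piles up
    sysctl -w vm.dirty_bytes=1073741824
    # (on kernels without vm.dirty_bytes, vm.dirty_ratio is the
    # percentage-of-RAM equivalent)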