Interesting, I will have to try this; can you post the exact test steps? Also, what type of controller were you using, and what kernel/version?

Intel Atom D525 built-in:

ahci0: <Intel ICH7 AHCI SATA controller> port 
0x20b8-0x20bf,0x20cc-0x20cf,0x20b0-0x20b7,0x20c8-0x20cb,0x20a0-0x20af mem 
0xf0284000-0xf02843ff irq 18 at device 31.2 on pci0

ada0 at ahcich0 bus 0 scbus0 target 0 lun 0
ada0: <ST500NM0011 PA08> ATA-8 SATA 2.x device
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 476940MB (976773168 512 byte sectors: 16H 63S/T 16383C)
ada0: Previously was known as ad4


(BTW, no idea why it was previously known as ad4, as I've never run this machine without the AHCI driver ;)


Create a partition and copy a kernel sources tarball onto it.
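In shell terms the setup is roughly this (ada0p4, the 20G size and /mnt/test are just example names, adjust them to your own layout):

gpart add -t freebsd-ufs -s 20G ada0    # new test partition, e.g. ada0p4
newfs -U /dev/ada0p4                    # plain softupdates here; re-newfs per test variant
mkdir -p /mnt/test
mount /dev/ada0p4 /mnt/test
cp /path/to/yourtarball.tar.gz /mnt/test/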

Now, on that partition:

rm -rf *

Then write a script, test.sh:

#!/usr/local/bin/bash
# Unpack the same tarball into 12 numbered directories; the run gets timed as a whole.
a=0
while [ "$a" -lt 12 ]; do
    mkdir "$a"
    cd "$a" || exit 1
    a=$((a + 1))
    tar xpf ../yourtarball.tar.gz
    cd ..
done

Run it (after chmod 700) with:

time ./test.sh

Then re-run the test under different conditions (different mount options / journaling setups) and compare the times.


SU+J makes sure that the metadata stays consistent. NOT the data. And that is quite a mess if the UPS fails under high load.

gjournal journals everything.


Not exactly: UFS mounted with default options ensures metadata is written synchronously and data asynchronously. Standard soft updates (no journal) improve on this by limiting which operations need synchronous writes to disk. They had some shortfalls for edge-case operations, which soft updates journaling resolved by journaling the metadata ops not protected / covered by standard soft updates.

See
http://jeffr-tech.livejournal.com/24357.html
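For reference, SU+J can be switched on for an existing, unmounted filesystem with tunefs; a rough sketch, reusing the example /dev/ada0p4 test partition from above:

umount /mnt/test
tunefs -n enable /dev/ada0p4    # soft updates
tunefs -j enable /dev/ada0p4    # soft updates journaling, i.e. SU+J
mount /dev/ada0p4 /mnt/test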

gjournal recommends mounting with async, because the journal takes care of the rest.
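A rough sketch of such a setup (single provider, journal and data on the same partition, /dev/ada0p4 again just an example):

gjournal load                            # or geom_journal_load="YES" in /boot/loader.conf
gjournal label ada0p4                    # creates /dev/ada0p4.journal
newfs -J /dev/ada0p4.journal
mount -o async /dev/ada0p4.journal /mnt/test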

My tests using a pendrive as the journal device (slow, but OK) roughly confirmed my opinion.


SSDs are not expensive today. I can get a 128GB SSD and create a 20GB journal just to limit wear, and possibly use the rest of the SSD to store read-intensive data.
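A dedicated journal partition on the SSD would look roughly like this, building on the sketch above (ada1 as the SSD is an assumption):

gpart create -s gpt ada1                 # assuming the SSD shows up as ada1
gpart add -t freebsd-ufs -s 20G ada1     # 20GB journal-only partition, e.g. ada1p1
gjournal label ada0p4 ada1p1             # data provider first, then the journal provider
newfs -J /dev/ada0p4.journal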


I wonder how TRIM / no TRIM affects the journal wear.


Not at all.

gjournal writes sequentially, which is the best case for an SSD: it just writes subsequent flash blocks, while the older ones get freed completely and can be erased.

gjournal doesn't seem to handle journal failure elegantly (I simulated it by forcibly removing an mdconfig ramdisk).

TONS of messages in the logs, but still no data loss; you just have to shut the system down, boot from a pendrive, remove the journal, fsck (just to be sure), and then add the journal again.
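From the rescue system that recovery is roughly (device names are again the examples from above):

gjournal clear ada0p4            # drop the stale gjournal metadata ("remove journal")
fsck_ffs -y /dev/ada0p4          # just to be sure
gjournal label ada0p4 ada1p1     # add the journal again, then remount as usual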


I would be careful about using md for the journal.



Something makes me think it will play nicer when you remove that than a real failure would. Try a USB stick for the journal.

Tried it; same results. md was only for testing anyway; it doesn't make sense in a production setup as it's volatile.
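For the record, such an md-based test can be set up roughly like this (sizes and unit numbers are just examples):

mdconfig -a -t swap -s 2g                     # -> md0, the throwaway journal device
gjournal label ada0p4 md0
newfs -J /dev/ada0p4.journal
mount -o async /dev/ada0p4.journal /mnt/test
# generate load, then yank the journal away:
mdconfig -d -u 0 -o force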

Also, when testing SU+J I ran the following test case: extract ports via portsnap extract, build world with -j4, let the box warm up, then yank the power, boot the box back up, and see what happens.
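In shell terms, roughly:

portsnap fetch extract                  # populate /usr/ports
cd /usr/src && make -j4 buildworld
# let it run until the box is warmed up, then cut power at the wall
# and watch how the next boot / fsck copes.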
And?

Didn't you end up with empty, but existing, object files that make thinks are properly compiled programs?

