Re: [zfs-discuss] Maximum zfs send/receive throughput

2010-11-13 Thread Karsten Weiss
> Does this maybe ring a bell with someone? Update: The cause of the problem was OpenSolaris bug 6826836, "Deadlock possible in dmu_object_reclaim()" (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6826836). It could be fixed by upgrading the OpenSolaris 2009.06 system to 0.5.11-0.111.17.
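For reference, a minimal sketch of that upgrade path on OpenSolaris 2009.06, assuming the configured pkg publisher already serves the fixed build (the exact repository setup isn't shown in the original message):

  # Refresh the package catalog, then update the whole image; this
  # creates a new boot environment with the updated 'entire' bits:
  pfexec pkg refresh
  pfexec pkg image-update
  # Reboot into the new boot environment to activate the fix:
  pfexec reboot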

Re: [zfs-discuss] Maximum zfs send/receive throughput

2010-11-10 Thread Karsten Weiss
> I'm not very familiar with mdb. I've tried this: Ah, this looks much better: root 641 0.0 0.0 7660 2624 ? S Nov 08 2:16 /sbin/zfs receive -dF datapool/share/ (...) # echo "0t641::pid2proc|::walk thread|::findstack -v" | mdb -k stack pointer for thread ff09236198e0: (...)
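For anyone else debugging a hung zfs receive, the recipe above boils down to this (pid 641 is taken from the ps output; substitute your own):

  # Find the pid of the stuck receive process:
  ps -ef | grep "zfs receive"
  # Print the kernel stack of every thread in that process; the
  # 0t prefix tells mdb the pid is decimal:
  echo "0t641::pid2proc|::walk thread|::findstack -v" | mdb -k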

Re: [zfs-discuss] Maximum zfs send/receive throughput

2010-11-08 Thread Karsten Weiss
Does anyone know the current state of bug #6975124? Has there been any progress since August? I currently have an OpenSolaris 2009.06 snv_111b system (entire 0.5.11-0.111.14) which *repeatedly* gets stuck after a couple of minutes during a large (xxx GB) incremental zfs receive operation. (...)
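For context, the operation that hangs is the usual incremental replication pattern, roughly like this (pool, dataset, and snapshot names are made up; the receive side matches the -dF invocation from the ps output above):

  # Send only the delta between two snapshots; -d derives the
  # target dataset name from the stream, -F rolls the target back
  # to its most recent snapshot before receiving:
  zfs send -i tank/fs@snap1 tank/fs@snap2 | \
      zfs receive -dF datapool/share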

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Karsten Weiss
Hi Jeroen, Adam! > link. Switched write caching off with the following > addition to the /kernel/drv/sd.conf file (Karsten: if > you didn't do this already, you _really_ want to :) Okay, I'll bite! :) format->inquiry on the F20 FMod disks returns: # Vendor: ATA # Product: MARVELL SD88SA02 (...)
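The exact sd.conf line is cut off in the preview, but the commonly cited recipe for the F20 FMods (whose write cache is supercap-backed) matches on those inquiry strings; a sketch, with the caveat that the property name should be checked against your sd(7D) build (note the vendor field must be padded to eight characters):

  # /kernel/drv/sd.conf -- tell sd the FMod cache is non-volatile
  # so ZFS stops sending it cache-flush commands:
  sd-config-list = "ATA     MARVELL SD88SA02", "cache-nonvolatile:true";
  # Reload the sd driver configuration (or reboot):
  update_drv -vf sd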

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Karsten Weiss
> What would be useful, though, is to be able to easily disable the ZIL per > dataset instead of with an OS-wide switch. > This feature has already been coded and tested and awaits a formal > process to be completed in order to get integrated. > Should be rather sooner than later. I agree. (...)
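Until that integrates, the only switch is the system-wide one; for completeness, a sketch (this disables the ZIL for every pool on the host, so it is really only suitable for testing):

  # /etc/system -- system-wide ZIL disable; takes effect after a
  # reboot (the per-dataset feature discussed above later shipped
  # as 'zfs set sync=disabled'):
  set zfs:zil_disable = 1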

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Karsten Weiss
> Nobody knows any way for me to remove my unmirrored > log device. Nobody knows any way for me to add a mirror to it (until ...). Since snv_125 you can remove log devices; see http://bugs.opensolaris.org/view_bug.do?bug_id=6574286. I've used this all the time during my testing and was able to remove (...)
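Both operations are one-liners (device names here are hypothetical):

  # Remove a standalone log device -- possible since snv_125
  # (CR 6574286):
  zpool remove datapool c3t0d0
  # Or turn an unmirrored log device into a mirrored one by
  # attaching a second device to it:
  zpool attach datapool c3t0d0 c3t1d0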

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Karsten Weiss
Hi Adam, > Very interesting data. Your test is inherently > single-threaded so I'm not surprised that the > benefits aren't more impressive -- the flash modules > on the F20 card are optimized more for concurrent > IOPS than single-threaded latency. Thanks for your reply. I'll probably test the (...)
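For the concurrent case, a rough sketch of a multi-threaded run (assuming GNU dd is available for oflag=sync; the path, block size, and writer count are arbitrary):

  # Launch 8 parallel synchronous writers and wait for all of
  # them; compare the aggregate throughput against a single writer:
  for i in 1 2 3 4 5 6 7 8; do
    dd if=/dev/zero of=/datapool/test.$i bs=8k count=10000 oflag=sync &
  done
  wait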

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Karsten Weiss
> I stand corrected. You don't lose your pool. You don't have a corrupted > filesystem. But you lose whatever writes were not yet completed, so if > those writes happen to be things like database transactions, you could have > corrupted databases or files, or missing files if you were creating the (...)

[zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-30 Thread Karsten Weiss
Hi, I did some tests on a Sun Fire x4540 with an external J4500 array (connected via two HBA ports), i.e. there are 96 disks in total, configured as seven 12-disk raidz2 vdevs (plus system, spares, and unused disks) providing a ~63 TB pool with fletcher4 checksums. The system was recently equipped (...)
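The pool layout boils down to something like this (device names are illustrative; the remaining six raidz2 vdevs follow the same pattern):

  # One pool built from seven 12-disk raidz2 vdevs:
  zpool create datapool \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
             c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0
  # fletcher4 instead of the default checksum algorithm:
  zfs set checksum=fletcher4 datapool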