Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-04 Thread Phillip Wagstrom
If you're dedicating the disk to a single task (data, SLOG, L2ARC), then absolutely. If you're splitting tasks and want to make one drive do two things, like SLOG and L2ARC, then you have to do this. Some of the confusion here is between what is a traditional FDISK partition (p1 - p4) and what is a Solaris slice ...
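
For reference, a minimal sketch of the split layout under discussion, assuming a
Solaris fdisk partition has already been created with fdisk(1M), that two
non-overlapping slices s0 and s1 were laid out inside it with format(1M), and
that the SSD is c4t1d0 (a device name that appears later in the thread); the
pool name "tank" is hypothetical:

  # zpool add tank log c4t1d0s0      # SLOG on slice 0
  # zpool add tank cache c4t1d0s1    # L2ARC on slice 1

Because the two slices occupy disjoint block ranges, the log and cache cannot
overwrite each other the way p0 and p1 can.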

Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-04 Thread Eugen Leitl
On Fri, Jan 04, 2013 at 06:57:44PM -0000, Robert Milkowski wrote:
> > Personally, I'd recommend putting a standard Solaris fdisk
> > partition on the drive and creating the two slices under that.
>
> Why? In most cases giving zfs an entire disk is the best option.
> I wouldn't bother with any manual partitioning. ...

Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-04 Thread Robert Milkowski
> Personally, I'd recommend putting a standard Solaris fdisk
> partition on the drive and creating the two slices under that.

Why? In most cases giving zfs an entire disk is the best option.
I wouldn't bother with any manual partitioning.

--
Robert Milkowski
http://milek.blogspot.com
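
By contrast, the whole-disk option means handing zpool the bare device, with no
pN or sN suffix; a minimal sketch with the hypothetical pool name "tank":

  # zpool add tank log c4t1d0

When ZFS owns the entire disk it writes an EFI label and can safely enable the
drive's write cache, which is one reason whole disks are usually preferred.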

Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-04 Thread Eugen Leitl
On Thu, Jan 03, 2013 at 03:21:33PM -0600, Phillip Wagstrom wrote:
> Eugen,

Thanks Phillip and others, most illuminating (pun intended).

> Be aware that p0 corresponds to the entire disk, regardless of how it
> is partitioned with fdisk. The fdisk partitions are 1 - 4. By using p0 for ...

Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-04 Thread Gea
> Thanks. Apparently, the napp-it web interface did not do what I asked it
> to do. I'll try to remove the cache and the log devices from the pool,
> and redo it from the command line interface.

napp-it up to 0.8 does not support slices or partitions.
napp-it 0.9 supports partitions and offers pa...
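
For the command-line route mentioned above, a minimal sketch, assuming the pool
is named "tank" and that the log and cache sit on the p0/p1 devices discussed
earlier in the thread (check zpool status for the exact names):

  # zpool remove tank c4t1d0p1    # drop the cache device
  # zpool remove tank c4t1d0p0    # drop the log device (needs pool version 19+)

The devices can then be re-added against non-overlapping slices.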

Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-03 Thread Cindy Swearingen
Free advice is cheap...

I personally don't see the advantage of caching reads and logging writes
to the same devices. (Is this recommended?)

If this pool is serving CIFS/NFS, I would recommend testing for best
performance with a mirrored log device first, without a separate cache
device:

  # zpool ...
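
A mirrored log device is typically added like this (a sketch only, assuming the
pool is named "tank" and two hypothetical SSDs c4t1d0 and c4t2d0):

  # zpool add tank log mirror c4t1d0 c4t2d0

Mirroring the SLOG protects the few seconds of synchronous writes held in the
log if one of the SSDs dies at the wrong moment.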

Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-03 Thread Eugen Leitl
On Thu, Jan 03, 2013 at 03:44:54PM -0600, Phillip Wagstrom wrote:
> On Jan 3, 2013, at 3:33 PM, Eugen Leitl wrote:
> > On Thu, Jan 03, 2013 at 03:21:33PM -0600, Phillip Wagstrom wrote:
> >> Eugen,
> >>
> >> Be aware that p0 corresponds to the entire disk, regardless of how it
> >> is partitioned ...

Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-03 Thread Phillip Wagstrom
On Jan 3, 2013, at 3:33 PM, Eugen Leitl wrote:
> On Thu, Jan 03, 2013 at 03:21:33PM -0600, Phillip Wagstrom wrote:
>> Eugen,
>>
>> Be aware that p0 corresponds to the entire disk, regardless of how it
>> is partitioned with fdisk. The fdisk partitions are 1 - 4. By using p0
>> for log and p1 for cache, you could very well be writing to the same ...

Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-03 Thread Eugen Leitl
On Thu, Jan 03, 2013 at 03:21:33PM -0600, Phillip Wagstrom wrote:
> Eugen,
>
> Be aware that p0 corresponds to the entire disk, regardless of how it
> is partitioned with fdisk. The fdisk partitions are 1 - 4. By using p0
> for log and p1 for cache, you could very well be writing to the same
> location on the SSD and corrupting things. ...

Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-03 Thread Phillip Wagstrom
Eugen,

Be aware that p0 corresponds to the entire disk, regardless of how it is
partitioned with fdisk. The fdisk partitions are 1 - 4. By using p0 for
log and p1 for cache, you could very well be writing to the same location
on the SSD and corrupting things.

Personally, I'd recommend putting a standard Solaris fdisk partition on
the drive and creating the two slices under that.
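
To spell out the device naming behind that danger (standard Solaris x86
naming; the disk name is the one used later in the thread):

  c4t1d0p0    the entire disk, spanning all fdisk partitions
  c4t1d0p1    fdisk partition 1
  c4t1d0s0    slice 0 inside the Solaris fdisk partition
  c4t1d0s1    slice 1 inside the Solaris fdisk partition

Because p0 overlaps every pN and every slice, a log on p0 and a cache on p1
end up writing to the same LBAs.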

Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-03 Thread Eugen Leitl
On Thu, Jan 03, 2013 at 12:44:26PM -0800, Richard Elling wrote:
> On Jan 3, 2013, at 12:33 PM, Eugen Leitl wrote:
> > On Sun, Dec 30, 2012 at 06:02:40PM +0100, Eugen Leitl wrote:
> >> Happy $holidays,
> >>
> >> I have a pool of 8x ST31000340AS on an LSI 8-port adapter as
> >
> > Just a little update on the home NAS project. ...

Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-03 Thread Richard Elling
On Jan 3, 2013, at 12:33 PM, Eugen Leitl wrote:
> On Sun, Dec 30, 2012 at 06:02:40PM +0100, Eugen Leitl wrote:
>> Happy $holidays,
>>
>> I have a pool of 8x ST31000340AS on an LSI 8-port adapter as
>
> Just a little update on the home NAS project.
>
> I've set the pool sync to disabled, and added a couple of ...

Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-03 Thread Eugen Leitl
On Sun, Dec 30, 2012 at 06:02:40PM +0100, Eugen Leitl wrote:
> Happy $holidays,
>
> I have a pool of 8x ST31000340AS on an LSI 8-port adapter as ...

Just a little update on the home NAS project. I've set the pool sync to
disabled, and added a couple of

   8. c4t1d0 /pci@0,0/pci146...
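
Disabling sync as described above is a one-line property change; a sketch,
assuming the hypothetical pool name "tank":

  # zfs set sync=disabled tank
  # zfs get sync tank    # verify

Note that sync=disabled acknowledges synchronous writes (the kind NFS and CIFS
clients issue) before they reach stable storage, so a power cut can lose the
last few seconds of acknowledged data. It does not corrupt the pool itself.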

Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-02 Thread Richard Elling
On Jan 2, 2013, at 2:03 AM, Eugen Leitl wrote:
> On Sun, Dec 30, 2012 at 10:40:39AM -0800, Richard Elling wrote:
>> On Dec 30, 2012, at 9:02 AM, Eugen Leitl wrote:
>>> The system is an MSI E350DM-E33 with 8 GByte PC1333 DDR3
>>> memory, no ECC. All the systems have Intel NICs with mtu 9000 ...

Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-02 Thread Eugen Leitl
On Sun, Dec 30, 2012 at 10:40:39AM -0800, Richard Elling wrote:
> On Dec 30, 2012, at 9:02 AM, Eugen Leitl wrote:
> > The system is an MSI E350DM-E33 with 8 GByte PC1333 DDR3
> > memory, no ECC. All the systems have Intel NICs with mtu 9000
> > enabled, including all switches in the path.
>
> Does ...
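
For anyone reproducing the jumbo-frame setup: on Solaris-derived systems the
MTU is set per datalink with dladm. A sketch; the link name e1000g0 is an
assumption (a common Intel NIC driver name), so substitute whatever
dladm show-link reports:

  # dladm set-linkprop -p mtu=9000 e1000g0
  # dladm show-linkprop -p mtu e1000g0    # verify the effective value

As noted above, every host and switch in the path must agree on the MTU, or
large frames are silently dropped.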

Re: [zfs-discuss] poor CIFS and NFS performance

2012-12-31 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org]
> On Behalf Of Eugen Leitl
>
> I have a pool of 8x ST31000340AS on an LSI 8-port adapter as
> a raidz3 (no compression nor dedup) with reasonable bonnie++
> 1.03 values, e.g. 145 MByte/s Seq-Write @ 48% CPU ...

Re: [zfs-discuss] poor CIFS and NFS performance

2012-12-30 Thread Richard Elling
On Dec 30, 2012, at 9:02 AM, Eugen Leitl wrote:
> Happy $holidays,
>
> I have a pool of 8x ST31000340AS on an LSI 8-port adapter as
> a raidz3 (no compression nor dedup) with reasonable bonnie++
> 1.03 values, e.g. 145 MByte/s Seq-Write @ 48% CPU and 291 MByte/s
> Seq-Read @ 53% CPU. It scrubs with 230+ MByte/s with reasonable
> system load. ...

[zfs-discuss] poor CIFS and NFS performance

2012-12-30 Thread Eugen Leitl
Happy $holidays,

I have a pool of 8x ST31000340AS on an LSI 8-port adapter as a raidz3
(no compression nor dedup) with reasonable bonnie++ 1.03 values, e.g.
145 MByte/s Seq-Write @ 48% CPU and 291 MByte/s Seq-Read @ 53% CPU.
It scrubs with 230+ MByte/s with reasonable system load. No hybrid
pool ...
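
For context, a bonnie++ run of the kind quoted above looks roughly like this
(a sketch; the mount point /tank is hypothetical, and the 16384 MiB size is
chosen to be about twice the 8 GByte of RAM so the ARC cannot cache the whole
working set):

  # bonnie++ -d /tank -s 16384 -u root

Here -d names the directory to test in, -s the file size in MiB, and -u the
user to run as when invoked by root.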