[zfs-discuss] Maximum zfs send/receive throughput

2010-06-25 Thread Mika Borner
It seems we are hitting a boundary with zfs send/receive over a network link (10Gb/s). We can see peak values of up to 150MB/s, but on average about 40-50MB/s are replicated. This is far below the bandwidth that a 10Gb link can offer. Is it possible that ZFS is giving replication a too
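A common workaround discussed on this list (not from the quoted message itself) is that piping zfs send through ssh stalls on bursty streams; inserting a large memory buffer such as mbuffer on both ends smooths the pipeline. Pool and dataset names below are hypothetical; a sketch, assuming mbuffer is installed on both hosts:

```shell
# Receiving host: listen on TCP port 9090 with a 1 GB in-memory buffer,
# feeding the stream into zfs receive as data arrives.
mbuffer -I 9090 -s 128k -m 1G | zfs receive -F tank/replica

# Sending host: stream the snapshot through mbuffer instead of ssh so
# bursts from 'zfs send' do not stall the TCP connection.
zfs send tank/data@snap1 | mbuffer -s 128k -m 1G -O receiver:9090
```

Note this sends the stream unencrypted; it only makes sense on a trusted link.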

Re: [zfs-discuss] NFS/ZFS slow on parallel writes

2009-09-29 Thread Mika Borner
Bob Friesenhahn wrote: Striping across two large raidz2s is not ideal for multi-user use. You are getting the equivalent of two disks worth of IOPS, which does not go very far. More smaller raidz vdevs or mirror vdevs would be better. Also, make sure that you have plenty of RAM installed. F
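The rule of thumb behind Bob's point can be sketched numerically (figures below are illustrative assumptions, not from the thread): random-read IOPS of a pool scale with the number of top-level vdevs, since each raidz vdev behaves roughly like a single disk for random reads.

```python
# Rough sketch: random-read IOPS scale with top-level vdevs, not disks,
# because a raidz vdev serves roughly one random read at a time.
def pool_random_read_iops(vdevs: int, iops_per_disk: int = 150) -> int:
    """Back-of-envelope random-read IOPS for a pool of raidz/mirror vdevs."""
    return vdevs * iops_per_disk

# Same 24 disks arranged two ways (hypothetical 150 IOPS/disk):
two_wide_raidz2 = pool_random_read_iops(vdevs=2)     # -> 300
twelve_mirrors = pool_random_read_iops(vdevs=12)     # -> 1800
print(two_wide_raidz2, twelve_mirrors)
```

Hence the advice: more, smaller vdevs win for multi-user random I/O.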

Re: [zfs-discuss] EMC - top of the table for efficiency, how well would ZFS do?

2008-09-01 Thread Mika Borner
I've read the same log entry, and was also thinking about ZFS... Pillar Data Systems is also answering the call http://blog.pillardata.com/pillar_data_blog/2008/08/blog-i-love-a-p.html BTW: Would transparent compression be considered as cheating? :-) -- This message posted from opensolaris.org

Re: [zfs-discuss] Tool to figure out optimum ZFS recordsize for a Mail server Maildir tree?

2008-10-22 Thread Mika Borner
> Leave the default recordsize. With 128K recordsize, > files smaller than If I turn zfs compression on, does the recordsize influence the compressratio in any way?
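It can: ZFS compresses each record independently, so the recordsize bounds the window the compressor can exploit. The effect can be illustrated outside ZFS with zlib (this is a toy model, not ZFS code):

```python
import zlib

# Toy model of per-record compression: compress a buffer in independent
# chunks of a given "recordsize" and sum the per-chunk stream sizes.
data = (b"The quick brown fox jumps over the lazy dog. " * 3000)[:128 * 1024]

def compressed_size(blob: bytes, recordsize: int) -> int:
    """Total size when each 'record' is compressed as its own zlib stream."""
    return sum(
        len(zlib.compress(blob[i:i + recordsize]))
        for i in range(0, len(blob), recordsize)
    )

small_records = compressed_size(data, 8 * 1024)    # sixteen 8K records
large_records = compressed_size(data, 128 * 1024)  # one 128K record
print(small_records, large_records)  # per-record overhead inflates the 8K total
```

Smaller records pay per-record overhead and give the compressor less context, so the ratio is usually somewhat worse; for most workloads the difference is modest.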

Re: [zfs-discuss] Storage 7000

2008-11-17 Thread Mika Borner
Adam Leventhal wrote: > Yes. The Sun Storage 7000 Series uses the same ZFS that's in OpenSolaris > today. A pool created on the appliance could potentially be imported on an > OpenSolaris system; that is, of course, not explicitly supported in the > service contract. > Would be interesting to he

Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-02 Thread Mika Borner
Ulrich Graef wrote: > You need not wade through your paper... > ECC theory tells us that you need a minimum distance of 3 > to correct one error in a codeword, so neither RAID-5 nor RAID-6 > is enough: you need RAID-2 (which nobody uses today). > > Raid-Controllers today take advantage of the fa

[zfs-discuss] zfs send / zfs receive hanging

2009-01-12 Thread Mika Borner
Hi Updated from snv 101 to 105 today. I wanted to do zfs send/receive to a new zpool while forgetting that the new pool was a newer version. zfs send timed out after a while, but it was impossible to kill the receive process. Shouldn't the zfs receive command just fail with a "wrong vers

Re: [zfs-discuss] Crazy Problem with

2009-01-27 Thread Mika Borner
Henri Meddox wrote: > Hi Folks, > call me a lernen ;-) > > I got a crazy problem with "zpool list" and the size of my pool: > > created "zpool create raidz2 hdd1 hdd2 hdd3" - each hdd is about 1GB. > > zpool list shows me a size of 2.95GB - shouldn't this be only 1GB? > > After creating a file
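The arithmetic behind the confusion (a sketch, assuming three hypothetical 1 GB disks): zpool list reports raw pool capacity including parity, while the space actually usable in a raidz2 vdev is (disks - 2) times the disk size.

```python
# Raw vs. usable capacity of a raidz vdev. 'zpool list' shows roughly the
# raw figure; 'zfs list' exposes roughly the usable one (minus metadata).
def raidz_sizes(disks: int, parity: int, disk_gb: float) -> tuple[float, float]:
    raw = disks * disk_gb                 # includes parity space
    usable = (disks - parity) * disk_gb   # data space after parity
    return raw, usable

raw, usable = raidz_sizes(disks=3, parity=2, disk_gb=1.0)
print(raw, usable)  # ~3 GB raw (what zpool list showed), ~1 GB usable
```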

Re: [zfs-discuss] Crazy Problem with

2009-01-27 Thread Mika Borner
Mika Borner wrote: > > You're lucky. Ben just wrote about it :-) > > http://www.cuddletech.com/blog/pivot/entry.php?id=1013 > > > Oops, should have read your message completely :-) Anyway you can "lernen" something from it...

Re: [zfs-discuss] ZFS on SAN?

2009-02-14 Thread Mika Borner
Andras Spitzer wrote: Is it worth moving the redundancy from the SAN array layer to the ZFS layer? (configuring redundancy on both layers sounds like a waste to me) There are certain advantages on the array to have redundancy configured (beyond the protection against simple disk failure).

[zfs-discuss] ZFS and Storage

2006-06-26 Thread Mika Borner
Hi Now that Solaris 10 06/06 is finally downloadable I have some questions about ZFS. -We have a big storage system supporting RAID5 and RAID1. At the moment, we only use RAID5 (for non-Solaris systems as well). We are thinking about using ZFS on those LUNs instead of UFS. As ZFS on Hardware RAID5

Re: [zfs-discuss] ZFS and Storage

2006-06-26 Thread Mika Borner
>The vdev can handle dynamic lun growth, but the underlying VTOC or >EFI label >may need to be zero'd and reapplied if you setup the initial vdev on >a slice. If >you introduced the entire disk to the pool you should be fine, but I >believe you'll >still need to offline/online the pool. Fin

[zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Mika Borner
>I'm a little confused by the first poster's message as well, but you lose some benefits of ZFS if you don't create >your pools with either RAID1 or RAIDZ, such as data corruption detection. The array isn't going to detect that >because all it knows about are blocks. That's the dilemma, the arra
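The end-to-end detection being described can be illustrated with a toy checksum example (illustration only, not ZFS code; ZFS defaults to fletcher checksums, with sha256 as an option): because the checksum is stored with the block pointer rather than next to the data, corruption the array considers a perfectly valid block still fails verification.

```python
import hashlib

# A "block" and its checksum as kept in the parent block pointer.
block = b"customer-record-42: balance=1000"
stored_checksum = hashlib.sha256(block).hexdigest()

# Simulate silent corruption on the array: one flipped bit. The array sees
# a well-formed block; only the end-to-end checksum reveals the damage.
corrupted = bytearray(block)
corrupted[0] ^= 0x01

if hashlib.sha256(bytes(corrupted)).hexdigest() != stored_checksum:
    print("corruption detected by checksum mismatch")
```

With a redundant ZFS vdev (mirror or raidz) the detected bad block can also be repaired from a good copy; on a plain hardware-RAID LUN, ZFS can only report it.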

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Mika Borner
>but there may not be filesystem space for double the data. >Sounds like there is a need for a zfs-defragment-file utility perhaps? >Or if you want to be politically cagey about naming choice, perhaps, >zfs-seq-read-optimize-file ? :-) For data warehouse and streaming applications a "seq-read-om

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-27 Thread Mika Borner
>RAID5 is not a "nice" feature when it breaks. Let me correct myself... RAID5 is a "nice" feature for systems without ZFS... >Are huge write caches really an advantage? Or are you talking about huge >write caches with non-volatile storage? Yes, you are right. The huge cache is needed mostly beca

Re: [zfs-discuss] ZFS and Storage

2006-06-27 Thread Mika Borner
>given that zfs always does copy-on-write for any updates, it's not clear >why this would necessarily degrade performance.. Writing should be no problem, as it is serialized... but when both database instances are reading a lot of different blocks at the same time, the spindles might "heat up". >

[zfs-discuss] Archiving on ZFS

2006-09-15 Thread Mika Borner
Hi We are thinking about moving away from our Magneto-Optical based archive system (WORM technology). At the moment, we use a volume manager, which virtualizes the WORMs in the jukebox and presents them as UFS filesystems. The volume manager automatically does asynchronous replication to an id

[zfs-discuss] Physical Clone of zpool

2006-09-18 Thread Mika Borner
Hi We have the following scenario/problem: Our zpool resides on a single LUN on a Hitachi Storage Array. We are thinking about making a physical clone of the zpool with the ShadowImage functionality. ShadowImage takes a snapshot of the LUN, and copies all the blocks to a new LUN (physical copy). In

[zfs-discuss] Oracle 11g Performace

2006-10-24 Thread Mika Borner
Here's an interesting read about forthcoming Oracle 11g file system performance. Sadly, there is no information about how this works. It will be interesting to compare it with ZFS performance, as soon as ZFS is tuned for databases. "Speed and performance will be the hallmark of the 11g, s

[zfs-discuss] Re: Current status of a ZFS root

2006-10-27 Thread Mika Borner
>Unfortunately, the T1000 only has a > single drive bay (!) which makes it impossible to > follow our normal practice of mirroring the root file You can replace the existing 3.5" disk with two 2.5" disks (quite cheap) //Mika