[zfs-discuss] preview zfs port to grub2 + raidz

2009-04-26 Thread C. Bergström


Hi zfsers

Just wanted to ping the list since one of the osunix/grub devs has been 
working hard at porting zfs to grub2.  I don't think he's subscribed to 
zfs-discuss so quoting his original email and cc'ing him.


Drop by #osunix on freenode if you're interested in the raidz/compressed 
rpool support


http://lists.gnu.org/archive/html/grub-devel/2009-04/msg00512.html
patch
http://lists.gnu.org/archive/html/grub-devel/2009-04/txtPQO8HRPI4u.txt
---
Hello, here is an initial port of ZFS from grub-solaris. It can only read 
a file by its name. No ls or tab completion yet. Also, indentation and 
error handling in this patch are completely wrong. To choose the dataset, 
set the zfs variable. Here is an example of how I tested it:

grub> zfs=grubz/grubzfs
grub> cat (hd0)/hello
hello, grub
grub>
Such syntax is temporary and heavily restricts what you can do with 
different zfs filesystems (e.g. you can't cmp files on different 
filesystems).

I propose the following syntax for the future:
(<device>:<dataset>)
E.g.
(hd0:grubzfs)
Any other propositions?
Regards
Vladimir 'phcoder' Serbinenko
--

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Add WORM to OpenSolaris

2009-04-26 Thread Ellis, Mike
Wow... that's seriously cool!

Throw in some of this... http://www.nexenta.com/demos/auto-cdp.html  and
now we're really getting somewhere...


Nice to see this level of innovation here. Anyone try to employ these
types of techniques on s10? I haven't used nexenta in the past, and I'm
not clear in my mind (yet) how much of this is userland/script based vs.
deeper/kernel level stuff.

Thanks,

 -- MikeE

-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Erast
Sent: Sunday, April 26, 2009 1:30 AM
To: Daniel P. Bath
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Add WORM to OpenSolaris

Something like this?

http://www.nexenta.com/corp/index.php?option=com_content&task=view&id=171&Itemid=112

Daniel P. Bath wrote:
> Has anyone created an open-source plugin for WORM (Write Once, Read
> Many) for OpenSolaris?
> Any ideas how hard it would be to create this?
> 



Re: [zfs-discuss] Add WORM to OpenSolaris

2009-04-26 Thread Mattias Pantzare
On Sun, Apr 26, 2009 at 11:54, Ellis, Mike  wrote:
> Wow... that's seriously cool!
>
> Throw in some of this... http://www.nexenta.com/demos/auto-cdp.html  and
> now we're really getting somewhere...
>
>
> Nice to see this level of innovation here. Anyone try to employ these
> types of techniques on s10? I haven't used nexenta in the past, and I'm
> not clear in my mind (yet) how much of this is userland/script based vs.
> deeper/kernel level stuff.

From http://www.nexenta.com/corp/index.php?option=com_content&task=view&id=150&Itemid=112:

NexentaStor auto-cdp service is based on Sun StorageTek Availability
Suite and does not utilize ZFS.

So yes.


[zfs-discuss] Jeb Campbell's slog recovery

2009-04-26 Thread Peter Woodman
Hello - I've run into a problem with a slog device failing, leaving the
system unable to boot while the pool the slog belonged to is attached.
I'm attempting to recover it from another system, but now have the
problem of being unable to import a pool with a missing slog. I've read
Jeb Campbell's post about recovering from this situation[1], but have
two questions: first, is this still the correct method? Second, I'm
having trouble at the linking stage, where the linker is unable to find
zio_compress with current onnv-gate. Does anybody know which library
this function has moved to?


[zfs-discuss] Peculiarities of COW over COW?

2009-04-26 Thread Gary Mills
We run our IMAP spool on ZFS that's derived from LUNs on a Netapp
filer.  There's a great deal of churn in e-mail folders, with messages
appearing and being deleted frequently.  I know that ZFS uses copy-on-
write, so that blocks in use are never overwritten, and that deleted
blocks are added to a free list.  This behavior would spread the free
list all over the zpool.  As well, the Netapp uses WAFL, also a
variety of copy-on-write.  The LUNs appear as large files on the
filer.  It won't know which blocks are in use by ZFS.  It would have
to do copy-on-write each time, I suppose.  Do we have a problem here?

The Netapp has a utility that will defragment files on a volume.  It
must put them back into sequential order.  Does ZFS have any concept
of the geometry of its disks?  If so, regular defragmentation on the
Netapp might be a good thing.

Should ZFS and the Netapp be using the same blocksize, so that they
cooperate to some extent?

-- 
-Gary Mills--Unix Support--U of M Academic Computing and Networking-


Re: [zfs-discuss] Peculiarities of COW over COW?

2009-04-26 Thread Tim
On Sun, Apr 26, 2009 at 3:52 PM, Gary Mills  wrote:

> We run our IMAP spool on ZFS that's derived from LUNs on a Netapp
> filer.  There's a great deal of churn in e-mail folders, with messages
> appearing and being deleted frequently.  I know that ZFS uses copy-on-
> write, so that blocks in use are never overwritten, and that deleted
> blocks are added to a free list.  This behavior would spread the free
> list all over the zpool.  As well, the Netapp uses WAFL, also a
> variety of copy-on-write.  The LUNs appear as large files on the
> filer.  It won't know which blocks are in use by ZFS.  It would have
> to do copy-on-write each time, I suppose.  Do we have a problem here?
>

Not at all.


>
> The Netapp has a utility that will defragment files on a volume.  It
> must put them back into sequential order.  Does ZFS have any concept
> of the geometry of its disks?  If so, regular defragmentation on the
> Netapp might be a good thing.


I assume you mean reallocate on the filer?  This is run automatically as
part of weekly maintenance.  There are flags to run it more aggressively,
but unless you're actually seeing problems, I would suggest avoiding doing
so.


>
>
> Should ZFS and the Netapp be using the same blocksize, so that they
> cooperate to some extent?
>

Just make sure ZFS is using a block size that is a multiple of 4k, which I
believe it does by default.
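A minimal sketch of checking this from the Solaris side (the pool/filesystem
name "tank/imap" is hypothetical, and the 4k figure is an assumption about
the filer's WAFL block size):

```shell
# Sketch only: "tank/imap" is a hypothetical dataset name.
# recordsize is a power of 2, so any value of 4K or larger is
# automatically a multiple of the filer's assumed 4K blocks.

# Show the current recordsize (128K by default):
zfs get recordsize tank/imap

# Pin it explicitly if you want new writes to stay 4K-aligned:
zfs set recordsize=128K tank/imap
```

Note that recordsize only affects files written after the change.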

I have to ask though... why not just serve NFS off the filer to the Solaris
box?  ZFS on a LUN served off a filer seems to make about as much sense as
sticking a ZFS-based LUN behind a v-filer (although the latter might
actually make sense in a world where it were supported
*cough*neverhappen*cough*, since you could buy the "cheap" Newegg disk).


--Tim


Re: [zfs-discuss] Peculiarities of COW over COW?

2009-04-26 Thread Miles Nordin
> "t" == Tim   writes:

 t> why not just serve NFS off the filer

There can be some benefit to the lossless FC fabric through
eliminating TCP RTO's and applying backpressure so the initiator has
more control over I/O scheduling.

As discussed here, block-based storage can produce fewer synchronous
calls / rtt waits than NFS for workloads involving opening and closing
lots of small files when you are not calling fsync on them.  

I state both based on theory, not experience, and I'm not saying that
Gary's workload falls into the second category, nor that NFS is
necessarily the wrong approach; but here are two reasons a sane person
might plausibly decide to use the LUN interface instead.  I'm sure
there are more arguments for and against.




Re: [zfs-discuss] Peculiarities of COW over COW?

2009-04-26 Thread Gary Mills
On Sun, Apr 26, 2009 at 05:19:18PM -0400, Ellis, Mike wrote:

> As soon as you put those zfs blocks ontop of iscsi, the netapp won't
> have a clue as far as how to defrag those "iscsi files" from the
> filer's perspective.  (It might do some fancy stuff based on
> read/write patterns, but that's unlikely)

Since the LUN is just a large file on the Netapp, I assume that all
it can do is to put the blocks back into sequential order.  That might
have some benefit overall.

-- 
-Gary Mills--Unix Support--U of M Academic Computing and Networking-


Re: [zfs-discuss] Peculiarities of COW over COW?

2009-04-26 Thread Gary Mills
On Sun, Apr 26, 2009 at 05:02:38PM -0500, Tim wrote:
> 
>On Sun, Apr 26, 2009 at 3:52 PM, Gary Mills <[1]mi...@cc.umanitoba.ca>
>wrote:
>
>  We run our IMAP spool on ZFS that's derived from LUNs on a Netapp
>  filer.  There's a great deal of churn in e-mail folders, with
>  messages
>  appearing and being deleted frequently.

>  Should ZFS and the Netapp be using the same blocksize, so that they
>  cooperate to some extent?
>  
>Just make sure ZFS is using a block size that is a multiple of 4k,
>which I believe it does by default.

Okay, that's good.

>I have to ask though... why not just serve NFS off the filer to the
>Solaris box?  ZFS on a LUN served off a filer seems to make about as
>much sense as sticking a ZFS-based LUN behind a v-filer (although the
>latter might actually make sense in a world where it were
>supported *cough*neverhappen*cough*, since you could buy the "cheap"
>newegg disk).

I prefer NFS too, but the IMAP server requires POSIX semantics.
I believe that NFS doesn't support that, at least not NFS version 3.

-- 
-Gary Mills--Unix Support--U of M Academic Computing and Networking-


Re: [zfs-discuss] Peculiarities of COW over COW?

2009-04-26 Thread Richard Elling

Gary Mills wrote:

We run our IMAP spool on ZFS that's derived from LUNs on a Netapp
filer.  There's a great deal of churn in e-mail folders, with messages
appearing and being deleted frequently.  I know that ZFS uses copy-on-
write, so that blocks in use are never overwritten, and that deleted
blocks are added to a free list.  This behavior would spread the free
list all over the zpool.  As well, the Netapp uses WAFL, also a
variety of copy-on-write.  The LUNs appear as large files on the
filer.  It won't know which blocks are in use by ZFS.  It would have
to do copy-on-write each time, I suppose.  Do we have a problem here?

The Netapp has a utility that will defragment files on a volume.  It
must put them back into sequential order.  Does ZFS have any concept
of the geometry of its disks?  If so, regular defragmentation on the
Netapp might be a good thing.
  


If you measure this, then please share your results. There is much
speculation, but little characterization, of the "ills of COW performance."


Should ZFS and the Netapp be using the same blocksize, so that they
cooperate to some extent?

  


The ZFS block size is dynamic, a power of 2, with a maximum size equal to
the recordsize. Writes can also be coalesced. If you want to measure the
distribution, there are a few DTrace scripts that will do so (e.g. iosnoop).
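As a sketch of the kind of measurement meant here, the standard DTrace io
provider can summarize physical I/O sizes per device as a power-of-2
histogram (run as root on Solaris; this is an illustration, not the iosnoop
script itself):

```shell
# Sketch: quantize the size of every physical I/O, per device,
# using the DTrace io provider. args[0] is the bufinfo_t (b_bcount
# is the transfer size); args[1] is the devinfo_t (device name).
dtrace -n 'io:::start { @sizes[args[1]->dev_statname] = quantize(args[0]->b_bcount); }'
```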

I did a proof of concept for a large e-mail server over ZFS earlier this
year.  We could handle more than 250,000 users on a T5120 message-store
server using decent storage (lots of spindles).  Since the I/O workload
for IMAP is unique and demanding, we were very pleased with how well ZFS
worked.  But low-latency storage is key to sustaining such large workloads.
-- richard



Re: [zfs-discuss] Peculiarities of COW over COW?

2009-04-26 Thread Tomas Ögren
On 26 April, 2009 - Gary Mills sent me these 1,3K bytes:

> On Sun, Apr 26, 2009 at 05:02:38PM -0500, Tim wrote:
> >I have to ask though... why not just serve NFS off the filer to the
> >Solaris box?  ZFS on a LUN served off a filer seems to make about as
> >much sense as sticking a ZFS based lun behind a v-filer (although the
> >latter might actually make sense in a world where it were
> >supported *cough*neverhappen*cough* since you could buy the "cheap"
> >newegg disk).
> 
> I prefer NFS too, but the IMAP server requires POSIX semantics.
> I believe that NFS doesn't support that, at least NFS version 3.

What non-POSIXness are you referring to, or is it just old assumptions
that don't actually apply?

Lots of people (me for instance) are using IMAP servers with data served
over NFSv3..

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se