[zfs-discuss] boot from zfs, mirror config issue

2008-03-26 Thread Peter Brouwer, Principal Storage Architect, Office of the Chief Technologist, Sun MicroSystems




Hello,

I ran into the following issue when configuring a system with ZFS for
root.
The restriction is that a root pool can only be either a single disk or a
single mirror.
Trying to set bootfs on a zpool that does not satisfy the above
criteria fails, so this is good.

However, adding a second mirror to the pool, e.g. 'zpool add rootpool
mirror disk3 disk4', is not blocked.
Once that command has been executed you are toast, as the extra mirror
vdev cannot be removed, and executing 'zpool set bootfs' now fails.
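
For reference, a rough sketch of the sequence (the disk and dataset
names below are just examples):

  zpool create rootpool mirror c1t0d0s0 c1t1d0s0
  zpool set bootfs=rootpool/ROOT/s10root rootpool   # accepted: pool is a single mirror
  zpool add rootpool mirror c1t2d0s0 c1t3d0s0       # not blocked: pool now has two top-level vdevs
  zpool set bootfs=rootpool/ROOT/s10root rootpool   # now fails, and the extra vdev cannot be removed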

If this is general ZFS behaviour, would it not be better for zpool to
check whether bootfs is set and refuse any 'zpool add' command that
would violate the boot restrictions of the pool that bootfs is set on?

-- 
Regards Peter Brouwer,
Sun Microsystems Linlithgow
Principal Storage Architect, ABCP DRII Consultant
Office:+44 (0) 1506 672767
Mobile:+44 (0) 7720 598226
Skype :flyingdutchman_,flyingdutchman_l







[zfs-discuss] zpool automount issue

2008-03-26 Thread Peter Brouwer, Principal Storage Architect, Office of the Chief Technologist, Sun MicroSystems




Hello,

I ran into an issue with the automount behaviour of zpool.
The default is for the pool and the filesystems in it to be mounted
automatically, unless you override the mountpoint property (e.g.
'zfs set mountpoint=legacy' or an explicit path).

When I used 'export' as the pool name, I could not get it to automount.
I wonder if 'export' is effectively a reserved name, since it is also
part of the zpool command syntax.
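
For reference, roughly what I did (the device name is just an example):

  zpool create export c2t0d0            # pool named 'export'
  zfs get mountpoint,mounted export     # mountpoint defaults to /export, but it did not automount for me
  zfs mount export                      # explicit mount as a cross-check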

Anyone seen this too?

This was on the Nexenta OpenSolaris derivative.
-- 
Regards Peter Brouwer,
Sun Microsystems Linlithgow
Principal Storage Architect, ABCP DRII Consultant
Office:+44 (0) 1506 672767
Mobile:+44 (0) 7720 598226
Skype :flyingdutchman_,flyingdutchman_l







Re: [zfs-discuss] Best practices for ZFS plaiding

2008-03-26 Thread William D. Hathaway
If you are using 6 Thumpers via iSCSI to provide storage to your zpool and
don't use either mirroring or RAIDZ/RAIDZ2 across the Thumpers, then your
storage pool becomes unavailable as soon as one Thumper goes down.  I think
you want some form of RAID at both levels.
 
 


Re: [zfs-discuss] Best practices for ZFS plaiding

2008-03-26 Thread Bob Friesenhahn
On Wed, 26 Mar 2008, Tim wrote:

> No raid at all.  The system should just stripe across all of the LUN's
> automagically, and since you're already doing your raid on the thumper's,
> they're *protected*.  You can keep growing the zpool indefinitely, I'm not
> aware of any maximum disk limitation.

The data may be protected, but the uptime will depend on the uptime of 
all of those systems.  Downtime of *any* of the systems in a load-share 
configuration means downtime for the entire pool.  Of course this is the 
case with any storage system as more hardware is added, but autonomously 
administered hardware is more likely to encounter a problem.  Local disk 
is usually more reliable than remote disk.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] Best practices for ZFS plaiding

2008-03-26 Thread Richard Elling
Larry Lui wrote:
> Hello,
> I have a situation here at the office I would like some advice on.
>
> I have 6 Sun Fire x4550(Thumper) that I want to aggregate the storage 
> and create a unified namespace for my client machines.  My plan was to 
> export the zpools from each thumper as an iscsi target to a Solaris 
> machine and create a RAIDZ zpool from the iscsi targets.  I think this 
> is what they call RAID plaiding(RAID on RAID).  This Solaris frontend 
> machine would then share out this zpool via NFS or CIFS.
>   

What is your operating definition of "unified namespace"?  In my mind,
I've been providing a unified namespace for 20+ years -- it is a process
rather than a product.  For example, here at Sun, no matter where I log in,
I get my home directory.

> My question is what is the best solution for this?  The question i'm 
> facing is how to add additional thumpers since you cannot expand a RAIDZ 
> array.
>   

Don't think of a thumper as a whole disk.  Then expanding a raidz2
(preferred) can be accomplished quite easily.  For example, something
like 6 thumpers, each providing N iSCSI volumes.  You can add another
thumper, move the data around, and end up with 7 thumpers providing
data -- online, no downtime.  This will take a good long while to do
because you are moving TBytes between thumpers, but it can be done.
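
For example, the front-end pool might be built and later grown like this
(only a sketch; the LUN names stand in for whatever devices the iSCSI
initiator presents on the head node):

  # one raidz2 vdev, built from one LUN on each of the 6 thumpers
  zpool create bigpool raidz2 t1-lun0 t2-lun0 t3-lun0 t4-lun0 t5-lun0 t6-lun0
  # grow the pool online by adding another raidz2 vdev, again one LUN per thumper
  zpool add bigpool raidz2 t1-lun1 t2-lun1 t3-lun1 t4-lun1 t5-lun1 t6-lun1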

IMHO, there is some ugliness here.  You might see if pNFS, QFS, or Lustre
would better suit the requirements at the "unified namespace" level.
 -- richard



Re: [zfs-discuss] Best practices for ZFS plaiding

2008-03-26 Thread Larry Lui
The issue with not having RAID on the front-end Solaris box is what 
happens when one of the backend thumpers dies.  I would imagine that the 
entire zpool would become unusable if one of the thumpers died, since 
the data would be striped across all of them.

Tim wrote:
> What you want to do should actually be pretty easy.  On the thumper's, 
> just do your normal raid-z/raid-z2, and export them to the solaris box.  
> Then on the solaris box, you just create a zpool, and add the LUN's one 
> at a time.  No raid at all.  The system should just stripe across all of 
> the LUN's automagically, and since you're already doing your raid on the 
> thumper's, they're *protected*.  You can keep growing the zpool 
> indefinitely, I'm not aware of any maximum disk limitation.
> 
> 
> On Tue, Mar 25, 2008 at 6:12 PM, Larry Lui <[EMAIL PROTECTED] 
> > wrote:
> 
> Hello,
> I have a situation here at the office I would like some advice on.
> 
> I have 6 Sun Fire x4550(Thumper) that I want to aggregate the storage
> and create a unified namespace for my client machines.  My plan was to
> export the zpools from each thumper as an iscsi target to a Solaris
> machine and create a RAIDZ zpool from the iscsi targets.  I think this
> is what they call RAID plaiding(RAID on RAID).  This Solaris frontend
> machine would then share out this zpool via NFS or CIFS.
> 
> My question is what is the best solution for this?  The question i'm
> facing is how to add additional thumpers since you cannot expand a RAIDZ
> array.
> 
> Thanks for taking the time to read this.
> 
> Larry
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org 
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> 
> 

-- 
Larry Lui
BIRN Coordinating Center
UC San Diego
9500 Gilman Drive
La Jolla, CA 92093

email: llui at ncmir dot ucsd dot edu
phone: 858-822-0702
fax: 858-822-0828


Re: [zfs-discuss] Best practices for ZFS plaiding

2008-03-26 Thread Larry Lui
My definition of a "unified namespace" is to provide the end user with one 
logical mount point comprising an aggregate of all the thumpers.  A very 
simple example: 6 thumpers (17TB each), with the end user seeing one mount 
point that is 102TB.

I agree with you that there is some ugliness here.  That's why I'm hoping 
to get some better suggestions on how to accomplish this.  I looked at 
Lustre, but it seems to be Linux only.

Thanks for your input.

Richard Elling wrote:
> Larry Lui wrote:
>> Hello,
>> I have a situation here at the office I would like some advice on.
>>
>> I have 6 Sun Fire x4550(Thumper) that I want to aggregate the storage 
>> and create a unified namespace for my client machines.  My plan was to 
>> export the zpools from each thumper as an iscsi target to a Solaris 
>> machine and create a RAIDZ zpool from the iscsi targets.  I think this 
>> is what they call RAID plaiding(RAID on RAID).  This Solaris frontend 
>> machine would then share out this zpool via NFS or CIFS.
>>   
> 
> What is your operating definition of "unified namespace."  In my mind,
> I've been providing a unified namespace for 20+ years -- it is a process
> rather than a product.  For example, here at Sun, no matter where I login,
> I get my home directory.
> 
>> My question is what is the best solution for this?  The question i'm 
>> facing is how to add additional thumpers since you cannot expand a 
>> RAIDZ array.
>>   
> 
> Don't think of a thumper as a whole disk.  Then expanding a raidz2 
> (preferred)
> can be accomplished quite easily.  For example, something like 6 
> thumpers, each
> providing N  iSCSI volumes.  You can add another thumper, move the data 
> around
> and end up with 7 thumpers providing data -- online, no downtime.  This 
> will take
> a good long while to do because you are moving TBytes between thumpers, but
> it can be done.
> 
> IMHO, there is some ugliness here.  You might see if pNFS, QFS, or Lustre
> would better suit the requirements at the "unified namespace" level.
> -- richard
> 

-- 
Larry Lui
BIRN Coordinating Center
UC San Diego
9500 Gilman Drive
La Jolla, CA 92093

email: llui at ncmir dot ucsd dot edu
phone: 858-822-0702
fax: 858-822-0828


Re: [zfs-discuss] Best practices for ZFS plaiding

2008-03-26 Thread Tim
On Wed, Mar 26, 2008 at 11:04 AM, Larry Lui <[EMAIL PROTECTED]> wrote:

> This issue with not having RAID on the front end solaris box is what
> happens when 1 of the backend thumpers dies.  I would imagine that the
> entire zpool would become useless if 1 of the thumpers should die since
> the data would be across all the thumpers.
>
> Tim wrote:
> > What you want to do should actually be pretty easy.  On the thumper's,
> > just do your normal raid-z/raid-z2, and export them to the solaris box.
> > Then on the solaris box, you just create a zpool, and add the LUN's one
> > at a time.  No raid at all.  The system should just stripe across all of
> > the LUN's automagically, and since you're already doing your raid on the
> > thumper's, they're *protected*.  You can keep growing the zpool
> > indefinitely, I'm not aware of any maximum disk limitation.
> >
> >
> > On Tue, Mar 25, 2008 at 6:12 PM, Larry Lui <[EMAIL PROTECTED]
> > > wrote:
> >
> > Hello,
> > I have a situation here at the office I would like some advice on.
> >
> > I have 6 Sun Fire x4550(Thumper) that I want to aggregate the
> storage
> > and create a unified namespace for my client machines.  My plan was
> to
> > export the zpools from each thumper as an iscsi target to a Solaris
> > machine and create a RAIDZ zpool from the iscsi targets.  I think
> this
> > is what they call RAID plaiding(RAID on RAID).  This Solaris
> frontend
> > machine would then share out this zpool via NFS or CIFS.
> >
> > My question is what is the best solution for this?  The question i'm
> > facing is how to add additional thumpers since you cannot expand a
> RAIDZ
> > array.
> >
> > Thanks for taking the time to read this.
> >
> > Larry
> > ___
> > zfs-discuss mailing list
> > zfs-discuss@opensolaris.org 
> > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> >
> >
>
> --
> Larry Lui
> BIRN Coordinating Center
> UC San Diego
> 9500 Gilman Drive
> La Jolla, CA 92093
>
> email: llui at ncmir dot ucsd dot edu
> phone: 858-822-0702
> fax: 858-822-0828
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>


True, but then it becomes a matter of assumed risk vs. payoff.  It seems to
me it would be cheaper in the long run to keep an entire spare Thumper
chassis that you could throw the drives into than to pay the
price/performance cost of doing RAID on both the front end and the back
end.  If this is so mission critical that it can't ever be down, my first
response would be "find a different way".  In fact, my response would be to
buy a USP-VM or a Symm if it's that mission critical, and put a cluster of
*whatever* in front of them to serve your NFS traffic.

--Tim


Re: [zfs-discuss] Best practices for ZFS plaiding

2008-03-26 Thread Richard Elling
Larry Lui wrote:
> My definition of a "unified namespace" is to provide the end user with 1 
> logical mount point which would be comprised of an aggregate of all the 
> thumpers.  A very simple example, 6 thumpers (17TB each).  I want the 
> end user to see one mount point that is 102TB large.
>
> I agree with you that there is some ugliness here.  Thats why I'm hoping 
> to get some better suggestions on how to accomplish this.  I looked at 
> Lustre but it seems to be linux only.
>   

WIP, see http://wiki.lustre.org/index.php?title=Lustre_OSS/MDS_with_ZFS_DMU
But I'm not convinced this is what you are after, either.

There are a number of people exporting ZFS+iSCSI to hosts running ZFS and
subsequently exporting NFS.  While maybe not the best possible performance,
it should work ok.  I'd suggest a migration plan to expand, which will
determine your logical volume size.  AFAIK, there is little performance
characterization that has been done for this, and there are zillions of
possible permutations.  Of course, backups will be challenging until ADM
arrives.
http://opensolaris.org/os/project/adm
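
For what it's worth, the usual shape of that setup looks roughly like the
following.  This is only a sketch -- the names, sizes and address are
placeholders, and it assumes a build where zvols can be exported with the
shareiscsi property (iscsitadm can do the same job otherwise):

  # on each thumper: carve zvols out of the local pool, export them as iSCSI targets
  zfs create -V 2T tank/lun0
  zfs set shareiscsi=on tank/lun0

  # on the NFS head: discover the targets, then build the front-end pool over them
  iscsiadm add discovery-address 192.168.1.101
  iscsiadm modify discovery --sendtargets enable
  zpool create bigpool raidz2 <lun1> <lun2> <lun3> <lun4> <lun5> <lun6>
  zfs set sharenfs=on bigpool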
 -- richard

> Thanks for your input.
>
> Richard Elling wrote:
>   
>> Larry Lui wrote:
>> 
>>> Hello,
>>> I have a situation here at the office I would like some advice on.
>>>
>>> I have 6 Sun Fire x4550(Thumper) that I want to aggregate the storage 
>>> and create a unified namespace for my client machines.  My plan was to 
>>> export the zpools from each thumper as an iscsi target to a Solaris 
>>> machine and create a RAIDZ zpool from the iscsi targets.  I think this 
>>> is what they call RAID plaiding(RAID on RAID).  This Solaris frontend 
>>> machine would then share out this zpool via NFS or CIFS.
>>>   
>>>   
>> What is your operating definition of "unified namespace."  In my mind,
>> I've been providing a unified namespace for 20+ years -- it is a process
>> rather than a product.  For example, here at Sun, no matter where I login,
>> I get my home directory.
>>
>> 
>>> My question is what is the best solution for this?  The question i'm 
>>> facing is how to add additional thumpers since you cannot expand a 
>>> RAIDZ array.
>>>   
>>>   
>> Don't think of a thumper as a whole disk.  Then expanding a raidz2 
>> (preferred)
>> can be accomplished quite easily.  For example, something like 6 
>> thumpers, each
>> providing N  iSCSI volumes.  You can add another thumper, move the data 
>> around
>> and end up with 7 thumpers providing data -- online, no downtime.  This 
>> will take
>> a good long while to do because you are moving TBytes between thumpers, but
>> it can be done.
>>
>> IMHO, there is some ugliness here.  You might see if pNFS, QFS, or Lustre
>> would better suit the requirements at the "unified namespace" level.
>> -- richard
>>
>> 
>
>   



[zfs-discuss] Status of ZFS boot for sparc?

2008-03-26 Thread Brandon Wilson
Hey all,

I haven't seen any notices about ZFS boot / ZFS root filesystem support for
SPARC-based systems.  Please tell me, dear god please tell me, that this
hasn't been set aside.  I'm really hoping to see it in the next release.

Brandon Wilson
[EMAIL PROTECTED]
 
 


Re: [zfs-discuss] Status of ZFS boot for sparc?

2008-03-26 Thread Lori Alt
zfs boot support for sparc (included in the overall delivery
of zfs boot, which includes install support, support for
swap and dump zvols, and various other improvements)
is still planned for Update 6.

We are working very hard to get it into build 88.

Lori

Brandon Wilson wrote:
> Hey all,
>
> I haven't read any notices for ZFS boot / ZFS root filesystem support for 
> sparc based systems? Please tell me, dear god please tell me, that this 
> hasn't been set aside. I'm really hoping to see it in the next release.
>
> Brandon Wilson
> [EMAIL PROTECTED]
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   



Re: [zfs-discuss] Status of ZFS boot for sparc?

2008-03-26 Thread Brandon Wilson
Awesome, thanks Lori!

Brandon Wilson
[EMAIL PROTECTED]
 
 


Re: [zfs-discuss] Status of ZFS boot for sparc?

2008-03-26 Thread Bob Friesenhahn
On Wed, 26 Mar 2008, Lori Alt wrote:

> zfs boot support for sparc (included in the overall delivery
> of zfs boot, which includes install support, support for
> swap and dump zvols, and various other improvements)
> is still planned for Update 6.

Does zfs boot have any particular firmware dependencies?  Will it work 
on old SPARC systems?

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] Status of ZFS boot for sparc?

2008-03-26 Thread Lori Alt
zfs boot has no firmware dependencies.  It should
work on any sparc platform that supports ufs
boot of the same release.

Lori

Bob Friesenhahn wrote:
> On Wed, 26 Mar 2008, Lori Alt wrote:
>
>   
>> zfs boot support for sparc (included in the overall delivery
>> of zfs boot, which includes install support, support for
>> swap and dump zvols, and various other improvements)
>> is still planned for Update 6.
>> 
>
> Does zfs boot have any particular firmware dependencies?  Will it work 
> on old SPARC systems?
>
> Bob
> ==
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   



Re: [zfs-discuss] Status of ZFS boot for sparc?

2008-03-26 Thread Vincent Fox
> We are working very hard to get it into build 88.

*sigh*

Last I heard it was going to be build 86.  I saw build 85 come out and thought 
"GREAT only a couple more weeks!"

Oh well..

Will we ever be able to boot from a RAIDZ pool, or is that fantasy?
 
 


Re: [zfs-discuss] Best practices for ZFS plaiding

2008-03-26 Thread kristof
The best option is to stripe pairs of mirrors.  So in your case, create a
pool which stripes over 3 mirrors; it will look like:

pool
  mirror:
    thumper1
    thumper2
  mirror:
    thumper3
    thumper4
  mirror:
    thumper5
    thumper6

So the pool stripes over those 3 mirrors.
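
In zpool terms (the names below just stand in for the per-thumper iSCSI
LUNs as seen on the head node):

  # three top-level mirrors, each pairing LUNs from two different thumpers
  zpool create pool mirror thumper1-lun thumper2-lun \
                    mirror thumper3-lun thumper4-lun \
                    mirror thumper5-lun thumper6-lun
  # later, grow the stripe with another mirrored pair
  zpool add pool mirror thumper7-lun thumper8-lun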

You can add more mirrors if extra space is needed.

That's the way we implement it right now.

You can lose up to 3 servers, as long as no two of them belong to the same
mirror.

Of course the "NAS head" is a single point of failure, and clustering iSCSI
zpools isn't as easy as you would hope :-(  iSCSI is not yet supported by
the Sun Cluster framework.

One other drawback to expect: if you ever have to boot the NAS head while
some or all of the targets are unavailable, Solaris behaves very badly
during start-up.  Instead of just booting and putting the pool in degraded
mode, the NAS head hangs during boot until you fix the targets, and only
then does it continue to boot.

If you are really concerned about speed, I would advise you to use
InfiniBand rather than Ethernet; it's also a good idea to isolate the iSCSI
traffic.

K
 
 


Re: [zfs-discuss] Status of ZFS boot for sparc?

2008-03-26 Thread David Magda
On Mar 26, 2008, at 18:45, Vincent Fox wrote:

> *sigh*
>
> Last I heard it was going to be build 86.  I saw build 85 come out  
> and thought "GREAT only a couple more weeks!"
>
> Oh well..

After a little while no one remembers if a product was late or on  
time, but everyone remembers if it was buggy or caused data loss.

:)


Re: [zfs-discuss] Status of ZFS boot for sparc?

2008-03-26 Thread Erik Trimble
David Magda wrote:
> On Mar 26, 2008, at 18:45, Vincent Fox wrote:
>
>   
>> *sigh*
>>
>> Last I heard it was going to be build 86.  I saw build 85 come out  
>> and thought "GREAT only a couple more weeks!"
>>
>> Oh well..
>> 
>
> After a little while no one remembers if a product was late or on  
> time, but everyone remembers if it was buggy or caused data loss.
>
>   :)
Of course, if it were an MS product, we would be constantly reminded of both.



-- 
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)



[zfs-discuss] Periodic flush

2008-03-26 Thread Bob Friesenhahn
My application processes thousands of files sequentially, reading 
input files, and outputting new files.  I am using Solaris 10U4. 
While running the application in a verbose mode, I see that it runs 
very fast but pauses about every 7 seconds for a second or two.  This 
is while reading 50MB/second and writing 73MB/second (ARC cache miss 
rate of 87%).  The pause does not occur if the application spends more 
time doing real work.  However, it would be nice if the pause went 
away.

I have tried turning down the ARC size (from 14GB to 10GB) but the 
behavior did not noticeably improve.  The storage device is trained to 
ignore cache flush requests.  According to the Evil Tuning Guide, the 
pause I am seeing is due to a cache flush after the uberblock updates.

It does not seem like a wise choice to disable ZFS cache flushing 
entirely.  Is there a better way other than adding a small delay into 
my application?
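
For reference, the knobs I am talking about are the /etc/system settings
from the Evil Tuning Guide; a sketch only (the ARC value is just the 10GB
cap I tried, and the second entry is the one I would prefer to avoid):

  * /etc/system fragment; comment lines in this file start with '*'
  * cap the ARC at 10 GB
  set zfs:zfs_arc_max = 0x280000000
  * disable ZFS-initiated cache flushes entirely -- the unwise option
  set zfs:zfs_nocacheflush = 1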

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



[zfs-discuss] Mount order of ZFS filesystems vs. other filesystems?

2008-03-26 Thread Kyle McDonald
I seem to be having a problem mounting the filesystems on my machine, 
and I suspect it's due to the order of processing of /etc/vfstab vs. ZFS 
mount properties.

I have a UFS /export, and then a ZFS filesystem that mounts on
/export/OSImages.  In that filesystem I have a couple of directories with
many .ISO files.

In /etc/vfstab, I have entries that mount the ISOs using lofi onto
mountpoints which are also located under /export/OSImages.
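
The entries look roughly like this (paths and the lofi device number are
just illustrative; the lofi device itself comes from lofiadm, however you
arrange for that to happen before the mount):

  # attach an ISO stored on the ZFS filesystem to a lofi device
  lofiadm -a /export/OSImages/solaris/some-image.iso    # prints e.g. /dev/lofi/1

  # /etc/vfstab line that mounts that lofi device back under /export/OSImages
  /dev/lofi/1  -  /export/OSImages/mnt/some-image  hsfs  -  yes  ro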

All of these mounts fail at bootup with messages about non-existent
mountpoints.  My guess is that when /etc/vfstab is processed, the ZFS
filesystem '/export/OSImages' isn't mounted yet.

Any ideas?

After bootup, I can login and run 'mountall' manually and everything 
mounts just fine.

 -Kyle



Re: [zfs-discuss] ZFS performance lower than expected

2008-03-26 Thread Jeff Bonwick
> The disks in the SAN servers were indeed striped together with Linux LVM
> and exported as a single volume to ZFS.

That is really going to hurt.  In general, you're much better off
giving ZFS access to all the individual LUNs.  The intermediate
LVM layer kills the concurrency that's native to ZFS.
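
A sketch of the difference (device names are made up, and the exact
layout is up to you):

  # what the LVM aggregation amounts to: ZFS sees one big LUN, so one vdev
  zpool create tank c3t0d0
  # preferred: export the disks as individual LUNs and let ZFS spread I/O across them
  zpool create tank c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0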

Jeff


Re: [zfs-discuss] Periodic flush

2008-03-26 Thread Neelakanth Nadgir
Bob Friesenhahn wrote:
> My application processes thousands of files sequentially, reading 
> input files, and outputting new files.  I am using Solaris 10U4. 
> While running the application in a verbose mode, I see that it runs 
> very fast but pauses about every 7 seconds for a second or two. 

When you experience the pause at the application level, do you see an
increase in writes to disk?  This might be the regular syncing of the
transaction group to disk, which is normal behavior.  The "amount" of
pause is determined by how much data needs to be synced.  You could of
course decrease it by reducing the time between syncs (either by reducing
the ARC and/or decreasing txg_time); however, I am not sure it will
translate to better performance for you.
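
An easy way to check is to watch the disks while the application runs
(standard tools, nothing ZFS-specific about the first one):

  # per-device I/O at 1-second intervals; a write burst lining up with
  # each application pause (every ~5-7 seconds) points at the txg sync
  iostat -xnz 1
  # the same view at the pool level
  zpool iostat -v 1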

hth,
-neel

> This is while reading 50MB/second and writing 73MB/second (ARC cache miss 
> rate of 87%).  The pause does not occur if the application spends more 
> time doing real work.  However, it would be nice if the pause went 
> away.
> 
> I have tried turning down the ARC size (from 14GB to 10GB) but the 
> behavior did not noticeably improve.  The storage device is trained to 
> ignore cache flush requests.  According to the Evil Tuning Guide, the 
> pause I am seeing is due to a cache flush after the uberblock updates.
> 
> It does not seem like a wise choice to disable ZFS cache flushing 
> entirely.  Is there a better way other than adding a small delay into 
> my application?
> 
> Bob
> ==
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
