Re: [zfs-discuss] Recover from Solaris crash

2007-09-20 Thread Sanjay Nadkarni

On Sep 20, 2007, at 12:55 AM, Tore Johansson wrote:

> Hi,
>
> I am running Solaris 10 on UFS and the rest on ZFS. Now the Solaris
> disk has crashed.
> How can I recover the other ZFS disks?
> Can I reinstall Solaris and recreate the ZFS file systems without data
> loss?

Zpool import is your friend.  Check out
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6g0?l=en&a=view#indexterm-188
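
A minimal sketch of the recovery, assuming the data pool is named "tank"
(substitute your real pool name): after reinstalling Solaris on the new boot
disk, run

    # zpool import          (lists pools found on the attached disks)
    # zpool import -f tank  (-f if the pool was never cleanly exported)

The ZFS file systems in the pool should come back with their data and mount
points intact.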

-Sanjay

>
> Tore
>
>



Re: [zfs-discuss] question about uberblock blkptr

2007-09-20 Thread Roch - PAE

[EMAIL PROTECTED] writes:
 > Roch - PAE wrote:
 > > [EMAIL PROTECTED] writes:
 > >  > Jim Mauro wrote:
 > >  > >
 > >  > > Hey Max - Check out the on-disk specification document at
 > >  > > http://opensolaris.org/os/community/zfs/docs/.
 > >  > >
 > >  > > Page 32 illustration shows the rootbp pointing to a dnode_phys_t
 > >  > > object (the first member of a objset_phys_t data structure).
 > >  > >
 > >  > > The source code indicates ub_rootbp is a blkptr_t, which contains
 > >  > > a 3 member array of dva_t 's called blk_dva (blk_dva[3]).
 > >  > > Each dva_t is a 2 member array of 64-bit unsigned ints (dva_word[2]).
 > >  > >
 > >  > > So it looks like each blk_dva contains 3 128-bit DVA's
 > >  > >
 > >  > > You probably figured all this out already... did you try using
 > >  > > a objset_phys_t to format the data?
 > >  > >
 > >  > > Thanks,
 > >  > > /jim
 > >  > Ok.  I think I know what's wrong.  I think the information (most 
 > > likely, 
 > >  > a objset_phys_t) is compressed
 > >  > with lzjb compression.  Is there a way to turn this entirely off (not 
 > >  > just for file data, but for all meta data
 > >  > as well when a pool is created?  Or do I need to figure out how to hack 
 > >  > in the lzjb_decompress() function in
 > >  > my modified mdb?  (Also, I figured out that zdb is already doing the 
 > >  > left shift by 9 before dumping DVA values,
 > >  > for anyone following this...).
 > >  > 
 > >
 > > Max, this might help (zfs_mdcomp_disable) :
 > > http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#METACOMP
 > >   
 > Hi Roch,
 > That would help, except it does not seem to work.  I set 
 > zfs_mdcomp_disable to 1 with mdb,
 > deleted the pool, recreated the pool, and zdb - still shows the 
 > rootbp in the uberblock_t
 > to have the lzjb flag turned on.  So I then added the variable to 
 > /etc/system, destroyed the pool,
 > rebooted, recreated the pool, and still the same result.  Also, my mdb 
 > shows the same thing
 > for the uberblock_t rootbp blkptr data.   I am running Nevada build 55b.
 > 
 > I shall update the build I am running soon, but in the meantime I'll 
 > probably write a modified cmd_print() function for my
 > (modified)  mdb to handle (at least) lzjb compressed metadata.  Also, I 
 > think the ZFS Evil Tuning Guide should be
 > modified.  It says this can be tuned for Solaris 10 11/06 and snv_52.  I 
 > guess that means only those
 > two releases.  snv_55b has the variable, but it doesn't have an effect 
 > (at least on the uberblock_t
 > rootbp meta-data).
 > 
 > thanks for your help.
 > 
 > max
 > 

My bad. The tunable only affects indirect dbufs (so I guess
only for large files). As you noted, other metadata is
compressed unconditionally (I guess from the use of
ZIO_COMPRESS_LZJB in dmu_objset_open_impl).
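
For anyone following along, the two ways Max described setting the tunable
would look roughly like this (a sketch only, and per the above it only
affects indirect dbufs, not the objset metadata he is chasing):

    # echo 'zfs_mdcomp_disable/W0t1' | mdb -kw     (running kernel)

and, for subsequent boots, in /etc/system:

    set zfs:zfs_mdcomp_disable = 1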

-r



Re: [zfs-discuss] question about uberblock blkptr

2007-09-20 Thread [EMAIL PROTECTED]
Hi Roch,
Roch - PAE wrote:
> [EMAIL PROTECTED] writes:
>  > Roch - PAE wrote:
>  > > [EMAIL PROTECTED] writes:
>  > >  > Jim Mauro wrote:
>  > >  > >
>  > >  > > Hey Max - Check out the on-disk specification document at
>  > >  > > http://opensolaris.org/os/community/zfs/docs/.
>  

> > >  > Ok.  I think I know what's wrong.  I think the information (most 
> > > likely, 
>  > >  > a objset_phys_t) is compressed
>  > >  > with lzjb compression.  Is there a way to turn this entirely off (not 
>  > >  > just for file data, but for all meta data
>  > >  > as well when a pool is created?  Or do I need to figure out how to 
> hack 
>  > >  > in the lzjb_decompress() function in
>  > >  > my modified mdb?  (Also, I figured out that zdb is already doing the 
>  > >  > left shift by 9 before dumping DVA values,
>  > >  > for anyone following this...).
>  > >  > 
>  > >
>  > > Max, this might help (zfs_mdcomp_disable) :
>  > > 
> http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#METACOMP
>  > >   
>  > Hi Roch,
>  > That would help, except it does not seem to work.  I set 
>  > zfs_mdcomp_disable to 1 with mdb,
>  > deleted the pool, recreated the pool, and zdb - still shows the 
>  > rootbp in the uberblock_t
>  > to have the lzjb flag turned on.  So I then added the variable to 
>  > /etc/system, destroyed the pool,
>  > rebooted, recreated the pool, and still the same result.  Also, my mdb 
>  > shows the same thing
>  > for the uberblock_t rootbp blkptr data.   I am running Nevada build 55b.
>  > 
>  > I shall update the build I am running soon, but in the meantime I'll 
>  > probably write a modified cmd_print() function for my
>  > (modified)  mdb to handle (at least) lzjb compressed metadata.  Also, I 
>  > think the ZFS Evil Tuning Guide should be
>  > modified.  It says this can be tuned for Solaris 10 11/06 and snv_52.  I 
>  > guess that means only those
>  > two releases.  snv_55b has the variable, but it doesn't have an effect 
>  > (at least on the uberblock_t
>  > rootbp meta-data).
>  > 
>  > thanks for your help.
>  > 
>  > max
>  > 
>
> My bad. The tunable only affects indirect dbufs (so I guess
> only for large files). As you noted, other metadata is
> compressed unconditionally (I guess from the use of
> ZIO_COMPRESS_LZJB in dmu_objset_open_impl).
>
> -r
>
>
>   
This makes printing the data with ::print much more problematic...
The code in mdb that prints data structures recursively iterates through the
structure members, reading each member separately.  I can either write a new
print function that does the decompression, or add a new dcmd that does the
decompression and dumps the data to the screen, but then I lose the
structure member names in the output.  I guess I'll do the decompression dcmd
first, and then figure out how to get the member names back in the output...
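
(In the meantime, zdb remains a handy cross-check, since as noted earlier in
the thread it already prints the uberblock's rootbp, DVAs and compression
flag included. Something like the following, with "tank" standing in for the
pool name and the repeated flag bumping verbosity, if memory serves:

    # zdb -uuu tank
)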

thanks,
max




Re: [zfs-discuss] "zoneadm clone" doesn't support ZFS snapshots in s10u4?

2007-09-20 Thread Mark J Musante
On Wed, 19 Sep 2007, Mike Gerdts wrote:

> The rather consistent answer is that zoneadm clone will not do zfs until
> live upgrade does zfs.  Since there is a new project in the works (Snap
> Upgrade) that is very much targeted at environments that use zfs, I
> would be surprised to see zfs support come into live upgrade.

I for one would like to see live upgrade support ZFS.  Even with Snap
Upgrade on the horizon (the page on the OpenSolaris site says 'March' but
the current schedule is a sea of TBDs [see
http://opensolaris.org/os/project/caiman/Snap_Upgrade/schedule/ for
more]), I think there are enough systems and configs out there to warrant
putting ZFS support into Live Upgrade.

LU is familiar and proven, and deployed widely.  My opinion only, but I
think it would be short-sighted to stop supporting it just because another
technology is being developed.


Regards,
markm


Re: [zfs-discuss] "zoneadm clone" doesn't support ZFS snapshots in s10u4?

2007-09-20 Thread Mike Gerdts
On 9/20/07, Mark J Musante <[EMAIL PROTECTED]> wrote:
> I for one would like to see live upgrade support ZFS.  Even with Snap
> Upgrade on the horizon (the page on the OpenSolaris site says 'March' but
> the current schedule is a sea of TBDs [see
> http://opensolaris.org/os/project/caiman/Snap_Upgrade/schedule/ for
> more]), I think there are enough systems and configs out there to warrant
> putting ZFS support into Live Upgrade.
>
> LU is familiar and proven, and deployed widely.  My opinion only, but I
> think it would be short-sighted to stop supporting it just because another
> technology is being developed.

I'm with you.  Snap Upgrade has a March deliverable for OpenSolaris,
which means that the widely deployed base (Solaris 10 and earlier)
won't see it for some time after that, at best.

If you find the use of zfs clones more important than live upgrade (or
upgrade, for that matter), search the zfs or zones lists from late last
year or early this year - I documented a manual procedure there that is
essentially the following; there may be more details in the older
message.

zoneadm -z master detach
zfs snapshot tank/zones/master@gold        # dataset names here are examples only
zfs clone tank/zones/master@gold tank/zones/newzone
zoneadm -z master attach
zonecfg -z newzone create -t master
# change IPs, zonepath, et al. to point at the clone
zoneadm -z newzone attach
zoneadm -z newzone boot -s
zlogin newzone sys-unconfig
zoneadm -z newzone boot
zlogin -C newzone

-- 
Mike Gerdts
http://mgerdts.blogspot.com/


Re: [zfs-discuss] ZFS RAIDZ vs. RAID5.

2007-09-20 Thread Roch - PAE

Here is a different twist on your interesting scheme.  First
start with writting 3 blocks and parity in a full stripe.

Disk0   Disk1   Disk2   Disk3
-----   -----   -----   ------
D0      D1      D2      P0,1,2


Next the application modifies D0 -> D0' and also writes other
data, D3 and D4. Now you have:

Disk0   Disk1   Disk2   Disk3
-----   -----   -----   ------
D0      D1      D2      P0,1,2
D0'     D3      D4      P0',3,4

So file updates combine with new data into new full stripes.
This is the trivial part. Now the hard part:

We have to deal with D0. D0 is free of data content
(superseded by D0'). However, it holds parity information
protecting live data D1 and D2. If the workload
updates the data in D1 and D2, the full stripe becomes free (this
is the easy part).

But if D1 and D2 stay immutable for a long time, then we can
run out of pool blocks with D0 held down in a half-freed state.
So as we near full pool capacity, a scrubber would have to walk
the stripes and look for partially freed ones. Then it
would need to do a scrubbing "read/write" on D1 and D2 so that
they become part of a new stripe with some other data,
freeing the entire initial stripe.


-r



Re: [zfs-discuss] ZFS Solaris 10 Update 4 Patches

2007-09-20 Thread John-Paul Drawneek
err, I installed the patch and am still on zfs 3?

solaris 10 u3 with kernel patch 120011-14
 
 


Re: [zfs-discuss] ZFS Solaris 10 Update 4 Patches

2007-09-20 Thread Torrey McMahon
Did you upgrade your pools? "zpool upgrade -a"
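
Roughly: "zpool upgrade" with no arguments shows the on-disk version of each
pool, and

    # zpool upgrade -a

brings every pool up to the newest version the running software supports.
Note it's a one-way trip - older software can't import an upgraded pool.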

John-Paul Drawneek wrote:
> err, I installed the patch and am still on zfs 3?
>
> solaris 10 u3 with kernel patch 120011-14



Re: [zfs-discuss] Project proposal: Block selection policy and space map enhancements

2007-09-20 Thread eric kustarz

On Sep 15, 2007, at 12:55 PM, Victor Latushkin wrote:

> I'm proposing a new project for the ZFS community - Block Selection
> Policy and Space Map Enhancements.

+1.

I wonder if some of this could look into a dynamic policy.  For  
example, a policy that switches when the pool becomes "too full".

eric

>
> The space map [1] is a very efficient data structure for keeping track of
> free space in the metaslabs, but there is at least one area of improvement -
> the space map block selection algorithm, which could be better (see [2]).
>
> I propose a community project to address the following:
>
> * define requirements for a block selection policy for various  
> workloads
>
> * improve current space map / metaslab implementation to allow for
> efficient implementation of multiple block selection policies
>
> * develop a collection of the block selection policies optimized for
> various workloads and requirements
>
>
> Some background, motivation, and requirements can be found in the writeup
> below.
>
> With the best regards,
> Victor
>
>
> Background:
> ===
> The current block selection algorithm as implemented in metaslab_ff_alloc()
> caused some pain recently (see [3-9,10] for examples), and as a result
> bug 6495013 "Loops and recursion in metaslab_ff_alloc can kill performance,
> even on a pool with lots of free data" [11] was filed. Investigation of
> this bug identified a race condition in the metaslab selection code (see
> metaslab_group_alloc() for details), and fixing it indeed provided some
> relief but did not solve the problem completely (see [4,10] for example).
>
>
> Current Limitations:
> 
> The synopsis of bug 6495013 suggests that it is the loops and recursion in
> metaslab_ff_alloc() that may kill performance. Indeed, loops and
> recursion in metaslab_ff_alloc() make it an O(N log N) algorithm in the worst
> case, where N is the number of segments in the space map. Free space
> fragmentation and the alignment requirements of metaslab_ff_alloc() may help
> this to surface even on a pool with lots of free space.
>
> For example, let's imagine a pool consisting of only one 256k metaslab
> with two allocated 512-byte blocks - one at the beginning, another one at
> the end. The space map for this metaslab will contain only one free space
> segment, [512,261632). An attempt to allocate a 128k block from this space
> map will fail, because the allocator only considers 128k-aligned offsets
> and neither of them (0 or 128k) has a free 128k run above it - even though
> we definitely have 255k of contiguous free space in this case.
>
> Gang blocks come to the rescue here (see [13]). We start trying to
> allocate smaller block sizes - 64k, 32k and so on - until we allocate
> enough smaller blocks to satisfy the 128k allocation. In this
> case, depending on the position of the 64k-aligned and 512b-aligned cursors
> (see the use of sm_ppd in metaslab_ff_alloc()), we may get multiple variants
> of smaller block locations (GH is the gang header, GM1 and GM2 are gang
> members, A indicates allocated blocks, and F free space in the space map):
>
> A: [0-512)  GH=[512-1k)  GM1=[64k-128k)  GM2=[128k-192k)  [255,5k-256k)
> F: [1k-64k)  [192k-255,5k)
>
> In this case we effectively allocate a 128k block that is not aligned on a
> 128k boundary, with the additional overhead of a gang header.
>
> A: [0-512)  GH=[512-1k)  GM2=[64k-128k)  GM1=[128k-192k)  [255,5k-256k)
> F: [1k-64k)  [192k-255,5k)
>
> In this case the two halves of our 128k are swapped.
>
> In the case where the 512b-aligned cursor points to offset 64k and we allocate
> the GH starting at that offset, we'll have to allocate 3 gang members - one
> 64k and two 32k - fragmenting free space further.
>
> Another option would be to walk through the space map and take as many free
> space segments as needed to satisfy the allocation. With our example it
> would look like this:
>
> A: [0-512)  GH=[512-1k)  GM1=[1k-129k)  [255,5k-256k)
> F: [129k-255,5k)
>
> But do we really need the overhead of a gang header in this case? It looks
> like we can definitely do better, and it is a bit early to "stop looking
> and start ganging" (see [12]). It may be better to look smarter instead.
>
> The potential number of free space segments in a space map grows with the
> size of the space map. Since the number of metaslabs per vdev is somewhat
> fixed (see [14]), larger vdevs have larger space maps and a larger potential
> number of free space segments in them, and thus higher chances of worst-case
> behaviour. This may be mitigated by increasing the number of metaslabs per
> vdev, but that brings other difficulties.
>
> Requirements:
> 
> Alternative implementations of the block allocation functions and/or the
> space map data structure could make the allocation procedure a better space
> map citizen with an O(log N) bound, and could provide additional information
> such as the size of the largest free space segment in the space map. This
> may translate into more compact spa

Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-20 Thread Richard Elling
a few comments below...

Paul B. Henson wrote:
> We are looking for a replacement enterprise file system to handle storage
> needs for our campus. For the past 10 years, we have been happily using DFS
> (the distributed file system component of DCE), but unfortunately IBM
> killed off that product and we have been running without support for over a
> year now. We have looked at a variety of possible options, none of which
> have proven fruitful. We are currently investigating the possibility of a
> Solaris 10/ZFS implementation. I have done a fair amount of reading and
> perusal of the mailing list archives, but I apologize in advance if I ask
> anything I should have already found in a FAQ or other repository.
> 
> Basically, we are looking to provide initially 5 TB of usable storage,
> potentially scaling up to 25-30TB of usable storage after successful
> initial deployment. We would have approximately 50,000 user home
> directories and perhaps 1000 shared group storage directories. Access to
> this storage would be via NFSv4 for our UNIX infrastructure, and CIFS for
> those annoying Windows systems you just can't seem to get rid of ;).

50,000 directories aren't a problem, unless you also need 50,000 quotas and
hence 50,000 file systems.  Such a large, single storage pool system will
be an outlier... significantly beyond what we have real world experience
with.
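
(For context: ZFS quotas are per-dataset, which is why per-user quotas imply
per-user file systems - the names below are illustrative only:

    # zfs create tank/home/jdoe
    # zfs set quota=500m tank/home/jdoe

Hence 50,000 individually-quota'd users means 50,000 file systems in the
pool.)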

> I read that initial versions of ZFS had scalability issues with such a
> large number of file systems, resulting in extremely long boot times and
> other problems. Supposedly a lot of those problems have been fixed in the
> latest versions of OpenSolaris, and many of the fixes have been backported
> to the official Solaris 10 update 4? Will that version of Solaris
> reasonably support 50 odd thousand ZFS file systems?

There have been improvements in performance and usability.  Not all
performance problems were in ZFS, but large numbers of file systems exposed
other problems.  However, I don't think that this has been characterized.

> I saw a couple of threads in the mailing list archives regarding NFS not
> transitioning file system boundaries, requiring each and every ZFS
> filesystem (50 thousand-ish in my case) to be exported and mounted on the
> client separately. While that might be feasible with an automounter, it
> doesn't really seem desirable or efficient. It would be much nicer to
> simply have one mount point on the client with all the home directories
> available underneath it. I was wondering whether or not that would be
> possible with the NFSv4 pseudo-root feature. I saw one posting that
> indicated it might be, but it wasn't clear whether or not that was a
> current feature or something yet to be implemented. I have no requirements
> to support legacy NFSv2/3 systems, so a solution only available via NFSv4
> would be acceptable.
> 
> I was planning to provide CIFS services via Samba. I noticed a posting a
> while back from a Sun engineer working on integrating NFSv4/ZFS ACL support
> into Samba, but I'm not sure if that was ever completed and shipped either
> in the Sun version or pending inclusion in the official version, does
> anyone happen to have an update on that? Also, I saw a patch proposing a
> different implementation of shadow copies that better supported ZFS
> snapshots, any thoughts on that would also be appreciated.

This work is done and, AFAIK, has been integrated into S10 8/07.

> Is there any facility for managing ZFS remotely? We have a central identity
> management system that automatically provisions resources as necessary for
> users, as well as providing an interface for helpdesk staff to modify
> things such as quota. I'd be willing to implement some type of web service
> on the actual server if there is no native remote management; in that case,
> is there any way to directly configure ZFS via a programmatic API, as
> opposed to running binaries and parsing the output? Some type of perl
> module would be perfect.

This is a loaded question.  There is a webconsole interface to ZFS which can
be run from most browsers.  But I think you'll find that the CLI is easier
for remote management.

> We need high availability, so are looking at Sun Cluster. That seems to add
> an extra layer of complexity , but there's no way I'll get signoff on
> a solution without redundancy. It would appear that ZFS failover is
> supported with the latest version of Solaris/Sun Cluster? I was speaking
> with a Sun SE who claimed that ZFS would actually operate active/active in
> a cluster, simultaneously writable by both nodes. From what I had read, ZFS
> is not a cluster file system, and would only operate in the active/passive
> failover capacity. Any comments?

Active/passive only.  ZFS is not supported over pxfs and ZFS cannot be
mounted simultaneously from two different nodes.

For most large file servers, people will split the file systems across
servers such that under normal circumstances, both nodes are providing
file

Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-20 Thread Paul B. Henson
On Thu, 20 Sep 2007, Richard Elling wrote:

> 50,000 directories aren't a problem, unless you also need 50,000 quotas
> and hence 50,000 file systems.  Such a large, single storage pool system
> will be an outlier... significantly beyond what we have real world
> experience with.

Yes, considering that 45,000 of those users will be students, we definitely
need separate quotas for each one :).

Hmm, I get a bit of a shiver down my spine at the prospect of deploying a
critical central service in a relatively untested configuration 8-/. What
is the maximum number of file systems in a given pool that has undergone
some reasonable amount of real world deployment?

One issue I have is that our previous filesystem, DFS, completely spoiled
me with its global namespace and location transparency. We had three fairly
large servers, with the content evenly dispersed among them, but from the
perspective of the client any user's files were available at
/dfs/user/<username>, regardless of which physical server they resided on.
We could even move them around between servers transparently.

Unfortunately, there aren't really any filesystems available with similar
features and enterprise applicability. OpenAFS comes closest, we've been
prototyping that but the lack of per file ACLs bites, and as an add-on
product we've had issues with kernel compatibility across upgrades.

I was hoping to replicate a similar feel by just having one large file
server with all the data on it. If I split our user files across multiple
servers, we would have to worry about which server contained what files,
which would be rather annoying.

There are some features in NFSv4 that seem like they might someday help
resolve this problem, but I don't think they are readily available in
servers and definitely not in the common client.

> > I was planning to provide CIFS services via Samba. I noticed a posting a
> > while back from a Sun engineer working on integrating NFSv4/ZFS ACL support
> > into Samba, but I'm not sure if that was ever completed and shipped either
> > in the Sun version or pending inclusion in the official version, does
> > anyone happen to have an update on that? Also, I saw a patch proposing a
> > different implementation of shadow copies that better supported ZFS
> > snapshots, any thoughts on that would also be appreciated.
>
> This work is done and, AFAIK, has been integrated into S10 8/07.

Excellent. I did a little further research myself on the Samba mailing
lists, and it looks like ZFS ACL support was merged into the official
3.0.26 release. Unfortunately, the patch to improve shadow copy performance
on top of ZFS still appears to be floating around the technical mailing
list under discussion.

> > Is there any facility for managing ZFS remotely? We have a central identity
> > management system that automatically provisions resources as necessary for
[...]
> This is a loaded question.  There is a webconsole interface to ZFS which can
> be run from most browsers.  But I think you'll find that the CLI is easier
> for remote management.

Perhaps I should have been more clear -- a remote facility available via
programmatic access, not manual user direct access. If I wanted to do
something myself, I would absolutely login to the system and use the CLI.
However, the question was regarding an automated process. For example, our
Perl-based identity management system might create a user in the middle of
the night based on the appearance in our authoritative database of that
user's identity, and need to create a ZFS filesystem and quota for that
user. So, I need to be able to manipulate ZFS remotely via a programmatic
API.
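
(Absent a stable library API, the usual stopgap is a thin wrapper around ssh
and the CLI - a minimal sketch, with host and dataset names invented for
illustration:

    ssh root@zfs-server '/usr/sbin/zfs create tank/home/jdoe && \
        /usr/sbin/zfs set quota=500m tank/home/jdoe'

The CLI is reasonably script-friendly on the query side too; for example,
"zfs get -H -o value used tank/home/jdoe" emits just the value, no headers.)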

> Active/passive only.  ZFS is not supported over pxfs and ZFS cannot be
> mounted simultaneously from two different nodes.

That's what I thought, I'll have to get back to that SE. Makes me wonder as
to the reliability of his other answers :).

> For most large file servers, people will split the file systems across
> servers such that under normal circumstances, both nodes are providing
> file service.  This implies two or more storage pools.

Again though, that would imply two different storage locations visible to
the clients? I'd really rather avoid that. For example, with our current
Samba implementation, a user can just connect to
'\\files.csupomona.edu\<username>' to access their home directory or
'\\files.csupomona.edu\<groupname>' to access a shared group directory.
They don't need to worry on which physical server it resides or determine
what server name to connect to.

> The SE is mistaken.  Sun^H^Holaris Cluster supports a wide variety of
> JBOD and RAID array solutions.  For ZFS, I recommend a configuration
> which allows ZFS to repair corrupted data.

That would also be my preference, but if I were forced to use hardware
RAID, the additional loss of storage for ZFS redundancy would be painful.

Would anyone happen to have any good recommendations for an enterprise
scale storage subsystem suitable for ZFS deployment? If I recall c

Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-20 Thread James F. Hranicky
Paul B. Henson wrote:

> One issue I have is that our previous filesystem, DFS, completely spoiled
> me with its global namespace and location transparency. We had three fairly
> large servers, with the content evenly dispersed among them, but from the
> perspective of the client any user's files were available at
> /dfs/user/<username>, regardless of which physical server they resided on.
> We could even move them around between servers transparently.

This can be solved using an automounter as well. All home directories
are specified as

/nfs/home/user

in the passwd map, then have a homes map that maps

/nfs/home/user -> /nfs/homeXX/user

then have a map that maps

/nfs/homeXX    -> serverXX:/export/homeXX

You can have any number of servers serving up any number of homes
filesystems. Moving users between servers means only changing the
mapping in the homes map. The user never knows the difference, only
seeing the homedir as

/nfs/home/user

(we used amd)
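
(For reference, with Solaris autofs the same indirection might look like the
following - map names, users, and servers invented for illustration:

    /etc/auto_master:   /nfs/home    auto_home
    /etc/auto_home:     jdoe         server03:/export/home03/jdoe

amd expresses the same mapping in its own map syntax.)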

> Again though, that would imply two different storage locations visible to
> the clients? I'd really rather avoid that. For example, with our current
> Samba implementation, a user can just connect to
> '\\files.csupomona.edu\' to access their home directory or
> '\\files.csupomona.edu\' to access a shared group directory.
> They don't need to worry on which physical server it resides or determine
> what server name to connect to.

Samba can be configured to map home drives to /nfs/home/%u.  Let Samba use
the automounter setup and it's just as transparent on the CIFS side.
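
A minimal smb.conf sketch of that mapping:

    [homes]
       path = /nfs/home/%u
       browseable = no
       read only = no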

This is how we had things set up at my previous place of employment and
it worked extremely well. Unfortunately, due to lack of BSD-style quotas
and due to the fact that snapshots counted toward ZFS quota, I decided
against using ZFS for filesystem service -- the automounter setup cannot
mitigate the bunches-of-little-filesystems problem.

Jim


Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-20 Thread Andy Lubel
On 9/20/07 3:49 PM, "Paul B. Henson" <[EMAIL PROTECTED]> wrote:

> On Thu, 20 Sep 2007, Richard Elling wrote:
> 
>> 50,000 directories aren't a problem, unless you also need 50,000 quotas
>> and hence 50,000 file systems.  Such a large, single storage pool system
>> will be an outlier... significantly beyond what we have real world
>> experience with.
> 
> Yes, considering that 45,000 of those users will be students, we definitely
> need separate quotas for each one :).
> 
> Hmm, I get a bit of a shiver down my spine at the prospect of deploying a
> critical central service in a relatively untested configuration 8-/. What
> is the maximum number of file systems in a given pool that has undergone
> some reasonable amount of real world deployment?

15,500 is the most I see in this article:

http://developers.sun.com/solaris/articles/nfs_zfs.html

Looks like it's completely scalable, but your boot time may suffer the more
you have. Just don't reboot :)

> 
> One issue I have is that our previous filesystem, DFS, completely spoiled
> me with its global namespace and location transparency. We had three fairly
> large servers, with the content evenly dispersed among them, but from the
> perspective of the client any user's files were available at
> /dfs/user/<username>, regardless of which physical server they resided on.
> We could even move them around between servers transparently.

If it was so great why did IBM kill it?  Did they have an alternative with
the same functionality?

> 
> Unfortunately, there aren't really any filesystems available with similar
> features and enterprise applicability. OpenAFS comes closest, we've been
> prototyping that but the lack of per file ACLs bites, and as an add-on
> product we've had issues with kernel compatibility across upgrades.
> 
> I was hoping to replicate a similar feel by just having one large file
> server with all the data on it. If I split our user files across multiple
> servers, we would have to worry about which server contained what files,
> which would be rather annoying.
> 
> There are some features in NFSv4 that seem like they might someday help
> resolve this problem, but I don't think they are readily available in
> servers and definitely not in the common client.
> 
>>> I was planning to provide CIFS services via Samba. I noticed a posting a
>>> while back from a Sun engineer working on integrating NFSv4/ZFS ACL support
>>> into Samba, but I'm not sure if that was ever completed and shipped either
>>> in the Sun version or pending inclusion in the official version, does
>>> anyone happen to have an update on that? Also, I saw a patch proposing a
>>> different implementation of shadow copies that better supported ZFS
>>> snapshots, any thoughts on that would also be appreciated.
>> 
>> This work is done and, AFAIK, has been integrated into S10 8/07.
> 
> Excellent. I did a little further research myself on the Samba mailing
> lists, and it looks like ZFS ACL support was merged into the official
> 3.0.26 release. Unfortunately, the patch to improve shadow copy performance
> on top of ZFS still appears to be floating around the technical mailing
> list under discussion.
> 
>>> Is there any facility for managing ZFS remotely? We have a central identity
>>> management system that automatically provisions resources as necessary for
> [...]
>> This is a loaded question.  There is a webconsole interface to ZFS which can
>> be run from most browsers.  But I think you'll find that the CLI is easier
>> for remote management.
> 
> Perhaps I should have been more clear -- a remote facility available via
> programmatic access, not manual user direct access. If I wanted to do
> something myself, I would absolutely login to the system and use the CLI.
> However, the question was regarding an automated process. For example, our
> Perl-based identity management system might create a user in the middle of
> the night based on the appearance in our authoritative database of that
> user's identity, and need to create a ZFS filesystem and quota for that
> user. So, I need to be able to manipulate ZFS remotely via a programmatic
> API.
>
>> Active/passive only.  ZFS is not supported over pxfs and ZFS cannot be
>> mounted simultaneously from two different nodes.
> 
> That's what I thought, I'll have to get back to that SE. Makes me wonder as
> to the reliability of his other answers :).
> 
>> For most large file servers, people will split the file systems across
>> servers such that under normal circumstances, both nodes are providing
>> file service.  This implies two or more storage pools.
> 
> Again though, that would imply two different storage locations visible to
> the clients? I'd really rather avoid that. For example, with our current
> Samba implementation, a user can just connect to
> '\\files.csupomona.edu\<username>' to access their home directory or
> '\\files.csupomona.edu\<groupname>' to access a shared group directory.
> They don't need to worry on which physical server it resides or dete

Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-20 Thread Tim Spriggs
Andy Lubel wrote:
> On 9/20/07 3:49 PM, "Paul B. Henson" <[EMAIL PROTECTED]> wrote:
>
>   
>> On Thu, 20 Sep 2007, Richard Elling wrote:
>>
>> 
>> That would also be my preference, but if I were forced to use hardware
>> RAID, the additional loss of storage for ZFS redundancy would be painful.
>>
>> Would anyone happen to have any good recommendations for an enterprise
>> scale storage subsystem suitable for ZFS deployment? If I recall correctly,
>> the SE we spoke with recommended the StorageTek 6140 in a hardware raid
>> configuration, and evidently mistakenly claimed that Cluster would not work
>> with JBOD.
>> 
>
> I really have to disagree, we have 6120 and 6130's and if I had the option
> to actually plan out some storage I would have just bought a thumper.  You
> could probably buy 2 for the cost of that 6140.
>   

We are in a similar situation. It turns out that buying two thumpers is 
cheaper per TB than buying more shelves for an IBM N7600. I don't know 
about power/cooling considerations yet though.


Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-20 Thread Gary Mills
On Thu, Sep 20, 2007 at 12:49:29PM -0700, Paul B. Henson wrote:
> On Thu, 20 Sep 2007, Richard Elling wrote:
> 
> > 50,000 directories aren't a problem, unless you also need 50,000 quotas
> > and hence 50,000 file systems.  Such a large, single storage pool system
> > will be an outlier... significantly beyond what we have real world
> > experience with.
> 
> Hmm, I get a bit of a shiver down my spine at the prospect of deploying a
> critical central service in a relatively untested configuration 8-/. What
> is the maximum number of file systems in a given pool that has undergone
> some reasonable amount of real world deployment?

You should consider a Netapp filer.  It will do both NFS and CIFS,
supports disk quotas, and is highly reliable.  We use one for 30,000
students and 3000 employees.  Ours has never failed us.

-- 
-Gary Mills--Unix Support--U of M Academic Computing and Networking-


Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-20 Thread Dickon Hood
On Thu, Sep 20, 2007 at 16:22:45 -0500, Gary Mills wrote:

: You should consider a Netapp filer.  It will do both NFS and CIFS,
: supports disk quotas, and is highly reliable.  We use one for 30,000
: students and 3000 employees.  Ours has never failed us.

And they might only lightly sue you for contemplating zfs if you're
really, really lucky...

-- 
Dickon Hood

Due to digital rights management, my .sig is temporarily unavailable.
Normal service will be resumed as soon as possible.  We apologise for the
inconvenience in the meantime.

No virus was found in this outgoing message as I didn't bother looking.


Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-20 Thread Paul B. Henson
On Thu, 20 Sep 2007, James F. Hranicky wrote:

> This can be solved using an automounter as well.

Well, I'd say more "kludged around" than "solved" ;), but again unless
you've used DFS it might not seem that way.

It just seems rather involved, and relatively inefficient to continuously
be mounting/unmounting stuff all the time. One of the applications to be
deployed against the filesystem will be web service; I can't really
envision a web server with tens of thousands of NFS mounts coming and
going - it seems like a lot of overhead.

I might need to pursue a similar route though if I can't get one large
system to house everything in one place.

> Samba can be configured to map homes drives to /nfs/home/%u . Let samba use
> the automounter setup and it's just as transparent on the CIFS side.

I'm planning to use NFSv4 with strong authentication and authorization
throughout, and intend to run Samba directly on the file server itself,
accessing storage locally. I'm not sure that Samba would be able to acquire
local Kerberos credentials and switch between them for the users; without
that, access via NFSv4 isn't very doable.

> and due to the fact that snapshots counted toward ZFS quota, I decided

Yes, that does seem to remove a bit of their value for backup purposes. I
think they're planning to rectify that at some point in the future.


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-20 Thread Paul B. Henson
On Thu, 20 Sep 2007, Andy Lubel wrote:

> Looks like its completely scalable but your boot time may suffer the more
> you have. Just don't reboot :)

I'm not sure if it's accurate, but the SE we were meeting with claimed that
we could failover all of the filesystems to one half of the cluster, reboot
the other half, fail them back, reboot the first half, and have rebooted
both cluster members with no downtime. I guess as long as the active
cluster member does not fail during the potentially lengthy downtime of the
one rebooting.

> If it was so great why did IBM kill it?

I often daydreamed of a group of high-level IBM executives tied to chairs
next to a table filled with rubber hoses ;), for the sole purpose of
getting that answer.

I think they killed it because the market of technically knowledgeable and
capable people that were able to use it to its full capacity was relatively
limited, and the average IT shop was happy with Windoze :(.

> Did they have an alternative with the same functionality?

No, not really. Depending on your situation, they recommended
transitioning to GPFS or NFSv4, but neither really met the same needs as
DFS.


> I really have to disagree, we have 6120 and 6130's and if I had the option
> to actually plan out some storage I would have just bought a thumper.  You
> could probably buy 2 for the cost of that 6140.

Thumper = x4500, right? You can't really cluster the internal storage of an
x4500, so assuming high reliability/availability is a requirement, that
sort of rules that box out.


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-20 Thread Paul B. Henson
On Thu, 20 Sep 2007, Tim Spriggs wrote:

> We are in a similar situation. It turns out that buying two thumpers is
> cheaper per TB than buying more shelves for an IBM N7600. I don't know
> about power/cooling considerations yet though.

It's really a completely different class of storage though, right? I don't
know offhand what an IBM N7600 is, but presumably some type of SAN device -
one that can be connected simultaneously to multiple servers for clustering?

An x4500 looks great if you only want a bunch of storage with the
reliability/availability provided by a relatively fault-tolerant server.
But if you want to be able to withstand server failure, or continue to
provide service while having one server down for maintenance/patching, it
doesn't seem appropriate.


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-20 Thread Paul B. Henson
On Thu, 20 Sep 2007, Gary Mills wrote:

> You should consider a Netapp filer.  It will do both NFS and CIFS,
> supports disk quotas, and is highly reliable.  We use one for 30,000
> students and 3000 employees.  Ours has never failed us.

We had actually just finished evaluating Netapp before I started looking
into Solaris/ZFS. For a variety of reasons, it was not suitable to our
requirements.

One, for example, was that it did not support simultaneous operation in an
MIT Kerberos realm for NFS authentication while at the same time belonging
to an Active Directory domain for CIFS authentication. Their workaround was
to have the filer behave like an NT4 server rather than a Windows 2000+
server, which seemed pretty stupid. That also resulted in the filer
not supporting NTLMv2, which was unacceptable.

Another issue we had was with access control. Their approach to ACLs was
just flat out ridiculous. You had UNIX mode bits, NFSv4 ACLs, and CIFS
ACLs, all disjoint, and which one was actually being used and how they
interacted was extremely confusing and not even accurately documented. We
wanted to be able to have the exact same permissions applied whether via
NFSv4 or CIFS, and ideally allow changing permissions via either access
protocol. That simply wasn't going to happen with Netapp.

Their Kerberos implementation only supported DES, not 3DES or AES, and their
LDAP integration only supported the legacy posixGroup/memberUid attribute,
as opposed to the more modern groupOfNames/member attribute, for group
membership.

They have some type of remote management API, but it just wasn't very clean
IMHO.

As far as quotas, I was less than impressed with their implementation.


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-20 Thread Tim Spriggs
Paul B. Henson wrote:
> On Thu, 20 Sep 2007, Tim Spriggs wrote:
>
>   
>> We are in a similar situation. It turns out that buying two thumpers is
>> cheaper per TB than buying more shelves for an IBM N7600. I don't know
>> about power/cooling considerations yet though.
>> 
>
> It's really a completely different class of storage though, right? I don't
> know offhand what an IBM N7600 is, but presumably some type of SAN device?
> Which can be connected simultaneously to multiple servers for clustering?
>
> An x4500 looks great if you only want a bunch of storage with the
> reliability/availability provided by a relatively fault-tolerant server.
> But if you want to be able to withstand server failure, or continue to
> provide service while having one server down for maintenance/patching, it
> doesn't seem appropriate.
>
>
>   

It's an IBM re-branded NetApp, which we are using for NFS and
iSCSI.


Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS serverproviding NFSv4/CIFS

2007-09-20 Thread Paul B. Henson
On Thu, 20 Sep 2007, Dickon Hood wrote:

> On Thu, Sep 20, 2007 at 16:22:45 -0500, Gary Mills wrote:
>
> : You should consider a Netapp filer.  It will do both NFS and CIFS,
> : supports disk quotas, and is highly reliable.  We use one for 30,000
> : students and 3000 employees.  Ours has never failed us.
>
> And they might only lightly sue you for contemplating zfs if you're
> really, really lucky...

Don't even get me started on the subject of software patents ;)...


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-20 Thread Chris Kirby
Paul B. Henson wrote:
> On Thu, 20 Sep 2007, James F. Hranicky wrote:
> 
> 
>>and due to the fact that snapshots counted toward ZFS quota, I decided
> 
> 
> Yes, that does seem to remove a bit of their value for backup purposes. I
> think they're planning to rectify that at some point in the future.

We're adding a style of quota that only includes the bytes
referenced by the active fs.  Also, there will be a matching
style for reservations.
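
(These later shipped as the refquota and refreservation properties; a usage
sketch, with an invented dataset name:

    # zfs set refquota=10g tank/home/jdoe
    # zfs set refreservation=10g tank/home/jdoe

so snapshot space no longer counts against the user's limit.)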

"some point in the future" is very soon (weeks).  :-)

-Chris


Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-20 Thread Paul B. Henson
On Thu, 20 Sep 2007, Tim Spriggs wrote:

> It's an IBM re-branded NetApp which can which we are using for NFS and
> iSCSI.

Ah, I see.

Is it comparable storage though? Does it use SATA drives similar to the
x4500, or more expensive/higher performance FC drives? Is it one of the
models that allows connecting dual clustered heads and failing over the
storage between them?

I agree the x4500 is a sweet looking box, but when making price comparisons
sometimes it's more than just the raw storage... I wish I could just drop
in a couple of x4500's and not have to worry about the complexity of
clustering ...



-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-20 Thread Paul B. Henson
On Thu, 20 Sep 2007, Chris Kirby wrote:

> We're adding a style of quota that only includes the bytes referenced by
> the active fs.  Also, there will be a matching style for reservations.
>
> "some point in the future" is very soon (weeks).  :-)

I don't think my management will let me run Solaris Express on a production
server ;) - how does that translate into availability in a
released/supported version? Would that be something released as a patch to
the just-released U4, or delayed until the next complete update
release?


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] ZFS Solaris 10 Update 4 Patches

2007-09-20 Thread John-Paul Drawneek
Yep.

But it said that the pools were up to date, with the system on version 3.

zpool upgrade says the system just has version 3.

Also, patch 120272-12, which 120011-14 depends on, has been pulled - yay.
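
(A quick way to see what the running bits actually support: "zpool upgrade -v"
lists every on-disk version the installed software knows about, so if it tops
out at 3, the patch didn't deliver the newer ZFS support.)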
 
 


Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-20 Thread Tim Spriggs
Paul B. Henson wrote:
> Is it comparable storage though? Does it use SATA drives similar to the
> x4500, or more expensive/higher performance FC drives? Is it one of the
> models that allows connecting dual clustered heads and failing over the
> storage between them?
>
> I agree the x4500 is a sweet looking box, but when making price comparisons
> sometimes it's more than just the raw storage... I wish I could just drop
> in a couple of x4500's and not have to worry about the complexity of
> clustering ...
>   

It is configured with SATA drives and does support failover for NFS. 
iSCSI is another story at the moment.

The x4500 is very sweet, and the only thing stopping us from buying two
instead of another shelf is the fact that we have lost pools on Sol10u3
servers and there is no easy way of making two pools redundant (i.e., the
complexity of clustering). Simply sending incremental snapshots is not a
viable option.
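
(For reference, the replication being dismissed here is the zfs send/receive
loop - a sketch, with dataset and host names invented:

    zfs send -i tank/fs@yesterday tank/fs@today | \
        ssh standby zfs receive -F backup/fs

which only gets you to the last snapshot taken, not true redundancy.)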

The pools we lost were pools on iSCSI (in a mirrored config) and they 
were mostly lost on zpool import/export. The lack of a recovery 
mechanism really limits how much faith we can put into our data on ZFS. 
It's safe as long as the pool is safe... but we've lost multiple pools.

-Tim


Re: [zfs-discuss] ZFS Solaris 10 Update 4 Patches

2007-09-20 Thread Rob Windsor
John-Paul Drawneek wrote:
> Yep.
> 
> But it said that the pools were up to date, with the system on version 3.
> 
> zpool upgrade says the system just has version 3.
> 
> Also, patch 120272-12, which 120011-14 depends on, has been pulled - yay.

Yeah, the listed reason -- " corrupts the snmpd.conf file causing 
the snmp services not to come up"

That never stopped them from releasing sendmail patches before.  ;)

I encountered this problem, diagnosed it, and fixed it within 10 minutes.
*shrug*

Rob++
-- 
Internet: [EMAIL PROTECTED] __o
Life: [EMAIL PROTECTED]_`\<,_
(_)/ (_)
"They couldn't hit an elephant at this distance."
   -- Major General John Sedgwick


Re: [zfs-discuss] "zoneadm clone" doesn't support ZFS snapshots in

2007-09-20 Thread Matthew Flanagan
Mike,

I followed your procedure for cloning zones and it worked well up until
yesterday, when I tried applying the S10U4 kernel patch 120011-14 and it
wouldn't apply because I had my zones on zfs :(

I'm still figuring out how to fix this other than moving all of my zones onto 
UFS.

Anyone got any tips?

matthew
 
 


Re: [zfs-discuss] "zoneadm clone" doesn't support ZFS snapshots in

2007-09-20 Thread grant beattie
Matthew Flanagan wrote:
> Mike,
>
> I followed your procedure for cloning zones and it worked well up until
> yesterday, when I tried applying the S10U4 kernel patch 120011-14 and it
> wouldn't apply because I had my zones on zfs :(
>
> I'm still figuring out how to fix this other than moving all of my zones onto 
> UFS.
>   

I don't have any advice, unfortunately, but I do know that in my case
putting zones on UFS is simply not an option. There must be a way,
considering there is nothing in the documentation to suggest that zones
on ZFS are not supported.

One question, though: why does patchadd care about filesystems in the
first place? What if I put my zones on VxFS, or QFS? I don't see why it
should make any difference to patchadd. Live upgrade is obviously
another kettle of fish entirely, though.

grant.



Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-20 Thread eric kustarz

On Sep 20, 2007, at 6:46 PM, Paul B. Henson wrote:

> On Thu, 20 Sep 2007, Gary Mills wrote:
>
>> You should consider a Netapp filer.  It will do both NFS and CIFS,
>> supports disk quotas, and is highly reliable.  We use one for 30,000
>> students and 3000 employees.  Ours has never failed us.
>
> We had actually just finished evaluating Netapp before I started  
> looking
> into Solaris/ZFS. For a variety of reasons, it was not suitable to our
> requirements.
>



> As far as quotas, I was less than impressed with their implementation.

Would you mind going into more details here?

eric