Re: [zfs-discuss] ZFS, CIFS, slow write speed

2008-04-30 Thread Simon Breden
Hi Rick,

OK, thanks for clarifying.

As it seems there are different devices with (1) mixed-speed NICs and 
(2) mixed-category cabling in your setup, I'll simplify things by saying 
that if you want much faster speeds, then I think you'll need to (1) use at 
least Cat 5e cables between all devices talking on your LAN, (2) use Gigabit 
NICs throughout, and (3) confirm the negotiated NIC speed before performing 
speed tests. Then, assuming your disks are reasonably fast, you should be 
able to get around 40+ MBytes/sec sustained throughput over CIFS, using one 
NIC on each box, with your Gigabit switch.
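
For point (3), you can check on the Solaris box what actually got negotiated 
with something like the following (which command is available depends on the 
build; the output includes speed and duplex columns):

  dladm show-dev       (Solaris 10 and older Nevada builds)
  dladm show-phys      (more recent Nevada builds)

If any link in the path shows 100 Mb/s or half duplex, that alone will cap 
your CIFS throughput well below what the disks can deliver.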

Hope that helps.

Simon
 
 


Re: [zfs-discuss] ZFS, CIFS, slow write speed

2008-04-30 Thread dh
Hello eschrock,

I'm a newbie on Solaris; could you tell me how I can get/install build 89 of 
Nevada?

Fabrice.
 
 


Re: [zfs-discuss] Metadata corrupted

2008-04-30 Thread Łukasz
Did you see http://www.opensolaris.org/jive/thread.jspa?messageID=220125

I managed to recover my lost data with simple mdb commands.
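
(A rough illustration of the kind of low-level poking involved, before anyone 
tries it blind: the device path is only an example, and the exact mdb steps 
depend on what is actually corrupted.)

  zdb -l /dev/rdsk/c0t1d0s0        # dump the ZFS labels/uberblocks on a vdev
  echo "::spa -v" | mdb -k         # look at the in-kernel pool/vdev state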

--Lukas
 
 


Re: [zfs-discuss] ZFS data recovery

2008-04-30 Thread Łukasz
> Hi There,
> 
> Is there any chance you could go into a little more
> detail, perhaps even document the procedure, for the
> benefit of others experiencing a similar problem?
I have some spare time this weekend and will try to give more details.
 
 


Re: [zfs-discuss] share zfs hierarchy over nfs

2008-04-30 Thread Spencer Shepler

On Apr 29, 2008, at 9:35 PM, Tim Wood wrote:

> Hi,
> I have a pool /zfs01 with two sub-filesystems, /zfs01/rep1 and
> /zfs01/rep2.  I used 'zfs share' to make all of these mountable
> over NFS, but clients have to mount either rep1 or rep2
> individually.  If I try to mount /zfs01 it shows directories for
> rep1 and rep2, but none of their contents.
>
> On a Linux machine I think I'd have to set the no_subtree_check
> flag in /etc/exports to let an NFS mount move through the
> different exports, but I'm just beginning with Solaris, so I'm not
> sure what to do here.
>
> I found this post in the forum:
> http://opensolaris.org/jive/thread.jspa?messageID=169354
>
> but that makes it sound like this issue was resolved by changing
> the NFS client behavior in Solaris.  Since my NFS client machines
> are going to be Linux machines, that doesn't help me any.

My understanding is that the Linux client has the same
capabilities as the Solaris client, in that it can
traverse server-side mount points dynamically.
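
(If it helps, a minimal sketch of what that looks like in practice; the 
server name and mount point are just examples, and it assumes a reasonably 
recent NFSv4-capable Linux client:)

  # on the Solaris server: share the whole hierarchy
  zfs set sharenfs=on zfs01

  # on the Linux client: one NFSv4 mount of the top level; the client
  # should then cross into rep1 and rep2 on access
  mount -t nfs4 zfs-server:/zfs01 /mnt/zfs01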

Spencer



Re: [zfs-discuss] ZFS, CIFS, slow write speed

2008-04-30 Thread michael schuster
dh wrote:
> Hello eschrock,
> 
> I'm a newbe on solaris, would you tell me how I can get/install build 89 of 
> nevada?
> 
> Fabrice.

Hi Fabrice,

I think a good place to start is http://www.opensolaris.org/os/newbies/ - I 
don't know whether they give you access to build 89 yet, but you can 
certainly get some practice so that when you get b89, you know what to do.

(and no, eschrock is not a pseudonym of mine ;-)

HTH
Michael
-- 
Michael Schuster http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'


Re: [zfs-discuss] share zfs hierarchy over nfs

2008-04-30 Thread Bob Friesenhahn
On Tue, 29 Apr 2008, Jonathan Loran wrote:
>>
> Au contraire, Bob.  I'm not going to boost Linux, but in this department
> they've tried to do it right.  If you use Linux autofs V4 or higher, you
> can use Sun-style maps (except there are no direct maps in V4; you need V5
> for direct maps).  For our home directories, which use an indirect map,
> we just use the Solaris map, thus:
>
> auto_home:
> *    zfs-server:/home/&
>
> Sorry to be so off (ZFS) topic.

I am glad to hear that the Linux automounter has moved forward since 
my experience with it a couple of years ago, when indirect maps were 
documented but also documented not to actually work. :-)

I don't think that this discussion is off-topic.  Filesystems are so 
easy to create with ZFS that it has become popular to create per-user 
filesystems.  It would be useful if the various automounter 
incantations needed to make everything work appeared in a ZFS-related 
wiki somewhere.
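
As a starting point for such a wiki entry, the Linux autofs side of a 
per-user setup can look roughly like this (server name, paths and options 
are only examples and will vary by site):

  # /etc/auto.master
  /home   /etc/auto.home

  # /etc/auto.home (Sun-style indirect map; '&' expands to the lookup key)
  *   -fstype=nfs,rw   zfs-server:/home/&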

This can be an embarrassing situation for the system administrator 
who thinks that everything is working fine because he tested with 
Solaris 10 clients.  So he switches all the home directories to ZFS 
per-user filesystems overnight.  Imagine the frustration and embarrassment 
when that poor system administrator returns the next day and finds that 
many users cannot access their home directories!

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] Thumper / X4500 marvell driver issues

2008-04-30 Thread Doug
When we installed the Marvell driver patch 125205-07 on our X4500 a few months 
ago and it started crashing, Sun support just told us to back out that patch.  
The system has been stable since then.

We are still running Solaris 10 11/06 on that system.  Is there an advantage to 
using 125205-07 and the IDR you mention compared to just not using NCQ?  Better 
performance?  If so, how much better?
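
(By "just not using NCQ" I mean the usual /etc/system workaround I've seen 
mentioned on the list, roughly the line below followed by a reboot; please 
correct me if there's a better-supported way:)

  set sata:sata_max_queue_depth = 0x1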

Thanks
 
 


Re: [zfs-discuss] ? ZFS boot in nv88 on SPARC ?

2008-04-30 Thread Cindy . Swearingen

Hi Ulrich,

The updated lucreate.1m man page was accidentally integrated into
build 88.

If you review the build 88 instructions, here:

http://opensolaris.org/os/community/zfs/boot/

You'll see that we're recommending patience until the install/upgrade
support is integrated.

If you are running the transitional ZFS boot support, then this might
apply to you:

Systems that already have ZFS root file systems can be bfu'd with this 
release, but bfu does not convert the legacy mounts (of /, /var, and so 
on) to ZFS mounts. Backwards bfu to releases that don't support ZFS boot 
is prohibited.

Cindy

Ulrich Graef wrote:
> Hi,
> 
> ZFS won't boot on my machine.
> 
> I discovered that the lu man pages are there, but not
> the new binaries.
> So I tried to set up ZFS boot manually:
> 
> 
>> zpool create -f Root c0t1d0s0
>>
>> lucreate -n nv88_zfs -A "nv88 finally on ZFS"  -c nv88_ufs -p Root -x /zones
>>
>> zpool set bootfs=Root/nv88_zfs Root
>>
>> ufsdump 0f - / | ( cd /Root/nv88_zfs; ufsrestore -rf - ; )
>>
>> eeprom boot-device=disk1
>>
>> Correct vfstab of the boot environment to:
>>Root/nv88_zfs   -   /   zfs -   no  -
>>
>> zfs set mountpoint=legacy Root/nv88_zfs
>>
>> mount -F zfs Root/nv88_zfs /mnt
>>
>> bootadm update-archive -R /mnt
>>
>> umount /mnt
>>
>> installboot /usr/platform/SUNW,Ultra-60/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
> 
> 
> When I try to boot I get the message in the ok prompt:
> 
> Can't mount root
> Fast Data Access MMU Miss
> 
> Same with: boot disk1 -Z Root/nv88_zfs
> 
> What is missing in the setup?
> Unfortunately opensolaris contains only the preliminary setup for x86,
> so it does not help me...
> 
> Regards,
> 
>   Ulrich
> 


[zfs-discuss] Issue with simultaneous IO to lots of ZFS pools

2008-04-30 Thread Chris Siebenmann
 I have a test system with 132 (small) ZFS pools[*], as part of our
work to validate a new ZFS-based fileserver environment. In testing,
it appears that we can produce situations that will run the kernel out
of memory, or at least out of some resource such that things start
complaining 'bash: fork: Resource temporarily unavailable'. Sometimes
the system locks up solid.

 I've found at least two situations that reliably do this:
- trying to 'zpool scrub' each pool in sequence (waiting for each scrub
  to complete before starting the next one).
- starting simultaneous sequential read IO from all pools from a NFS client.
  (trying to do the same IO from the server basically kills the server
  entirely.)

 If I aggregate the same disk space into 12 pools instead of 132, the
same IO load does not kill the system.

 The ZFS machine is an X2100 M2 with 2GB of physical memory and 1GB
of swap, running 64-bit Solaris 10 U4 with an almost current set of
patches; it gets the storage from another machine via ISCSI. The pools
are non-redundant, with each vdev being a whole ISCSI LUN.

 Is this a known issue (or issues)? If this isn't a known issue, does
anyone have pointers to good tools to trace down what might be happening
and where memory is disappearing and so on? Does the system plain need
more memory for this number of pools and if so, does anyone know how
much?

 Thanks in advance.

(I was pointed to mdb -k's '::kmastat' by some people on the OpenSolaris
IRC channel but I haven't spotted anything particularly enlightening in
its output, and I can't run it once the system has gone over the edge.)
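
(For reference, the non-interactive form of that check, plus ::memstat for a 
coarser page-level summary, looks roughly like this, assuming your mdb has 
both dcmds:)

  echo "::kmastat" | mdb -k      # kernel memory allocator stats, per cache
  echo "::memstat" | mdb -k      # summary of where physical memory is going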

- cks
[*: we have an outstanding uncertainty over how many ZFS pools a
single system can sensibly support, so testing something larger
than we'd use in production seemed sensible.]


Re: [zfs-discuss] Issue with simultaneous IO to lots of ZFS pools

2008-04-30 Thread Bill Moore
A silly question:  Why are you using 132 ZFS pools as opposed to a
single ZFS pool with 132 ZFS filesystems?


--Bill

On Wed, Apr 30, 2008 at 01:53:32PM -0400, Chris Siebenmann wrote:
>  I have a test system with 132 (small) ZFS pools[*], as part of our
> work to validate a new ZFS-based fileserver environment. In testing,
> it appears that we can produce situations that will run the kernel out
> of memory, or at least out of some resource such that things start
> complaining 'bash: fork: Resource temporarily unavailable'. Sometimes
> the system locks up solid.
> 
>  I've found at least two situations that reliably do this:
> - trying to 'zpool scrub' each pool in sequence (waiting for each scrub
>   to complete before starting the next one).
> - starting simultaneous sequential read IO from all pools from a NFS client.
>   (trying to do the same IO from the server basically kills the server
>   entirely.)
> 
>  If I aggregate the same disk space into 12 pools instead of 132, the
> same IO load does not kill the system.
> 
>  The ZFS machine is an X2100 M2 with 2GB of physical memory and 1GB
> of swap, running 64-bit Solaris 10 U4 with an almost current set of
> patches; it gets the storage from another machine via ISCSI. The pools
> are non-redundant, with each vdev being a whole ISCSI LUN.
> 
>  Is this a known issue (or issues)? If this isn't a known issue, does
> anyone have pointers to good tools to trace down what might be happening
> and where memory is disappearing and so on? Does the system plain need
> more memory for this number of pools and if so, does anyone know how
> much?
> 
>  Thanks in advance.
> 
> (I was pointed to mdb -k's '::kmastat' by some people on the OpenSolaris
> IRC channel but I haven't spotted anything particularly enlightening in
> its output, and I can't run it once the system has gone over the edge.)
> 
>   - cks
> [*: we have an outstanding uncertainty over how many ZFS pools a
> single system can sensibly support, so testing something larger
> than we'd use in production seemed sensible.]


Re: [zfs-discuss] Issue with simultaneous IO to lots of ZFS pools

2008-04-30 Thread Jeff Bonwick
Indeed, things should be simpler with fewer (generally one) pool.

That said, I suspect I know the reason for the particular problem
you're seeing: we currently do a bit too much vdev-level caching.
Each vdev can have up to 10MB of cache.  With 132 pools, even if
each pool is just a single iSCSI device, that's 1.32GB of cache.

We need to fix this, obviously.  In the interim, you might try
setting zfs_vdev_cache_size to some smaller value, like 1MB.
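
In case it's useful, that would be roughly one of the following (the 1MB 
value is just the example above; double-check the variable's width on your 
build before writing it on a live kernel):

  echo "zfs_vdev_cache_size/W 0t1048576" | mdb -kw    # live, no reboot
  set zfs:zfs_vdev_cache_size = 0x100000              # in /etc/system, reboot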

Still, I'm curious -- why lots of pools?  Administration would
be simpler with a single pool containing many filesystems.

Jeff

On Wed, Apr 30, 2008 at 11:48:07AM -0700, Bill Moore wrote:
> A silly question:  Why are you using 132 ZFS pools as opposed to a
> single ZFS pool with 132 ZFS filesystems?
> 
> 
> --Bill
> 
> On Wed, Apr 30, 2008 at 01:53:32PM -0400, Chris Siebenmann wrote:
> >  I have a test system with 132 (small) ZFS pools[*], as part of our
> > work to validate a new ZFS-based fileserver environment. In testing,
> > it appears that we can produce situations that will run the kernel out
> > of memory, or at least out of some resource such that things start
> > complaining 'bash: fork: Resource temporarily unavailable'. Sometimes
> > the system locks up solid.
> > 
> >  I've found at least two situations that reliably do this:
> > - trying to 'zpool scrub' each pool in sequence (waiting for each scrub
> >   to complete before starting the next one).
> > - starting simultaneous sequential read IO from all pools from a NFS client.
> >   (trying to do the same IO from the server basically kills the server
> >   entirely.)
> > 
> >  If I aggregate the same disk space into 12 pools instead of 132, the
> > same IO load does not kill the system.
> > 
> >  The ZFS machine is an X2100 M2 with 2GB of physical memory and 1GB
> > of swap, running 64-bit Solaris 10 U4 with an almost current set of
> > patches; it gets the storage from another machine via ISCSI. The pools
> > are non-redundant, with each vdev being a whole ISCSI LUN.
> > 
> >  Is this a known issue (or issues)? If this isn't a known issue, does
> > anyone have pointers to good tools to trace down what might be happening
> > and where memory is disappearing and so on? Does the system plain need
> > more memory for this number of pools and if so, does anyone know how
> > much?
> > 
> >  Thanks in advance.
> > 
> > (I was pointed to mdb -k's '::kmastat' by some people on the OpenSolaris
> > IRC channel but I haven't spotted anything particularly enlightening in
> > its output, and I can't run it once the system has gone over the edge.)
> > 
> > - cks
> > [*: we have an outstanding uncertainty over how many ZFS pools a
> > single system can sensibly support, so testing something larger
> > than we'd use in production seemed sensible.]


Re: [zfs-discuss] Issue with simultaneous IO to lots of ZFS pools

2008-04-30 Thread Chris Siebenmann
| Still, I'm curious -- why lots of pools?  Administration would be
| simpler with a single pool containing many filesystems.

 The short answer is that it is politically and administratively easier
to use (at least) one pool per storage-buying group in our environment.
This got discussed in more detail in the 'How many ZFS pools is it
sensible to use on a single server' zfs-discuss thread I started earlier
this month[*].

(Trying to answer the question myself is the reason I wound up setting
up 132 pools on my test system and discovering this issue.)

- cks
[*: http://opensolaris.org/jive/thread.jspa?threadID=56802]


Re: [zfs-discuss] ? ZFS boot in nv88 on SPARC ?

2008-04-30 Thread Albert Lee

On Tue, 2008-04-29 at 15:02 +0200, Ulrich Graef wrote:
> Hi,
> 
> ZFS won't boot on my machine.
> 
> I discovered that the lu man pages are there, but not
> the new binaries.
> So I tried to set up ZFS boot manually:
> 
> >  zpool create -f Root c0t1d0s0
> > 
> >  lucreate -n nv88_zfs -A "nv88 finally on ZFS"  -c nv88_ufs -p Root -x 
> > /zones
> > 
> >  zpool set bootfs=Root/nv88_zfs Root
> > 
> >  ufsdump 0f - / | ( cd /Root/nv88_zfs; ufsrestore -rf - ; )
> > 
> >  eeprom boot-device=disk1
> > 
> >  Correct vfstab of the boot environment to:
> > Root/nv88_zfs   -   /   zfs -   no  -
> > 
> >  zfs set mountpoint=legacy Root/nv88_zfs
> > 
> >  mount -F zfs Root/nv88_zfs /mnt
> > 
> >  bootadm update-archive -R /mnt
> > 
> >  umount /mnt
> > 
> >  installboot /usr/platform/SUNW,Ultra-60/lib/fs/zfs/bootblk 
> > /dev/rdsk/c0t1d0s0
> 
> When I try to boot I get the message in the ok prompt:
> 
> Can't mount root
> Fast Data Access MMU Miss
> 
> Same with: boot disk1 -Z Root/nv88_zfs
> 
> What is missing in the setup?
> Unfortunately opensolaris contains only the preliminary setup for x86,
> so it does not help me...
> 
> Regards,
> 
>   Ulrich
> 

Does newboot automatically construct the SPARC-specific "zfs-bootobj"
property from the "bootfs" pool property?

Also make sure you didn't export the pool. The pool must be imported,
and /etc/zfs/zpool.cache must be in sync between the running system and the
ZFS root.
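
(Roughly, using the pool and BE names from your mail, and assuming the new 
root is still reachable under /mnt:)

  zpool status Root          # pool should be imported and ONLINE
  zpool get bootfs Root      # should show Root/nv88_zfs

  mount -F zfs Root/nv88_zfs /mnt
  cp /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache
  umount /mnt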

-Albert
