Re: [zfs-discuss] Finding disks [was: # disks per vdev]

2011-06-21 Thread Lanky Doodle
Thanks for all the replies. I have a pretty good idea how the disk enclosure assigns slot locations, so I should be OK. One last thing - I see that Supermicro has just released a newer version of the card I mentioned in the first post that supports SATA 6Gbps. From what I can see it uses the Marv
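
One low-tech way to double-check a slot mapping (a sketch; the device name below is a placeholder) is to read each drive's serial number from the OS and match it against the serial printed on the drive carrier or the enclosure's slot map:

    # Solaris: list per-device details, including serial numbers
    iostat -En | egrep 'Soft Errors|Serial No'
    # or query one suspect device directly (c0t0d0 is hypothetical)
    iostat -En c0t0d0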

Re: [zfs-discuss] Zpool with data errors

2011-06-21 Thread Tomas Ögren
On 21 June, 2011 - Todd Urie sent me these 5,9K bytes:

> I have a zpool that shows the following from a zpool status -v
>
> brsnnfs0104 [/var/spool/cron/scripts]# zpool status -v ABC0101
>   pool: ABC0101
>  state: ONLINE
> status: One or more devices has experienced an error resulting in data
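
For files flagged by zpool status -v, the usual recovery path (a sketch using the standard zpool commands; the pool name is taken from the post) is to restore the listed files from backup and then reset the error state:

    zpool status -v ABC0101   # lists the files with unrecoverable errors
    # restore the listed files from backup, then:
    zpool clear ABC0101       # reset the error counters
    zpool scrub ABC0101       # re-verify the whole pool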

Re: [zfs-discuss] Zpool with data errors

2011-06-21 Thread Remco Lengers
Todd, is that ZFS on top of VxVM? Are those volumes okay? I wonder if this is really a sensible combination? ..Remco

On 6/21/11 7:36 AM, Todd Urie wrote:
> I have a zpool that shows the following from a zpool status -v
>
> brsnnfs0104 [/var/spool/cron/scripts]# zpool status -v ABC0101
> p

Re: [zfs-discuss] Zpool with data errors

2011-06-21 Thread Todd Urie
The volumes sit on HDS SAN. The only reason for the volumes is to prevent inadvertent import of the zpool on two nodes of a cluster simultaneously. Since we're on SAN with RAID internally, it didn't seem like we would need ZFS to provide that redundancy also. On Tue, Jun 21, 2011 at 4:17 AM, Remco Le
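
If the goal is to keep array-level RAID but still let ZFS repair (not just detect) bad blocks, one option (a sketch; the LUN device names are hypothetical) is to mirror two LUNs presented from separate array groups:

    # two SAN LUNs, mirrored at the ZFS level so checksums can self-heal
    zpool create ABC0101 mirror c4t1d0 c5t1d0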

Re: [zfs-discuss] Zpool with data errors

2011-06-21 Thread Toby Thain
On 21/06/11 7:54 AM, Todd Urie wrote:
> The volumes sit on HDS SAN. The only reason for the volumes is to
> prevent inadvertent import of the zpool on two nodes of a cluster
> simultaneously. Since we're on SAN with RAID internally, it didn't seem
> like we would need ZFS to provide that redundancy also.

Re: [zfs-discuss] Zpool with data errors

2011-06-21 Thread Marty Scholes
> it didn't seem like we would need ZFS to provide that redundancy also.

There was a time when I fell for this line of reasoning too. The problem (if you want to call it that) with ZFS is that it will show you, front and center, the corruption taking place in your stack.

> Since we're on SAN with R
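
Even on a single SAN LUN, ZFS can be given enough redundancy to self-heal checksum errors via the copies property (a sketch; the dataset name is hypothetical, and this roughly doubles the space the dataset consumes):

    # keep two copies of every block, so a bad read can be repaired
    zfs set copies=2 ABC0101/data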

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-21 Thread Bob Friesenhahn
On Sun, 19 Jun 2011, Richard Elling wrote:
> Yes. I've been looking at what the value of zfs_vdev_max_pending should
> be. The old value was 35 (a guess, but a really bad guess) and the new
> value is 10 (another guess, but a better guess). I observe that data
> from a fast, modern

I am still using 5
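
For reference, this tunable is typically set in /etc/system (the value 10 below is the new default mentioned above, not a recommendation):

    set zfs:zfs_vdev_max_pending = 10

It can also be adjusted on a live system with mdb (0t marks a decimal value):

    echo zfs_vdev_max_pending/W0t10 | mdb -kw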

Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-21 Thread Dave U . Random
Hello Jim! I understood ZFS doesn't like slices, but from your reply maybe I should reconsider. I have a few older servers with 4 bays x 73G. If I make a root mirror pool and put swap on the other 2 as you suggest, then I would have about 63G x 4 left over. If so, then I am back to wondering what to do a
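
One way to use that leftover space (a sketch with hypothetical slice names; note that ZFS only enables a disk's write cache when it is given the whole disk, so slice-based pools give up some performance) is a raidz over the large slice on all four disks:

    # s0 holds the root mirror and swap; s3 is the leftover ~63G per disk
    zpool create tank raidz c0t0d0s3 c0t1d0s3 c0t2d0s3 c0t3d0s3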

Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-21 Thread Nomen Nescio
Hello Marty!

> With four drives you could also make a RAIDZ3 set, allowing you to have
> the lowest usable space, poorest performance and worst resilver times
> possible.

That's not funny. I was actually considering this :p But you have to admit, it would probably be somewhat reliable!
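
For what it's worth, four disks is the minimum raidz3 width, and the resulting pool (sketched below with hypothetical device names) has the usable capacity of a single disk:

    # 3 parity + 1 data: survives any three disk failures
    zpool create tank raidz3 c0t0d0 c0t1d0 c0t2d0 c0t3d0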

Re: [zfs-discuss] Zpool with data errors

2011-06-21 Thread Sami Ketola
On Jun 21, 2011, at 2:54 PM, Todd Urie wrote:
> The volumes sit on HDS SAN. The only reason for the volumes is to prevent
> inadvertent import of the zpool on two nodes of a cluster simultaneously.
> Since we're on SAN with RAID internally, it didn't seem like we would need
> ZFS to provide that redundancy also.

Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-21 Thread Tomas Ögren
On 21 June, 2011 - Nomen Nescio sent me these 0,4K bytes:

> Hello Marty!
>
> > With four drives you could also make a RAIDZ3 set, allowing you to have
> > the lowest usable space, poorest performance and worst resilver times
> > possible.
>
> That's not funny. I was actually considering this :p

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-21 Thread Richard Elling
On Jun 21, 2011, at 8:18 AM, Garrett D'Amore wrote:
>>
>> Does that also go through disksort? Disksort doesn't seem to have any
>> concept of priorities (but I haven't looked in detail where it plugs in
>> to the whole framework).
>>
>>> So it might make better sense for ZFS to keep the disk qu

[zfs-discuss] dskinfo utility

2011-06-21 Thread Henrik Johansson
Hello, I got tired of gathering disk information from different places when working with Solaris disks, so I wrote a small utility for summarizing the most commonly used information. It is especially tricky to work with a large set of SAN disks using MPxIO, where you do not even see the logical unit number
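
For context, the information such a utility consolidates is normally scattered across several commands (a sketch of the usual sources; the post does not show dskinfo's own usage):

    echo | format      # device names, vendor and size, non-interactively
    iostat -En         # serial numbers and error counters per device
    mpathadm list lu   # MPxIO logical units and operational path counts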