Re: [zfs-discuss] Running on Dell hardware?

2010-11-01 Thread Brian Kolaci
I've been having the same problems, and it appears to be from a remote monitoring app that calls zpool status and/or zfs list. I've also found problems with PERC and I'm finally replacing the PERC cards with SAS5/E controllers (which are much cheaper anyway). Every time I reboot, the PERC tel

Re: [zfs-discuss] hot spare remains in use

2010-10-04 Thread Brian Kolaci
zpool clear pool2

> I would use fmdump -eV to see what's going on with c10t11d0.
>
> Thanks,
>
> Cindy
>
> On 10/04/10 07:47, Brian Kolaci wrote:
>> Hi,
>> I had a hot spare used to replace a failed drive, but then the drive appears
>> to be fine

[zfs-discuss] hot spare remains in use

2010-10-04 Thread Brian Kolaci
Hi, I had a hot spare used to replace a failed drive, but then the drive appears to be fine anyway. After clearing the error it shows that the drive was resilvered, but keeps the spare in use.

zpool status pool2
  pool: pool2
 state: ONLINE
 scrub: none requested
config:

        NAME        ST
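
A hedged sketch of the usual way to return a spare to the AVAIL list once the original drive has resilvered and checks out clean. Only c10t11d0 appears in the thread; the spare name c10t15d0 is illustrative:

```shell
# After the original disk has resilvered and checks out clean, detach the
# spare explicitly; ZFS does not always return it to AVAIL on its own.
zpool clear pool2 c10t11d0     # clear error counters on the original disk
zpool detach pool2 c10t15d0    # detach the in-use hot spare (name is a guess)
zpool status pool2             # the spare should now show as AVAIL again
```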

Re: [zfs-discuss] ZFS with EMC PowerPath

2010-08-10 Thread Brian Kolaci
/importing the pool, even after
> the upgrade. Format sees both the pseudo and physical/native device names for both paths. I can provide an example that he did today.
>
> Thanks,
>
> Cindy
> On 08/09/10 09:55, Brian Kolaci wrote:
>> On some machines running PowerPath,

[zfs-discuss] ZFS with EMC PowerPath

2010-08-09 Thread Brian Kolaci
On some machines running PowerPath, there are sometimes issues after an update/upgrade of the PowerPath software. Sometimes the pseudo devices get remapped and change names. ZFS appears to handle it OK; however, sometimes it then references half native device names and half the emcpower pseudo d
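
One workaround often suggested for this situation, sketched under the assumption that the pool can tolerate a brief outage: export the pool and re-import it with `-d` pointed at a directory containing only the emcpower pseudo devices, so the labels are re-recorded under the pseudo names. The pool name `tank` and the device paths are placeholders to verify on your host:

```shell
# Sketch: coerce the pool's labels to record the emcpower pseudo names.
zpool export tank
mkdir /tmp/emcdev
ln -s /dev/dsk/emcpower*c /tmp/emcdev/   # expose only the pseudo nodes
zpool import -d /tmp/emcdev tank
zpool status tank                        # should now list emcpowerNc devices
```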

Re: [zfs-discuss] pool wide corruption, "Bad exchange descriptor"

2010-07-07 Thread Brian Kolaci
On 7/6/2010 10:37 AM, Victor Latushkin wrote:
On Jul 6, 2010, at 6:30 PM, Brian Kolaci wrote:
Well, I see no takers or even a hint...

I've been playing with zdb to try to examine the pool, but I get:

# zdb -b pool4_green
zdb: can't open pool4_green: Bad exchange descriptor

Re: [zfs-discuss] pool wide corruption, "Bad exchange descriptor"

2010-07-06 Thread Brian Kolaci
s in the logs and it just "disappeared" without a trace. The only logs are from subsequent reboots where it says a ZFS pool failed to open. It does not give me a warm & fuzzy about using ZFS as I've relied on it heavily in the past 5 years. Any advice would be well appreciate

[zfs-discuss] pool wide corruption, "Bad exchange descriptor"

2010-07-02 Thread Brian Kolaci
I've recently acquired some storage and have been trying to copy data from a remote data center to hold backup data. The copies had been going for weeks, with about 600GB transferred so far, and then I noticed the throughput on the router stopped. I see a pool disappeared. # zpool status -x
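
When a pool drops out mid-transfer like this, the first-response commands are typically the following (a hedged sequence; on this pool the import fails with the same "Bad exchange descriptor" error the later replies investigate with zdb):

```shell
zpool status -x          # a vanished pool often reports "all pools are healthy"
zpool import             # scan attached devices for importable pools
zpool import pool4_green # attempt the import; per the follow-ups in this
                         # thread, it fails with EBADE on this pool
```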

Re: [zfs-discuss] Announce: zfsdump

2010-06-28 Thread Brian Kolaci
On Jun 28, 2010, at 12:26 PM, Tristram Scott wrote:
>> I use Bacula which works very well (much better than
>> Amanda did).
>> You may be able to customize it to do direct zfs
>> send/receive, however I find that although they are
>> great for copying file systems to other machines,
>> they are i

Re: [zfs-discuss] Announce: zfsdump

2010-06-28 Thread Brian Kolaci
I use Bacula which works very well (much better than Amanda did). You may be able to customize it to do direct zfs send/receive, however I find that although they are great for copying file systems to other machines, they are inadequate for backups unless you always intend to restore the whole f
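
For contrast with file-level backup, the snapshot replication the post alludes to looks roughly like this (dataset, host, and snapshot names are illustrative):

```shell
# Full replication of a dataset tree to another machine.
zfs snapshot -r tank/home@backup-1
zfs send -R tank/home@backup-1 | ssh backuphost zfs receive -d backuppool
# Later, ship only the delta between two snapshots:
zfs snapshot -r tank/home@backup-2
zfs send -R -i @backup-1 tank/home@backup-2 | ssh backuphost zfs receive -d backuppool
```

The catch the post raises: restoring a single file from such a stream means receiving the whole filesystem somewhere first, which is why it is called inadequate for backups.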

Re: [zfs-discuss] ZFS Large scale deployment model

2010-03-02 Thread Brian Kolaci
On Mar 2, 2010, at 11:09 AM, Bob Friesenhahn wrote:
> On Tue, 2 Mar 2010, Brian Kolaci wrote:
>>
>> What is the probability of corruption with ZFS in Solaris 10 U6 and up in a SAN environment? Have people successfully recovered?
>
> The probability of corruption in

[zfs-discuss] ZFS Large scale deployment model

2010-03-02 Thread Brian Kolaci
We have a virtualized environment of T-Series where each host has either zones or LDoms. All of the virtual systems will have their own dedicated storage on ZFS (and some may also get raw LUNs). All the SAN storage is delivered in fixed sized 33GB LUNs. The question I have to the community i
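
With storage delivered as fixed-size LUNs, growth is just repeated `zpool add` (a sketch; pool and device names are placeholders), with the caveat that vdevs cannot be taken back out afterwards:

```shell
zpool create zonepool c4t0d0   # first 33GB LUN
zpool add zonepool c4t1d0      # grow in 33GB increments as LUNs arrive
zpool list zonepool            # capacity grows immediately; note there is
                               # no corresponding shrink in Solaris 10
```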

[zfs-discuss] panic: assertion failed: 0 == dmu_buf_hold_array(os, object, offset, size, FALSE, FTAG, &numbufs, &dbp), file: ../../common/fs/zfs/dmu.c, line: 591

2010-02-23 Thread Brian Kolaci
I recently upgraded a box to Solaris 10 U8. I've been getting more timeouts and I guess the Adaptec card is suspect, possibly not able to keep up, so it issues bus resets at times. It has apparently corrupted some files on the pool, and zpool status -v showed 2 files and one dataset corrupt.

Re: [zfs-discuss] adpu320 scsi timeouts only with ZFS

2010-01-14 Thread Brian Kolaci
I was frustrated with this problem for months. I've tried different disks, cables, even disk cabinets. The driver hasn't been updated in a long time. When the timeouts occurred, they would freeze for about a minute or two (showing the 100% busy). I even had the problem with less than 8 L

[zfs-discuss] adpu320 scsi timeouts only with ZFS

2009-11-22 Thread Brian Kolaci
Hi, I'm having trouble with SCSI timeouts, but it appears to only happen when I use ZFS. I've tried to replicate with SVM, but I can't get the timeouts to happen when that is the underlying volume manager; however, the performance with ZFS is much better when it does work. The symptom is tha
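
One mitigation commonly suggested for this class of problem (a sketch, not a confirmed fix for adpu320 specifically) is to lower the per-vdev I/O queue depth, since ZFS issues far more concurrent commands than SVM does and some HBAs respond to the backlog with bus resets. On Solaris 10 of this era the tunable is zfs_vdev_max_pending; the value 10 is a starting point to experiment with, not a verified number:

```shell
# Append to /etc/system and reboot (requires root).
echo 'set zfs:zfs_vdev_max_pending = 10' >> /etc/system
```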

Re: [zfs-discuss] zfs eradication

2009-11-11 Thread Brian Kolaci
Thanks all, It was a government customer that I was talking to, and it sounded like a good idea; however, with the certification paper trails required today, I don't think it would be of such a benefit after all. It may be useful on the disk evacuation, but they're still going to need their pa

[zfs-discuss] zfs eradication

2009-11-10 Thread Brian Kolaci
Hi, I was discussing the common practice of disk eradication used by many firms for security. I was thinking this may be a useful feature of ZFS: an option to eradicate data as it's removed, meaning after the last reference/snapshot is done and a block is freed, then write the eradicati
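
A small sketch of why a plain per-file shred does not cover this on ZFS: on a copy-on-write filesystem the overwrite is allocated to new blocks, so the original data survives on disk until its blocks are reused, which is exactly the gap an eradicate-on-free option would close. The script below only demonstrates the in-place overwrite that works on conventional rewrite-in-place filesystems; the file is a temp file:

```shell
# In-place overwrite before unlink: sufficient on a rewrite-in-place
# filesystem, NOT on ZFS, where COW redirects the zeros to fresh blocks.
f=$(mktemp)
printf 'sensitive' > "$f"
size=$(wc -c < "$f")
dd if=/dev/zero of="$f" bs=1 count="$size" conv=notrunc 2>/dev/null
# the file is now all NUL bytes (on this filesystem) and can be unlinked
rm "$f"
```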

[zfs-discuss] Changing zpool vdev name

2009-08-06 Thread Brian Kolaci
Is there a way to change the device name used to create a zpool? My customer created their pool on EMC PowerPath. An SA removed PowerPath by mistake, then reinstalled it. The names on the zpool are now the physical device names of one path. They have data on there already, so they woul

Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Brian Kolaci
On Aug 6, 2009, at 5:36 AM, Ian Collins wrote:
Brian Kolaci wrote:
They understand the technology very well.

Yes, ZFS is very flexible with many features, and most are not needed in an enterprise environment where they have high-end SAN storage that is shared between Sun, IBM, linux

Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Brian Kolaci
cindy.swearin...@sun.com wrote:
> Brian,
> CR 4852783 was updated again this week so you might add yourself or your customer to continue to be updated.

Will do. I thought I was on it, but didn't see any updates...

> In the meantime, a reminder is that a mirrored ZFS configuration is flexible in
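
The mirror flexibility being referred to can be sketched like this (device names are placeholders): sides of a mirror can be attached and detached freely, which at the time was the only supported way to migrate a pool onto different LUNs:

```shell
zpool attach tank c1t0d0 c2t0d0   # mirror the existing disk onto the new LUN
zpool status tank                 # wait until resilvering completes
zpool detach tank c1t0d0          # drop the old side; pool now lives on c2t0d0
```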

Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Brian Kolaci
Bob Friesenhahn wrote:
> On Wed, 5 Aug 2009, Brian Kolaci wrote:
>> I have a customer that is trying to move from VxVM/VxFS to ZFS, however they have this same need. They want to save money and move to ZFS. They are charged by a separate group for their SAN storage needs. The business group

Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Brian Kolaci
Richard Elling wrote:
> On Aug 5, 2009, at 2:58 PM, Brian Kolaci wrote:
>> I'm chiming in late, but have a mission critical need of this as well and posted as a non-member before. My customer was wondering when this would make it into Solaris 10. Their complete adoption depends on it. I

Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Brian Kolaci
I'm chiming in late, but have a mission critical need of this as well and posted as a non-member before. My customer was wondering when this would make it into Solaris 10. Their complete adoption depends on it. I have a customer that is trying to move from VxVM/VxFS to ZFS, however they have

[zfs-discuss] zfs remove vdev

2009-08-04 Thread Brian Kolaci
Does anyone know when Solaris 10 will have the bits to allow removal of vdevs from a pool to shrink the storage?

Thanks,
Brian

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
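
For reference, a sketch of what `zpool remove` did and did not cover in Solaris 10 at the time (pool and device names are illustrative):

```shell
zpool remove tank c3t2d0   # accepted for hot spares and cache devices only
# Removing a top-level data vdev to shrink the pool was not supported;
# that capability was tracked as CR 4852783.
```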

[zfs-discuss] Changing zpool vdev name

2009-08-04 Thread Brian Kolaci
Is there a way to change the device name used to create a zpool? My customer created their pool with physical device names rather than the emc powerpath virtual names. They have data on there already, so they would like to preserve it. My experience with zpool replace is that it copies data ove

[zfs-discuss] How to unmount when devices write-disabled?

2008-04-01 Thread Brian Kolaci
In a recovery situation where the primary node crashed, the disks get write-disabled while the failover node takes control. How can you unmount the zpool? It panics the system and actually gets into a panic loop when it tries to mount it again on next boot. Thanks, Brian
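
Workarounds from that era for this failover scenario, hedged: the cache file path is the standard Solaris 10 location, and the pool name is a placeholder. A forced export may still fail once the array rejects writes, since export needs to update the labels:

```shell
zpool export -f pool1      # try a forced export first
# If the node panics in a loop re-opening the pool at boot, boot failsafe
# and move the cache file aside so no pools are opened automatically:
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad
# After failback, re-import explicitly: zpool import pool1
```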

Re: [zfs-discuss] Yager on ZFS

2007-12-13 Thread Brian Kolaci
Robert Milkowski wrote:
> Hello can,
>
> Thursday, December 13, 2007, 12:02:56 AM, you wrote:
>
> cyg> On the other hand, there's always the possibility that someone
> cyg> else learned something useful out of this. And my question about
>
> To be honest - there's basically nothing useful in th