Re: [zfs-discuss] ZFS on a damaged disk

2006-12-12 Thread Tomas Ögren
On 12 December, 2006 - Patrick P Korsnick sent me these 1,1K bytes: > I have a machine with a disk that has some sort of defect and I've > found that if I partition only half of the disk the machine will > still work. I tried to use 'format' to scan the disk and find the bad > blocks, but it

[zfs-discuss] ZFS on a damaged disk

2006-12-12 Thread Patrick P Korsnick
I have a machine with a disk that has some sort of defect and I've found that if I partition only half of the disk the machine will still work. I tried to use 'format' to scan the disk and find the bad blocks, but it didn't work. So as I don't know where the bad blocks are but I'd still li

[zfs-discuss] Re: Kickstart hot spare attachment

2006-12-12 Thread Anton B. Rang
> If the SCSI commands hang forever, then there is nothing that ZFS can > do, as a single write will never return. The more likely case is that > the commands are continually timing out with very long response times, > and ZFS will continue to talk to them forever. It looks like the sd driver d

[zfs-discuss] Re: ZFS and write caching (SATA)

2006-12-12 Thread Anton B. Rang
It took manufacturers of SCSI drives some years to get this right. Around 1997 or so we were still seeing drives at my former employer that didn't properly flush their caches under all circumstances (and had other "interesting" behaviours WRT caching). Lots of ATA disks never did bother to impl

Re: [zfs-discuss] ZFS Usage in Warehousing (lengthy intro)

2006-12-12 Thread Rob Logan
> http://www.norcotek.com/item_detail.php?categoryid=8&modelno=DS-1220 Yeah, SiI3726 multipliers are cool.. http://cooldrives.com/cosapomubrso.html http://cooldrives.com/mac-port-multiplier-sata-case.html but finding PCI-X slots for Ying Tian's si3124 or marvell88sx cards is getting tricky.. even

[zfs-discuss] Re: ZFS Storage Pool advice

2006-12-12 Thread Anton B. Rang
> We're looking for pure performance. > > What will be contained in the LUNs is Student User > account files that they will access and Department > Share files like MS Word documents, Excel files, > PDFs. There will be no applications on the ZFS > Storage pools or pool. Does this help on what > s

[zfs-discuss] Re: Uber block corruption?

2006-12-12 Thread Anton B. Rang
> Also note that the UB is written to every vdev (4 per disk) so the > chances of all UBs being corrupted is rather low. The chances that they're corrupted by the storage system, yes. However, they are all sourced from the same in-memory buffer, so an undetected in-memory error (e.g. kernel bug

[zfs-discuss] Re: ZFS behavior under heavy load (I/O that is)

2006-12-12 Thread Anton B. Rang
I think you may be observing that fsync() is slow. The file will be written, and visible to other processes via the in-memory cache, before the data has been pushed to disk. vi forces the data out via fsync, and that can be quite slow when the file system is under load, especially before a fix

Re: [zfs-discuss] Monitoring ZFS

2006-12-12 Thread Tom Duell
Thanks, Neil, for the assistance. Tom Neil Perrin wrote On 12/12/06 19:59,: >Tom Duell wrote On 12/12/06 17:11,: > > >>Group, >> >>We are running a benchmark with 4000 users >>simulating a hospital management system >>running on Solaris 10 6/06 on USIV+ based >>SunFire 6900 with 6540 storage ar

[zfs-discuss] ZFS behavior under heavy load (I/O that is)

2006-12-12 Thread Anantha N. Srirama
I'm observing the following behavior on our E2900 (24 x 92 config), 2 FCs, and ... I have a large filesystem (~758GB) with compression on. When this filesystem is under heavy load (>150MB/s) I have problems saving files in 'vi'. I posted here about it and recall that the issue is addressed in Sol1

Re: [zfs-discuss] Monitoring ZFS

2006-12-12 Thread Neil Perrin
Tom Duell wrote On 12/12/06 17:11,: Group, We are running a benchmark with 4000 users simulating a hospital management system running on Solaris 10 6/06 on USIV+ based SunFire 6900 with 6540 storage array. Are there any tools for measuring internal ZFS activity to help us understand what is g

Re: Re[2]: [zfs-discuss] Uber block corruption?

2006-12-12 Thread Darren Dunham
> Hello Toby, > > Tuesday, December 12, 2006, 4:18:54 PM, you wrote: > TT> On 12-Dec-06, at 9:46 AM, George Wilson wrote: > > >> Also note that the UB is written to every vdev (4 per disk) so the > >> chances of all UBs being corrupted is rather low. > > It depends actually - if all your vdevs

[zfs-discuss] Monitoring ZFS

2006-12-12 Thread Tom Duell
Group, We are running a benchmark with 4000 users simulating a hospital management system running on Solaris 10 6/06 on USIV+ based SunFire 6900 with 6540 storage array. Are there any tools for measuring internal ZFS activity to help us understand what is going on during slowdowns? We have 192GB

[zfs-discuss] Re: ZFS Storage Pool advice

2006-12-12 Thread Kory Wheatley
Also there will be no NFS services on this system.

[zfs-discuss] Re: ZFS Storage Pool advice

2006-12-12 Thread Kory Wheatley
We're looking for pure performance. What will be contained in the LUNs is Student User account files that they will access and Department Share files like MS Word documents, Excel files, PDFs. There will be no applications on the ZFS Storage pools or pool. Does this help on what strategy might

Re: [zfs-discuss] Performance problems during 'destroy' (and bizarre Zone problem as well)

2006-12-12 Thread Matthew Ahrens
Anantha N. Srirama wrote: - Why is the destroy phase taking so long? Destroying clones will be much faster with build 53 or later (or the unreleased s10u4 or later) -- see bug 6484044. - What can explain the unduly long snapshot/clone times - Why didn't the Zone startup? - More surprisi

Re: [zfs-discuss] ZFS and write caching (SATA)

2006-12-12 Thread Peter Schuller
> PS> While I do intend to perform actual powerloss tests, it would be > interesting PS> to hear from anybody whether it is generally expected to be > safe. > > Well, if disks honor cache flush commands then it should be reliable > whether it's a SATA or SCSI disk. Yes. Sorry, I could have stated my

Re: [zfs-discuss] ZFS Storage Pool advice

2006-12-12 Thread Jason J. W. Williams
Hi Kory, It depends on the capabilities of your array in our experience...and also the zpool type. If you're going to do RAID-Z in a write-intensive environment you're going to have a lot more I/Os with three LUNs than with a single large LUN. Your controller may go nutty. Also, (Richard can address

Re: [zfs-discuss] SunCluster HA-NFS from Sol9/VxVM to Sol10u3/ZFS

2006-12-12 Thread Torrey McMahon
Robert Milkowski wrote: Hello Matthew, MCA> Also, I am considering what type of zpools to create. I have a MCA> SAN with T3Bs and SE3511s. Since neither of these can work as a MCA> JBOD (at least that is what I remember) I guess I am going to MCA> have to add in the LUNs in a mirrored zpool of

Re: [zfs-discuss] ZFS and write caching (SATA)

2006-12-12 Thread Robert Milkowski
Hello Peter, Tuesday, December 12, 2006, 11:18:32 PM, you wrote: PS> Hello, PS> my understanding is that ZFS is specifically designed to work with write PS> caching, by instructing drives to flush their caches when a write barrier is PS> needed. And in fact, even turns write caching on explicitl

Re: [zfs-discuss] Re: Re: Sol10u3 -- is "du" bug fixed?

2006-12-12 Thread Robert Milkowski
Hello Anton, Tuesday, December 12, 2006, 9:36:41 PM, you wrote: ABR> Is there an easy way to determine whether a pool has this fix applied or not? Yep. Just do 'df -h' and see what the reported size of the pool is. It should be something like N-1 times the disk size for each raid-z group. If it is N t
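
A concrete way to run the check Robert describes (pool and disk sizes here are illustrative, not taken from the thread): for a raid-z group of N disks, a pool created with the fix should report roughly (N-1) x disk size.

  # Example: a raid-z group of 4 x 500GB disks. With the fix, df -h should
  # report roughly 1.5T (N-1 disks' worth); without it, closer to 2T.
  $ zpool status mypool    # confirm the raid-z layout and disk count
  $ df -h /mypool          # compare the reported size against (N-1) x disk size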

[zfs-discuss] ZFS and write caching (SATA)

2006-12-12 Thread Peter Schuller
Hello, my understanding is that ZFS is specifically designed to work with write caching, by instructing drives to flush their caches when a write barrier is needed. And in fact, even turns write caching on explicitly on managed devices. My question is of a practical nature: will this *actually

Re: [zfs-discuss] Re: Netapp to Solaris/ZFS issues

2006-12-12 Thread Darren Dunham
> NetApp can actually grow their RAID groups, but they recommend adding > an entire RAID group at once instead. If you add a disk to a RAID > group on NetApp, I believe you need to manually start a reallocate > process to balance data across the disks. There's no reallocation process that I'm awar

Re: [zfs-discuss] Re: ZFS Storage Pool advice

2006-12-12 Thread Neil Perrin
Are you looking purely for performance, or for the added reliability that ZFS can give you? If the latter, then you would want to configure across multiple LUNs in either a mirrored or RAID configuration. This does require sacrificing some storage in exchange for the peace of mind that any “si

[zfs-discuss] Re: Re: Sol10u3 -- is "du" bug fixed?

2006-12-12 Thread Anton B. Rang
Is there an easy way to determine whether a pool has this fix applied or not?

[zfs-discuss] Re: ZFS Storage Pool advice

2006-12-12 Thread Anton B. Rang
Are you looking purely for performance, or for the added reliability that ZFS can give you? If the latter, then you would want to configure across multiple LUNs in either a mirrored or RAID configuration. This does require sacrificing some storage in exchange for the peace of mind that any “sil

Re: [zfs-discuss] Kickstart hot spare attachment

2006-12-12 Thread James F. Hranicky
Eric Schrock wrote: > Hmmm, it means that we correctly noticed that the device had failed, but > for whatever reason the ZFS FMA agent didn't correctly replace the > drive. I am cleaning up the hot spare behavior as we speak so I will > try to reproduce this. Ok, great. >> Well, as long as I kn

Re: [zfs-discuss] Kickstart hot spare attachment

2006-12-12 Thread Eric Schrock
On Tue, Dec 12, 2006 at 02:38:22PM -0500, James F. Hranicky wrote: > > Dec 11 14:42:32.1271 1319464e-7a8c-e65b-962e-db386e90f7f2 ZFS-8000-D3 > 100% fault.fs.zfs.device > > Problem in: zfs://pool=2646e20c1cb0a9d0/vdev=724c128cdbc17745 >Affects: zfs://pool=2646e20c1cb0a9d0/vd

Re: [zfs-discuss] ZFS Storage Pool advice

2006-12-12 Thread Richard Elling
Kory Wheatley wrote: This question is concerning ZFS. We have a Sun Fire V890 attached to an EMC disk array. Here's our plan to incorporate ZFS: On our EMC storage array we will create 3 LUNs. Now how would ZFS be used for the best performance? What I'm trying to ask is if you have 3 LUNs a

Re: [zfs-discuss] SunCluster HA-NFS from Sol9/VxVM to Sol10u3/ZFS

2006-12-12 Thread Richard Elling
Matthew C Aycock wrote: We are currently working on a plan to upgrade our HA-NFS cluster that uses HA-StoragePlus and VxVM 3.2 on Solaris 9 to Solaris 10 and ZFS. Is there a known procedure or best practice for this? I have enough free disk space to recreate all the filesystems and copy the dat

Re: [zfs-discuss] Kickstart hot spare attachment

2006-12-12 Thread James F. Hranicky
Eric Schrock wrote: > On Tue, Dec 12, 2006 at 02:08:57PM -0500, James F. Hranicky wrote: >> Sure, but that's what I want to avoid. The FMA agent should do this by >> itself, but it's not, so I guess I'm just wondering why, or if there's >> a good way to get to do so. If this happens in the middle o

Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-12 Thread Joe Little
On 12/12/06, James F. Hranicky <[EMAIL PROTECTED]> wrote: Jim Davis wrote: >> Have you tried using the automounter as suggested by the linux faq?: >> http://nfs.sourceforge.net/#section_b > > Yes. On our undergrad timesharing system (~1300 logins) we actually hit > that limit with a standard au

Re: [zfs-discuss] Kickstart hot spare attachment

2006-12-12 Thread Eric Schrock
On Tue, Dec 12, 2006 at 02:08:57PM -0500, James F. Hranicky wrote: > > Sure, but that's what I want to avoid. The FMA agent should do this by > itself, but it's not, so I guess I'm just wondering why, or if there's > a good way to get to do so. If this happens in the middle of the night I > don't

[zfs-discuss] Re: Re: Sol10u3 -- is "du" bug fixed?

2006-12-12 Thread Jeb Campbell
> IIRC you have to re-create entire raid-z pool to get > it fixed - just > rewriting data or upgrading a pool won't do it. You are correct ... Now I have to find some place to stick +1TB of temp files ;) Thanks for the help, Jeb

Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-12 Thread James F. Hranicky
Jim Davis wrote: >> Have you tried using the automounter as suggested by the linux faq?: >> http://nfs.sourceforge.net/#section_b > > Yes. On our undergrad timesharing system (~1300 logins) we actually hit > that limit with a standard automounting scheme. So now we make static > mounts of the N

Re: [zfs-discuss] Kickstart hot spare attachment

2006-12-12 Thread James F. Hranicky
Eric Schrock wrote: > On Tue, Dec 12, 2006 at 07:53:32AM -0800, Jim Hranicky wrote: >> - I know I can attach it via the zpool commands, but is there a way to >> kickstart the attachment process if it fails to attach automatically upon >> disk failure? > > Yep. Just do a 'zpool replace zmir '. T

Re: [zfs-discuss] Re: Sol10u3 -- is "du" bug fixed?

2006-12-12 Thread Matthew Ahrens
Jeb Campbell wrote: After upgrade you did actually re-create your raid-z pool, right? No, but I did "zpool upgrade -a". Hmm, I guess I'll try re-writing the data first. I know you have to do that if you change compression options. Ok -- rewriting the data doesn't work ... I'll create a new

Re: [zfs-discuss] SunCluster HA-NFS from Sol9/VxVM to Sol10u3/ZFS

2006-12-12 Thread Robert Milkowski
Hello Matthew, Tuesday, December 12, 2006, 7:13:47 PM, you wrote: MCA> We are currently working on a plan to upgrade our HA-NFS cluster MCA> that uses HA-StoragePlus and VxVM 3.2 on Solaris 9 to Solaris 10 MCA> and ZFS. Is there a known procedure or best practice for this? I MCA> have enough free

Re: [zfs-discuss] Kickstart hot spare attachment

2006-12-12 Thread Eric Schrock
On Tue, Dec 12, 2006 at 07:53:32AM -0800, Jim Hranicky wrote: > > - I know I can attach it via the zpool commands, but is there a way to > kickstart the attachment process if it fails to attach automatically upon > disk failure? Yep. Just do a 'zpool replace zmir '. This is what the FMA agent
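
The device arguments in the command above were stripped by the archive; a minimal sketch of the manual replacement, using the device names from the zpool create earlier in this thread as placeholders (c3t2d0 being the failed mirror member and c3t1d0 the designated spare):

  # zpool replace zmir c3t2d0 c3t1d0
  # zpool status zmir    # the spare should now show as in use within the mirror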

Re: [zfs-discuss] Re: Sol10u3 -- is "du" bug fixed?

2006-12-12 Thread Robert Milkowski
Hello Jeb, Tuesday, December 12, 2006, 7:11:30 PM, you wrote: >> After upgrade you did actually re-create your raid-z >> pool, right? JC> No, but I did "zpool upgrade -a". JC> Hmm, I guess I'll try re-writing the data first. I know you have JC> to do that if you change compression options. II

[zfs-discuss] Performance problems during 'destroy' (and bizarre Zone problem as well)

2006-12-12 Thread Anantha N. Srirama
Setting: We've been operating in the following setup for well over 60 days. - E2900 (24 x 92) - 2 2Gbps FC to EMC SAN - Solaris 10 Update 2 (06/06) - ZFS with compression turned on - Global zone + 1 local zone (sparse) - Local zone is fed ZFS clones from the global zone. Daily Routine

[zfs-discuss] ZFS Storage Pool advice

2006-12-12 Thread Kory Wheatley
This question is concerning ZFS. We have a Sun Fire V890 attached to an EMC disk array. Here's our plan to incorporate ZFS: On our EMC storage array we will create 3 LUNs. Now how would ZFS be used for the best performance? What I'm trying to ask is if you have 3 LUNs and you want to create a
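
For reference, the two layouts the replies weigh look roughly like this (pool and LUN names are placeholders, not Kory's actual devices):

  # Option 1: stripe the pool across all three LUNs -- maximum capacity and
  # throughput, but redundancy is left entirely to the EMC array:
  # zpool create studentpool c2t0d0 c2t1d0 c2t2d0
  #
  # Option 2: raid-z across the three LUNs -- ZFS can detect and repair
  # silent corruption, at the cost of one LUN's worth of capacity and
  # extra write I/O:
  # zpool create studentpool raidz c2t0d0 c2t1d0 c2t2d0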

[zfs-discuss] SunCluster HA-NFS from Sol9/VxVM to Sol10u3/ZFS

2006-12-12 Thread Matthew C Aycock
We are currently working on a plan to upgrade our HA-NFS cluster that uses HA-StoragePlus and VxVM 3.2 on Solaris 9 to Solaris 10 and ZFS. Is there a known procedure or best practice for this? I have enough free disk space to recreate all the filesystems and copy the data if necessary, but would

Re[2]: [zfs-discuss] Re: zpool import takes to long with large numbers of file systems

2006-12-12 Thread Robert Milkowski
Hello Jason, Thursday, December 7, 2006, 11:18:17 PM, you wrote: JJWW> Hi Luke, JJWW> That's terrific! JJWW> You know you might be able to tell ZFS which disks to look at. I'm not JJWW> sure. It would be interesting, if anyone with a Thumper could comment JJWW> on whether or not they see the im

[zfs-discuss] Re: Sol10u3 -- is "du" bug fixed?

2006-12-12 Thread Jeb Campbell
> After upgrade you did actually re-create your raid-z > pool, right? No, but I did "zpool upgrade -a". Hmm, I guess I'll try re-writing the data first. I know you have to do that if you change compression options. Ok -- rewriting the data doesn't work ... I'll create a new temp pool and see

Re[2]: [zfs-discuss] Uber block corruption?

2006-12-12 Thread Robert Milkowski
Hello Toby, Tuesday, December 12, 2006, 4:18:54 PM, you wrote: TT> On 12-Dec-06, at 9:46 AM, George Wilson wrote: >> Also note that the UB is written to every vdev (4 per disk) so the >> chances of all UBs being corrupted is rather low. It depends actually - if all your vdevs are on the same

Re: [zfs-discuss] Need Clarification on ZFS quota property.

2006-12-12 Thread Darren Dunham
> Hi All, > > Assume the device c0t0d0 size is 10 KB. > I created ZFS file system on this > $ zpool create -f mypool c0t0d0s2 This creates a pool on the entire slice. > and to limit the size of ZFS file system I used quota property. > > $ zfs set quota = 5000K mypool Note
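
One syntax note when reproducing the steps quoted above: zfs set expects property=value as a single argument, with no spaces around the '='. A minimal sketch using the pool name from the post:

  # zpool create -f mypool c0t0d0s2
  # zfs set quota=5000K mypool
  # zfs get quota mypool    # verify the quota that took effect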

[zfs-discuss] Re: ZFS Usage in Warehousing (lengthy intro)

2006-12-12 Thread Anton B. Rang
> But seriously, the big issue with SCSI, is that the SCSI commands are sent > over the SCSI bus at the original (legacy) rate of 5 Mbits/Sec in 8-bit > mode. Actually, this isn't true on the newest (Ultra320) SCSI systems, though I don't know if the 3320 supports packetized SCSI. It's definitel

Re: [zfs-discuss] Sol10u3 -- is "du" bug fixed?

2006-12-12 Thread Robert Milkowski
Hello Jeb, Tuesday, December 12, 2006, 6:04:36 PM, you wrote: JC> I updated to Sol10u3 last night, and I'm still seeing JC> differences between "du -h" and "ls -h". JC> "du" seems to take into account raidz and compression -- if this is correct, please let me know. JC> It makes sense

Re: [zfs-discuss] Re: Re: Re: Snapshots impact on performance

2006-12-12 Thread Robert Milkowski
Hello Chris, Wednesday, December 6, 2006, 6:23:48 PM, you wrote: CG> One of our file servers internally to Sun that reproduces this CG> running nv53 here is the dtrace output: Any conclusions yet? -- Best regards, Robert mailto:[EMAIL PROTECTED]

[zfs-discuss] Sol10u3 -- is "du" bug fixed?

2006-12-12 Thread Jeb Campbell
I updated to Sol10u3 last night, and I'm still seeing differences between "du -h" and "ls -h". "du" seems to take into account raidz and compression -- if this is correct, please let me know. It makes sense that "du" reports actual disk usage, but this makes some scripts I wrote very
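
A quick way to see the kind of difference Jeb describes (the path is illustrative, not from the thread): ls reports the logical length of a file, while du reports what is actually allocated on disk, which can differ under compression and raid-z.

  $ ls -lh /tank/data/somefile    # logical size of the file
  $ du -h  /tank/data/somefile    # on-disk usage as accounted by ZFS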

Re: [zfs-discuss] ZFS Usage in Warehousing (lengthy intro)

2006-12-12 Thread Stuart Glenn
On Dec 12, 2006, at 10:02, Al Hopper wrote: Another possiblity, which is on my todo list to checkout, is: http://www.norcotek.com/item_detail.php?categoryid=8&modelno=DS-1220 I would not go with this device. I picked up one along with 12 500GB SATA drives with the hopes of making a dumpin

[zfs-discuss] Re: zpool mirror

2006-12-12 Thread Gino Ruopolo
> > Not right now (without a bunch of shell-scripting). > I'm working on > being able to "send" a whole tree of filesystems & > their snapshots. > Would that do what you want? Exactly! When do you think that -really useful- feature will be available? thanks, gino
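
A sketch of what sending a whole tree of filesystems and their snapshots could look like once such a recursive-send feature is available (the -r/-R flags and dataset names here are assumptions based on later ZFS releases, not something confirmed in this thread):

  # zfs snapshot -r tank/home@migrate                  # snapshot every descendant dataset
  # zfs send -R tank/home@migrate | zfs receive -d newtank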

[zfs-discuss] Re: Uber block corruption?

2006-12-12 Thread Anton B. Rang
> [...] there is no possibility of referencing an overwritten > block unless you have to back off more than two uberblocks. At this > point, blocks that have been overwritten will show up as corrupted (bad > checksums). Hmmm. Is there some way we can warn the user to scrub their pool because we

Re: [zfs-discuss] Re: zfs exported a live filesystem

2006-12-12 Thread Darren J Moffat
Jim Hranicky wrote: Now having said that I personally wouldn't have expected that zpool export should have worked as easily as that while there were shared filesystems. I would have expected that exporting the pool should have attempted to unmount all the ZFS filesystems first - which would h

[zfs-discuss] Re: ZFS related kernel panic

2006-12-12 Thread Anton B. Rang
> UFS will panic on EIO also. Most other file systems, too. In which cases will UFS panic on an I/O error? A quick browse through the UFS code shows several cases where we can panic if we have bad metadata on disk, but none if a disk read (or write) fails altogether. If UFS fails to read a bl

Re: [zfs-discuss] ZFS Corruption

2006-12-12 Thread eric kustarz
Bill Casale wrote: Please reply directly to me. Seeing the message below. Is it possible to determine exactly which file is corrupted? I was thinking the OBJECT/RANGE info may be pointing to it but I don't know how to equate that to a file. This is bug: 6410433 'zpool status -v' would be more

Re: [zfs-discuss] ZFS Usage in Warehousing (lengthy intro)

2006-12-12 Thread Al Hopper
On Fri, 8 Dec 2006, Jochen M. Kaiser wrote: > Dear all, > > we're currently looking forward to restructure our hardware environment for > our datawarehousing product/suite/solution/whatever. > > We're currently running the database side on various SF V440's attached via > dual FC to our SAN backen

[zfs-discuss] Kickstart hot spare attachment

2006-12-12 Thread Jim Hranicky
For my latest test I set up a stripe of two mirrors with one hot spare like so: zpool create -f -m /export/zmir zmir mirror c0t0d0 c3t2d0 mirror c3t3d0 c3t4d0 spare c3t1d0 I spun down c3t2d0 and c3t4d0 simultaneously, and while the system kept running (my tar over NFS barely hiccuped), the zpoo

[zfs-discuss] Re: Netapp to Solaris/ZFS issues

2006-12-12 Thread Anton B. Rang
NetApp can actually grow their RAID groups, but they recommend adding an entire RAID group at once instead. If you add a disk to a RAID group on NetApp, I believe you need to manually start a reallocate process to balance data across the disks.

Re: [zfs-discuss] Uber block corruption?

2006-12-12 Thread Toby Thain
On 12-Dec-06, at 9:46 AM, George Wilson wrote: Also note that the UB is written to every vdev (4 per disk) so the chances of all UBs being corrupted is rather low. Furthermore the time window where UBs are mutually inconsistent would be very short, since they'd be updated together? --Tob

Re: [zfs-discuss] Netapp to Solaris/ZFS issues

2006-12-12 Thread Robert Milkowski
Hello Jim, Wednesday, December 6, 2006, 3:28:53 PM, you wrote: JD> We have two aging Netapp filers and can't afford to buy new Netapp gear, JD> so we've been looking with a lot of interest at building NFS fileservers JD> running ZFS as a possible future approach. Two issues have come up in the J

Re: [zfs-discuss] Uber block corruption?

2006-12-12 Thread Mark Maybee
[EMAIL PROTECTED] wrote: Hello Casper, Tuesday, December 12, 2006, 10:54:27 AM, you wrote: So 'a' UB can become corrupt, but it is unlikely that 'all' UBs will become corrupt through something that doesn't also make all the data also corrupt or inaccessible. CDSC> So how does this work for

Re: [zfs-discuss] Uber block corruption?

2006-12-12 Thread George Wilson
Also note that the UB is written to every vdev (4 per disk) so the chances of all UBs being corrupted is rather low. Thanks, George Darren Dunham wrote: DD> To reduce the chance of it affecting the integrety of the filesystem, DD> there are multiple copies of the UB written, each with a checks

Re: [zfs-discuss] How to do DIRECT IO on ZFS ?

2006-12-12 Thread Roch - PAE
Maybe this will help: http://blogs.sun.com/roch/entry/zfs_and_directio -r dudekula mastan writes: > Hi All, > > We have the directio() system call to do direct I/O on a UFS file system. Does > anyone know how to do direct I/O on a ZFS file system? > > Regards > Masthan

Re: [zfs-discuss] Re: ZFS Usage in Warehousing (no more lengthy intro)

2006-12-12 Thread Robert Milkowski
Hello Jochen, Sunday, December 10, 2006, 10:51:57 AM, you wrote: JMK> James, >> Just a thought. >> >> have you thought about giving thumper x4500's a trial >> for this work >> load? Oracle would seem to be IO limited in the end >> so 4 cores may be >> enough to keep oracle happy when linked wi

Re: [zfs-discuss] ZFS Corruption

2006-12-12 Thread George Wilson
Bill, If you want to find the file associated with the corruption you could do a "find /u01 -inum 4741362" or use the output of "zdb -d u01" to find the object associated with that id. Thanks, George Bill Casale wrote: Please reply directly to me. Seeing the message below. Is it possib
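
Spelled out, the two lookups George suggests (the object number is the one from the zpool status -v output quoted in this thread):

  # find /u01 -inum 4741362 -print    # map the object number back to a file path
  # zdb -d u01                        # or inspect the pool's datasets/objects via zdb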

Re: Re[2]: [zfs-discuss] Uber block corruption?

2006-12-12 Thread Casper . Dik
>Hello Casper, > >Tuesday, December 12, 2006, 10:54:27 AM, you wrote: > >>>So 'a' UB can become corrupt, but it is unlikely that 'all' UBs will >>>become corrupt through something that doesn't also make all the data >>>also corrupt or inaccessible. > > >CDSC> So how does this work for data which i

Re: [zfs-discuss] ZFS Corruption

2006-12-12 Thread Robert Milkowski
Hello Bill, Tuesday, December 12, 2006, 2:34:01 PM, you wrote: BC> Please reply directly to me. Seeing the message below. BC> Is it possible to determine exactly which file is corrupted? BC> I was thinking the OBJECT/RANGE info may be pointing to it BC> but I don't know how to equate that to a f

Re: [zfs-discuss] How to do DIRECT IO on ZFS ?

2006-12-12 Thread Robert Milkowski
Hello dudekula, Tuesday, December 12, 2006, 9:36:24 AM, you wrote: > Hi All, We have the directio() system call to do direct I/O on a UFS file system. Does anyone know how to do direct I/O on a ZFS file system? Right now you can't. -- Best regards, Robert mailto:

[zfs-discuss] ZFS Corruption

2006-12-12 Thread Bill Casale
Please reply directly to me. Seeing the message below. Is it possible to determine exactly which file is corrupted? I was thinking the OBJECT/RANGE info may be pointing to it but I don't know how to equate that to a file. # zpool status -v pool: u01 state: ONLINE status: One or more devices

[zfs-discuss] Re: zfs exported a live filesystem

2006-12-12 Thread Jim Hranicky
For the record, this happened with a new filesystem. I didn't muck about with an old filesystem while it was still mounted, I created a new one, mounted it and then accidentally exported it. > > Except that it doesn't: > > > > # mount /dev/dsk/c1t1d0s0 /mnt > > # share /mnt > > # umount /mnt > >

Re: [zfs-discuss] zfs exported a live filesystem

2006-12-12 Thread Darren J Moffat
Boyd Adamson wrote: On 12/12/2006, at 8:48 AM, Richard Elling wrote: Jim Hranicky wrote: By mistake, I just exported my test filesystem while it was up and being served via NFS, causing my tar over NFS to start throwing stale file handle errors. Should I file this as a bug, or should I just

Re[2]: [zfs-discuss] Uber block corruption?

2006-12-12 Thread Robert Milkowski
Hello Casper, Tuesday, December 12, 2006, 10:54:27 AM, you wrote: >>So 'a' UB can become corrupt, but it is unlikely that 'all' UBs will >>become corrupt through something that doesn't also make all the data >>also corrupt or inaccessible. CDSC> So how does this work for data which is freed and

Re: [zfs-discuss] Need Clarification on ZFS quota property.

2006-12-12 Thread Tomas Ögren
On 12 December, 2006 - dudekula mastan sent me these 2,7K bytes: > > Hi All, > > Assume the device c0t0d0 size is 10 KB. > > I created ZFS file system on this > > $ zpool create -f mypool c0t0d0s2 > > and to limit the size of ZFS file system I used quota property. >

[zfs-discuss] Need Clarification on ZFS quota property.

2006-12-12 Thread dudekula mastan
Hi All, Assume the device c0t0d0 size is 10 KB. I created a ZFS file system on this: $ zpool create -f mypool c0t0d0s2 and to limit the size of the ZFS file system I used the quota property: $ zfs set quota = 5000K mypool Which 5000 K bytes belong (or are reserved) t

Re: [zfs-discuss] Doubt on solaris 10 installation ..

2006-12-12 Thread Zoram Thanga
[EMAIL PROTECTED] looks like the more appropriate list to post questions like yours. dudekula mastan wrote: Hi Everybody, I have some problems with the Solaris 10 installation. After installing the first CD, I removed the CD from the CD-ROM; after that the machine keeps rebooting agai

Re: [zfs-discuss] Uber block corruption?

2006-12-12 Thread Casper . Dik
>So 'a' UB can become corrupt, but it is unlikely that 'all' UBs will >become corrupt through something that doesn't also make all the data >also corrupt or inaccessible. So how does this work for data which is freed and overwritten; does the system make sure that none of the data referenced by

[zfs-discuss] How to do DIRECT IO on ZFS ?

2006-12-12 Thread dudekula mastan
Hi All, We have the directio() system call to do direct I/O on a UFS file system. Does anyone know how to do direct I/O on a ZFS file system? Regards Masthan