Re: [zfs-discuss] 3510 JBOD with multipath

2008-05-21 Thread Wee Yeh Tan
On Wed, May 21, 2008 at 10:55 PM, Krutibas Biswal <[EMAIL PROTECTED]> wrote: > Robert Milkowski wrote: >> Originally you wanted to get it multipathed, which was the case by >> default. Now you have disabled it (well, you still have two paths but >> no automatic failover). >> > Thanks. Can somebody po

Re: [zfs-discuss] openSolaris ZFS root, swap, dump

2008-05-18 Thread Wee Yeh Tan
IIRC, EFI boot requires support from the system BIOS. On Sun, May 18, 2008 at 1:54 AM, A Darren Dunham <[EMAIL PROTECTED]> wrote: > On Fri, May 16, 2008 at 07:29:31PM -0700, Paul B. Henson wrote: > > For ZFS root, is it required to have a partition and slices? Or can I just > > give it the whole

Re: [zfs-discuss] ZFS Administration

2008-04-09 Thread Wee Yeh Tan
I'm just thinking out loud. What would be the advantage of having periodic snapshots taken within ZFS vs. invoking them from an external facility? On Thu, Apr 10, 2008 at 1:21 AM, sean walmsley <[EMAIL PROTECTED]> wrote: > I haven't used it myself, but the following blog describes an automatic > sna
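
A minimal sketch of the external-facility approach, assuming a hypothetical dataset tank/home (note that % must be escaped in a crontab entry):

    # root crontab entry: hourly snapshot named by timestamp
    0 * * * * /usr/sbin/zfs snapshot tank/home@auto-`date +\%Y\%m\%d\%H\%M`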

Re: [zfs-discuss] ZFS and Java error

2008-03-18 Thread Wee Yeh Tan
You are looking for mdb. echo '0t22861::pid2proc |::walk thread |::findstack' | mdb -k On Tue, Mar 18, 2008 at 11:28 PM, Vahid Moghaddasi <[EMAIL PROTECTED]> wrote: > Thanks for your reply, > Before I used lsof, I tried pstack and truss -p but I get the following > message: > # pstack 22861
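
The pipeline from the thread, spelled out with comments (the pid 22861 comes from the original post; the 0t prefix marks it as decimal):

    # resolve pid -> proc, walk its threads, print each thread's kernel stack
    echo '0t22861::pid2proc | ::walk thread | ::findstack' | mdb -k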

Re: [zfs-discuss] zfs mount fails - directory not empty

2008-03-10 Thread Wee Yeh Tan
Bob, Are you sure that /pandora is mounted? I hazard a guess that the error message is caused by mounting zpool:pandora when /pandora is not empty. I notice that snv81 started mounting zfs in level-order whereas my snv73 did not. On Sun, Mar 9, 2008 at 10:16 AM, Bob Netherton <[EMAIL PROTECTED]

Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-27 Thread Wee Yeh Tan
On Wed, Feb 27, 2008 at 10:42 PM, Marcus Sundman <[EMAIL PROTECTED]> wrote: > Darren J Moffat <[EMAIL PROTECTED]> wrote: > > Marcus Sundman wrote: > > > Nicolas Williams <[EMAIL PROTECTED]> wrote: > > >> On Wed, Feb 27, 2008 at 05:54:29AM +0200, Marcus Sundman wrote: > > >>> Nathan Kroenert <[E

Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-27 Thread Wee Yeh Tan
On Wed, Feb 27, 2008 at 9:36 PM, Uwe Dippel <[EMAIL PROTECTED]> wrote: > I was hoping to be clear with my examples. > Within that 1 minute the user has easily received the mail alert that 5 > mails have arrived, has seen the sender and deleted them. Without any trigger > of some snapshot, or st

Re: [zfs-discuss] Inode (dnode) numbers (Re: rename(2) (mv(1)) between ZFS filesystems in the same zpool)

2008-01-02 Thread Wee Yeh Tan
On Jan 3, 2008 12:32 AM, Nicolas Williams <[EMAIL PROTECTED]> wrote: > Oof, I see this has been discussed since (and, actually, IIRC it was > discussed a long time ago too). > > Anyways, IMO, this requires a new syscall or syscalls: > > xdevrename(2) > xdevcopy(2) > > and then mv(1) can do:

Re: [zfs-discuss] Adding to zpool: would failure of one device destroy all data?

2008-01-02 Thread Wee Yeh Tan
Your data will be striped across both vdevs after you add the 2nd vdev. In any case, failure of one stripe device will result in the loss of the entire pool. I'm not sure, however, if there is any way to recover any data from surviving vdevs. On 1/2/08, Austin <[EMAIL PROTECTED]> wrote: > I didn't

Re: [zfs-discuss] rename(2) (mv(1)) between ZFS filesystems in the same zpool

2008-01-02 Thread Wee Yeh Tan
On Jan 2, 2008 11:46 AM, Darren Reed <[EMAIL PROTECTED]> wrote: > [EMAIL PROTECTED] wrote: > > ... > > That's a sad situation for backup utilities, by the way - a backup > > tool would have no way of finding out that file X on fs A already > > existed as file Z on fs B. So what ? If the file got co

Re: [zfs-discuss] ZFS & array NVRAM cache

2007-10-08 Thread Wee Yeh Tan
On 10/6/07, Vincent Fox <[EMAIL PROTECTED]> wrote: > So I went ahead and loaded 10u4 on a pair of V210 units. > > I am going to set this nocacheflush option and cross my fingers and see how > it goes. > > I have my ZPool mirroring LUNs off 2 different arrays. I have > single-controllers in each
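
For reference, the tunable under discussion goes in /etc/system (a sketch for the S10u4 era; it stops ZFS from issuing cache-flush requests, so it is only safe when every device sits behind battery-backed NVRAM):

    * /etc/system: do not send SYNCHRONIZE CACHE commands to the arrays
    set zfs:zfs_nocacheflush = 1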

Re: [zfs-discuss] recommended disk configuration

2007-09-15 Thread Wee Yeh Tan
On 9/15/07, Peter Bridge <[EMAIL PROTECTED]> wrote: > I have: > 2x150GB SATA ii disks > 2x500GB SATA ii disks I will go with a mirror. You need at least 500GB in parity anyway (since you want to survive any disk failure). That means the maximum you can get out of this setup is 800GB. With a mir
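
A sketch of that layout with hypothetical device names -- one pool made of two mirrored pairs, giving 150GB + 500GB = 650GB usable:

    # mirror the two 150GB disks and the two 500GB disks in one pool
    zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0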

Re: [zfs-discuss] recommended disk configuration

2007-09-15 Thread Wee Yeh Tan
On 9/15/07, Mario Goebbels <[EMAIL PROTECTED]> wrote: > You can't create a RAID-Z out of two disks. You either have to go with > two mirrors (150GB and 500GB) in a pool, or the funkier variation of a > RAID-Z and mirror (4x150GB and a 350GB mirror). Actually, you can. It may not make sense but it
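
Mechanically it does work (hypothetical devices below); a two-disk raidz holds the same usable space as a mirror, which is why the mirror is usually the better choice:

    # zpool accepts a raidz vdev of only two disks
    zpool create tank raidz c1t0d0 c1t1d0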

Re: [zfs-discuss] import a group

2007-07-17 Thread Wee Yeh Tan
On 7/17/07, Darren J Moffat <[EMAIL PROTECTED]> wrote: > It is what is integrated into OpenSolaris and this is an OpenSolaris.org > list, not an @sun.com support list for Solaris 10. True enough. I stand corrected. -- Just me, Wire ... Blog: ___ zfs-d

Re: [zfs-discuss] import a group

2007-07-17 Thread Wee Yeh Tan
On 7/17/07, Darren J Moffat <[EMAIL PROTECTED]> wrote: > Wee Yeh Tan wrote: > > Firstly, zonepaths in ZFS are not yet supported. But this is the > > hacker's forum so... > > I don't think that is actually true, particularly given that you can use > zoneadm c

Re: [zfs-discuss] import a group

2007-07-17 Thread Wee Yeh Tan
On 7/17/07, Mike Salehi <[EMAIL PROTECTED]> wrote: > Sorry, my question is not clear enough. These pools contain a zone each. Firstly, zonepaths in ZFS are not yet supported. But this is the hacker's forum so... No change for importing the ZFS pool. Now you're gonna need to hack the zones in. Fo

Re: [zfs-discuss] how to remove sun volume mgr configuration?

2007-07-17 Thread Wee Yeh Tan
On 7/17/07, Richard Elling <[EMAIL PROTECTED]> wrote: > Performance-wise, these are pretty wimpy. You should be able to saturate > the array controller, even without enabling RAID-5 on it. Note that the > T3's implementation of RAID-0 isn't quite the same as other arrays, so it > may perform some

Re: [zfs-discuss] gzip compression throttles system?

2007-05-03 Thread Wee Yeh Tan
Ian, On 5/3/07, Ian Collins <[EMAIL PROTECTED]> wrote: I don't think it was a maxed CPU problem, only one core was loaded and the prstat numbers I could get (the reporting period was erratic) didn't show anything nasty. Do you have the output of 'mpstat 5'? -- Just me, Wire ... Blog: __
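
For anyone reproducing this, a sketch of the data being asked for (the xcal column counts cross-processor calls; sys time concentrated on a single core would also show up here):

    # one line per CPU every 5 seconds; watch the xcal, sys and idl columns
    mpstat 5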

Re: [zfs-discuss] learn to quote

2007-04-29 Thread Wee Yeh Tan
On 4/29/07, Christine Tran <[EMAIL PROTECTED]> wrote: Jens Elkner wrote: > So please: http://learn.to/quote We apparently need to learn German as well. -CT It's available in English as well... -- Just me, Wire ... Blog:

Re: [zfs-discuss] HowTo: UPS + ZFS & NFS + no fsync

2007-04-26 Thread Wee Yeh Tan
Cedric, On 4/26/07, cedric briner <[EMAIL PROTECTED]> wrote: >> okay let's say that it is not. :) >> Imagine that I set up a box: >> - with Solaris >> - with many HDs (directly attached). >> - use ZFS as the FS >> - export the Data with NFS >> - on an UPS. >> >> Then after reading the : >

Re: Re[2]: [zfs-discuss] HowTo: UPS + ZFS & NFS + no fsync

2007-04-26 Thread Wee Yeh Tan
Robert, On 4/27/07, Robert Milkowski <[EMAIL PROTECTED]> wrote: Hello Wee, Thursday, April 26, 2007, 4:21:00 PM, you wrote: WYT> On 4/26/07, cedric briner <[EMAIL PROTECTED]> wrote: >> okay let's say that it is not. :) >> Imagine that I set up a box: >> - with Solaris >> - with many HDs (dire

Re: [zfs-discuss] HowTo: UPS + ZFS & NFS + no fsync

2007-04-26 Thread Wee Yeh Tan
On 4/26/07, cedric briner <[EMAIL PROTECTED]> wrote: okay let's say that it is not. :) Imagine that I set up a box: - with Solaris - with many HDs (directly attached). - use ZFS as the FS - export the Data with NFS - on an UPS. Then after reading the : http://www.solarisinternals.com/wiki
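
The knob this thread circles around is presumably the old ZIL switch of that era -- an assumption on my part, and one the tuning guides themselves warned breaks NFS client consistency guarantees after a crash:

    * /etc/system: ASSUMPTION - the "no fsync" tunable being discussed
    set zfs:zil_disable = 1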

Re: [zfs-discuss] zfs submounts and permissions with autofs

2007-04-25 Thread Wee Yeh Tan
On 4/24/07, Mark Shellenbaum <[EMAIL PROTECTED]> wrote: > Is it expected that if I have filesystem tank/foo and tank/foo/bar > (mounted under /tank) then in order to be able to browse via > /net down into tank/foo/bar I need to have group/other permissions > on /tank/foo open? > You are running

Re: [zfs-discuss] Preferred backup mechanism for ZFS?

2007-04-23 Thread Wee Yeh Tan
On 4/24/07, Richard Elling <[EMAIL PROTECTED]> wrote: Wee Yeh Tan wrote: > I didn't spot anything that reads it from /etc/system. Appreciate any > pointers. The beauty, and curse, of /etc/system is that modules do not need to create an explicit reader. Grr I suspecte

Re: [zfs-discuss] Preferred backup mechanism for ZFS?

2007-04-23 Thread Wee Yeh Tan
On 4/23/07, Manoj Joseph <[EMAIL PROTECTED]> wrote: Wee Yeh Tan wrote: > On 4/23/07, Robert Milkowski <[EMAIL PROTECTED]> wrote: >> bash-3.00# mdb -k >> Loading modules: [ unix krtld genunix dtrace specfs ufs sd pcisch md >> ip sctp usba fcp fctl qlc ssd c

Re: Re[6]: [zfs-discuss] Preferred backup mechanism for ZFS?

2007-04-22 Thread Wee Yeh Tan
On 4/23/07, Robert Milkowski <[EMAIL PROTECTED]> wrote: bash-3.00# mdb -k Loading modules: [ unix krtld genunix dtrace specfs ufs sd pcisch md ip sctp usba fcp fctl qlc ssd crypto lofs zfs random ptm cpc nfs ] > segmap_percent/D segmap_percent: segmap_percent: 12 (it's static IIRC) segmap_per
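
The quoted mdb session, unflattened for readability (values as shown in the thread):

    bash-3.00# mdb -k
    > segmap_percent/D
    segmap_percent:
    segmap_percent: 12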

Re: Re[4]: [zfs-discuss] Preferred backup mechanism for ZFS?

2007-04-22 Thread Wee Yeh Tan
On 4/20/07, Robert Milkowski <[EMAIL PROTECTED]> wrote: Hello Wee, Friday, April 20, 2007, 5:20:00 AM, you wrote: WYT> On 4/20/07, Robert Milkowski <[EMAIL PROTECTED]> wrote: >> You can limit how much memory zfs can use for its caching. >> WYT> Indeed, but that memory will still be locked. Ho

Re: [zfs-discuss] Preferred backup mechanism for ZFS?

2007-04-21 Thread Wee Yeh Tan
On 4/20/07, Tim Thomas <[EMAIL PROTECTED]> wrote: My initial reaction is that the world has got by without file systems that can do this for a long time...so I don't see the absence of this as a big deal. On the other hand, it's hard to argue against a feature that I admit that this is "typically

Re: Re[2]: [zfs-discuss] Preferred backup mechanism for ZFS?

2007-04-19 Thread Wee Yeh Tan
On 4/20/07, Robert Milkowski <[EMAIL PROTECTED]> wrote: You can limit how much memory zfs can use for its caching. Indeed, but that memory will still be locked. How can you tell the system to be "flexible" with the caching? I deem that archiving will not present a cache challenge but we will

Re: [zfs-discuss] Preferred backup mechanism for ZFS?

2007-04-19 Thread Wee Yeh Tan
Hi Tim, I run a setup of SAM-FS for our main file server and we loved the backup/restore parts that you described. The main concern I have with SAM fronting the entire conversation is data integrity. Unlike ZFS, SAM-FS does not do end-to-end checksumming. We have considered the setup you propo

Re: [zfs-discuss] Re: ZFS for Linux (NO LISCENCE talk, please)

2007-04-17 Thread Wee Yeh Tan
On 4/17/07, David R. Litwin <[EMAIL PROTECTED]> wrote: On 17/04/07, Wee Yeh Tan <[EMAIL PROTECTED]> wrote: > On 4/17/07, David R. Litwin <[EMAIL PROTECTED]> wrote: > > So, it comes to this: Why, precisely, can ZFS not be > > released under a License which _is_

Re: [zfs-discuss] Re: ZFS for Linux (NO LISCENCE talk, please)

2007-04-17 Thread Wee Yeh Tan
On 4/17/07, David R. Litwin <[EMAIL PROTECTED]> wrote: So, it comes to this: Why, precisely, can ZFS not be released under a License which _is_ GPL compatible? So why do you think it should be released under a GPL-compatible license? -- Just me, Wire ... __

Re: [zfs-discuss] 6410 expansion shelf

2007-04-03 Thread Wee Yeh Tan
On 4/3/07, Frank Cusack <[EMAIL PROTECTED]> wrote: > As promised. I got my 6140 SATA delivered yesterday and I hooked it > up to a T2000 on S10u3. The T2000 saw the disks straight away and is > "working" for the last 1 hour. I'll be running some benchmarks on it. > I'll probably have a week w

Re: [zfs-discuss] File level snapshots in ZFS?

2007-03-29 Thread Wee Yeh Tan
On 3/30/07, Nicholas Lee <[EMAIL PROTECTED]> wrote: How do hard-links work across zfs mount/filesystems in the same pool? No. My guess is that it should be technically possible in the same pool though b

Re: [zfs-discuss] File level snapshots in ZFS?

2007-03-29 Thread Wee Yeh Tan
On 3/30/07, Shawn Walker <[EMAIL PROTECTED]> wrote: Actually, recent version control systems can be very efficient at storing binary files. Still nowhere near as efficient as a ZFS snapshot. Careful consideration of the layout of your file system applies regardless of which type of file system it

Re: [zfs-discuss] File level snapshots in ZFS?

2007-03-29 Thread Wee Yeh Tan
On 3/30/07, Shawn Walker <[EMAIL PROTECTED]> wrote: On 29/03/07, Atul Vidwansa <[EMAIL PROTECTED]> wrote: > Hi Richard, > I am not talking about source(ASCII) files. How about versioning > production data? I talked about file level snapshots because > snapshotting entire filesystem does not m

Re: [zfs-discuss] Atomic setting of properties?

2007-03-28 Thread Wee Yeh Tan
On 3/28/07, Fred Oliver <[EMAIL PROTECTED]> wrote: Has consideration been given to setting multiple properties at once in a single zfs set command? For example, consider attempting to maintain quota == reservation, while increasing both. It is impossible to maintain this equality without some ad
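
The ordering constraint, sketched against a hypothetical dataset: the quota cannot be set below the current reservation, so when raising both, the quota must move first -- and with two separate commands the moment where quota != reservation is unavoidable, which is what an atomic multi-property set would fix:

    # grow both from 400G to 500G; the intermediate state cannot be avoided
    zfs set quota=500g tank/home
    zfs set reservation=500g tank/home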

Re: [zfs-discuss] 6410 expansion shelf

2007-03-27 Thread Wee Yeh Tan
Cool blog! I'll try a run at this on the benchmark. On 3/27/07, Rayson Ho <[EMAIL PROTECTED]> wrote: BTW, did anyone try this?? http://blogs.sun.com/ValdisFilks/entry/improving_i_o_throughput_for Rayson On 3/27/07, Wee Yeh Tan <[EMAIL PROTECTED]> wrote: > As promised.

Re: [zfs-discuss] 6410 expansion shelf

2007-03-26 Thread Wee Yeh Tan
On 3/24/07, Frank Cusack <[EMAIL PROTECTED]> wrote: On March 23, 2007 5:38:20 PM +0800 Wee Yeh Tan <[EMAIL PROTECTED]> wrote: > I should be able to reply to you next Tuesday -- my 6140 SATA > expansion tray is due to arrive. Meanwhile, what kind of problem do > you have w

Re: [zfs-discuss] 6410 expansion shelf

2007-03-23 Thread Wee Yeh Tan
I should be able to reply to you next Tuesday -- my 6140 SATA expansion tray is due to arrive. Meanwhile, what kind of problem do you have with the 3511? -- Just me, Wire ... On 3/23/07, Frank Cusack <[EMAIL PROTECTED]> wrote: Does anyone have a 6140 expansion shelf that they can hook directly

Re: [zfs-discuss] ZFS performance with Oracle

2007-03-18 Thread Wee Yeh Tan
Jeff, This is great information. Thanks for sharing. Quickio is almost required if you want vxfs with Oracle. We ran a benchmark a few years back and found that vxfs is fairly cache hungry and ufs with directio beats vxfs without quickio hands down. Take a look at what mpstat says on xcalls.

Re: [zfs-discuss] zfs bogus (10 u3)?

2007-02-25 Thread Wee Yeh Tan
Jens, What's the output of 'zpool list' and 'zfs list'? On 2/26/07, Jens Elkner <[EMAIL PROTECTED]> wrote: Is somebody able to explain this? elkner.isis /zpool1 > df -h ... zpool1 21T 623G 20T 3% /zpool1 ... elkner.isis /zpool1 > ls -al total 1306050271 drwxr-xr-x
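
The two commands being asked for (a sketch: zpool list reports pool-level capacity, which on a raidz pool includes parity, while zfs list reports space as the datasets see it, so the two views can legitimately disagree):

    # raw pool view vs. dataset view
    zpool list zpool1
    zfs list -r zpool1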

Re: [zfs-discuss] Another paper

2007-02-21 Thread Wee Yeh Tan
Correct me if I'm wrong, but FMA seems like a more appropriate tool for tracking disk errors. -- Just me, Wire ... On 2/22/07, TJ Easter <[EMAIL PROTECTED]> wrote: All, I think dtrace could be a viable option here. crond to run a dtrace script on a regular basis that times a series of reads an
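
A sketch of the FMA view (both are stock Solaris 10 commands; the disk and ZFS diagnosis engines feed them):

    # faults FMA has diagnosed, with the affected resources
    fmadm faulty
    # the underlying error telemetry (ereports), verbose
    fmdump -eV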

Re: [zfs-discuss] Re: ZFS or UFS - what to do?

2007-01-31 Thread Wee Yeh Tan
On 2/1/07, Marion Hakanson <[EMAIL PROTECTED]> wrote: There's also the potential of too much seeking going on for the raidz pool, since there are 9 LUN's on top of 7 physical disk drives (though how Hitachi divides/stripes those LUN's is not clear to me). Marion, That is the part of your setup

Re: [zfs-discuss] hot spares - in standby?

2007-01-29 Thread Wee Yeh Tan
On 1/30/07, David Magda <[EMAIL PROTECTED]> wrote: What about a rotating spare? When setting up a pool a lot of people would (say) balance things around buses and controllers to minimize single points of failure, and a rotating spare could disrupt this organization, but would it be useful at al

Re: [zfs-discuss] Thumper Origins Q

2007-01-25 Thread Wee Yeh Tan
On 1/25/07, Jason J. W. Williams <[EMAIL PROTECTED]> wrote: Having snapshots in the filesystem that work so well is really nice. How are y'all quiescing the DB? So the DBA has a cronjob that puts the DB (Oracle) into hot backup mode, takes a snapshot of all affected filesystems (i.e. log + data
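
A minimal sketch of such a cronjob (hypothetical dataset names; assumes Oracle 10g's database-wide BEGIN BACKUP and the recursive, atomic zfs snapshot -r; error handling omitted):

    #!/bin/sh
    # put the database in hot backup mode, snapshot log + data together, release
    echo 'alter database begin backup;' | sqlplus -s '/ as sysdba'
    zfs snapshot -r tank/oracle@hot-`date +%Y%m%d%H%M`
    echo 'alter database end backup;' | sqlplus -s '/ as sysdba'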

Re: [zfs-discuss] Thumper Origins Q

2007-01-24 Thread Wee Yeh Tan
On 1/25/07, Bryan Cantrill <[EMAIL PROTECTED]> wrote: ... after all, what was ZFS going to do with that expensive but useless hardware RAID controller? ... I almost rolled over reading this. This is exactly what I went through when we moved our database server out from Vx** to ZFS. We had a

Re: [zfs-discuss] How much do we really want zpool remove?

2007-01-18 Thread Wee Yeh Tan
On 1/19/07, mike <[EMAIL PROTECTED]> wrote: Would this be the same as failing a drive on purpose to remove it? I was under the impression that was supported, but I wasn't sure if shrinking a ZFS pool would work though. Not quite. I suspect you are thinking about drive replacement rather than

Re: [zfs-discuss] zfs umount -a in a global zone

2007-01-14 Thread Wee Yeh Tan
Hi Robert, On 1/14/07, Robert Milkowski <[EMAIL PROTECTED]> wrote: I did 'zfs umount -a' in a global zone and all (non-busy) datasets in the local zone were also unmounted (one dataset was delegated to the local zone and other datasets were created inside). Well, I believe it shouldn't be th

Re: [zfs-discuss] Seven questions for a newbie

2007-01-14 Thread Wee Yeh Tan
On 1/15/07, mike <[EMAIL PROTECTED]> wrote: 1) Is a hardware-based RAID behind the scenes needed? Can ZFS safely be considered a replacement for that? I assume that anything below the filesystem level in regards to redundancy could be an added bonus, but is it necessary at all? ZFS is more reli

Re: [zfs-discuss] optimal zpool layout?

2007-01-14 Thread Wee Yeh Tan
On 1/13/07, Richard Elling <[EMAIL PROTECTED]> wrote: And a third choice is cutting your 40GByte drives in two such that you have a total of 6x 20 GByte partitions spread across your 80 and 40 GByte drives. Then install three 2-way mirrors across the disks. Some people like such things, and the

Re: [zfs-discuss] Multiple Read one Writer Filesystem

2007-01-14 Thread Wee Yeh Tan
On 1/15/07, Torrey McMahon <[EMAIL PROTECTED]> wrote: Mike Papper wrote: > > The alternative I am considering is to have a single filesystem > available to many clients using a SAN (iSCSI in this case). However > only one client would mount the ZFS filesystem as read/write while the > others woul

Re: [zfs-discuss] Re: ZFS Usage in Warehousing (lengthy intro)

2006-12-10 Thread Wee Yeh Tan
Luke, On 12/11/06, Luke Lonergan <[EMAIL PROTECTED]> wrote: The performance comes from the parallel version of pgsql, which uses all CPUs and I/O channels together (and not special settings of ZFS). What sets this apart from Oracle is that it's an automatic parallelism that leverages the intern

Re: [zfs-discuss] ZFS compression / ARC interaction

2006-12-07 Thread Wee Yeh Tan
On 12/8/06, Mark Maybee <[EMAIL PROTECTED]> wrote: Yup, your assumption is correct. We currently do compression below the ARC. We have contemplated caching data in compressed form, but have not really explored the idea fully yet. Hmm... interesting idea. That will incur CPU to do a decompres

Re: [zfs-discuss] Creating zfs filesystem on a partition with ufs - Newbie

2006-12-06 Thread Wee Yeh Tan
Ian, The first error is correct in that zpool create will not, unless forced, create a file system if it knows that another filesystem resides in the target vdev. The second error was caused by your removal of the slice. What I find disconcerting is that the zpool was created. Can you provide the res

Re: [zfs-discuss] Dead drives and ZFS

2006-11-14 Thread Wee Yeh Tan
On 11/14/06, Jeremy Teo <[EMAIL PROTECTED]> wrote: I'm more inclined to "split" instead of "fork". ;) I prefer "split" too since that's what most of the storage guys are using for mirrors. Still, we are not making any progress on helping Rainer out of his predicaments. -- Just me, Wire ... _

Re: [zfs-discuss] Thoughts on patching + zfs root

2006-11-14 Thread Wee Yeh Tan
On 11/11/06, Bart Smaalders <[EMAIL PROTECTED]> wrote: It would seem useful to separate the user's data from the system's data to prevent problems with losing mail, log file data, etc, when either changing boot environments or pivoting root boot environments. I'll be more concerned about the co

Re: [zfs-discuss] A versioning FS

2006-10-08 Thread Wee Yeh Tan
On 10/9/06, Jonathan Edwards <[EMAIL PROTECTED]> wrote: > We want to differentiate files that are created intentionally from > those that are just versions. If files starts showing up on their > own, a lot of my scripts will break. Still, an FV-aware > shell/program/API can accept an environmen

Re: [zfs-discuss] A versioning FS

2006-10-08 Thread Wee Yeh Tan
On 10/7/06, Ben Gollmer <[EMAIL PROTECTED]> wrote: On Oct 6, 2006, at 6:15 PM, Nicolas Williams wrote: > What I'm saying is that I'd like to be able to keep multiple > versions of > my files without "echo *" or "ls" showing them to me by default. Hmm, what about file.txt -> ._file.txt.1, ._file.

Re: [zfs-discuss] A versioning FS

2006-10-08 Thread Wee Yeh Tan
On 10/7/06, David Dyer-Bennet <[EMAIL PROTECTED]> wrote: I've never encountered branch being used that way, anywhere. It's used for things like developing release 2.0 while still supporting 1.5 and 1.6. However, especially with merge in svn it might be feasible to use a branch that way. What's

Re: [zfs-discuss] A versioning FS

2006-10-05 Thread Wee Yeh Tan
On 10/6/06, David Dyer-Bennet <[EMAIL PROTECTED]> wrote: One of the big problems with CVS and SVN and Microsoft SourceSafe is that you don't have the benefits of version control most of the time, because all commits are *public*. David, That is exactly what "branch" is for in CVS and SVN. Dun

Re: [zfs-discuss] Versioning in ZFS: Do we need it?

2006-10-05 Thread Wee Yeh Tan
Jeremy, The intended uses of the two are vastly different. A snapshot is a point-in-time image of a file system that, as you have pointed out, may have missed several versions of changes regardless of frequency. Versioning (ala VAX -- ok, I feel old now) keeps versions of every change up to a speci

Re: [zfs-discuss] directory tree removal issue with zfs on Blade 1500/PC rack server IDE disk

2006-10-05 Thread Wee Yeh Tan
On 10/5/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: Unmount all the ZFS filesystems and check the permissions on the mount points and the paths leading up to them. I experienced the same problem and narrowed it down to essentially this: chdir("..") in "rm -rf" failed to ascend the direc

Re: [zfs-discuss] directory tree removal issue with zfs on Blade 1500/PC rack server IDE disk

2006-10-05 Thread Wee Yeh Tan
Check the permissions of your mountpoint after you unmount the dataset. Most likely, you have something like rwx------. On 10/5/06, Stefan Urbat <[EMAIL PROTECTED]> wrote: I want to know, if anybody can check/confirm the following issue I observed with a fully patched Solaris 10 u2 with ZFS runn
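
A sketch of the check, with a hypothetical dataset:

    # unmount, inspect the directory hiding under the mountpoint, fix, remount
    zfs umount tank/export
    ls -ld /tank/export      # drwx------ here would explain the symptom
    chmod 755 /tank/export
    zfs mount tank/export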

Re: [zfs-discuss] Questions of ZFS mount point and import Messages

2006-10-04 Thread Wee Yeh Tan
On 10/5/06, Eric Schrock <[EMAIL PROTECTED]> wrote: On Tue, Oct 03, 2006 at 11:22:24PM -0700, Tejung Ted Chiang wrote: > 2. [b]How do we know the zpool by which systems is currently used?[/b] > "zpool import" command does not tell us the system id information. It > is hard to tell the dependencie

Re: [zfs-discuss] Customer problem with zfs

2006-09-25 Thread Wee Yeh Tan
Edward, /etc/zfs/zpool.cache contains data pointing to devices involved in a zpool. Changes to ZFS datasets are reflected in the actual zpool so destroying a zfs dataset should not change zpool.cache. zfs destroy is the correct command to destroy a file system. It will be easier if we can know - t

Re: [zfs-discuss] zfs scrub question

2006-09-20 Thread Wee Yeh Tan
Peter, I'll first check /var/adm/messages to see if there are any problems with the following disks: c10t600A0B800011730E66F444C5EE7Ed0 c10t600A0B800011730E66F644C5EE96d0 c10t600A0B800011652EE5CF44C5EEA7d0 c10t600A0B800011730E66F844C5EEBAd0 The checksum errors seem to concentrat
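
A quick way to run that check (first device from the list above; repeat for the others):

    # look for retries, timeouts or transport errors against the suspect LUN
    grep c10t600A0B800011730E66F444C5EE7E /var/adm/messages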

Re: [zfs-discuss] Re: Re: Bizzare problem with ZFS filesystem

2006-09-16 Thread Wee Yeh Tan
Anantha, I was hoping to see far fewer trace records than that. Was DTrace running the whole time or did you start it just before you saw the problem? Can you sift through the trace to see if there are any subsequent firings whose timestamp differences are big (e.g. > 1s)? You can try this a

Re: [zfs-discuss] Re: Re: Re: Re: Proposal: multiple copies of user data

2006-09-16 Thread Wee Yeh Tan
On 9/15/06, can you guess? <[EMAIL PROTECTED]> wrote: Implementing it at the directory and file levels would be even more flexible: redundancy strategy would no longer be tightly tied to path location, but directories and files could themselves still inherit defaults from the filesystem and p

Re: [zfs-discuss] Re: Re: Re: Proposal: multiple copies of user data

2006-09-13 Thread Wee Yeh Tan
On 9/13/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote: Sure, if you want *everything* in your pool to be mirrored, there is no real need for this feature (you could argue that setting up the pool would be easier if you didn't have to slice up the disk though). Not necessarily. Implementing this

Re: [zfs-discuss] Memory Usage

2006-09-13 Thread Wee Yeh Tan
On 9/13/06, Thomas Burns <[EMAIL PROTECTED]> wrote: BTW -- did I guess right wrt where I need to set arc.c_max (/etc/system)? I think you need to use mdb. As Mark and Johansen mentioned, only do this as your last resort. # mdb -kw arc::print -a c_max d3b0f874 c_max = 0x1d0fe800 d3b0f874 /W
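
The flattened session above, reconstructed (the address is illustrative of a 32-bit kernel; live-patching c_max with mdb -kw is unsupported, which is why it is a last resort):

    # mdb -kw
    > arc::print -a c_max
    d3b0f874 c_max = 0x1d0fe800
    > d3b0f874/W 0x10000000    * ASSUMPTION: cap the ARC at 256MB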

Re: [zfs-discuss] Re: Bizzare problem with ZFS filesystem

2006-09-12 Thread Wee Yeh Tan
Anantha, How's the output of: dtrace -F -n 'fbt:zfs::/pid==/{trace(timestamp)}' -- Just me, Wire ... On 9/13/06, Anantha N. Srirama <[EMAIL PROTECTED]> wrote: Here's the information you requested. ___ zfs-discuss mailing list zfs-discuss@opensolar
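
The one-liner, filled out (the pid in the predicate was lost to truncation in the archive; 12345 is a placeholder):

    # -F indents output by function entry/return depth within the zfs module
    dtrace -F -n 'fbt:zfs:: /pid == 12345/ { trace(timestamp); }'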

Re: [zfs-discuss] Used space accounting - problem with snapshots

2006-09-09 Thread Wee Yeh Tan
On 9/10/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote: Robert Milkowski wrote: > Hi. > > bash-3.00# zfs get quota f3-1/d611 > NAME PROPERTY VALUE SOURCE > f3-1/d611 quota 400G local > bash-3.00# > > bash-3.00# zfs list |

Re: [zfs-discuss] Re: ZFS uses 1.1GB more space, reports conflicting information...

2006-09-06 Thread Wee Yeh Tan
On 9/6/06, UNIX admin <[EMAIL PROTECTED]> wrote: Yes, the man page says that. However, it is possible to mix disks of different sizes in a RAIDZ, and this works. Why does it work? Because RAIDZ stripes are dynamic in size. From that I infer that disks can be any size because the stripes can be

Re: [zfs-discuss] Re: Recommendation ZFS on StorEdge 3320

2006-09-05 Thread Wee Yeh Tan
On 9/5/06, Torrey McMahon <[EMAIL PROTECTED]> wrote: This is simply not true. ZFS would protect against the same type of errors seen on an individual drive as it would on a pool made of HW raid LUN(s). It might be overkill to layer ZFS on top of a LUN that is already protected in some way by the

Re: [zfs-discuss] ZFS uses 1.1GB more space, reports conflicting information...

2006-09-05 Thread Wee Yeh Tan
Hi, On 9/4/06, UNIX admin <[EMAIL PROTECTED]> wrote: [Solaris 10 6/06 i86pc] ... Then I added two more disks to the pool with the `zpool add -fn space c2t10d0 c2t11d0`, whereby I determined that those would be added as a RAID0, which is not what I wanted. `zpool add -f raidz c2t10d0 c2t11d0`
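
The two forms side by side (a sketch; -n makes it a dry run that prints the resulting layout without touching the pool, and the pool name is included for completeness):

    # without a vdev keyword the new disks become plain top-level stripes
    zpool add -n space c2t10d0 c2t11d0
    # with the raidz keyword they form a new (two-disk) raidz vdev
    zpool add -n space raidz c2t10d0 c2t11d0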

Re: [zfs-discuss] Need some input on a theoretical situation

2006-09-02 Thread Wee Yeh Tan
I imagine that extending "zpool attach" to attach new devices to RAID-Z sets would be exactly what you want. Of course, I am wholly unaware of the implementation details, so the technical difficulties might be tremendous. In any case, there is no device removal yet, so either of the scenarios listed by T

Re: Re[2]: [zfs-discuss] ZFS & se6920

2006-08-28 Thread Wee Yeh Tan
On 8/28/06, Robert Milkowski <[EMAIL PROTECTED]> wrote: Saturday, August 26, 2006, 6:43:05 PM, you wrote: WYT> Thanks to all who have responded. I spent 2 weekends working through WYT> the best practices tthat Jerome recommended -- it's quite a mouthful. WYT> On 8/17/06, Roch <[EMAIL PROTECTED]

Re: [zfs-discuss] Oracle on ZFS

2006-08-26 Thread Wee Yeh Tan
Daniel, This is cool. I've convinced my DBA to attempt the same stunt. We are just starting the testing, so I'll post results as I get them. I'd appreciate it if you could share your zpool layout. -- Just me, Wire ... On 8/26/06, Daniel Rock <[EMAIL PROTECTED]> wrote: [EMAIL PROTECTED] sc

Re: [zfs-discuss] ZFS & se6920

2006-08-26 Thread Wee Yeh Tan
Thanks to all who have responded. I spent 2 weekends working through the best practices that Jerome recommended -- it's quite a mouthful. On 8/17/06, Roch <[EMAIL PROTECTED]> wrote: My general principles are: If you can, to improve your 'Availability' metrics, let ZFS handle on

[zfs-discuss] ZFS & se6920

2006-08-16 Thread Wee Yeh Tan
Hi all, My company will be acquiring the Sun SE6920 for our storage virtualization project and we intend to use quite a bit of ZFS as well. The two technologies seem somewhat at odds, since the 6920 means layers of hardware abstraction but ZFS seems to prefer more direct access to disk. I tried t

Re: [zfs-discuss] add dataset

2006-07-19 Thread Wee Yeh Tan
On 7/18/06, Zoram Thanga <[EMAIL PROTECTED]> wrote: Which version of Solaris are you using? You should be able to add a dataset if you're running Solaris express. Not sure if this feature was backported to S10u2. It's available in the S10u2 we get from sun.com. -- Just me, Wire ... __