Re: [zfs-discuss] How to best layout our filesystems

2006-07-26 Thread eric kustarz
Robert Milkowski wrote: Hello George, Wednesday, July 26, 2006, 7:27:04 AM, you wrote: GW> Additionally, I've just putback the latest feature set and bugfixes GW> which will be part of s10u3_03. There were some additional performance GW> fixes which may really benefit plus it will provide hot spares support.

[zfs-discuss] 6424554

2006-07-26 Thread Robert Milkowski
Hello zfs-discuss, Is someone working on a backport (patch) to S10? Any timeframe? -- Best regards, Robert mailto:[EMAIL PROTECTED] http://milek.blogspot.com

Re: [zfs-discuss] zfs questions from Sun customer

2006-07-26 Thread Edward Pilatowicz
zfs depends on ldi_get_size(), which depends on the device being accessed exporting one of the properties below. i guess the devices generated by IBMsdd and/or EMCpower don't generate these properties. ed On Wed, Jul 26, 2006 at 01:53:31PM -0700, Eric Schrock wrote: > On Wed, Jul 26, 200
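The property list itself is cut off in this digest; ldi_get_size() looks for a size/Nblocks-style property on the device node. A rough way to check whether a device exports one (a sketch -- the device path is an example, not from this thread, and some drivers expose these only as dynamic properties):

   # prtconf -v /dev/dsk/c0t0d0s2 | grep -i -e size -e nblocks

A multipath pseudo-device that prints nothing here would explain the ldi_get_size() failures discussed elsewhere in this thread.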

Re[2]: [zfs-discuss] How to best layout our filesystems

2006-07-26 Thread Robert Milkowski
Hello George, Wednesday, July 26, 2006, 7:27:04 AM, you wrote: GW> Additionally, I've just putback the latest feature set and bugfixes GW> which will be part of s10u3_03. There were some additional performance GW> fixes which may really benefit plus it will provide hot spares support. GW> Once

Re[2]: [zfs-discuss] ZFS mirror question

2006-07-26 Thread Robert Milkowski
Hello Eric, Wednesday, July 26, 2006, 8:44:55 PM, you wrote: ES> And no, there is currently no way to remove a dynamically striped disk ES> from a pool. We're working on it. That's interesting (I mean that something is actually being done about it). Can you give us some specifics (features,

Re: [zfs-discuss] zfs questions from Sun customer

2006-07-26 Thread Torrey McMahon
Does format show these drives to be available and containing a non-zero size? Eric Schrock wrote: On Wed, Jul 26, 2006 at 02:11:44PM -0600, David Curtis wrote: Eric, Here is the output: # ./dtrace2.dtr dtrace: script './dtrace2.dtr' matched 4 probes CPU ID FUNCTION:

Re: [zfs-discuss] zfs questions from Sun customer

2006-07-26 Thread Eric Schrock
On Wed, Jul 26, 2006 at 02:11:44PM -0600, David Curtis wrote: > Eric, > > Here is the output: > > # ./dtrace2.dtr > dtrace: script './dtrace2.dtr' matched 4 probes > CPU ID FUNCTION:NAME > 0 17816 ldi_open_by_name:entry /dev/dsk/vpath1c > 0 16197

[zfs-discuss] Re: Write cache

2006-07-26 Thread Pawel Wojcik
There is manual, programmatic and start-up control of write cache on SATA drives already available. There is no drive-agnostic (i.e. for all types of drives) control that covers all three ways of cache control - that was shifted to a lower priority than other SATA development work. It

Re: [zfs-discuss] zfs questions from Sun customer

2006-07-26 Thread David Curtis
Eric, Here is the output: # ./dtrace2.dtr dtrace: script './dtrace2.dtr' matched 4 probes CPU ID FUNCTION:NAME 0 17816 ldi_open_by_name:entry /dev/dsk/vpath1c 0 16197 ldi_get_otyp:return 0 0 15546 ldi_prop_exists:

Re: [zfs-discuss] zfs questions from Sun customer

2006-07-26 Thread Eric Schrock
So it does look like something's messed up here. Before we pin this down as a driver bug, we should double check that we are indeed opening what we think we're opening, and try to track down why ldi_get_size is failing. Try this: #!/usr/sbin/dtrace -s ldi_open_by_name:entry { trace(stri
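The script is truncated above; a plausible reconstruction from the visible fragment and the four probes it matches in the output elsewhere in this thread (everything past the first clause is an assumption, not Eric's original text):

   #!/usr/sbin/dtrace -s

   /* print the path each ldi_open_by_name() is asked to open */
   ldi_open_by_name:entry { trace(stringof(arg0)); }

   /* watch the return values of the routines ldi_get_size() depends on */
   ldi_get_otyp:return, ldi_prop_exists:return, ldi_get_size:return { trace(arg1); }

Running this alongside the zpool create shows whether the expected device node is opened and which step of the size lookup fails.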

Re: [zfs-discuss] zfs questions from Sun customer

2006-07-26 Thread David Curtis
Eric, Here is what the customer gets trying to create the pool using the software alias: (I added all the ldi_open's to the script) # zpool create -f extdisk vpath1c # ./dtrace.script dtrace: script './dtrace.script' matched 6 probes CPU ID FUNCTION:NAME 0 7233

Re: [zfs-discuss] ZFS mirror question

2006-07-26 Thread Eric Schrock
You want 'zpool attach' instead of 'zpool add'. What the customer did was add it back, but as a dynamic stripe instead of a second half of a mirror. And no, there is currently no way to remove a dynamically striped disk from a pool. We're working on it. - Eric On Wed, Jul 26, 2006 at 02:12:11P
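For reference, the two commands side by side (pool and device names are illustrative, based on the scenario described below):

   # wrong for this goal: adds c1t5d0 as a new top-level stripe
   zpool add mypool c1t5d0

   # right: attaches c1t5d0 as a mirror of the surviving disk c1t4d0
   zpool attach mypool c1t4d0 c1t5d0

After the attach, ZFS resilvers the new side of the mirror automatically.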

[zfs-discuss] ZFS mirror question

2006-07-26 Thread Jeffrey Evans
Customer created a pool with 2 disks that were mirrored. He then wanted to see what would happen if he destroyed the mirror. He offlined the disk c1t5d0, then "destroyed" it. He then added it back to the pool with the intention of it replacing itself as the mirror. However the pool d

[zfs-discuss] Re: Write cache

2006-07-26 Thread Bart Smaalders
Jesus Cea wrote: Neil Perrin wrote: I suppose if you know the disk only contains zfs slices then write caching could be manually enabled using "format -e" -> cache -> write_cache -> enable. When will we have write cache control over ATA/SATA drives?

Re: [zfs-discuss] zfs questions from Sun customer

2006-07-26 Thread Edward Pilatowicz
zfs should work fine with disks under the control of solaris mpxio. i don't know about any of the other multipathing solutions. if you're trying to use a device that's controlled by another multipathing solution, you might want to try specifying the full path to the device, ex: zpool creat
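The example command is cut off above; given the device names elsewhere in this thread, it was presumably along these lines (a reconstruction, not a quote):

   # give zpool the full /dev/dsk path rather than the bare alias
   zpool create extdisk /dev/dsk/vpath1c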

Re: [zfs-discuss] How to best layout our filesystems

2006-07-26 Thread Brian Hechinger
On Wed, Jul 26, 2006 at 08:38:16AM -0600, Neil Perrin wrote: > > GX620 on my desk at work and I run snv_40 on the Latitude D610 that I carry with me. In both cases the machines only have one disk, so I need to split it up for UFS for the OS and ZFS for my data. How do I turn on writ

Write cache (was: Re: [zfs-discuss] How to best layout our filesystems)

2006-07-26 Thread Jesus Cea
Neil Perrin wrote: > I suppose if you know the disk only contains zfs slices then write caching could be manually enabled using "format -e" -> cache -> write_cache -> enable. When will we have write cache control over ATA/SATA drives? :-)
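For anyone following along, the quoted format sequence looks roughly like this as an interactive session (a sketch; exact menu wording varies by release, and -e is what exposes the cache menu):

   # format -e
   format> cache
   cache> write_cache
   write_cache> enable
   write_cache> display

As Neil notes, this is only safe when every slice on the disk belongs to ZFS, since UFS on the same spindle does not expect an enabled write cache.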

Re: [zfs-discuss] zfs questions from Sun customer

2006-07-26 Thread Eric Schrock
This suggests that there is some kind of bug in the layered storage software. ZFS doesn't do anything special to the underlying storage device; it merely relies on a few ldi_*() routines. I would try running the following dtrace script: #!/usr/sbin/dtrace -s vdev_disk_open:return, ldi_open_by_n
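The script is truncated in this digest. From the visible fragment it traced the return path of ZFS's device open; a guess at its shape (the full probe list and body are assumptions -- David Curtis later mentions adding more ldi_open probes himself):

   #!/usr/sbin/dtrace -s

   /* print the return value of each routine in ZFS's device-open path */
   vdev_disk_open:return,
   ldi_open_by_name:return,
   ldi_get_size:return
   { trace(arg1); }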

[zfs-discuss] zfs questions from Sun customer

2006-07-26 Thread David Curtis
Please reply to [EMAIL PROTECTED] Background / configuration ** zpool will not create a storage pool on fibre channel storage. I'm attached to an IBM SVC using the IBMsdd driver. I have no problem using SVM metadevices and UFS on these devices. List steps to reproduce the problem
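The steps themselves are cut off here; from the follow-ups in this thread, the command that fails is:

   # zpool create -f extdisk vpath1c

even though the same vpath device works under SVM and UFS.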

Re: [zfs-discuss] How to best layout our filesystems

2006-07-26 Thread Neil Perrin
Brian Hechinger wrote on 07/26/06 06:49: On Tue, Jul 25, 2006 at 03:54:22PM -0700, Eric Schrock wrote: If you give zpool(1M) 'whole disks' (i.e. no 's0' slice number) and let it label and use the disks, it will automatically turn on the write cache for you. What if you can't give ZFS whole disks?

Re: [zfs-discuss] How to best layout our filesystems

2006-07-26 Thread Brian Hechinger
On Tue, Jul 25, 2006 at 03:54:22PM -0700, Eric Schrock wrote: > > If you give zpool(1M) 'whole disks' (i.e. no 's0' slice number) and let > it label and use the disks, it will automatically turn on the write > cache for you. What if you can't give ZFS whole disks? I run snv_38 on the Optiplex GX620
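A sketch of the contrast Eric describes (device names illustrative):

   # whole disk: ZFS writes an EFI label and enables the write cache itself
   zpool create tank c2t0d0

   # slice: ZFS cannot assume it owns the whole spindle, so the cache stays off
   zpool create tank c2t0d0s4

Neil's reply above covers the slice case: the cache can be enabled by hand with format -e, but only if everything on the disk is ZFS.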

Re: [zfs-discuss] ZFS needs a viable backup mechanism

2006-07-26 Thread Darren J Moffat
Richard Elling wrote: Craig Morgan wrote: Spare a thought also for the remote serviceability aspects of these systems; if customers raise calls/escalations against such systems, then our remote support/solution centre staff would find such an output useful in identifying and verifying the confi

Re: [zfs-discuss] Flushing synchronous writes to mirrors

2006-07-26 Thread Jeff Bonwick
> For a synchronous write to a pool with mirrored disks, does the write > unblock after just one of the disks' write caches is flushed, > or only after all of the disks' caches are flushed? The latter. We don't consider a write to be committed until the data is on stable storage at full replication.