Re: [zfs-discuss] zfs rpool mirror on non-equal drives

2010-01-28 Thread Cindy Swearingen
Hi Michelle, Your previous mail about the disk label reverting to EFI makes me wonder whether you used the format -e option to relabel the disk, but your disk label below looks fine. This also might be a known bug (6419310), whose workaround is to use the -f option to zpool attach. An interim
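A minimal sketch of that workaround, assuming the existing root-pool disk is c4d0s0 and the new, non-equal disk is c5d0s0 (substitute your own device names):
# zpool attach -f rpool c4d0s0 c5d0s0
# zpool status rpool
The -f option forces the attach as described above; wait for the resilver reported by zpool status to finish, and remember that the boot blocks still need to be installed on the new disk afterward (see the installgrub reply later in this thread).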

Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small connect another disk

2010-01-28 Thread Cindy Swearingen
I think the SATA(2)-->SATA(1) connection will negotiate correctly, but maybe some hardware expert will confirm. cs On 01/28/10 15:27, dick hoogendijk wrote: On Thu, 2010-01-28 at 08:44 -0700, Cindy Swearingen wrote: Or, if possible, connect another larger disk and attach it to the origi

Re: [zfs-discuss] ZFS Flash Jumpstart and mini-root version

2010-01-29 Thread Cindy Swearingen
Hi Tony, I'm no JumpStart expert but it looks to me like the error is on the pool entry in the profile. I would retest this install by changing the pool entry in the profile like this: install_type flash_install archive_location nfs://192.168.1.230/export/install/media/sol10u8.flar
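For reference, a minimal sketch of what such a profile might look like with a ZFS root pool entry (the archive location is taken from the thread; the disk names and the auto sizing values are placeholders):
install_type flash_install
archive_location nfs://192.168.1.230/export/install/media/sol10u8.flar
partitioning explicit
pool rpool auto auto auto mirror c0t0d0s0 c0t1d0s0
The pool keyword takes the pool name, pool size, swap size, dump size, and the vdev list, in that order.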

Re: [zfs-discuss] zfs rpool mirror on non-equal drives

2010-01-29 Thread Cindy Swearingen
Hi Michelle, You're almost there, but install the bootblocks in s0: # installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c19d0s0 Thanks, Cindy On 01/29/10 11:10, Michelle Knight wrote: Well, I nearly got there. I used -f to force the overwrite and then installed grub to slice 8 (w

Re: [zfs-discuss] cannot attach c5d0s0 to c4d0s0: device is too small

2010-01-29 Thread Cindy Swearingen
from either BE in either pool. I thought beadm would be similar, but let me find out. Thanks, Cindy On 01/29/10 11:07, Dick Hoogendijk wrote: Op 28-1-2010 17:35, Cindy Swearingen schreef: Thomas, Excellent and much better suggestion... :-) You can use beadm to specify another root pool by usin

Re: [zfs-discuss] zfs rpool mirror on non-equal drives

2010-01-30 Thread Cindy Swearingen
Michelle, Yes, the bootblocks and the pool coexist, even happily sometimes. In general, you shouldn't have to deal with the boot partition stuff that you see in the disk format output. If I could hide all this low-level stuff from you, I would, because it's so dang confusing. Looks like you got

Re: [zfs-discuss] zpool status output confusing

2010-02-01 Thread Cindy Swearingen
Hi-- Were you trying to swap out a drive in your pool's raidz1 VDEV with a spare device? Was that your original intention? If so, then you need to use the zpool replace command to replace one disk with another disk, including a spare. I would put the disks back to where they were and retry with
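A sketch of the replace step being suggested, assuming a pool named tank, a failing disk c7t9d0, and a replacement disk c7t11d0 (adjust the names to match your configuration):
# zpool replace tank c7t9d0 c7t11d0
# zpool status tank
If the new disk sits in the same physical slot as the old one, a single device argument also works: # zpool replace tank c7t9d0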

Re: [zfs-discuss] zpool status output confusing

2010-02-01 Thread Cindy Swearingen
It's Monday morning so it still doesn't make sense. :-) I suggested putting the disks back because I'm still not sure if you physically swapped c7t11d0 for c7t9d0 or if c7t9d0 is still connected and part of your pool. You might try detaching the spare as described in the docs. If you put the d
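If the spare did activate and you want to return it to the spare list once the original disk is healthy again, the usual sequence is a sketch like this, assuming pool tank and spare c7t11d0:
# zpool detach tank c7t11d0
# zpool status tank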

Re: [zfs-discuss] zfs rpool mirror on non-equal drives

2010-02-01 Thread Cindy Swearingen
You are correct. Should be fine without -m. Thanks, Cindy On 01/30/10 09:15, Fajar A. Nugraha wrote: On Sat, Jan 30, 2010 at 2:02 AM, Cindy Swearingen wrote: Hi Michelle, You're almost there, but install the bootblocks in s0: # installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev

Re: [zfs-discuss] zpool status output confusing

2010-02-01 Thread Cindy Swearingen
ZFS can generally detect device changes on Sun hardware, but for other hardware, the behavior is unknown. The most harmful pool problem I see, besides inadequate redundancy levels or no backups, is device changes. Recovery can be difficult. Follow recommended practices for replacing devices in a

Re: [zfs-discuss] zpool status output confusing

2010-02-01 Thread Cindy Swearingen
depends on the driver-->ZFS interaction and we can't speak for all hardware. Thanks, Cindy On 02/01/10 12:52, Frank Cusack wrote: On February 1, 2010 10:19:24 AM -0700 Cindy Swearingen wrote: ZFS has recommended ways for swapping disks so if the pool is exported, the system shutdown

Re: [zfs-discuss] zpool status output confusing

2010-02-01 Thread Cindy Swearingen
Hi, How ZFS reacts to a failed disk can be difficult to anticipate because some systems don't react well when you remove a disk. On an x4500, for example, you have to unconfigure a disk before you can remove it. Before removing a disk, I would consult your h/w docs to see what the recomm

Re: [zfs-discuss] zpool status output confusing

2010-02-01 Thread Cindy Swearingen
Frank, ZFS, Sun device drivers, and the MPxIO stack all work as expected. Cindy On 02/01/10 14:55, Frank Cusack wrote: On February 1, 2010 4:15:10 PM -0500 Frank Cusack wrote: On February 1, 2010 1:09:21 PM -0700 Cindy Swearingen wrote: Whether disk swapping on the fly or a controller

Re: [zfs-discuss] zpool status output confusing

2010-02-02 Thread Cindy Swearingen
Even if the pool is created with whole disks, you'll need to use the s* identifier as I provided in the earlier reply: # zdb -l /dev/dsk/cvtxdysz Cindy On 02/02/10 01:07, Tonmaus wrote: If I run # zdb -l /dev/dsk/c#t#d# the result is "failed to unpack label" for any disk attached to contro

Re: [zfs-discuss] How to grow ZFS on growing pool?

2010-02-02 Thread Cindy Swearingen
Hi Joerg, Enabling the autoexpand property after the disk replacement is complete should expand the pool. This looks like a bug. I can reproduce this issue with files. It seems to be working as expected for disks. See the output below. Thanks, Cindy Create pool test with two 68 GB drives: # zpool
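A sketch of the property-based expansion being described, assuming a pool named tank and a replaced disk c1t1d0; autoexpand grows the pool automatically, while zpool online -e expands a single device after the fact:
# zpool set autoexpand=on tank
# zpool online -e tank c1t1d0
# zpool list tank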

Re: [zfs-discuss] How to grow ZFS on growing pool?

2010-02-02 Thread Cindy Swearingen
Hi David, This feature integrated into build 117, which would be beyond your OpenSolaris 2009.06. We anticipate this feature will be available in an upcoming Solaris 10 release. You can read about it here: http://docs.sun.com/app/docs/doc/817-2271/githb?a=view ZFS Device Replacement Enhancemen

Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Cindy Swearingen
Hi Brian, If you are considering testing dedup, particularly on large datasets, see the list of known issues, here: http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup Start with build 132. Thanks, Cindy On 02/04/10 16:19, Brian wrote: I am Starting to put together a home NAS s

Re: [zfs-discuss] Autoreplace property not accounted ?

2010-02-05 Thread Cindy Swearingen
Hi Francois, The autoreplace property works independently of the spare feature. Spares are activated automatically when a device in the main pool fails. Thanks, Cindy On 02/05/10 09:43, Francois wrote: Hi list, I've a strange behaviour with autoreplace property. It is set to off by default

Re: [zfs-discuss] Recover ZFS Array after OS Crash?

2010-02-06 Thread Cindy Swearingen
Hi Cesare, If you want another way to replicate pools, you might be interested in the zpool split feature that Mark Musante integrated recently. You can read about it here: http://blogs.sun.com/mmusante/entry/seven_years_of_good_luck Cindy - Original Message - From: Cesare Date: Sa
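A minimal sketch of the split workflow, assuming a two-way mirrored pool named tank; the detached half becomes a new, importable pool:
# zpool split tank tank2
# zpool import tank2
The new pool can also be imported on another system after moving the disk.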

Re: [zfs-discuss] Recover ZFS Array after OS Crash?

2010-02-08 Thread Cindy Swearingen
broke the mirror and attach to different server where there is a backup environment and then rebuild the mirror). Thanks. Cesare On Sat, Feb 6, 2010 at 6:04 PM, Cindy Swearingen wrote: Hi Cesare, If you want another way to replicate pools, you might be interested in the zpool split feature that

Re: [zfs-discuss] zpool list size

2010-02-08 Thread Cindy Swearingen
Hi Richard, I last updated this FAQ on 1/19. Which part is not well-maintained? :-) Cindy On 02/08/10 14:50, Richard Elling wrote: This is a FAQ, but the FAQ is not well maintained :-( http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq On Feb 8, 2010, at 1:35 PM, Lasse Osterild wro

Re: [zfs-discuss] zpool list size

2010-02-08 Thread Cindy Swearingen
Hi Lasse, I expanded this entry to include more details of the zpool list and zfs list reporting. See if the new explanation provides enough details. Thanks, Cindy On 02/08/10 16:51, Lasse Osterild wrote: On 09/02/2010, at 00.23, Daniel Carosone wrote: On Mon, Feb 08, 2010 at 11:28:11PM +0

Re: [zfs-discuss] zfs promote

2010-02-11 Thread Cindy Swearingen
Hi Tester, It is difficult for me to see all that is going on here. Can you provide the steps and the complete output? I tried to reproduce this on latest Nevada bits and I can't. The snapshot sizing looks correct to me after a snapshot/clone promotion. Thanks, Cindy # zfs create tank/fs1
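For anyone wanting to retry this locally, a sketch of the snapshot/clone/promote sequence in question, using placeholder dataset names:
# zfs create tank/fs1
# zfs snapshot tank/fs1@snap1
# zfs clone tank/fs1@snap1 tank/fs1-clone
# zfs promote tank/fs1-clone
# zfs list -r -t snapshot tank
After the promote, the snapshot moves under the promoted clone and the space accounting shifts with it.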

Re: [zfs-discuss] zfs import fails even though all disks are online

2010-02-11 Thread Cindy Swearingen
Hi Marc, I've not seen an unimportable pool when all the devices are reported as ONLINE. You might see if the fmdump -eV output reports any issues that happened prior to this failure. You could also attempt to rename the /etc/zfs/zpool.cache file and then try to re-import the pool so that the d
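A sketch of the cache-file workaround mentioned above; the pool name is a placeholder and the old cache file is kept as a backup:
# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak
# zpool import
# zpool import <poolname>
The first import with no arguments scans the devices and lists importable pools; the second imports the pool by name.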

Re: [zfs-discuss] available space

2010-02-15 Thread Cindy Swearingen
Hi Charles, What kind of pool is this? The SIZE and AVAIL amounts will vary depending on the ZFS redundancy and whether the deflated or inflated amounts are displayed. I attempted to explain the differences in the zpool list/zfs list display, here: http://hub.opensolaris.org/bin/view/Communi

Re: [zfs-discuss] zfs promote

2010-02-15 Thread Cindy Swearingen
Hi-- From your pre-promotion output, both fs1-patch and snap1 are referencing the same 16.4 GB, which makes sense. I don't see how fs1 could be a clone of fs1-patch because it should be REFER'ing 16.4 GB as well in your pre-promotion zfs list. If you snapshot, clone, and promote, then the sna

Re: [zfs-discuss] false DEGRADED status based on "cannot open" device at boot.

2010-02-17 Thread Cindy Swearingen
Hi Dennis, You might be running into this issue: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6856341 The workaround is to force load the drivers. Thanks, Cindy On 02/17/10 14:33, Dennis Clarke wrote: I find that some servers display a DEGRADED zpool status at boot. More troub

Re: [zfs-discuss] Help with corrupted pool

2010-02-18 Thread Cindy Swearingen
Hi Ethan, Great job putting this pool back together... I would agree with the disk-by-disk replacement by using the zpool replace command. You can read about this command here: http://docs.sun.com/app/docs/doc/817-2271/gazgd?a=view Having a recent full backup of your data before making any mor

Re: [zfs-discuss] Killing an EFI label

2010-02-18 Thread Cindy Swearingen
Hi David, It's a life-long curse to describe the format utility. Trust me. :-) I think you want to relabel some disks from an EFI label to an SMI label to be used in your ZFS root pool, and you have overlapping slices on one disk. I don't think ZFS would let you attach this disk. To fix the overlap
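A sketch of the relabel step, assuming the disk to be converted back to an SMI (VTOC) label is c1t1d0 (interactive menu responses abbreviated):
# format -e
(select c1t1d0 from the disk menu)
format> label
(choose the SMI label option, then use the partition menu to create a slice 0 for the root pool)
format> quit
The -e option is what exposes the EFI/SMI label choice.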

Re: [zfs-discuss] How to resize ZFS partion or add a new one?

2010-02-18 Thread Cindy Swearingen
Frank, I can't comment on everything happening here, but please review the ZFS root partition information in this section: http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide Replacing/Relabeling the Root Pool Disk The p0 partition identifies the larger Solaris partition,

Re: [zfs-discuss] rule of thumb for scrub

2010-02-19 Thread Cindy Swearingen
Hi Harry, Our current scrubbing guideline is described here: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide Run zpool scrub on a regular basis to identify data integrity problems. If you have consumer-quality drives, consider a weekly scrubbing schedule. If you have dat
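A sketch of how a weekly scrub can be scheduled with cron, assuming a pool named tank and a Sunday 2 a.m. window; adjust to taste:
# crontab -e
0 2 * * 0 /usr/sbin/zpool scrub tank
A scrub in progress can be checked with zpool status and stopped with zpool scrub -s tank if it interferes with production I/O.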

Re: [zfs-discuss] Listing snapshots in a pool

2010-02-22 Thread Cindy Swearingen
Hi David, I can't find any other solution than what you have already determined, which is this one: # zfs list -r -t snapshot tank The -d option integrated into b114. I'm running b132 and I still can't get any combination of zfs list -d to work. It's Monday and my brain is slow to warm up. See

Re: [zfs-discuss] scrub in 132

2010-02-22 Thread Cindy Swearingen
Hi Dirk, I'm not seeing anything specific to hanging scrubs on b132 and I can't reproduce it. Any hardware changes or failures directly before the scrub? You can rule out any hardware issues by checking fmdump -eV, iostat -En, or /var/adm/messages output. Thanks, Cindy On 02/20/10 12:56, dir

Re: [zfs-discuss] ZFS Pool problems

2010-02-22 Thread Cindy Swearingen
Hi Jeff, The vmware pool is unavailable because the only device in the pool, c7t0d0, is unavailable. This problem is probably due to the device failing or being removed accidentally. You can follow the steps at the top of this section to help you diagnose the c7t0d0 problems: http://www.solari

Re: [zfs-discuss] Whoops, accidentally created a new slog instead of mirroring

2010-02-25 Thread Cindy Swearingen
Ray, Log removal integrated into build 125, so yes, if you upgraded to at least OpenSolaris build 125 you could fix this problem. See the syntax below on my b133 system. In this particular case, importing the pool from b125 or later media and attempting to remove the log device could not fix thi
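A sketch of the removal on a build with log device removal support (b125 or later), assuming the accidentally added log device is c0t5d0 and the pool is tank:
# zpool remove tank c0t5d0
# zpool status tank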

Re: [zfs-discuss] Whoops, accidentally created a new slog instead of mirroring

2010-02-25 Thread Cindy Swearingen
Correct, if you upgraded this pool, you would not be able to import it back on your existing Solaris 10 system. My advice would be to wait. Cindy On 02/25/10 13:05, Ray Van Dolson wrote: On Thu, Feb 25, 2010 at 11:55:35AM -0800, Cindy Swearingen wrote: Ray, Log removal integrated into build

Re: [zfs-discuss] Installing Solaris 10 with ZFS Root FS

2010-03-01 Thread Cindy Swearingen
Hi Romain, The option to select a ZFS root file system or a UFS root file system is available starting in the Solaris 10 10/08 release. Which Solaris 10 release are you trying to install? Thanks, Cindy On 03/01/10 09:23, Romain LAMAISON wrote: Hi all, I wish to install a Solaris 10 on a ZFS

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-12 Thread Cindy Swearingen
Hi John, What is the error when you attempt to import this pool? Thanks, Cindy On 10/11/11 18:17, John D Groenveld wrote: Banging my head against a Seagate 3TB USB3 drive. Its marketing name is: Seagate Expansion 3 TB USB 3.0 Desktop External Hard Drive STAY3000102 format(1M) shows it identif

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-12 Thread Cindy Swearingen
In the steps below, you're missing a zpool import step. I would like to see the error message when the zpool import step fails. Thanks, Cindy On 10/12/11 11:29, John D Groenveld wrote: In message <4e95cb2a.30...@oracle.com>, Cindy Swearingen writes: What is the error when you

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-13 Thread Cindy Swearingen
John, Any USB-related messages in /var/adm/messages for this device? Thanks, Cindy On 10/12/11 11:29, John D Groenveld wrote: In message <4e95cb2a.30...@oracle.com>, Cindy Swearingen writes: What is the error when you attempt to import this pool? "cannot import 'fo

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-18 Thread Cindy Swearingen
Hi John, I'm going to file a CR to get this issue reviewed by the USB team first, but if you could humor me with another test: Can you run newfs to create a UFS file system on this device and mount it? Thanks, Cindy On 10/18/11 08:18, John D Groenveld wrote: In message <201110150202.p9f22w2n

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-18 Thread Cindy Swearingen
Yeah, okay, duh. I should have known that large sector size support is only available for a non-root ZFS file system. A couple more things if you're still interested: 1. If you re-create the pool on the whole disk, like this: # zpool create foo c1t0d0 Then, resend the prtvtoc output for c1t0d0

Re: [zfs-discuss] FS Reliability WAS: about btrfs and zfs

2011-10-18 Thread Cindy Swearingen
Hi Paul, Your points 1-3 are very sensible advice and I must ask about this statement: >I have yet to have any data loss with ZFS. Maybe this goes without saying, but I think you are using ZFS redundancy. Thanks, Cindy On 10/18/11 08:52, Paul Kraus wrote: On Tue, Oct 18, 2011 at 9:38 AM, Gregory Sh

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-10-18 Thread Cindy Swearingen
This is CR 7102272. cs On 10/18/11 10:50, John D Groenveld wrote: In message <4e9da8b1.7020...@oracle.com>, Cindy Swearingen writes: 1. If you re-create the pool on the whole disk, like this: # zpool create foo c1t0d0 Then, resend the prtvtoc output for c1t0d0s0. # zpool create

Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-11-10 Thread Cindy Swearingen
Hi John, CR 7102272: ZFS storage pool created on a 3 TB USB 3.0 device has device label problems Let us know if this is still a problem in the OS11 FCS release. Thanks, Cindy On 11/10/11 08:55, John D Groenveld wrote: In message<4e9db04b.80...@oracle.com>, Cindy Swearingen

Re: [zfs-discuss] bug moving files between two zfs filesystems (too many open files)

2011-11-29 Thread Cindy Swearingen
I think the "too many open files" is a generic error message about running out of file descriptors. You should check your shell ulimit information. On 11/29/11 09:28, sol wrote: Hello Has anyone else come across a bug moving files between two zfs file systems? I used "mv /my/zfs/filesystem/fi

Re: [zfs-discuss] ZFS smb/cifs shares in Solaris 11 (some observations)

2011-11-29 Thread Cindy Swearingen
Hi Sol, For 1) and several others, review the ZFS Admin Guide for a detailed description of the share changes, here: http://docs.oracle.com/cd/E23824_01/html/821-1448/gayne.html For 2-4), You can't rename a share. You would have to remove it and recreate it with the new name. For 6), I think y

Re: [zfs-discuss] gaining access to var from a live cd

2011-11-30 Thread Cindy Swearingen
Hi Francois, A similar recovery process in OS11 is to just mount the BE, like this: # beadm mount s11_175 /mnt # ls /mnt/var adm croninetlogadm preservetmp ai db infomailrun tpm apache2 dhcpinstalladm nfs

Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-15 Thread Cindy Swearingen
Hi Anon, The disk that you attach to the root pool will need an SMI label and a slice 0. The syntax to attach a disk to create a mirrored root pool is like this, for example: # zpool attach rpool c1t0d0s0 c1t1d0s0 Thanks, Cindy On 12/15/11 16:20, Anonymous Remailer (austria) wrote: On Sola

Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-16 Thread Cindy Swearingen
Hi Tim, No, in current Solaris releases the boot blocks are installed automatically with a zpool attach operation on a root pool. Thanks, Cindy On 12/15/11 17:13, Tim Cook wrote: Do you still need to do the grub install? On Dec 15, 2011 5:40 PM, "Cindy Swearingen" mailto:cin

Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-16 Thread Cindy Swearingen
to do the partitioning by hand, which is just silly to fight with anyway. Gregg Sent from my iPhone On Dec 15, 2011, at 6:13 PM, Tim Cook mailto:t...@cook.ms>> wrote: Do you still need to do the grub install? On Dec 15, 2011 5:40 PM, "Cindy Swearingen" mailto:cindy.swear

Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-16 Thread Cindy Swearingen
Yep, well said, understood, point taken, I hear you, you're preaching to the choir. Have faith in Santa. A few comments: 1. I need more info on the x86 install issue. I haven't seen this problem myself. 2. We don't use slice2 for anything and it's not recommended. 3. The SMI disk is a long-stan

Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-19 Thread Cindy Swearingen
wrote: On Thu, Dec 15, 2011 at 04:39:07PM -0700, Cindy Swearingen wrote: Hi Anon, The disk that you attach to the root pool will need an SMI label and a slice 0. The syntax to attach a disk to create a mirrored root pool is like this, for example: # zpool attach rpool c1t0d0s0 c1t1d0s0 BTW

Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-19 Thread Cindy Swearingen
that go in? Was it in sol10u9? Thanks, Andrew *From: *Cindy Swearingen mailto:cindy.swearin...@oracle.com>> *Subject: **Re: [zfs-discuss] Can I create a mirror for a root rpool?* *Date: *December 16, 2011 10:38:21 AM CST *To: *Tim Cook mailto:t...@cook.ms>> *Cc: *mailto:zfs-discuss@

Re: [zfs-discuss] Disk failing? High asvc_t and %b.

2012-02-01 Thread Cindy Swearingen
Hi Jan, These commands will tell you if FMA faults are logged: # fmdump # fmadm faulty This command will tell you if errors are accumulating on this disk: # fmdump -eV | more Thanks, Cindy On 02/01/12 11:20, Jan Hellevik wrote: I suspect that something is wrong with one of my disks. This

Re: [zfs-discuss] Strange send failure

2012-02-09 Thread Cindy Swearingen
Hi Ian, This looks like CR 7097870. To resolve this problem, apply the latest s11 SRU to both systems. Thanks, Cindy On 02/08/12 17:55, Ian Collins wrote: Hello, I'm attempting to dry run the send the root data set of a zone from one Solaris 11 host to another: sudo zfs send -r rpool/zoneR

Re: [zfs-discuss] Advice for migrating ZFS configuration

2012-03-07 Thread Cindy Swearingen
Hi Bob, Not many options because you can't attach disks to convert a non-redundant pool to a RAIDZ pool. To me, the best solution is to get one more disk (for a total of 4 disks) to create a mirrored pool. Mirrored pools provide more flexibility. See 1 below. See the options below. Thanks, Ci

Re: [zfs-discuss] Advice for migrating ZFS configuration

2012-03-07 Thread Cindy Swearingen
> In theory, instead of this missing > disk approach I could create a two-disk raidz pool and later add the > third disk to it, right? No, you can't add a 3rd disk to an existing RAIDZ vdev of two disks. You would want to add another 2 disk RAIDZ vdev. See Example 4-2 in this section: http://do

Re: [zfs-discuss] Accessing Data from a detached device.

2012-03-29 Thread Cindy Swearingen
Hi Matt, There is no easy way to access data from a detached device. You could try to force import it on another system or under a different name on the same system with the remaining device. The easiest way is to split the mirrored pool. See the steps below. Thanks, Cindy # zpool status po
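A sketch of the split approach, assuming a mirrored pool named tank; the -R option imports the new pool under an alternate root in one step (names are placeholders):
# zpool split -R /mnt tank tank2
# zfs list -r tank2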

Re: [zfs-discuss] Replacing root pool disk

2012-04-12 Thread Cindy Swearingen
Hi Peter, The root pool disk labeling/partitioning is not so easy. I don't know which OpenIndiana release this is but in a previous Solaris release we had a bug that caused the error message below and the workaround is exactly what you did, use the -f option. We don't yet have an easy way to cl

Re: [zfs-discuss] Replacing root pool disk

2012-04-12 Thread Cindy Swearingen
System Step 9 uses the format-->disk-->partition-->modify option and sets the free hog space to slice 0. Then, you press return for each existing slice to zero them out. This creates one large slice 0. cs On 04/12/12 11:48, Cindy Swearingen wrote: Hi Peter, The root pool disk labeling/par

Re: [zfs-discuss] zpool split failing

2012-04-16 Thread Cindy Swearingen
Hi Matt, I don't have a way to reproduce this issue and I don't know why this is failing. Maybe someone else does. I know someone who recently split a root pool running the S11 FCS release without problems. I'm not a fan of root pools on external USB devices. I haven't tested these steps in a w

Re: [zfs-discuss] zpool split failing

2012-04-17 Thread Cindy Swearingen
n both mirror devices were online. Is this a known issue with ZFS ? bug ? cheers Matt On 04/16/12 10:05 PM, Cindy Swearingen wrote: Hi Matt, I don't have a way to reproduce this issue and I don't know why this is failing. Maybe someone else does. I know someone who recently split

Re: [zfs-discuss] Aaron Toponce: Install ZFS on Debian GNU/Linux

2012-04-18 Thread Cindy Swearingen
>Hmmm, how come they have encryption and we don't? As in Solaris releases, or some other "we"? http://docs.oracle.com/cd/E23824_01/html/821-1448/gkkih.html https://blogs.oracle.com/darren/entry/my_11_favourite_solaris_11 Thanks, Cindy On 04/18/12 05:43, Jim Klimov wrote: 2012-04-18 6:57, Dav

Re: [zfs-discuss] slow zfs send

2012-05-07 Thread Cindy Swearingen
Hi Karl, I like to verify that no dead or dying disk is killing pool performance, and your zpool status looks good. Jim has replied with some ideas to check your individual device performance. Otherwise, you might be impacted by this CR: 7060894 zfs recv is excruciatingly slow This CR covers bo

Re: [zfs-discuss] slow zfs send

2012-05-07 Thread Cindy Swearingen
Hi Karl, Someone sitting across the table from me (who saw my posting) informs me that CR 7060894 would not impact Solaris 10 releases, so kindly withdraw my comment about CR 7060894. Thanks, Cindy On 5/7/12 11:35 AM, Cindy Swearingen wrote: Hi Karl, I like to verify that no dead or dying

Re: [zfs-discuss] Spare drive inherited cksum errors?

2012-05-29 Thread Cindy Swearingen
Hi-- You don't say what release this is, but I think that seeing the checksum error accumulation on the spare was a zpool status formatting bug that I have seen myself. This is fixed in a later Solaris release. Thanks, Cindy On 05/28/12 22:21, Stephan Budach wrote: Hi all, just to wrap this is

Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks

2012-06-15 Thread Cindy Swearingen
Hi Hans, It's important to identify your OS release to determine if booting from a 4k disk is supported. Thanks, Cindy On 06/15/12 06:14, Hans J Albertsson wrote: I've got my root pool on a mirror on 2 512 byte blocksize disks. I want to move the root pool to two 2 TB disks with 4k blocks. T

Re: [zfs-discuss] Understanding ZFS recovery

2012-07-12 Thread Cindy Swearingen
Hi Rich, I don't think anyone can say definitively how this problem resolved, but I believe that the dd command overwrote some of the disk label, as you describe below. Your format output below looks like you relabeled the disk and maybe that was enough to resolve this problem. I have had succe

Re: [zfs-discuss] [osol-discuss] Creating NFSv4/ZFS XATTR through dirfd through /proc not allowed?

2012-07-13 Thread Cindy Swearingen
I don't think that xattrs were ever intended or designed for /proc content. I could file an RFE for you if you wish. Thanks, Cindy On 07/13/12 14:00, ольга крыжановская wrote: Yes, accessing the files through runat works. I think /proc (and /dev/fd, which has the same trouble but only works

Re: [zfs-discuss] [osol-discuss] Creating NFSv4/ZFS XATTR through dirfd through /proc not allowed?

2012-07-16 Thread Cindy Swearingen
02:33, Cindy Swearingen wrote: I don't think that xattrs were ever intended or designed for /proc content. I could file an RFE for you if you wish. So Oracle Newspeak now calls it an RFE if you want a real bug fixed, huh? ;-) This is a real bug in procfs. Problem is, procfs can'

Re: [zfs-discuss] Has anyone switched from IR -> IT firmware on the fly ? (existing zpool on LSI 9211-8i)

2012-07-18 Thread Cindy Swearingen
Here's a better link below. I have seen enough bad things happen to pool devices when hardware is changed or firmware is updated to recommend that the pool is exported first, even an HBA firmware update. Either shutting the system down (where pool is hosted) or exporting the pool should do it.

Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-07-25 Thread Cindy Swearingen
Hi-- Patches are available to fix this so I would suggest that you request them from MOS support. This fix fell through the cracks and we tried really hard to get it in the current Solaris 10 release but sometimes things don't work in your favor. The patches are available though. Relabeling dis

Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-07-25 Thread Cindy Swearingen
Hi-- I guess I can't begin to understand patching. Yes, you provided a whole disk to zpool create but it actually creates a part(ition) 0 as you can see in the output below. Part Tag Flag First Sector Size Last Sector 0 usr wm 256 19.99GB

Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-08-01 Thread Cindy Swearingen
size changes 6430818 Solaris needs mechanism of dynamically increasing LUN size -Original Message- From: Hung-Sheng Tsao (LaoTsao) Ph.D [mailto:laot...@gmail.com] Sent: 2012. július 26. 14:49 To: Habony, Zsolt Cc: Cindy Swearingen; Sašo Kiselkov; zfs-discuss@opensolaris.org Subject: Re:

Re: [zfs-discuss] Missing disk space

2012-08-03 Thread Cindy Swearingen
You said you're new to ZFS so you might consider using zpool list and zfs list rather than df -k to reconcile your disk space. In addition, your pool type (mirrored or RAIDZ) provides a different space perspective in zpool list that is not always easy to understand. http://docs.oracle.com/cd/E23824_01/ht
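A sketch of the reporting commands being suggested, assuming a pool named tank:
# zpool list tank
# zfs list -r tank
zpool list reports physical pool capacity including any RAIDZ parity overhead, while zfs list reports usable space after redundancy, which is why the two rarely match on RAIDZ pools.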

Re: [zfs-discuss] Sudden and Dramatic Performance Drop-off

2012-10-04 Thread Cindy Swearingen
Hi Charles, Yes, a faulty or failing disk can kill performance. I would see if FMA has generated any faults: # fmadm faulty Or, if any of the devices are collecting errors: # fmdump -eV | more Thanks, Cindy On 10/04/12 11:22, Knipe, Charles wrote: Hey guys, I’ve run into another ZFS perf

Re: [zfs-discuss] Segfault running "zfs create -o readonly=off tank/test" on Solaris 11 Express 11/11

2012-10-23 Thread Cindy Swearingen
Hi Andreas, Which release is this... Can you provide the /etc/release info? It works fine for me on a S11 Express (b162) system: # zfs create -o readonly=off pond/amy # zfs get readonly pond/amy NAME PROPERTY VALUE SOURCE pond/amy readonly off local This is somewhat redundant sy

Re: [zfs-discuss] VXFS to ZFS

2012-12-05 Thread Cindy Swearingen
Hi Morris, I hope someone has done this recently and can comment, but the process is mostly manual and it will depend on how much gear you have. For example, if you have some extra disks, you can build a minimal ZFS storage pool to hold the bulk of your data. Then, you can do a live migration of

Re: [zfs-discuss] The format command crashes on 3TB disk but zpool create ok

2012-12-14 Thread Cindy Swearingen
Hey Sol, Can you send me the core file, please? I would like to file a bug for this problem. Thanks, Cindy On 12/14/12 02:21, sol wrote: Here it is: # pstack core.format1 core 'core.format1' of 3351: format - lwp# 1 / thread# 1 0806de73 can_efi_disk_be_ex

Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2012-12-17 Thread Cindy Swearingen
Hi Jamie, No doubt. This is a bad bug and we apologize. Below is a misconception that this bug is related to the VM2 project. It is not. Its related to a problem that was introduced in the ZFS ARC code. If you would send me your SR number privately, we can work with the support person to correc

Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2012-12-18 Thread Cindy Swearingen
Hi Sol, The appliance is affected as well. I apologize. The MOS article is for internal diagnostics. I'll provide a set of steps to identify this problem as soon as I understand them better. Thanks, Cindy On 12/18/12 05:27, sol wrote: *From:* Cindy Swearingen No doubt. This

Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2012-12-19 Thread Cindy Swearingen
impacted by this problem. If scrubbing the pool finds permanent metadata errors, then you should open an SR. B. If zdb doesn't complete successfully, open an SR. On 12/18/12 09:45, Cindy Swearingen wrote: Hi Sol, The appliance is affected as well. I apologize. The MOS article is for int

Re: [zfs-discuss] Pool performance when nearly full

2012-12-20 Thread Cindy Swearingen
Hi Sol, You can review the Solaris 11 ZFS best practices info, here: http://docs.oracle.com/cd/E26502_01/html/E29007/practice-1.html#scrolltoc The above section also provides info about the full pool performance penalty. For S11 releases, we're going to increase the 80% pool capacity recommend

Re: [zfs-discuss] zfs receive options (was S11 vs illumos zfs compatiblity)

2012-12-21 Thread Cindy Swearingen
Hi Ned, Which man page are you referring to? I see the zfs receive -o syntax in the S11 man page. The bottom line is that not all properties can be set on the receiving side and the syntax is one property setting per -o option. See below for several examples. Thanks, Cindy I don't think ver

Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2012-12-30 Thread cindy swearingen
Existing Solaris 10 releases are not impacted. S10u11 isn't released yet so I think we can assume that this upcoming Solaris 10 release will include a preventative fix. Thanks, Cindy On Thu, Dec 27, 2012 at 11:11 PM, Andras Spitzer wrote: > Josh, > > You mention that Oracle is preparing patches

Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-03 Thread Cindy Swearingen
Free advice is cheap... I personally don't see the advantage of caching reads and logging writes to the same devices. (Is this recommended?) If this pool is serving CIFS/NFS, I would recommend testing for best performance with a mirrored log device first without a separate cache device: # zpool
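A sketch of that layout, assuming the pool is named tank and two small SSDs c4t0d0 and c4t1d0 are available for the log mirror:
# zpool add tank log mirror c4t0d0 c4t1d0
# zpool status tank
A cache device can be added later, on separate media, with # zpool add tank cache <device>.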

Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2013-01-14 Thread Cindy Swearingen
Hi Jamie, Yes, that is correct. The S11u1 version of this bug is: https://bug.oraclecorp.com/pls/bug/webbug_print.show?c_rptno=15852599 and has this notation which means Solaris 11.1 SRU 3.4: Changeset pushed to build 0.175.1.3.0.4.0 Thanks, Cindy On 01/11/13 19:10, Jamie Krier wrote: It

Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2013-01-14 Thread Cindy Swearingen
I believe the bug.oraclecorp.com URL is accessible with a support contract, but its difficult for me to test. I should have mentioned it. I apologize. cs On 01/14/13 14:02, Nico Williams wrote: On Mon, Jan 14, 2013 at 1:48 PM, Tomas Forsman wrote: https://bug.oraclecorp.com/pls/bug/webbug_pr

Re: [zfs-discuss] zfs-discuss mailing list & opensolaris EOL

2013-02-16 Thread cindy swearingen
Hey Ned and Everyone, This was new news to us too and we were just talking over some options yesterday afternoon, so please give us a chance to regroup and provide some alternatives. This list will be shut down but we can start a new one on java.net. There is a huge ecosystem around Solaris and ZFS,

Re: [zfs-discuss] zfs-discuss mailing list & opensolaris EOL

2013-02-18 Thread Cindy Swearingen
Hi Jim, We will be restaging the ZFS community info, most likely on OTN. The zfs discussion list archive cannot be migrated to the new list on java.net, but you can pick it up here: http://www.mail-archive.com/zfs-discuss@opensolaris.org/ We are looking at other ways to make the zfs discuss li

Re: [zfs-discuss] partioned cache devices

2013-03-19 Thread Cindy Swearingen
Hi Andrew, Your original syntax was incorrect. A p* device is a larger container for the d* device or s* devices. In the case of a cache device, you need to specify a d* or s* device. That you can add p* devices to a pool is a bug. Adding different slices from c25t10d1 as both log and cache dev
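A sketch of the corrected syntax, assuming the pool is tank and slice 0 of c25t10d1 is intended as the cache device (a separate whole d* device would be better still):
# zpool add tank cache c25t10d1s0
# zpool status tank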

Re: [zfs-discuss] What would be the best tutorial cum reference doc for ZFS

2013-03-19 Thread Cindy Swearingen
Hi Hans, Start with the ZFS Admin Guide, here: http://docs.oracle.com/cd/E26502_01/html/E29007/index.html Or, start with your specific questions. Thanks, Cindy On 03/19/13 03:30, Hans J. Albertsson wrote: as used on Illumos? I've seen a few tutorials written by people who obviously are very

Re: [zfs-discuss] This mailing list EOL???

2013-03-20 Thread Cindy Swearingen
Hi Ned, This list is migrating to java.net and will not be available in its current form after March 24, 2013. The archive of this list is available here: http://www.mail-archive.com/zfs-discuss@opensolaris.org/ I will provide an invitation to the new list shortly. Thanks for your patience.

[zfs-discuss] Please join us on the new zfs discuss list on java.net

2013-03-20 Thread Cindy Swearingen
Hi Everyone, The ZFS discussion list is moving to java.net. This opensolaris/zfs discussion will not be available after March 24. There is no way to migrate the existing list to the new list. The solaris-zfs project is here: http://java.net/projects/solaris-zfs See the steps below to join the

[zfs-discuss] LAST CALL: zfs-discuss is moving Sunday, March 24, 2013

2013-03-22 Thread Cindy Swearingen
I hope to see everyone on the other side... *** The ZFS discussion list is moving to java.net. This opensolaris/zfs discussion will not be available after March 24. There is no way to migrate the existing list to the new list. The solaris-zfs project is here

Re: [zfs-discuss] Narrow escape with FAULTED disks

2010-08-17 Thread Cindy Swearingen
Hi Mark, I would recheck with fmdump to see if you have any persistent errors on the second disk. The fmdump command will display faults and fmdump -eV will display errors (persistent faults that have turned into errors based on some criteria). If fmdump -eV doesn't show any activity for that

Re: [zfs-discuss] Narrow escape with FAULTED disks

2010-08-18 Thread Cindy Swearingen
It's hard to tell what caused the smart predictive failure message, like a temp fluctuation. If ZFS noticed that a disk wasn't available yet, then I would expect a message to that effect. In any case, I think I would have a replacement disk available. The important thing is that you continue to m

Re: [zfs-discuss] Please help destroy pool.

2010-08-18 Thread Cindy Swearingen
Hi Alxen4, If /tank/macbook0-data is a ZFS volume that has been shared as an iSCSI LUN, then you will need to unshare/remove those features before removing it. Thanks, Cindy On 08/18/10 00:10, Alxen4 wrote: I have a pool with zvolume (Opensolaris b134) When I try zpool destroy tank I get "po
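If the volume was shared through COMSTAR (an assumption, since the preview doesn't say which iSCSI stack is in use), a sketch of the cleanup might look like this; the GUID is whatever stmfadm reports for the LU backed by the zvol:
# stmfadm list-lu -v
# stmfadm delete-lu <GUID>
# zpool destroy tank
On older setups using the legacy shareiscsi property, setting it to off on the volume serves the same purpose.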

Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Cindy Swearingen
Hi Rainer, I'm no device expert but we see this problem when firmware updates or other device/controller changes change the device ID associated with the devices in the pool. In general, ZFS can handle controller/device changes if the driver generates or fabricates device IDs. You can view devic
