[zfs-discuss] Mixing RAID levels in a pool
Hi,

I had a plan to set up a ZFS pool with different RAID levels, but I ran into an issue based on some testing I've done in a VM. I have 3x 750 GB hard drives and 2x 320 GB hard drives available, and I want to set up a RAID-Z for the 750 GB drives and a mirror for the 320 GB drives, and add it all to the same pool. I tested detaching a drive and it seems to seriously mess up the entire pool, and I can't figure out a way to recover.

# zpool create mypool raidz c1t1d0 c1t2d0 c1t3d0
# zpool add -f mypool mirror c1t4d0 c1t5d0
# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0

errors: No known data errors

So far so good... now try to detach a drive.

# zpool detach mypool c1t4d0
# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
          c1t5d0    ONLINE       0     0     0

c1t5d0 was part of a mirror, but with c1t4d0 removed it now appears as a single drive. Is there a way to recover from this by recreating the mirror with c1t4d0?

I've also heard that you can upgrade the disks in a raidz one at a time to a higher capacity, but I can't detach or remove any of the disks in the raidz. I'm guessing that is because there's no spare drive, and the only way to do it is to physically remove a drive and stick a new one in. The vdev would be degraded and a zpool replace could be done. Is that right?

Thanks.
Re: [zfs-discuss] SATA controller suggestion
Tim wrote:
> pci or pci-x. Yes, you might see *SOME* loss in speed from a pci
> interface, but let's be honest, there aren't a whole lot of users on
> this list that have the infrastructure to use greater than 100MB/sec who
> are asking this sort of question. A PCI bus should have no issues
> pushing that.

Hm. If it's a system with only one PCI bus, there are still a few things to consider here. If it's plain old 33 MHz, 32-bit PCI, your 100MB/s(ish) usable bandwidth is actually total bandwidth. That's 50MB/s in and 50MB/s out if you are copying disk to disk...

I am about to update my home server for exactly this issue of saturating my PCI bus. It's even worse for me, as I'm mirroring, so that works out to closer to 33MB/s read + 33MB/s write + 33MB/s write to the mirror. All in all, it blows.

I'm looking into one of the new Gigabyte NVIDIA-based systems with the 750a SLI chipset. I'm *hoping* the Solaris nv_sata driver will work with the new chipset (or that we are on the way to updating it...). My other box that's using the nForce 570 works like a champ, and I'm hoping to recapture that magic. (I actually wanted to buy some more 570-based motherboards but cannot get 'em in Australia any more... :)

Cheers!
Nathan.
[zfs-discuss] zfs create or normal directories
I'm quite new to ZFS. It is so very easy to create new filesystems using "zfs create zpool/fs" that sometimes I doubt what to do: create a directory (like on UFS) or do a zfs create.

Can somebody give some advice on -when- to use a "normal" directory and -when- it is better to create a "zpool/filesystem"?

I know this is related to personal taste, but -some- good advice might exist ;-)

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ + SunOS sxde 01/08 ++
Re: [zfs-discuss] Cannot delete errored file
Weird. I have no idea how you could remove that file (besides destroying the entire filesystem)...

One other thing I noticed:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     8
          raidz1    ONLINE       0     0     8
            c0t7d0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0

When you see non-zero CKSUM error counters at the pool or raidz1/z2 vdev level, but no errors on the devices like this, it means that ZFS couldn't correct the corruption after multiple attempts at reconstructing the stripes, each time assuming a different device was corrupting data. IOW it means that 2+ (in a raidz1) or 3+ (in a raidz2) devices returned corrupted data in the same stripe. Since it is statistically improbable to have that much silent data corruption in the same stripe, this condition most likely indicates a hardware problem. I suggest running memtest to stress-test your CPU/memory/motherboard.

-marc
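A rough follow-up sequence once the hardware has been checked (this is only a sketch, not something tested on the affected box; the pool name rpool is taken from the status output above):

# zpool status -v rpool      <- lists the individual files with permanent errors
# zpool clear rpool          <- resets the error counters
# zpool scrub rpool          <- re-reads every block to see whether new CKSUM errors appear
# zpool status -v rpool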
Re: [zfs-discuss] Mixing RAID levels in a pool
On 07 June, 2008 - Fu Leow sent me these 2,0K bytes:

> Hi,
>
> I had a plan to set up a zfs pool with different raid levels but I ran
> into an issue based on some testing I've done in a VM. I have 3x 750
> GB hard drives and 2x 320 GB hard drives available, and I want to set
> up a RAIDZ for the 750 GB and mirror for the 320 GB and add it all to
> the same pool.
>
> # zpool detach mypool c1t4d0
> # zpool status
>   pool: mypool
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         mypool      ONLINE       0     0     0
>           raidz1    ONLINE       0     0     0
>             c1t1d0  ONLINE       0     0     0
>             c1t2d0  ONLINE       0     0     0
>             c1t3d0  ONLINE       0     0     0
>           c1t5d0    ONLINE       0     0     0
>
> c1t5d0 was part of a mirror but with c1t4d0 removed it now appears as
> a single drive. Is there a way to recover from this by recreating the
> mirror with c1t4d0?

zpool attach mypool c1t5d0 c1t4d0

> I've also heard that you can upgrade disks in a raidz one at a time to
> a higher capacity but I can't detach or remove any of the disks in the
> raidz. I'm guessing that is because there's no spare drive and the
> only way to do it is to remove the drive physically and stick a new
> one in. It would be degraded and a zfs replace could be done. Is that
> right?

zpool replace mypool c1t1d0 c1t6d0

should work... or just yank a drive out, put a different one in, and then run

zpool scrub mypool

Repeat for c1t1d0..c1t3d0.

/Tomas
--
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
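Spelled out, a rough one-disk-at-a-time capacity upgrade of the raidz might look like the following sketch (c1t6d0, c1t7d0 and c1t8d0 are invented stand-ins for the new, larger disks):

# zpool replace mypool c1t1d0 c1t6d0
# zpool status mypool        <- wait for the resilver to finish before continuing
# zpool replace mypool c1t2d0 c1t7d0
# zpool status mypool        <- wait again
# zpool replace mypool c1t3d0 c1t8d0
# zpool status mypool

Once all three disks in the raidz have been replaced with larger ones, the extra capacity should become available (an export/import of the pool may be needed before it shows up).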
Re: [zfs-discuss] zfs create or normal directories
On 07 June, 2008 - Dick Hoogendijk sent me these 0,6K bytes:

> I'm quite new to ZFS. It is so very easy to create new filesystems
> using "zfs create zpool/fs" that sometimes I doubt what to do: create a
> directory (like on ufs) or do a zfs create.
>
> Can somebody give some advice on -when- to use a "normal" directory
> and -when- it is better to create a "zpool/filesystem"
>
> I know this is related to personal taste, but -some- good advice might
> exist ;-)

When you need different accounting (df) or filesystem options (compression, ...) for a specific tree.

/Tomas
--
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
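As a concrete sketch of what that buys you (the dataset names here are invented for illustration):

# zfs create -o compression=on mypool/projects   <- this subtree gets its own options
# zfs create -o quota=50G mypool/scratch         <- and its own space accounting
# df -h /mypool/projects /mypool/scratch         <- each shows up as its own filesystem

A plain mkdir inside an existing filesystem gives you none of that, but also costs nothing to manage.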
Re: [zfs-discuss] Cannot delete errored file
Thanks Marc - I'll run memtest on Monday, and re-seat memory/CPU/cards etc. If that fails, I'll try moving the devices onto a different SATA controller. Failing that, I'll rebuild from scratch. Failing that, I'll get a new motherboard!

Ben
Re: [zfs-discuss] Mixing RAID levels in a pool
> c1t5d0 was part of a mirror but with c1t4d0 removed it now appears as
> a single drive. Is there a way to recover from this by recreating the
> mirror with c1t4d0?

Detaching a drive from a two-way mirror effectively breaks it up and turns it into a single drive. That's normal. Just attach it back to c1t5d0 and it'll become a mirror again.

Retry your experiment by detaching a drive from the RAID-Z array and you'll see what you were expecting.

-mg
[zfs-discuss] zfs send/receive issue
Hi Folks,

I'm trying to back up my /export folder to a USB disk with zfs send/receive. But zfs receive tries to mount the dataset on a mountpoint which is already mounted on the existing zfs system, and fails. See below:

1) zfs list

# zfs list
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
rpool                                   5.63G  69.2G  56.5K  /rpool
[EMAIL PROTECTED]                          18K      -  55.5K  -
rpool/ROOT                              3.38G  69.2G    18K  /rpool/ROOT
rpool/[EMAIL PROTECTED]                      0      -    18K  -
rpool/ROOT/opensolaris                  3.38G  69.2G  2.47G  legacy
rpool/ROOT/[EMAIL PROTECTED]            60.0M      -  2.22G  -
rpool/ROOT/opensolaris/opt               863M  69.2G   863M  /opt
rpool/ROOT/opensolaris/[EMAIL PROTECTED]  72K      -  3.60M  -
rpool/export                            2.25G  69.2G    19K  /export
rpool/[EMAIL PROTECTED]                    15K      -    19K  -
rpool/export/home                       2.25G  69.2G  2.25G  /export/home
rpool/export/[EMAIL PROTECTED]             19K      -    21K  -

2) create a pool on my USB disk

# zpool create tank /dev/dsk/c7t0d0

3) back up to the USB disk

# zfs send -R rpool/[EMAIL PROTECTED] | zfs receive -dF tank
cannot mount '/export': directory is not empty

See, /export is already mounted by rpool/export, and of course it fails when "zfs receive" wants to mount tank/export on /export.

Any suggestions?

Thanks,
-Aubrey
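One workaround that might be worth trying (an untested sketch, reusing the same pool and snapshot names as above): give the backup pool an alternate root, so the received datasets mount under it instead of on top of the live /export:

# zpool export tank
# zpool import -R /backup tank
# zfs send -R rpool/[EMAIL PROTECTED] | zfs receive -dF tank

With the altroot in place, the received tank/export should mount at /backup/export rather than colliding with /export.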
Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?
>> The problem with that argument is that 10,000 users on one vxfs or UFS
>> filesystem is no problem at all, be it /var/mail or home directories.
>> You don't even need a fast server for that. 10,000 zfs file systems is
>> a problem.
>>
>> So, if it makes you happier, substitute mail with home directories.
>
> If you feel strongly, please pile onto CR 6557894
> http://bugs.opensolaris.org/view_bug.do?bug_id=6557894
> If we continue to talk about it on the alias, we will just end up
> finding ways to solve the business problem using available
> technologies.

If I need to count usage I can use du. But if you can implement space usage info on a per-uid basis you are not far from quota per uid...

> A single file system serving 10,000 home directories doesn't scale
> either, unless the vast majority are unused -- in which case it is a
> practical problem for much less than 10,000 home directories.
> I think you will find that the people who scale out have a better
> long-term strategy.

We have a file system (vxfs) that is serving 30,000 home directories. Yes, most of those are unused, but we still have to have them as we don't know when a student will use theirs. If this were zfs we would have to create 30,000 filesystems. Every file system has a cost in RAM and in performance. So, in ufs or vxfs, unused home directories cost close to nothing. In zfs they have a very real cost.

> The limitations of UFS do become apparent as you try to scale
> to the size permitted with ZFS. For example, the largest UFS
> file system supported is 16 TBytes, or 1/4 of a thumper. So if you
> are telling me that you are serving 10,000 home directories in
> a 16 TByte UFS file system with quotas (1.6 GBytes/user? I've
> got 16 GBytes in my phone :-), then I will definitely buy you a
> beer. And aspirin. I'll bring a calendar so we can measure the
> fsck time when the log can't be replayed. Actually, you'd
> probably run out of inodes long before you filled it up. I wonder
> how long it would take to run quotacheck? But I digress. Let's
> just agree that UFS won't scale well and the people who do
> serve UFS as home directories for large populations tend to use
> multiple file systems.

We have 30,000 accounts on a 1 TByte file system. If we needed to, we could make 16 1 TByte file systems, no problem. But 30,000 file systems on one server? Maybe not so good... If we could lower the cost of a zfs file system to zero, all would be good for my usage.

The best thing to do is probably AFS on ZFS. AFS can handle many volumes (file systems) and ZFS is very good at the storage.
Re: [zfs-discuss] zfs create or normal directories
Dick Hoogendijk wrote:
> I'm quite new to ZFS. It is so very easy to create new filesystems
> using "zfs create zpool/fs" that sometimes I doubt what to do: create a
> directory (like on ufs) or do a zfs create.
>
> Can somebody give some advice on -when- to use a "normal" directory
> and -when- it is better to create a "zpool/filesystem"

My guess is this: a filesystem gives you resource management, snapshots and statistics (fsstat). A filesystem per project could be useful for archiving, version control, etc.

With resource management comes responsibility, which means you have to make *decisions*. You have to decide if you want to dedicate resources or want to implement a separate snapshot policy.

If you expect to need lots of directories, it's probably easier to keep them that way, unless you don't fear the forest for the trees... Hmmm, lousy metaphor.

Cheers,
Henk
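A small illustration of the resource-management and snapshot angle (all names here are placeholders, not from the thread):

# zfs create tank/projA
# zfs set reservation=10G tank/projA       <- guarantee space to this project
# zfs snapshot tank/projA@before-cleanup   <- snapshot just this project
# fsstat /tank/projA 1                     <- per-filesystem activity statistics

None of this is available at the granularity of a plain directory inside a larger filesystem.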
Re: [zfs-discuss] Get your SXCE on ZFS here!
[EMAIL PROTECTED] wrote:
> Uwe,
>
> Please see pages 55-80 of the ZFS Admin Guide, here:
>
> http://opensolaris.org/os/community/zfs/docs/
>
> Basically, the process is to upgrade from nv81 to nv90 by using the
> standard upgrade feature. Then, use lucreate to migrate your UFS root
> file system to a ZFS file system, like this:
>
> 1. Verify you have a current backup.
> 2. Read the known issues and requirements.
> 3. Upgrade from nv81 to nv90 using the standard upgrade feature.
> 4. Migrate your UFS root file system to a ZFS root file system,
>    like this:
>    # zpool create rpool mirror c0t1d0s0 c0t2d0s0
>    # lucreate -c c0t0d0s0 -n zfsBE -p rpool
> 5. Activate the ZFS BE, like this:
>    # luactivate zfsBE
>
> Please see the doc for more examples of this process.
>
> Cindy

Hi Cindy,

unfortunately, this approach fails for me, because lucreate errors out (see below). Does anybody know if this is a known issue?

- Thomas

# lucreate -n nv90ext -p ext1
Analyzing system configuration.
Comparing source boot environment file systems with the file
system(s) you specified for the new boot environment. Determining
which file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment .
Source boot environment is .
Creating boot environment .
Creating file systems on boot environment .
Creating file system for in zone on .
Populating file systems on boot environment .
Checking selection integrity.
Integrity check OK.
Populating contents of mount point .
Copying.
WARNING: The file contains a list of <45> potential problems (issues)
that were encountered while populating boot environment .
INFORMATION: You must review the issues listed in and determine if any
must be resolved. In general, you can ignore warnings about files that
were skipped because they did not exist or could not be opened. You
cannot ignore errors such as directories or files that could not be
created, or file systems running out of disk space. You must manually
resolve any such problems before you activate boot environment .
Creating shared file system mount points.
Creating compare databases for boot environment .
Creating compare database for file system .
Updating compare databases on boot environment .
Making boot environment bootable.
ERROR: Unable to determine the configuration of the target boot environment .
ERROR: Update of loader failed.
ERROR: Cannot make ABE bootable.
Making the ABE bootable FAILED.
ERROR: Unable to make boot environment bootable.
ERROR: Unable to populate file systems on boot environment .
ERROR: Cannot make file systems for boot environment .
$ cat /tmp/lucopy.errors.5981
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/template"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/latest"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/1/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/4/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/5/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/14/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/16/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/18/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/19/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/23/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/25/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/28/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/37/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/43/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/44/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/45/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/46/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/47/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/48/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/51/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/52/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/53/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/55/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/56/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/57/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/58/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/59/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/60/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/62/ctl"
Restoring existing "/.alt.tmp.b-aEb.mnt/system/contract/process/63/ctl"
Restoring ex
Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?
On Fri, Jun 6, 2008 at 4:14 PM, Peter Tribble <[EMAIL PROTECTED]> wrote:
... very big snip ...
> (Although I have to say that, in a previous job, scrapping user quotas entirely
> not only resulted in happier users, much less work for the helpdesk, and -
> paradoxically - largely eliminated systems running out of space.)

[Hi Peter]

Agreed.

So, one has to re-evaluate "legacy" thinking in the context of inexpensive storage offered by ZFS in combination with cost-effective disk drives and ask the question: what lowers the total cost of ownership and provides the best user experience?

Option a) A complex quota-based system implemented on top of providing the "correct" required system storage capacity.

Option b) A ZFS-based storage system with 2x or 4x (etc.) the "correct" required storage capacity, with a once-a-day cron job to remind the storage hogs (users) to trim their disk space or face the "or else" option (the stick approach; a rough sketch follows below). And perhaps a few quotas on filesystems used by applications or users known to be problematic.

I would submit that Option b) will provide a lower cost, in terms of total system cost, over time - especially given the price/performance of modern disk drives in combination with high-performance log and cache devices (if required).

Every time I've come across a usage scenario where the submitter asks for per-user quotas, it's usually a university-type scenario, and universities are notorious for providing lots of CPU horsepower (many, many servers) attached to a simply dismal amount of back-end storage, where users are, and continue to be, miffed at the dismal amount of storage they are offered. This is legacy thinking looking for a "legacy-thinking-compliant" solution to a problem that has already been solved by ZFS and the current generation of high-capacity, ultra-low-cost-per-terabyte hardware. IOW - it's a people issue, rather than a technological issue.

Regards,

--
Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
Voice: 972.379.2133  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
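A rough sketch of the kind of daily cron job described in Option b) above; the path, threshold and wording are all invented for illustration:

#!/bin/sh
# Daily reminder for the biggest consumers of /export/home (sketch only).
LIMIT_KB=104857600   # roughly 100 GB
cd /export/home || exit 1
for user in *; do
    used_kb=`du -sk "$user" | awk '{print $1}'`
    if [ "$used_kb" -gt "$LIMIT_KB" ]; then
        echo "Your home directory is using ${used_kb} KB; please trim it." \
            | mailx -s "Disk usage reminder" "$user"
    fi
done

Dropped into /etc/cron.d or a root crontab entry, something like this does the "nagging" without any quota machinery at all.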
Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?
For the CIFS side of the house, I think it would be in Sun's best interest to work with a third-party vendor like NTP Software. The quota functionality they provide is far more robust than anything I expect we'll ever see come directly with zfs. And rightly so... it's what they specialize in.

http://www.ntpsoftware.com/products/qfs/?adrid=

On Sat, Jun 7, 2008 at 2:20 PM, Al Hopper <[EMAIL PROTECTED]> wrote:
> ... very big snip ...
Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?
On Sat, 7 Jun 2008, Mattias Pantzare wrote:
> If I need to count usage I can use du. But if you can implement space
> usage info on a per-uid basis you are not far from quota per uid...

That sounds like quite a challenge. UIDs are just numbers, and new ones can appear at any time. Files with existing UIDs can have their UIDs switched from one to another at any time. The space used per UID needs to be tallied continuously and needs to track every change, including real-time file growth and truncation.

We are ultimately talking about 128-bit counters here. Instead of having one counter per filesystem, we now have potentially hundreds of thousands, which represents substantial memory. Multicore systems have the additional challenge that this complex information needs to be effectively shared between cores. Imagine if you have 512 CPU cores, all of which are running some of the ZFS code and have their own caches, which become invalidated whenever one of those counters is updated.

This sounds like a no-go for an almost infinitely sized, pooled, "last word" filesystem like ZFS. ZFS is already quite lazy at evaluating space consumption. With ZFS, 'du' does not always reflect true usage, since updates are delayed.

Bob
======================================
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
Re: [zfs-discuss] Mixing RAID levels in a pool
On Sat, Jun 7, 2008 at 4:13 AM, Mario Goebbels <[EMAIL PROTECTED]> wrote:
>> c1t5d0 was part of a mirror but with c1t4d0 removed it now appears as
>> a single drive. Is there a way to recover from this by recreating the
>> mirror with c1t4d0?
>
> Detaching a drive from a two-way mirror effectively breaks it up and
> turns it into a single drive. That's normal. Just attach it back to
> c1t5d0 and it'll become a mirror again.
>
> Retry your experiment by detaching a drive from the RAID-Z array and
> you'll see what you were expecting.
>
> -mg

Thank you Tomas and Mario. I was able to recreate the mirror by reattaching.

It seems that once a vdev has been added to a pool it is no longer possible to remove it. Is that right? For example, if I created a pool that consisted of 3 single drives, there would not be a way to remove a drive and reduce the number of devices in the pool from 3 to 2, even if there is enough space on the 2 remaining drives to hold all the data.
Re: [zfs-discuss] bug id 6462690, SYNC_NV issue
Hello Bill,

Wednesday, June 4, 2008, 12:37:38 AM, you wrote:

BS> I'm pretty sure that this bug is fixed in Solaris 10U5, patch
BS> 127127-11 and 127128-11 (note: 6462690 sd driver should set
BS> SYNC_NV bit when issuing SYNCHRONIZE CACHE to SBC-2 devices).
BS> However, a test system with new 6140 arrays still seems to be
BS> suffering from lots of cache flushes. This is verified by setting
BS> "zfs_nocacheflush=1" and seeing noticeable improvement in
BS> performance. Can someone verify that this fix is indeed in the
BS> above release and patch set. If so, then are there parameters
BS> that need to be set for this to be active, or is there firmware or
BS> other updates that need to be loaded into this new array? Any help
BS> appreciated.

Well, the real question is how the 6140 reacts to SYNC_NV - probably it doesn't care...

--
Best regards,
Robert                          mailto:[EMAIL PROTECTED]
                                http://milek.blogspot.com
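For anyone wanting to reproduce the test above, the zfs_nocacheflush tunable is normally set persistently in /etc/system and picked up after a reboot. Keep in mind it disables cache-flush requests for every pool on the host, so it is only safe when all pool devices sit behind a non-volatile (battery-backed) write cache:

* /etc/system
set zfs:zfs_nocacheflush = 1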
Re: [zfs-discuss] Mixing RAID levels in a pool
On 07 June, 2008 - Fu Leow sent me these 1,1K bytes:

> On Sat, Jun 7, 2008 at 4:13 AM, Mario Goebbels <[EMAIL PROTECTED]> wrote:
> >> c1t5d0 was part of a mirror but with c1t4d0 removed it now appears as
> >> a single drive. Is there a way to recover from this by recreating the
> >> mirror with c1t4d0?
> >
> > Detaching a drive from a two-way mirror effectively breaks it up and
> > turns it into a single drive. That's normal. Just attach it back to
> > c1t5d0 and it'll become a mirror again.
> >
> > Retry your experiment by detaching a drive from the RAID-Z array and
> > you'll see what you were expecting.
> >
> > -mg
>
> Thank you Tomas and Mario. I was able to recreate the mirror by reattaching.
>
> It seems that once a vdev has been added to a pool it is no longer
> possible to remove it. Is that right? For example if I created a pool
> that consisted of 3 single drives there would not be a way to remove a
> drive and reduce the number of devices in the pool from 3 to 2, even if
> there is enough space on the 2 remaining drives to hold all the data.

Currently, yes. It's being worked on as far as I know.

/Tomas
--
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se