Re: [zfs-discuss] ZFS support for USB disks 120GB Western Digital
We don't have a specific supported configuration method for USB devices for ZFS. Most people are using them as mirrors or backups for their laptop data. It's really up to you. There are a few threads from the discuss archives where people have discussed different possible configs for USB storage or ways they've used it. One is here:

http://www.opensolaris.org/jive/thread.jspa?messageID=25144

where David Bustos also mentions Artem's blog as a go-to. Perhaps we should also add something to this effect in the FAQ. Also, depending on how you intend to use the disk, a known issue is this:

6424510 usb ignores DKIOCFLUSHWRITECACHE

Noel

On Jul 18, 2006, at 11:53 PM, Stefan Parvu wrote:

Hey,

I have a portable harddisk Western Digital 120GB USB. I'm running Nevada b42a on a Thinkpad T43. Is this a supported configuration for setting up ZFS on portable disks?

Found some old blogs about this topic:
http://blogs.sun.com/roller/page/artem?entry=zfs_on_the_go
and some other info under:
http://www.sun.com/io_technologies/USB-Faq.html

Is this information still valid? Under the ZFS FAQ there is no mention of this topic; a good idea would be to add a section about ZFS on mobile devices.

Thanks,
Stefan

# rmformat
Looking for devices...
     1. Volmgt Node: /vol/dev/aliases/cdrom0
        Logical Node: /dev/rdsk/c1t0d0s2
        Physical Node: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],2/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
        Connected Device: MATSHITA UJDA765 DVD/CDRW 1.70
        Device Type: DVD Reader
        Bus: IDE
        Size:
        Label:
        Access permissions:
     2. Volmgt Node: /vol/dev/aliases/rmdisk0
        Logical Node: /dev/rdsk/c2t0d0p0
        Physical Node: /[EMAIL PROTECTED],0/pci1014,[EMAIL PROTECTED],7/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
        Connected Device: WDC WD12 00VE-00KWT0
        Device Type: Removable
        Bus: USB
        Size: 114.5 GB
        Label:
        Access permissions: Medium is not write protected.
Re: [zfs-discuss] add dataset
Hi All,

Thanks for the replies. Yes, it was related to the versions. I had the U2 May assembly, which did not work. However, the 9th June release worked well.

Thanks again.

Roshan

- Original Message -
From: Zoram Thanga <[EMAIL PROTECTED]>
Date: Tuesday, July 18, 2006 12:25 pm
Subject: Re: [zfs-discuss] add dataset
To: Roshan Perera <[EMAIL PROTECTED]>
Cc: [EMAIL PROTECTED], zfs-discuss@opensolaris.org

> Which version of Solaris are you using? You should be able to add a
> dataset if you're running Solaris Express. Not sure if this feature was
> backported to S10u2.
>
> global# uname -a
> SunOS psonali1 5.11 snv_42 sun4u sparc SUNW,Sun-Fire-V210
> global# zonecfg -z fozoone
> fozoone: No such zone configured
> Use 'create' to begin configuring a new zone.
> zonecfg:fozoone> create
> zonecfg:fozoone> add dataset
> zonecfg:fozoone:dataset> set name=fooset
> zonecfg:fozoone:dataset> end
> zonecfg:fozoone>
>
> Thanks,
> Zoram
>
> Roshan Perera wrote:
> > Hi,
> >
> > a simple question..
> >
> > is add dataset not part of zonecfg?
> >
> > global# zonecfg -z myzone (OK)
> > zonecfg:myzone> add dataset (fails as there is no dataset option)
> > zonecfg:myzone> add zfs (fails as there is no dataset option)
> >
> > Basically how do I add a dataset to a zone?
> >
> > Thanks
> >
> > Roshan
> >
> > please cc me [EMAIL PROTECTED]
>
> --
> Zoram Thanga, Sun Cluster Development.
Re: [zfs-discuss] add dataset
On 7/18/06, Zoram Thanga <[EMAIL PROTECTED]> wrote:
> Which version of Solaris are you using? You should be able to add a
> dataset if you're running Solaris Express. Not sure if this feature was
> backported to S10u2.

It's available in the S10u2 we get from sun.com.

--
Just me,
Wire ...
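Tying the zonecfg session above together, here is a minimal sketch of the full delegation workflow, assuming an already-installed, running zone; the pool name tank, dataset name zonedata, and zone name myzone are placeholders:

    global# zfs create tank/zonedata
    global# zonecfg -z myzone
    zonecfg:myzone> add dataset
    zonecfg:myzone:dataset> set name=tank/zonedata
    zonecfg:myzone:dataset> end
    zonecfg:myzone> commit
    zonecfg:myzone> exit
    global# zoneadm -z myzone reboot

    myzone# zfs list          (the delegated dataset is now visible and manageable inside the zone)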
[zfs-discuss] Re: howto reduce "zfs introduced" noise
I have tested it, and it is _much_ better now.

Unfortunately, adding "set txg_time = 60" in /etc/system does not set this value upon system startup. It only works using mdb at runtime. Do you have an idea what might be wrong?

Cheers,
Tom
[zfs-discuss] Re: Q: T2000: raidctl vs. zpool status
Just FYI: cust removed /etc/zfs/zpool.cache and rebooted. Using "zpool import", he was then able to import the pool anew.

We're still interested in your opinion on this - so pls. keep those emails coming :-)

TIA
Michael

PS: pls keep Steffen on your replies as well, he's not on the list.

Michael Schuster - Sun Microsystems wrote:

Hi all,

IHACWHAC (I have a colleague who has a customer - hello, if you're listening :-) who's trying to build and test a scenario where he can salvage the data off the (internal?) disks of a T2000 in case the sysboard and with it the on-board RAID controller dies. If I understood correctly, he replaces the motherboard, does some magic to get the RAID config back, but even when raidctl says "I'm fine", zpool complains that it cannot open one of the replicas:

# raidctl
RAID    Volume  RAID            RAID            Disk
Volume  Type    Status          Disk            Status
------------------------------------------------------
c0t0d0  IM      OK              c0t0d0          OK
                                c0t1d0          OK
c0t2d0  IM      OK              c0t2d0          OK
                                c0t3d0          OK

# zpool status -x
  pool: dpool
 state: FAULTED
status: One or more devices could not be opened. There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        dpool       UNAVAIL      0     0     0  insufficient replicas
          c0t2d0    UNAVAIL      0     0     0  cannot open
#

What the customer does to achieve this is documented in the attachment (sorry about the German comments, but I thought translating them would have been a bit much to ask).

TIA for any comments, etc.
Michael
--
Michael Schuster                      (+49 89) 46008-2974 / x62974
visit the online support center:  http://www.sun.com/osc/

Recursion, n.: see 'Recursion'
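For reference, a rough sketch of the recovery sequence described above, using the pool name dpool from the status output; the cache file path is the standard one, but double-check everything before deleting files on a production system:

    # rm /etc/zfs/zpool.cache      (forget all pools; nothing is opened automatically at boot)
    # reboot
    # zpool import                 (scan attached devices and list importable pools)
    # zpool import dpool           (import the pool and rebuild the cache file)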
Re: [zfs-discuss] Re: howto reduce "zfs introduced" noise
Did you write your /etc/system entry as follows?

set zfs:txg_time=60

The txg_time parameter belongs to the zfs module, so you have to prefix the module name.

Thanks,
Zoram

Thomas Maier-Komor wrote:
> I have tested it, and it is _much_ better now.
>
> Unfortunately, adding "set txg_time = 60" in /etc/system does not set this value upon system startup. It only works using mdb at runtime. Do you have an idea what might be wrong?
>
> Cheers,
> Tom

--
Zoram Thanga, Sun Cluster Development.
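For the record, a sketch of both ways to set this tunable, using the 60-second value discussed in this thread; txg_time is a private tunable, so verify the symbol name and syntax on your build before relying on it:

    In /etc/system (takes effect at the next boot; note the zfs: module prefix):

        set zfs:txg_time = 60

    At runtime with mdb (assumes the zfs module is already loaded):

        echo 'txg_time/D'      | mdb -k      (print the current value in decimal)
        echo 'txg_time/W 0t60' | mdb -kw     (set it to 60 seconds)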
Re: [zfs-discuss] Enabling compression/encryption on a populated filesystem
Darren Reed wrote:
> Bill Moore wrote:
>> On Wed, Jul 19, 2006 at 03:10:00AM +0200, [EMAIL PROTECTED] wrote:
>>> So how many of the 128 bits of the blockpointer are used for things
>>> other than to point where the block is?
>>
>> 128 *bits*? What filesystem have you been using? :) We've got
>> luxury-class block pointers that are 128 *bytes*. We get away with it
>
> For both the encryption and checksum use, it wouldn't be unreasonable
> to see the requirements here expand (maybe double?) sometime in the
> near future.

I don't believe it does need to grow at all. Certainly not for checksum or compression, and at this stage I don't seem to need any more space for crypto either.

--
Darren J Moffat
Re: [zfs-discuss] Fun with ZFS and iscsi volumes
On Tuesday 18 July 2006 01:06, Jason Hoffman wrote:
> 2) Filebench RAIDZ of 3x3 vs "RAID0" vs RAIDZ of 1x9 vs RAIDZ of 2x9
>    a) Varmail (50:50 reads-writes):
>       - 2473.0 ops/s (RAIDZ of 3x3)
>       - 4316.8 ops/s (RAID0)
>       - 13144.8 ops/s (RAIDZ of 1x9)
>       - 11363.7 ops/s (RAIDZ of 2x9)

How come RAID0 (9 striped volumes) is a lot slower than a RAIDZ of 9 volumes?
[zfs-discuss] Can't remove corrupt file
I had a checksum error occur in a file. Since only one file is corrupt (and it's a link library at that) I don't want to blow away the whole pool to remove the corrupt file. However, I can't figure out any way to unlink the file.

Using "rm" to try to unlink the file I get EIO:

% rm llib-lip.ln
rm: llib-lip.ln not removed: I/O error

Trying to truncate it is also no dice:

% cat >llib-lip.ln
llib-lip.ln: I/O error

What are the expected paths for recovery here? I took a look at:

http://www.sun.com/msg/ZFS-8000-8A

That page isn't helpful since it just says to "restore the file". Well, you can't restore a file if you can't clean up the old corrupted one!

(Also BTW that page has a typo, you might want to get the typo fixed. I didn't know where the doc bugs should go for those messages.)

- Eric
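Not an answer to the unlink problem, but a hedged sketch of how one might at least confirm what the pool believes is damaged before deciding how to recover; whether zpool status -v lists individual file names at all depends on the build:

    # zpool status -v tank       (show error counts; on some builds, the affected files)
    # zpool scrub tank           (re-read and re-verify every block in the pool)
    # zpool status -v tank       (check again once the scrub has completed)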
Re: [zfs-discuss] Can't remove corrupt file
On Wed, 19 Jul 2006, Eric Lowe wrote:
> (Also BTW that page has a typo, you might want to get the typo fixed. I didn't know where the doc bugs should go for those messages.)
>
> - Eric

Product: event_registry
Category: events
Sub-Category: msg

-tim
Re: [zfs-discuss] Big JBOD: what would you do?
Richard Elling wrote:
> First, let's convince everyone to mirror and not RAID-Z[2] -- boil one ocean at a time, there are only 5 you know... :-)

For maximum protection, a 4-disk RAID-Z2 is *always* better than a 4-disk RAID-1+0. With more disks, use multiple 4-disk RAID-Z2 packs.

Daniel
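To make the comparison concrete, a sketch of the two 4-disk layouts being discussed; the device names are placeholders. Both give the capacity of two disks, but the RAID-Z2 pool survives any two disk failures, while the striped mirror only survives two failures if they land in different mirror pairs:

    # zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0             (4-disk RAID-Z2)
    # zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0      (4-disk RAID-1+0)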
Re: [zfs-discuss] Can't remove corrupt file
>> (Also BTW that page has a typo, you might want to get the typo fixed. I didn't know where the doc bugs should go for those messages.)
>
> Product: event_registry
> Category: events
> Sub-Category: msg

Thanks, I filed 6450642.

- Eric
Re: [zfs-discuss] Big JBOD: what would you do?
Eric Schrock wrote:

One thing I would pay attention to is the future world of native ZFS root. On a Thumper, you only have two drives which are bootable from the BIOS. For any application in which reliability is important, you would have these two drives mirrored as your root filesystem. There can be no hot spares for this pool, because any device you hot spare in will not be readable from the BIOS.

For all the Thumper raidz2 models, I would assume only having 46 disks. This gives a nice bias towards one of the following configurations:

- 5x(7+2), 1 hot spare, 21.0 TB
- 4x(9+2), 2 hot spares, 18.0 TB
- 6x(5+2), 4 hot spares, 15.0 TB

And in order to mitigate the impact of the lack of root spares in the scenario above, I'd go for plenty of hot spares and do a manual swap of one hot spare with the failing root mirror.

Henk
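As an illustration of how the raidz2-plus-hot-spares pattern from that list would be expressed, here is a minimal sketch scaled down to two 5+2 vdevs and two spares so it fits here; device names are placeholders and the remaining vdevs of a full Thumper layout would simply repeat the pattern:

    # zpool create tank \
        raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
        spare  c4t6d0 c4t7d0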
Re: [zfs-discuss] ZFS support for USB disks 120GB Western Digital
I've updated the blog entry; no hacking around is necessary anymore. 'svcadm disable volfs' is still recommended (vold/volfs will be completely removed soon).

USB is just an interface board slapped on the disk; there are disks with both USB and SATA interfaces - you can connect with either cable and it won't make a difference to ZFS, because it tracks disks by devid, not device name. So what you do with your disks is really up to you. If you just want a single-disk zpool (though ZFS doesn't favor that), then probably 'zpool export' before disconnecting and 'zpool import' after connecting will do it for you.

-Artem.
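A short sketch of the export/import workflow Artem describes, assuming the USB disk shows up as c2t0d0 as in Stefan's rmformat output; the pool name usbpool is a placeholder:

    # zpool create usbpool c2t0d0      (one-time setup of a single-disk pool on the USB drive)
    # zpool export usbpool             (always export before unplugging the disk)
      ... disconnect the disk, move it, reconnect it ...
    # zpool import usbpool             (re-import once the disk is attached again)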
Re: [zfs-discuss] ZFS benchmarks w/8 disk raid - Quirky results, any thoughts?
On 7/17/06, Jonathan Wheeler <[EMAIL PROTECTED]> wrote:
> Hi All,
>
> I've just built an 8 disk zfs storage box, and I'm in the testing phase before I put it into production. I've run into some unusual results, and I was hoping the community could offer some suggestions. I've basically made the switch to Solaris on the promises of ZFS alone (yes I'm that excited about it!), so naturally I'm looking forward to some great performance - but it appears I'm going to need some help finding all of it.

One major concern Jonathan has is the 7-raidz write performance. (I see no big surprise in the 'read' results.)

"The really interesting numbers happen at 7 disks - it's slower than with 4, in all tests."

I randomly picked 3 results from his several runs:

                 -Per Char-       --Block---       -Rewrite--
           MB    K/sec  %CPU      K/sec   %CPU     K/sec  %CPU
         ====    =====  ====      ======  ====     =====  ====
4-disk   8196    57965  67.9      123268  27.6     78712  17.1
7-disk   8196    49454  57.1       92149  20.1     73013  16.0
8-disk   8196    61345  70.7      139259  28.5     89545  20.8

I looked at the corresponding dtrace data for the 7- and 8-raidz cases. (Should have also asked for 4-raidz data. Jonathan, you can still send 4-raidz data to me offline.)

In 7-raidz, each disk had writes in two sizes, 214-block or 85-block, equally:

DEVICE   BLKs   COUNT
sd1        85   27855
          214   27882
sd2        85   27854
          214   27868
sd3        85   27849
          214   27884
...

In 8-raidz, sd1, 3, 5, 7 had either 220- or 221-block writes, equally; sd2, 4, 6, 8 had 100% 146-block writes:

DEVICE   BLKs   COUNT
sd1       220   16325
          221   16338
sd2       146   49001
sd3       220   16335
          221   16333
sd4       146   49005
sd5       220   16340
          221   16324
sd6       146   49001
sd7       220   16332
          221   16333
sd8       146   49009

In terms of average write response time, in 7-raidz:

DEVICE   WRITE   AVG.ms
------   -----   ------
sd1      63990    54.03
sd2      64000    53.65
sd3      63898    55.48
sd4      64190    54.14
sd5      64091    54.81
sd6      63967    57.83
sd7      64092    54.19

and in 8-raidz:

DEVICE   WRITE   AVG.ms
------   -----   ------
sd1      42276     6.64
sd2      58467    19.66
sd3      42287     6.24
sd4      55198    20.01
sd5      42285     6.64
sd6      58409    22.90
sd7      42235     6.88
sd8      54967    24.46

At the bdev level, 8-raidz shows much better turnaround time than 7-raidz, while disks 1, 3, 5, 7 (larger writes) are better than 2, 4, 6, 8 (smaller writes).

So 8-raidz wins by larger writes and a much better response time for each write, but why these two differences? And why the disparity between odd- and even-numbered disks within 8-raidz?

Tao
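For anyone wanting to reproduce this kind of per-device data, here is a rough, untested sketch using the DTrace io provider; it aggregates write sizes (in bytes, as a power-of-two distribution) and the average start-to-completion time per device in milliseconds:

    # dtrace -n '
        /* record issue time and size of every write sent to a block device */
        io:::start /args[0]->b_flags & B_WRITE/
        {
            start[arg0] = timestamp;
            @sizes[args[1]->dev_statname] = quantize(args[0]->b_bcount);
        }
        /* on completion, aggregate the average service time per device, in ms */
        io:::done /start[arg0]/
        {
            @avgms[args[1]->dev_statname] =
                avg((timestamp - start[arg0]) / 1000000);
            start[arg0] = 0;
        }'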
Re[2]: [zfs-discuss] ZFS support for USB disks 120GB Western Digital
Hello Artem,

Thursday, July 20, 2006, 12:37:06 AM, you wrote:

AK> I've updated the blog entry, no hacking around is necessary anymore. 'svcadm
AK> disable volfs' is still recommended (vold/volfs will be completely removed
AK> soon). USB is just an interface board slapped on the disk, there are disks with
AK> both USB and SATA interfaces - you can connect with either cable and it won't
AK> make a difference to ZFS because it tracks disks by devid, not device name.
AK> So what you do with your disks is really up to you. If you just want a
AK> single-disk zpool (though ZFS doesn't favor that), then probably 'zpool export'
AK> before disconnecting and 'zpool import' after connecting will do it for you.

If you don't do it (zpool export), it will probably end up with a system panic right now.

--
Best regards,
Robert                          mailto:[EMAIL PROTECTED]
                                http://milek.blogspot.com