Re: [zfs-discuss] Fileserver performance tests
Hi, compression is off. I've checked rw-performance with 20 simultaneous cp commands using the following:

#!/usr/bin/bash
for ((i=1; i<=20; i++))
do
  cp lala$i lulu$i &
done

(lala1-20 are 2 GB files) ...and ended up with 546 MB/s. Not too bad at all.
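For reference, a minimal sketch of the same test with end-to-end timing, so the aggregate figure can be derived directly; the file names and count follow the post above, and the throughput arithmetic assumes 2 GB per file:

#!/usr/bin/bash
# Hypothetical timed variant of the test above (assumes lala1..lala20 exist,
# each 2 GB, and that there is space for the copies).
start=$SECONDS
for ((i=1; i<=20; i++))
do
  cp lala$i lulu$i &
done
wait                                   # block until all 20 background copies finish
elapsed=$(( SECONDS - start ))
echo "copied 40 GB in ${elapsed}s (~$(( 40 * 1024 / elapsed )) MB/s aggregate)"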
Re: [zfs-discuss] Is this a bug or a feature ?
Hello eric,

Wednesday, October 10, 2007, 7:31:04 PM, you wrote:

ek> On Oct 10, 2007, at 11:23 AM, Bernhard Duebi wrote:
>> Hi everybody,
>>
>> I tested the following scenario:
>>
>> I have two machines attached to the same SAN LUN.
>> Both machines run Solaris 10 Update 4.
>> Machine A is active with zpool01 imported.
>> Machine B is inactive.
>> Machine A crashes.
>> Machine B imports zpool01.
>> Machine A comes back.
>>
>> Now the problem is that when machine A comes back, it imports
>> zpool01 even if it belongs to machine B now.
>> I've seen this problem some time ago in a blog, but don't remember
>> where. Will this be fixed?

ek> See:
ek> http://blogs.sun.com/erickustarz/entry/poor_man_s_cluster_end
ek> The changes are already in OpenSolaris, and will make it into s10u5.

In a way he could work around it today: manually import the pool with the -R option. It means that every time the server reboots you will have to import the pool manually, but at least you won't end up with two hosts using the same pool. You could also write a script that imports the pool with -R but without the force option, so during normal (clean) reboots the pool will still be imported automatically.

Or, if you already have two servers on a SAN, just grab Sun Cluster 3.2, which is free (if you don't need support), and use it. It works with no problems with zfs.

-- Best regards, Robert Milkowski mailto:[EMAIL PROTECTED] http://milek.blogspot.com
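As a rough illustration of the -R approach Robert describes (a hedged sketch only; the pool name zpool01 follows the original scenario):

#!/usr/bin/bash
# Run at boot on the head that should own the pool. -R sets an altroot and
# keeps the pool out of /etc/zfs/zpool.cache, so neither host auto-imports it
# after a crash. Without -f, the import fails if the other host still holds it.
zpool import -R / zpool01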
[zfs-discuss] Which SAS JBOD-enclosure
Hi all,

I am currently using two XStore XJ 1100 SAS JBOD enclosures (http://www.xtore.com/product_detail.asp?id_cat=11) attached to an x4200 for testing. So far it works rather nicely, but I am still looking for alternatives. The Infortrend JBOD expansions are not deliverable at the moment. What else is out there on the market?

Regards,
Tom
[zfs-discuss] zfs as zone root
Hello,

I surely made a mistake by configuring our zones with a zfs root: patching is no longer possible (without disabling the zones in /etc/zones/index) in S10u3! My questions are:

1. Does S10u4 have support for a zone root on zfs?
2. Will it be possible to patch my _existing_ zfs-rooted zones once such zones are supported?
3. Sun itself seems to recommend zfs for zone roots for easy cloning of zones!

I'm somewhat confused; can someone give me a hint how I should/could bring our servers up to a current patch level? (OK, this is somewhat Solaris-specific, but it depends on how zfs-rooted zones are handled by the update process.)

Thanks
Jan Dreyer
[zfs-discuss] Zone root on a ZFS filesystem and Cloning zones
Hi,

Does anyone have an update on the support of having a zone's root on a zfs filesystem with Solaris update 4? The only information that I have seen so far is that it was planned for late 2007 or early 2008.

Also, I was hoping to use the snapshot and clone capabilities of zfs to clone zones as a faster deployment method for new zones; is this supported, and if not, when is it likely to be supported?

Thanks

Tony
Re: [zfs-discuss] Zone root on a ZFS filesystem and Cloning zones
No, they aren't (i.e. zoneadm clone on S10u4 doesn't use zfs snapshots). I have a workaround I'm about to blog, the gist of which is: make the 'template' zone on zfs, boot it, configure it, etc., then:

zonecfg -z template detach
zfs snapshot tank/zones/[EMAIL PROTECTED]
zfs clone tank/zones/[EMAIL PROTECTED] tank/zones/clone
zonecfg -z clone 'create -a /zones/clone'
zoneadm -z clone attach

Will post the URL once I pull my finger out.

On 11/10/2007, Tony Marshall <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Does anyone have an update on the support of having a zone's root on a
> zfs filesystem with Solaris update 4? The only information that I have
> seen so far is that it was planned for late 2007 or early 2008.
>
> Also I was hoping to use the snapshot and clone capabilities of zfs to
> clone zones as a faster deployment method for new zones; is this
> supported, and if not, when is it likely to be supported?

-- Rasputin :: Jack of All Trades - Master of Nuns http://number9.hellooperator.net/
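A hedged follow-up once the attach succeeds (zone name as in the recipe above):

zoneadm -z clone boot
zlogin -C clone    # answer the sysid prompts on the zone console on first boot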
Re: [zfs-discuss] ZFS Space Map optimalization
> > Now space maps, intent log, spa history are compressed.
>
> All normal metadata (including space maps and spa history) is always
> compressed. The intent log is never compressed.

Can you tell me where the space map is compressed? The buffer is filled with:

*entry++ = SM_OFFSET_ENCODE(start) |
    SM_TYPE_ENCODE(maptype) |
    SM_RUN_ENCODE(run_len);

and later dmu_write is called.

I want to propose a few optimizations here:

- The space map block size should be dynamic (the 4 KB buffer is a bug). My space map on a thumper takes over 3.5 GB / 4 kB = 855k blocks.
- The space map should be compressed before dividing: 1. fill a larger block with data, 2. compress it, 3. divide it into blocks and then write.
- Another issue is memory usage: the space map uses "kmem_alloc_40" to allocate the in-memory space map. During the sync phase after removing a snapshot, kmem_alloc_40 takes over 13 GB of RAM and the system starts swapping.

My question is: when are you going to optimize the space map? We are having big problems here with ZFS due to space map size and fragmentation. We have had to lower the recordsize and disable the zil.
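If it helps to quantify the problem, one hedged way to look at per-metaslab space map sizes (assuming the zdb in your build accepts the -m/-mm metaslab options; the pool name is a placeholder):

zdb -mm mypool | less    # per-metaslab space map statistics, read-only and advisory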
Re: [zfs-discuss] [storage-discuss] server-reboot
Hi Claus,

Were you able to collect the core file? If so, please provide us with the core file so we can take a look. I can provide specific upload instructions offline.

thanks
Charles

eric kustarz wrote:

This looks like a bug in the sd driver (SCSI). Does this look familiar to anyone from the sd group?

eric

On Oct 10, 2007, at 10:30 AM, Claus Guttesen wrote:

Hi. Just migrated to zfs on opensolaris. I copied data to the server using rsync and got this message:

Oct 10 17:24:04 zetta ^Mpanic[cpu1]/thread=ff0007f1bc80:
Oct 10 17:24:04 zetta genunix: [ID 683410 kern.notice] BAD TRAP: type=e (#pf Page fault) rp=ff0007f1b640 addr=fffecd873000
Oct 10 17:24:04 zetta unix: [ID 10 kern.notice]
Oct 10 17:24:04 zetta unix: [ID 839527 kern.notice] sched:
Oct 10 17:24:04 zetta unix: [ID 753105 kern.notice] #pf Page fault
Oct 10 17:24:04 zetta unix: [ID 532287 kern.notice] Bad kernel fault at addr=0xfffecd873000
Oct 10 17:24:04 zetta unix: [ID 243837 kern.notice] pid=0, pc=0xfbbc1a9f, sp=0xff0007f1b730, eflags=0x10286
Oct 10 17:24:04 zetta unix: [ID 211416 kern.notice] cr0: 8005003b cr4: 6b8
Oct 10 17:24:04 zetta unix: [ID 354241 kern.notice] cr2: fffecd873000 cr3: 300 cr8: c
Oct 10 17:24:04 zetta unix: [ID 592667 kern.notice] rdi: fffecd872f80 rsi:a rdx: ff0007f1bc80
Oct 10 17:24:04 zetta unix: [ID 592667 kern.notice] rcx: 21 r8: 927454bc6fa r9: 927445906ba
Oct 10 17:24:04 zetta unix: [ID 592667 kern.notice] rax: 20 rbx: fffefef2ea40 rbp: ff0007f1b770
Oct 10 17:24:04 zetta unix: [ID 592667 kern.notice] r10: 79602 r11: fffecd872e18 r12: fffecd872f80
Oct 10 17:24:04 zetta unix: [ID 592667 kern.notice] r13: fffecd872f88 r14: 04209380 r15: fb84ce30
Oct 10 17:24:04 zetta unix: [ID 592667 kern.notice] fsb: 0 gsb: fffec1c31500 ds: 4b
Oct 10 17:24:04 zetta unix: [ID 592667 kern.notice] es: 4b fs:0 gs: 1c3
Oct 10 17:24:04 zetta unix: [ID 592667 kern.notice] trp: e err:0 rip: fbbc1a9f
Oct 10 17:24:04 zetta unix: [ID 592667 kern.notice] cs: 30 rfl:10286 rsp: ff0007f1b730
Oct 10 17:24:04 zetta unix: [ID 266532 kern.notice] ss: 38
Oct 10 17:24:04 zetta unix: [ID 10 kern.notice]
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1b520 unix:die+ea ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1b630 unix:trap+135b ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1b640 unix:_cmntrap+e9 ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1b770 scsi:scsi_transport+1f ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1b7f0 sd:sd_start_cmds+2f4 ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1b840 sd:sd_core_iostart+17b ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1b8a0 sd:sd_mapblockaddr_iostart+185 ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1b8f0 sd:sd_xbuf_strategy+50 ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1b930 sd:xbuf_iostart+103 ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1b960 sd:ddi_xbuf_qstrategy+60 ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1b9a0 sd:sdstrategy+ec ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1b9d0 genunix:bdev_strategy+77 ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1ba00 genunix:ldi_strategy+54 ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1ba50 zfs:vdev_disk_io_start+219 ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1ba70 zfs:vdev_io_start+1d ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1bab0 zfs:zio_vdev_io_start+123 ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1bad0 zfs:zio_next_stage_async+bb ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1baf0 zfs:zio_nowait+11 ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1bb50 zfs:vdev_queue_io_done+a5 ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1bb90 zfs:vdev_disk_io_done+29 ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1bbb0 zfs:vdev_io_done+1d ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1bbd0 zfs:zio_vdev_io_done+1b ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1bc60 genunix:taskq_thread+1a7 ()
Oct 10 17:24:04 zetta genunix: [ID 655072 kern.notice] ff0007f1bc70 unix:thread_start+8 ()
Oct 10 17:24:04 zetta unix: [ID 10 kern.notice]
Oct 10 17:24:04 zetta genunix: [ID 672855 kern.notice] syncing file systems...
Oct 10 17:24:04 zetta genunix: [ID 733762 kern.notice] 26
Oct 10 17:24:05 zetta genunix: [ID 733762 kern.notice] 3
Oct 10 17:24:0
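For Claus (or anyone hitting this), a rough sketch of locating the crash dump Charles is asking for, assuming default Solaris/OpenSolaris dump settings:

dumpadm                          # confirm the dump device and savecore directory
ls /var/crash/`hostname`         # look for unix.N / vmcore.N pairs from the panic
savecore                         # extract the dump if it is still sitting on the dump device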
Re: [zfs-discuss] Setting up a file server (NAS)
Ima wrote:
> Hi all,
> I have been reading ZFS discussion for a while now and I'm planning a small
> file server (to be used by only a few people). I'm fairly new to Solaris and
> OpenSolaris, and I'm thinking of using Solaris 10 08/07.
>
> I have a few questions I haven't been able to figure out yet, and would be
> grateful for any help that anyone can offer.
>
> My basic plan is to have a root file system, and several separate disks in a
> pool for ZFS.
>
> 1. For my root file system, I would like to have some redundancy. This file
> system wouldn't be ZFS, since ZFS boot isn't supported in Solaris 10 at the
> moment. I was thinking of using a RAID controller with two mirrored disks.
> Does this make sense? I would like replacing a failed disk to be as easy as
> possible, and I'm not sure how hard it would be to set up and maintain a
> software mirror of the root disks.
>
> 2. For the data (ZFS pool) disks, I have read that it makes sense to have
> two disk controllers if doing a mirror, so that at least one disk from each
> vdev is still online if a controller fails. Should I still have two
> controllers if I'm doing raidz2?

Is this a small machine, such as a typical PC with a single motherboard? If so, then don't worry about having multiple controllers, for at least two reasons:

1. some BIOSes won't allow boot access to more than one controller
2. the effect on availability is very small, because the reliability of modern controllers is very high, especially SAS/SATA controllers

Do worry about the disks themselves, as they should be the least reliable component.
-- richard

> 3. Can anyone recommend a PCI-Express SATA controller that will work with
> 64-bit x86 Solaris 10?
>
> Thanks a lot for any help you can provide, and for taking the time to read
> this :)
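On the software-mirror part of question 1, a rough SVM (Solaris Volume Manager) sketch of mirroring a UFS root, with hypothetical disk names c0t0d0/c0t1d0 and the assumption that both disks are partitioned identically:

prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2   # copy the label to the second disk
metadb -a -f -c 3 c0t0d0s7 c0t1d0s7                            # create state database replicas
metainit -f d11 1 1 c0t0d0s0                                   # submirror on the current root slice
metainit d12 1 1 c0t1d0s0                                      # submirror on the second disk
metainit d10 -m d11                                            # one-way mirror
metaroot d10                                                   # updates /etc/vfstab and /etc/system
lockfs -fa; init 6                                             # flush and reboot onto the mirror
metattach d10 d12                                              # attach the second half after the reboot

Replacing a failed half later is roughly a metadetach/metattach (or metareplace -e) on the affected submirror, which is comparable in effort to a hardware RAID rebuild.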
Re: [zfs-discuss] Zone root on a ZFS filesystem and Cloning zones
I'm working on getting an answer to this. We're looking at whether some of the changes to LiveUpgrade to enable zfs boot can be broken out and delivered separately to enable upgrade of zone roots on zfs. I hope to have more answers next week.

Lori

Tony Marshall wrote:
> Hi,
>
> Does anyone have an update on the support of having a zone's root on a
> zfs filesystem with Solaris update 4? The only information that I have
> seen so far is that it was planned for late 2007 or early 2008.
>
> Also I was hoping to use the snapshot and clone capabilities of zfs to
> clone zones as a faster deployment method for new zones; is this
> supported, and if not, when is it likely to be supported?
>
> Thanks
>
> Tony
[zfs-discuss] zfs: allocating allocated segment(offset=77984887808 size=66560)
How does one free segment(offset=77984887808 size=66560) on a pool that won't import?

Looks like I found:
http://bugs.opensolaris.org/view_bug.do?bug_id=6580715
http://mail.opensolaris.org/pipermail/zfs-discuss/2007-September/042541.html

When I luupgraded a ufs partition (a dvd-b62 install that had been bfu'd to b68) with a dvd of b74, it booted fine, and I was doing the same thing I had done on another machine (/usr can live on raidz if boot is ufs). During a zfs destroy -r z/snv_68 (with lzjb and {usr var opt} partitions) it crashed with:

Oct 11 14:28:11 nas ^Mpanic[cpu0]/thread=b4b6ee00: freeing free segment (vdev=1 offset=122842f400 size=10400)
824aabac genunix:vcmn_err+16 (3, f49966e4, 824aab)
824aabcc zfs:zfs_panic_recover+28 (f49966e4, 1, 0, 284)
824aac20 zfs:metaslab_free_dva+1d1 (82a5b980, 824aace0,)
824aac6c zfs:metaslab_free+90 (82a5b980, 824aace0,)
824aac98 zfs:zio_free_blk+2d (82a5b980, 824aace0,)
824aacb4 zfs:zil_free_log_block+20 (c314f440, 824aace0,)
824aad90 zfs:zil_parse+1aa (c314f440, f4974768,)
824aaddc zfs:zil_destroy+dd (c314f440, 0)
824aae00 zfs:dmu_objset_destroy+35 (8e6ef000)
824aae18 zfs:zfs_ioc_destroy+41 (8e6ef000, 5a18, 3, )
824aae40 zfs:zfsdev_ioctl+d8 (2d8, 5a18, 8046)
824aae6c genunix:cdev_ioctl+2e (2d8, 5a18, 8046)
824aae94 specfs:spec_ioctl+65 (8773eb40, 5a18, 804)
824aaed4 genunix:fop_ioctl+46 (8773eb40, 5a18, 804)
824aaf84 genunix:ioctl+151 (3, 5a18, 8046ab8, 8)

On reboot I then finished the zfs destroy -r z/snv_68 and ran:

zfs create z/snv_74
zfs create z/snv_74/usr
zfs create z/snv_74/opt
zfs create z/snv_74/var
zfs set compression=lzjb z/snv_74
cd /z/snv_74
ufsdump 0fs - 99 /usr /var /opt | ufsrestore -rf -

Oct 11 18:10:06 nas ^Mpanic[cpu0]/thread=87a61de0: zfs: allocating allocated segment(offset=77984887808 size=66560)
87a6185c genunix:vcmn_err+16 (3, f4571654, 87a618)
87a61874 zfs:zfs_panic_recover+28 (f4571654, 2842f400,)
87a618e4 zfs:space_map_add+13f (8cbc1e78, 2842f400,)
87a6196c zfs:space_map_load+27a (8cbc1e78, 8613b5b0,)
87a6199c zfs:metaslab_activate+44 (8cbc1c40, 0, 80)
87a619f4 zfs:metaslab_group_alloc+22a (8c8e4d80, 400, 0, 2)
87a61a80 zfs:metaslab_alloc_dva+170 (82a7b900, 86057bc0,)
87a61af0 zfs:metaslab_alloc+80 (82a7b900, 86057bc0,)
87a61b40 zfs:zio_dva_allocate+6b (88e56dc0)
87a61b58 zfs:zio_next_stage+aa (88e56dc0)
87a61b70 zfs:zio_checksum_generate+5e (88e56dc0)
87a61b84 zfs:zio_next_stage+aa (88e56dc0)
87a61bd0 zfs:zio_write_compress+2c8 (88e56dc0)
87a61bec zfs:zio_next_stage+aa (88e56dc0)
87a61c0c zfs:zio_wait_for_children+46 (88e56dc0, 1, 88e56f)
87a61c20 zfs:zio_wait_children_ready+18 (88e56dc0)
87a61c34 zfs:zio_next_stage_async+ac (88e56dc0)
87a61c48 zfs:zio_nowait+e (88e56dc0)
87a61c94 zfs:dmu_objset_sync+184 (85fe96c0, 88757ae0,)
87a61cbc zfs:dsl_dataset_sync+40 (813ad000, 88757ae0,)
87a61d0c zfs:dsl_pool_sync+a3 (8291c0c0, 286de2, 0)
87a61d6c zfs:spa_sync+1fc (82a7b900, 286de2, 0)
87a61dc8 zfs:txg_sync_thread+1df (8291c0c0, 0)
87a61dd8 unix:thread_start+8 ()

On second reboot it also panicked:

Oct 11 18:17:56 nas ^Mpanic[cpu1]/thread=8f334de0: zfs: allocating allocated segment(offset=77984887808 size=66560)
8f33485c genunix:vcmn_err+16 (3, f4571654, 8f3348)
8f334874 zfs:zfs_panic_recover+28 (f4571654, 2842f400,)
8f3348e4 zfs:space_map_add+13f (916a2278, 2842f400,)
8f33496c zfs:space_map_load+27a (916a2278, 829d25b0,)
8f33499c zfs:metaslab_activate+44 (916a2040, 0, 80)
8f3349f4 zfs:metaslab_group_alloc+22a (88ffb100, 400, 0, 2)
8f334a80 zfs:metaslab_alloc_dva+170 (82a7c980, 8ab851d0,)
8f334af0 zfs:metaslab_alloc+80 (82a7c980, 8ab851d0,)
8f334b40 zfs:zio_dva_allocate+6b (8f8286b8)
8f334b58 zfs:zio_next_stage+aa (8f8286b8)
8f334b70 zfs:zio_checksum_generate+5e (8f8286b8)
8f334b84 zfs:zio_next_stage+aa (8f8286b8)
8f334bd0 zfs:zio_write_compress+2c8 (8f8286b8)
8f334bec zfs:zio_next_stage+aa (8f8286b8)
8f334c0c zfs:zio_wait_for_children+46 (8f8286b8, 1, 8f8288)
8f334c20 zfs:zio_wait_children_ready+18 (8f8286b8)
8f334c34 zfs:zio_next_stage_async+ac (8f8286b8)
8f334c48 zfs:zio_nowait+e (8f8286b8)
8f334c94 zfs:dmu_objset_sync+184 (82ad32c0, 8f5ea480,)
8f334cbc zfs:dsl_dataset_sync+40 (8956b1c0, 8f5ea480,)
8f334d0c zfs:dsl_pool_sync+a3 (89ca5340, 286de2, 0)
8f334d6c zfs:spa_sync+1fc (82a7c980, 286de2, 0)
8f334dc8 zfs:txg_sync_thread+1df (89ca5340, 0)
8f334dd8 unix:thread_start+8 ()

Upgrading to:
Sun Microsystems Inc. SunOS 5.11 snv_75 Oct. 09, 2007
SunOS Internal Development: dm120769 2007-10-09 [onnv_75-tonic]
with debug and two cpus
cpu1: x86 (chipid 0x3 GenuineIntel F27 family 15 model 2 step 7 clock 3057 MHz)
kernelbase set to 0x8000, system is not i386 ABI compliant.
mem = 5242412K (0x3ff8b000)

got the same:

Oct 11 18:58:35 nas ^Mpanic[cpu0]/thread=95425de0: zfs: allocating allocated segment(offset=77984887808 size=66560)
954257dc genunix:vcmn_err+16 (3, f4c2bdfc, 954258)
954257f4 zfs:zfs_panic_recover+28 (f4c2bdfc, 2842f400,)
95425874 zfs:space_map_add+153 (94e6da38, 2842f400,)
954258fc zfs:space_map_load+2d8 (94e6da38, 8ee
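One unsupported workaround that has been suggested for these "freeing free segment" / "allocating allocated segment" panics (the stacks above go through zfs_panic_recover) is to convert the panic into a warning via /etc/system so the pool can at least be imported and its data evacuated. This is a hedged sketch only, at your own risk, and a backup should come first:

* /etc/system additions (a reboot is required for them to take effect)
set zfs:zfs_recover=1
set aok=1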
[zfs-discuss] ZFS on EMC Symmetrix
If anyone is running this configuration, I have some questions for you about Page83 data errors.