Re: [zfs-discuss] ZFS very slow under xVM
Mitchell,

The problem seems to occur with various I/O patterns. I first noticed it after using ZFS-based storage for a disk image for an xVM/Xen virtual domain, and then, while tracking it down, observed that a "cp" of a large .iso disk image would reproduce the problem, and later that even a single "dd if=/dev/zero of=myfile bs=16k count=15" would. So I guess this latter case is a mostly-write pattern to the disk, especially since the command returns after around 5 seconds, leaving the rest buffered in memory.

Best regards,
Martin
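For anyone trying to reproduce this, a minimal sketch of the write-mostly test described above (the pool name "tank" and the /tank/images path are placeholders; the dd parameters are the ones quoted above):

  # "tank" and /tank/images are placeholders for the pool and dataset mountpoint
  zpool iostat tank 1 &                                    # watch pool throughput while the test runs
  dd if=/dev/zero of=/tank/images/myfile bs=16k count=15   # returns quickly, data left buffered
  sync                                                     # flush whatever dd left buffered in memory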
Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers
Ed Saipetch wrote:
> To answer a number of questions:
>
> Regarding different controllers, I've tried 2 Syba Sil 3114 controllers purchased about 4 months apart. I've tried 5.4.3 firmware with one and 5.4.13 with another. Maybe Syba makes crappy Sil 3114 cards but it's the same one that someone on blogs.sun.com used with success. I had weird problems flashing the first card I got, hence the order of another one. I'm not sure how I could get 2 different controllers 4 months apart and then use them in 2 completely different computers and both controllers be bad.

Another data point: I run two SiI 3114 based cards in my home fileserver running s10u3. I was having ZFS data corruption issues and I suspected the SiI cards - that was until I replaced the motherboard/CPU/memory. I didn't have the time or patience to determine which component was at fault, but I swapped the motherboard/CPU/memory, stressed it for a few hours, and the data corruption problem was gone. Before that, I was seeing data corruption issues within minutes. Maybe it was just memory, but I'll never know; I junked the old kit after I confirmed I had eliminated the problem.

grant.
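For anyone chasing similar hardware suspects, a quick sketch of how to confirm whether corruption is still accumulating after a component swap (the pool name is a placeholder):

  zpool scrub tank        # re-read and verify the checksum of every block in the pool
  zpool status -v tank    # check the CKSUM counters and any files listed with unrecoverable errors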
Re: [zfs-discuss] zpool.cache
On Sat, Nov 03, 2007 at 05:58:17PM -0700, Denis wrote:
> I am not seeing this behavior. But I forgot to mention that I am using FreeBSD. Maybe pawel missed something.

I implemented something similar to devids in FreeBSD, but not everything supports it currently, and some things can't support it at all.

'zpool import' should recreate zpool.cache automatically. In my perforce branch, I improved this part of ZFS: if a component can't be found by using its path and/or devid, ZFS will try a more forcible method - it will read the metadata of each GEOM provider (disk-like device) in the system and find the component using the ZFS metadata. I'd suggest doing the same in OpenSolaris.

--
Pawel Jakub Dawidek              http://www.wheel.pl
[EMAIL PROTECTED]                http://www.FreeBSD.org
FreeBSD committer                Am I Evil? Yes, I Am!
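For anyone who has lost the cache file, a minimal sketch of rebuilding it by re-importing (the pool name and device directory are placeholders; on OpenSolaris the cache file is /etc/zfs/zpool.cache):

  # an import scans devices, matches them against their ZFS labels, and rewrites the cache file
  zpool import                      # list pools found by scanning the default device paths
  zpool import tank                 # import the pool and regenerate its cache entry
  zpool import -d /dev/dsk tank     # or restrict the scan to a specific device directory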
Re: [zfs-discuss] Recommended many-port SATA controllers for budget ZFS
> Marvell controllers work great with solaris.
>
> Supermicro AOC-SAT2-MV8 is what I currently use. I bought it on recommendation from this list actually. I think I paid $110 for mine.

Yeah, I have one of these, and they're nice. Problem is that (1) they are PCI-X (thus not compatible with all PCI slots/motherboards), and (2) they are not PCI Express ;) The one I am using has been great so far though (on FreeBSD; never got a chance to try it on Solaris).

--
/ Peter Schuller

PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [EMAIL PROTECTED] Web: http://www.scode.org
Re: [zfs-discuss] Recommended many-port SATA controllers for budget ZFS
> Your best bet is to call Tech Support and not Sales. I've found LSI tech support to be very responsive to individual customers.

Thanks, I'll try them. I eventually noticed you can actually get their number under the "LSI offices" category of their find-your-contact web form, which otherwise looked like a reseller inventory.

> I recommend the SuperMicro card - but that is PCI-X and I think you're looking for PCI-Express?

PCI is okay and nice, PCI-Express is nicer. PCI-X I don't want, since it is only semi-compatible with PCI. E.g. the Marvell I have now works in one machine but not in another.

> works well with ZFS (SATA or SAS drives). The newer cards are less expensive - but it's not clear from the LSI website if they support JBOD operation or if you can form a "mirror" or "stripe" using only one drive and present it to ZFS as a single drive.

I am okay with a one-disk mirror/stripe in the worst case, as long as cache flushes and such get passed through. I would definitely prefer JBOD though, since single-disk virtual volumes tend to cause some additional headaches (like having two levels of volume management).

> Please let us know what you find out...

If I get anything confirmed from LSI I'll post an update.

--
/ Peter Schuller

PGP userID: 0xE9758B7D or 'Peter Schuller <[EMAIL PROTECTED]>'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [EMAIL PROTECTED] Web: http://www.scode.org
Re: [zfs-discuss] Force SATA1 on AOC-SAT2-MV8
Eric Haycraft wrote:
> The drives (6 in total) are external (eSATA) ones, so they have their own enclosure that I can't open without voiding the warranty... I destroyed one enclosure trying out ways to get it to work and learned that there was no way to open them up without wrecking the case :(
>
> I have 2 meter SATA to eSATA cables.
>
> The drives are 750GB FreeAgent Pro USB/eSATA drives from Seagate.
>
> Thanks for your help.

IIRC, eSATA has different signalling specifications from (internal) SATA (higher voltages, for example). This would mean that a (passive) SATA-to-eSATA adapter on a SATA2 card could present its own issues.

Rob++

"They couldn't hit an elephant at this distance." -- Major General John Sedgwick
Re: [zfs-discuss] ZFS and Veritas Cluster Server
Nathan Dietsch wrote:
> Hello All,
>
> I am working with a customer on a solution where ZFS looks very promising. The solution requires disaster recovery, and the chosen technology for providing DR of services in this organisation is Veritas Cluster Server.
>
> Has anyone implemented ZFS with Veritas Cluster Server to provide high availability for ZFS pools and datasets? I understand that Sun Cluster is a better product for use with ZFS, but it is not supported within the organisation and is not available for use within the proposed solution.
>
> I am specifically looking for information on implementation experiences and failover testing with ZFS and VCS.
>
> Furthermore, if anyone has implemented ZFS on SRDF, I would also be interested in hearing about those implementation experiences.
>
> Any and all input would be most appreciated.

Unfortunately, VxFS is still the best way to go with Veritas Cluster in an HA environment -- ZFS cannot go active-active with the same filesystem on two nodes. Since you mentioned DR, you can use VVR and go (active-active)<-VVR->(active-active) and write to the "same" filesystem on four nodes (assuming synchronous locking doesn't bottleneck your I/O). ZFS is good stuff, but it can't replace VxFS/VVR (yet). VxFS has a few years' head start. :)

Rob++

"They couldn't hit an elephant at this distance." -- Major General John Sedgwick
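If someone does put ZFS under VCS, a rough sketch of the failover steps a custom application agent would typically script (the pool name "space" is a placeholder, and this only covers the active-passive model, not active-active):

  # offline entry point, run on the node releasing the service group
  zpool export space

  # online entry point, run on the node taking over
  zpool import space       # add -f only if the previous node is known to be dead
  zfs mount -a             # datasets normally mount during import; this catches stragglers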
[zfs-discuss] How to clear an old ZFS fault?
The last two faults shown by `fmdump' are:

  TIME                 UUID                                 SUNW-MSG-ID
  ...
  Apr 27 18:51:52.7736 0c4bc0d7-59ff-6707-b306-8458c0e1626f SUNOS-8000-1L
  May 02 16:09:04.4966 e6c41816-5505-c31f-f9da-d81cdac50e21 ZFS-8000-CS

We had an incident in May when the SAN went away for about half an hour, taking ZFS with it. A reboot afterwards brought everything back to normal, including ZFS. This is on a T2000 running Solaris 10 11/06.

Here's what `fmadm' says:

  # fmadm faulty
     STATE RESOURCE / UUID
  -------- ----------------------------------------------------------------------
  degraded zfs://pool=space
           e6c41816-5505-c31f-f9da-d81cdac50e21
  -------- ----------------------------------------------------------------------

The fault light is on on the T2000. Is it safe to run `fmadm repair' to clear the fault and the light?

--
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
Re: [zfs-discuss] Status of Samba/ZFS integration
Razvan Corneliu VILT wrote:
> This sounds like the right solution to my problem, in that it solves a few problems, but I am rather curious about how it would integrate with a potential Samba server running on the same system (in case someone needs a domain controller as well as a fileserver).
>
> 1 - Samba can store the DOS attributes of a file in an xattr. Can sharesmb do that? If so, is it compatible with Samba?
> 2 - Related to that, are resource forks/xattrs/alternate data streams supported?
> 3 - How do I set share ACLs (allowed users, and their rights)?
> 4 - How do I set the share name?
> 5 - Will it support the SMB2 protocol?
> 5b - Will it work over IPv6?
> 6 - Is Shadow Copy supported (using ZFS snapshots)?
> 7 - How will it map nss users to domain users? Will it be able to connect to Winbind?
> 8 - Kerberos authentication support?
> 9 - Will it support the NT privileges? I could select a normal user on my network, and with a simple "net rpc rights grant" of SeBackupPrivilege and SeRestorePrivilege, ACLs can be overridden by that user in a Windows environment. A user of the sharesmb service might expect that.
>
> In my personal case, I need 1, 2, 3, 4, 6, 7, 8 and 9. And I am sure that more will come up, as these are just the ones that came to my mind right now.
>
> Anyway, congratulations on the sharesmb thing. If it has a flexible/configurable implementation (for the ones with complex rules in their environment), but with sane defaults (for normal users), it will be a hit.
>
> Cheers,
> Razvan

You might find this presentation of interest. It was presented at the CIFS workshop recently:

http://us1.samba.org/samba/ftp/slides/cifs-workshop-2007/cifs_workshop_2007_09_27.pdf

It would be best to ask questions about the features of the CIFS server on [EMAIL PROTECTED]

-Mark
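On the share-name question (item 4), a rough sketch of how the sharesmb property is expected to be used (the dataset and share names are placeholders, and the exact option syntax may change as the CIFS project integrates):

  zfs set sharesmb=on tank/export                   # share the dataset under an auto-generated name
  zfs set sharesmb=name=engineering tank/export     # or give the share an explicit name
  zfs get sharesmb tank/export                      # verify the property setting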
Re: [zfs-discuss] How to clear an old ZFS fault?
Yes, it should be safe to do so.

- Eric

On Sun, Nov 04, 2007 at 08:21:26AM -0600, Gary Mills wrote:
> The last two faults shown by `fmdump' are:
>
>   TIME                 UUID                                 SUNW-MSG-ID
>   ...
>   Apr 27 18:51:52.7736 0c4bc0d7-59ff-6707-b306-8458c0e1626f SUNOS-8000-1L
>   May 02 16:09:04.4966 e6c41816-5505-c31f-f9da-d81cdac50e21 ZFS-8000-CS
>
> We had an incident in May when the SAN went away for about half an hour, taking ZFS with it. A reboot afterwards brought everything back to normal, including ZFS. This is on a T2000 running Solaris 10 11/06.
>
> Here's what `fmadm' says:
>
>   # fmadm faulty
>      STATE RESOURCE / UUID
>   -------- ----------------------------------------------------------------------
>   degraded zfs://pool=space
>            e6c41816-5505-c31f-f9da-d81cdac50e21
>   -------- ----------------------------------------------------------------------
>
> The fault light is on on the T2000. Is it safe to run `fmadm repair' to clear the fault and the light?
>
> --
> -Gary Mills--Unix Support--U of M Academic Computing and Networking-

--
Eric Schrock, FishWorks                    http://blogs.sun.com/eschrock
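For the archives, a short sketch of clearing the record (the UUID is the one shown in the fmadm faulty output above):

  fmadm repair e6c41816-5505-c31f-f9da-d81cdac50e21   # mark the faulted resource as repaired
  fmadm faulty                                        # should now list no degraded resources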
Re: [zfs-discuss] Status of Samba/ZFS integration
Does anyone know whether the following (copied from Wikipedia) is true or not?

"Solaris has a project called CIFS client for Solaris, based on the Mac OS X smbfs."

Rayson

On Nov 4, 2007 9:34 AM, Mark Shellenbaum <[EMAIL PROTECTED]> wrote:
> Razvan Corneliu VILT wrote:
> > This sounds like the right solution to my problem, in that it solves a few problems, but I am rather curious about how it would integrate with a potential Samba server running on the same system (in case someone needs a domain controller as well as a fileserver).
> >
> > 1 - Samba can store the DOS attributes of a file in an xattr. Can sharesmb do that? If so, is it compatible with Samba?
> > 2 - Related to that, are resource forks/xattrs/alternate data streams supported?
> > 3 - How do I set share ACLs (allowed users, and their rights)?
> > 4 - How do I set the share name?
> > 5 - Will it support the SMB2 protocol?
> > 5b - Will it work over IPv6?
> > 6 - Is Shadow Copy supported (using ZFS snapshots)?
> > 7 - How will it map nss users to domain users? Will it be able to connect to Winbind?
> > 8 - Kerberos authentication support?
> > 9 - Will it support the NT privileges? I could select a normal user on my network, and with a simple "net rpc rights grant" of SeBackupPrivilege and SeRestorePrivilege, ACLs can be overridden by that user in a Windows environment. A user of the sharesmb service might expect that.
> >
> > In my personal case, I need 1, 2, 3, 4, 6, 7, 8 and 9. And I am sure that more will come up, as these are just the ones that came to my mind right now.
> >
> > Anyway, congratulations on the sharesmb thing. If it has a flexible/configurable implementation (for the ones with complex rules in their environment), but with sane defaults (for normal users), it will be a hit.
> >
> > Cheers,
> > Razvan
>
> You might find this presentation of interest. It was presented at the CIFS workshop recently:
>
> http://us1.samba.org/samba/ftp/slides/cifs-workshop-2007/cifs_workshop_2007_09_27.pdf
>
> It would be best to ask questions about the features of the CIFS server on [EMAIL PROTECTED]
>
> -Mark
Re: [zfs-discuss] Status of Samba/ZFS integration
Rayson Ho wrote:
> Does anyone know whether the following (copied from Wikipedia) is true or not?
>
> "Solaris has a project called CIFS client for Solaris, based on the Mac OS X smbfs."
>
> Rayson

Yes, that is true.

http://www.opensolaris.org/os/project/smbfs/

-Mark
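For context, the client side of that project is about letting Solaris mount CIFS shares directly; a rough sketch of the intended usage (server, user, share, and mount point are all placeholders, and the exact syntax may differ as the project matures):

  mount -F smbfs //myuser@fileserver/share /mnt   # mount a share served by a Windows or Samba host
  ls /mnt                                         # browse it like any other filesystem
  umount /mnt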
Re: [zfs-discuss] Status of Samba/ZFS integration
On Nov 4, 2007, at 00:42, MC wrote:
> ZFS has an SMB server on the way, but there has been no real public information about it released. Here is a sample of its existence:
> http://www.opensolaris.org/os/community/arc/caselog/2007/560/

There's been a putback:

http://blogs.sun.com/amw/entry/cifs_in_solaris
Re: [zfs-discuss] Status of Samba/ZFS integration
On 11/3/07, Razvan Corneliu VILT <[EMAIL PROTECTED]> wrote:
> This sounds like the right solution to my problem, in that it solves a few problems, but I am rather curious about how it would integrate with a potential Samba server running on the same system (in case someone needs a domain controller as well as a fileserver).
>
> 1 - Samba can store the DOS attributes of a file in an xattr. Can sharesmb do that? If so, is it compatible with Samba?
...

The best description I have seen so far is at http://blogs.sun.com/amw/entry/cifs_in_solaris. Based upon what I see there, OpenSolaris is getting capability that will once again surpass the capabilities of the competition. Not to belittle the advances in DTrace, ZFS, SMF, etc., but the integration of CIFS seems to be a game changer in determining which open-source OS is the best for file serving. Indeed, this would not be the case without the combination of ZFS, NFSv4, AVS, etc.

Once NDMP and COMSTAR are in place, it looks as though the "core" parts will be complete. Hopefully this will all come together through administrative tools that make cross-platform (*nix, Windows) and cross-protocol (CIFS, NFS, iSCSI, FC) file and block serving with remote replication seem intuitive. Kinda makes you understand why NetApp no longer feels that they can compete on features + ease of use.

--
Mike Gerdts
http://mgerdts.blogspot.com/
Re: [zfs-discuss] HAMMER
On 10/16/07, Dave Johnson <[EMAIL PROTECTED]> wrote:
> does anyone actually *use* compression? i'd like to see a poll on how many people are using (or would use) compression on production systems that are larger than your little department catch-all dumping ground server.

We don't use compression on our thumpers - they're mostly for image storage, where the original (e.g. JPEG) is already compressed. What will be interesting is to look at the effect of compression on the attribute files (largely text and XML) as we start to deploy ZFS there as well.

> i mean, unless you had some NDMP interface directly to ZFS, daily tape backups for any large system will likely be an exercise in futility unless the systems are largely just archive servers, at which point it's probably smarter to perform backups less often, coinciding with the workflow of migrating archive data to it. otherwise wouldn't the system just plain get pounded?

I'm not worried about the compression effect. Where I see problems is in backing up millions or tens of millions of files in a single dataset. Backing up each file is essentially a random read (and this isn't helped by raidz, which gives you a single disk's worth of random-read I/O per vdev). I would love to see better ways of backing up huge numbers of files.

--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
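One dataset-level alternative worth noting (the pool, dataset, and host names below are placeholders) is snapshot replication, which streams blocks rather than walking every file, so the per-file random-read problem largely disappears:

  zfs snapshot tank/images@mon                                           # cheap, instantaneous snapshot
  zfs send tank/images@mon | ssh backuphost zfs receive backup/images    # full stream of the snapshot
  # later, ship only the blocks changed since the previous snapshot
  zfs snapshot tank/images@tue
  zfs send -i tank/images@mon tank/images@tue | ssh backuphost zfs receive backup/images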
[zfs-discuss] SXDE vs Solaris 10u4 for a home file server
Hi everyone,

I think this post may be slightly off-topic and I apologize, but I'm not sure where the best place to ask is. I'm setting up a home file server, which will mainly just consist of a ZFS pool and access with Samba. I'm not sure if I should use SXDE for this, or Solaris 10u4. Does SXDE offer any ZFS improvements over 10u4 for this purpose? My hardware is supported under both platforms.

Additionally, with SXDE I worry that I may spend more time maintaining the OS, and about the availability of upgrades for it over the next 5-10 years, so I'm not really sure which would be better in the long run.

In any case, thanks a lot for any help anyone can offer :)
Re: [zfs-discuss] SXDE vs Solaris 10u4 for a home file server
On 04/11/2007, Ima <[EMAIL PROTECTED]> wrote:
> I'm setting up a home file server, which will mainly just consist of a ZFS pool and access with Samba. I'm not sure if I should use SXDE for this, or Solaris 10u4. Does SXDE offer any ZFS improvements over 10u4 for this purpose?

I'd be inclined to go for SXCE rather than SXDE myself - mainly because there are good things around the corner (CIFS integration being the obvious one for a NAS) that you'll be able to try out sooner that way.

> My hardware is supported under both platforms. Additionally, with SXDE I worry that I may spend more time maintaining the OS, and about the availability of upgrades for it over the next 5-10 years, so I'm not really sure which would be better in the long run.

For a home NAS, I wouldn't worry much about maintenance taking a lot of time. It's up to you whether you need a bleeding-edge feature or not, but it's nice to have the option. My router takes much less work than a server would, but only because Linksys are slackers when it comes to firmware updates; the kernel/firewall it's built with must be horribly outdated by now.

--
Rasputnik :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
Re: [zfs-discuss] MySQL benchmark
Robin Harris, a ZFS fanboy, mentioned this benchmark on his blog, and since I'm not a great fan of fanboys (though to some degree I *am* a fan of ZFS) I responded in some detail (several times, since he's not easily disabused of a misconception once he's latched onto it):

http://storagemojo.com/2007/10/30/flash-performance-on-a-nokia-n800/#comments

It is of course possible that material omitted from Duncan's presentation would place the situation in a different light: if anyone wants to bring the above criticism to his attention, I'd be interested in seeing how he'd respond to it.

- bill
Re: [zfs-discuss] Yager on ZFS
Having gotten a bit tired of the level of ZFS hype floating around these days (especially that which Jonathan has chosen to associate with his spin surrounding the fracas with NetApp), I chose to respond to that article yesterday. I did attempt to be fair, and would appreciate feedback if anything I said was not (since I would not wish to repeat it elsewhere and would be happy to correct it there).

- bill
Re: [zfs-discuss] ZFS and Veritas Cluster Server
See maybe http://mail.opensolaris.org/pipermail/zfs-discuss/2007-April/039419.html

Cheers,
/d

2007/11/4, Nathan Dietsch <[EMAIL PROTECTED]>:
> Hello All,
>
> I am working with a customer on a solution where ZFS looks very promising. The solution requires disaster recovery, and the chosen technology for providing DR of services in this organisation is Veritas Cluster Server.
>
> Has anyone implemented ZFS with Veritas Cluster Server to provide high availability for ZFS pools and datasets? I understand that Sun Cluster is a better product for use with ZFS, but it is not supported within the organisation and is not available for use within the proposed solution.
>
> I am specifically looking for information on implementation experiences and failover testing with ZFS and VCS.
>
> Furthermore, if anyone has implemented ZFS on SRDF, I would also be interested in hearing about those implementation experiences.
>
> Any and all input would be most appreciated.
>
> Kind Regards,
>
> Nathan Dietsch

--
Dominic Kay
+44 780 124 6099