Re: [zfs-discuss] Preferred backup s/w
On Thu, 2008-02-21 at 21:00 +, Gavin Maltby wrote:
> On 02/21/08 16:31, Rich Teer wrote:
> >
> > What is the current preferred method for backing up ZFS data pools,
> > preferably using free ($0.00) software, and assuming that access to
> > individual files (a la ufsbackup/ufsrestore) is required?
>
> For home use I am making very successful use of zfs incremental send
> and receive.  A script decides which filesystems to back up (based
> on a user property retrieved by zfs get) and snapshots the filesystem;
> it then looks for the last snapshot that the pool I'm backing up and
> the pool I'm backing up to have in common, and does a
> zfs send -i | zfs receive over that.

We're using a perl script which uses zfs incremental send/recv, and it
works pretty well for our purposes.  However, I hear [1] that these
commands will only run on an idle thread, so get enough cores in the
boxes at both ends to handle any processing demands whilst they are
running.

> Backups are pretty quick since there is not a huge amount of churn in
> the filesystems, and on my backup disks I have browsable access to
> snapshots of my data from every backup I have run.

I also leave the snapshots visible (zfs set snapdir=visible) on the
fileservers so that users can retrieve old versions of their files if
they need to.

HTH,
Chris

[1] http://www.joyeur.com/2008/01/22/bingodisk-and-strongspace-what-happened
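For reference, a minimal sketch of the incremental send/receive approach
described above (the pool and dataset names, the snapshot naming scheme,
and the use of "zfs receive -F" are assumptions for illustration; this is
not the script from either site):

  #!/bin/bash
  # Incremental backup sketch: snapshot the source, find the newest
  # snapshot the source and backup datasets share, then send only the
  # changes made since that snapshot.
  SRC=tank/home        # assumed source dataset
  DST=backup/home      # assumed backup dataset
  NEW="backup-$(date +%Y%m%d%H%M)"

  zfs snapshot "$SRC@$NEW"

  # Most recent snapshot name common to both sides.
  COMMON=$(comm -12 \
      <(zfs list -H -o name -t snapshot -r "$SRC" | sed 's/.*@//' | sort) \
      <(zfs list -H -o name -t snapshot -r "$DST" | sed 's/.*@//' | sort) |
      tail -1)

  # A real script would fall back to a full "zfs send" if $COMMON is empty.
  zfs send -i "$SRC@$COMMON" "$SRC@$NEW" | zfs receive -F "$DST"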
Re: [zfs-discuss] iscsi core dumps when under IO
Stephen,

> I am getting a strange issue when using zfs/iscsi shares out of it.
>
> when I have attached a cent os 5 initiator to the zfs target it
> works fine normally until i start doing heavy 100MB/s+ copies to a
> separate cifs/nfs export on the same zfs pool.
>
> the error I am getting is:
>
> [ Feb 20 10:41:07 Stopping because process dumped core. ]
> [ Feb 20 10:41:07 Executing stop method ("/lib/svc/method/svc-iscsitgt stop 143") ]
>
> I was wondering if any one had any ideas. I am running 10U4 with all
> of the latest and greatest patches. Thank you.

There is a set of issues regarding the iSCSI Target under load that
have recently been resolved in Nevada.  We are looking at backporting
these changes to S10.

The nature of the failure appears to be an iSCSI Initiator seeing long
service times (in seconds) and triggering a LUN reset.  The LUN reset
causes all I/O specific to that LUN to be cleaned up.  Given the
multi-threaded nature of the iSCSI Target, the odds are pretty high
that cleanup could be needed from just about any I/O state, and some of
those states were not handled correctly.

The following command is likely to show the reason for the "process
dumped core" being an assert in the T10 state machine:

  # mdb /core
  > ::status
  > ::quit

Jim Dunham
Storage Platform Software Group
Sun Microsystems, Inc.

http://blogs.sun.com/avs
http://www.opensolaris.org/os/project/avs/
http://www.opensolaris.org/os/project/iscsitgt/
http://www.opensolaris.org/os/community/storage/
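As a usage note, such a session would look roughly like the following.
The ::status output shown here is abridged from Stephen's follow-up later
in this thread, and "svcs -xv iscsitgt" is only a suggested way to confirm
that the target service faulted and to locate its log file, not something
taken from the message above.

  # svcs -xv iscsitgt      # service state, reason for the fault, log file location
  # mdb /core
  > ::status
  status: process terminated by SIGABRT (Abort)
  panic message: Assertion failed: 0, file ../t10_sam.c, line 511
  > ::quit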
Re: [zfs-discuss] iscsi core dumps when under IO
It looks like you are probably right.  From mdb this is what I'm getting:

  file: /usr/sbin/amd64/iscsitgtd
  initial argv: /usr/sbin/iscsitgtd
  threading model: multi-threaded
  status: process terminated by SIGABRT (Abort)
  panic message: Assertion failed: 0, file ../t10_sam.c, line 511

Would this possibly be fixed in the S10U5 release?
Re: [zfs-discuss] Preferred backup s/w
Nicholas Brealey wrote:
> Jörg Schilling wrote:
>
>> If you like to still do incremental backups, I
>> recommend star.
>>
>> Jörg
>
> Can star backup and restore ZFS ACLs and extended attributes?

Including the new Windows ones that the CIFS server attaches??

  -Kyle
Re: [zfs-discuss] Preferred backup s/w
Jörg Schilling wrote:
>
> If you like to still do incremental backups, I
> recommend star.
>
> Jörg

Can star backup and restore ZFS ACLs and extended attributes?

Nick
[zfs-discuss] Creating ZFS home filesystems from Linux
Some time ago I posted this:
(http://mail.opensolaris.org/pipermail/zfs-discuss/2006-October/035351.html)
on zfs-discuss, and Darren J Moffat gave me the idea of using SSH to
create the home directories on the Solaris server.  So I implemented
that solution and posted the results on my blog:

http://www.posix.brte.com.br/blog/?p=102

To make things simpler for you :), the post describes the solution I
implemented to create (automatically) the user home directories (ZFS
filesystems) from Linux clients.  In a standard scenario pam_mkhomedir
does the job, but if we are using ZFS filesystems for quotas, snapshots,
etc., we need to create them from Linux too.

p.s.: I think in the future I will try to improve that solution to let
the Linux users take snapshots, roll back, etc.

But I did find some issues that I'm here to discuss with you:

- I could not find the "permission" that I need to give to the user so
  it can "chown" a ZFS filesystem.  I tried the two ZFS profiles,
  file_owner and file_chown, without luck.

The creation sequence is:

1) The user logs in on the Linux client.
2) The PAM stack runs an SSH session to the Solaris server to create
   the user home directory if it does not exist yet (using a "specific
   user").
3) The shell for that "specific user" does the filesystem creation task.
4) PROBLEM: this "specific user" cannot chown the new ZFS filesystem to
   the final user, so the user cannot write anything to the home
   directory.

Thanks a lot for your time.
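A minimal sketch of the server-side step (step 3 above), assuming the
creating account has been granted the necessary ZFS rights and that home
filesystems live under tank/home; the dataset name, quota value, and
mountpoint layout are assumptions for illustration, not details from the
post:

  #!/bin/bash
  # Create a per-user ZFS home filesystem if it does not exist yet.
  USER="$1"
  FS="tank/home/$USER"         # assumed dataset layout

  if ! zfs list -H "$FS" >/dev/null 2>&1; then
      zfs create "$FS"
      zfs set quota=10G "$FS"  # example quota

      # This is the step reported as failing above: handing the new
      # filesystem over to the end user needs chown privileges on the
      # server (the "permission" being asked about).
      chown "$USER" "/$FS"     # assumes the default mountpoint /tank/home/$USER
  fi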
Re: [zfs-discuss] Preferred backup s/w
Nicholas Brealey <[EMAIL PROTECTED]> wrote:
> Jörg Schilling wrote:
> >
> > If you like to still do incremental backups, I
> > recommend star.
> >
> > Jörg
>
> Can star backup and restore ZFS ACLs and extended attributes?

If star had appeared in Solaris earlier (see PSARC 480/2004), it would
most likely support them by now.  ZFS ACL support has been planned for
the time after star-1.5; star-1.5-final is on hold to allow minor
changes for the integration to be made first.

Jörg

-- 
EMail: [EMAIL PROTECTED] (home)  Jörg Schilling  D-13353 Berlin
       [EMAIL PROTECTED] (uni)
       [EMAIL PROTECTED] (work)
Blog:  http://schily.blogspot.com/
URL:   http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
Re: [zfs-discuss] ZFS commands sudden slow down, cpu spiked
Thanks.  But when this happens, any ZFS command will take forever to
complete, not just 'zpool import', though it might have been triggered
by the import action. (?)

By the way, the symptoms were first noticed not long after adding LUNs
from an IBM DS8100 array, but there are no error messages or complaints
about the LUNs on the OS or the array.  And now I suspect the symptoms
are showing up on the 2nd node of this 3-node cluster.

max
Re: [zfs-discuss] Preferred backup s/w
Kyle McDonald <[EMAIL PROTECTED]> wrote:
> Nicholas Brealey wrote:
> > Jörg Schilling wrote:
> >
> >> If you like to still do incremental backups, I
> >> recommend star.
> >>
> >> Jörg
> >
> > Can star backup and restore ZFS ACLs and extended attributes?
>
> Including the new Windows ones that the CIFS server attaches??

Where do you see a difference?

Jörg

-- 
EMail: [EMAIL PROTECTED] (home)  Jörg Schilling  D-13353 Berlin
       [EMAIL PROTECTED] (uni)
       [EMAIL PROTECTED] (work)
Blog:  http://schily.blogspot.com/
URL:   http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
Re: [zfs-discuss] Preferred backup s/w
On the advice of Joerg Schilling, and not knowing what 'star' was, I
decided to install it for testing.  Star uses a very unorthodox build
and install approach, so the person building it has very little control
over what it does.  Unfortunately I made the mistake of installing it
under /usr/local, where it decided to remove the GNU tar I had
installed there.  Star does not support traditional tar command line
syntax, so it can't be used with existing scripts.  Performance testing
showed that it was no more efficient than the 'gtar' which comes with
Solaris.  It seems that 'star' does not support an 'uninstall' target,
so now I am forced to remove it from my system manually.

It seems that the best way to deal with star is to install it into its
own directory so that it does not interfere with existing software.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
Re: [zfs-discuss] Preferred backup s/w
On Fri, 22 Feb 2008, Bob Friesenhahn wrote:
> where it decided to remove the GNU tar I had installed there. Star
> does not support traditional tar command line syntax so it can't be
> used with existing scripts. Performance testing showed that it was no
> more efficient than the 'gtar' which comes with Solaris. It seems

There is something I should clarify in the above.  Star is a stickler
for POSIX command line syntax, so syntax like 'tar -cvf foo.tar' or
'tar cvf foo.tar' does not work, but 'tar -c -v -f foo.tar' does work.

Testing with Star, GNU tar, and Solaris cpio showed that Star and GNU
tar were able to archive the content of my home directory with no
complaint, whereas Solaris cpio required specification of the 'ustar'
format in order to deal with long file and path names, as well as
large inode numbers.  Solaris cpio complained about many things with
my files (e.g. unresolved passwd and group info), but managed to
produce the highest throughput when archiving to a disk file.

I can not attest to the ability of these tools to deal with ACLs since
I don't use them.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
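To make the syntax distinction concrete, the three forms described above
behave like this when star is installed as the 'tar' on the PATH (the
archive name and directory are placeholders, and this simply restates the
report above rather than independently verified behavior):

  tar -cvf foo.tar mydir       # reported as rejected: bundled options
  tar cvf foo.tar mydir        # reported as rejected: old-style options, no leading dash
  tar -c -v -f foo.tar mydir   # reported as working: separated POSIX-style options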
[zfs-discuss] Jumpstart ZFS on Sparc?
I recall reading that ZFS boot/root will be possible in NV_86 on SPARC,
coming soon.  Do we expect to be able to define the disk setup in the
Jumpstart profile at that time, or will that come later?