Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-19 Thread Jerry K
+1 for zfsdump/zfsrestore. Julian Regel wrote: When we brought it up last time, I think we found no one knows of a userland tool similar to 'ufsdump' that's capable of serializing a ZFS along with holes, large files, "attribute" forks, Windows ACLs, and checksums of its own, and then rest
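For reference, zfs send already serializes a filesystem (holes, attribute forks, ACLs) into a stream carrying its own checksums; what it lacks is ufsdump-style selective restore. A minimal sketch of stream-to-file backup with a sanity check (pool/dataset names are hypothetical; zstreamdump assumes a recent build):

    zfs snapshot tank/home@backup1
    zfs send tank/home@backup1 > /backup/tank-home-backup1.zfs
    # verify the stream parses cleanly before trusting it as a backup
    zstreamdump -v < /backup/tank-home-backup1.zfs | tail -3
    # restore is all-or-nothing, back into a dataset
    zfs receive tank/restored < /backup/tank-home-backup1.zfs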

[zfs-discuss] zpool split

2010-01-06 Thread Jerry K
zpool split http://blogs.sun.com/mmusante/entry/seven_years_of_good_luck I came across this around noon today, originally on http://c0t0d0s0.org. More here: http://opensolaris.org/jive/thread.jspa?threadID=113685&tstart=60 Too bad this probably won't make it to the final release of OpenSola
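For anyone who hasn't followed the links yet: zpool split detaches one half of every mirror in a pool and turns those disks into a new, independent pool. A minimal sketch, assuming a mirrored pool named rpool (newpool is a hypothetical name):

    zpool split -n rpool newpool   # dry run: show what would be split off
    zpool split rpool newpool      # actually split; newpool is left exported
    zpool import newpool           # bring the new pool online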

[zfs-discuss] ZFS filesystem size mismatch

2010-01-05 Thread Nils K . Schøyen
A ZFS file system reports 1007GB being used (df -h / zfs list). When doing a 'du -sh' on the filesystem root, I only get approx. 300GB, which is the correct size. The file system became full during Christmas and I increased the quota from 1 to 1.5 to 2TB and then decreased to 1.5TB. No reservatio
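A gap this large between df and du is almost always space pinned by snapshots (or clones), which du of the live tree cannot see. A quick hedged check (pool/fs stands in for the real dataset name; the usedby* properties need a reasonably recent pool version):

    zfs get usedbydataset,usedbysnapshots,usedbychildren,usedbyrefreservation pool/fs
    zfs list -t snapshot -o name,used -s used -r pool/fs   # biggest snapshots last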

Re: [zfs-discuss] ZFS Dedupe reporting incorrect savings

2009-12-17 Thread Giridhar K R
I used the default while creating the zpool with one disk drive. I guess it is a RAID 0 configuration. Thanks, Giri

Re: [zfs-discuss] ZFS Dedupe reporting incorrect savings

2009-12-17 Thread Giridhar K R
> Hi Giridhar, > > The size reported by ls can include things like holes > in the file. What space usage does the zfs(1M) > command report for the filesystem? > > Adam > > On Dec 16, 2009, at 10:33 PM, Giridhar K R wrote: > > > Hi, > > >

[zfs-discuss] ZFS Dedupe reporting incorrect savings

2009-12-16 Thread Giridhar K R
Hi, Reposting as I have not gotten any response. Here is the issue. I created a zpool with 64k recordsize and enabled dedup on it: -->zpool create -O recordsize=64k TestPool device1 -->zfs set dedup=on TestPool I copied files onto this pool over NFS from a Windows client. Here is the output o
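For readers comparing the numbers: ls reports logical file length, while dedup savings only appear in pool-wide accounting. A sketch of where to look, using the pool from the post (the DEDUP ratio column assumes a dedup-capable build, b128 or later):

    zpool list TestPool      # ALLOC and the DEDUP ratio are post-dedup, pool-wide
    zfs list TestPool        # USED here is logical, before dedup
    ls -l /TestPool          # file length only; says nothing about disk blocks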

Re: [zfs-discuss] ZFS Dedupe reporting incorrect savings

2009-12-15 Thread Giridhar K R
As I have noted above after editing the initial post, it's the same locally too. >> I found that the "ls -l" on the zpool also reports 51,193,782,290 bytes

[zfs-discuss] ZFS Dedupe reporting incorrect savings

2009-12-14 Thread Giridhar K R
Hi, Created a zpool with 64k recordsize and enabled dedup on it: zpool create -O recordsize=64k TestPool device1 zfs set dedup=on TestPool I copied files onto this pool over NFS from a Windows client. Here is the output of zpool list: Prompt:~# zpool list NAME SIZE ALLOC FREE CAP

Re: [zfs-discuss] ZFS CIFS, smb.conf (smb/server) and LDAP

2009-11-28 Thread Venkatesh K
am using > Sun DSEE 7.0 and I'm > facing a heck of a lot of problems with the LDAP DIT > structure. > Let me know how and where we can discuss this. Thanks, Venkatesh K

Re: [zfs-discuss] zfs root, jumpstart and flash archives

2009-07-08 Thread Jerry K
o wait for U8 to be released.) I will update the CR with this information. Lori On 02/18/09 09:12, Jerry K wrote: Hello Lori, Any update on this issue, and can you speculate as to whether it will be a patch to Solaris 10u6, or part of 10u7? Thanks again, Jerry Lori Alt wrote: This is in t

Re: [zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-11 Thread Jerry K
There is a pretty active Apple ZFS SourceForge group that provides RW bits for 10.5. Things are oddly quiet concerning 10.6. I am curious about how this will turn out myself. Jerry Rich Teer wrote: It's not pertinent to this sub-thread, but ZFS (albeit read-only) is already in currently s

[zfs-discuss] posix_fadvise on ZFS

2009-04-22 Thread Jignesh K. Shah
This is with regard to Postgres 8.4 beta1, which has a new effective_io_concurrency tunable that uses posix_fadvise: http://www.postgresql.org/docs/8.4/static/runtime-config-resource.html (go to the bottom). Quote: Asynchronous I/O depends on an effective posix_fadvise function, which some operating s
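For anyone wanting to experiment, a hedged sketch of turning the tunable on (the value 8 is purely illustrative, and $PGDATA must point at your cluster directory):

    echo "effective_io_concurrency = 8" >> "$PGDATA/postgresql.conf"
    pg_ctl -D "$PGDATA" reload
    # or per-session, for testing: psql -c 'SET effective_io_concurrency = 8;'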

[zfs-discuss] posix_fadvise on ZFS

2009-04-21 Thread Jignesh K. Shah
This is with regard to Postgres 8.4 beta1, which has a new effective_io_concurrency tunable that uses posix_fadvise: http://www.postgresql.org/docs/8.4/static/runtime-config-resource.html (go to the bottom). Quote: Asynchronous I/O depends on an effective posix_fadvise function, which some operating sy

[zfs-discuss] boot-interest WAS: Reliability at power failure?

2009-03-24 Thread Jerry K
Where is the boot-interest mailing list? A review of the mailing lists here: http://mail.opensolaris.org/mailman/listinfo/ does not show a boot-interest mailing list, or anything similar. Is it on a different site? Thanks. Richard Elling wrote: Uwe Dippel wrote: C. wrote: I've worked hard t

Re: [zfs-discuss] zfs root, jumpstart and flash archives

2009-02-18 Thread Jerry K
. In the meantime, you might try this: http://blogs.sun.com/scottdickson/entry/flashless_system_cloning_with_zfs - Lori On 01/09/09 12:28, Jerry K wrote: I understand that currently, at least under Solaris 10u6, it is not possible to jumpstart a new system with a zfs root using a flash archive

[zfs-discuss] ZFS encryption?? - [Fwd: [osol-announce] SXCE Build 105 available

2009-01-23 Thread Jerry K
It was rumored that Nevada build 105 would have ZFS encrypted file systems integrated into the main source. In reviewing the change logs (URLs below) I did not see any mention that this had come to pass. It's going to be another week before I have a chance to play with b105. Does anyon

[zfs-discuss] zfs root, jumpstart and flash archives

2009-01-09 Thread Jerry K
I understand that currently, at least under Solaris 10u6, it is not possible to jumpstart a new system with a ZFS root using a flash archive as a source. Can anyone comment as to whether this restriction will be lifted in the near term, or if it will be a while (6+ months) before this is possi

[zfs-discuss] ZFS encryption?? - [Fwd: [osol-announce] SXCE Build 105 available]

2009-01-09 Thread Jerry K
It was rumored that Nevada build 105 would have ZFS encrypted file systems integrated into the main source. In reviewing the change logs (URLs below) I did not see any mention that this had come to pass. It's going to be another week before I have a chance to play with b105. Does anyone k

[zfs-discuss] mbuffer WAS'zfs recv' is very slow

2008-11-14 Thread Jerry K
Hello Thomas, What is mbuffer? Where might I go to read more about it? Thanks, Jerry > > Yesterday I released a new version of mbuffer, which also enlarges > the default TCP buffer size, so everybody using mbuffer for network data > transfer might want to update. > > For everybody unfam
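In short, mbuffer is a general-purpose buffering tool often placed between zfs send and zfs receive so the bursty stream doesn't stall the TCP pipe. A minimal sketch (host name, port, and buffer sizes are illustrative):

    # receiving host: listen on TCP 9090 with a 1 GB RAM buffer
    mbuffer -s 128k -m 1G -I 9090 | zfs receive tank/backup
    # sending host:
    zfs send tank/fs@snap | mbuffer -s 128k -m 1G -O recvhost:9090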

[zfs-discuss] ZFS with Fusion-IO?

2008-09-23 Thread Jignesh K. Shah
http://www.fusionio.com/Products.aspx Looks like a cool SSD to go with ZFS. Has anybody tried ZFS with Fusion-IO storage? For that matter, even with Solaris? -Jignesh -- Jignesh Shah http://blogs.sun.com/jkshah Sun Microsystems, Inc. http://sun.com/postgresql

Re: [zfs-discuss] What is the correct procedure to replace a non failed disk for another?

2008-09-03 Thread Jerry K
Ming into this. Jerry K. Bob Friesenhahn wrote: > On Wed, 3 Sep 2008, Jerry K wrote: > >> How would this work for servers that support only (2) drives, or systems >> that are configured to have pools of (2) drives, i.e. mirrors, and >> there is no additional space to have

Re: [zfs-discuss] What is the correct procedure to replace a non failed disk for another?

2008-09-03 Thread Jerry K
How would this work for servers that support only (2) drives, or systems that are configured to have pools of (2) drives, i.e. mirrors, and there is no additional space to add a new disk, as shown in the sample below? I still support lots of V490s, which hold only (2) drives. Thanks, Jerr
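For a chassis with no spare bay, the usual answer is to deliberately degrade the mirror and rebuild in the same slot. A hedged sketch (pool and device names hypothetical; note the surviving half of the mirror is unprotected until the resilver finishes):

    zpool offline tank c1t1d0     # retire the outgoing disk
    # ...physically swap the drive in that bay...
    zpool replace tank c1t1d0     # resilver onto the new disk in the same slot
    zpool status tank             # watch the resilver progress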

[zfs-discuss] Re: Kernel panic at zpool import

2008-08-11 Thread Łukasz K
On 7-08-2008 at 13:20, Borys Saulyak wrote: > Hi, > > I have a problem with Solaris 10. I know that this forum is for > OpenSolaris, but maybe someone will have an idea. > My box is crashing on any attempt to import a ZFS pool. The first crash > happened on an export operation and since then I can

Re: [zfs-discuss] RFE: ZFS commands "zmv" and "zcp"

2008-07-10 Thread Raquel K. Sanborn
No, the problem data must be moved or copied from where it is, to a different ZFS. Raquel

Re: [zfs-discuss] RFE: ZFS commands "zmv" and "zcp"

2008-07-09 Thread Raquel K. Sanborn
Thanks, glad someone else thought of it first. I guess I will have to do things the hard way. Raquel

[zfs-discuss] RFE: ZFS commands "zmv" and "zcp"

2008-07-09 Thread Raquel K. Sanborn
I've run across something that would save me days of trouble. Situation: the contents of one ZFS file system need to be moved to another ZFS file system. The destination can be in the same zpool, even a brand new ZFS file system. A command to move the data from one ZFS file system to another, WITH
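Until a zmv/zcp exists, the closest workaround is snapshot plus send/receive, which carries holes, attributes, and ACLs inside the stream. A sketch with hypothetical dataset names (destroy the source only after verifying the copy):

    zfs snapshot tank/src@move
    zfs send tank/src@move | zfs receive tank/dst
    zfs destroy -r tank/src       # only once tank/dst checks out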

Re: [zfs-discuss] Zfs send takes 3 days for 1TB?

2008-04-10 Thread Jignesh K. Shah
resting read anyways. :) > > Nathan. > > > > Nicolas Williams wrote: >> On Wed, Apr 09, 2008 at 11:38:03PM -0400, Jignesh K. Shah wrote: >>> Can zfs send utilize multiple streams of data transmission (or some >>> sort of multipleness)?

[zfs-discuss] Zfs send takes 3 days for 1TB?

2008-04-09 Thread Jignesh K. Shah
Can zfs send utilize multiple streams of data transmission (or some sort of multipleness)? Interesting read for background: http://people.planetpostgresql.org/xzilla/index.php?/archives/338-guid.html Note: zfs send takes 3 days for 1TB to another system. Regards, Jignesh
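A single zfs send is one stream, but if the pool is divided into several child filesystems, one hedged workaround is a stream per dataset running in parallel (names hypothetical; assumes snapshots named @xfer do not already exist):

    for fs in $(zfs list -H -o name -r -t filesystem tank | tail +2); do
        zfs snapshot "$fs@xfer"
        zfs send "$fs@xfer" | ssh desthost zfs receive -d destpool &
    done
    wait   # block until every stream finishes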

Re: [zfs-discuss] Backup/replication system

2008-01-10 Thread Łukasz K
On 10-01-2008 at 17:45, eric kustarz wrote: > On Jan 10, 2008, at 4:50 AM, Łukasz K wrote: > > > Hi > > I'm using ZFS on a few X4500s and I need to back them up. > > The data on the source pool keeps changing, so online replication > > would be the b

Re: [zfs-discuss] Backup/replication system

2008-01-10 Thread Łukasz K
On 10-01-2008 at 16:11, Jim Dunham wrote: > Łukasz K wrote: > > > Hi > > I'm using ZFS on a few X4500s and I need to back them up. > > The data on the source pool keeps changing, so online replication > > would be the best solution. > >

[zfs-discuss] Backup/replication system

2008-01-10 Thread Łukasz K
Hi, I'm using ZFS on a few X4500s and I need to back them up. The data on the source pool keeps changing, so online replication would be the best solution. As far as I know, AVS doesn't support ZFS - there is a problem with mounting the backup pool. Other backup systems (disk-to-disk or block-to-block)
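Lacking AVS, near-online replication is usually approximated with periodic incremental sends. One replication cycle might look like the sketch below (host and pool names hypothetical; the recursive -R flag assumes a recent zfs version):

    zfs snapshot -r tank@rep1
    zfs send -R tank@rep1 | ssh backuphost zfs receive -d backup
    # next cycle: ship only the delta since rep1
    zfs snapshot -r tank@rep2
    zfs send -R -i tank@rep1 tank@rep2 | ssh backuphost zfs receive -d backup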

Re: [zfs-discuss] fclose failing at 2G on a ZFS filesystem

2007-12-25 Thread K
On 26/12/2007, at 2:43 AM, Mike Gerdts wrote: > On Dec 25, 2007 1:33 PM, K <[EMAIL PROTECTED]> wrote: >> >> if (fclose (file)) { >> fprintf (stderr, "fatal: unable to close temp file: %s\n", >> strerror (errno)); >> exit (1)

[zfs-discuss] fclose failing at 2G on a ZFS filesystem

2007-12-25 Thread K
if (fclose (file)) {
    fprintf (stderr, "fatal: unable to close temp file: %s\n", strerror (errno));
    exit (1);
}
I don't understand why the above piece of code is failing... fatal: unable to close file: File too large ...and of course my code fails at 2G... The output should b
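The 2G ceiling is the classic signature of a 32-bit binary built without large-file support: the write fails with 'File too large' at the 2^31 byte boundary and the buffered error surfaces at fclose. A hedged sketch of the usual Solaris fix, recompiling so off_t is 64-bit (prog.c is a hypothetical source file):

    cc $(getconf LFS_CFLAGS) prog.c $(getconf LFS_LDFLAGS) -o prog
    # roughly equivalent to adding -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE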

[zfs-discuss] current status of zfs boot partition on Sparc

2007-12-04 Thread Jerry K
I haven't seen anything about this recently, or I may have missed it. Can anyone share the current status of the ZFS boot partition on SPARC? Thanks, Jerry K

[zfs-discuss] xVm blockers!

2007-11-28 Thread K
1/ Anchor VNICs, the equivalent of Linux dummy interfaces; we need more flexibility in the way we set up Xen networking. What is sad is that the code is already available in the unreleased Crossbow bits... but it won't appear in Nevada until Q1 2008 :( This is a real blocker for me as my ISP

Re: [zfs-discuss] ZFS very slow under xVM

2007-11-27 Thread K
> kugutsum > > I tried with just 4GB in the system, and the same issue. I'll try > 2GB tomorrow and see if it's any better. (PS: how did you determine > that was the problem in your case?) Sorry, I wasn't monitoring this list for a while. My machine has 8GB of RAM and I remembered that some

[zfs-discuss] Recommended settings for dom0_mem when using zfs

2007-11-19 Thread K
I have an xVM b75 server and use ZFS for storage (ZFS root mirror and a RAID-Z2 data pool). I see everywhere that it is recommended to have a lot of memory on a ZFS file server... but I also need to relinquish a lot of my memory to be used by the domUs. What would a good value for dom0_mem o
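One common pattern is to pin dom0's memory on the hypervisor boot line and then cap the ZFS ARC so it fits inside that allotment. A sketch with purely illustrative numbers (2 GB for dom0, a 1 GB ARC cap), not tested recommendations:

    # /rpool/boot/grub/menu.lst, on the xVM entry's kernel$ line:
    kernel$ /boot/$ISADIR/xen.gz dom0_mem=2048M
    # /etc/system, so the ARC leaves headroom for dom0 itself:
    set zfs:zfs_arc_max=0x40000000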

Re: [zfs-discuss] Slow file system access on zfs

2007-11-08 Thread Łukasz K
There are problems with the ZFS sync phase. Run: # dtrace -n fbt::txg_wait_open:entry'{ stack(); ustack(); }' and wait 10 minutes. Also give more information about the pool: # zfs get all filer (I assume 'filer' is your pool name). Regards, Lukas. On 11/7/07, Łukasz K <[EMAIL PROTECTED]> wrote: Hi,

[zfs-discuss] Re: Slow file system access on zfs

2007-11-07 Thread Łukasz K
#!/bin/sh
echo '::spa' | mdb -k | grep ACTIVE \
| while read pool_ptr state pool_name
do
  echo "checking pool map size [B]: $pool_name"
  echo "${pool_ptr}::walk metaslab|::print -d struct metaslab ms_smo.smo_objsize" \
  | mdb -k \
  | nawk '{sub("^0t","",$3); sum+=$3} END {print sum}'
done

Re: [zfs-discuss] ZFS Space Map optimization

2007-10-11 Thread Łukasz K
> > Now space maps, intent log, and spa history are compressed. > > All normal metadata (including space maps and spa history) is always > compressed. The intent log is never compressed. Can you tell me where the space map is compressed? The buffer is filled with: 468 *entry++ = SM_

Re: [zfs-discuss] Re: Re[2]: Re: Re[2]: Re: Re: Re: Snapshots impact on performance

2007-08-24 Thread Łukasz K
then I'll be able to > provide you with my changes in some form. Hope this will happen next week. > > Cheers, > Victor > > Łukasz K wrote: > > On 26-07-2007 at 13:31, Robert Milkowski wrote: > >> Hello Victor, > >> > >> Wednesday, Ju

[zfs-discuss] Re: zfs destroy takes long time

2007-08-24 Thread Łukasz K
drive stripe, nothing too fancy. We do not have any snapshots. > > Any ideas? Maybe your pool is fragmented and the pool space map is very big. Run this script:
#!/bin/sh
echo '::spa' | mdb -k | grep ACTIVE \
| while read pool_ptr state pool_name
do
  echo "checking pool

[zfs-discuss] Re: Is ZFS efficient for large collections of small files?

2007-08-21 Thread Łukasz K
> Is ZFS efficient at handling huge populations of tiny-to-small files - > for example, 20 million TIFF images in a collection, each between 5 > and 500k in size? > > I am asking because I could have sworn that I read somewhere that it > isn't, but I can't find the reference. It depends, what typ

[zfs-discuss] Re: Re[2]: Re: Re[2]: Re: Re: Re: Snapshots impact on performance

2007-07-27 Thread Łukasz K
On 26-07-2007 at 13:31, Robert Milkowski wrote: > Hello Victor, > > Wednesday, June 27, 2007, 1:19:44 PM, you wrote: > > VL> Gino wrote: > >> Same problem here (snv_60). > >> Robert, did you find any solutions? > > VL> A couple of weeks ago I put together an implementation of space maps

Re: [zfs-discuss] Vanity ZVOL paths?

2006-12-09 Thread Jignesh K. Shah
er on. Regards, Jignesh Jonathan Edwards wrote: On Dec 8, 2006, at 05:20, Jignesh K. Shah wrote: Hello ZFS Experts, I have two ZFS pools, zpool1 and zpool2. I am trying to create a bunch of zvols such that their paths are similar except for a consistent number scheme, without reference to the z

[zfs-discuss] Vanity ZVOL paths?

2006-12-08 Thread Jignesh K. Shah
Hello ZFS Experts, I have two ZFS pools, zpool1 and zpool2. I am trying to create a bunch of zvols such that their paths are similar except for a consistent number scheme, without reference to the zpools to which they actually belong. (This will allow me to have common references in my setup scripts.) If I
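Since zvol device paths always embed the pool name (/dev/zvol/dsk/<pool>/<volume>), one hedged workaround is a directory of symlinks giving pool-independent names (the /myvols directory and volume names below are hypothetical):

    zfs create -V 10g zpool1/vol001
    zfs create -V 10g zpool2/vol002
    mkdir -p /myvols
    ln -s /dev/zvol/dsk/zpool1/vol001 /myvols/vol001
    ln -s /dev/zvol/dsk/zpool2/vol002 /myvols/vol002
    # setup scripts can now refer to /myvols/volNNN regardless of pool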