Re: [zfs-discuss] Feature Request for zfs pool/filesystem protection?

2013-02-20 Thread Mike Gerdts
On Wed, Feb 20, 2013 at 4:49 PM, Markus Grundmann wrote: > Whenever I modify zfs pools or filesystems it's possible to destroy [on a > bad day :-)] my data. A new > property "protected=on|off" in the pool and/or filesystem can help the > administrator prevent data loss > (e.g. "zpool destroy tank" or "

Re: [zfs-discuss] Benefits of enabling compression in ZFS for the zones

2012-07-10 Thread Mike Gerdts
ion ./ COMPRESS on
$ dd if=/dev/zero of=1gig count=1024 bs=1024k
1024+0 records in
1024+0 records out
$ ls -l 1gig
-rw-r--r--   1 mgerdts  staff    1073741824 Jul 10 07:52 1gig
$ du -k 1gig
0       1gig
--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
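The same apparent-size/allocated-size gap can be reproduced without ZFS compression by using a sparse file on any local filesystem; a minimal sketch (the path is illustrative):

```shell
# Sketch: reproduce the ls-vs-du gap from the transcript above with a
# sparse file.  On ZFS with compression=on, a file of zeros written via
# dd shows the same effect without any holes.
f=/tmp/sparse.demo            # illustrative scratch path
dd if=/dev/zero of="$f" bs=1 count=0 seek=1048576 2>/dev/null
ls -l "$f"     # apparent size: 1048576 bytes
du -k "$f"     # allocated blocks: (near) 0
rm -f "$f"
```

ls reports the logical file length; du reports blocks actually allocated, which is why compressed or sparse files show 0.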

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-12 Thread Mike Gerdts
er I can see the > following input stream bandwidth (the stream is constant bitrate, so > this shouldn't happen): If processing in interrupt context (use intrstat) is dominating cpu usage, you may be able to use pcitool to cause the device generating a

Re: [zfs-discuss] Strange hang during snapshot receive

2012-05-10 Thread Mike Gerdts
ing https://forums.oracle.com/forums/thread.jspa?threadID=2380689&tstart=15 before updating to SRU 6 (SRU 5 is fine, however). The fix for the problem mentioned in that forums thread should show up in an upcoming SRU via CR 7157313.

Re: [zfs-discuss] test for holes in a file?

2012-03-26 Thread Mike Gerdts
r 26 18:25:25 CDT 2012 [ 1332804325.889143166 ] ct = Mar 26 18:25:25 CDT 2012 [ 1332804325.889143166 ] bsz=131072 blks=32 fs=zfs Notice that it says it has 32 512-byte blocks. The mechanism you suggest does work for every other file system that I've tried it on.

Re: [zfs-discuss] test for holes in a file?

2012-03-26 Thread Mike Gerdts
2012/3/26 ольга крыжановская : > How can I test if a file on ZFS has holes, i.e. is a sparse file, > using the C api? See SEEK_HOLE in lseek(2).

Re: [zfs-discuss] Any rhyme or reason to disk dev names?

2011-12-21 Thread Mike Gerdts
- /dev/chassis//SYS/SASBP/HDD0/disk  disk  c0t5000CCA012B66E90d0
  /dev/chassis//SYS/SASBP/HDD1/disk  disk  c0t5000CCA012B68AC8d0

The text in the left column represents text that should be printed on the corresponding disk slots.

Re: [zfs-discuss] gaining access to var from a live cd

2011-11-29 Thread Mike Gerdts
> its thing. > > chicken / egg situation? I miss the old fail safe boot menu... You can mount it pretty much anywhere:

  mkdir /tmp/foo
  zfs mount -o mountpoint=/tmp/foo ...

I'm not sure when the temporary mountpoint option (-o mountpoint=...) came in. If it's not valid synt

Re: [zfs-discuss] gaining access to var from a live cd

2011-11-29 Thread Mike Gerdts
as not updated from Solaris 11 Express), it will have a separate /var dataset.

  zfs mount -o mountpoint=/mnt/rpool/var rpool/ROOT/solaris/var

Re: [zfs-discuss] FS Reliability WAS: about btrfs and zfs

2011-10-21 Thread Mike Gerdts
impact if an errant command were issued. I'd never do that in production without some form of I/O fencing in place.

Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-05 Thread Mike Gerdts
On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish wrote: > # zpool import -f tank > > http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/ I encourage you to open a support case and ask for an escalation on CR 7056738.

Re: [zfs-discuss] zfs rename query

2011-07-27 Thread Mike Gerdts
I suspect that it doesn't give you exactly the output you are looking for. FWIW, the best way to achieve what you are after without breaking the zones is going to be along the lines of:

  zlogin z1c1 init 0
  zoneadm -z z1c1 detach
  zfs rename rpool/zones/z1c1 rpool/new/z1c1
  zoneadm -

Re: [zfs-discuss] What is ".$EXTEND/$QUOTA" ?

2011-07-19 Thread Mike Gerdts
reated in
 757  * a special directory, $EXTEND, at the root of the shared file
 758  * system. To hide this directory prepend a '.' (dot).
 759  */

Re: [zfs-discuss] Summary: Dedup memory and performance (again, again)

2011-07-15 Thread Mike Gerdts
dding a good enterprise SSD would double the > server cost - not only on those big good systems with > tens of GB of RAM), and hopefully simplifying the system > configuration and maintenance - that is indeed the point > in question. > > //Jim

Re: [zfs-discuss] Non-Global zone recovery

2011-07-07 Thread Mike Gerdts
/zonecfg.export zoneadm -z attach [-u|-U] Any follow-ups should probably go to Oracle Support or zones-discuss. Your problems are not related to zfs.

Re: [zfs-discuss] FW: Solaris panic

2011-03-17 Thread Mike Gerdts
enunix: [ID 877030 kern.notice] Copyright (c) 1983, > 2010, Oracle and/or its affiliates. All rights reserved. > > Can anyone help? > > Regards > Karl

Re: [zfs-discuss] hardware going bad

2010-10-27 Thread Mike Gerdts
ms. Perhaps this belongs somewhere other than zfs-discuss - it has nothing to do with zfs.

Re: [zfs-discuss] Moving the 17 zones from one LUN to another LUN

2010-10-27 Thread Mike Gerdts
structions. This sounds like it is a production Solaris 10 system in an enterprise environment. In most places that I've worked, I would be hesitant to provide the required level of detail on a public mailing list. Perhaps you should open a service call to get the assistance y

Re: [zfs-discuss] Moving the 17 zones from one LUN to another LUN

2010-10-26 Thread Mike Gerdts
me that you are comfortable that the zone data moved over ok...

  zfs destroy -r oldpool/zones

Again, verify the procedure works on a test/lab/whatever box before trying it for real.

Re: [zfs-discuss] file level clones

2010-09-27 Thread Mike Gerdts
On Mon, Sep 27, 2010 at 6:23 AM, Robert Milkowski wrote: > Also see http://www.symantec.com/connect/virtualstoreserver And http://blog.scottlowe.org/2008/12/03/2031-enhancements-to-netapp-cloning-technology/

Re: [zfs-discuss] How to migrate to 4KB sector drives?

2010-09-12 Thread Mike Gerdts
around the b137 timeframe. OpenIndiana, to be released on Tuesday, is based on b146 or later.

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-28 Thread Mike Gerdts
s) Presumably this problem is being worked... http://hg.genunix.org/onnv-gate.hg/rev/d560524b6bb6 Notice that it implements: 866610 Add SATA TRIM support With this in place, I would imagine a next step is for zfs to issue TRIM commands as zil entries have been committed to the data disks.

Re: [zfs-discuss] Moving /export to another zpool

2010-08-13 Thread Mike Gerdts
hen I boot on using LiveCD, how can I mount my first drive that has > opensolaris installed ?

To list the zpools it can see:

  zpool import

To import one called rpool at an alternate root:

  zpool import -R /mnt rpool

Re: [zfs-discuss] NFS performance?

2010-07-26 Thread Mike Gerdts
On Mon, Jul 26, 2010 at 2:56 PM, Miles Nordin wrote: >>>>>> "mg" == Mike Gerdts writes: >    mg> it is rather common to have multiple 1 Gb links to >    mg> servers going to disparate switches so as to provide >    mg> resilience in the face of switc

Re: [zfs-discuss] NFS performance?

2010-07-26 Thread Mike Gerdts
On Mon, Jul 26, 2010 at 1:27 AM, Garrett D'Amore wrote: > On Sun, 2010-07-25 at 21:39 -0500, Mike Gerdts wrote: >> On Sun, Jul 25, 2010 at 8:50 PM, Garrett D'Amore wrote: >> > On Sun, 2010-07-25 at 17:53 -0400, Saxon, Will wrote: >> >> >> >> I

Re: [zfs-discuss] NFS performance?

2010-07-25 Thread Mike Gerdts
ration choices, and a bit of luck. Note that with Sun Trunking there was an option to load balance using a round robin hashing algorithm. When pushing high network loads this may cause performance problems with reassembly.

Re: [zfs-discuss] Hashing files rapidly on ZFS

2010-07-07 Thread Mike Gerdts
it looks as though znode_t's z_seq may be useful. While it isn't a checksum, it seems to be incremented on every file change.

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Mike Gerdts
etting data 32 KB at a time. How does 32 KB compare to the database block size? How does 32 KB compare to the block size on the relevant zfs filesystem or zvol? Are blocks aligned at the various layers? http://blogs.sun.com/dlutz/entry/partition_alignment_guidelines_for_unified

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Mike Gerdts
ut 32 KB I/O's. I think you can perform a test that involves mainly the network if you use netperf with options like:

  netperf -H $host -t TCP_RR -r 32768 -l 30

That is speculation based on reading http://www.netperf.org/netperf/training/Netperf.html. Someone else (perhaps

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Mike Gerdts
y good point. You can use a combination of "zpool iostat" and fsstat to see the effect of reads that didn't turn into physical I/Os.

Re: [zfs-discuss] VXFS to ZFS Quota

2010-06-18 Thread Mike Gerdts
engineering where group projects were common and CAD, EDA, and simulation tools could generate big files very quickly.

Re: [zfs-discuss] Dedup... still in beta status

2010-06-15 Thread Mike Gerdts
ent mail system should already dedup. Or at least that is how I would have written it for the last decade or so...

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Mike Gerdts
s=513 count=204401

# repeatedly feed that file to dd
while true ; do cat /tmp/randomdataa ; done | dd of=/my/test/file bs=... count=...

The above should make it so that it will take a while before there are two blocks that are identical, thus confounding deduplication as well.
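A scaled-down sketch of the odd-block-size trick (the file name and counts here are illustrative): a 513-byte record length is not a multiple of any power-of-two filesystem block size, so repeated passes over the same data shift its alignment and identical on-disk blocks almost never recur.

```shell
# Build a small "dedup-hostile" seed file of random data whose length
# is not a multiple of any power-of-two block size.
dd if=/dev/urandom of=/tmp/randomdata bs=513 count=1024 2>/dev/null
ls -l /tmp/randomdata    # 513 * 1024 = 525312 bytes
# Looping this file through dd, as in the excerpt above, keeps the
# data stream misaligned with the filesystem's block boundaries.
```

This defeats block-level dedup (and, since the data is random, compression too), which is exactly what you want when benchmarking raw write throughput.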

Re: [zfs-discuss] Small stalls slowing down rsync from holding network saturation every 5 seconds

2010-05-31 Thread Mike Gerdts
Sorry, turned on html mode to avoid gmail's line wrapping. On Mon, May 31, 2010 at 4:58 PM, Sandon Van Ness wrote: > On 05/31/2010 02:52 PM, Mike Gerdts wrote: > > On Mon, May 31, 2010 at 4:32 PM, Sandon Van Ness > wrote: > > > >> On 05/31/2010 01:51 PM, Bob Fri

Re: [zfs-discuss] Small stalls slowing down rsync from holding network saturation every 5 seconds

2010-05-31 Thread Mike Gerdts
then a few tenths of a percent, you are probably short on CPU. It could also be that interrupts are stealing cycles from rsync. Placing it in a processor set with interrupts disabled in that processor set may help.

Re: [zfs-discuss] Is it safe to disable the swap partition?

2010-05-09 Thread Mike Gerdts
rs, ARC, etc. If the processes never page in the pages that have been paged out (or the processes that have been swapped out are never scheduled) then those pages will not consume RAM. The best thing to do with processes that can be swapped out forever is to not run them.

Re: [zfs-discuss] zfs diff

2010-03-29 Thread Mike Gerdts
llions of files with relatively few changes.

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-20 Thread Mike Gerdts
ific butype_name strings accessible via the NDMP_CONFIG_GET_BUTYPE_INFO request. http://www.ndmp.org/download/sdk_v4/draft-skardal-ndmp4-04.txt It seems pretty clear from this that an NDMP data stream can contain most anything and is dependent on the devi

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-18 Thread Mike Gerdts
hat a similar argument could be made for storing the zfs send data streams on a zfs file system. However, it is not clear why you would do this instead of just zfs send | zfs receive.

Re: [zfs-discuss] [OT] excess zfs-discuss mailman digests

2010-02-08 Thread Mike Gerdts
On Mon, Feb 8, 2010 at 9:04 PM, grarpamp wrote: > PS: Is there any way to get a copy of the list since inception > for local client perusal, not via some online web interface? You can get monthly .gz archives in mbox format from http://mail.opensolaris.org/pipermail/zfs-discuss/.

Re: [zfs-discuss] zero out block / sectors

2010-01-25 Thread Mike Gerdts
On Mon, Jan 25, 2010 at 2:32 AM, Kjetil Torgrim Homme wrote: > Mike Gerdts writes: > >> John Hoogerdijk wrote: >>> Is there a way to zero out unused blocks in a pool?  I'm looking for >>> ways to shrink the size of an opensolaris virtualbox VM and using the

Re: [zfs-discuss] zero out block / sectors

2010-01-23 Thread Mike Gerdts
On Sat, Jan 23, 2010 at 11:55 AM, John Hoogerdijk wrote: > Mike Gerdts wrote: >> >> On Fri, Jan 22, 2010 at 1:00 PM, John Hoogerdijk >> wrote: >> >>> >>> Is there a way to zero out unused blocks in a pool?  I'm looking for ways >>>

Re: [zfs-discuss] zero out block / sectors

2010-01-22 Thread Mike Gerdts
at you should be able to just use mkfile or "dd if=/dev/zero ..." to create a file that consumes most of the free space then delete that file. Certainly it is not an ideal solution, but seems quite likely to be effective.
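A sketch of that zero-fill approach, scaled down to a small scratch file (in practice you would let dd run until the pool's free space is nearly exhausted, then delete the file):

```shell
# Write a file of zeros, then delete it.  Afterwards the blocks the
# file occupied hold zeros, which a VM back-end or compaction tool
# that detects zeroed blocks can reclaim.
dd if=/dev/zero of=/tmp/zerofill bs=1024k count=8 2>/dev/null
sync
rm /tmp/zerofill
```

Note the caveat from the thread: filling the pool to the brim briefly starves other writers, so do this during a quiet window.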

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-22 Thread Mike Gerdts
se gnu tar to extract data. This seems to be most useful when you need to recover master and/or media servers and to be able to extract your data after you no longer use netbackup.

Re: [zfs-discuss] Dedup memory overhead

2010-01-21 Thread Mike Gerdts
56
-rw-r--r--   1   428411 Jan 22 04:14 sha256.Z
-rw-r--r--   1   321846 Jan 22 04:14 sha256.bz2
-rw-r--r--   1   320068 Jan 22 04:14 sha256.gz
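The modest compression of those checksum files is easy to reproduce: hex-encoded hash output carries only 4 bits of entropy per character, so a compressor can do little better than halve it. A sketch (paths are illustrative):

```shell
# 64 KiB of random bytes, hex-encoded to 128 KiB of text, standing in
# for a file of sha256 digests.
head -c 65536 /dev/urandom | od -An -tx1 | tr -d ' \n' > /tmp/hashes.hex
gzip -c /tmp/hashes.hex > /tmp/hashes.hex.gz
# The .gz lands near 64 KiB: roughly 2x, nowhere near typical text ratios.
ls -l /tmp/hashes.hex /tmp/hashes.hex.gz
rm -f /tmp/hashes.hex /tmp/hashes.hex.gz
```

This is why storing a dedup table or checksum list compressed buys relatively little space back.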

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-16 Thread Mike Gerdts
data stream > compared to other archive formats. In general it is strongly discouraged for > these purposes. Yet it is used in ZFS flash archives on Solaris 10 and is slated for use in the successor to flash archives. This initial proposal seems to imply using the same

Re: [zfs-discuss] [zones-discuss] Zones on shared storage - a warning

2010-01-08 Thread Mike Gerdts
On Fri, Jan 8, 2010 at 12:28 PM, Torrey McMahon wrote: > On 1/8/2010 10:04 AM, James Carlson wrote: >> >> Mike Gerdts wrote: >> >>> >>> This unsupported feature is supported with the use of Sun Ops Center >>> 2.5 when a zone is put on a "NAS St

Re: [zfs-discuss] [zones-discuss] Zones on shared storage - a warning

2010-01-08 Thread Mike Gerdts
On Fri, Jan 8, 2010 at 9:11 AM, Mike Gerdts wrote: > I've seen similar errors on Solaris 10 in the primary domain and on a > M4000.  Unfortunately Solaris 10 doesn't show the checksums in the > ereport.  There I noticed a mixture between read errors and checksum > errors -

Re: [zfs-discuss] [zones-discuss] Zones on shared storage - a warning

2010-01-08 Thread Mike Gerdts
addcafe00 0x5dcc54647f00 0x1f82a459c2aa00 > 0x7f84b11b3fc7f80 > *G  48    cksum_actual = 0x5d6ee57f00 0x178a70d27f80 0x3fc19c3a19500 > 0x82804bc6ebcfc0 > > and observe that the values in 'chksum_actual' causing our CHKSUM pool errors > eventually > because of missmatchi

Re: [zfs-discuss] [zones-discuss] Zones on shared storage - a warning

2010-01-08 Thread Mike Gerdts
ot a good idea in any sort > of production environment?" > > It sounds like a bug, sure, but the fix might be to remove the option. This unsupported feature is supported with the use of Sun Ops Center 2.5 when a zone is put on a "NAS Storage Library".

Re: [zfs-discuss] [zones-discuss] Zones on shared storage - a warning

2010-01-08 Thread Mike Gerdts
errors from "zoneadm install", which under the covers does a pkg image create followed by *multiple* pkg install invocations. No checksum errors pop up there.

Re: [zfs-discuss] Zones on shared storage - a warning

2010-01-07 Thread Mike Gerdts
E              STATE   READ WRITE CKSUM
nfszone        ONLINE     0     0     0
  /nfszone/root ONLINE    0     0   109

errors: No known data errors

I'm confused as to why this pool seems to be quite usable even with so many checksum errors.

Re: [zfs-discuss] Clearing a directory with more than 60 million files

2010-01-05 Thread Mike Gerdts
e appreciated. > > Thanks, > Mikko > > -- >  Mikko Lammi | l...@lmmz.net | http://www.lmmz.net

Re: [zfs-discuss] Thin device support in ZFS?

2009-12-30 Thread Mike Gerdts
ndancy choices then there is no need for any rocket scientists. :)

Re: [zfs-discuss] Thin device support in ZFS?

2009-12-30 Thread Mike Gerdts
t; could reclaim those blocks. This is just a variant of the same problem faced with expensive SAN devices that have thin provisioning allocation units measured in the tens of megabytes instead of hundreds to thousands of kilobytes.

Re: [zfs-discuss] Zones on shared storage - a warning

2009-12-22 Thread Mike Gerdts
On Tue, Dec 22, 2009 at 8:02 PM, Mike Gerdts wrote: > I've been playing around with zones on NFS a bit and have run into > what looks to be a pretty bad snag - ZFS keeps seeing read and/or > checksum errors.  This exists with S10u8 and OpenSolaris dev build > snv_129.  This is

[zfs-discuss] Zones on shared storage - a warning

2009-12-22 Thread Mike Gerdts
0  /mnt/osolzone/root  DEGRADED  0  0  117  too many errors

errors: No known data errors

r...@soltrain19# zlogin osol uptime
  5:31pm  up 1 min(s),  0 users,  load average: 0.69, 0.38, 0.52

Re: [zfs-discuss] compressratio vs. dedupratio

2009-12-15 Thread Mike Gerdts
1 Dec 15 14:35 on/a
# du -h */a
 95M   off/a
3.4M   on/a
# zfs get compressratio test/on test/off
NAME      PROPERTY       VALUE   SOURCE
test/off  compressratio  1.00x   -
test/on   compressratio  28.27x  -

Re: [zfs-discuss] compressratio vs. dedupratio

2009-12-14 Thread Mike Gerdts
s, but that would seem to contribute to a higher compressratio rather than a lower compressratio. If I disable compression and enable dedup, does it count deduplicated blocks of zeros toward the dedupratio?

Re: [zfs-discuss] ZFS - how to determine which physical drive to replace

2009-12-12 Thread Mike Gerdts
d0  Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Model: Hitachi HTS5425  Revision:  Serial No: 080804BB6300HCG
Size: 160.04GB <160039305216 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0 Illegal Request: 0
...

That /should/ be printed on the di

Re: [zfs-discuss] ZFS send | verify | receive

2009-12-05 Thread Mike Gerdts
used as a starting point. http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_raidz.c

Re: [zfs-discuss] Best practices for zpools on zfs

2009-11-26 Thread Mike Gerdts
On Thu, Nov 26, 2009 at 8:53 PM, Toby Thain wrote: > > On 26-Nov-09, at 8:57 PM, Richard Elling wrote: > >> On Nov 26, 2009, at 1:20 PM, Toby Thain wrote: >>> >>> On 25-Nov-09, at 4:31 PM, Peter Jeremy wrote: >>> >>>> On 2009-Nov-24 14:07:06

Re: [zfs-discuss] proposal partial/relative paths for zfs(1)

2009-11-25 Thread Mike Gerdts
but creates datasets instead of > directories. > > Thoughts ?  Is this useful for anyone else ?  My above examples are some > of the shorter dataset names I use, ones in my home directory can be > even deeper. > > -- > Darren J Moffat

Re: [zfs-discuss] ZFS Random Read Performance

2009-11-25 Thread Mike Gerdts
t is small enough that it is somewhat likely that many of those random reads will be served from cache. A dtrace analysis of just how random the reads are would be interesting. I think that hotspot.d from the DTrace Toolkit would be a good starting place.

Re: [zfs-discuss] Best practices for zpools on zfs

2009-11-24 Thread Mike Gerdts
On Tue, Nov 24, 2009 at 1:39 PM, Richard Elling wrote: > On Nov 24, 2009, at 11:31 AM, Mike Gerdts wrote: > >> On Tue, Nov 24, 2009 at 9:46 AM, Richard Elling >> wrote: >>> >>> Good question!  Additional thoughts below... >>> >>> On Nov 24, 2

Re: [zfs-discuss] Best practices for zpools on zfs

2009-11-24 Thread Mike Gerdts
On Tue, Nov 24, 2009 at 9:46 AM, Richard Elling wrote: > Good question!  Additional thoughts below... > > On Nov 24, 2009, at 6:37 AM, Mike Gerdts wrote: > >> Suppose I have a storage server that runs ZFS, presumably providing >> file (NFS) and/or block (iSCSI, FC) service

[zfs-discuss] Best practices for zpools on zfs

2009-11-24 Thread Mike Gerdts
characteristics in this area? Is there less to be concerned about from a performance standpoint if the workload is primarily read? To maximize the efficacy of dedup, would it be best to pick a fixed block size and match it between the layers of zfs?

Re: [zfs-discuss] CIFS shares being lost

2009-11-20 Thread Mike Gerdts
gt; reportedly good for CIFS based on traffic from this list. >> >> --eric >> >> -- >> Eric D. Mudama >> edmud...@mail.bounceswoosh.org

Re: [zfs-discuss] dedup question

2009-11-02 Thread Mike Gerdts
;s. It becomes quite significant if you have 5000 (e.g. on a ZFS-based file server). Assuming that the deduped blocks stay deduped in the ARC, it means that it is feasible for every block that is accessed with any frequency to be in memory. Oh yeah, and you save a lot of disk space.

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Mike Gerdts
ording to page 35 of http://www.slideshare.net/ramesh_r_nagappan/wirespeed-cryptographic-acceleration-for-soa-and-java-ee-security, a T2 CPU can do 41 Gb/s of SHA256. The implication here is that this keeps the MAU's busy but the rest of the core is still idle for things like compression, TCP,

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Mike Gerdts
hms implemented in software and sha256 implemented in hardware? I've been waiting very patiently to see this code go in. Thank you for all your hard work (and the work of those that helped too!).

Re: [zfs-discuss] bigger zfs arc

2009-10-02 Thread Mike Gerdts
>         Current Size:             4206 MB (arcsize)
>         Target Size (Adaptive):   4207 MB (c)

That looks a lot like ~ 4 * 1024 MB. Is this a 64-bit capable system that you have booted from a 32-bit kernel?

Re: [zfs-discuss] New to ZFS: One LUN, multiple zones

2009-09-23 Thread Mike Gerdts
host1# zoneadm -z zone1 detach
host1# zfs snapshot zonepool/zo...@migrate
host1# zfs send -r zonepool/zo...@migrate \
         | ssh host2 zfs receive zones/zo...@migrate
host2# zonecfg -z zone1 create -a /zones/zone1
host2# zoneadm -z zone1 attach
host2# zoneadm -z zone1 boot

Re: [zfs-discuss] New to ZFS: One LUN, multiple zones

2009-09-23 Thread Mike Gerdts
On Wed, Sep 23, 2009 at 7:32 AM, bertram fukuda wrote: > Thanks for the info Mike. > > Just so I'm clear.  You suggest 1)create a single zpool from my LUN 2) create > a single ZFS filesystem 3) create 2 zone in the ZFS filesystem. Sound right? Correct

Re: [zfs-discuss] New to ZFS: One LUN, multiple zones

2009-09-23 Thread Mike Gerdts
to it, so I will give each thing X/Y space. This is because it is quite likely that someone will do the operation Y++ and there are very few storage technologies that allow you to shrink the amount of space allocated to each item.

Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2009-09-13 Thread Mike Gerdts
g/pipermail/fm-discuss/2009-June/000436.html from June 10 suggests you are running firmware release (045C)8626. On August 11 they released firmware revisions 8820, 8850, and 02G9, depending on the drive model. http://downloadcenter.intel.com/Detail_Desc.aspx?agr

Re: [zfs-discuss] Archiving and Restoring Snapshots

2009-09-02 Thread Mike Gerdts
nd to agree with the spirit of the docs, but I've also seen several conversations where storing "zfs send" output is highly discouraged. My intent of bringing this up was to head off the eventual situation where someone came to the list saying that the

Re: [zfs-discuss] Archiving and Restoring Snapshots

2009-09-02 Thread Mike Gerdts
on, I think. > http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Complete_Solaris_ZFS_Root_Pool_Recovery > > I will follow-up on this particular marketing document. > > Thanks for pointing it out... > > Cindy > > On 09/02/09 12:37, Mike Gerdts wrot

[zfs-discuss] Archiving and Restoring Snapshots

2009-09-02 Thread Mike Gerdts
to do things that will lead them to unsympathetic ears if things go poorly.

Re: [zfs-discuss] How to prevent /usr/bin/chmod from following symbolic links?

2009-08-24 Thread Mike Gerdts
ry for a project that we are working on together. Unfortunately, his umask was messed up and I can't modify the files in ~alice/proj1. Can you do a 'chmod -fR a+rw /home/alice/proj1' for me? Thanks!" | mailx -s "permissions fix" Helpdesk$ pfexec chmod -fR a+r

Re: [zfs-discuss] zfs send speed

2009-08-18 Thread Mike Gerdts
//opensolaris.org/jive/thread.jspa?threadID=109751&tstart=0#404589
http://opensolaris.org/jive/thread.jspa?threadID=109751&tstart=0#405835
http://opensolaris.org/jive/thread.jspa?threadID=109751&tstart=0#405308

Re: [zfs-discuss] file change long - was zfs fragmentation

2009-08-12 Thread Mike Gerdts
anpages/vxfs/man1m/fcladm.html This functionality would come in very handy. It would seem that it isn't too big of a deal to identify the files that changed, as this type of data is already presented via "zpool status -v" when corruption is detected. http://docs.sun.com/app/

Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Mike Gerdts
in the parallelism gaps as the longer-running ones finish. 3. That is, there is sometimes benefit in having many more jobs to run than you have concurrent streams. This avoids having one save set that finishes long after all the others because of poorly balanced save sets.

Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Mike Gerdts
"sequential" I mean that one doesn't start until the other finishes. There is certainly a better word, but it escapes me at the moment. At an average file size of 45 KB, that translates to about 3 MB/sec. As you run two data streams, you are seeing throughput that looks kinda like 2 * 3 MB/sec. With 4 backup streams do you get something that looks like 4 * 3 MB/s? How does that affect iostat output?

Re: [zfs-discuss] pathnames in zfs(1M) arguments

2009-08-09 Thread Mike Gerdts
ase of creating snapshots, there is also this:

# mkdir .zfs/snapshot/foo
# zfs list | grep foo
rpool/ROOT/s10u7_...@foo      0      -  9.76G  -
# rmdir .zfs/snapshot/foo
# zfs list | grep foo

I don't know of a similar shortcut for the create or clone subcommands.

Re: [zfs-discuss] zfs fragmentation

2009-08-08 Thread Mike Gerdts
On Sat, Aug 8, 2009 at 3:25 PM, Ed Spencer wrote: > > On Sat, 2009-08-08 at 15:12, Mike Gerdts wrote: > >> The DBA's that I know use files that are at least hundreds of >> megabytes in size.  Your problem is very different. > Yes, definitely. > > I'm relat

Re: [zfs-discuss] zfs fragmentation

2009-08-08 Thread Mike Gerdts
peed with SSD's than there is in read speeds. However, the NVRAM on the NetApp that is backing your iSCSI LUNs is probably already giving you most of this benefit (assuming low latency on network connections).

Re: [zfs-discuss] zfs fragmentation

2009-08-08 Thread Mike Gerdts
increase the performance of a zfs > filesystem without causing any downtime to an Enterprise email system > used by 30,000 intolerant people, when you don't really know what is > causing the performance issues in the first place? (Yeah, it sucks to be > me!) Hopefully I've helped

Re: [zfs-discuss] How Virtual Box handles the IO

2009-07-31 Thread Mike Gerdts
s/2007-September/013233.html Quite likely related to: http://bugs.opensolaris.org/view_bug.do?bug_id=6684721 In other words, it was a buggy Sun component that didn't do the right thing with cache flushes. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-30 Thread Mike Gerdts
lly? It appears as though there is an upgrade path. http://www.c0t0d0s0.org/archives/5750-Upgrade-of-a-X4500-to-a-X4540.html However, the troll that you have to pay to follow that path demands a hefty sum ($7995 list). Oh, and a reboot is required. :) -- Mike Gerdts http://m

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-27 Thread Mike Gerdts
products (e.g. VMware, Parallels, Virtual PC) have the > same default behaviour as VirtualBox? I've lost a pool due to LDoms doing the same. This bug seems to be related. http://bugs.opensolaris.org/view_bug.do?bug_id=6684721 -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] An amusing scrub

2009-07-15 Thread Mike Gerdts
you would have enough to pay this credit card bill. http://www.cnn.com/2009/US/07/15/quadrillion.dollar.glitch/index.html > - Rich > > (Footnote: I ran ntpdate between starting the scrub and it finishing, > and time rolled backwards. Nothing more exciting.) And Visa is willing to waive

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Mike Gerdts
Use is subject to license terms. Assembled 07 May 2009 # uname -srvp SunOS 5.11 snv_111b sparc -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Mike Gerdts
verified via truss that each read(2) was returning 128K. I thought I had seen excessive reads there too, but now I can't reproduce that. Creating another fs with recordsize=8k seems to make this behavior go away - things seem to be working as designed. I'll go upd
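A hedged sketch of that workaround. Pool and dataset names are placeholders, the commands need a live ZFS pool, and recordsize only affects files written after it takes effect:

```shell
# Create a filesystem whose record size matches the application's 8K I/O
# ("tank/small-io" is a hypothetical dataset name)
zfs create -o recordsize=8k tank/small-io
zfs get recordsize tank/small-io

# The property can also be changed on an existing dataset, but files
# already on disk keep the record size they were written with:
zfs set recordsize=8k tank/existing-fs
```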

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Mike Gerdts
On Mon, Jul 13, 2009 at 3:16 PM, Joerg Schilling wrote: > Bob Friesenhahn wrote: > >> On Mon, 13 Jul 2009, Mike Gerdts wrote: >> > >> > FWIW, I hit another bug if I turn off primarycache. >> > >> > http://defect.opensolaris.org/bz/show_bug.c

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Mike Gerdts
4m21.57s user 0m9.72s sys 0m36.30s Doing second 'cpio -o > /dev/null' 4800025 blocks real 4m21.56s user 0m9.72s sys 0m36.19s Feel free to clean up with 'zfs destroy testpool/zfscachetest'. This bug report contains more detail of the configuration. O
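The benchmark being quoted can be sketched roughly as follows. The dataset name follows the 'zfs destroy testpool/zfscachetest' cleanup hint in the message; the population step and sizing are assumptions — the working set should exceed RAM so that the second pass measures ARC behavior rather than disk speed:

```shell
# Two-pass sequential read test: pass 1 reads cold from disk, pass 2
# shows how much (or how little) caching helps on the same data.
zfs create testpool/zfscachetest
# ... populate /testpool/zfscachetest with more file data than RAM ...
find /testpool/zfscachetest -type f | cpio -o > /dev/null   # cold pass
find /testpool/zfscachetest -type f | cpio -o > /dev/null   # cached pass
zfs destroy testpool/zfscachetest
```

In the quoted run both passes take essentially the same wall-clock time (about 4m21s), which is the symptom under discussion: the second, supposedly cached, pass gets no benefit.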

Re: [zfs-discuss] deduplication

2009-07-11 Thread Mike Gerdts
r trouble in the long term without deduplication to handle ongoing operation. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Interposing on readdir and friends

2009-07-02 Thread Mike Gerdts
his for smallish (8KB) directories. > > > BTW: If you like to fix the software, you should know that Linux has at least > one filesystem that returns the entries for "." and ".." out of order. -- Mike Gerdts http://mgerdts.blogspot.com/
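The ordering point is easy to observe from the shell. Assuming a GNU or BSD ls, the -f option disables sorting and shows entries in the order readdir(3) returns them, "." and ".." included — the order any interposer on readdir would have to cope with:

```shell
# Observe raw readdir order: "." and ".." need not come first on every
# filesystem, and regular entries need not be sorted.
tmpdir=$(mktemp -d)
touch "$tmpdir/bbb" "$tmpdir/aaa"
ls -f "$tmpdir"        # unsorted readdir order, dot entries included
ls "$tmpdir"           # compare with the sorted default output
rm -rf "$tmpdir"
```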

Re: [zfs-discuss] Interposing on readdir and friends

2009-07-02 Thread Mike Gerdts
/lib/libc/port/gen/readdir.c http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libbc/libc/gen/common/readdir.c The libbc version hasn't changed since the code became public. You can get to an older libc variant of it by clicking on the history link or using th

Re: [zfs-discuss] Why Oracle process open(2)/ioctl(2) /dev/dtrace/helper?

2009-06-22 Thread Mike Gerdts
009 09:06:09 KST
>  open(/dev/dtrace/helper)
>
>              libc.so.1`open
>              libCrun.so.1`0x7a50aed8
>              libCrun.so.1`0x7a50b0f4
>              ld.so.1`call_fini+0xd0
>              ld.so.1`atexit_fini+0x80
>              libc.so.1`_exithandle+0x48
>              libc.so.1`exit+0x4
>              oracle`_start+0x184
>
> ***
-- Mike Gerdts http://mgerdts.blogspot.com/
