[zfs-discuss] zdb for deleted file recovery?

2013-01-31 Thread Eric
out 1400 metaslabs, and each one taking hours to complete (I've got a script that's been running for about an hour or so now and has managed to reconstruct about 80M out of ~300M free from a single metaslab - then I get to run it for all the other metaslabs). So if this idea is completely insane a

Re: [zfs-discuss] Another zfs dataset [was: Plans for swapping to part of a pool]

2007-07-13 Thread Eric
inds me of the problem of bootstrapping the slab allocator and of avoiding allocations when freeing memory objects. When people have done cool things in the past, it raises the bar on expectations for the future :). Eric

[zfs-discuss] checksum errors increasing on "spare" vdev?

2010-03-17 Thread Eric Sproul
ight expect my read performance to increase as resilver progresses, as less and less data requires reconstruction. I haven't measured this in a controlled environment though, so I'm mostly just curious about the theory. Eric

Re: [zfs-discuss] Q : recommendations for zpool configuration

2010-03-20 Thread Eric Andersen
2, go with mirrors. Either way, if you care about your data, back it up. eric

Re: [zfs-discuss] RAID10

2010-03-26 Thread Eric Andersen
erformance, and ease of replacing drives mean to you and go from there. ZFS will do pretty much any configuration to suit your needs. eric

Re: [zfs-discuss] Simultaneous failure recovery

2010-03-31 Thread Eric Schrock
ault received by the zfs-retire FMA agent. There is no notion that the spares should be re-evaluated when they become available at a later point in time. Certainly a reasonable RFE, but not something ZFS does today. You can 'zpool attach' the spare like a normal device - that's
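
For context, manually promoting a spare that way looks roughly like this (pool and device names here are assumptions for illustration):

  # attach the spare as a mirror of the failing device, wait for resilver,
  # then drop the original
  zpool attach tank c1t2d0 c1t3d0
  zpool detach tank c1t2d0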

Re: [zfs-discuss] no hot spare activation?

2010-04-05 Thread Eric Schrock
eives the list.suspect event. This code path is tested many, many times every day, so it's not as obvious as "this doesn't work." The ZFS retire agent subscribes only to ZFS faults. The underlying driver or other telemetry h

Re: [zfs-discuss] no hot spare activation?

2010-04-05 Thread Eric Schrock
' show? Does doing a 'zpool replace c2t3d1 c2t3d2' by hand succeed? - Eric
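
For reference, the full form of that command names the pool first (the pool name is assumed here):

  # replace the failed disk c2t3d1 with the standby disk c2t3d2
  zpool replace tank c2t3d1 c2t3d2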

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-08 Thread Eric Andersen
ve works for me, but there are certainly weaknesses with using it as a backup solution (as has been much discussed on this list.) Hopefully, in the future it will be possible to remove vdevs from a pool and to restripe data across a pool. Those particul

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-09 Thread Eric Andersen
more cost effective to just build a new system with newer and better technology. It should take me a long while to fill up 9TB, but there was a time when I thought a single gigabyte was a ridiculous amount of storage too. Eric On Apr 8, 2010, at 11:21 PM, Erik Trimble wrote: > Eric An

Re: [zfs-discuss] ZFS RaidZ recommendation

2010-04-09 Thread Eric Andersen
ed if these drives end up flaking out on me. You usually get what you pay for. What I have isn't great, but it's better than nothing. Hopefully, I'll never need to recover data from them. If they end up proving to be too unreliable, I'll have to look at other options. Eric

Re: [zfs-discuss] Fileserver help.

2010-04-13 Thread Eric Andersen
don't have ethernet run to it, and trying to stream any media over wireless-g, especially the HD stuff, is frustrating to say the least. I dropped $100 on an xtreamer media player, and it's great. Plays any format/container I can throw at it.

Re: [zfs-discuss] Replaced drive in zpool, was fine, now degraded - ohno

2010-04-14 Thread Eric Andersen
> I'm on snv 111b. I attempted to get smartmontools > working, but it doesn't seem to want to work as > these are all sata drives. Have you tried using '-d sat,12' when using smartmontools? opensolaris.org/jive/thread.jspa?messageID=473727
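
A sketch of the suggested invocation (the device path is an assumption):

  # '-d sat,12' selects the SCSI-to-ATA translation layer with 12-byte commands,
  # which many SATA drives behind Solaris drivers need
  smartctl -d sat,12 -a /dev/rdsk/c1t0d0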

Re: [zfs-discuss] Dedup... still in beta status

2010-06-16 Thread Eric Schrock
othing pathological (i.e. 30 seconds, not 30 hours). Expect to see fixes for these remaining issues in the near future. - Eric

Re: [zfs-discuss] hot detach of disks, ZFS and FMA integration

2010-06-17 Thread Eric Schrock
distinguish between REMOVED and FAULTED devices. Mis-diagnosing a removed drive as faulted is very bad (fault = broken hardware = service call = $$$). - Eric P.S. the bug in the ZFS scheme module is legit, we just haven't fixed it yet

Re: [zfs-discuss] hot detach of disks, ZFS and FMA integration

2010-06-17 Thread Eric Schrock
aving your pool running minus one disk for hours/days/weeks is clearly broken. If you have a solution that correctly detects devices as REMOVED for a new class of HBAs/drivers, that'd be more than welcome. If you choose to represent missing devices as faulted in your own third party sy

Re: [zfs-discuss] hot detach of disks, ZFS and FMA integration

2010-06-17 Thread Eric Schrock
will it report them as CMD_DEV_GONE, or will it report an error > causing a fault to be flagged? This is detected as device removal. There is a timeout associated with I/O errors in zfs-diagnosis that gives some grace period to detect removal before declaring a disk faulted. - Eric

Re: [zfs-discuss] hot detach of disks, ZFS and FMA integration

2010-06-18 Thread Eric Schrock
On Jun 18, 2010, at 4:56 AM, Robert Milkowski wrote: > On 18/06/2010 00:18, Garrett D'Amore wrote: >> On Thu, 2010-06-17 at 18:38 -0400, Eric Schrock wrote: >> >>> On the SS7000 series, you get an alert that the enclosure has been detached >>>

Re: [zfs-discuss] ZFS forensics/revert/restore shellscript and how-to.

2010-06-24 Thread Eric Jones
Where is the link to the script, and does it work with RAIDZ arrays? Thanks so much.

[zfs-discuss] ZFS Filesystem Recovery on RAIDZ Array

2010-06-24 Thread Eric Jones
This day went from usual Thursday to worst day of my life in the span of about 10 seconds. Here's the scenario: 2 computers, both Solaris 10u8, one is the primary, one is the backup. Primary system is RAIDZ2, Backup is RAIDZ with 4 drives. Every night, Primary mirrors to Backup using the 'zfs
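
The nightly mirroring described here is presumably zfs send/recv; a minimal sketch with assumed pool, snapshot, and host names:

  # take tonight's snapshot, then send the delta since last night to the backup box
  zfs snapshot -r tank@tonight
  zfs send -R -i tank@lastnight tank@tonight | ssh backup zfs recv -dF backuppool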

Re: [zfs-discuss] zfs-discuss Digest, Vol 56, Issue 126

2010-06-30 Thread Eric Andersen
On Jun 28, 2010, at 10:03 AM, zfs-discuss-requ...@opensolaris.org wrote: > Send zfs-discuss mailing list submissions to > zfs-discuss@opensolaris.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss > or, via ema

Re: [zfs-discuss] snapshot .zfs folder can only be seen in the top of a file system?

2010-07-24 Thread Eric Schrock
age you to work on the RFE yourself - any implementation would certainly be appreciated. This possibility was originally why the 'snapdir' property was named as it was, so we could someday support 'snapdir=every' to export .zfs in every directory. - Eric

Re: [zfs-discuss] ZFS disk failure question

2009-10-14 Thread Eric Schrock
failed disk with the spare. The spare is now busy and it fails. This has to be a bug. You need to 'zpool detach' the original (c8t7d0). - Eric Another way to recover is if you have a replacement disk for c8t7d0, like this: 1. Physically replace c8t7d0. You might have to unconfigur
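
A minimal sketch of that detach step (the pool name is assumed):

  # detach the failed original; the in-use spare then takes its place permanently
  zpool detach tank c8t7d0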

Re: [zfs-discuss] ZFS disk failure question

2009-10-14 Thread Eric Schrock
" is overly brief and could be expanded to include this use case. - Eric

Re: [zfs-discuss] ZFS disk failure question

2009-10-14 Thread Eric Schrock
On 10/14/09 14:33, Cindy Swearingen wrote: Hi Eric, I tried that and found that I needed to detach and remove the spare before replacing the failed disk with the spare disk. You should just be able to detach 'c0t6d0' in the config below. The spare (c0t7d0) will assume its pl

Re: [zfs-discuss] cryptic vdev name from fmdump

2009-10-23 Thread Eric Schrock
o do "echo ::spa -c | mdb -k" and look for that vdev id, assuming the vdev is still active on the system. - Eric Cindy On 10/23/09 14:52, sean walmsley wrote: Thanks for this information. We have a weekly scrub schedule, but I ran another just to be sure :-) It completed with

Re: [zfs-discuss] cryptic vdev name from fmdump

2009-10-23 Thread Eric Schrock
On 10/23/09 16:56, sean walmsley wrote: Eric and Richard - thanks for your responses. I tried both: 'echo ::spa -c | mdb -k' and 'zdb -C' (not much of a man page for this one!) and was able to match the POOL id from the log (hex 4fcdc2c9d60a5810) with both outputs. As Richard pointed out, I needed
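
The matching workflow, roughly (the GUID is the one quoted above; note that fmdump logs it in hex):

  # dump the cached pool configurations, including pool and vdev GUIDs
  zdb -C
  # or inspect the live SPA structures on a running system
  echo ::spa -c | mdb -k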

Re: [zfs-discuss] Sun Flash Accelerator F20

2009-11-02 Thread Eric Sproul
an LED on the board face, not even on the bracket, so I'd have to go crack open the machine to make sure the battery was holding a charge. That didn't fit our model of maintainability, so we didn't deploy it. Regards, Eric

Re: [zfs-discuss] ZFS dedup issue

2009-11-03 Thread Eric Schrock
hologies as the pool gets full. Namely, that ZFS will artificially enforce a limit on the logical size of the pool based on non-deduped data. This is obviously something that should be addressed. - Eric dd if=/dev/urandom of=/tank/foobar/file1 bs=1024k count=512 512+0 records in

Re: [zfs-discuss] ZFS dedup issue

2009-11-03 Thread Eric Schrock
hologies as the pool gets full. Namely, that ZFS will artificially enforce a limit on the logical size of the pool based on non-deduped data. This is obviously something that should be addressed. Eric, Many people (me included) perceive deduplication as a means to save disk space and allow

Re: [zfs-discuss] ..and now ZFS send dedupe

2009-11-09 Thread Eric Schrock
On 11/09/09 12:58, Brent Jones wrote: Are these recent developments due to help/support from Oracle? No. Or is it business as usual for ZFS developments? Yes. - Eric

Re: [zfs-discuss] uneven usage between raidz2 devices in storage pool

2009-11-20 Thread Eric Sproul
ites will be spread across the 3 vdevs. Existing data stays where it is for reading, but if you update it, those writes will be balanced across all 3 vdevs. If you are mostly concerned with write performance, you don't have to do anything. Regards, Eric

Re: [zfs-discuss] uneven usage between raidz2 devices in storage pool

2009-11-20 Thread Eric Sproul
details. You didn't mention how wide your raidz2 vdevs are, but I would imagine that even with a larger proportion of writes going to the new vdev, your overall write performance (particularly on concurrent writes) will improve regardless. Eric

Re: [zfs-discuss] x4500 failed disk, not sure if hot spare took over correctly

2010-01-09 Thread Eric Schrock
ng at the ideal state. By definition a hot spare is always DEGRADED. As long as the spare itself is ONLINE it's fine. Hope that helps, - Eric

Re: [zfs-discuss] x4500 failed disk, not sure if hot spare took over correctly

2010-01-11 Thread Eric Schrock
On 01/11/10 17:42, Paul B. Henson wrote: On Sat, 9 Jan 2010, Eric Schrock wrote: No, it's fine. DEGRADED just means the pool is not operating at the ideal state. By definition a hot spare is always DEGRADED. As long as the spare itself is ONLINE it's fine. One more question o

Re: [zfs-discuss] x4500 failed disk, not sure if hot spare took over correctly

2010-01-11 Thread Eric Schrock
On Jan 11, 2010, at 6:35 PM, Paul B. Henson wrote: > On Mon, 11 Jan 2010, Eric Schrock wrote: > >> No, there is no way to tell if a pool has DTL (dirty time log) entries. > > Hmm, I hadn't heard that term before, but based on a quick search I take it > that's th

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-06 Thread Eric Schrock
e-attach the device if it is indeed just missing. #2 is being worked on, but also does not affect the standard reboot case. - Eric

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-07 Thread Eric Schrock
On Feb 6, 2010, at 11:30 PM, Christo Kutrovsky wrote: > Eric, thanks for clarifying. > > Could you confirm the release for #1 ? As "today" can be misleading depending > on the user. A long time (snv_96/s10u8). > Is there a schedule/target for #2 ? No. > And jus

Re: [zfs-discuss] ZFS Volume Destroy Halts I/O

2010-02-15 Thread Eric Schrock
fact that free operations used to be in-memory only but with dedup enabled can result in synchronous I/O to disks in syncing context. - Eric

Re: [zfs-discuss] SSDs with a SCSI SCA interface?

2010-02-19 Thread Eric Sproul
not particular about whether it's a 68-pin or > SCA) Bitmicro makes one: http://www.bitmicro.com/products_edisk_altima_35_u320.php They also make a version with a 4Gb FC interface. Haven't tried either one, but found Bitmicro when researching SSD options for a V890. Eric

Re: [zfs-discuss] Growing CKSUM errors with no READ/WRITE errors

2011-10-21 Thread Eric Sproul
capable, it seems unlikely to be the issue. I'd make sure all cables are fully seated and not kinked or otherwise damaged. Eric

Re: [zfs-discuss] Question about "Seagate Pipeline HD" or "SV35 series"HDDs

2012-03-28 Thread Eric Sproul
poorer performance than even a bog-standard desktop drive. Never seemed like a good idea to me, and to paraphrase Richard Elling, expecting any kind of respectable performance from spinning media is a sucker's game. ;) Eric

Re: [zfs-discuss] [developer] Setting default user/group quotas[usage accounting]?

2012-04-25 Thread Eric Schrock
ZFS will always track per-user usage information even in the absence of quotas. See the zfs 'userused@' properties and 'zfs userspace' command. - Eric 2012/4/25 Fred Liu > Missing an important ‘NOT’: > > >OK. I see. And I agree such quotas will **NOT** scal
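
A short illustration of those interfaces (dataset and user names are assumptions):

  # per-user space consumption for a filesystem
  zfs userspace tank/home
  # usage for a single user via the userused@ property
  zfs get userused@alice tank/home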

Re: [zfs-discuss] [developer] Setting default user/group quotas[usage accounting]?

2012-04-26 Thread Eric Schrock
with the 'compression' property on a per-filesystem level, and is fundamentally per-block. Dedup is also controlled per-filesystem, though the DDT is global to the pool. If you think there are compelling features lurking here, then by all means grab the code and run with it :-) - Eric

Re: [zfs-discuss] [developer] History of EPERM for unlink() of directories on ZFS?

2012-06-25 Thread Eric Schrock
Also worth noting that ZFS doesn't let you open(2) directories and read(2) from them, something (I believe) UFS does allow. - Eric On Mon, Jun 25, 2012 at 10:40 AM, Garrett D'Amore wrote: > I don't know the precise history, but I think it's a mistake to permit > di

Re: [zfs-discuss] [developer] History of EPERM for unlink() of directories on ZFS?

2012-06-25 Thread Eric Schrock
ing, guess you learn something new every day :-) http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/mkdir.c Thanks, - Eric

Re: [zfs-discuss] ZFS offline ZIL corruption not detected

2010-08-26 Thread Eric Schrock
e if it does the right thing for checksum errors. That is a very small subset of possible device failure modes. - Eric

Re: [zfs-discuss] ZFS offline ZIL corruption not detected

2010-08-26 Thread Eric Schrock
additional writes for every block. If it's even possible to implement this "paranoid ZIL" tunable, are you willing to take a 2-5x performance hit to be able to detect this failure mode? - Eric

Re: [zfs-discuss] Question about (delayed) block freeing

2010-10-29 Thread Eric Schrock
come and the (now free) > blocks are reused for new data. ZFS will not reuse blocks for 3 transaction groups. This is why uberblock rollback will normally only attempt a rollback of up to two previous txgs. - Eric

Re: [zfs-discuss] [illumos-Developer] ZFS spare disk usage issue

2011-03-04 Thread Eric Schrock
spare, and that spare may not have the same RAS properties as other devices in your RAID-Z stripe (it may put 3 disks on the same controller in one stripe, for example). - Eric On Fri, Mar 4, 2011 at 7:06 AM, Roy Sigurd Karlsbakk wrote: > Hi all > > I just did a small test on RAIDz2 to

Re: [zfs-discuss] changing vdev types

2011-06-01 Thread Eric Sproul
s send/recv to move the datasets, so your mountpoints and other properties will be preserved. Eric

Re: [zfs-discuss] changing vdev types

2011-06-01 Thread Eric Sproul
On Wed, Jun 1, 2011 at 3:47 PM, Matt Harrison wrote: > Thanks Eric, however seeing as I can't have two pools named 'tank', I'll > have to name the new one something else. I believe I will be able to rename > it afterwards, but I just wanted to check first.

Re: [zfs-discuss] SATA disk perf question

2011-06-03 Thread Eric Sproul
may internally reorder and/or aggregate those writes before sending them to the platter. Eric

Re: [zfs-discuss] compare snapshot to current zfs fs

2011-06-05 Thread Eric Sproul
re, so upcoming OI releases will be in better sync. OI 147 still had man pages from the original OpenSolaris docs consolidation. Sorry I can't answer about zfs diff directly-- haven't used that feature yet. Eric

Re: [zfs-discuss] [illumos-Developer] zfs refratio property

2011-06-06 Thread Eric Schrock
e should be "refcompressratio" as the long name and "refratio" as the short name would make sense, as that matches "compressratio". Matt? - Eric On Mon, Jun 6, 2011 at 7:08 PM, Haudy Kazemi wrote: > On 6/6/2011 5:02 PM, Richard Elling wrote: > >> On Jun

Re: [zfs-discuss] [illumos-Developer] zfs refratio property

2011-06-06 Thread Eric Schrock
Webrev has been updated: http://dev1.illumos.org/~eschrock/cr/zfs-refratio/ - Eric

Re: [zfs-discuss] [illumos-Developer] zfs refratio property

2011-06-06 Thread Eric Schrock
Good catch. For consistency, I updated the property description to match "compressratio" exactly. - Eric On Mon, Jun 6, 2011 at 9:39 PM, Mark Musante wrote: > > minor quibble: compressratio uses a lowercase x for the description text > whereas the new prop uses an upperc

Re: [zfs-discuss] zfs usable space?

2011-06-15 Thread Eric Sproul
dor math". :) Then I do $NSEC/2097152 to get GB (assuming 512-byte sectors). ZFS reserves 1/64 of the pool size to protect copy-on-write as the pool approaches being full. After you make your usable space calculation, subtract 1/64 of that (total*.016) and that should be very close to the av
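
To make that arithmetic concrete (sizes assumed): a pool with 4096 GB usable after redundancy reserves 4096/64 = 64 GB, leaving about 4032 GB. The $NSEC/2097152 step works because NSEC sectors x 512 bytes/sector / 2^30 bytes/GB = NSEC/2^21 = NSEC/2097152.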

Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-16 Thread Eric Sproul
also a nice fit for the typical 8-port SAS HBA. Eric

Re: [zfs-discuss] Pure SSD Pool

2011-07-11 Thread Eric Sproul
s before another write comes along that would occupy them anew. I'm contemplating a similar setup for some servers, so I'm interested if other people have been operating pure-SSD zpools and what their experiences have been. Eric

Re: [zfs-discuss] Pure SSD Pool

2011-07-12 Thread Eric Sproul
On Tue, Jul 12, 2011 at 1:06 AM, Brandon High wrote: > On Mon, Jul 11, 2011 at 7:03 AM, Eric Sproul wrote: >> Interesting-- what is the suspected impact of not having TRIM support? > > There shouldn't be much, since zfs isn't changing data in place. Any > drive with r

Re: [zfs-discuss] Pure SSD Pool

2011-07-12 Thread Eric Sproul
r that explanation. So finding drives that keep more space in reserve is key to getting consistent performance under ZFS. Eric

Re: [zfs-discuss] recover raidz from fried server ??

2011-07-13 Thread Eric Sproul
hey did for the 1068e and others. As long as you don't configure any RAID volumes, the card will attach to the non-RAID mpt_sas driver in Solaris and you'll be all set. Eric

Re: [zfs-discuss] file system under heavy load, how to find out what the cause is?

2011-09-20 Thread Eric Sproul
y broken down by mountpoint: fsstat -i `mount | awk '{if($3 ~ /^[^\/:]+\//) {print $1;}}'` 1 Of course this only works for POSIX filesystems. This won't catch activity to zvols. Maybe that won't matter in your case. Eric

Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Eric Schrock
on if you are using whole disks and a driver with static device paths (such as sata). - Eric

Re: [zfs-discuss] questions about ZFS Send/Receive

2008-07-29 Thread eric kustarz
> destination file system and all of its child file systems are unmounted and cannot be accessed during the receive operation. Actually we don't unmount the file systems anymore for incremental send/recv, see: 6425096 want online 'zfs rec

Re: [zfs-discuss] ZFS on 32bit.

2008-08-08 Thread eric kustarz
I've filed specifically for ZFS: 6735425 some places where 64bit values are being incorrectly accessed on 32bit processors eric On Aug 6, 2008, at 1:59 PM, Brian D. Horn wrote: > In the most recent code base (both OpenSolaris/Nevada and S10Ux with > patches) > all the kno

Re: [zfs-discuss] more ZFS recovery

2008-08-12 Thread eric kustarz
do you mean by "internal data structures"? Are you referring to things like space maps, props, history obj, etc. (basically anything other than user data and the indirect blocks that point to user data)? eric

Re: [zfs-discuss] ZFS + MPXIO + SRDF

2008-08-13 Thread eric kustarz
ZFS pool Ugly workaround is to purposely reboot the original host. And you will want: 6282725 hostname/hostid should be stored in the label http://blogs.sun.com/erickustarz/en_US/entry/poor_man_s_cluster_end which will be in s10u6. eric

Re: [zfs-discuss] zpool detach from degraded mirror : why "only applicable to mirror ..." ?

2008-08-15 Thread Eric Schrock
thing completely non-sensical. If you do a 'zpool scrub', does it complete without any errors? - Eric On Fri, Aug 15, 2008 at 01:48:48PM -0700, Nils Goroll wrote: > Hi, > > I thought that this question must have been answered already, but I have > not found any explanations. I

Re: [zfs-discuss] zpool detach from degraded mirror : why "only applicable to mirror ..." ?

2008-08-15 Thread Eric Schrock
On Fri, Aug 15, 2008 at 02:14:02PM -0700, Eric Schrock wrote: > The fact that it's DEGRADED and not FAULTED indicates that it thinks the > DTL (dirty time logs) for the two sides of the mirrors overlap in some > way, so detaching it would result in loss of data. In the process of

Re: [zfs-discuss] ZFS handling of many files

2008-08-21 Thread eric kustarz
n't you test this right now? You could generate a similar workload using FileBench: http://www.solarisinternals.com/wiki/index.php/FileBench eric

Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-08-28 Thread Eric Schrock
another set of tunables is not practical. It will be interesting to see if this is an issue after the retry logic is modified as described above. Hope that helps, - Eric On Thu, Aug 28, 2008 at 01:08:26AM -0700, Ross wrote: > Since somebody else has just posted about their entire system locking up w

Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-08-28 Thread Eric Schrock
uation really poorly. I don't think you understand how this works. Imagine two I/Os, just with different sd timeouts and retry logic - that's B_FAILFAST. It's quite simple, and independent of any hardware implementation. - Eric

Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-08-28 Thread Eric Schrock
ny such "best effort RAS" is a little dicey because you have very little visibility into the state of the pool in this scenario - "is my data protected?" becomes a very difficult question to answer. - Eric

Re: [zfs-discuss] Explaining ZFS message in FMA

2008-09-04 Thread Eric Schrock
You should be able to do 'zpool status -x' to find out what vdev is broken. A useful extension to the DE would be to add a label to the suspect corresponding to /. - Eric On Thu, Sep 04, 2008 at 06:34:33PM +0200, Alain Chéreau wrote: > Hi all, > > ZFS send a message to

Re: [zfs-discuss] Greenbytes/Cypress

2008-09-23 Thread Eric Schrock

Re: [zfs-discuss] make zfs(1M) use literals when displaying properties in scripted mode

2008-09-30 Thread Eric Schrock
A better solution (one that wouldn't break backwards compatibility) would be to add the '-p' option (parseable output) from 'zfs get' to the 'zfs list' command as well. - Eric On Wed, Oct 01, 2008 at 03:59:27PM +1000, David Gwynne wrote: > as the topic
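
For comparison, the existing parseable mode in 'zfs get' (the dataset name is assumed):

  # -H suppresses headers, -p prints exact numeric values for scripting
  zfs get -Hp used,available tank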

[zfs-discuss] Root pool mirror wasn't automatically configured during install

2008-10-03 Thread Eric Boutilier
attach was required. -Eric

Re: [zfs-discuss] Root pool mirror wasn't automatically configured during install

2008-10-06 Thread Eric Boutilier
On Fri, 3 Oct 2008, [EMAIL PROTECTED] wrote: > Eric Boutilier wrote: >> Is the following issue related to (will probably get fixed by) bug 6748133? >> ... >> >> During a net-install of b96, I modified the name of the root pool, >> overriding the default name, r

Re: [zfs-discuss] ZFS Mirrors braindead?

2008-10-07 Thread Eric Schrock
probably happened to you. FYI, this is bug 6667208 fixed in build 100 of nevada. - Eric

Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-10 Thread Eric Schrock
iting ZFS pools[1]. But I haven't actually heard a reasonable proposal for what a fsck-like tool (i.e. one that could "repair" things automatically) would actually *do*, let alone how it would work in the variety of situations it needs to (compressed RAID-Z?) where the standard ZFS i

Re: [zfs-discuss] zpool import problem

2008-10-27 Thread Eric Schrock
These are the symptoms of a shrinking device in a RAID-Z pool. You can try to run the attached script during the import to see if this is the case. There's a bug filed on this, but I don't have it handy. - Eric On Sun, Oct 26, 2008 at 05:18:25PM -0700, Terry Heatlie wrote: > Folk

Re: [zfs-discuss] cleaning user properties

2008-11-03 Thread Eric Schrock
set locally ('zfs get -s local ...'). - Eric On Mon, Nov 03, 2008 at 08:35:22AM -0500, Mark J Musante wrote: > On Mon, 3 Nov 2008, Luca Morettoni wrote: > > > now I need to *clear* (remove) the property from > > rpool/export/home/luca/src filesystem, but if I use the "
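
For the record, a user property is cleared with 'zfs inherit'; a sketch against the filesystem named above (the property name is an assumption):

  # show only locally-set properties
  zfs get -s local all rpool/export/home/luca/src
  # remove the user property by clearing its local value
  zfs inherit com.example:myprop rpool/export/home/luca/src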

Re: [zfs-discuss] Sun Storage 7000

2008-11-10 Thread Eric Schrock
http://blogs.sun.com/fishworks There will be much more information throughout the day and in the coming weeks. If you want to give it a spin, be sure to check out the freely available VM images. - Eric

Re: [zfs-discuss] Storage 7000

2008-11-17 Thread Eric Schrock
configured in an implementation-defined way for the software to function correctly. - Eric

Re: [zfs-discuss] Storage 7000

2008-11-17 Thread Eric Schrock
ed config) so that we can mirror/RAID across them. Even without NSPF, we have redundant cables, HBAs, power supplies, and controllers, so this is only required if you are worried about disk backplane failure (a very rare failure mode). Can you point to the literature that suggests this is

Re: [zfs-discuss] Storage 7000

2008-11-17 Thread Eric Schrock
has gotten the > best of me, so I apologize. Feel free to correct as you see fit. I can update the blog entry if it's misleading. I assumed that it was implicit that the absence of the above (missing or broken disks) meant supported, but I admit that I did not state that explicitl

Re: [zfs-discuss] "ZFS, Smashing Baby" a fake???

2008-11-25 Thread Eric Schrock
ilure ereport). So ZFS pre-emptively short circuits all I/O and treats the drive as faulted, even though the diagnosis hasn't come back yet. We can only do this for errors that have a 1:1 correspondence with faults. - Eric On Tue, Nov 25, 2008 at 04:10:13PM +, Ross Smith wrote: > I

[zfs-discuss] ZFS ACL/ACE issues with Samba - Access Denied

2008-11-25 Thread Eric Hill
ions on LIB1 for 777, and created a test subfolder that I have applied permissions to through Windows XP. Windows complained about reordering the permissions when I first set them, and now doesn't complain when opening the security tab, so I assume they're ordered correctly. [EMAIL PROTECTED]:/po

Re: [zfs-discuss] "ZFS, Smashing Baby" a fake???

2008-11-26 Thread Eric Schrock
ck into new behavior that should provide a much improved experience. - Eric P.S. I'm also not sure that B_FAILFAST behaves in the way you think it does. My reading of sd.c seems to imply that much of what you suggest is actually how it currently behaves, but you should probably

Re: [zfs-discuss] ZFS ACL/ACE issues with Samba - Access Denied

2008-12-01 Thread Eric Hill
Well, there's the problem... # id -a tom uid=15669(tom) gid=15004(domain users) groups=15004(domain users) # wbinfo -r shows the full list of groups, but id -a only lists "domain users". Since I'm trying to restrict permissions on other groups, my access denied error message makes more sense.

Re: [zfs-discuss] help please - The pool metadata is corrupted

2008-12-04 Thread Eric Schrock
Can you send the output of the attached D script when running 'zpool status'? - Eric On Thu, Dec 04, 2008 at 02:58:54PM -0800, Brett wrote: > As a result of a power spike during a thunder storm I lost a sata controller > card. This card supported my zfs pool called newsan which

Re: [zfs-discuss] help please - The pool metadata is corrupted

2008-12-08 Thread Eric Schrock
Well it shows that you're not suffering from a known bug. The symptoms you were describing were the same as those seen when a device spontaneously shrinks within a raid-z vdev. But it looks like the sizes are the same ("config asize" = "asize"), so I'm at a loss. -

Re: [zfs-discuss] hot spare not so hot ??

2009-01-20 Thread Eric Schrock
What software are you running? There was a bug where offline device failure did not trigger hot spares, but that should be fixed now (at least in OpenSolaris, not sure about s10u6). - Eric On Wed, Jan 21, 2009 at 09:57:42AM +1100, Nathan Kroenert wrote: > An interesting interpretation of us

[zfs-discuss] [Fwd: Re: [Fwd: RE: Disk Pool overhead]]

2009-02-09 Thread Eric Frank
Hi There, One of my partners asked the question w.r.t. Disk Pool overhead for the 7000 series. Adam Leventhal put that it was very small (1/64); see below. Do we have any further info regarding this? Thanks, -eric :) Original Message Subject: Re: [Fwd: RE: Disk

Re: [zfs-discuss] User quota design discussion..

2009-03-12 Thread Eric Schrock
Note that 6501037 (want user/group quotas on ZFS) is already committed to be fixed in build 113 (i.e. in the next month). - Eric On Thu, Mar 12, 2009 at 12:04:04PM +0900, Jorgen Lundman wrote: > > In the style of a discussion over a beverage, and talking about > user-quotas

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-05-19 Thread Eric Schrock
tinue without committed data. - Eric

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-05-19 Thread Eric Schrock
Ds - it doesn't matter if reads are fast for slogs. With the txg being a working set of the active commit, so might be a set of NFS iops? If the NFS ops are synchronous, then yes. Async operations do not use the ZIL and therefore don't have anything to do with slogs. - Eric

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-05-19 Thread Eric Schrock
open. A failed slog device can prevent such a pool from being imported. - Eric
