out 1400
metaslabs, and each one taking hours to complete (I've got a script that's
been running for about an hour or so now and has managed to reconstruct
about 80M out of ~300M free from a single metaslab- then I get to run it
for all the other metaslabs). So if this idea is completely insane a
inds me of the problem of bootstrapping the slab allocator
and of avoiding allocations when freeing memory objects. When people
do cool things in the past, it raises the bar on expectations for the
future :).
Eric
ight expect my read performance to increase as resilver
progresses, as less and less data requires reconstruction. I haven't measured
this in a controlled environment though, so I'm mostly just curious about the
theory.
Eric
2, go with mirrors. Either way, if you care about your
data, back it up.
eric
erformance, and ease of replacing drives mean to you and go
from there. ZFS will do pretty much any configuration to suit your needs.
eric
ault received by the zfs-retire
FMA agent. There is no notion that the spares should be re-evaluated when they
become available at a later point in time. Certainly a reasonable RFE, but not
something ZFS does today.
You can 'zpool attach' the spare like a normal device - that's
eives the list.suspect event. This code path is
tested many, many times every day, so it's not as obvious as "this doesn't
work."
The ZFS retire agent subscribes only to ZFS faults. The underlying driver or
other telemetry h
' show? Does doing a 'zpool
replace c2t3d1 c2t3d2' by hand succeed?
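A minimal sketch of what that looks like by hand, assuming the pool is named 'tank' (the pool name is a placeholder):

  zpool replace tank c2t3d1 c2t3d2
  zpool status -v tank      # resilvering onto c2t3d2 should show up here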
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
ve works for me, but there are certainly weaknesses with using it
as a backup solution (as has been much discussed on this list.)
Hopefully, in the future it will be possible to remove vdevs from a pool and to
restripe data across a pool. Those particul
more cost effective to just build a new
system with newer and better technology. It should take me a long while to
fill up 9TB, but there was a time when I thought a single gigabyte was a
ridiculous amount of storage too.
Eric
On Apr 8, 2010, at 11:21 PM, Erik Trimble wrote:
> Eric An
ed if these drives end up flaking out on me.
You usually get what you pay for. What I have isn't great, but it's better
than nothing. Hopefully, I'll never need to recover data from them. If they
end up proving to be too unreliable, I'll have to look at other options.
Eric
don't have ethernet run to it, and trying to stream any media over wireless-g,
especially the HD stuff, is frustrating to say the least. I dropped $100 on an
xtreamer media player, and it's great. Plays any format/container I can throw
at it.
> I'm on snv 111b. I attempted to get smartmontools
> workings, but it doesn't seem to want to work as
> these are all sata drives.
Have you tried using '-d sat,12' when using smartmontools?
opensolaris.org/jive/thread.jspa?messageID=473727
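For example (the disk path below is just a placeholder):

  smartctl -d sat,12 -a /dev/rdsk/c7t0d0

'-d sat,12' tells smartctl to use SAT (SCSI/ATA Translation) pass-through with 12-byte commands, which is often what SATA disks behind these HBAs need.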
othing pathological (i.e. 30 seconds, not 30 hours). Expect
to see fixes for these remaining issues in the near future.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
distinguish between REMOVED and FAULTED devices. Mis-diagnosing a removed
drive as faulted is very bad (fault = broken hardware = service call = $$$).
- Eric
P.S. the bug in the ZFS scheme module is legit, we just haven't fixed it yet
--
Eric Schrock, Fishworks    http://bl
aving your pool running minus one disk for hours/days/weeks is clearly broken.
If you have a solution that correctly detects devices as REMOVED for a new
class of HBAs/drivers, that'd be more than welcome. If you choose to represent
missing devices as faulted in your own third party sy
will it report them as CMD_DEV_GONE, or will it report an error
> causing a fault to be flagged?
This is detected as device removal. There is a timeout associated with I/O
errors in zfs-diagnosis that gives some grace period to detect removal before
declaring a disk faulted.
- Eric
--
Eric Schro
On Jun 18, 2010, at 4:56 AM, Robert Milkowski wrote:
> On 18/06/2010 00:18, Garrett D'Amore wrote:
>> On Thu, 2010-06-17 at 18:38 -0400, Eric Schrock wrote:
>>
>>> On the SS7000 series, you get an alert that the enclosure has been detached
>>>
Where is the link to the script, and does it work with RAIDZ arrays? Thanks so
much.
This day went from usual Thursday to worst day of my life in the span of about
10 seconds. Here's the scenario:
2 Computers, both Solaris 10u8, one is the primary, one is the backup. Primary
system is RAIDZ2, Backup is RAIDZ with 4 drives. Every night, Primary mirrors
to Backup using the 'zfs
age you to work on the RFE yourself - any implementation would
certainly be appreciated. This possibility was originally why the 'snapdir'
property was named as it was, so we could someday support 'snapdir=every' to
export .zfs in every directory.
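For reference, today the property only accepts 'hidden' or 'visible', e.g. (dataset name is just an example):

  zfs set snapdir=visible tank/home

'snapdir=every' would be an additional value on top of that.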
- Eric
--
Eric Schrock, F
failed disk with
the spare. The spare is now busy and it fails. This has to be a bug.
You need to 'zpool detach' the original (c8t7d0).
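Something like the following, assuming the pool is named 'tank' (the pool name is a placeholder):

  zpool detach tank c8t7d0

Once the failed original is detached, the spare is promoted to a permanent member of the vdev.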
- Eric
Another way to recover is if you have a replacement disk for c8t7d0,
like this:
1. Physically replace c8t7d0.
You might have to unconfigur
" is overly brief and
could be expanded to include this use case.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
On 10/14/09 14:33, Cindy Swearingen wrote:
Hi Eric,
I tried that and found that I needed to detach and remove
the spare before replacing the failed disk with the spare
disk.
You should just be able to detach 'c0t6d0' in the config below. The
spare (c0t7d0) will assume its pl
o do "echo ::spa -c | mdb
-k" and look for that vdev id, assuming the vdev is still active on the
system.
- Eric
Cindy
On 10/23/09 14:52, sean walmsley wrote:
Thanks for this information.
We have a weekly scrub schedule, but I ran another just to be sure :-)
It completed with
On 10/23/09 16:56, sean walmsley wrote:
Eric and Richard - thanks for your responses.
I tried both:
echo ::spa -c | mdb -k
zdb -C (not much of a man page for this one!)
and was able to match the POOL id from the log (hex 4fcdc2c9d60a5810) with both
outputs. As Richard pointed out, I needed
an LED on the board
face, not even on the bracket, so I'd have to go crack open the machine to make
sure the battery was holding a charge. That didn't fit our model of
maintainability, so we didn't deploy it.
Regards,
Eric
hologies as the pool gets full. Namely,
that ZFS will artificially enforce a limit on the logical size of the
pool based on non-deduped data. This is obviously something that
should be addressed.
- Eric
dd if=/dev/urandom of=/tank/foobar/file1 bs=1024k count=512
512+0 records in
hologies as the pool gets full. Namely, that ZFS will artificially enforce a
limit on the logical size of the pool based on non-deduped data. This is
obviously something that should be addressed.
Eric,
Many people (me included) perceive deduplication as a means to save
disk space and allow
On 11/09/09 12:58, Brent Jones wrote:
Are these recent developments due to help/support from Oracle?
No.
Or is it business as usual for ZFS developments?
Yes.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
ites will be spread across the 3 vdevs. Existing data stays where it
is for reading, but if you update it, those writes will be balanced across all 3
vdevs. If you are mostly concerned with write performance, you don't have to do
anything.
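If you want to watch how writes actually spread out, per-vdev activity is visible with (pool name is a placeholder):

  zpool iostat -v tank 5     # 5-second samples, broken down by vdev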
Regards,
Eric
details.
You didn't mention how wide your raidz2 vdevs are, but I would imagine that even
with a larger proportion of writes going to the new vdev, your overall write
performance (particularly on concurrent writes) will improve regardless.
Eric
ng at the ideal
state. By definition a hot spare is always DEGRADED. As long as the spare
itself is ONLINE it's fine.
Hope that helps,
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
On 01/11/10 17:42, Paul B. Henson wrote:
On Sat, 9 Jan 2010, Eric Schrock wrote:
No, it's fine. DEGRADED just means the pool is not operating at the
ideal state. By definition a hot spare is always DEGRADED. As long as
the spare itself is ONLINE it's fine.
One more question o
On Jan 11, 2010, at 6:35 PM, Paul B. Henson wrote:
> On Mon, 11 Jan 2010, Eric Schrock wrote:
>
>> No, there is no way to tell if a pool has DTL (dirty time log) entries.
>
> Hmm, I hadn't heard that term before, but based on a quick search I take it
> that's th
e-attach the device if it is indeed just missing.
#2 is being worked on, but also does not affect the standard reboot case.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
On Feb 6, 2010, at 11:30 PM, Christo Kutrovsky wrote:
> Eric, thanks for clarifying.
>
> Could you confirm the release for #1 ? As "today" can be misleading depending
> on the user.
A long time (snv_96/s10u8).
> Is there a schedule/target for #2 ?
No.
> And jus
fact that free operations used to be in-memory
only but with dedup enabled can result in synchronous I/O to disks in
syncing context.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
not particular about whether it's a 68-pin or
> SCA)
Bitmicro makes one: http://www.bitmicro.com/products_edisk_altima_35_u320.php
They also make a version with a 4Gb FC interface. Haven't tried either one, but
found Bitmicro when researching SSD options for a V890.
Eric
capable, it seems unlikely to
be the issue. I'd make sure all cables are fully seated and not
kinked or otherwise damaged.
Eric
poorer
performance than even a bog-standard desktop drive.
Never seemed like a good idea to me, and to paraphrase Richard Elling,
expecting any kind of respectable performance from spinning media is a
sucker's game. ;)
Eric
ZFS will always track per-user usage information even in the absence of
quotas. See the zfs 'userused@' properties and 'zfs userspace' command.
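For example (dataset and user names are placeholders):

  zfs userspace tank/home              # space consumed per user in that dataset
  zfs get userused@alice tank/home     # the same number for a single user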
- Eric
2012/4/25 Fred Liu
> Missing an important ‘NOT’:
>
> >OK. I see. And I agree such quotas will **NOT** scal
with the 'compression'
property on a per-filesystem level, and is fundamentally per-block. Dedup
is also controlled per-filesystem, though the DDT is global to the pool.
If you think there are compelling features lurking here, then by all means
grab the code and run with it :-)
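For example (dataset names are placeholders):

  zfs set compression=on tank/text     # per-filesystem, applies to newly written blocks
  zfs set dedup=on tank/vmimages       # per-filesystem switch, but the DDT is pool-wide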
- Eric
Also worth noting that ZFS also doesn't let you open(2) directories and
read(2) from them, something (I believe) UFS does allow.
- Eric
On Mon, Jun 25, 2012 at 10:40 AM, Garrett D'Amore wrote:
> I don't know the precise history, but I think its a mistake to permit
> di
ing, guess you learn something new every day :-)
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/mkdir.c
Thanks,
- Eric
--
Eric Schrock
Delphix
http://blog.delphix.com/eschrock
275 Middlefield Road, Suite 50
Menlo Park, CA 94025
http://www.delphix.com
e if it does the right thing for checksum
errors. That is a very small subset of possible device failure modes.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
additional writes for every block. If it's even possible to implement this
"paranoid ZIL" tunable, are you willing to take a 2-5x performance hit to be
able to detect this failure mode?
- Eric
--
Eric Schrock, Fishworks
come and the (now free)
> blocks are reused for new data.
ZFS will not reuse blocks for 3 transaction groups. This is why uberblock
rollback will normally only attempt a rollback of up to two previous txgs.
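This is what the pool-recovery import mode leans on; on builds that have it, something like ('tank' is a placeholder pool name):

  zpool import -F tank      # discard the last few txgs if the newest ones are damaged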
- Eric
--
Eric Schrock, Fishworks
spare, and that spare may not have the same
RAS properties as other devices in your RAID-Z stripe (it may put 3 disks on
the same controller in one stripe, for example).
- Eric
On Fri, Mar 4, 2011 at 7:06 AM, Roy Sigurd Karlsbakk wrote:
> Hi all
>
> I just did a small test on RAIDz2 to
s send/recv to move the datasets, so
your mountpoints and other properties will be preserved.
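Roughly (pool, dataset, and snapshot names are placeholders):

  zfs snapshot -r tank/data@move
  zfs send -R tank/data@move | zfs recv -d newpool

The -R replication stream carries the whole dataset tree along with its properties, which is what preserves the mountpoints.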
Eric
On Wed, Jun 1, 2011 at 3:47 PM, Matt Harrison
wrote:
> Thanks Eric, however seeing as I can't have two pools named 'tank', I'll
> have to name the new one something else. I believe I will be able to rename
> it afterwards, but I just wanted to check first. I
may
internally reorder and/or aggregate those writes before sending them
to the platter.
Eric
re, so upcoming OI releases will be in better
sync. OI 147 still had man pages from the original OpenSolaris docs
consolidation.
Sorry I can't answer about zfs diff directly-- haven't used that feature yet.
Eric
e should be "refcompressratio" as the long name and
"refratio" as the short name would make sense, as that matches
"compressratio". Matt?
- Eric
On Mon, Jun 6, 2011 at 7:08 PM, Haudy Kazemi wrote:
> On 6/6/2011 5:02 PM, Richard Elling wrote:
>
>> On Jun
Webrev has been updated:
http://dev1.illumos.org/~eschrock/cr/zfs-refratio/
- Eric
--
Eric Schrock
Delphix
275 Middlefield Road, Suite 50
Menlo Park, CA 94025
http://www.delphix.com
Good catch. For consistency, I updated the property description to match
"compressratio" exactly.
- Eric
On Mon, Jun 6, 2011 at 9:39 PM, Mark Musante wrote:
>
> minor quibble: compressratio uses a lowercase x for the description text
> whereas the new prop uses an upperc
dor math". :) Then I
do $NSEC/2097152 to get GB (assuming 512-byte sectors).
ZFS reserves 1/64 of the pool size to protect copy-on-write as the
pool approaches being full. After you make your usable space
calculation, subtract 1/64 of that (total*.016) and that should be
very close to the av
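A worked example of that arithmetic, using a made-up sector count:

  echo "19532873728 / 2097152" | bc -l    # = 9314 GB raw (512-byte sectors)
  echo "9314 * (1 - 1/64)" | bc -l        # ~9168.5 GB after the 1/64 reserve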
also a nice fit for the typical 8-port SAS HBA.
Eric
s before
another write comes along that would occupy them anew.
I'm contemplating a similar setup for some servers, so I'm interested
if other people have been operating pure-SSD zpools and what their
experiences have been.
Eric
On Tue, Jul 12, 2011 at 1:06 AM, Brandon High wrote:
> On Mon, Jul 11, 2011 at 7:03 AM, Eric Sproul wrote:
>> Interesting-- what is the suspected impact of not having TRIM support?
>
> There shouldn't be much, since zfs isn't changing data in place. Any
> drive with r
r that explanation. So finding drives that keep more
space in reserve is key to getting consistent performance under ZFS.
Eric
hey did for the
1068e and others. As long as you don't configure any RAID volumes,
the card will attach to the non-RAID mpt_sas driver in Solaris and
you'll be all set.
Eric
y broken down by mountpoint:
fsstat -i `mount | awk '{if($3 ~ /^[^\/:]+\//) {print $1;}}'` 1
Of course this only works for POSIX filesystems. This won't catch
activity to zvols. Maybe that won't matter in your case.
Eric
on if you are using whole disks and a
driver with static device paths (such as sata).
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
> destination file system and all of its child file systems are unmounted
> and cannot be accessed during the receive operation.
Actually we don't unmount the file systems anymore for incremental
send/recv, see:
6425096 want online 'zfs rec
I've filed specifically for ZFS:
6735425 some places where 64bit values are being incorrectly accessed
on 32bit processors
eric
On Aug 6, 2008, at 1:59 PM, Brian D. Horn wrote:
> In the most recent code base (both OpenSolaris/Nevada and S10Ux with
> patches)
> all the kno
do you mean by "internal data structures"? Are you referring to
things like space maps, props, history obj, etc. (basically anything
other than user data and the indirect blocks that point to user data)?
eric
ZFS pool
Ugly workaround is to purposely reboot the original host.
And you will want:
6282725 hostname/hostid should be stored in the label
http://blogs.sun.com/erickustarz/en_US/entry/poor_man_s_cluster_end
which will be in s10u6.
eric
thing completely non-sensical. If you do a 'zpool scrub', does it
complete without any errors?
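That is, something like ('tank' is a placeholder pool name):

  zpool scrub tank
  zpool status -v tank      # once it finishes, check for READ/WRITE/CKSUM errors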
- Eric
On Fri, Aug 15, 2008 at 01:48:48PM -0700, Nils Goroll wrote:
> Hi,
>
> I thought that this question must have been answered already, but I have
> not found any explanations. I
On Fri, Aug 15, 2008 at 02:14:02PM -0700, Eric Schrock wrote:
> The fact that it's DEGRADED and not FAULTED indicates that it thinks the
> DTL (dirty time logs) for the two sides of the mirrors overlap in some
> way, so detaching it would result in loss of data. In the process of
n't you test this right now?
You could generate a similar workload using FileBench:
http://www.solarisinternals.com/wiki/index.php/FileBench
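A rough sketch of one way to drive it - the target directory and run length are arbitrary, and the exact syntax varies a bit between FileBench releases:

  filebench
  filebench> load varmail
  filebench> set $dir=/tank/fbtest
  filebench> run 60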
eric
another set of tunables is not practical. It will be interesting to
see if this is an issue after the retry logic is modified as described
above.
Hope that helps,
- Eric
On Thu, Aug 28, 2008 at 01:08:26AM -0700, Ross wrote:
> Since somebody else has just posted about their entire system locking up w
uation really poorly.
I don't think you understand how this works. Imagine two I/Os, just
with different sd timeouts and retry logic - that's B_FAILFAST. It's
quite simple, and independent of any hardware implementation.
- Eric
--
Eric Schrock, Fishworks
ny such "best effort RAS" is a little dicey because
you have very little visibility into the state of the pool in this
scenario - "is my data protected?" becomes a very difficult question to
answer.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
You should be able to do 'zpool status -x' to find out what vdev is
broken. A useful extension to the DE would be to add a label to the
suspect corresponding to /.
- Eric
On Thu, Sep 04, 2008 at 06:34:33PM +0200, Alain Chéreau wrote:
> Hi all,
>
> ZFS send a message to
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
A better solution (one that wouldn't break backwards compatability)
would be to add the '-p' option (parseable output) from 'zfs get' to the
'zfs list' command as well.
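Until then, the parseable path is 'zfs get', e.g. (dataset name is a placeholder):

  zfs get -Hp used,available tank/home    # -H drops headers, -p prints raw byte values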
- Eric
On Wed, Oct 01, 2008 at 03:59:27PM +1000, David Gwynne wrote:
> as the topic
attach was required.
-Eric
On Fri, 3 Oct 2008, [EMAIL PROTECTED] wrote:
> Eric Boutilier wrote:
>> Is the following issue related to (will probably get fixed by) bug 6748133?
>> ...
>>
>> During a net-install of b96, I modified the name of the root pool,
>> overriding the default name, r
probably happened to you.
FYI, this is bug 6667208 fixed in build 100 of nevada.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
iting ZFS
pools[1]. But I haven't actually heard a reasonable proposal for what a
fsck-like tool (i.e. one that could "repair" things automatically) would
actually *do*, let alone how it would work in the variety of situations
it needs to (compressed RAID-Z?) where the standard ZFS i
These are the symptoms of a shrinking device in a RAID-Z pool. You can
try to run the attached script during the import to see if this the
case. There's a bug filed on this, but I don't have it handy.
- Eric
On Sun, Oct 26, 2008 at 05:18:25PM -0700, Terry Heatlie wrote:
> Folk
set
locally ('zfs get -s local ...').
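Clearing a locally set property is done with 'zfs inherit'; roughly (the property name below is just an example):

  zfs get -s local all rpool/export/home/luca/src    # list what is set locally
  zfs inherit quota rpool/export/home/luca/src       # drop the local value and inherit again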
- Eric
On Mon, Nov 03, 2008 at 08:35:22AM -0500, Mark J Musante wrote:
> On Mon, 3 Nov 2008, Luca Morettoni wrote:
>
> > now I need to *clear* (remove) the property from
> > rpool/export/home/luca/src filesystem, but if I use the "
http://blogs.sun.com/fishworks
There will be much more information throughout the day and in the coming
weeks. If you want to give it a spin, be sure to check out the freely
available VM images.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com
configured in an implementation-defined way for
the software to function correctly.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
ed config)
so that we can mirror/RAID across them. Even without NSPF, we have
redundant cables, HBAs, power supplies, and controllers, so this is only
required if you are worried about disk backplane failure (a very rare
failure mode).
Can you point to the literature that suggests this is
has gotten the
> best of me, so I apologize. Feel free to correct as you see fit.
I can update the blog entry if it's misleading. I assumed that it was
implicit that the absence of the above (missing or broken disks) meant
supported, but I admit that I did not state that explicitl
ilure ereport). So ZFS pre-emptively short circuits
all I/O and treats the drive as faulted, even though the diagnosis
hasn't come back yet. We can only do this for errors that have a 1:1
correspondence with faults.
- Eric
On Tue, Nov 25, 2008 at 04:10:13PM +0000, Ross Smith wrote:
> I
ions on LIB1 for 777, and created a test subfolder that I
have applied permissions through Windows XP. Windows complained about
reordering the permissions when I first set them, and now doesn't complain when
opening the security tab, so I assume they're ordered correctly.
[EMAIL PROTECTED]:/po
ck into new behavior that should
provide a much improved experience.
- Eric
P.S. I'm also not sure that B_FAILFAST behaves in the way you think it
does. My reading of sd.c seems to imply that much of what you
suggest is actually how it currently behaves, but you should
probably
Well, there's the problem...
#id -a tom
uid=15669(tom) gid=15004(domain users) groups=15004(domain users)
#
wbinfo -r shows the full list of groups, but id -a only lists "domain users".
Since I'm trying to restrict permissions on other groups, my access denied
error message makes more sense.
Can you send the output of the attached D script when running 'zpool
status'?
- Eric
On Thu, Dec 04, 2008 at 02:58:54PM -0800, Brett wrote:
> As a result of a power spike during a thunder storm I lost a sata controller
> card. This card supported my zfs pool called newsan which
Well it shows that you're not suffering from a known bug. The symptoms
you were describing were the same as those seen when a device
spontaneously shrinks within a raid-z vdev. But it looks like the sizes
are the same ("config asize" = "asize"), so I'm at a loss.
-
What software are you running? There was a bug where offline device
failure did not trigger hot spares, but that should be fixed now (at
least in OpenSolaris, not sure about s10u6).
- Eric
On Wed, Jan 21, 2009 at 09:57:42AM +1100, Nathan Kroenert wrote:
> An interesting interpretation of us
Hi There,
One of my partners asked the question w.r.t. Disk Pool overhead for the
7000 series.
Adam Leventhal said that it was very small (1/64); see below.
Do we have any further info regarding this?
Thanks,
-eric :)
Original Message
Subject: Re: [Fwd: RE: Disk
Note that:
6501037 want user/group quotas on ZFS
Is already committed to be fixed in build 113 (i.e. in the next month).
- Eric
On Thu, Mar 12, 2009 at 12:04:04PM +0900, Jorgen Lundman wrote:
>
> In the style of a discussion over a beverage, and talking about
> user-quotas
tinue without
committed data.
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
Ds - it doesn't matter if reads are fast for slogs.
With the txg being a working set of the active commit, so might be a
set of NFS iops?
If the NFS ops are synchronous, then yes. Async operations do not use
the ZIL and therefore don't have anything to do with slogs.
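For completeness, if you do want synchronous traffic to land on a dedicated fast device, the slog is attached with something like (pool and device names are placeholders):

  zpool add tank log c4t0d0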
- Eric
-
open. A failed slog device can
prevent such a pool from being imported.
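Later builds added a workaround for exactly this case; where it's supported, something like ('tank' is a placeholder pool name):

  zpool import -m tank      # import even though a log device is missing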
- Eric
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock