Matthew Ahrens:
>> does this mean that without an account on the NFS server, a user cannot
>> see his current disk use / quota?
> That's correct.
in this case, might i suggest at least an RFE to add ZFS quota support to
rquotad? i'm sure we aren
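the idea being that a user on an NFS client could then check usage the normal
way, e.g.:
  $ quota -v
(hypothetical, of course: this would only work once rquotad learned to report
ZFS userquota/userused values.)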
Matthew Ahrens:
> ZFS user quotas (like other zfs properties) will not be accessible over NFS;
> you must be on the machine running zfs to manipulate them.
does this mean that without an account on the NFS server, a user cannot see his
current disk use / quota?
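for reference, on the server itself this would be something like (user and
dataset names are placeholders):
  # zfs set userquota@alice=10G tank/home
  # zfs get userused@alice,userquota@alice tank/home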
Bob Friesenhahn:
> Please make sure to read my write-up of how I configured a StorageTek
> 2540 array (FC version of same hardware) for use with ZFS. It can be
> found at
> "http://www.simplesystems.org/users/bfriesen/zfs-discuss/2540-zfs-performan
hi,
i have a system connected to a StorageTek 2530 SAS array (12 disks), on which i
want to run ZFS. in the past, when using ZFS on an external array, i would
simply create one LUN on the array and create the ZFS pool on this; but for
various well-do
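e.g., with each of the 12 disks exported as its own LUN, the pool might look
something like this (device names are placeholders):
  # zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0 \
        mirror c2t4d0 c2t5d0 mirror c2t6d0 c2t7d0 \
        mirror c2t8d0 c2t9d0 spare c2t10d0 c2t11d0
so that ZFS handles redundancy itself rather than layering on the array's RAID.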
Brent Jones:
> My results are much improved, on the order of 5-100 times faster
> (either over Mbuffer or SSH).
this is good news - although not quite soon enough for my current 5TB zfs send
;-)
have you tested if this also improves the performance
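(for reference, the socket-mode mbuffer pipeline i'd expect to use, with host
and dataset names as placeholders:
  on the receiver:  mbuffer -I 9090 -s 128k -m 1G | zfs recv tank/fs
  on the sender:    zfs send tank/fs@snap | mbuffer -O recvhost:9090 -s 128k -m 1G
)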
hi,
i have a system connected to an external DAS (SCSI) array, using ZFS. the
array has an nvram write cache, but it honours SCSI cache flush commands by
flushing the nvram to disk. the array has no way to disable this behaviour. a
well-known behav
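(the usual workaround, assuming the cache really is nonvolatile, is to stop
ZFS issuing flushes at all, e.g.:
  set zfs:zfs_nocacheflush = 1        (in /etc/system)
or, on a running system:
  echo zfs_nocacheflush/W0t1 | mdb -kw
but that is global to the host, affecting every pool, not just this array.)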
Miles Nordin:
> rt> currently i crontab zfs send | zfs recv for this
> My point is that I don't think the 10min delay is the most significant
> difference between AVS/snapmirror and a 'zfs send' cronjob.
i didn't intend to suggest there was any si
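(for reference, the cronjob amounts to something like this, run every 10
minutes; names are hypothetical, snapshot rotation and error handling omitted:
  zfs snapshot tank/fs@new
  zfs send -i tank/fs@prev tank/fs@new | ssh backuphost zfs recv -F tank/fs
)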
Daryl Doami:
> As an aside, replication has been implemented as part of the new Storage
> 7000 family. Here's a link to a blog discussing using the 7000
> Simulator running in two separate VMs and replicating w/ each other:
that's interesting, alth
Brent Jones:
> It sounds like you need either a true clustering file system or to draw back
> your plans to see changes read-only instantly on the secondary node.
well, the idea is to have two separate copies of the data, for backup / DR.
being able t
Ian Collins:
> I doubt zfs receive would be able to keep pace with any non-trivial update
> rate.
one could consider this a bug in zfs receive :)
> Mirroring iSCSI or a dedicated HA tool would be a better solution.
i'm not sure how to apply iSCSI h
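(if i understand the suggestion: export a zvol from the secondary over iSCSI
and mirror it against local storage on the primary, roughly, with all names
hypothetical:
  on B:  zfs create -V 500g tank/mirrorvol
         zfs set shareiscsi=on tank/mirrorvol
  on A:  (configure the iSCSI initiator with iscsiadm, then)
         zpool create dtank mirror c1t0d0 <iscsi device>
but B still can't import that pool while A has it open, so it doesn't give
instant read-only access on the secondary either.)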
hi,
are there any RFEs or plans to create a 'continuous' replication mode for ZFS?
i envisage it working something like this: a 'zfs send' on the sending host
monitors the pool/filesystem for changes, and immediately sends them to the
receiving host,
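(pending such a feature, the closest approximation i can see is a tight loop
of incremental sends, roughly as follows; this assumes tank/fs@c0 has already
been sent in full, and all names are hypothetical:
  i=0
  while true; do
        zfs snapshot tank/fs@c$((i+1))
        zfs send -i tank/fs@c$i tank/fs@c$((i+1)) | \
            ssh recvhost zfs recv tank/fs
        zfs destroy tank/fs@c$i
        i=$((i+1))
  done
each iteration still pays the full snapshot-plus-send setup cost, which is
exactly what a true continuous mode would avoid.)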
Ian Collins:
> [EMAIL PROTECTED] # zpool upgrade
> This system is currently running ZFS pool version 10.
Update 6 introduced a new feature from Nevada: ZFS *filesystem* versions (as
opposed to pool versions):
[EMAIL PROTECTED]:~# zpool upgrade
This system is currently running ZFS pool version 10.
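the corresponding new command is 'zfs upgrade', which reports something like
(the filesystem version depends on the release):
  # zfs upgrade
  This system is currently running ZFS filesystem version 3.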
Andrew Gabriel:
> This is quite easily worked around by putting a buffering program
> between the network and the zfs receive.
i tested inserting mbuffer with a 250MB buffer between the zfs send and zfs
recv. unfortunately, it seems to make very lit
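(for concreteness, the pipeline was presumably of this shape, with names as
placeholders:
  zfs send -i tank/fs@a tank/fs@b | mbuffer -s 128k -m 250M | \
      ssh B zfs recv tank/fs
a buffer on the receiving side of the ssh hop might behave differently.)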
Ian Collins:
> That's very slow. What's the nature of your data?
mainly two sets of mid-sized files; one of 200KB-2MB in size and the other
under 50KB. they are organised into subdirectories, A/B/C/. each directory
has 18,000-25,000 files. total data
hi,
i have two systems, A (Solaris 10 update 5) and B (Solaris 10 update 6). i'm
using 'zfs send -i' to replicate changes on A to B. however, the 'zfs recv' on
B is running extremely slowly. if i run the zfs send on A and redirect output
to a file,
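(i.e. something like this to time the two halves separately; snapshot and
path names are placeholders:
  on A:  time zfs send -i tank/fs@prev tank/fs@cur > /var/tmp/incr
  on B:  time zfs recv -F tank/fs < /var/tmp/incr    (after copying the file over)
)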
hi,
i have an X4500 running Solaris 10 Update 5 (with all current patches). it has
a stripe-mirror ZFS pool over 44 disks with 2 hot spares. the system is
entirely idle, except that every 60 seconds, a 'zfs recv' is run. a couple of
days ago, while