We have some HDS storage that isn't supported by MPxIO, so we have to use
Veritas DMP to get multipathing.
What's the recommended way to use DMP storage with ZFS? I want to use DMP, but
get at the multipathed virtual LUNs at as low a level as possible, so that I
avoid using VxVM as much as possible.
I f
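For what it's worth, a minimal sketch of the sort of thing I mean, assuming the
DMP metanodes show up under /dev/vx/dmp on the box (the device names below are
made up for illustration, not taken from a real config):

    # list the DMP metanodes that VxVM has built for the HDS LUNs
    ls /dev/vx/dmp

    # build the pool directly on the DMP devices, bypassing VxVM volumes
    zpool create tank mirror /dev/vx/dmp/c2t0d0s2 /dev/vx/dmp/c3t0d0s2
    zpool status tank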
On Jan 2, 2007, at 11:14, Richard Elling wrote:
Don't dispense with proper backups or you will be unhappy. One
of my New Years resolutions is to campaign against unhappiness.
So I would encourage you to explore ways to backup such large
data stores in a timely and economical way.
The Sun Sto
Hi all,
I have an interesting one: I have a disk wedged in a zpool.
It can be seen from format, and an analyze runs fine, but ZFS has it
marked as unavailable. (Yes, the disk is probably flaky... I don't get my
hands on the good disks.)
The system is 5.11 snv_55 sun4u sparc SUNW,
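The obvious things I have been trying, with placeholder pool and device names
rather than the real ones from this box:

    # confirm which device the pool thinks is unavailable
    zpool status -v mypool

    # try to bring the device back online and clear the error counters
    zpool online mypool c1t3d0
    zpool clear mypool

    # if it stays faulted, resilver onto itself (or onto a spare disk)
    zpool replace mypool c1t3d0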
Dennis Clarke wrote:
Another thing to keep an eye out for is disk caching. With ZFS,
whenever the NFS server tells us to make sure something is on disk, we
actually make sure it's on disk by asking the drive to flush dirty data
in its write cache out to the media. Needless to say, this takes a
> Another thing to keep an eye out for is disk caching. With ZFS,
> whenever the NFS server tells us to make sure something is on disk, we
> actually make sure it's on disk by asking the drive to flush dirty data
> in its write cache out to the media. Needless to say, this takes a
> while.
>
> W
Another thing to keep an eye out for is disk caching. With ZFS,
whenever the NFS server tells us to make sure something is on disk, we
actually make sure it's on disk by asking the drive to flush dirty data
in its write cache out to the media. Needless to say, this takes a
while.
With UFS, it is
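A crude way to see this effect is to create a pile of small files on the ZFS
filesystem directly on the server and then again through an NFS mount and
compare the times; the path names and file count below are arbitrary:

    # on the server, directly on the ZFS filesystem
    cd /tank/test
    time sh -c 'i=0; while [ $i -lt 2000 ]; do echo data > f.$i; i=`expr $i + 1`; done'

    # on an NFS client, same filesystem mounted over NFS
    cd /net/server/tank/test
    time sh -c 'i=0; while [ $i -lt 2000 ]; do echo data > f.$i; i=`expr $i + 1`; done'

Each file creation over NFS has to be committed to stable storage before the
server replies, which is where the cache-flush cost shows up.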
Brad Plecs wrote:
I had a user report extreme slowness on a ZFS filesystem mounted over NFS over the weekend.
After some extensive testing, the extreme slowness appears to only occur when a ZFS filesystem is mounted over NFS.
One example is doing a 'gtar xzvf php-5.2.0.tar.gz'... over NFS onto
Ah, thanks -- reading that thread did a good job of explaining what I was
seeing. I was going
nuts trying to isolate the problem.
Is work being done to improve this performance? 100% of my users are coming in
over NFS,
and that's a huge hit. Even on single large files, writes are slower by a
You may want to check some of the past postings to this list, as I believe
what you are seeing has already been discussed. If I remember
correctly this is a "feature" of zfs and is designed to protect the
integrity of the data on the zfs file system.
On 1/2/07, Willem van Schaik <[EMAIL PR
Hi Brad,
I believe benr experienced the same/similar issue here:
http://www.opensolaris.org/jive/message.jspa?messageID=77347
If it is the same, I believe it's a known ZFS/NFS interaction bug, and
has to do with small file creation.
Best Regards,
Jason
On 1/2/07, Brad Plecs <[EMAIL PROTECTED]>
I had a user report extreme slowness on a ZFS filesystem mounted over NFS over
the weekend.
After some extensive testing, the extreme slowness appears to only occur when a
ZFS filesystem is mounted over NFS.
One example is doing a 'gtar xzvf php-5.2.0.tar.gz'... over NFS onto a ZFS
filesyste
Played a bit over Xmas with ZFS on mirrored USB sticks, which was fun.
When I pulled both sticks without doing any unmount, the filesystem
seemed to still be there, all in cache of course. I could even open a
file in vi, but when I then tried to save the file, I had expected a
failure, but
james hughes wrote:
This is intended as a defense-in-depth measure, and also a sufficiently
good measure for the customers that don't need full compliance with
NIST-like requirements that need degaussing or physical destruction.
Govt, finance, healthcare all require the NIST overwrite...
Jim,
Sorry, a few corrections and inserts..
>
> which is not the behavior I am seeing. If I have 100 snaps of a
> filesystem that have relatively low delta churn and then delete half of the
> data out there, I would expect to see that space go up in the used column
> for one of the snaps (in my test
I am bringing this up again in the hope that more eyes may be on the list
now than before the holidays..
The zfs man page lists the used column as:
used
The amount of space consumed by this dataset and all its
descendants. This is the value that is checked against
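For reference, this is how I have been watching the accounting, with made-up
dataset names (and noting that a block freed from the live filesystem only
shows up in a snapshot's USED column once that snapshot is the only thing
still referencing it):

    # list every snapshot of the filesystem with its used and referenced space
    zfs list -r -t snapshot -o name,used,referenced tank/fs

    # delete a chunk of data from the live filesystem ...
    rm -rf /tank/fs/olddata

    # ... and compare; space still shared by several snapshots is not
    # charged to any single snapshot's USED value
    zfs list -r -t snapshot -o name,used,referenced tank/fs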
Anders Troberg wrote:
What I want:
* Software RAID support, even across the network, so I can just
add a bunch of parity disks and survive if a few disks crash.
To me, it's well worth it to pony up with the money for 5-10
extra disks if I know that that many disks can fail before I
It seems that creating a clone always requires taking a snapshot first and
providing it as a parameter to the zfs clone command.
Now, wouldn't it be a more natural usage if, when I intend to create a clone,
the zfs clone command would by default create the needed snapshot from the
current image
int
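For comparison, the two-step dance as it works today, with example dataset
names of my own invention:

    # a clone must be based on a snapshot, so take one explicitly first
    zfs snapshot tank/ws@for-clone
    zfs clone tank/ws@for-clone tank/ws-clone

    # later, if the clone should outlive (and own) its origin snapshot
    zfs promote tank/ws-clone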
On 1/2/07, Anders Troberg <[EMAIL PROTECTED]> wrote:
I've ploughed through the documentation, but it's kind of vague on some
points and I need to buy some hardware if I'm to test it, so I thought I'd
ask first. I'll begin by describing what I want to achieve, and would
appreciate it if someone could
I've ploughed through the documentation, but it's kind of vague on some points
and I need to buy some hardware if I'm to test it, so I thought I'd ask first.
I'll begin by describing what I want to achieve, and would appreciate it if someone
could tell me if this is possible or how close I can come.
Darren Reed wrote:
Darren J Moffat wrote:
...
Of course. I didn't mention it because I thought it was obvious but
this would NOT break the COW or the transactional integrity of ZFS.
One of the possible ways that the "to be bleached" blocks are dealt
with in the face of a crash is just like
Darren J Moffat wrote:
...
Of course. I didn't mention it because I thought it was obvious but
this would NOT break the COW or the transactional integrity of ZFS.
One of the possible ways that the "to be bleached" blocks are dealt
with in the face of a crash is just like everything else - t
David Bustos wrote:
Quoth Darren J Moffat on Thu, Dec 21, 2006 at 03:31:59PM +:
Pawel Jakub Dawidek wrote:
I like the idea, I really do, but it will be so expensive because of
ZFS' COW model. Not only will file removal or truncation trigger bleaching,
but every single file system modificati
Dmitry Mozheyko wrote:
Hello all.
Where can I find information about the differences between the fletcher2,
fletcher4, and sha256 algorithms for ZFS checksums?
Start here: http://en.wikipedia.org/wiki/Checksum
Then read up on Fletcher at this link:
http://en.wikipedia.org/wiki/Fletcher's_
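To see which algorithm a dataset is using, or to switch it, something along
these lines works (the dataset name is just an example, and only blocks
written after the change use the new checksum):

    zfs get checksum tank/fs
    zfs set checksum=sha256 tank/fs    # accepts fletcher2, fletcher4, sha256, on, off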
On Sat, 30 Dec 2006 18:13:04 +0100, <[EMAIL PROTECTED]> wrote:
I think removing the ability to use link(2) or unlink(2) on directories
would hurt no-one and would make a few things easier.
I'd be rather careful here; see the standards implications drafted in
4917742.
The standard gives perm