On Fri, Joe Little wrote:
> Thanks. I'm playing with it now, trying to get the most succinct test.
> This is one thing that bothers me: Regardless of the backend, it
> appears that a delete of a large tree (say the linux kernel) over NFS
> takes forever, but it's immediate when doing so locally. Is
Here's some sample output. Where I write over NFS to ZFS (no
iscsi) I get high sizes for i/o:
UID PID D BLOCK SIZE COMM PATHNAME
1 427 W 22416320 4096 nfsd
1 427 W 22416328 4096 nfsd
1 427 W 22416336 4096 nfsd
1 427 W 22416344 409
Thanks. I'm playing with it now, trying to get the most succinct test.
This is one thing that bothers me: Regardless of the backend, it
appears that a delete of a large tree (say the linux kernel) over NFS
takes forever, but it's immediate when doing so locally. Does a delete over
NFS really take such
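A minimal way to time the two cases (the paths and mount points below are only
illustrative, not the ones actually used) would be something like:
# time rm -rf /mnt/linux-2.6.16      (on the NFS client)
# time rm -rf /tank/linux-2.6.16     (locally on the server)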
These may help:
http://opensolaris.org/os/community/dtrace/scripts/
Check out iosnoop.d
http://www.solarisinternals.com/si/dtrace/index.php
Check out iotrace.d
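As a quick starting point before reaching for those scripts, a one-liner on the
DTrace io provider (a rough sketch, not taken from either script) shows each
block I/O issued on behalf of nfsd, in the same D/BLOCK/SIZE form as the sample
output above:
# dtrace -n 'io:::start /execname == "nfsd"/ {
    printf("%s %d %d", args[0]->b_flags & B_READ ? "R" : "W",
        args[0]->b_blkno, args[0]->b_bcount); }'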
- Lisa
Joe Little wrote On 05/05/06 18:59,:
Are there known i/o or iscsi dtrace scripts available?
On 5/5/06, Spencer Shepler
Are there known i/o or iscsi dtrace scripts available?
On 5/5/06, Spencer Shepler <[EMAIL PROTECTED]> wrote:
On Fri, Joe Little wrote:
> On 5/5/06, Eric Schrock <[EMAIL PROTECTED]> wrote:
> >On Fri, May 05, 2006 at 03:46:08PM -0700, Joe Little wrote:
> >> Thanks for the tip. In the local case, I
On Fri, Joe Little wrote:
> On 5/5/06, Eric Schrock <[EMAIL PROTECTED]> wrote:
> >On Fri, May 05, 2006 at 03:46:08PM -0700, Joe Little wrote:
> >> Thanks for the tip. In the local case, I could send to the
> >> iSCSI-backed ZFS RAIDZ at even faster rates, with a total elapsed time
> >> of 50 seconds
And of course, just to circle back, an rsync via ssh from the client
to the Solaris ZFS/iscsi server came in at 17.5MB/sec, taking 1 minute
16 seconds, or about 20% longer. So, NFS (over TCP) is 1.4k/s, and
encrypted ssh is 17.5MB/sec following the same network path.
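For reference, the rsync-over-ssh invocation being compared here is of the
general form (host and path names are illustrative, not the actual ones used):
# rsync -a -e ssh /export/data/ server:/tank/data/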
On 5/5/06, Joe Little <[EMAIL
On 5/5/06, Eric Schrock <[EMAIL PROTECTED]> wrote:
On Fri, May 05, 2006 at 03:46:08PM -0700, Joe Little wrote:
> Thanks for the tip. In the local case, I could send to the
> iSCSI-backed ZFS RAIDZ at even faster rates, with a total elapsed time
> of 50 seconds (17 seconds better than UFS). However
On Fri, May 05, 2006 at 03:46:08PM -0700, Joe Little wrote:
> Thanks for the tip. In the local case, I could send to the
> iSCSI-backed ZFS RAIDZ at even faster rates, with a total elapsed time
> of 50 seconds (17 seconds better than UFS). However, I didn't even bother
> finishing the NFS client test,
Hmmm, this looks like a bug to me. The single argument form of 'zpool
replace' should do the trick. What has happened is that there is enough
information on the disk to identify it as belonging to 'tank', yet not
enough good data for it to be opened. Incidentally, could you send me the
contents of
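For reference, the single-argument form (pool and device names below are just
examples) is simply:
# zpool replace tank c1t2d0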
Thanks for the tip. In the local case, I could send to the
iSCSI-backed ZFS RAIDZ at even faster rates, with a total elapsed time
of 50 seconds (17 seconds better than UFS). However, I didn't even bother
finishing the NFS client test, since it was taking a few seconds
between multiple 27K files. So,
I have a raidz pool which looks like this after a disk failure:
# zpool status
  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid. Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
a
On Fri, Joe Little wrote:
> well, it was already an NFS-discuss list message. Someone else added
> dtrace-discuss to it. I have already noted this to a degree on zfs-discuss,
> but it seems to be mainly a NFS specific issue at this stage.
So I took your original data you posted and reformatted it
My gut feeling is that somehow the DKIOCFLUSHWRITECACHE ioctls (which
translate to the SCSI flush write cache requests) are throwing iSCSI for
a loop. We've exposed a number of bugs in our drivers because ZFS is
the first filesystem to actually care to issue this request.
To turn this off, you can
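One way to see how often those flushes are actually being issued (a rough
sketch; this assumes they go through the in-kernel ldi_ioctl() path, and the
cmd values printed can be matched against /usr/include/sys/dkio.h) is:
# dtrace -n 'fbt::ldi_ioctl:entry { @[arg1] = count(); }'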
I just did another test, this time using a linux NFS client against B38
with UFS and iscsi disks. It was close to the same speed (over 8MB/sec
average) as going to UFS on local disk or ZFS on local disk (around
20MB/sec). My UFS formatted iscsi disk was only a single iscsi disk and
not like the RAID
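For reference, the Linux client side of a test like this is typically just a
plain NFSv3-over-TCP mount; server name and paths below are illustrative:
# mount -t nfs -o nfsvers=3,tcp server:/tank/test /mnt/test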
well, it was already an NFS-discuss list message. Someone else added
dtrace-discuss to it. I have already noted this to a degree on
zfs-discuss, but it seems to be mainly a NFS specific issue at this
stage.
On 5/5/06, Spencer Shepler <[EMAIL PROTECTED]> wrote:
On Fri, Joe Little wrote:
> On 5/5/06,
On Fri, Joe Little wrote:
> On 5/5/06, Spencer Shepler <[EMAIL PROTECTED]> wrote:
> >On Fri, Joe Little wrote:
> >> Well, I used the dtrace script used here. The NFS implementation
> >> (server) is Solaris 11 B38, and the client is the RHEL linux
> >> revision, which doesn't have this problem goin
On Fri, 5 May 2006, Marion Hakanson wrote:
> Interesting discussion. I've often been impressed at how NetApp-like
> the overall ZFS feature-set is (implies that I like NetApp's). Is it
> verboten to compare ZFS to NetApp? I hope not
Of course not. And if "Thumper" is similar to the rumoure
On 5/5/06, Constantin Gonzalez <[EMAIL PROTECTED]> wrote:
Hi,
(apologies if this has been discussed before, I hope not)
while setting up a script at home to do automatic snapshots, a number of
wishes popped into my mind:
The basic problem with regular snapshotting is that you end up managing
s
On Fri, May 05, 2006 at 09:43:05AM -0700, Marion Hakanson wrote:
> Interesting discussion. I've often been impressed at how NetApp-like
> the overall ZFS feature-set is (implies that I like NetApp's). Is it
> verboten to compare ZFS to NetApp? I hope not
It's a public list, you can do the co
I agree, and I am not saying what you do is wrong; I was just expressing my opinion,
after what happened today on my system, that it would be an excellent idea to
have such an extra feature in ZFS...
Regards,
Chris
On Fri, 5 May 2006, Darren J Moffat wrote:
Krzys wrote:
Maybe there could be a flag
I really do like the way NetApp is handling snaps :) that would be an excellent
thing in ZFS :)
On Fri, 5 May 2006, Marion Hakanson wrote:
Interesting discussion. I've often been impressed at how NetApp-like
the overall ZFS feature-set is (implies that I like NetApp's). Is it
verboten to compar
Krzys wrote:
Maybe there could be a flag for certain snaps where it could be made
read only?!? But I don't know how this could be implemented and I do not
think that would be possible... Anyway I still think that if I had a
production system with those snaps I would rather remove that "golden
i
Interesting discussion. I've often been impressed at how NetApp-like
the overall ZFS feature-set is (implies that I like NetApp's). Is it
verboten to compare ZFS to NetApp? I hope not
NetApp has two ways of making snapshots. There is a set of automatic
snapshots, which are created, rotate a
Maybe there could be a flag for certain snaps where it could be made read
only?!? But I don't know how this could be implemented and I do not think that
would be possible... Anyway I still think that if I had a production system with
those snaps I would rather remove that "golden image" and conti
Krzys wrote:
I did not think of it this way and it is a very valid point, but I still
think that most likely you would have a backup already on tape if need
be, and having space available for writing rather than having no disk
space for live data is much more important than a snap, but that's m
I did not think of it this way and it is a very valid point, but I still think
that most likely you would have a backup already on tape if need be, and having
space available for writing rather than having no disk space for live data is
much more important than a snap, but that's my opinion. I t
Krzys wrote:
It would also be nice that if you have many snapshots and you run
out of space, the oldest snapshot would be automatically removed
until space is freed up. I did set up this snapshot that is being made
every minute, then every hour, day and a month, and I finally got to t
On Fri, 5 May 2006, Nicolas Williams wrote:
> On Fri, May 05, 2006 at 05:17:43PM +0200, Constantin Gonzalez wrote:
> > But you're right in that my desired functionality can "easily" be
> > implemented
> > with scripts. Then I would still argue for including this functionality as
> > part of the ZF
On Fri, May 05, 2006 at 05:17:43PM +0200, Constantin Gonzalez wrote:
> But you're right in that my desired functionality can "easily" be
> implemented
> with scripts. Then I would still argue for including this functionality as
> part of the ZFS user interface, because of ease of use and minimizat
Hi Wes,
Wes Williams wrote:
Interesting idea Constantin.
However, perhaps instead of or in addition to your idea, I'd like to have a
mechanism or script that would overwrite the older snapshots only if
some more current snapshot were created. Ideally this mechanism would
prevent your id
Hi Al,
1) But is this something that belongs in ZFS or is this a backup/restore
type tool that is simply a "user" of zfs?
...
Again - this looks like an operational backup/restore policy. Not a ZFS
function.
So the question is: Is advanced management of snapshots (aging, expiring,
etc.) s
On Fri, May 05, 2006 at 10:19:56AM +0200, Constantin Gonzalez wrote:
> (apologies if this was discussed before, I _did_ some research, but this
> one may have slipped for me...)
I'm in the process of writing a blog on this one. Give me another day
or so.
> Looking through the current Sun ZFS Tec
Interesting idea Constantin.
However, perhaps instead of or in addition to your idea, I'd like to have a
mechanism or script that would overwrite the older snapshots only if
some more current snapshot were created. Ideally this mechanism would prevent
your idea of expired snapshots bein
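A rough sketch of that rule as a script (the listing below covers all snapshots
on the system; in practice you would filter on the dataset being rotated) is to
refuse to destroy the oldest snapshot unless a newer one already exists:

  newest=`zfs list -H -t snapshot -o name -s creation | tail -1`
  oldest=`zfs list -H -t snapshot -o name -s creation | head -1`
  [ "$newest" != "$oldest" ] && zfs destroy "$oldest"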
On Fri, 5 May 2006, Constantin Gonzalez wrote:
> Hi,
>
> (apologies if this has been discussed before, I hope not)
>
> while setting up a script at home to do automatic snapshots, a number of
> wishes popped into my mind:
>
> The basic problem with regular snapshotting is that you end up managing
It would also be nice that if you have many snapshots and you run out of
space, the oldest snapshot would be automatically removed until space is
freed up. I did set up this snapshot that is being made every minute, then every
hour, day and a month, and I finally got to the point where
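A rough sketch of that policy as a script (the pool name and the 1GB threshold
are arbitrary examples; zfs get -p is assumed here to give the available space
in bytes) could look like:

  while [ `zfs get -Hp -o value available tank` -lt 1073741824 ]; do
      oldest=`zfs list -H -t snapshot -o name -s creation | head -1`
      [ -z "$oldest" ] && break
      zfs destroy "$oldest"
  done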
Hi,
(apologies if this has been discussed before, I hope not)
while setting up a script at home to do automatic snapshots, a number of
wishes popped into my mind:
The basic problem with regular snapshotting is that you end up managing
so many of them. Wouldn't it be nice if you could assign an
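As a concrete (if simplistic) sketch of the kind of script being described,
assuming a made-up dataset tank/home with date-stamped hourly snapshots kept 24
deep, a cron job could do:

  zfs snapshot tank/home@hourly-`date +%Y%m%d%H%M`
  count=`zfs list -H -t snapshot -o name | grep -c '^tank/home@hourly-'`
  while [ "$count" -gt 24 ]; do
      zfs destroy `zfs list -H -t snapshot -o name -s creation |
          grep '^tank/home@hourly-' | head -1`
      count=`expr $count - 1`
  done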
Scott Rotondo wrote:
Joseph Kowalski wrote:
This is just a request for elaboration/education. I find reason #1
compelling enough to accept your answer, but I really don't understand
reason #2. Why wouldn't the Solaris audit facility be correct here?
The Solaris audit facility will record a c
Hi,
(apologies if this was discussed before, I _did_ some research, but this
one may have slipped for me...)
Looking through the current Sun ZFS Technical presentation, I found a ZFS
feature that was new to me: Ditto Blocks.
In search of more information, I asked Google but there seem to be no