I just responded to the NFS list, and it definitely looks like a bad
interaction between NFS->ZFS->iSCSI, whereas the first two alone (ZFS on
local disk) or the last two (no ZFS) are very fast. Are there any posted
ZFS DTrace scripts for I/O observability?
On 5/4/06, Neil Perrin <[EMAIL PROTECTED]> wrote:
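As a generic starting point for I/O observability (this is just the stock
DTrace io provider, not a ZFS-specific script, and the aggregation keys are
arbitrary), something like the following sums bytes issued per process and
per device until you hit Ctrl-C:

    # dtrace -n 'io:::start { @bytes[execname, args[1]->dev_statname] = sum(args[0]->b_bcount); }'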
Nope. The ZFS head (iscsi initiator) is a Sun Ultra 20 Workstation.
The clients are RHEL4 quad opterons running the x86_64 kernel series.
On 5/4/06, Neil Perrin <[EMAIL PROTECTED]> wrote:
Actually the nfs slowness could be caused by the bug below,
but it doesn't explain the "find ." times on a local zfs.
Actually the nfs slowness could be caused by the bug below,
but it doesn't explain the "find ." times on a local zfs.
Neil Perrin wrote On 05/04/06 21:01,:
Was this a 32 bit intel system by chance?
If so this is quite likely caused by:
6413731 pathologically slower fsync on 32 bit systems
This was fixed in snv_39.
Was this a 32 bit intel system by chance?
If so this is quite likely caused by:
6413731 pathologically slower fsync on 32 bit systems
This was fixed in snv_39.
Joe Little wrote On 05/04/06 15:47,:
I've been writing to the Solaris NFS list since I was getting some bad
performance copying via NFS (noticeably there) a large set of small files.
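As a quick sanity check against 6413731, isainfo on the ZFS host reports
which kernel is booted; on a 64-bit x86 box it should print something like
"64-bit amd64 kernel modules":

    # isainfo -kv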
Joseph Kowalski wrote:
This is just a request for elaboration/education. I find reason #1
compelling enough to accept your answer, but I really don't understand
reason #2. Why wouldn't the Solaris audit facility be correct here?
The Solaris audit facility will record a command execution as so
I've been writing to the Solaris NFS list since I was getting some bad
performance copying via NFS (noticeably there) a large set of small
files. We have various source trees, including a tree with many linux
versions that I was copying to my ZFS NAS-to-be. On large files, it
flies pretty well, an
This is just a request for elaboration/education. I find reason #1
compelling enough to accept your answer, but I really don't understand
reason #2. Why wouldn't the Solaris audit facility be correct here?
(I suspect I'm about to have a Homer Simpson moment.)
- jek3
> From: Jeff Bonwick <[EMA
On Thu, 4 May 2006, James Dickens wrote:
> comparison of ZFS vs. Linux Raid and LVM
> http://unixconsult.org/zfs_vs_lvm.html
Interesting reading, although perhaps a column for UFS+SVM would be
useful?
> moving zfs filesystems using zfs back/restore commands
> http://uadmin.blogspot.com/2006/05/m
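For reference, the basic shape of that approach is a snapshot piped between
hosts; the pool, filesystem, and host names below are made up, and depending
on the build the subcommands are spelled backup/restore or send/receive:

    # zfs snapshot tank/src@move
    # zfs send tank/src@move | ssh otherhost zfs receive newpool/src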
On Thu, May 04, 2006 at 10:05:31AM -0400, Maury Markowitz wrote:
> Hmmm, where in 6.2 is the filename? I see the description of the
> znode_phys_t, which doesn't have it, and "Each directory holds a set
> of name-value pairs which contain the names and object numbers for
> each directory entry." I
I was under the impression that RAID-Z could also use disks of multiple sizes.
Is that correct? In other words, if I created a raidz pool with four disks of
80gb, 80gb, 160gb, 160gb, would I only get a useful pool of (4-1)*80gb = 240gb
or would I get (sum(80,80,160,160) - max(80,80,160,160)) =
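Working through the arithmetic of the two candidate expressions (without
asserting which one raidz actually uses): (4-1)*80gb = 240gb, while
sum(80,80,160,160) - max(80,80,160,160) = 480gb - 160gb = 320gb.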
Sorry guys, I have to take the blame for letting this slip. I have
been working with the VM folks on some comprehensive changes to the
way ZFS works with the VM system (still a ways out I'm afraid), and
let this bug slip into the background.
I'm afraid it's probably too late to get this into the
I don't think so, but I may not be reading the output carefully enough.
What I'm really looking for is a distribution of write sizes. Specifically,
I'm trying to understand the I/Os given to RAID-Z devices so I can model
how different stripe widths might handle the same load.
Adam
On Thu, May 04,
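Until better statistics turn up, a rough write-size distribution can be
pulled from the DTrace io provider; this counts physical writes per device,
which may not line up exactly with what the vdev layer sees:

    # dtrace -n 'io:::start /!(args[0]->b_flags & B_READ)/ { @[args[1]->dev_statname] = quantize(args[0]->b_bcount); }'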
On Thu, May 04, 2006 at 09:55:37AM -0700, Adam Leventhal wrote:
> Is there a way, given a dataset or pool, to get some statistics about the
> sizes of writes that were made to the underlying vdevs?
Does zdb -bsv give you what you want?
--Bill
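For anyone following along, the invocation is simply the following (pool
name is a placeholder); note that -b walks every block in the pool, so it
can take a while on a large one:

    # zdb -bsv tank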
Is there a way, given a dataset or pool, to get some statistics about the
sizes of writes that were made to the underlying vdevs?
Thanks.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
To expand a bit on the description in the man page, the amount of data a
RAID-Z vdev can store actually varies quite a bit. One of the interesting
innovations of RAID-Z is that it only allocates chunks that are a multiple
of the minimum allocatable size (2 blocks -- 1 data, 1 parity) so that you
ne
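A worked example of my reading of that scheme, assuming a 5-disk
single-parity raidz with 512-byte sectors: a 3-sector data block needs
3 data + 1 parity = 4 sectors, already a multiple of 2, so nothing extra is
allocated; a 4-sector data block needs 4 + 1 = 5 sectors and is rounded up
to 6, with the extra sector left as padding so that any space freed later
can still hold a minimum-sized (1 data + 1 parity) allocation.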
On Thu, May 04, 2006 at 12:20:47AM -0700, Jeff Bonwick wrote:
> > I just got an Ultra 20 with the default 80GB internal disk. Right now,
> > I'm using around 30GB for zfs. I will be getting a new 250GB drive.
> >
> > Question: If I create a 30GB slice on the 250GB drive, will that be okay
> >
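Assuming the answer is yes, attaching the new slice as a mirror is a
one-liner; the pool and device names below are placeholders for the existing
30GB slice and the new 30GB slice carved out of the 250GB drive:

    # zpool attach tank c1d0s7 c2d0s7
    # zpool status tank

zpool status shows the resilver progressing and completing.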
Nicolas Williams wrote:
On Thu, May 04, 2006 at 12:39:59AM -0700, Jeff Bonwick wrote:
Why not use the Solaris audit facility?
Several reasons:
(1) We want the history to follow the data, not the host. If you
export the pool from one host and import it on another, we want
the command h
On Thu, May 04, 2006 at 12:39:59AM -0700, Jeff Bonwick wrote:
> > Why not use the Solaris audit facility?
>
> Several reasons:
>
> (1) We want the history to follow the data, not the host. If you
> export the pool from one host and import it on another, we want
> the command history to m
> Why not use the Solaris audit facility?
Several reasons:
(1) We want the history to follow the data, not the host. If you
export the pool from one host and import it on another, we want
the command history to move with the pool. That won't happen
if the history file is somewhere i
ZFS must support POSIX semantics, part of which is hard links. Hard
links allow you to create multiple names (directory entries) for the
same file. Therefore, all UNIX filesystems have chosen to store the
file information separately for the directory entries (otherwise, you'd
have multiple copies of the file information).
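A quick illustration of the multiple-names-one-file point (file names here
are arbitrary):

    $ touch a
    $ ln a b
    $ ls -li a b

Both names report the same inode/object number and a link count of 2, so the
per-file information has to live in one place, outside either directory entry.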
Hi
Is there (or will there be) any method or tool to migrate UFS/SVM
filesystems with soft partitions to ZFS filesystems and pools?
Any ideas for migrating an installed base (Solaris 10 with UFS/Solaris
Volume Manager) to Solaris 10 with ZFS, or is backup-and-restore the only option?
Thanks
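I'm not aware of any in-place conversion, so a copy-based sketch would look
roughly like the following; the pool, dataset, device, and filesystem names
are made up:

    # zpool create tank c1t1d0                      (or mirror/raidz as appropriate)
    # zfs create tank/home
    # ufsdump 0f - /export/home | (cd /tank/home && ufsrestore rf -)

Once the copy is verified, adjust mount points and /etc/vfstab to switch over.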
> ZFS must support POSIX semantics, part of which is hard links. Hard
> links allow you to create multiple names (directory entries) for the
> same file. Therefore, all UNIX filesystems have chosen to store the
> file information separately for the directory entries (otherwise, you'd
> have multipl
Yesterday my snv_39 32-bit x86 test box had a strange issue with "zfs snapshot"
failing. The strange state lasted for roughly 5-10 minutes, but eventually the
problem disappeared. Unfortunately, I can't reproduce the behaviour.
What happened was this:
zfs snapshot failed with an "unexpected error 16
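In case it helps with the post-mortem, error numbers like that can be looked
up directly in the headers, e.g.

    $ grep -w 16 /usr/include/sys/errno.h

which should show EBUSY ("Device busy") on Solaris.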
> > I'm a newbie to ZFS. Can someone explain this point a bit deeper? If I
> > try to run ZFS on a 32-bit system will it just be slower or is the
> > maximum storage pool size actually limited by the 32-bit address
> > space?
>
> Only the cache size is limited by the 32-bit address space, thus
> (p
> I believe RAID-Z in a two-disk configuration is
> almost completely identical (in terms of space and failure resistance)
> to mirroring, but not an optimal implementation of it.
>
> If you want mirroring, you should just use mirror
> vdevs. Any ZFS folk want to chime in?
>
> Cheers,
> - jonatha
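For completeness, the two layouts being compared differ only in the vdev
keyword at pool creation; pool and disk names are placeholders:

    # zpool create tank mirror c1d0 c2d0      (two-way mirror)
    # zpool create tank raidz  c1d0 c2d0      (two-disk raidz)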
On Wed, 3 May 2006, Matthew A. Ahrens wrote:
> > # zpool history jen
> > History for 'jen':
> > 2006-04-27T10:38:36 zpool create jen mirror ...
>
> I have two suggestions which are just minor nits compared with the rest of
> this discussion:
>
> 1. Why do you print a "T" between the date and the
> Why not use a terse XML format?
I suppose we could, but I'm not convinced that XML is stable enough
to be part of a 30-year on-disk format. 15 years ago PostScript
was going to be stable forever, but today many PostScript readers
barf on Adobe-PS-1.0 files -- which were supposed to be the most
> I just got an Ultra 20 with the default 80GB internal disk. Right now,
> I'm using around 30GB for zfs. I will be getting a new 250GB drive.
>
> Question: If I create a 30GB slice on the 250GB drive, will that be okay
> to use as mirror (or raidz) of the current 30GB that I now have on the
> What I meant is that events that "cause a permanent change..." should
> not be deleted from the circular log if there are "old" (older?)
> "operationally interesting" events that could be deleted instead.
>
> I.e., if the log can keep only so much info then I'd rather have the
> history of a poo