On Fri, Oct 13, 2006 at 09:22:53PM -0700, Erblichs wrote:
> For extremely large files (25 to 100 GB) that are accessed
> sequentially for both read & write, I would expect 64k or 128k.
Larger files accessed sequentially don't need any special heuristic for
record size determination:
Group,
I am not sure I agree with the 8k size.
Since "recordsize" is based on the size of filesystem blocks
for large files, my first consideration is what the maximum
size of the file object will be.
For extremely large files (25 to 100 GB) that are accessed
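For readers who do want to pin the block size of such a dataset, the standard zfs(1M) property interface covers it; a minimal example, with a hypothetical pool/dataset name and the 128k maximum that large sequential files generally want:
# zfs get recordsize tank/bigfiles
# zfs set recordsize=128k tank/bigfiles
Only blocks written after the change use the new record size; existing blocks keep whatever size they were written with.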
ZFS ignores the fsflush. Here's a snippet of the code in zfs_sync():
/*
* SYNC_ATTR is used by fsflush() to force old filesystems like UFS
* to sync metadata, which they would otherwise cache indefinitely.
* Semantically, the only requirement is that the sync be
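The part of the snippet cut off above amounts to an early return; a rough sketch of the pattern (not the verbatim OpenSolaris source, and the flag value is illustrative) is:
/* Sketch only: in the kernel, SYNC_ATTR comes from <sys/vfs.h>. */
#define SYNC_ATTR       0x01

static int
zfs_sync_sketch(int flag)
{
        /*
         * fsflush() passes SYNC_ATTR to request a metadata sync.
         * ZFS pushes its transaction groups out on its own, so the
         * request can be acknowledged without doing any extra work.
         */
        if (flag & SYNC_ATTR)
                return (0);

        /* ... otherwise force out the current transaction group ... */
        return (0);
}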
On 10/13/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Using ZFS for a zone's root is currently planned to be supported in
Solaris 10 update 5, but we are working on moving it up to update 4.
Are there any areas where the community can help with this? Would
code or "me too!" support calls help
Group,
If there is a bad vfs ops template, why
wouldn't you just return(error) instead of
trying to create the vnode ops template?
My suggestion is: after the cmn_err(),
return(error);
Mitchell Erblich
-
The high order bit here is that
write();
write();
fsync();
can be executed with a single I/O latency (incurred during the
fsync), whereas using O_*DSYNC requires two I/O latencies
(one for each write).
-r
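To make the point concrete, a hedged sketch in C (the file name, buffers and error handling are simplified, and the function names are made up):
#include <fcntl.h>
#include <unistd.h>

/*
 * Pattern A: cached writes followed by one fsync().  Both writes can be
 * committed together, so only one synchronous I/O wait is incurred.
 */
static void
two_writes_then_fsync(const char *path, const void *b1, size_t l1,
    const void *b2, size_t l2)
{
        int fd = open(path, O_WRONLY | O_CREAT, 0644);

        (void) write(fd, b1, l1);
        (void) write(fd, b2, l2);
        (void) fsync(fd);               /* the single wait for stable storage */
        (void) close(fd);
}

/*
 * Pattern B: O_DSYNC makes every write synchronous, so each write()
 * waits for its own I/O, i.e. two waits instead of one.
 */
static void
two_sync_writes(const char *path, const void *b1, size_t l1,
    const void *b2, size_t l2)
{
        int fd = open(path, O_WRONLY | O_CREAT | O_DSYNC, 0644);

        (void) write(fd, b1, l1);       /* waits */
        (void) write(fd, b2, l2);       /* waits again */
        (void) close(fd);
}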
Neil Perrin writes:
> As far as zfs performance is concerned,
Robert Milkowski wrote:
Hello Noel,
Friday, October 13, 2006, 11:22:06 PM, you wrote:
ND> I don't understand why you can't use 'zpool status'? That will show
ND> the pools and the physical devices in each and is also a pretty basic
ND> command. Examples are given in the sysadmin docs and man
Hello Noel,
Friday, October 13, 2006, 11:22:06 PM, you wrote:
ND> I don't understand why you can't use 'zpool status'? That will show
ND> the pools and the physical devices in each and is also a pretty basic
ND> command. Examples are given in the sysadmin docs and manpages for
ND> ZFS on the
Hello Ramneek,
Friday, October 13, 2006, 6:07:22 PM, you wrote:
RS> Hello Experts
RS> Would appreciate it if somebody could comment on a sendmail environment on
RS> Solaris 10.
RS> How will ZFS perform if one has millions of files in a sendmail message
RS> store directory under a ZFS filesystem compared to
Hello Matthew,
Friday, October 13, 2006, 5:37:45 PM, you wrote:
MA> Robert Milkowski wrote:
>> Hello Richard,
>>
>> Friday, October 13, 2006, 8:05:18 AM, you wrote:
>>
>> REP> Do you want data availability, data retention, space, or performance?
>>
>> data availability, space, performance
>>
I don't understand why you can't use 'zpool status'? That will show
the pools and the physical devices in each and is also a pretty basic
command. Examples are given in the sysadmin docs and manpages for
ZFS on the opensolaris ZFS community page.
I realize it's not quite the same command
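For anyone who has not run it, an illustrative (made-up) mirrored pool reports something like:
# zpool status tank
  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0

errors: No known data errors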
ZFS is supposed to be much easier to use than UFS.
For creating a filesystem, I agree it is, as I could do that easily without a
man page.
However, I found it rather surprising that I could not see the physical
device(s) a zfs filesystem was attached to using either "df" command (that
shows ph
On Fri, Oct 13, 2006 at 08:30:27AM -0700, Matthew Ahrens wrote:
> Jeremy Teo wrote:
> >Would it be worthwhile to implement heuristics to auto-tune
> >'recordsize', or would that not be worth the effort?
>
> It would be really great to automatically select the proper recordsize
> for each file! H
- Original Message -
Subject: no tool to get "expected" disk usage reports
From: "Dennis Clarke" <[EMAIL PROTECTED]>
Date: Fri, October 13, 2006 14:29
To: zfs-discuss@opensolaris.org
---
One technique would be to keep a histogram of read & write sizes.
Presumably one would want to do this only during a “tuning phase” after the
file was first created, or when access patterns change. (A shift to smaller
record sizes can be detected by a large proportion of write operations which
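As a thumbnail of that idea, a sketch (not proposed ZFS code): bucket the observed I/O sizes by power of two and pick the smallest record size that covers the bulk of the operations.
#include <stdint.h>

#define NBUCKETS        9       /* 512 bytes through 128K, by powers of two */

/* Return the smallest power-of-two size covering at least 90% of the I/Os. */
static uint32_t
suggest_recordsize(const uint64_t hist[NBUCKETS])
{
        uint64_t total = 0, running = 0;
        int i;

        for (i = 0; i < NBUCKETS; i++)
                total += hist[i];
        for (i = 0; i < NBUCKETS; i++) {
                running += hist[i];
                if (total != 0 && running * 10 >= total * 9)
                        return (512U << i);
        }
        return (512U << (NBUCKETS - 1));        /* no samples yet: default to 128K */
}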
For what it's worth, close-to-open consistency was added to Linux NFS in the
2.4.20 kernel (late 2002 timeframe). This might be the source of some of the
confusion.
On Fri, Oct 13, 2006 at 11:03:51AM +0200, Joerg Schilling wrote:
> Nicolas Williams <[EMAIL PROTECTED]> wrote:
>
> > On Wed, Oct 11, 2006 at 08:24:13PM +0200, Joerg Schilling wrote:
> > > Before we start defining the first official functionality for this Sun
> > > feature,
> > > we should define
Hello Experts
Would appreciate it if somebody could comment on a sendmail environment on
Solaris 10.
How will ZFS perform if one has millions of files in a sendmail message
store directory under a ZFS filesystem compared to UFS or VxFS?
--
Thanks & Regards,
I don't think controllers really fail that often, but anything that increases
redundancy is likely to be an improvement. I would hope that the controllers
used in Thumper mostly keep their channels independent from a PCI point of
view, so that your pools don’t collide. (It does mean that each r
Robert Milkowski wrote:
Hello Richard,
Friday, October 13, 2006, 8:05:18 AM, you wrote:
REP> Do you want data availability, data retention, space, or performance?
data availability, space, performance
However we're talking about quite a lot of small IOs (r+w).
Then you should seriously cons
Roshan Perera wrote:
Hi Jeff & Robert, Thanks for the reply. Your interpretation is
correct and the answer spot on.
This is going to be at a VIP client's QA/production environment and
first introduction to Solaris 10, zones and ZFS. Anything unsupported is not
allowed. Hence I may have to wait for the fi
Jeremy Teo wrote:
Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?
It would be really great to automatically select the proper recordsize
for each file! How do you suggest doing so?
--matt
Most ZFS improvements should be available through patches. Some may require
moving to a future update (for instance, ZFS booting, which may have other
implications throughout the system).
On most systems, you won't see a lot of difference between hardware and
software mirroring.
The benefit of
> Does it matter if the /dev names of the partitions change (i.e. from
> /dev/dsk/c2t2250CC611005d3s0 to another machine not using Sun HBA
> drivers with a different/shorter name?)
It should not. As long as all the disks are visible and ZFS can read
the labels, it should be able to impor
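In practice the move is an export on the old host and an import on the new one; the device paths are rediscovered from the on-disk labels (the pool name here is hypothetical):
old-host# zpool export tank
new-host# zpool import
        (lists importable pools along with their new device names)
new-host# zpool import tank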
Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?
--
Regards,
Jeremy
Does it matter if the /dev names of the partitions change (i.e. from
/dev/dsk/c2t2250CC611005d3s0 to another machine not using Sun HBA
drivers with a different/shorter name?)
thanks
keith
If the file does not exist then ZFS will not attempt to open any
pools at boot. You must issu
On Fri, Joerg Schilling wrote:
> Spencer Shepler <[EMAIL PROTECTED]> wrote:
>
> > Sorry, the code in Solaris would behave as I described. Upon the
> > application closing the file, modified data is written to the server.
> > The client waits for completion of those writes. If there is an error,
Jeff Victor <[EMAIL PROTECTED]> wrote:
> >>Your wording did not match with reality, which is why I wrote this.
> >>You did write that upon close() the client will first do something similar
> >>to
> >>fsync on that file. The problem is that this is done asynchronously and the
> >>close()
Spencer Shepler <[EMAIL PROTECTED]> wrote:
> Sorry, the code in Solaris would behave as I described. Upon the
> application closing the file, modified data is written to the server.
> The client waits for completion of those writes. If there is an error,
> it is returned to the caller of close(
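A side note for application writers, sketched here rather than taken from the thread: because those write-backs can fail after write() has already returned success, both fsync() and close() deserve a return-value check.
#include <fcntl.h>
#include <unistd.h>

/* Write a buffer to a (possibly NFS) file, surfacing deferred write errors. */
static int
write_file(const char *path, const void *buf, size_t len)
{
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (fd == -1)
                return (-1);
        if (write(fd, buf, len) != (ssize_t)len || fsync(fd) == -1) {
                (void) close(fd);
                return (-1);
        }
        return (close(fd));     /* an asynchronous write error may only show up here */
}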
Hi Jeff & Robert,
Thanks for the reply. Your interpretation is correct and the answer spot on.
This is going to be at a VIP client's QA/production environment and first
introduction to Solaris 10, zones and ZFS. Anything unsupported is not allowed. Hence I
may have to wait for the fix. Do you know roughl
On Fri, Jeff Victor wrote:
> Spencer Shepler wrote:
> >On Fri, Joerg Schilling wrote:
> >
> >>>This doesn't change the fact that upon close() the NFS client will
> >>>write data back to the server. This is done to meet the
> >>>close-to-open semantics of NFS.
> >>
> >>Your wording did not match wi
Spencer Shepler wrote:
On Fri, Joerg Schilling wrote:
This doesn't change the fact that upon close() the NFS client will
write data back to the server. This is done to meet the
close-to-open semantics of NFS.
Your wording did not match with reality, which is why I wrote this.
You did
On Fri, Joerg Schilling wrote:
> Spencer Shepler <[EMAIL PROTECTED]> wrote:
>
> > I didn't comment on the error conditions that can occur during
> > the writing of data upon close(). What you describe is the preferred
> > method of obtaining any errors that occur during the writing of data.
> > T
Roshan Perera wrote:
Hi,
Sorry if this has been raised before.
Question: Is it possible to
1. Have the Solaris 10 OS partitions under SDS and a single partition on that same
disk (without SDS) as a ZFS slice?
Yes.
2. Partition the ZFS slice into many
partitions, each partition to hold a zon
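For the second part the mechanics are roughly the following (setting aside the support caveat discussed elsewhere in this thread; slice, pool and zone names are made up):
# zpool create zonepool c1t0d0s7
# zfs create zonepool/zone1
# chmod 700 /zonepool/zone1
# zonecfg -z zone1 'create; set zonepath=/zonepool/zone1; commit'
# zoneadm -z zone1 install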
Hello Roshan,
Friday, October 13, 2006, 1:12:12 PM, you wrote:
RP> Hi,
RP> Sorry if this has been raised before.
RP> Question: Is it possible to
RP> 1. Have the Solaris 10 OS partitions under SDS and a single partition
RP> on that same disk (without SDS) as a ZFS slice?
Yes.
RP> 2. Partition th
Hi,
Sorry if this has been raised before.
Question: Is it possible to
1. Have the Solaris 10 OS partitions under SDS and a single partition on that same
disk (without SDS) as a ZFS slice?
2. Partition the ZFS slice into many partitions, each partition to hold a
zone. Idea is to create many non-
Hello Richard,
Friday, October 13, 2006, 8:05:18 AM, you wrote:
REP> Do you want data availability, data retention, space, or performance?
data availability, space, performance
However we're talking about quite a lot of small IOs (r+w).
The real question was what do you think about creating ea
Spencer Shepler <[EMAIL PROTECTED]> wrote:
> I didn't comment on the error conditions that can occur during
> the writing of data upon close(). What you describe is the preferred
> method of obtaining any errors that occur during the writing of data.
> This occurs because the NFS client is writin
Nicolas Williams <[EMAIL PROTECTED]> wrote:
> On Wed, Oct 11, 2006 at 08:24:13PM +0200, Joerg Schilling wrote:
> > Before we start defining the first official functionality for this Sun
> > feature,
> > we should define a mapping for Mac OS, FreeBSD and Linux. It may make
> > sense to
> > def
ZFS/NFSv4 introduced a new ACL model (see acl(2) ...nevada (OpenSolaris)
Solaris 10u2). There is no compatibility bridge between the
GETACL/SETACL/GETACLCNT and ACE_GETACL/ACE_SETACL/ACE_GETACLCNT functions of
the acl(2) syscall. Because this is Solaris specific (samba.org defines its
internal acl
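On the command-line side, the new model is driven with chmod/ls rather than setfacl/getfacl; for example (the file name and user are arbitrary):
# ls -v file.txt
        (lists the NFSv4/ZFS ACL entries)
# chmod A+user:webservd:read_data:allow file.txt
        (prepends an allow ACE for user webservd)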
Since it is embedded into Solaris now, further improvements will be available
by patching and we don't have to migrate to another update, right?
Two more things:
1) I know the benefits of ZFS, but I wonder whether having UFS+ZFS instead of only
UFS on single disk mirroring (I mean there are two disk