Ya, I agree that we need some additional data and testing. The iostat
data in itself doesn't suggest to me that the process (dd) is slow but
rather that most of the data is being retrieved elsewhere (ARC). An
fsstat would be useful to correlate with the iostat data.
One thing that also comes to
On Tue, Mar 31, 2009 at 7:12 PM, Matthew Ahrens wrote:
> River Tarnell wrote:
>>
>> Matthew Ahrens:
>>>
>>> ZFS user quotas (like other zfs properties) will not be accessible over
>>> NFS;
>>> you must be on the machine running zfs to manipulate them.
>>
does this mean that without an account on the NFS server, a user cannot see his
current disk use / quota?
Matthew Ahrens:
>> does this mean that without an account on the NFS server, a user cannot see
>> his
>> current disk use / quota?
> That's correct.
in this case, might i suggest at least an RFE to add ZFS quota support to
rquotad? i'm sure we aren
On Wed, Apr 01, 2009 at 01:41:06AM +0300, Dimitar Vasilev wrote:
> Hi all,
> Could someone give a hint if it's possible to create rpool/tmp, mount
> it as /tmp so that tmpfs has some disk-based back-end instead of
> memory-based size-limited one.
You mean you want /tmp to be a regular ZFS filesystem?
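A minimal sketch of what that might look like, assuming the usual legacy-mount approach so /tmp can be switched over from tmpfs (the dataset name rpool/tmp is taken from the question; everything else is an assumption, not from the thread):

```shell
# Sketch: replace the tmpfs /tmp with a disk-backed ZFS dataset.
# Create the dataset unmounted so it can be attached via vfstab.
zfs create -o mountpoint=legacy rpool/tmp

# Comment out the "swap - /tmp tmpfs ..." line in /etc/vfstab,
# add a "rpool/tmp - /tmp zfs ..." entry instead, then:
mount -F zfs rpool/tmp /tmp
chmod 1777 /tmp    # /tmp must be world-writable with the sticky bit
```

Setting a `quota` property on the dataset would give back the size cap that tmpfs normally provides, at the cost of tmpfs's memory-speed behavior.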
River Tarnell wrote:
Matthew Ahrens:
ZFS user quotas (like other zfs properties) will not be accessible over NFS;
you must be on the machine running zfs to manipulate them.
does this mean that without an account on the NFS server, a user cannot see his
current disk use / quota?
That's correct.
bh...@freaks.com said:
> Even with a very weak CPU the system is close to saturating the PCI bus for
> reads with most configurations.
Nice little machine. I wonder if you'd get some of the bonnie numbers
increased if you ran multiple bonnie's in parallel. Even though the
sequential throughput
Matthew Ahrens:
> ZFS user quotas (like other zfs properties) will not be accessible over NFS;
> you must be on the machine running zfs to manipulate them.
does this mean that without an account on the NFS server, a user cannot see his
current disk use / quota?
Tomas Ögren wrote:
On 31 March, 2009 - Matthew Ahrens sent me these 10K bytes:
FYI, I filed this PSARC case yesterday, and expect to integrate into
OpenSolaris in April. Your comments are welcome.
http://arc.opensolaris.org/caselog/PSARC/2009/204/
Quota reporting over NFS or for userland apps like Samba?
On Mar 31, 2009, at 04:31, Scott Lawson wrote:
http://blogs.sun.com/ahl/entry/expand_o_matic_raid_z
There's a more recent post on bp (block pointer) rewriting that will
allow for moving blocks around (part of cleaning up the scrub code):
http://blogs.sun.com/ahrens/entry/new_scrub_code
Th
On 31 March, 2009 - Matthew Ahrens sent me these 10K bytes:
> FYI, I filed this PSARC case yesterday, and expect to integrate into
> OpenSolaris in April. Your comments are welcome.
>
> http://arc.opensolaris.org/caselog/PSARC/2009/204/
Quota reporting over NFS or for userland apps like Samba?
Nicolas Williams wrote:
We could also
disallow them from doing "zfs get userused@name pool/zoned/fs", just make
it an error to prevent them from seeing something other than what they
intended.
I don't see why the g-z admin should not get this data.
They can of course still get the data by d
On Tue, Mar 31, 2009 at 01:25:35PM -0700, Matthew Ahrens wrote:
>
> These new properties are not printed by "zfs get all", since that could
> generate a huge amount of output, which would not be very well
> organized. The new "zfs userspace" subcommand should be used instead.
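For reference, a hypothetical invocation of the new subcommand (the filesystem name tank/home is made up; the exact output columns are not shown in this thread):

```shell
# Per-user space accounting for one filesystem, instead of "zfs get all":
zfs userspace tank/home
# And the group-aggregated equivalent described in the case:
zfs groupspace tank/home
```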
Ah, I missed that.
On Tue, Mar 31, 2009 at 11:01 PM, George Wilson wrote:
> Cyril Plisko wrote:
>>
>> On Thu, Mar 26, 2009 at 8:45 PM, Richard Elling
>> wrote:
>>
>>>
>>> assertion failures are bugs.
>>>
>>
>> Yup, I know that.
>>
>>
>>>
>>> Please file one at http://bugs.opensolaris.org
>>>
>>
>> Just did.
>>
>
>
Nicolas Williams wrote:
On Tue, Mar 31, 2009 at 02:37:02PM -0500, Mike Gerdts wrote:
The <user|group> is specified using one of the following forms:
posix name (eg. ahrens)
posix numeric id (eg. 126829)
sid name (eg. ahrens@sun)
sid numeric id (eg. S-1-12345-12423-125829)
How does this work with zones?
On Tue, Mar 31, 2009 at 01:16:42PM -0700, Matthew Ahrens wrote:
> Robert Milkowski wrote:
> >Hello Matthew,
> >
> >Excellent news.
> >
> >Wouldn't it be better if logical disk usage would be accounted and not
> >physical - I mean when compression is enabled should quota be
> >accounted based by a logical file size or physical as in du?
Robert Milkowski wrote:
Hello Matthew,
Excellent news.
Wouldn't it be better if logical disk usage would be accounted and not
physical - I mean when compression is enabled should quota be
accounted based by a logical file size or physical as in du?
The compressed space *is* the amount of space
On Tue, Mar 31, 2009 at 02:37:02PM -0500, Mike Gerdts wrote:
> > The <user|group> is specified using one of the following forms:
> > posix name (eg. ahrens)
> > posix numeric id (eg. 126829)
> > sid name (eg. ahrens@sun)
> > sid numeric id (eg. S-1-12345-12423-125829)
>
> How does this work with zones? S
much cheering ensues!
2009/3/31 Matthew Ahrens :
> FYI, I filed this PSARC case yesterday, and expect to integrate into
> OpenSolaris in April. Your comments are welcome.
>
> http://arc.opensolaris.org/caselog/PSARC/2009/204/
>
> --matt
>
>
> -- Forwarded message --
> From: Matthe
Hello Matthew,
Excellent news.
Wouldn't it be better if logical disk usage would be accounted and not
physical - I mean when compression is enabled should quota be
accounted based by a logical file size or physical as in du?
I'm not saying which one is better just raising the question.
--
Be
Cyril Plisko wrote:
On Thu, Mar 26, 2009 at 8:45 PM, Richard Elling
wrote:
assertion failures are bugs.
Yup, I know that.
Please file one at http://bugs.opensolaris.org
Just did.
Do you have a crash dump from this issue?
- George
You may need to try another vers
2009/3/31 Matthew Ahrens :
> 4. New Properties
>
> user/group space accounting information and quotas can be manipulated
> with 4 new properties:
>
> zfs get userused@<user>
> zfs get groupused@<group>
>
> zfs get userquota@<user>
> zfs get groupquota@<group>
>
> zfs set userquota@<user>=<size>
> zfs set groupquota@<group>=<size>
>
> The <user|group> is specified using one of the following forms:
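A hypothetical session combining the properties with the identifier forms quoted above (the filesystem name tank/home and the sizes are invented; the identifiers reuse the case's own examples):

```shell
zfs set userquota@ahrens=1G tank/home                 # posix name
zfs set groupquota@staff=20G tank/home                # group, posix name
zfs get userused@126829 tank/home                     # posix numeric id
zfs get userquota@S-1-12345-12423-125829 tank/home    # sid numeric id
```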
FYI, I filed this PSARC case yesterday, and expect to integrate into
OpenSolaris in April. Your comments are welcome.
http://arc.opensolaris.org/caselog/PSARC/2009/204/
--matt
>casper@sun.com said:
>> I've upgraded my system from ufs to zfs (root pool).
>> By default, it creates a zvol for dump and swap.
>> . . .
>> So I removed the zvol swap and now I have a standard swap partition. The
>> performance is much better (night and day). The system is usable and I
>>
james.ma...@sun.com said:
> I'm not yet sure what's broken here, but there's something pathologically
> wrong with the IO rates to the device during the ZFS tests. In both cases,
> the wait queue is getting backed up, with horrific wait queue latency
> numbers. On the read side, I don't understand
casper@sun.com said:
> I've upgraded my system from ufs to zfs (root pool).
> By default, it creates a zvol for dump and swap.
> . . .
> So I removed the zvol swap and now I have a standard swap partition. The
> performance is much better (night and day). The system is usable and I
> don't k
Hello Brad,
Monday, March 30, 2009, 7:57:31 PM, you wrote:
BP> I've run into this too... I believe the issue is that the block
BP> size/allocation unit size in ZFS is much larger than the default size
BP> on older filesystems (ufs, ext2, ext3).
BP> The result is that if you have lots of small fi
On Tue, Mar 31, 2009 at 1:31 AM, Scott Lawson
wrote:
> No. There is no way to expand a RAIDZ or RAIDZ2 at this point. It is a
> feature that is often discussed
> and people would like, but has been seen by Sun as more of a feature home
> users would like rather
> than enterprise users. Enterpris
Posting this back to zfs-discuss.
Roland's test case (below) is a single threaded sequential write
followed by a single threaded sequential read. His bandwidth
goes from horrible (~2MB/sec) to expected (~30MB/sec)
when prefetch is disabled. This is with relatively recent nv bits (nv110).
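For anyone wanting to reproduce the comparison: on nv bits of that vintage, file-level prefetch is typically toggled with the zfs_prefetch_disable kernel tunable (takes effect immediately, does not persist across reboot):

```shell
# Disable ZFS file-level prefetch (value 1 = disabled):
echo zfs_prefetch_disable/W0t1 | mdb -kw
# Re-enable after testing:
echo zfs_prefetch_disable/W0t0 | mdb -kw
```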
Roland
I'm currently setting up user directories on a zfs filesystem (Solaris
10) which I then nfs mount on an OpenSUSE 9.3 system.
I have a zpool called zpool1. First, I set up a "home" zfs volume:
zfs create zpool1/la_home_hpc_users
zfs set sharenfs=on zpool1/la_home_hpc_users
Then I create a user direc
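The truncated recipe presumably continues with per-user datasets; a sketch of the usual pattern under that layout (the user name and quota value are invented):

```shell
# One dataset per user under the shared home filesystem:
zfs create zpool1/la_home_hpc_users/jsmith
zfs set quota=10G zpool1/la_home_hpc_users/jsmith   # per-user cap
# Child filesystems inherit sharenfs=on from the parent.
```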
I've upgraded my system from ufs to zfs (root pool).
By default, it creates a zvol for dump and swap.
It's a 4GB Ultra-45 and every late night/morning I run a job which takes
around 2GB of memory.
With a zvol swap, the system becomes unusable and the Sun Ray client often
goes into "26B".
So
Michael Shadle wrote:
On Mon, Mar 30, 2009 at 4:13 PM, Michael Shadle wrote:
Sounds like a reasonable idea, no?
Follow up question: can I add a single disk to the existing raidz2
later on (if somehow I found more space in my chassis) so instead of a
7 disk raidz2 (5+2) it becomes a 6+2