Jeff A. Earickson wrote:
Are there any plans/schemes for per-user quotas within a ZFS filesystem,
akin to the UFS quotaon(1M) mechanism? I take it that quotaon won't
work with a ZFS filesystem, right? Suggestions please? My notion right
now is to drop quotas for /var/mail.
An alternative m
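A minimal sketch of the usual workaround, assuming one ZFS filesystem per mail user (the pool and user names below are invented for illustration):

  # zfs create tank/mail
  # zfs create tank/mail/alice
  # zfs set quota=500m tank/mail/alice
  # zfs get quota tank/mail/alice

Each user gets their own dataset with its own quota property, which stands in for the per-user limits that quotaon(1M) provided on UFS.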
Eric Schrock wrote:
This case adds a new option, 'zfs create -o', which allows for any ZFS
property to be set at creation time. Multiple '-o' options can appear
in the same subcommand. Specifying the same property multiple times in
the same command results in an error. For example:
#
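The example itself is cut off above; an illustrative invocation along the same lines (the dataset name and property values here are made up) would be:

  # zfs create -o mountpoint=/export/ws -o compression=on tank/ws

whereas something like 'zfs create -o quota=10g -o quota=20g tank/ws' would be rejected, because the same property appears twice in one command.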
Hey, Bob -
It might be worth exploring where the data stream for your writes was
coming from, and how fast it was filling up the write caches.
Were you delivering enough data to keep the disks busy 100% of the time?
I have been tricked by this before... :)
N
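A rough way to answer the "disks busy 100% of the time" question while the copy runs (device names will differ per system):

  # iostat -xnz 1

If the %b (percent busy) and actv (queued commands) columns for the pool's disks stay well below saturation, the bottleneck is upstream of the disks rather than in the spindles themselves.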
Robert Milkowski wrote:
ps. However, I'm really concerned about ZFS behavior when a pool is
almost full, there are lots of write transactions to that pool, and the
server is restarted forcibly or panics. I observed that file systems
on that pool will each take 10-30 minutes to mount during zfs mount -a, and
o
Bob Evans writes:
> One last tidbit, for what it is worth. Rather than watch top, I ran
> xcpustate. It seems that just as the writes pause, the cpu looks like
> it hits 100% (or very close), then it falls back down to its lower
> level.
>
> I'm still getting used to Solaris 10 as well,
One last tidbit, for what it is worth. Rather than watch top, I ran xcpustate.
It seems that just as the writes pause, the cpu looks like it hits 100% (or
very close), then it falls back down to its lower level.
I'm still getting used to Solaris 10 as well, so if you have a DTrace script
you'
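Absent the specific script being asked for, a generic DTrace profiling one-liner (an assumption on my part, not the script in question) would show what the kernel is doing during those CPU spikes:

  # dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); } tick-30sec { trunc(@, 10); exit(0); }'

This samples kernel stacks at 997 Hz for 30 seconds and prints the ten hottest ones, which is usually enough to tell whether the pauses line up with transaction-group sync activity.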
Bob Evans writes:
> I'm starting simple, there is no app.
>
> I have a 10GB file (called foo) on the internal FC drive, I did a zfs create
> raidz bar
> then ran "cp foo /bar/", so there is no cpu activity due to an app.
>
> As a test case, this took 7 min 30 sec to copy to the zfs
As added information, top reports that "cp" is using about 25% of the single
cpu. There are no other apps running.
Bob
I'm starting simple, there is no app.
I have a 10GB file (called foo) on the internal FC drive, I did a zfs create
raidz bar
then ran "cp foo /bar/", so there is no cpu activity due to an app.
As a test case, this took 7 min 30 sec to copy to the zfs partition. I removed
the pool, formatt
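For reference, the literal command sequence for that test would look roughly like this (disk names are placeholders, and note that a raidz pool is created with zpool, not zfs):

  # zpool create bar raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0
  # ptime cp foo /bar/

10 GB in 7 min 30 s works out to roughly 23 MB/s, which is far below what a stripe of 15K RPM spindles should sustain for streaming writes.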
Incidentally, this is part of how QFS gets its performance
for streaming I/O. We use an "allocate forward" policy,
allow very large allocation blocks, and separate the
metadata from data. This allows us to write (or read) data
in fairly large I/O requests, without unne
Neil Perrin writes:
> Yes, James is right, this is normal behaviour. Unless the writes are
> synchronous (O_DSYNC) or explicitly flushed (fsync()), they
> are batched up, written out and committed as a transaction
> every txg_time (5 seconds).
>
> Neil.
>
> James C. McPherson wrote:
Yes, James is right, this is normal behaviour. Unless the writes are
synchronous (O_DSYNC) or explicitly flushed (fsync()), they
are batched up, written out and committed as a transaction
every txg_time (5 seconds).
Neil.
James C. McPherson wrote:
Bob Evans wrote:
Just getting my feet wet
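One rough way to see that batching from outside, assuming the pool from the test is named bar, is to watch pool-level I/O while the buffered copy runs:

  # zpool iostat bar 1

The write bandwidth shows up in bursts roughly every txg interval rather than as a steady stream; opening the file with O_DSYNC or calling fsync() changes the picture, because each write then has to commit through the intent log instead of waiting for the next transaction group.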
Hi Bob,
Looks like : 6415647 Sequential writing is jumping
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6415647
-r
Roch Bourbonnais, Sun Microsystems Inc, Grenoble
Senior
Bob Evans wrote:
Just getting my feet wet with zfs. I set up a test system (Sunblade
1000, dual channel scsi card, disk array with 14x18GB 15K RPM SCSI
disks) and was trying to write a large file (10 GB) to the array to
see how it performed. I configured the raid using raidz.
During the write,
Hi,
Just getting my feet wet with zfs. I set up a test system (Sunblade 1000, dual
channel scsi card, disk array with 14x18GB 15K RPM SCSI disks) and was trying
to write a large file (10 GB) to the array to see how it performed. I
configured the raid using raidz.
During the write, I saw the
> Brad,
>
> I have a suspicion about what you might be seeing and I want to confirm
> it. If it locks up again you can also collect a threadlist:
>
> "echo $
> Send me the output and that will be a good starting point.
I tried popping out a disk again, but for whatever reason, the system
just
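The command being quoted is cut off above; the usual way to collect a threadlist on Solaris, which is presumably what was being asked for, is:

  # echo "::threadlist -v" | mdb -k > /var/tmp/threadlist.out

The -v output can be large, so redirecting it to a file makes it easier to send along.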
Robert Milkowski wrote:
Hello Mark,
Sunday, August 13, 2006, 8:00:31 PM, you wrote:
MM> Robert Milkowski wrote:
Hello zfs-discuss,
bash-3.00# zpool status nfs-s5-s6
pool: nfs-s5-s6
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made
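When status reports an unrecoverable error like that, the usual follow-up (a sketch, using the pool name from the output above) is:

  # zpool status -v nfs-s5-s6
  # zpool clear nfs-s5-s6

The -v form lists any files with permanent errors, and clear resets the error counters once the cause is understood, assuming the clear subcommand is present in this build.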
The test case was build 38, Solaris 11, a 2 GB file, initially created
with 1 MB SW, and a recsize of 8 KB, on a pool with two raid-z 5+1,
accessed with 24 threads of 8 KB RW, for 500,000 ops or 40 seconds,
whichever came first. The result at the pool level was 78% of the operations
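For anyone wanting to set up something similar, a rough sketch (pool, dataset, and device names are invented here) would be:

  # zpool create tank raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
        raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0
  # zfs create tank/test
  # zfs set recordsize=8k tank/test

The 2 GB file is then written sequentially first, and the 24 threads of 8 KB accesses are driven against it with whatever load generator is handy.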
On Fri, Aug 11, 2006 at 05:25:11PM -0700, Peter Looyenga wrote:
> I looked into backing up ZFS and quite honestly I can't say I am convinced
> about its usefulness here when compared to the traditional ufsdump/restore.
> While snapshots are nice they can never substitute offline backups. And
>
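For comparison with ufsdump/ufsrestore, the ZFS-native path is snapshot plus send/receive; a minimal sketch (dataset and pool names invented) looks like:

  # zfs snapshot tank/home@monday
  # zfs send tank/home@monday > /backup/home.monday.zfs
  # zfs send -i tank/home@monday tank/home@tuesday > /backup/home.mon-tue.zfs

The stream can also be piped into 'zfs receive' on another pool or host, which is the closer analogue to restore; it is not a substitute for offline media, but it covers the full-plus-incremental workflow that ufsdump is usually used for.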