On 29.09.2013 00:30, Teske, Devin wrote:
> Interested in feedback, but moreover I would like to see who is
> interested in tackling this with me? I can't do it alone... I at least
> need testers who will provide feedback and edge-case testing.
Sign me in, I'm not fluent with Forth but testing something
On 28.09.2013 23:30, Teske, Devin wrote:
In my recent interview on bsdnow.tv, I was pinged on BEs in Forth.
I'd like to revisit this.
Back on Sept 20th, 2012, I posted some pics demonstrating exactly what
the code that was in HEAD (at the time) was/is capable of.
These three pictures (posted the
On Tue, Sep 3, 2013 at 9:01 AM, Florent Peterschmitt
wrote:
> On 03/09/2013 16:53, Alan Somers wrote:
>> GELI is full-disk encryption. It's far superior to ZFS encryption.
>
> Yup, but is there a possibility to encrypt a ZFS volume (not a whole
> pool) with a separate GELI partition?
You mean
On 03/09/2013 16:53, Alan Somers wrote:
> GELI is full-disk encryption. It's far superior to ZFS encryption.
Yup, but is there a possibility to encrypt a ZFS volume (not a whole
pool) with a separate GELI partition?
Also, in-ZFS encryption would be a nice thing if it could work like an
LVM/LU
On 03/09/2013 14:14, Emre Çamalan wrote:
> Hi,
> I want to encrypt some disks on my server with the ZFS encryption property, but it
> is not available.
"That would require ZFS v30. As far as I am aware Oracle has not
released the code under CDDL."
From http://forums.freebsd.org/showthread.php?t=30
Here is my real-world production example of users' mail as well as
documents.
/dev/mirror/home1.eli  2788  1545  1243  55%  1941057  20981181  8%
/home
Not the same data, I imagine.
A mix. 90% Mailboxes and user data (documents, pictures), rest are some
.tar.gz backups.
On Jan 24, 2013, at 4:24 PM, Wojciech Puchar wrote:
> Except it is on paper reliability.
This "on paper" reliability saved my ass numerous times.
For example, I had one home NAS server machine with a flaky SATA controller that
would not detect one of the four drives from time to time on reboo
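(The kind of recovery involved looks roughly like this; pool and device names are hypothetical:)
zpool status -x          # reports the pool DEGRADED with one member missing
zpool online tank ada3   # once the controller sees the disk again; resilver starts automatically
zpool status tank        # watch the resilver finish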
On Thu, Jan 24, 2013 at 2:26 PM, Wojciech Puchar
<woj...@wojtek.tensor.gdynia.pl> wrote:
>> There are 3,236,316 files summing to 97,500,008,691 bytes. That puts the
>> "average" file at 30,127 bytes. But for the full breakdown:
>
> quite low. what do you store.
>
Apparently you're not really
There are 3,236,316 files summing to 97,500,008,691 bytes. That puts the
"average" file at 30,127 bytes. But for the full breakdown:
quite low. what do you store.
Here is my real-world production example of users' mail as well as
documents.
/dev/mirror/home1.eli  2788  1545  1243  55%
So far I've not lost a single ZFS pool or any data stored.
so far my house wasn't robbed.
Ok... here's the existing data:
There are 3,236,316 files summing to 97,500,008,691 bytes. That puts the
"average" file at 30,127 bytes. But for the full breakdown:
512 : 7758
1024 : 139046
2048 : 1468904
4096 : 325375
8192 : 492399
16384 : 324728
32768 : 263210
65536 : 102407
131072 : 43046
26
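(In the breakdown above, the left column is the power-of-two size bucket in bytes and the right column the number of files. A breakdown like this can be reproduced with a short pipeline -- a sketch assuming FreeBSD stat(1); on GNU systems substitute "stat -c %s":)
find /home -type f -exec stat -f %z {} + | \
    awk '{ b = 512; while (b < $1) b *= 2; n[b]++ }
         END { for (b in n) print b, ":", n[b] }' | sort -n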
On 2013-01-24 15:45, Zaphod Beeblebrox wrote:
Ok... so my question then would be... what of the small files. If I write
several small files at once, does the transaction use a record, or does
each file need to use a record? Additionally, if small files use
sub-records, when you delete that file
On 2013-01-24 15:24, Wojciech Puchar wrote:
For me the reliability ZFS offers is far more important than pure
performance.
Except it is on paper reliability.
This "on paper" reliability in practice saved a 20TB pool. See one of my
previous emails. Any other filesystem or hardware/software rai
several small files at once, does the transaction use a record, or does
each file need to use a record? Additionally, if small files use
sub-records, when you delete that file, does the sub-record get moved or
just wasted (until the record is completely free)?
writes of small files are always g
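(A quick way to see that a small file does not consume a full 128K record on a dataset with the default recordsize; the path is hypothetical, and the exact figure reported depends on ashift, compression and metadata overhead:)
dd if=/dev/random of=/tank/test/smallfile bs=1k count=1
sync                             # give the transaction group a chance to commit
du -h /tank/test/smallfile       # on the order of 1K-4K allocated, not 128K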
Wow! OK. It sounds like you (or someone like you) can answer some of my
burning questions about ZFS.
On Thu, Jan 24, 2013 at 8:12 AM, Adam Nowacki wrote:
> Let's assume a 5-disk raidz1 vdev with ashift=9 (512-byte sectors).
>
> A worst case scenario could happen if your random i/o workload was
then stored on a different disk. You could think of it as a regular RAID-5
with a stripe size of 32768 bytes.
PostgreSQL uses 8192-byte pages that fit evenly both into the ZFS record size and
the column size. Each page access requires only a single disk read. Random i/o
performance here should be 5 time
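(Matching the dataset's recordsize to the 8192-byte page size mentioned above is a per-dataset property; a sketch with a hypothetical pool/dataset:)
zfs create -o recordsize=8k tank/pgdata
zfs get recordsize tank/pgdata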
On 2013-01-23 21:22, Wojciech Puchar wrote:
While RAID-Z is already a king of bad performance,
I don't believe RAID-Z is any worse than RAID5. Do you have any actual
measurements to back up your claim?
it is clearly described even in ZFS papers. Both on reads and writes it
gives single drive
On 01/23/13 14:27, Wojciech Puchar wrote:
> Both "work". For today's trend of solving everything with more hardware,
> ZFS may even have "enough" performance.
>
> But still it is dangerous for the reasons I explained, as well as it
> promotes bad setups and layouts like making a single filesystem out
even if you need normal performance, use gmirror and UFS
I've no objection. If it works for you -- go for it.
Both "work". For today's trend of solving everything with more hardware, ZFS
may even have "enough" performance.
But still it is dangerous for the reasons I explained, as well as it
promot
associated with mirroring.
Thanks for the link, but I could have done that; I am attempting to
explain to Wojciech that his habit of making bold assertions and
As you can see it is not a bold assertion; it's just that you use something without
even reading its docs.
Not to mention doing any more resea
On Wed, 23 Jan 2013 14:26:43 -0600, Chris Rees wrote:
>
> So we have to take your word for it?
> Provide a link if you're going to make assertions, or they're no more
> than
> your own opinion.
I've heard this same thing -- every vdev == 1 drive in performance. I've
never seen any proof/pape
"1 drive in performance" only applies to number of random i/o
operations vdev can perform. You still get increased throughput. I.e.
5-drive RAIDZ will have 4x bandwidth of individual disks in vdev, but
unless your work is serving movies it doesn't matter.
On 23 January 2013 21:24, Wojciech Puchar wrote:
>> I've heard this same thing -- every vdev == 1 drive in performance. I've
>> never seen any proof/papers on it though.
>
> read original ZFS papers.
No, you are making the assertion; provide a link.
Chris
gives single drive random I/O performance.
For reads - true. For writes it probably behaves better than RAID5.
Yes, because as with reads it gives single drive performance. Small writes
on RAID5 give lower than single disk performance.
If you need higher performance, build your pool out
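(For reference, the two layouts being argued about, using the rule of thumb from this thread that each vdev contributes roughly one disk's worth of random I/O; device and pool names are hypothetical, and only one of the two commands would be used:)
zpool create tank raidz da0 da1 da2 da3            # one vdev: ~3 disks of space, ~1 disk of random I/O
zpool create tank mirror da0 da1 mirror da2 da3    # two vdevs: ~2 disks of space, ~2 disks of random I/O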
I've heard this same thing -- every vdev == 1 drive in performance. I've
never seen any proof/papers on it though.
read original ZFS papers.
This is because RAID-Z spreads each block out over all disks, whereas RAID5
(as it is typically configured) puts each block on only one disk. So to
read a block from RAID-Z, all data disks must be involved, vs. for RAID5
only one disk needs to have its head moved.
For other workloads (especially
While RAID-Z is already a king of bad performance,
I don't believe RAID-Z is any worse than RAID5. Do you have any actual
measurements to back up your claim?
it is clearly described even in ZFS papers. Both on reads and writes it
gives single drive random I/O performance.
On 2013-Jan-21 12:12:45 +0100, Wojciech Puchar
wrote:
> That's why I use properly tuned UFS, gmirror, and prefer not to use
> gstripe but have multiple filesystems
When I started using ZFS, I didn't fully trust it so I had a gmirrored
UFS root (including a full src tree). Over time, I found that
Please don't misinterpret this post: ZFS's ability to recover from fairly
catastrophic failures is pretty stellar, but I'm wondering if there can be
from my testing it is exactly the opposite. You have to see the difference
between marketing and reality.
a little room for improvement.
I use RAID
Hi,
On 01/20/13 23:26, Zaphod Beeblebrox wrote:
1) a pause for scrub... such that long scrubs could be paused during
working hours.
While not exactly a pause, wouldn't playing with scrub_delay work here?
vfs.zfs.scrub_delay: Number of ticks to delay scrub
Set this to a high value during wo
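(A sketch of that tuning; the numbers are illustrative only -- the stock default at the time was 4 ticks, if memory serves:)
sysctl vfs.zfs.scrub_delay=20   # throttle the scrub during working hours
sysctl vfs.zfs.scrub_delay=4    # put the default back afterwards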
On 23 August 2011 10:52, Ivan Voras wrote:
>
> I agree but there are at least two things going for making the increase
> anyway:
>
> 1) 2 TB drives cost $80
> 2) Where the space is really important, the person in charge usually knows
> it and can choose a non-default size like 512b fragments.
On 2011-Aug-22 12:45:08 +0200, Ivan Voras wrote:
>It would be suboptimal but only for the slight waste of space that would
>have otherwise been reclaimed if the block or fragment size remained 512
>or 2K. This waste of space is insignificant for the vast majority of
>users and there are no perf
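(The non-default sizes under discussion are chosen at newfs time; a sketch on a hypothetical partition, using the 512-byte fragments mentioned above:)
newfs -b 4096 -f 512 /dev/ada0p2   # 4K blocks with 512-byte fragments (8:1 block/fragment ratio)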
On 19 August 2011 11:15, Tom Evans wrote:
> On Thu, Aug 18, 2011 at 6:50 PM, Yuri wrote:
>> Some of the latest hard drives have logical sectors of 512 bytes when they
>> actually have 4k physical sectors.
> ...
>> Shouldn't UFS and ZFS drivers be able to either read the right sector size
>> from th
On Thu, Aug 18, 2011 at 6:50 PM, Yuri wrote:
> Some of the latest hard drives have logical sectors of 512 bytes when they actually
> have 4k physical sectors. Here is the document describing what to do in such a
> case:
> http://ivoras.net/blog/tree/2011-01-01.freebsd-on-4k-sector-drives.html .
> For UFS: n
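(The workaround that article is usually cited for is the gnop trick: make the provider report 4k sectors long enough for zpool create to choose ashift=12. A sketch -- device and pool names are hypothetical:)
gnop create -S 4096 /dev/ada0
zpool create tank /dev/ada0.nop
zpool export tank
gnop destroy /dev/ada0.nop
zpool import tank                 # the pool keeps ashift=12 without the gnop layer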
Quoting Stefan Esser (from Thu, 05 May 2011 13:04:59 +0200):
Sorry for the follow-up to my own posting, but I noticed that I left
out significant information.
The system is an Athlon64 (X2, but was running with SMP disabled at the
time) in 32-bit mode (i386) with 4GB RAM running 8-STABLE in a
on 07/10/2010 20:33 Andriy Gapon said the following:
>
> A simple, probably incomplete and perhaps incorrect patch for
> ru_inblock/ru_oublock accounting in zfs:
> http://people.freebsd.org/~avg/zfs-ru.diff
I've updated the patch at the same location.
Thanks to "swell.k" for pointing out that the
on 22/09/2010 10:25 Andriy Gapon said the following:
> 2. patch that attempts to implement Jeff's three suggestions; I've tested
> per-CPU cache size adaptive behavior, works well, but haven't tested per-CPU
> cache draining yet:
> http://people.freebsd.org/~avg/uma-2.diff
Now I've fully tested th
on 21/09/2010 19:16 Alan Cox said the following:
> Actually, I think that there is a middle ground between "per-cpu caches" and
> "directly from the VM" that we are missing. When I've looked at the default
> configuration of ZFS (without the extra UMA zones enabled), there is an
> incredible amoun
on 21/09/2010 09:39 Jeff Roberson said the following:
> I'm afraid there is not enough context here for me to know what 'the same
> mechanism' is or what solaris does. Can you elaborate?
This was in my first post:
[[[
There is this good book:
http://books.google.com/books?id=r_cecYD4AKkC&printsec
on 19/09/2010 01:16 Jeff Roberson said the following:
> Additionally we could make a last ditch flush mechanism that runs on each cpu
> in
How would you qualify a "last ditch" trigger?
Would this be called from the "standard" vm_lowmem hook or would there be some extra
check for even more severe memo
on 19/09/2010 11:27 Jeff Roberson said the following:
> On Sun, 19 Sep 2010, Andriy Gapon wrote:
>
>> on 19/09/2010 01:16 Jeff Roberson said the following:
>>> Additionally we could make a last ditch flush mechanism that runs on each
>>> cpu in
>>> turn and flushes some or all of the buckets in p
On 19 Sep 2010, at 09:21, Andriy Gapon wrote:
>> I believe the combination of these approaches would significantly solve the
>> problem and should be relatively little new code. It should also preserve
>> the
>> adaptable nature of the system without penalizing resource heavy systems. I
>> wou
on 19/09/2010 11:27 Jeff Roberson said the following:
> I don't like this because even with very large buffers you can still have high
> enough turnover to require per-cpu caching. Kip specifically added UMA
> support
> to address this issue in zfs. If you have allocations which don't require
>
on 19/09/2010 01:16 Jeff Roberson said the following:
> Not specifically in reaction to Robert's comment but I would like to add my
> thoughts to this notion of resource balancing in buckets. I really prefer not
> to do any specific per-zone tuning except in extreme cases. This is because
> quite
> FWIW, kvm_read taking the second argument as unsigned long instead of
> void* seems a bit inconsistent:
I think it was done on purpose, since an address in the kernel address space
has nothing to do with pointers for mere userland mortals. We shouldn't
bother the compiler with aliasing and other stuff in c
On 18 Sep 2010, at 13:35, Fabian Keil wrote:
> Doesn't build for me on amd64:
>
> f...@r500 /usr/src/tools/tools/umastat $make
> Warning: Object directory not changed from original
> /usr/src/tools/tools/umastat
> cc -O2 -pipe -fno-omit-frame-pointer -std=gnu99 -fstack-protector
> -Wsystem-he
on 18/09/2010 14:30 Robert N. M. Watson said the following:
> Those issues are closely related, and in particular, wanted to point Andre at
> umastat since he's probably not aware of it.. :-)
I didn't know about the tool either, so thanks!
But I perceived the issues as quite the opposite: small items vs
on 18/09/2010 14:23 Robert Watson said the following:
> I've been keeping a vague eye out for this over the last few years, and
> haven't
> spotted many problems in production machines I've inspected. You can use the
> umastat tool in the tools tree to look at the distribution of memory over
> bu
On Fri, 17 Sep 2010, Andre Oppermann wrote:
Although keeping free items around improves performance, it does consume
memory too. And the fact that that memory is not freed on lowmem condition
makes the situation worse.
Interesting. We may run into related issues with excessive mbuf (cluste
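(Besides umastat from the tools tree, vmstat(8) gives a quick view of how many free items each UMA zone is holding on to; the zone names below assume ZFS is using UMA for its buffers, which is exactly the knob being discussed:)
vmstat -z | head -n 1                         # column headers: ITEM, SIZE, LIMIT, USED, FREE, ...
vmstat -z | egrep 'zio_buf|arc_buf|dnode_t'   # cached vs. used items in the ZFS-related zones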
On 17.09.2010 10:14, Andriy Gapon wrote:
I've been investigating the interaction between zfs and uma for a while.
You might remember that there is noticeable fragmentation in zfs uma zones
when uma use is not enabled for actual data/metadata buffers.
I also noticed that when uma use is enabled fo
on 17/09/2010 15:30 Andre Oppermann said the following:
> Having a general solution for that is appreciated. Maybe the size
> of the free per-cpu buckets should be specified when setting up the
> UMA zone. For certain frequently re-used elements we may want to
> cache more, for others less.
This kind
On 08/23/2010 22:10, Artem Belevich wrote:
> First prepare the data.
> * You'll need some files totalling around the amount of physical
> memory on your box. Multiple copies of /usr/src should do the trick.
> * Place one copy on UFS filesystem and another on ZFS
>
> Experiment #1:
> * Prime ARC b
Could you try following experiments before and after the patch while
monitoring kstat.zfs.misc.arcstats.size and
vm.stats.vm.v_inactive_count.
First prepare the data.
* You'll need some files totalling around the amount of physical
memory on your box. Multiple copies of /usr/src should do the tri
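(A sketch of watching the two counters named above side by side while the experiment runs; interval and formatting are arbitrary:)
while :; do
    date '+%T'
    sysctl -n kstat.zfs.misc.arcstats.size vm.stats.vm.v_inactive_count
    sleep 10
done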
On 08/23/2010 16:42, jhell wrote:
> On 08/23/2010 03:28, Artem Belevich wrote:
>> Can anyone test the patch on a system with mix of UFS/ZFS filesystems
>> and see if the change mitigates or solves the issue with inactive
>> memory excessively backpressuring ARC.
>
> I have a system currently patch
On 08/23/2010 03:28, Artem Belevich wrote:
> Can anyone test the patch on a system with mix of UFS/ZFS filesystems
> and see if the change mitigates or solves the issue with inactive
> memory excessively backpressuring ARC.
I have a system currently patched up to ZFSv15 and mm@'s metaslab patch
ru
on 23/08/2010 10:28 Artem Belevich said the following:
> If we could also deal with the zone fragmentation issue you've written about in
> another thread, that should bring ZFS even closer to being usable
> without shaman-style (the one with lots of muttering, swearing and
> dancing around) tuning.
>
> Actu
Ah! After re-reading your first email, I think I've finally got
what you're saying -- with your change ARC would only start giving up
memory when pagedaemon is awake. Presumably once it's awake it will
also run through the inactive list, pushing some of it to cache. On the
other hand the existing code vo
on 23/08/2010 02:52 Artem Belevich said the following:
> Do you by any chance have a graph showing kstat.zfs.misc.arcstats.size
> behavior in addition to the stuff included on your graphs now?
Yes, I do, and not by chance :-)
> All I
> can tell from your graphs is that v_free_count+v_cache_cou
Do you by any chance have a graph showing kstat.zfs.misc.arcstats.size
behavior in addition to the stuff included on your graphs now? All I
can tell from your graphs is that v_free_count+v_cache_count shifted a
bit lower relative to v_free_target+v_cache_min. It would be
interesting to see what ef
2009/12/15 Ivan Voras :
> I have tried reducing the priority of ZFS taskqueues before, but only to
> PRIBIO, not below it - not much effect wrt "pauses".
I was testing with the thread at a priority as low as PUSER (with the
original pjd@ patch) and it was actually performing better than the
current solu
2009/12/15 Ivan Voras :
> The context of this post is file servers running FreeBSD 8 and ZFS with
> compressed file systems on low-end hardware, or actually high-end hardware
> on VMWare ESX 3.5 and 4, which kind of makes it low-end as far as storage is
> concerned. The servers are standby backup m
On Sat, Sep 12, 2009 at 01:49:36PM +0200, Giulio Ferro wrote:
[...]
> Now I try to do the same on a zfs partition on the same machine
> This is what I see with ls
> ---
> ls -la
> total 4
> drwxrwx--- 3 www www 4 Sep 12 13:43 .
>
Nate Eldredge wrote:
On SysV, you can get BSD-type behavior by setting the sgid bit on the
directory in question, e.g. "chmod g+s dir". Then new files will
inherit their group from the directory. I suspect this will work on
FreeBSD/ZFS too even though "chmod g+s" on a directory is undocumente
Adrian Penisoara wrote:
Is the ownership of the new file decided by the open() syscall or by
the filesystem layer ?
On a superficial look through the sources it appears to be a filesystem
layer choice...
Which of the following would then be the best option (also taking POLA
into account):
* leave t
On Wed, Sep 16, 2009 at 9:00 AM, Christoph Hellwig wrote:
> Btw, on Linux all the common filesystems support the SysV behaviour
> by default but have a mount option bsdgroups/grpid that turns on the BSD
> behaviour. I would recommend you do the same just with reversed signs
> on FreeBSD. Having
On Wed, Sep 16, 2009 at 12:36:57PM +0200, Adrian Penisoara wrote:
> Which of the following would then be the best option (also taking POLA
> into account):
> * leave things are they are
> * make ZFS under FreeBSD behave the way open(2) describes
> * have a new ZFS property govern the behavior an
On Tue, Sep 15, 2009 at 03:18:41PM -0700, Nate Eldredge wrote:
> >What I ask now is: is this a bug or a feature?
>
> Both, I think :)
Or none, just different implementation of the same open() function
complying with the Open Group Base Specifications ;-)
Quoting
http://www.opengroup.org/onlinep
Hi,
On Wed, Sep 16, 2009 at 12:18 AM, Nate Eldredge
wrote:
[...]
> [On UFS, files are created with the same group as the directory that
> contains them. On ZFS, they are created with the primary group of the user
> who creates them.]
>
>> What I ask now is: is this a bug or a feature?
>
> Both,
On Sat, 12 Sep 2009, Giulio Ferro wrote:
I don't know if this is the correct list to discuss this matter, if not
I apologize in advance.
freebsd-questions might have been better, but I don't think you're too far
off. It wasn't necessary to post three times though :)
[On UFS, files are crea
On 09/12/2009 04:49 AM, Giulio Ferro wrote:
[...]
> How can I achieve my goal in ZFS, that is allowing members of the same
> group to operate with the files / dirs they create?
Does setting the setgid bit on the directory have any effect?
--
Benjamin Lee
http://www.b1c1l1.com/
krad wrote:
> There was a change between zfs v7 and v13. In 7, when you did a zfs list it
> would show snapshots; after 13 it didn't unless you supplied the switch. It
> still catches me out as we have a right mix of zfs versions at work, so don't
> feel too bad 8)
Try:
# zpool set listsnapshots=on
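(The same thing spelled out with a hypothetical pool name, plus the explicit switch referred to earlier in the thread:)
zpool set listsnapshots=on tank   # make a plain 'zfs list' include snapshots again
zfs list -t snapshot              # or ask for snapshots explicitly, independent of the property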
--
WBR