Re: ZFS Boot Menu

2013-10-08 Thread Volodymyr Kostyrko
06.10.2013 08:54, Teske, Devin wrote: On Sep 30, 2013, at 6:20 AM, Volodymyr Kostyrko wrote: 29.09.2013 00:30, Teske, Devin wrote: Interested in feedback, but moreover I would like to see who is interested in tackling this with me? I can't do it alone... I at least need testers who will prov

Re: ZFS Boot Menu

2013-10-05 Thread Teske, Devin
On Sep 30, 2013, at 6:20 AM, Volodymyr Kostyrko wrote: > 29.09.2013 00:30, Teske, Devin wrote: >> Interested in feedback, but moreover I would like to see who is >> interested in tackling this with me? I can't do it alone... I at least >> need testers who will provide feedback and edge-case test

Re: ZFS Boot Menu

2013-09-30 Thread Volodymyr Kostyrko
29.09.2013 00:30, Teske, Devin wrote: Interested in feedback, but moreover I would like to see who is interested in tackling this with me? I can't do it alone... I at least need testers who will provide feedback and edge-case testing. Sign me in, I'm not fluent with Forth but testing something

Re: ZFS Boot Menu

2013-09-30 Thread Lars Engels
On 28.09.2013 23:30, Teske, Devin wrote: In my recent interview on bsdnow.tv, I was pinged on BEs in Forth. I'd like to revisit this. Back on Sept 20th, 2012, I posted some pics demonstrating exactly what the code that was in HEAD (at the time) was/is capable of. These three pictures (posted the

Re: Zfs encryption property for freebsd 8.3

2013-09-03 Thread Alan Somers
On Tue, Sep 3, 2013 at 9:01 AM, Florent Peterschmitt wrote: > On 03/09/2013 16:53, Alan Somers wrote: >> GELI is full-disk encryption. It's far superior to ZFS encryption. > > Yup, but is there a possibility to encrypt a ZFS volume (not a whole > pool) with a separate GELI partition? You mean

Re: Zfs encryption property for freebsd 8.3

2013-09-03 Thread Florent Peterschmitt
On 03/09/2013 16:53, Alan Somers wrote: > GELI is full-disk encryption. It's far superior to ZFS encryption. Yup, but is there a possibility to encrypt a ZFS volume (not a whole pool) with a separate GELI partition? Also, in-ZFS encryption would be a nice thing if it could work like an LVM/LU
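
One approach that comes up for encrypting a single volume rather than the whole pool is layering GELI on top of a zvol. A rough sketch, with pool, volume and mount point names as placeholders:

  # create a zvol to act as the backing device for the encrypted filesystem
  zfs create -V 20g tank/secret
  # initialize and attach GELI on the zvol (prompts for a passphrase)
  geli init -s 4096 /dev/zvol/tank/secret
  geli attach /dev/zvol/tank/secret
  # put UFS (or even another pool) on the attached .eli provider
  newfs /dev/zvol/tank/secret.eli
  mount /dev/zvol/tank/secret.eli /mnt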

Re: Zfs encryption property for freebsd 8.3

2013-09-03 Thread Alan Somers
On Tue, Sep 3, 2013 at 6:22 AM, Florent Peterschmitt wrote: > On 03/09/2013 14:14, Emre Çamalan wrote: >> Hi, >> I want to encrypt some disks on my server with the ZFS encryption property but it >> is not available. > > "That would require ZFS v30. As far as I am aware Oracle has not > released the

Re: Zfs encryption property for freebsd 8.3

2013-09-03 Thread Florent Peterschmitt
On 03/09/2013 14:14, Emre Çamalan wrote: > Hi, > I want to encrypt some disks on my server with the ZFS encryption property but it > is not available. "That would require ZFS v30. As far as I am aware Oracle has not released the code under CDDL." From http://forums.freebsd.org/showthread.php?t=30

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-25 Thread Wojciech Puchar
here is my real world production example of users mail as well as documents. /dev/mirror/home1.eli      2788 1545  1243    55% 1941057 20981181    8%   /home Not the same data, I imagine. A mix. 90% Mailboxes and user data (documents, pictures), rest are some .tar.gz backups.

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-24 Thread Nikolay Denev
On Jan 24, 2013, at 4:24 PM, Wojciech Puchar wrote: >> > Except it is on paper reliability. This "on paper" reliability saved my ass numerous times. For example I had one home NAS server machine with flaky SATA controller that would not detect one of the four drives from time to time on reboo

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-24 Thread Zaphod Beeblebrox
On Thu, Jan 24, 2013 at 2:26 PM, Wojciech Puchar < woj...@wojtek.tensor.gdynia.pl> wrote: > There are 3,236,316 files summing to 97,500,008,691 bytes. That puts the >> "average" file at 30,127 bytes. But for the full breakdown: >> > > quite low. what do you store. > Apparently you're not really

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-24 Thread Wojciech Puchar
There are 3,236,316 files summing to 97,500,008,691 bytes. That puts the "average" file at 30,127 bytes. But for the full breakdown: quite low. what do you store. here is my real world production example of users mail as well as documents. /dev/mirror/home1.eli 2788 1545 1243 55%

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-24 Thread Wojciech Puchar
So far I've not lost a single ZFS pool or any data stored. so far my house wasn't robbed.

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-24 Thread Zaphod Beeblebrox
Ok... here's the existing data: There are 3,236,316 files summing to 97,500,008,691 bytes. That puts the "average" file at 30,127 bytes. But for the full breakdown: 512 : 7758 1024 : 139046 2048 : 1468904 4096 : 325375 8192 : 492399 16384 : 324728 32768 : 263210 65536 : 102407 131072 : 43046 26

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-24 Thread Adam Nowacki
On 2013-01-24 15:45, Zaphod Beeblebrox wrote: Ok... so my question then would be... what of the small files. If I write several small files at once, does the transaction use a record, or does each file need to use a record? Additionally, if small files use sub-records, when you delete that file

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-24 Thread Adam Nowacki
On 2013-01-24 15:24, Wojciech Puchar wrote: For me the reliability ZFS offers is far more important than pure performance. Except it is on paper reliability. This "on paper" reliability in practice saved a 20TB pool. See one of my previous emails. Any other filesystem or hardware/software rai

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-24 Thread Wojciech Puchar
several small files at once, does the transaction use a record, or does each file need to use a record? Additionally, if small files use sub-records, when you delete that file, does the sub-record get moved or just wasted (until the record is completely free)? writes of small files are always g

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-24 Thread Zaphod Beeblebrox
Wow! OK. It sounds like you (or someone like you) can answer some of my burning questions about ZFS. On Thu, Jan 24, 2013 at 8:12 AM, Adam Nowacki wrote: > Let's assume a 5 disk raidz1 vdev with ashift=9 (512 byte sectors). > > A worst case scenario could happen if your random i/o workload was
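
For reference, the configuration assumed in that scenario can be recreated and inspected as follows (pool and disk names are placeholders; zdb reports the ashift the vdev was actually created with):

  # five-disk raidz1; on 512-byte-sector disks this yields ashift=9
  zpool create tank raidz1 da0 da1 da2 da3 da4
  # confirm the vdev's ashift
  zdb -C tank | grep ashift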

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-24 Thread Wojciech Puchar
then stored on a different disk. You could think of it as a regular RAID-5 with stripe size of 32768 bytes. PostgreSQL uses 8192 byte pages that fit evenly both into ZFS record size and column size. Each page access requires only a single disk read. Random i/o performance here should be 5 time
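
The 8192-byte-page point maps onto the dataset recordsize property; a minimal sketch, with the dataset name assumed:

  # match PostgreSQL's 8192-byte page size; only affects newly written files
  zfs create tank/pgdata
  zfs set recordsize=8k tank/pgdata
  zfs get recordsize tank/pgdata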

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-24 Thread Adam Nowacki
On 2013-01-23 21:22, Wojciech Puchar wrote: While RAID-Z is already a king of bad performance, I don't believe RAID-Z is any worse than RAID5. Do you have any actual measurements to back up your claim? it is clearly described even in ZFS papers. Both on reads and writes it gives single drive

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread matt
On 01/23/13 14:27, Wojciech Puchar wrote: >> > > both "works". For today's trend of solving everything by more hardware > ZFS may even have "enough" performance. > > But still it is dangerous for the reasons I explained, as well as it > promotes bad setups and layouts like making a single filesystem out

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Wojciech Puchar
even you need normal performance use gmirror and UFS I've no objection. If it works for you -- go for it. both "works". For today's trend of solving everything by more hardware ZFS may even have "enough" performance. But still it is dangerous for the reasons I explained, as well as it promot

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Wojciech Puchar
associated with mirroring. Thanks for the link, but I could have done that; I am attempting to explain to Wojciech that his habit of making bold assertions and as you can see it is not a bold assertion, just you use something without even reading its docs. Not to mention doing any more resea

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Chris Rees
On 23 Jan 2013 21:45, "Michel Talon" wrote: > > On Wed, 23 Jan 2013 14:26:43 -0600, Chris Rees wrote: > > > > > So we have to take your word for it? > > Provide a link if you're going to make assertions, or they're no more > > than > > your own opinion. > > I've heard this same thing -- every vde

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Nikolay Denev
On Jan 23, 2013, at 11:09 PM, Mark Felder wrote: > On Wed, 23 Jan 2013 14:26:43 -0600, Chris Rees wrote: > >> >> So we have to take your word for it? >> Provide a link if you're going to make assertions, or they're no more than >> your own opinion. > > I've heard this same thing -- every vde

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Artem Belevich
On Wed, Jan 23, 2013 at 1:25 PM, Wojciech Puchar wrote: >>> gives single drive random I/O performance. >> >> >> For reads - true. For writes it probably behaves better than RAID5 > > > yes, because as with reads it gives single drive performance. small writes > on RAID5 give lower than single d

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Michel Talon
On Wed, 23 Jan 2013 14:26:43 -0600, Chris Rees wrote: > > So we have to take your word for it? > Provide a link if you're going to make assertions, or they're no more > than > your own opinion. I've heard this same thing -- every vdev == 1 drive in performance. I've never seen any proof/pape

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Wojciech Puchar
"1 drive in performance" only applies to number of random i/o operations vdev can perform. You still get increased throughput. I.e. 5-drive RAIDZ will have 4x bandwidth of individual disks in vdev, but unless your work is serving movies it doesn't matter.

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Chris Rees
On 23 January 2013 21:24, Wojciech Puchar wrote: >> >> I've heard this same thing -- every vdev == 1 drive in performance. I've >> never seen any proof/papers on it though. > > read original ZFS papers. No, you are making the assertion, provide a link. Chris _

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Wojciech Puchar
gives single drive random I/O performance. For reads - true. For writes it probably behaves better than RAID5 yes, because as with reads it gives single drive performance. small writes on RAID5 give lower than single disk performance. If you need higher performance, build your pool out

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Wojciech Puchar
I've heard this same thing -- every vdev == 1 drive in performance. I've never seen any proof/papers on it though. read original ZFS papers.

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Artem Belevich
On Wed, Jan 23, 2013 at 1:09 PM, Mark Felder wrote: > On Wed, 23 Jan 2013 14:26:43 -0600, Chris Rees wrote: > >> >> So we have to take your word for it? >> Provide a link if you're going to make assertions, or they're no more than >> your own opinion. > > > I've heard this same thing -- every vde

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Artem Belevich
On Wed, Jan 23, 2013 at 12:22 PM, Wojciech Puchar wrote: >>> While RAID-Z is already a king of bad performance, >> >> >> I don't believe RAID-Z is any worse than RAID5. Do you have any actual >> measurements to back up your claim? > > > it is clearly described even in ZFS papers. Both on reads an

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Mark Felder
On Wed, 23 Jan 2013 14:26:43 -0600, Chris Rees wrote: So we have to take your word for it? Provide a link if you're going to make assertions, or they're no more than your own opinion. I've heard this same thing -- every vdev == 1 drive in performance. I've never seen any proof/papers on

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Chris Rees
On 23 Jan 2013 20:23, "Wojciech Puchar" wrote: >>> >>> While RAID-Z is already a king of bad performance, >> >> >> I don't believe RAID-Z is any worse than RAID5. Do you have any actual >> measurements to back up your claim? > > > it is clearly described even in ZFS papers. Both on reads and writ

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Wojciech Puchar
This is because RAID-Z spreads each block out over all disks, whereas RAID5 (as it is typically configured) puts each block on only one disk. So to read a block from RAID-Z, all data disks must be involved, vs. for RAID5 only one disk needs to have its head moved. For other workloads (especially

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Wojciech Puchar
While RAID-Z is already a king of bad performance, I don't believe RAID-Z is any worse than RAID5. Do you have any actual measurements to back up your claim? it is clearly described even in ZFS papers. Both on reads and writes it gives single drive random I/O performance.

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-22 Thread Matthew Ahrens
On Mon, Jan 21, 2013 at 11:36 PM, Peter Jeremy wrote: > On 2013-Jan-21 12:12:45 +0100, Wojciech Puchar < woj...@wojtek.tensor.gdynia.pl> wrote: >>While RAID-Z is already a king of bad performance, > > I don't believe RAID-Z is any worse than RAID5. Do you have any actual > measurements to back up

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-21 Thread Peter Jeremy
On 2013-Jan-21 12:12:45 +0100, Wojciech Puchar wrote: >That's why i use properly tuned UFS, gmirror, and prefer not to use >gstripe but have multiple filesystems When I started using ZFS, I didn't fully trust it so I had a gmirrored UFS root (including a full src tree). Over time, I found that

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-21 Thread Wojciech Puchar
Please don't misinterpret this post: ZFS's ability to recover from fairly catastrophic failures is pretty stellar, but I'm wondering if there can be from my testing it is exactly opposite. You have to see a difference between marketing and reality. a little room for improvement. I use RAID

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-20 Thread Attila Nagy
Hi, On 01/20/13 23:26, Zaphod Beeblebrox wrote: 1) a pause for scrub... such that long scrubs could be paused during working hours. While not exactly a pause, wouldn't playing with scrub_delay work here? vfs.zfs.scrub_delay: Number of ticks to delay scrub Set this to a high value during wo
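
Since vfs.zfs.scrub_delay is a plain sysctl, the throttling can be changed on the fly, e.g. from cron; the values below are illustrative only, and the default may differ between releases:

  # inspect the current value (ticks of delay inserted between scrub I/Os)
  sysctl vfs.zfs.scrub_delay
  # slow the scrub down during working hours
  sysctl vfs.zfs.scrub_delay=20
  # restore the lower delay overnight (4 is a common default)
  sysctl vfs.zfs.scrub_delay=4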

Re: ZFS installs on HD with 4k physical blocks without any warning as on 512 block size device

2011-08-23 Thread Ivan Voras
On 23/08/2011 11:59, Aled Morris wrote: On 23 August 2011 10:52, Ivan Voras wrote: I agree but there are at least two things going for making the increase anyway: 1) 2 TB drives cost $80 2) Where the space is really important, the person in charge usually knows it and can choose a non-defaul

Re: ZFS installs on HD with 4k physical blocks without any warning as on 512 block size device

2011-08-23 Thread Aled Morris
On 23 August 2011 10:52, Ivan Voras wrote: > > I agree but there are at least two things going for making the increase > anyway: > > 1) 2 TB drives cost $80 > 2) Where the space is really important, the person in charge usually knows > it and can choose a non-default size like 512b fragments. > >

Re: ZFS installs on HD with 4k physical blocks without any warning as on 512 block size device

2011-08-23 Thread Ivan Voras
On 23/08/2011 03:23, Peter Jeremy wrote: On 2011-Aug-22 12:45:08 +0200, Ivan Voras wrote: It would be suboptimal but only for the slight waste of space that would have otherwise been reclaimed if the block or fragment size remained 512 or 2K. This waste of space is insignificant for the vast ma

Re: ZFS installs on HD with 4k physical blocks without any warning as on 512 block size device

2011-08-22 Thread Peter Jeremy
On 2011-Aug-22 12:45:08 +0200, Ivan Voras wrote: >It would be suboptimal but only for the slight waste of space that would >have otherwise been reclaimed if the block or fragment size remained 512 >or 2K. This waste of space is insignificant for the vast majority of >users and there are no perf

Re: ZFS installs on HD with 4k physical blocks without any warning as on 512 block size device

2011-08-22 Thread Ivan Voras
On 19/08/2011 14:21, Aled Morris wrote: On 19 August 2011 11:15, Tom Evans wrote: On Thu, Aug 18, 2011 at 6:50 PM, Yuri wrote: Some latest hard drives have logical sectors of 512 byte when they actually have 4k physical sectors. ... Shouldn't UFS and ZFS drivers be able to either read

Re: ZFS installs on HD with 4k physical blocks without any warning as on 512 block size device

2011-08-19 Thread Aled Morris
On 19 August 2011 11:15, Tom Evans wrote: > On Thu, Aug 18, 2011 at 6:50 PM, Yuri wrote: > > Some latest hard drives have logical sectors of 512 byte when they > actually > > have 4k physical sectors. > ... > Shouldn't UFS and ZFS drivers be able to either read the right sector size > > from th

Re: ZFS installs on HD with 4k physical blocks without any warning as on 512 block size device

2011-08-19 Thread Pieter de Goeje
On Friday, August 19, 2011 12:15:30 PM Tom Evans wrote: > On Thu, Aug 18, 2011 at 6:50 PM, Yuri wrote: > > Some latest hard drives have logical sectors of 512 byte when they > > actually have 4k physical sectors. Here is the document describing what > > to do in such case: > > http://ivoras.net/bl

Re: ZFS installs on HD with 4k physical blocks without any warning as on 512 block size device

2011-08-19 Thread Tom Evans
On Thu, Aug 18, 2011 at 6:50 PM, Yuri wrote: > Some latest hard drives have logical sectors of 512 byte when they actually > have 4k physical sectors. Here is the document describing what to do in such > case: > http://ivoras.net/blog/tree/2011-01-01.freebsd-on-4k-sector-drives.html . > For UFS: n
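
Condensing the approach from the blog post referenced above, under the assumption of a whole-disk ZFS pool on ada0 and a UFS partition on ada0p2 (device names and block/fragment sizes are placeholders):

  # UFS: tell newfs the real sector size
  newfs -S 4096 -b 32768 -f 4096 /dev/ada0p2

  # ZFS: create the pool through a temporary 4k gnop provider so it gets ashift=12
  gnop create -S 4096 /dev/ada0
  zpool create tank /dev/ada0.nop
  zpool export tank
  gnop destroy /dev/ada0.nop
  # the pool keeps ashift=12 when re-imported on the bare device
  zpool import tank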

Re: ZFS pool lost

2011-05-06 Thread Alexander Leidinger
Quoting Stefan Esser (from Thu, 05 May 2011 13:04:59 +0200): Sorry for the follow-up to my own posting, but I noticed, that I left out significant information. The system is an Athlon64 (X2, but was running with SMP disabled at the time) in 32 bit mode (i386) with 4GB RAM running 8-STABLE in a

Re: zfs: ru_inblock/ru_oublock accounting

2010-10-08 Thread Andriy Gapon
on 07/10/2010 20:33 Andriy Gapon said the following: > > A simple, probably incomplete and perhaps incorrect patch for > ru_inblock/ru_oublock accounting in zfs: > http://people.freebsd.org/~avg/zfs-ru.diff I've updated the patch at the same location. Thanks to "swell.k" for pointing out that the

Re: zfs + uma

2010-09-24 Thread Andriy Gapon
on 22/09/2010 10:25 Andriy Gapon said the following: > 2. patch that attempts to implement Jeff's three suggestions; I've tested > per-CPU cache size adaptive behavior, works well, but haven't tested per-CPU > cache draining yet: > http://people.freebsd.org/~avg/uma-2.diff Now I've fully tested th

Re: zfs + uma

2010-09-22 Thread Andriy Gapon
on 21/09/2010 19:16 Alan Cox said the following: > Actually, I think that there is a middle ground between "per-cpu caches" and > "directly from the VM" that we are missing. When I've looked at the default > configuration of ZFS (without the extra UMA zones enabled), there is an > incredible amoun

Re: zfs + uma

2010-09-21 Thread Alan Cox
On Tue, Sep 21, 2010 at 1:39 AM, Jeff Roberson wrote: > On Tue, 21 Sep 2010, Andriy Gapon wrote: > > on 19/09/2010 11:42 Andriy Gapon said the following: >> >>> on 19/09/2010 11:27 Jeff Roberson said the following: >>> I don't like this because even with very large buffers you can still

Re: zfs + uma

2010-09-21 Thread Jeff Roberson
On Tue, 21 Sep 2010, Andriy Gapon wrote: on 19/09/2010 11:42 Andriy Gapon said the following: on 19/09/2010 11:27 Jeff Roberson said the following: I don't like this because even with very large buffers you can still have high enough turnover to require per-cpu caching. Kip specifically added

Re: zfs + uma

2010-09-21 Thread Jeff Roberson
On Tue, 21 Sep 2010, Andriy Gapon wrote: on 19/09/2010 01:16 Jeff Roberson said the following: Additionally we could make a last ditch flush mechanism that runs on each cpu in How would you qualify a "last ditch" trigger? Would this be called from "standard" vm_lowmem look or would there be s

Re: zfs + uma

2010-09-21 Thread Andriy Gapon
on 21/09/2010 09:35 Jeff Roberson said the following: > On Tue, 21 Sep 2010, Andriy Gapon wrote: > >> on 19/09/2010 01:16 Jeff Roberson said the following: >>> Additionally we could make a last ditch flush mechanism that runs on each >>> cpu in >> >> How would you qualify a "last ditch" trigger?

Re: zfs + uma

2010-09-21 Thread Andriy Gapon
on 21/09/2010 09:39 Jeff Roberson said the following: > I'm afraid there is not enough context here for me to know what 'the same > mechanism' is or what solaris does. Can you elaborate? This was in my first post: [[[ There is this good book: http://books.google.com/books?id=r_cecYD4AKkC&printsec

Re: zfs + uma

2010-09-20 Thread Andriy Gapon
on 19/09/2010 01:16 Jeff Roberson said the following: > Additionally we could make a last ditch flush mechanism that runs on each cpu > in How would you qualify a "last ditch" trigger? Would this be called from "standard" vm_lowmem look or would there be some extra check for even more severe memo

Re: zfs + uma

2010-09-20 Thread Andriy Gapon
on 19/09/2010 11:42 Andriy Gapon said the following: > on 19/09/2010 11:27 Jeff Roberson said the following: >> I don't like this because even with very large buffers you can still have >> high >> enough turnover to require per-cpu caching. Kip specifically added UMA >> support >> to address thi

Re: zfs + uma

2010-09-20 Thread Andriy Gapon
on 19/09/2010 11:27 Jeff Roberson said the following: > On Sun, 19 Sep 2010, Andriy Gapon wrote: > >> on 19/09/2010 01:16 Jeff Roberson said the following: >>> Additionally we could make a last ditch flush mechanism that runs on each >>> cpu in >>> turn and flushes some or all of the buckets in p

Re: zfs + uma

2010-09-19 Thread Robert N. M. Watson
On 19 Sep 2010, at 09:42, Andriy Gapon wrote: > on 19/09/2010 11:27 Jeff Roberson said the following: >> I don't like this because even with very large buffers you can still have >> high >> enough turnover to require per-cpu caching. Kip specifically added UMA >> support >> to address this iss

Re: zfs + uma

2010-09-19 Thread Robert N. M. Watson
On 19 Sep 2010, at 09:21, Andriy Gapon wrote: >> I believe the combination of these approaches would significantly solve the >> problem and should be relatively little new code. It should also preserve >> the >> adaptable nature of the system without penalizing resource heavy systems. I >> wou

Re: zfs + uma

2010-09-19 Thread Jeff Roberson
On Sun, 19 Sep 2010, Andriy Gapon wrote: on 19/09/2010 01:16 Jeff Roberson said the following: Not specifically in reaction to Robert's comment but I would like to add my thoughts to this notion of resource balancing in buckets. I really prefer not to do any specific per-zone tuning except in

Re: zfs + uma

2010-09-19 Thread Andriy Gapon
on 19/09/2010 11:27 Jeff Roberson said the following: > I don't like this because even with very large buffers you can still have high > enough turnover to require per-cpu caching. Kip specifically added UMA > support > to address this issue in zfs. If you have allocations which don't require >

Re: zfs + uma

2010-09-19 Thread Andriy Gapon
on 19/09/2010 01:16 Jeff Roberson said the following: > Not specifically in reaction to Robert's comment but I would like to add my > thoughts to this notion of resource balancing in buckets. I really prefer not > to do any specific per-zone tuning except in extreme cases. This is because > quite

Re: zfs + uma

2010-09-18 Thread Jeff Roberson
On Sat, 18 Sep 2010, Robert Watson wrote: On Fri, 17 Sep 2010, Andre Oppermann wrote: Although keeping free items around improves performance, it does consume memory too. And the fact that that memory is not freed on lowmem condition makes the situation worse. Interesting. We may run int

Re: zfs + uma

2010-09-18 Thread Marcin Cieslak
> FWIW, kvm_read taking the second argument as unsigned long instead of > void* seems a bit inconsistent: I think it was done on purpose, since an address in the kernel address space has nothing to do with pointers for mere userland mortals. We shouldn't bother the compiler with aliasing and other stuff in c

Re: zfs + uma

2010-09-18 Thread pluknet
On 18 September 2010 17:52, Robert N. M. Watson wrote: > > On 18 Sep 2010, at 13:35, Fabian Keil wrote: > >> Doesn't build for me on amd64: >> >> f...@r500 /usr/src/tools/tools/umastat $make >> Warning: Object directory not changed from original >> /usr/src/tools/tools/umastat >> cc -O2 -pipe  -f

Re: zfs + uma

2010-09-18 Thread Garrett Cooper
On Sat, Sep 18, 2010 at 6:52 AM, Robert N. M. Watson wrote: > > On 18 Sep 2010, at 13:35, Fabian Keil wrote: > >> Doesn't build for me on amd64: >> >> f...@r500 /usr/src/tools/tools/umastat $make >> Warning: Object directory not changed from original >> /usr/src/tools/tools/umastat >> cc -O2 -pip

Re: zfs + uma

2010-09-18 Thread Robert N. M. Watson
On 18 Sep 2010, at 13:35, Fabian Keil wrote: > Doesn't build for me on amd64: > > f...@r500 /usr/src/tools/tools/umastat $make > Warning: Object directory not changed from original > /usr/src/tools/tools/umastat > cc -O2 -pipe -fno-omit-frame-pointer -std=gnu99 -fstack-protector > -Wsystem-he

Re: zfs + uma

2010-09-18 Thread Andriy Gapon
on 18/09/2010 14:30 Robert N. M. Watson said the following: > Those issues are closely related, and in particular, wanted to point Andre at > umastat since he's probably not aware of it.. :-) I didn't know about the tool too, so thanks! But I perceived the issues as quite opposite: small items vs

Re: zfs + uma

2010-09-18 Thread Fabian Keil
Robert Watson wrote: > On Fri, 17 Sep 2010, Andre Oppermann wrote: > > >> Although keeping free items around improves performance, it does consume > >> memory too. And the fact that that memory is not freed on lowmem > >> condition > >> makes the situation worse. > > > > Interesting. We may

Re: zfs + uma

2010-09-18 Thread Robert N. M. Watson
On 18 Sep 2010, at 12:27, Andriy Gapon wrote: > on 18/09/2010 14:23 Robert Watson said the following: >> I've been keeping a vague eye out for this over the last few years, and >> haven't >> spotted many problems in production machines I've inspected. You can use the >> umastat tool in the tool

Re: zfs + uma

2010-09-18 Thread Andriy Gapon
on 18/09/2010 14:23 Robert Watson said the following: > I've been keeping a vague eye out for this over the last few years, and > haven't > spotted many problems in production machines I've inspected. You can use the > umastat tool in the tools tree to look at the distribution of memory over > bu

Re: zfs + uma

2010-09-18 Thread Robert Watson
On Fri, 17 Sep 2010, Andre Oppermann wrote: Although keeping free items around improves performance, it does consume memory too. And the fact that that memory is not freed on lowmem condition makes the situation worse. Interesting. We may run into related issues with excessive mbuf (cluste

Re: zfs + uma

2010-09-17 Thread Andre Oppermann
On 17.09.2010 10:14, Andriy Gapon wrote: I've been investigating interaction between zfs and uma for a while. You might remember that there is a noticeable fragmentation in zfs uma zones when uma use is not enabled for actual data/metadata buffers. I also noticed that when uma use is enabled fo

Re: zfs + uma

2010-09-17 Thread Andriy Gapon
on 17/09/2010 15:30 Andre Oppermann said the following: > Having a general solutions for that is appreciated. Maybe the size > of the free per-cpu buckets should be specified when setting up the > UMA zone. Of certain frequently re-used elements we may want to > cache more, other less. This kind

Re: ZFS arc_reclaim_needed: better cooperation with pagedaemon

2010-08-23 Thread jhell
On 08/23/2010 22:10, Artem Belevich wrote: > First prepare the data. > * You'll need some files totalling around the amount of physical > memory on your box. Multiple copies of /usr/src should do the trick. > * Place one copy on UFS filesystem and another on ZFS > > Experiment #1: > * Prime ARC b

Re: ZFS arc_reclaim_needed: better cooperation with pagedaemon

2010-08-23 Thread Artem Belevich
Could you try following experiments before and after the patch while monitoring kstat.zfs.misc.arcstats.size and vm.stats.vm.v_inactive_count. First prepare the data. * You'll need some files totalling around the amount of physical memory on your box. Multiple copies of /usr/src should do the tri
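
The monitoring side of that experiment can be scripted; a rough sketch, assuming the source-tree copies live at /zfs/src-copy and /ufs/src-copy (paths are placeholders):

  # prime the ARC by reading the copy on ZFS, then pressure it via the UFS copy
  tar -cf /dev/null /zfs/src-copy
  tar -cf /dev/null /ufs/src-copy
  # meanwhile, watch ARC size against the inactive page count
  while true; do
      sysctl -n kstat.zfs.misc.arcstats.size vm.stats.vm.v_inactive_count
      sleep 5
  done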

Re: ZFS arc_reclaim_needed: better cooperation with pagedaemon

2010-08-23 Thread jhell
On 08/23/2010 16:42, jhell wrote: > On 08/23/2010 03:28, Artem Belevich wrote: >> Can anyone test the patch on a system with mix of UFS/ZFS filesystems >> and see if the change mitigates or solves the issue with inactive >> memory excessively backpressuring ARC. > > I have a system currently patch

Re: ZFS arc_reclaim_needed: better cooperation with pagedaemon

2010-08-23 Thread jhell
On 08/23/2010 03:28, Artem Belevich wrote: > Can anyone test the patch on a system with mix of UFS/ZFS filesystems > and see if the change mitigates or solves the issue with inactive > memory excessively backpressuring ARC. I have a system currently patched up to ZFSv15 and mm@'s metaslab patch ru

Re: ZFS arc_reclaim_needed: better cooperation with pagedaemon

2010-08-23 Thread Andriy Gapon
on 23/08/2010 10:28 Artem Belevich said the following: > If we could also deal with zone fragmentation issue you've written in > another thread, that should bring ZFS even closer to being usable > without shaman-style (the one with lots of muttering, swearing and > dancing around) tuning. > > Actu

Re: ZFS arc_reclaim_needed: better cooperation with pagedaemon

2010-08-23 Thread Artem Belevich
Ah! After re-reading your first email, I think I've finally got what you're saying -- with your change ARC would only start giving up memory when pagedaemon is awake. Presumably once it's awake it will also run through the inactive list, pushing some of it to cache. On the other hand existing code vo

Re: ZFS arc_reclaim_needed: better cooperation with pagedaemon

2010-08-22 Thread Andriy Gapon
on 23/08/2010 02:52 Artem Belevich said the following: > Do you by any chance have a graph showing kstat.zfs.misc.arcstats.size > behavior in addition to the stuff included on your graphs now? Yes, I do and not by a chance :-) > All I > can tell from your graphs is that v_free_count+v_cache_cou

Re: ZFS arc_reclaim_needed: better cooperation with pagedaemon

2010-08-22 Thread Artem Belevich
Do you by any chance have a graph showing kstat.zfs.misc.arcstats.size behavior in addition to the stuff included on your graphs now? All I can tell from your graphs is that v_free_count+v_cache_count shifted a bit lower relative to v_free_target+v_cache_min. It would be interesting to see what ef

Re: ZFS, compression, system load, pauses (livelocks?)

2009-12-15 Thread Wiktor Niesiobedzki
2009/12/15 Ivan Voras : > I have tried before reducing priority of ZFS taskqueues but only to > PRIBIO, not below it - not much effect wrt "pauses". I was testing with getting the thread as low priority as PUSER (with original pjd@ patch) and it was actually performing better than the current solu

Re: ZFS, compression, system load, pauses (livelocks?)

2009-12-15 Thread Ivan Voras
2009/12/15 Wiktor Niesiobedzki : > 2009/12/15 Ivan Voras : >> The context of this post is file servers running FreeBSD 8 and ZFS with >> compressed file systems on low-end hardware, or actually high-end hardware >> on VMWare ESX 3.5 and 4, which kind of makes it low-end as far as storage is >> conc

Re: ZFS, compression, system load, pauses (livelocks?)

2009-12-15 Thread Wiktor Niesiobedzki
2009/12/15 Ivan Voras : > The context of this post is file servers running FreeBSD 8 and ZFS with > compressed file systems on low-end hardware, or actually high-end hardware > on VMWare ESX 3.5 and 4, which kind of makes it low-end as far as storage is > concerned. The servers are standby backup m

Re: ZFS group ownership

2009-09-22 Thread Pawel Jakub Dawidek
On Sat, Sep 12, 2009 at 01:49:36PM +0200, Giulio Ferro wrote: [...] > Now I try to do the same on a zfs partition on the same machine > This is what I see with ls > --- > ls -la > total 4 > drwxrwx--- 3 www www 4 Sep 12 13:43 . >

Re: ZFS group ownership

2009-09-17 Thread Giulio Ferro
Nate Eldredge wrote: On SysV, you can get BSD-type behavior by setting the sgid bit on the directory in question, e.g. "chmod g+s dir". Then new files will inherit their group from the directory. I suspect this will work on FreeBSD/ZFS too even though "chmod g+s" on a directory is undocumente
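
The sgid-directory behaviour is easy to test; a small example with the group, path and mode assumed:

  # shared directory owned by group www, with the setgid bit (the leading 2, same as g+s) on it
  mkdir /data/shared
  chgrp www /data/shared
  chmod 2775 /data/shared
  # files created inside should now inherit group www regardless of the creator's primary group
  touch /data/shared/testfile
  ls -l /data/shared/testfile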

Re: ZFS group ownership

2009-09-16 Thread Giulio Ferro
Adrian Penisoara wrote: Is the ownership of the new file decided by the open() syscall or by the filesystem layer ? On a superficial lookup through the sources it appears a filesystem layer choice... Which of the following would then be the best option (also taking POLA into account): * leave t

Re: ZFS group ownership

2009-09-16 Thread Linda Messerschmidt
On Wed, Sep 16, 2009 at 9:00 AM, Christoph Hellwig wrote: > Btw, on Linux all the common filesystems support the SysV behaviour > by default but have a mount option bsdgroups/grpid that turns on the BSD > behaviour.  I would recommend you do the same just with reversed signs > on FreeBSD.  Having

Re: ZFS group ownership

2009-09-16 Thread Christoph Hellwig
On Wed, Sep 16, 2009 at 12:36:57PM +0200, Adrian Penisoara wrote: > Which of the following would then be the best option (also taking POLA > into account): > * leave things are they are > * make ZFS under FreeBSD behave the way open(2) describes > * have a new ZFS property govern the behavior an

Re: ZFS group ownership

2009-09-16 Thread Romain Tartière
On Tue, Sep 15, 2009 at 03:18:41PM -0700, Nate Eldredge wrote: > >What I ask now is: is this a bug or a feature? > > Both, I think :) Or neither, just a different implementation of the same open() function complying with the Open Group Base Specifications ;-) Quoting http://www.opengroup.org/onlinep

Re: ZFS group ownership

2009-09-16 Thread Adrian Penisoara
Hi, On Wed, Sep 16, 2009 at 12:18 AM, Nate Eldredge wrote: [...] > [On UFS, files are created with the same group as the directory that > contains them.  On ZFS, they are created with the primary group of the user > who creates them.] > >> What I ask now is: is this a bug or a feature? > > Both,

Re: ZFS group ownership

2009-09-15 Thread Nate Eldredge
On Sat, 12 Sep 2009, Giulio Ferro wrote: I don't know if this is the correct list to discuss this matter, if not I apologize in advance. freebsd-questions might have been better, but I don't think you're too far off. It wasn't necessary to post three times though :) [On UFS, files are crea

Re: ZFS group ownership

2009-09-15 Thread Benjamin Lee
On 09/12/2009 04:49 AM, Giulio Ferro wrote: [...] > How can I achieve my goal in ZFS, that is allowing members of the same > group to operate with the files / dirs they create? Does setting the setgid bit on the directory have any effect? -- Benjamin Lee http://www.b1c1l1.com/ signature.asc

Re: Fw: Re: ZFS continuously growing [SOLVED]

2009-09-03 Thread Bernd Walter
On Thu, Sep 03, 2009 at 03:41:14PM +0400, Andrey V. Elsukov wrote: > krad wrote: > >There was a change between zfs v7 and v13. In v7 when you did a zfs list it > >would show snapshots, after v13 it didn't unless you supplied the switch. It > >still catches me out as we have a right mix of zfs version

Re: Fw: Re: ZFS continuously growing [SOLVED]

2009-09-03 Thread Andrey V. Elsukov
krad wrote: There was a change between zfs v7 and v13. In v7 when you did a zfs list it would show snapshots, after v13 it didn't unless you supplied the switch. It still catches me out as we have a right mix of zfs versions at work, so don't feel too bad 8) Try: # zpool set listsnapshots=on -- WBR
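
For reference, the two usual ways to get snapshots listed again on newer pool versions (pool name assumed to be "tank"):

  # make a plain "zfs list" include snapshots for this pool
  zpool set listsnapshots=on tank
  # or ask for them explicitly without changing the pool property
  zfs list -t snapshot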
