On Tue, 22 Sep 2009 13:26:59 -0400
Richard Elling wrote:
> > That seems to differ quite a bit from what I've seen; perhaps I am
> > misunderstanding... is the "+ 1 block" of a different size than the
> > recordsize? With recordsize=1k:
> >
> > $ ls -ls foo
> > 2261 -rw-r--r-- 1 root root [...]
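The `ls -ls` output above pairs the allocated size (the leading column, in 512-byte blocks) with the logical length. A minimal runnable sketch of the same comparison, using GNU stat (`%s` is logical bytes, `%b` is 512-byte blocks); the temp file and sizes are illustrative, not from the thread. On ZFS the allocated figure reflects recordsize rounding plus metadata, not just the data written:

```shell
# Write 5 KiB of data, then compare logical and allocated sizes.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1024 count=5 2>/dev/null   # 5 KiB logical
echo "logical bytes:   $(stat -c %s "$f")"            # st_size
echo "allocated bytes: $(( $(stat -c %b "$f") * 512 ))"  # st_blocks * 512
rm -f "$f"
```

The allocated figure depends on the filesystem and its block/record size, which is exactly the discrepancy the thread is about.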
On Sep 22, 2009, at 8:07 AM, Andrew Deason wrote: [...]

On Mon, 21 Sep 2009 18:20:53 -0400
Richard Elling wrote:
> On Sep 21, 2009, at 2:43 PM, Andrew Deason wrote:
>
> > On Mon, 21 Sep 2009 17:13:26 -0400
> > Richard Elling wrote:
> >
> >> You don't know the max overhead for the file before it is
> >> allocated. You could guess at a max of 3x size [...]
On Sep 21, 2009, at 2:43 PM, Andrew Deason wrote:

On Mon, 21 Sep 2009 17:13:26 -0400
Richard Elling wrote:
> OK, so the problem you are trying to solve is "how much stuff can I
> place in the remaining free space?" I don't think this is knowable
> for a dynamic file system like ZFS where metadata is dynamically
> allocated.

Yes. And I acknowle[...]
On Sep 21, 2009, at 7:11 AM, Andrew Deason wrote:

On Sun, 20 Sep 2009 20:31:57 -0400
Richard Elling wrote:
> If you are just building a cache, why not just make a file system and
> put a reservation on it? Turn off auto snapshots and set other
> features as per best practices for your workload? In other words,
> treat it like we treat dump space.

I think that we are getting caught up in trying to answer th[...]
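Richard's suggestion (a dedicated filesystem with a reservation, auto-snapshots off) can be sketched as a few zfs commands. The pool and dataset names (`tank/afscache`) and the 10G figure are hypothetical; `com.sun:auto-snapshot=false` is the user property the Solaris automatic-snapshot service honors. These require a live ZFS system, so they are shown as a configuration sketch rather than something runnable here:

```shell
# Dedicated dataset for the cache, with guaranteed space and no
# automatic snapshots holding freed blocks. Names/sizes hypothetical.
zfs create tank/afscache
zfs set reservation=10G tank/afscache
zfs set com.sun:auto-snapshot=false tank/afscache
```

A reservation guarantees the cache its space; a quota (discussed later in the thread) caps it instead.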
On Fri, 18 Sep 2009 17:54:41 -0400
Robert Milkowski wrote:
> There will be a delay of up to 30s currently.
>
> But how much data do you expect to be pushed within 30s?
> Let's say it would be even 10g of lots of small files, and you would
> calculate the total size by only summing up a logical size [...]
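Robert's point can be sketched: summing `st_size` over a tree of small files says nothing about what ZFS will actually allocate once the transaction group commits (per-file recordsize rounding and metadata are invisible to the sum). GNU find/stat/du syntax; the three 1-byte files are illustrative:

```shell
# Sum logical sizes vs. on-disk usage for a directory of small files.
d=$(mktemp -d)
for i in 1 2 3; do printf 'x' > "$d/f$i"; done   # three 1-byte files
logical=$(find "$d" -type f -exec stat -c %s {} + | awk '{s+=$1} END {print s}')
ondisk_kib=$(du -sk "$d" | awk '{print $1}')     # actual allocation, KiB
echo "logical=${logical} bytes, on-disk=${ondisk_kib} KiB"
rm -rf "$d"
```

Even on a conventional filesystem the two figures diverge sharply for small files; on ZFS the gap also moves with recordsize and compression.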
Andrew Deason wrote:

On Fri, 18 Sep 2009 16:38:28 -0400
Robert Milkowski wrote:
> > No. We need to be able to tell how close to full we are, for
> > determining when to start/stop removing things from the cache
> > before we can add new items to the cache again.
>
> but having a dedicated dataset will let you [...]
On Sep 18, 2009, at 7:36 AM, Andrew Deason wrote:

On Thu, 17 Sep 2009 18:40:49 -0400
Robert Milkowski wrote:
> if you would create a dedicated dataset for your cache and set quota
> on it then instead of tracking disk space usage for each file you
> could easily check how much disk space is being used in the dataset.
> Would [...]
On Fri, 18 Sep 2009 12:48:34 -0400
Richard Elling wrote:
> The transactional nature of ZFS may work against you here.
> Until the data is committed to disk, it is unclear how much space
> it will consume. Compression clouds the crystal ball further.

...but not impossible. I'm just looking for a [...]
On Thu, 17 Sep 2009 18:40:49 -0400
Robert Milkowski wrote:

if you would create a dedicated dataset for your cache and set quota on
it then instead of tracking disk space usage for each file you could
easily check how much disk space is being used in the dataset.
Would it suffice for you?

Setting recordsize to 1k if you have lots of files (I assume) [...]
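Robert's suggestion can be sketched as zfs commands. Pool/dataset names and the quota are hypothetical; the final `zfs get` is how the cache manager would read actual consumption (metadata included) instead of tracking per-file sizes. Requires a live ZFS system, so this is a configuration sketch:

```shell
# Dedicated, quota-capped dataset; "used"/"available" give the cache's
# real on-disk footprint. Names and sizes are hypothetical.
zfs create tank/afscache
zfs set quota=10G tank/afscache
zfs set recordsize=1k tank/afscache
zfs get -Hp -o value used,available tank/afscache   # machine-parseable bytes
```

`-H` suppresses headers and `-p` prints exact byte values, which makes the output easy to consume from a cache-manager daemon.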
On Thu, 17 Sep 2009 22:55:38 +0100
Robert Milkowski wrote:
> IMHO you won't be able to lower a file blocksize other than by
> creating a new file. For example:

Okay, thank you.

> If you are not worried with this extra overhead and you are mostly
> concerned with proper accounting of used disk [...]
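Robert's "creating a new file" remark can be sketched: ZFS applies a dataset's recordsize only to newly written files, so an existing file must be rewritten to pick up a smaller blocksize. The `zfs` line is shown commented and the dataset name is hypothetical; the copy-and-rename dance itself is generic and runnable:

```shell
# zfs set recordsize=1k tank/afscache  # hypothetical; affects new files only
f=$(mktemp)                            # stands in for an existing cache file
printf 'cache contents' > "$f"
cp "$f" "$f.tmp" && mv "$f.tmp" "$f"   # rewritten copy allocates fresh blocks
cat "$f"
rm -f "$f"
```

The rename-over-original keeps the replacement atomic from the cache manager's point of view, at the cost of briefly holding two copies on disk.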
Andrew Deason wrote:

As I'm sure you're all aware, filesize in ZFS can differ greatly from
actual disk usage, depending on access patterns. e.g. truncating a 1M
file down to 1 byte still uses up about 130k on disk when
recordsize=128k. I'm aware that this is a result of ZFS's rather
different internals, and that it wor[...]