> This is expected because of the copy-on-write nature of ZFS. During
> truncate it is trying to allocate new disk blocks, probably to write the
> new metadata, and fails to find them.
I realize there is a fundamental issue with copy-on-write, but does
this mean ZFS does not maintain some kind of reserve?
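To make the quoted explanation concrete, here is a minimal sketch (not from the thread; the path /testpool/file97 is a hypothetical example) showing that on a copy-on-write filesystem such as ZFS even a shrinking ftruncate() can fail with ENOSPC, because the new metadata still has to be written to freshly allocated blocks, so the return value needs to be checked:

/*
 * Minimal sketch: on a copy-on-write filesystem, even truncating a file
 * down can fail with ENOSPC, because the new metadata must be written to
 * freshly allocated blocks.  The path below is hypothetical.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
        int fd = open("/testpool/file97", O_RDWR);

        if (fd == -1) {
                perror("open");
                return (1);
        }

        /* Shrink the file; on a nearly full pool this may still fail. */
        if (ftruncate(fd, 0) == -1) {
                fprintf(stderr, "ftruncate failed: %s%s\n", strerror(errno),
                    errno == ENOSPC ? " (no room for new metadata)" : "");
                close(fd);
                return (1);
        }

        close(fd);
        return (0);
}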
Masthan,
dudekula mastan <[EMAIL PROTECTED]> wrote:
> Hi All,
> In my test setup, I have one zpool of size 1000 MB.
Is this the size given by zfs list? Or is it the amount of disk space that
you had?
The reason I ask is that ZFS/zpool takes up some amount of space for its
own metadata, so the usable capacity is less than the raw size of the pool.
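Here is a similar minimal sketch (the mount point /testpool is an assumption) of how the application could ask the filesystem itself how much space is actually available, a figure that already reflects that overhead, before deciding how much it can write:

/*
 * Minimal sketch: report the space the filesystem says is available,
 * which already accounts for pool/metadata overhead, rather than the
 * raw device size.  The mount point is assumed.
 */
#include <stdio.h>
#include <sys/statvfs.h>

int
main(void)
{
        struct statvfs vfs;

        if (statvfs("/testpool", &vfs) == -1) {
                perror("statvfs");
                return (1);
        }

        /* f_frsize is the fragment size; f_bavail the blocks free to us. */
        unsigned long long avail =
            (unsigned long long)vfs.f_bavail * vfs.f_frsize;

        printf("available bytes: %llu\n", avail);
        return (0);
}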
Hi All,
Does no one have any idea on this?
-Masthan

dudekula mastan <[EMAIL PROTECTED]> wrote:
> Hi All,
> In my test setup, I have one zpool of size 1000 MB.
> On this zpool, my application writes 100 files, each of size 10 MB.
> The first 96 files were written successfully without any problem.
> But the 97th file was not written successfully; only 5 MB of it was
> written (the return
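Since the question hinges on that return value, here is a minimal sketch (the path, chunk size, and overall structure are illustrative assumptions, not the poster's actual code) of a write loop that checks every write() for a short count or ENOSPC, which is how a 5 MB partial file on a nearly full pool would typically show up:

/*
 * Minimal sketch: write a 10 MB file in chunks and check every return
 * value.  When the pool runs out of space, write() returns a short
 * count or -1 with errno set to ENOSPC, leaving a partial file behind.
 * The path and chunk size are illustrative.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define FILE_SIZE   (10 * 1024 * 1024)   /* 10 MB, as in the test */
#define CHUNK_SIZE  (128 * 1024)

int
main(void)
{
        static char buf[CHUNK_SIZE];      /* zero-filled test data */
        size_t total = 0;
        int fd = open("/testpool/file97", O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (fd == -1) {
                perror("open");
                return (1);
        }

        while (total < FILE_SIZE) {
                ssize_t n = write(fd, buf, CHUNK_SIZE);

                if (n == -1) {
                        fprintf(stderr, "write failed after %zu bytes: %s\n",
                            total, strerror(errno));
                        close(fd);
                        return (1);
                }
                total += (size_t)n;
                if ((size_t)n < CHUNK_SIZE)   /* short write: stop and report */
                        break;
        }

        printf("wrote %zu of %d bytes\n", total, FILE_SIZE);
        close(fd);
        return (0);
}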