On 02/10/2013 01:01 PM, Koopmann, Jan-Peter wrote:
> Why should it?
>
> I believe currently only Nexenta but correct me if I am wrong
The code was mainlined a while ago; see:
https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/common/io/comstar/lu/stmf_sbd/sbd.c#L3702-L3730
http
>No tools, ZFS does it automatically when freeing blocks when the
>underlying device advertises the functionality.
>
>ZFS ZVOLs shared over COMSTAR advertise SCSI UNMAP as well.
If a system was running something older, e.g. Solaris 11, the "free"
blocks will not be marked as such on the server.
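As a quick cross-check from the initiator side: whether an exported LU actually
advertises UNMAP can be read from its Logical Block Provisioning VPD page. A rough
sketch, assuming a Linux initiator with sg3_utils installed and /dev/sdX standing
in for the LU:

  # Look for "Unmap command supported (LBPU): 1" in the output
  sg_vpd --page=lbpv /dev/sdX
  # Or check the discard limits the kernel derived from that page
  lsblk --discard /dev/sdX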
On 02/12/13 15:07, Thomas Nau wrote:
Darren
On 02/12/2013 11:25 AM, Darren J Moffat wrote:
On 02/10/13 12:01, Koopmann, Jan-Peter wrote:
Why should it?
Unless you do a shrink on the vmdk and use a zfs variant with scsi
unmap support (I believe currently only Nexenta but correct me if I am
Darren
On 02/12/2013 11:25 AM, Darren J Moffat wrote:
On 02/10/13 12:01, Koopmann, Jan-Peter wrote:
Why should it?
Unless you do a shrink on the vmdk and use a zfs variant with scsi
unmap support (I believe currently only Nexenta but correct me if I am
wrong) the blocks will not be freed, will they?
>> Unless you do a shrink on the vmdk and use a zfs variant with scsi unmap
>> support (I believe currently only Nexenta but correct me if I am wrong) the
>> blocks will not be freed, will they?
>
>
> Solaris 11.1 has ZFS with SCSI UNMAP support.
Freeing unused blocks works perfectly well with fst
On 02/10/13 12:01, Koopmann, Jan-Peter wrote:
Why should it?
Unless you do a shrink on the vmdk and use a zfs variant with scsi unmap
support (I believe currently only Nexenta but correct me if I am wrong) the
blocks will not be freed, will they?
Solaris 11.1 has ZFS with SCSI UNMAP support.
I forgot about compression. Makes sense. As long as the zeroes find their way
to the backend storage, this should work. Thanks!
Kind regards
JP
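To spell out the zero-fill approach JP is referring to — a rough sketch only, with
the guest mount point and the pool/volume names made up for illustration:

  # Inside the guest: fill the free space with zeros, flush, delete the file.
  # dd is expected to stop with "No space left on device".
  dd if=/dev/zero of=/mnt/data/zerofill bs=1M
  sync
  rm /mnt/data/zerofill

  # On the storage side the backing zvol needs compression enabled (even the
  # cheap zle algorithm will do), so the incoming all-zero blocks are stored
  # as holes rather than allocated:
  zfs set compression=zle tank/esxi-vol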
On 2013-02-10 10:57, Datnus wrote:
I run dd if=/dev/zero of=testfile bs=1024k count=5 inside the iSCSI VMFS
from ESXi and then rm testfile.
However, the zpool list doesn't decrease at all. In fact, the used storage
increases when I do the dd.
FreeNAS 8.0.4 and ESXi 5.0.
Help.
Thanks.
Did you also en
Why should it?
Unless you do a shrink on the vmdk and use a zfs variant with scsi unmap
support (I believe currently only Nexenta but correct me if I am wrong) the
blocks will not be freed, will they?
Kind regards
JP
Sent from a mobile device.
On 10.02.2013 at 11:01, "Datnus" wrote:
> I
I run dd if=/dev/zero of=testfile bs=1024k count=5 inside the iSCSI VMFS
from ESXi and then rm testfile.
However, the zpool list doesn't decrease at all. In fact, the used storage
increases when I do the dd.
FreeNAS 8.0.4 and ESXi 5.0.
Help.
Thanks.
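For anyone reproducing this, the numbers to watch on the FreeNAS/ZFS side look
roughly like this (pool and zvol names are assumptions):

  # Pool-wide allocation before and after the dd/rm inside the guest
  zpool list tank
  # Space actually referenced by the backing zvol
  zfs get used,referenced,volsize,compressratio tank/vmfs-vol

Without either SCSI UNMAP reaching the zvol, or zeroed blocks plus compression,
an rm inside VMFS cannot make these numbers go down.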
On Sat, May 8, 2010 at 5:04 AM, Lutz Schumann
wrote:
> Now, if there were a "write all zero, clear the block again" behaviour in ZFS
> (which I suggest in this thread), I could do the following:
>
> Fill the disks within the VMs with all zeros (dd if=/dev/zero of=/MYFILE
> bs=1M ...). This could e
I have to come back to this issue after a while because it just hit me.
I have a VMware vSphere 4 test host. I have various machines in there for
performance tests and other experiments, so a lot of I/O benchmarks are run and
a lot of data is created during these benchmarks.
The vSphere test ma
On Feb 26, 2010, at 11:55 AM, Lutz Schumann wrote:
> This would be an idea and I thought about this. However I see the following
> problems:
>
> 1) using deduplication
>
> This will reduce the on-disk size; however, the DDT will grow forever, and for
> the deletion of zvols this will mean a lot of time and work
On Fri, Feb 26, 2010 at 2:42 PM, Lutz Schumann
wrote:
>
> Now if a virtual machine writes to the zvol, blocks are allocated on disk.
> Reads are now served partly from disk (for all blocks written) and partly from
> the ZFS layer (for all unwritten blocks).
>
> If the virtual machine (which may be vmware / xen / hyper
This would be an idea and I have thought about it. However, I see the following
problems:
1) using deduplication
This will reduce the on-disk size; however, the DDT will grow forever, and for
the deletion of zvols this will mean a lot of time and work (see other threads
regarding DDT memory issues o
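As a side note on the DDT-growth concern, the dedup table's size can at least be
measured; a sketch, with the pool name assumed:

  # Entry counts, on-disk and in-core sizes, plus a per-refcount histogram
  zdb -DD tank
  # One-line summary of the same
  zpool status -D tank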
On 02/26/10 11:42, Lutz Schumann wrote:
Idea:
- If the guest writes a block with 0's only, the block is freed again
- if someone reads this block again, it will get the same 0's it would get
if the 0's had actually been written
- The checksum of an "all 0" block can be hard-coded for SHA1 / Fletcher
On 26 February, 2010 - Lutz Schumann sent me these 2,2K bytes:
> Hello list,
>
> ZFS can be used in both file level (zfs) and block level access (zvol). When
> using zvols, those are always thin provisioned (space is allocated on first
> write). We use zvols with comstar to do iSCSI and FC acc
Hello list,
ZFS can be used in both file level (zfs) and block level access (zvol). When
using zvols, those are always thin provisioned (space is allocated on first
write). We use zvols with comstar to do iSCSI and FC access - and excuse me in
advance - but this may also be a more comstar-related