Re: [zfs-discuss] How to verify ecc for ram is active and enabled?

2010-03-03 Thread Robert Milkowski
On 03/03/2010 15:19, Tomas Ögren wrote: Memtest doesn't want potential errors to be hidden by ECC, so it disables ECC to see them if they occur. still, it is a valid question - is there a way under the OS to check if ECC is disabled or enabled? -- Robert Milkowski http://milek.blogspo

Re: [zfs-discuss] How to verify ecc for ram is active and enabled?

2010-03-03 Thread Robert Milkowski
On 03/03/2010 16:33, Darren J Moffat wrote: Robert Milkowski wrote: On 03/03/2010 15:19, Tomas Ögren wrote: Memtest doesn't want potential errors to be hidden by ECC, so it disables ECC to see them if they occur. still, it is a valid question - is there a way under the OS to check if EC

Re: [zfs-discuss] Hardware for high-end ZFS NAS file server - 2010 March edition

2010-03-04 Thread Robert Milkowski
On 04/03/2010 09:46, Dan Dascalescu wrote: Please recommend your up-to-date high-end hardware components for building a highly fault-tolerant ZFS NAS file server. 2x M5000 + 4x EMC DMX Sorry, I couldn't resist :) -- Robert Milkowski http://milek.blogspo

Re: [zfs-discuss] ZFS and a botched SAN migration

2010-03-05 Thread Robert Milkowski
k. It's dead. # But I'm willing to go through more hackery if needed. (If I need to destroy and re-create these LUNS on the storage array, I can do that too, but I'm hoping for something more host based) --Jason you need to destroy zfs labels. overwrite with zeros using dd be
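A minimal sketch of the label-clearing approach suggested above (device name is hypothetical; ZFS keeps two labels at the front and two at the back of a device, so both ends need zeroing):

```shell
# Hypothetical device -- double-check before running, dd is destructive.
DISK=/dev/rdsk/c0t1d0s0
SIZE_MB=${1:?pass the device size in MB (e.g. from format/prtvtoc)}

# Zero the front labels (L0, L1)
dd if=/dev/zero of="$DISK" bs=1024k count=10
# Zero the back labels (L2, L3)
dd if=/dev/zero of="$DISK" bs=1024k seek=$((SIZE_MB - 10)) count=10
```

After this, the array-side LUNs do not need to be destroyed; the host no longer sees valid ZFS labels on the device.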

Re: [zfs-discuss] full backup == scrub?

2010-03-08 Thread Robert Milkowski
l its copies are read and validated. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] rpool devaliases

2010-03-09 Thread Robert Milkowski
On 09/03/2010 13:18, Tony MacDoodle wrote: Can I create a devalias to boot the other mirror similar to UFS? yes

Re: [zfs-discuss] How to verify ecc for ram is active and enabled?

2010-03-11 Thread Robert Milkowski
panic. For more information look at: http://blogs.sun.com/mws/entry/fma_on_x64_and_at http://milek.blogspot.com/2006/05/psh-smf-less-downtime.html -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-20 Thread Robert Milkowski
take into account more than just the server where a scrub will be running, as while it might not impact the server it might cause an issue for others, etc. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-22 Thread Robert Milkowski
subdirectory. So unless you use NFSv4 with mirror mounts or an automounter, other NFS versions will show you the contents of a directory and not a filesystem. It doesn't matter if it is zfs or not. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-22 Thread Robert Milkowski
On 22/03/2010 08:49, Andrew Gabriel wrote: Robert Milkowski wrote: To add my 0.2 cents... I think starting/stopping scrub belongs to cron, smf, etc. and not to zfs itself. However what would be nice to have is an ability to freeze/resume a scrub and also limit its rate of scrubbing. One

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-22 Thread Robert Milkowski
server does. look for the mirror mounts feature in NFSv4. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Robert Milkowski
r off getting NetApp Well, spend some extra money on a really fast NVRAM solution for the ZIL and you will get a much faster ZFS environment than NetApp while still spending much less money. Not to mention all the extra flexibility compared to NetApp. -- Robert Milkowski http:

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Robert Milkowski
than the last 30s if the nfs server were to suddenly lose power. To clarify - if the ZIL is disabled it makes no difference at all for pool/filesystem level consistency. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Robert Milkowski
r log is put on a separate device? Well, it is actually different. With ZFS you can still guarantee it to be consistent on-disk while others generally can't, and often you will have to run fsck to even mount a fs in r/w... -- Robert Milkowski http://milek.bl

Re: [zfs-discuss] Simultaneous failure recovery

2010-03-31 Thread Robert Milkowski
spare to cover the other failed drive? And can I hotspare it manually? I could do a straight replace, but that isn't quite the same thing. It seems like it is event driven. Hmmm.. perhaps it shouldn't be. Anyway you can do zpool replace and it is the same thing, why wou

Re: [zfs-discuss] Simultaneous failure recovery

2010-03-31 Thread Robert Milkowski
o use zpool replace. Once you fix the failed drive and it re-synchronizes a hot spare will detach automatically (regardless if you forced it to kick-in via zpool replace or if it did so due to FMA). For more details see http://blogs.sun.com/eschrock/entry/zfs_hot_spares -- Robert Milkowski http:

Re: [zfs-discuss] bit-flipping in RAM...

2010-03-31 Thread Robert Milkowski
ld cause a significant performance problem. or there might be an extra zpool level (or system wide) property to enable checking checksums on every access from the ARC - there will be a significant performance impact but then it might be acceptable for really paranoid folks especially with modern ha

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Robert Milkowski
need to re-import a database or recover lots of files over NFS - your service is down and disabling ZIL makes a recovery MUCH faster. Then there are cases when leaving the ZIL disabled is acceptable as well. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Robert Milkowski
Unless you are talking about doing regular snapshots and making sure that application is consistent while doing so - for example putting all Oracle tablespaces in a hot backup mode and taking a snapshot... otherwise it doesn't really make sense. -- Robert Milkowski http://mil
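For the Oracle hot-backup case mentioned above, a hedged sketch might look like the following (the dataset name `tank/oradata` and the use of OS authentication are assumptions):

```shell
# Put datafiles into hot backup mode so the snapshot is recoverable.
sqlplus -s "/ as sysdba" <<'SQL'
ALTER DATABASE BEGIN BACKUP;
SQL

# Snapshot the dataset holding the tablespaces.
zfs snapshot tank/oradata@backup-$(date +%Y%m%d%H%M)

# Release hot backup mode.
sqlplus -s "/ as sysdba" <<'SQL'
ALTER DATABASE END BACKUP;
SQL
```

The snapshot itself is nearly instantaneous, so the window in which the database stays in backup mode is short.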

Re: [zfs-discuss] bit-flipping in RAM...

2010-03-31 Thread Robert Milkowski
On 31/03/2010 16:44, Bob Friesenhahn wrote: On Wed, 31 Mar 2010, Robert Milkowski wrote: or there might be an extra zpool level (or system wide) property to enable checking checksums on every access from the ARC - there will be a significant performance impact but then it might be acceptable for

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Robert Milkowski
e thing is well-documented. I double checked the documentation and you're right - the default has changed to sync. I haven't found in which RH version it happened but it doesn't really matter. So yes, I was wrong - the current default seems to be sync on L

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Robert Milkowski
sfy a race condition for the sake of internal consistency. Applications which need to know their next commands will not begin until after the previous sync write was committed to disk. ROTFL!!! I think you should explain it even further for Casper :) :) :) :) :) :) :) -- Robert Milk

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Robert Milkowski
you can export a share as sync (default) or async, while on Solaris you currently can't really force a NFS server to work in async mode. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] can't destroy snapshot

2010-04-01 Thread Robert Milkowski
s are part of a cluster both of them have full access to shared storage and you can force zpool import on both nodes at the same time. When you think about it, you actually need such behavior for RAC to work on raw devices or real cluster volumes or filesystems, etc. -- Robert Milkowski http://mil

Re: [zfs-discuss] can't destroy snapshot

2010-04-01 Thread Robert Milkowski
the pool, resume the resource group and enable the storage resource The other approach is to keep a pool under a cluster management but eventually suspend a resource group so there won't be any unexpected failovers (but it really depends on circumstances and what you are t

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-02 Thread Robert Milkowski
On 02/04/2010 16:04, casper@sun.com wrote: sync() is actually *async* and returning from sync() says nothing about to clarify - in case of ZFS sync() is actually synchronous. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Question about large pools

2010-04-03 Thread Robert Milkowski
fine. So for example - on x4540 servers try to avoid creating a pool with a single RAID-Z3 group made of 44 disks; rather create 4 RAID-Z2 groups each made of 11 disks, all of them in a single pool. -- Robert Milkowski http://milek.blogspot.com
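The suggested x4540 layout could be created along these lines (the disk names below are illustrative, not the real x4540 device enumeration):

```shell
# One pool, four 11-disk RAID-Z2 top-level vdevs (44 data disks total).
# ZFS stripes writes across all four groups automatically.
zpool create tank \
  raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c1t0d0 c1t1d0 c1t2d0 \
  raidz2 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
  raidz2 c2t6d0 c2t7d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c4t0d0 \
  raidz2 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0 c5t0d0 c5t1d0 c5t2d0 c5t3d0
```

The narrower groups mean each random read touches fewer disks and a resilver has far less data to reconstruct than one 44-wide group would.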

Re: [zfs-discuss] To slice, or not to slice

2010-04-03 Thread Robert Milkowski
ris is doing more or less for some time now. look in the archives of this mailing list for more information. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-07 Thread Robert Milkowski
letely die as well. Other than that you are fine even with an unmirrored slog device. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-07 Thread Robert Milkowski
normal reboots zfs won't read data from slog. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] casesensitivity mixed and CIFS

2010-04-14 Thread Robert Milkowski
, while accessing \\filer\arch\myfolder\myfile.txt works. Any ideas? We are running snv_130. you are not using the Samba daemon, are you? -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Is file cloning anywhere on ZFS roadmap

2010-04-21 Thread Robert Milkowski
without going through the process of actually copying the blocks, but just duplicating its meta data like NetApp does? I don't know about file cloning but why not put each VM on top of a zvol - then you can clone the zvol? -- Robert Milkowski http://milek.blogspo

Re: [zfs-discuss] Double slash in mountpoint

2010-04-21 Thread Robert Milkowski
but it suggests that it had nothing to do with a double slash - rather some process (your shell?) had an open file within the mountpoint. But by supplying -f you forced zfs to unmount it anyway. -- Robert Milkowski http://milek.blogspot.com On 21/04/2010 06:16, Ryan John wrote: Thanks. That

Re: [zfs-discuss] Benchmarking Methodologies

2010-04-21 Thread Robert Milkowski
size for database vs. default, atime off vs. on, lzjb, gzip, ssd). Also a comparison of benchmark results with all default zfs settings against whatever settings gave you the best result. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-23 Thread Robert Milkowski
attach EBS. That way Solaris won't automatically try to import the pool and your scripts will do it once disks are available. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Benchmarking Methodologies

2010-04-24 Thread Robert Milkowski
u can also find some benchmarks with sysbench + mysql or oracle. I don't remember whether I posted some of my results but I'm pretty sure you can find others. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] ZFS Pool, what happen when disk failure

2010-04-24 Thread Robert Milkowski
. You will need to power cycle. The system won't boot up again; you'll have to The system should boot up properly even if some pools are not accessible (except rpool of course). If that is not the case then there is a bug - last time I checked it worked perfectly fine. -- Robert

Re: [zfs-discuss] ZFS Pool, what happen when disk failure

2010-04-25 Thread Robert Milkowski
. Then you can "zpool import" I think requiring the -f or -F, and reboot again normal. I just did a test on Solaris 10/09 - and system came up properly, entirely on its own, with a failed pool. zpool status showed the pool as unavailable (as I removed an underlying device) which is fi

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-26 Thread Robert Milkowski
(and do so with -R). That way you can easily script it so the import happens after your disks are available. -- Robert Milkowski http://milek.blogspot.com
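One way to sketch this (the pool name `ebspool` is hypothetical):

```shell
# Export leaves no entry in /etc/zfs/zpool.cache, so the pool is not
# auto-imported at the next boot.
zpool export ebspool

# Later, from a boot-time script once the EBS volumes are attached:
# -R sets an alternate root and keeps the pool out of the default cachefile,
# so the import stays under the script's control across reboots.
zpool import -R / ebspool
```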

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-26 Thread Robert Milkowski
ch means it couldn't discover it. does 'zpool import' (no other options) list the pool? -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Performance drop during scrub?

2010-04-29 Thread Robert Milkowski
s no room for improvement here. All I'm saying is that it is not as easy a problem as it seems. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Compellant announces zNAS

2010-04-29 Thread Robert Milkowski
ution*. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-05-04 Thread Robert Milkowski
0 zil synchronicity No promise on date, but it will bubble to the top eventually. So everyone knows - it has been integrated into snv_140 :) -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Performance of the ZIL

2010-05-04 Thread Robert Milkowski
when it is off it will give you an estimate of the absolute maximum performance increase (if any) from having a dedicated ZIL device. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] ZIL behavior on import

2010-05-05 Thread Robert Milkowski
fails prior to completing a series of writes and I reboot using a failsafe (i.e. install disc), will the log be replayed after a zpool import -f ? yes -- Robert Milkowski http://milek.blogspot.com

[zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Robert Milkowski
nformation on it you might look at http://milek.blogspot.com/2010/05/zfs-synchronous-vs-asynchronous-io.html -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Robert Milkowski
opose that it shouldn't but it was changed again during a PSARC review that it should. And I did a copy'n'paste here. Again, sorry for the confusion. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Robert Milkowski
On 06/05/2010 13:12, Robert Milkowski wrote: On 06/05/2010 12:24, Pawel Jakub Dawidek wrote: I read that this property is not inherited and I can't see why. If what I read is up-to-date, could you tell why? It is inherited. Sorry for the confusion but there was a discussion if it shou

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Robert Milkowski
ce failover in a cluster L2ARC will be kept warm. Then the only thing which might affect L2 performance considerably would be an L2ARC device failure... -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Robert Milkowski
would probably decrease performance and would invalidate all blocks if only a single l2arc device died. Additionally, having each block on only one l2arc device allows reads from all of the l2arc devices at the same time. -- Robert Milkowski http://milek.blogspo

Re: [zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Robert Milkowski
On 06/05/2010 21:45, Nicolas Williams wrote: On Thu, May 06, 2010 at 03:30:05PM -0500, Wes Felter wrote: On 5/6/10 5:28 AM, Robert Milkowski wrote: sync=disabled Synchronous requests are disabled. File system transactions only commit to stable storage on the next DMU transaction

Re: [zfs-discuss] osol monitoring question

2010-05-10 Thread Robert Milkowski
are very useful at times. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Odd dump volume panic

2010-05-17 Thread Robert Milkowski
s/zvol.c#1785) - but zfs send|recv should replicate it I think. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] [zones-discuss] ZFS ARC cache issue

2010-06-04 Thread Robert Milkowski
: why do you need to do this at all? Isn't the ZFS ARC supposed to release memory when the system is under pressure? Is that mechanism not working well in some cases ... ? My understanding is that if kmem gets heavily fragmented ZFS won't be able to give back much memory.

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Robert Milkowski
0 IOPS to a single SAS port. It also scales well - I did run above dd's over 4x SAS ports at the same time and it scaled linearly by achieving well over 400k IOPS. hw used: x4270, 2x Intel X5570 2.93GHz, 4x SAS SG-PCIE8SAS-E-Z (fw. 1.27.3.0), connected to F5100. -- Robert Milkowski

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Robert Milkowski
On 10/06/2010 15:39, Andrey Kuzmin wrote: On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski <mailto:mi...@task.gda.pl>> wrote: On 21/10/2009 03:54, Bob Friesenhahn wrote: I would be interested to know how many IOPS an OS like Solaris is able to push through a sing

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-11 Thread Robert Milkowski
port is nothing unusual and has been the case for at least several years. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-11 Thread Robert Milkowski
cely coalesce these IOs and do sequential writes with large blocks. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-11 Thread Robert Milkowski
On 11/06/2010 10:58, Andrey Kuzmin wrote: On Fri, Jun 11, 2010 at 1:26 PM, Robert Milkowski <mailto:mi...@task.gda.pl>> wrote: On 11/06/2010 09:22, sensille wrote: Andrey Kuzmin wrote: On Fri, Jun 11, 2010 at 1:54 AM, Richard Elling mailto:ri

Re: [zfs-discuss] Scrub issues

2010-06-14 Thread Robert Milkowski
full priority. Is this problem known to the developers? Will it be addressed? http://sparcv9.blogspot.com/2010/06/slower-zfs-scrubsresilver-on-way.html http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6494473 -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] OCZ Devena line of enterprise SSD

2010-06-16 Thread Robert Milkowski
whole point of having L2ARC is to serve high random read iops from RAM and the L2ARC device instead of disk drives in the main pool. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] At what level does the “zfs ” directory exist?

2010-06-16 Thread Robert Milkowski
? It maps the snapshots so windows can access them via "previous versions" from the explorer's context menu. btw: the CIFS service supports Windows Shadow Copies out-of-the-box. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] At what level does the “zfs ” directory exist?

2010-06-17 Thread Robert Milkowski
. Previous Versions should work even if you have one large filesystem with all user homes as directories within. What Solaris/OpenSolaris version did you try for the 5k test? -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] hot detach of disks, ZFS and FMA integration

2010-06-18 Thread Robert Milkowski
lly intend to get it integrated into ON? Because if you do then I think that getting the Nexenta guys to expand on it would be better for everyone instead of having them reinvent the wheel... -- Robert Milkowski http://milek.blogspot.com

[zfs-discuss] raid-z - not even iops distribution

2010-06-18 Thread Robert Milkowski
rather expect all of them to get about the same number of iops. Any idea why? -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Question : Sun Storage 7000 dedup ratio per share

2010-06-18 Thread Robert Milkowski
dedup enabled in a pool you can't really get a dedup ratio per share. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-23 Thread Robert Milkowski
big of a file are you making? RAID-Z does not explicitly do the parity distribution that RAID-5 does. Instead, it relies on non-uniform stripe widths to distribute IOPS. Adam On Jun 18, 2010, at 7:26 AM, Robert Milkowski wrote: Hi, zpool create test raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Robert Milkowski
smaller writes to metadata that will distribute parity. What is the total width of your raidz1 stripe? 4x disks, 16KB recordsize, 128GB file, random read with 16KB block. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Robert Milkowski
On 23/06/2010 19:29, Ross Walker wrote: On Jun 23, 2010, at 1:48 PM, Robert Milkowski wrote: 128GB. Does it mean that for dataset used for databases and similar environments where basically all blocks have fixed size and there is no other data all parity information will end-up on one

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Robert Milkowski
On 24/06/2010 14:32, Ross Walker wrote: On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote: On 23/06/2010 18:50, Adam Leventhal wrote: Does it mean that for dataset used for databases and similar environments where basically all blocks have fixed size and there is no other data

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Robert Milkowski
performance as a much greater number of disk drives in a RAID-10 configuration and if you don't need much space it could make sense. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Robert Milkowski
ndom reads. http://blogs.sun.com/roch/entry/when_to_and_not_to -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Legality and the future of zfs...

2010-07-19 Thread Robert Milkowski
(async or sync) to be written synchronously. ps. still, I'm not saying it would make ZFS ACID. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] carrying on [was: Legality and the future of zfs...]

2010-07-19 Thread Robert Milkowski
outdone, they've stopped other OS releases as well. Surely, this is a temporary situation. AFAIK the dev OSOL releases are still being produced - they haven't been made public since b134 though. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-20 Thread Robert Milkowski
han a regression. Are you sure it is not a debug vs. non-debug issue? -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Debunking the dedup memory myth

2010-07-20 Thread Robert Milkowski
"compress" the file much better than compression alone. Also please note that you can use both compression and dedup at the same time. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] zfs raidz1 and traditional raid 5 perfomrance comparision

2010-07-21 Thread Robert Milkowski
hough but it might be that a stripe size was not matched to ZFS recordsize and iozone block size in this case. The issue with raid-z and random reads is that as cache hit ratio goes down to 0 the IOPS approaches IOPS of a single drive. For a little bit more information see http://blogs.sun.

Re: [zfs-discuss] zfs raidz1 and traditional raid 5 perfomrance comparision

2010-07-22 Thread Robert Milkowski
On 22/07/2010 03:25, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Robert Milkowski I had a quick look at your results a moment ago. The problem is that you used a server with 4GB of RAM + a raid card

[zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292 Self Review]

2010-07-28 Thread Robert Milkowski
fyi -- Robert Milkowski http://milek.blogspot.com Original Message Subject:zpool import despite missing log [PSARC/2010/292 Self Review] Date: Mon, 26 Jul 2010 08:38:22 -0600 From: Tim Haley To: psarc-...@sun.com CC: zfs-t...@sun.com I am sponsoring

[zfs-discuss] Fwd: Read-only ZFS pools [PSARC/2010/306 FastTrack timeout 08/06/2010]

2010-07-30 Thread Robert Milkowski
fyi Original Message Subject:Read-only ZFS pools [PSARC/2010/306 FastTrack timeout 08/06/2010] Date: Fri, 30 Jul 2010 14:08:38 -0600 From: Tim Haley To: psarc-...@sun.com CC: zfs-t...@sun.com I am sponsoring the following fast-track for George Wilson.

Re: [zfs-discuss] iScsi slow

2010-08-03 Thread Robert Milkowski
recent build you have zfs set sync={disabled|default|always} which also works with zvols. So you do have control over how it is supposed to behave and, to make it nicer, it is even on a per-zvol basis. It is just that the default is synchronous. -- Robert Milko
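A sketch of the per-zvol control (the pool and zvol names are made up; note that the property values as integrated by PSARC 2010/108 are standard|always|disabled):

```shell
zfs set sync=disabled tank/vols/iscsi01   # acknowledge writes immediately, skip the ZIL
zfs set sync=always   tank/vols/iscsi01   # force every write through the ZIL
zfs inherit sync      tank/vols/iscsi01   # back to the inherited/standard behavior
```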

Re: [zfs-discuss] iScsi slow

2010-08-03 Thread Robert Milkowski
't remember if it offered an ability to manipulate the zvol's WCE flag, but if it didn't then you can do it anyway as it is a zvol property. For an example see http://milek.blogspot.com/2010/02/zvols-write-cache.html -- Robert Milkowski http://mil

Re: [zfs-discuss] Quickest way to find files with cksum errors without doing scrub

2009-09-28 Thread Robert Milkowski
's the main reason behind the scrub - to be able to detect and repair checksum errors (if any) while a redundant copy is still fine. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Quickest way to find files with cksum errors without doing scrub

2009-09-28 Thread Robert Milkowski
Robert Milkowski wrote: Bob Friesenhahn wrote: On Mon, 28 Sep 2009, Richard Elling wrote: Scrub could be faster, but you can try tar cf - . > /dev/null If you think about it, validating checksums requires reading the data. So you simply need to read the data. This should work but

Re: [zfs-discuss] extremely slow writes (with good reads)

2009-09-28 Thread Robert Milkowski
- reserved area). -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Incremental snapshot size

2009-09-30 Thread Robert Milkowski
ith de-dup) would behave the same here. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] strange pool disks usage pattern

2009-10-02 Thread Robert Milkowski
Maurilio Longo wrote: Carson, the strange thing is that this is happening on several disks (can it be that they are all failing?) What is the controller bug you're talking about? I'm running snv_114 on this pc, so it is fairly recent. Best regards. Maurilio. See 'iostat -En' output.

Re: [zfs-discuss] ZFS caching of compressed data

2009-10-02 Thread Robert Milkowski
d on where an actual bottleneck is. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] cachefile for snail zpool import mystery?

2009-10-02 Thread Robert Milkowski
btw: IIRC the Sun Cluster HAS+ agent will automatically make use of cache files -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Can't rm file when "No space left on device"...

2009-10-02 Thread Robert Milkowski
the snapshot version). Out of curiosity, is there an easy way to find such a file? Find files with a modification or creation time after the last snapshot was created. Files which were modified after may still have most of their blocks referred to by a snapshot though. -- Robert Milkowski http:
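A rough way to do that search (the dataset name is assumed; this relies on the visible .zfs/snapshot directory and is an approximation, since -newer compares against the snapshot root's mtime):

```shell
FS=tank/home
MNT=$(zfs get -H -o value mountpoint "$FS")

# Newest snapshot of the dataset, by creation time
LAST=$(zfs list -H -t snapshot -o name -s creation -r "$FS" | tail -1)
SNAPDIR="$MNT/.zfs/snapshot/${LAST#*@}"

# Files modified since that snapshot was taken
find "$MNT" -newer "$SNAPDIR" -type f
```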

Re: [zfs-discuss] ZFS caching of compressed data

2009-10-02 Thread Robert Milkowski
Stuart Anderson wrote: On Oct 2, 2009, at 5:05 AM, Robert Milkowski wrote: Stuart Anderson wrote: I am wondering if the following idea makes any sense as a way to get ZFS to cache compressed data in DRAM? In particular, given a 2-way zvol mirror of highly compressible data on persistent

Re: [zfs-discuss] Can't rm file when "No space left on device"...

2009-10-03 Thread Robert Milkowski
case as you do have clones. In your case you are concerned with files you would like do delete to regain disk space and they are still in a snapshot... in most cases it is relatively easy to plan for it with a dedicated filesystem(s) for temporary files, et

Re: [zfs-discuss] Terrible ZFS performance on a Dell 1850 w/ PERC 4e/Si (Sol10U6)

2009-10-09 Thread Robert Milkowski
Before you do a dd test try first to do: echo zfs_vdev_max_pending/W0t1 | mdb -kw and let us know if it helped or not. iostat -xnz 1 output while you are doing dd would also help. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] primarycache and secondarycache properties on Solaris 10 u8

2009-10-16 Thread Robert Milkowski
" of the UFS directio? No. UFS directio does 3 things: 1. unbuffered I/O 2. allow concurrent writers (no single-writer lock) 3. provide an improved async I/O code path for the record - iirc UFS will also disable read-aheads with direct

[zfs-discuss] [Fwd: snv_123: kernel memory leak?]

2009-10-22 Thread Robert Milkowski
btw: ::memstat and ::kmastat is *very* fast in this build, it used to take a minute and now it is instantaneous :) -- Robert Milkowski http://milek.blogspot.com -- This message posted from opensolaris.org

Re: [zfs-discuss] strange results ...

2009-10-22 Thread Robert Milkowski
loads this is desired behavior; for many others it is not (like parsing large log files, which are not getting cached, with a grep-like tool...). -- Robert Milkowski http://milek.blogspot.com

[zfs-discuss] zfs recv complains about destroyed filesystem

2009-10-23 Thread Robert Milkowski
; so everything was replicated as expected. However zfs recv -F should not complain that it can't open snap1. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] zfs recv complains about destroyed filesystem

2009-10-26 Thread Robert Milkowski
I created http://defect.opensolaris.org/bz/show_bug.cgi?id=12249 -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] FW: File level cloning

2009-10-29 Thread Robert Milkowski
create a dedicated zfs zvol or filesystem for each file representing your virtual machine. Then if you need to clone a VM you clone its zvol or the filesystem. Jeffry Molanus wrote: I'm not doing anything yet; I just wondered if ZFS provides any methods to do file level cloning instead of comp

Re: [zfs-discuss] ZFS dedup issue

2009-11-03 Thread Robert Milkowski
Cyril Plisko wrote: I think I'm observing the same (with changeset 10936) ... # mkfile 2g /var/tmp/tank.img # zpool create tank /var/tmp/tank.img # zfs set dedup=on tank # zfs create tank/foobar This has to do with the fact that dedup space accounting is charged to all f
