Re: [zfs-discuss] Crucial RealSSD C300 and cache flush?

2010-06-24 Thread Arne Jansen
Hi, Roy Sigurd Karlsbakk wrote: > Crucial RealSSD C300 has been released and is showing good numbers for use as > ZIL and L2ARC. Does anyone know if this unit flushes its cache on request, as > opposed to Intel units etc? > I had a chance to get my hands on a Crucial RealSSD C300/128GB yesterday
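
The measurement behind a test like this is simple to sketch. A minimal Python version, assuming a hypothetical device path and a platform that exposes os.O_DSYNC (Solaris and Linux do):

    import os, time

    PATH = "/dev/rdsk/c1t0d0s0"   # hypothetical path; point at the SSD under test
    BLOCK = b"\0" * 4096          # 4 KB writes, similar to a ZIL-style workload
    COUNT = 1000

    # O_DSYNC makes each write (and, on an honest device, a cache flush)
    # complete to stable storage before write() returns.
    fd = os.open(PATH, os.O_WRONLY | os.O_DSYNC)
    start = time.time()
    for _ in range(COUNT):
        os.write(fd, BLOCK)
    elapsed = time.time() - start
    os.close(fd)

    print("%.0f synchronous 4K write IOPS" % (COUNT / elapsed))

A device that really flushes on request will show far lower numbers here than it does when the flush is ignored, which is the asymmetry a benchmark like Arne's looks for.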

Re: [zfs-discuss] Crucial RealSSD C300 and cache flush?

2010-06-24 Thread Fred Liu
Looking forward to seeing your test reports on the Intel X25 and OCZ Vertex 2 Pro... Thanks. Fred -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Arne Jansen Sent: Thursday, June 24, 2010 16:15 To: Roy Sigurd Karlsbakk Cc: OpenS

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Robert Milkowski
On 23/06/2010 18:50, Adam Leventhal wrote: Does it mean that for a dataset used for databases and similar environments, where basically all blocks have a fixed size and there is no other data, all parity information will end up on one (z1) or two (z2) specific disks? No. There are always small
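
The effect under discussion is easy to model. The sketch below is a toy approximation of the RAID-Z1 layout, not the real allocator (which also rounds allocation sizes and inserts skip sectors), but it shows the mechanism: parity sits in the first column of each block, so with a fixed allocation size the parity column either rotates evenly or gets pinned, depending on the remainder against the disk count:

    NDISKS = 5  # e.g. 4 data + 1 parity in a raidz1 vdev

    def parity_histogram(alloc_sectors, nblocks):
        """Toy model: blocks are laid out back to back; the first
        sector of each block's allocation holds its parity."""
        counts = [0] * NDISKS
        offset = 0
        for _ in range(nblocks):
            counts[offset % NDISKS] += 1   # column holding this block's parity
            offset += alloc_sectors        # next block starts right after
        return counts

    print(parity_histogram(33, 1000))  # 33 % 5 != 0: parity rotates across disks
    print(parity_histogram(35, 1000))  # 35 % 5 == 0: parity pinned to one disk

With mixed block sizes the remainders vary and the distribution evens out, which is why any skew would mainly show up in fixed-recordsize database workloads like the one Robert describes.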

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Robert Milkowski
On 23/06/2010 19:29, Ross Walker wrote: On Jun 23, 2010, at 1:48 PM, Robert Milkowski wrote: 128GB. Does it mean that for a dataset used for databases and similar environments, where basically all blocks have a fixed size and there is no other data, all parity information will end up on one (

Re: [zfs-discuss] Crucial RealSSD C300 and cache flush?

2010-06-24 Thread Arne Jansen
Arne Jansen wrote: > Hi, > > Roy Sigurd Karlsbakk wrote: >> Crucial RealSSD C300 has been released and is showing good numbers for use as >> ZIL and L2ARC. Does anyone know if this unit flushes its cache on request, >> as opposed to Intel units etc? >> > > I had a chance to get my hands on a Cruci

Re: [zfs-discuss] c5->c9 device name change prevents beadm activate

2010-06-24 Thread Brian Nitz
Lori, In my case what may have caused the problem is that after a previous upgrade failed, I used this zfs send/recv procedure to give me (what I thought was) a sane rpool: http://blogs.sun.com/migi/entry/broken_opensolaris_never Is it possible that a zfs recv of a root pool contains the dev

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Ross Walker
On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote: > On 23/06/2010 18:50, Adam Leventhal wrote: >>> Does it mean that for a dataset used for databases and similar environments >>> where basically all blocks have a fixed size and there is no other data all >>> parity information will end up on one

Re: [zfs-discuss] Crucial RealSSD C300 and cache flush?

2010-06-24 Thread Arne Jansen
Arne Jansen wrote: > Hi, > > Roy Sigurd Karlsbakk wrote: >> Crucial RealSSD C300 has been released and is showing good numbers for use as >> ZIL and L2ARC. Does anyone know if this unit flushes its cache on request, >> as opposed to Intel units etc? >> > > Also the IOPS with cache flushes are quite

Re: [zfs-discuss] Crucial RealSSD C300 and cache flush?

2010-06-24 Thread David Dyer-Bennet
On Thu, June 24, 2010 08:58, Arne Jansen wrote: > Cross-check: we also pulled the plug while writing with the cache enabled, and it lost > 8 writes. I'm SO pleased to see somebody paranoid enough to do that kind of cross-check while benchmarking! "Benchmarking is hard!" > So I'd say, yes, it flushes i
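
For anyone who wants to repeat the pull-the-plug test, the usual structure (the same idea as the well-known diskchecker.pl) is a writer that reports a sequence number only after a synchronous write returns, plus a verifier run after power is restored. A rough Python sketch, with a hypothetical device path and assuming the device starts out zeroed:

    import os, struct, sys, zlib

    PATH = "/dev/rdsk/c1t0d0s0"   # hypothetical device under test
    RECSIZE = 4096

    def writer():
        # Stamp each record with a sequence number and CRC; O_DSYNC means
        # every sequence number printed below was acknowledged as stable.
        # Pull the plug mid-run.
        fd = os.open(PATH, os.O_WRONLY | os.O_DSYNC)
        seq = 0
        while True:
            payload = struct.pack("<Q", seq).ljust(RECSIZE - 4, b"\0")
            crc = zlib.crc32(payload) & 0xffffffff
            os.write(fd, payload + struct.pack("<I", crc))
            print("acked", seq)
            seq += 1

    def verifier():
        # After power-up, scan forward to the first record whose CRC fails.
        fd = os.open(PATH, os.O_RDONLY)
        last = -1
        while True:
            rec = os.read(fd, RECSIZE)
            if len(rec) < RECSIZE:
                break
            payload, (crc,) = rec[:-4], struct.unpack("<I", rec[-4:])
            if zlib.crc32(payload) & 0xffffffff != crc:
                break
            last = struct.unpack("<Q", payload[:8])[0]
        print("last intact record:", last)

    writer() if sys.argv[1:] == ["write"] else verifier()

The gap between the last acknowledged sequence number and the last intact one on disk is exactly the "lost 8 writes" kind of figure quoted above.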

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Robert Milkowski
On 24/06/2010 14:32, Ross Walker wrote: On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote: On 23/06/2010 18:50, Adam Leventhal wrote: Does it mean that for a dataset used for databases and similar environments where basically all blocks have a fixed size and there is no other data al

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Bob Friesenhahn
On Thu, 24 Jun 2010, Ross Walker wrote: Raidz is definitely made for sequential IO patterns, not random. To get good random IO with raidz you need a zpool with X raidz vdevs, where X = desired IOPS / IOPS of a single drive. Remarkably, I have yet to see mention of someone testing a raidz which is
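
Taking that rule of thumb at face value makes the cost concrete. The figures below are illustrative assumptions, not numbers from the thread:

    # Each raidz vdev delivers roughly the random IOPS of one member drive,
    # so sizing for random IO means sizing the *number of vdevs*.
    desired_iops   = 2000   # target random-read IOPS (assumed)
    drive_iops     = 150    # a typical 7200 rpm SATA disk (assumed)
    disks_per_vdev = 5      # e.g. 4+1 raidz1

    vdevs = -(-desired_iops // drive_iops)   # ceiling division
    print("raidz vdevs needed:", vdevs)             # 14
    print("total disks:", vdevs * disks_per_vdev)   # 70

Seventy disks for 2000 random IOPS is why the claim, if accurate, matters so much, and why the testing discussed below is worth doing.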

[zfs-discuss] zfs failsafe pool mismatch

2010-06-24 Thread Shawn Belaire
I have a customer who described this issue to me in general terms. I'd like to know how to replicate it, and what the best practice is to avoid the issue, or fix it in an accepted manner. If they apply a kernel patch and reboot, they may get messages informing them that the pool version is down-rev

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Robert Milkowski
On 24/06/2010 15:54, Bob Friesenhahn wrote: On Thu, 24 Jun 2010, Ross Walker wrote: Raidz is definitely made for sequential IO patterns, not random. To get good random IO with raidz you need a zpool with X raidz vdevs, where X = desired IOPS / IOPS of a single drive. Remarkably, I have yet to see

Re: [zfs-discuss] ZFS forensics/revert/restore shellscript and how-to.

2010-06-24 Thread Eric Jones
Where is the link to the script, and does it work with RAIDZ arrays? Thanks so much.

Re: [zfs-discuss] zfs failsafe pool mismatch

2010-06-24 Thread Cindy Swearingen
Hi Shawn, I think this can happen if you apply patch 141445-09. It should not happen in the future. I believe the workaround is this: 1. Boot the system from the correct media. 2. Install the boot blocks on the root pool disk(s). 3. Upgrade the pool. Thanks, Cindy On 06/24/10 09:24, Shawn

[zfs-discuss] ZFS Filesystem Recovery on RAIDZ Array

2010-06-24 Thread Eric Jones
This day went from a usual Thursday to the worst day of my life in the span of about 10 seconds. Here's the scenario: two computers, both Solaris 10u8, one is the primary, one is the backup. The primary system is RAIDZ2, the backup is RAIDZ with 4 drives. Every night, Primary mirrors to Backup using the 'zfs

[zfs-discuss] Gonna be stupid here...

2010-06-24 Thread Erik Trimble
But it's early (for me), and I can't remember the answer here. I'm sizing an Oracle database appliance. I'd like to get one of the F20 96GB flash accelerators to play with, but I can't imagine I'd be using the whole thing for ZIL. The DB is likely to be a couple of TB in size. Couple of ques
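
On the "how much ZIL" half of the question: a dedicated log device only has to hold the synchronous writes that arrive between transaction group commits, so only a small slice of a 96GB card is ever used. A back-of-envelope calculation under stated assumptions:

    # All figures are assumptions for illustration.
    sync_write_mb_s = 100    # sustained synchronous write rate into the pool
    txg_interval_s  = 30     # txg commit interval in seconds (assumed; tunable)
    safety_factor   = 2      # allow a couple of txgs in flight

    slog_gb = sync_write_mb_s * txg_interval_s * safety_factor / 1024.0
    print("slog capacity actually used: ~%.1f GB" % slog_gb)   # ~5.9 GB

The size of the database itself barely matters for the log; where it does matter is the L2ARC question, which is where the rest of the card could go.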

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Ross Walker
On Jun 24, 2010, at 10:42 AM, Robert Milkowski wrote: > On 24/06/2010 14:32, Ross Walker wrote: >> On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote: >> >> >>> On 23/06/2010 18:50, Adam Leventhal wrote: >>> > Does it mean that for a dataset used for databases and similar environment

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Adam Leventhal
Hey Robert, I've filed a bug to track this issue. We'll try to reproduce the problem and evaluate the cause. Thanks for bringing this to our attention. Adam On Jun 24, 2010, at 2:40 AM, Robert Milkowski wrote: > On 23/06/2010 18:50, Adam Leventhal wrote: >>> Does it mean that for a dataset used

Re: [zfs-discuss] Gonna be stupid here...

2010-06-24 Thread Darren J Moffat
On 24/06/2010 17:49, Erik Trimble wrote: But it's early (for me), and I can't remember the answer here. I'm sizing an Oracle database appliance. I'd like to get one of the F20 96GB flash accelerators to play with, but I can't imagine I'd be using the whole thing for ZIL. The DB is likely to be

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Arne Jansen
Ross Walker wrote: Raidz is definitely made for sequential IO patterns, not random. To get good random IO with raidz you need a zpool with X raidz vdevs, where X = desired IOPS / IOPS of a single drive. I have seen statements like this repeated several times, though I haven't been able to find an

Re: [zfs-discuss] c5->c9 device name change prevents beadm activate

2010-06-24 Thread Lori Alt
On 06/24/10 03:27 AM, Brian Nitz wrote: Lori, In my case what may have caused the problem is that after a previous upgrade failed, I used this zfs send/recv procedure to give me (what I thought was) a sane rpool: http://blogs.sun.com/migi/entry/broken_opensolaris_never Is it possible that a

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Robert Milkowski
On 24/06/2010 20:52, Arne Jansen wrote: Ross Walker wrote: Raidz is definitely made for sequential IO patterns, not random. To get good random IO with raidz you need a zpool with X raidz vdevs, where X = desired IOPS / IOPS of a single drive. I have seen statements like this repeated several tim

Re: [zfs-discuss] One dataset per user?

2010-06-24 Thread Paul B. Henson
On Tue, 22 Jun 2010, Arne Jansen wrote: > We found that the zfs utility is very inefficient as it does a lot of > unnecessary and costly checks. Hmm, presumably somebody at Sun doesn't agree with that assessment, or you'd think they'd take them out :). Mounting/sharing by hand outside of the zfs