Re: [zfs-discuss] SLOG striping?

2010-06-21 Thread Bob Friesenhahn
On Mon, 21 Jun 2010, Edward Ned Harvey wrote:
> log and cache devices don't stripe. You can add more than one, and

The term 'stripe' has been so severely abused in this forum that it is impossible to know what someone is talking about when they use the term. Seemingly intelligen
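For reference, adding several log or cache devices just gives ZFS independent top-level devices to load-share across; there is no RAID-0 interleave involved. A minimal sketch, assuming a pool named tank and hypothetical device names:

  # Two independent log devices; ZFS load-shares synchronous writes
  # across them rather than striping in the RAID-0 sense.
  zpool add tank log c0t0d0 c0t1d0
  zpool status tank   # both appear as separate top-level log devices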

Re: [zfs-discuss] SLOG striping?

2010-06-21 Thread Richard Elling
On Jun 21, 2010, at 11:17 AM, Arne Jansen wrote:
> Roy Sigurd Karlsbakk wrote:
>> Hi all
>> I plan to set up a new system with four Crucial RealSSD 256GB SSDs for both
>> SLOG and L2ARC. The plan is to use four small slices for the SLOG, striping
>> two mirrors. I have seen questions in here abou

Re: [zfs-discuss] SLOG striping?

2010-06-21 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>
> I plan to set up a new system with four Crucial RealSSD 256GB SSDs for
> both SLOG and L2ARC. The plan is to use four small slices for the SLOG,
> striping two mirrors.

Re: [zfs-discuss] One dataset per user?

2010-06-21 Thread James C. McPherson
On 22/06/10 01:05 AM, Fredrich Maney wrote:
> On Mon, Jun 21, 2010 at 8:59 AM, James C. McPherson wrote:
> [...]
>> So when I'm trying to figure out who I need to yell at because they're
>> using more than our acceptable limit (30GB), I have to run
>> "du -s /builds/[zyx]". And that takes time. Lots of tim
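This is where the one-dataset-per-user layout pays off: ZFS keeps per-dataset space accounting, so the question is answered instantly instead of by a recursive du. A sketch, assuming the build areas live under a builds dataset (names hypothetical):

  # Instant per-user accounting from dataset properties
  zfs list -r -o name,used,quota builds

  # versus the slow walk over a single shared filesystem
  du -sh /builds/*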

Re: [zfs-discuss] zfs periodic writes on idle system [Re: Getting desktop to auto sleep]

2010-06-21 Thread Jürgen Keil
> Why does zfs produce a batch of writes every 30 seconds on opensolaris b134
> (5 seconds on a post b142 kernel), when the system is idle?

It was caused by b134 gnome-terminal. I had an iostat running in a gnome-terminal window, and the periodic iostat output is written to a temporary file by gno
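A quick way to chase this kind of thing down, sketched with standard OpenSolaris tools (the pool name is a hypothetical placeholder):

  # Watch pool-level I/O to catch the periodic write bursts
  zpool iostat rpool 5

  # During a burst, attribute write(2) calls to process names (Ctrl-C to stop)
  dtrace -n 'syscall::write:entry { @[execname] = count(); }'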

Re: [zfs-discuss] does sharing an SSD as slog and l2arc reduces its life span?

2010-06-21 Thread Arne Jansen
Wes Felter wrote:
> On 6/19/10 3:56 AM, Arne Jansen wrote:
>> while thinking about using the OCZ Vertex 2 Pro SSD (which according to
>> the spec page has supercaps built in) as a shared slog and L2ARC device
> IMO it might be better to use the smallest (50GB, maybe overprovisioned
> down to ~20GB) Vertex 2

[zfs-discuss] Seriously degraded SAS multipathing performance

2010-06-21 Thread Josh Simon
I'm seeing seriously degraded performance with round-robin SAS multipathing. I'm hoping you guys can help me achieve full throughput across both paths.

My System Config:
OpenSolaris snv_134
2 x E5520 2.4 GHz Xeon Quad-Core Processors
48 GB RAM
2 x LSI SAS 9200-8e (eight-port external 6Gb/s SATA
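For diagnosing this, the stock Solaris multipathing tools are the usual starting point. A hedged sketch (the LUN device path below is a made-up example):

  # Confirm MPxIO is enabled and list multipathed logical units
  stmsboot -L
  mpathadm list lu

  # Inspect path states and the load-balance policy for one LUN
  mpathadm show lu /dev/rdsk/c0t5000C5001234ABCDd0s2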

Re: [zfs-discuss] SLOG striping?

2010-06-21 Thread Arne Jansen
Roy Sigurd Karlsbakk wrote:

- mirroring l2arc won't gain anything, as it doesn't contain any information that cannot be rebuilt if a device is lost. Further, if a device is lost, the system just uses the remaining devices. So I wouldn't waste any space mirroring l2arc, I'll just stripe them. I
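In practice that means cache devices are simply added individually; losing one just shrinks the cache. A minimal sketch with hypothetical pool and device names:

  # Independent cache devices -- no redundancy needed for L2ARC
  zpool add tank cache c2t0d0 c2t1d0

  # A cache device can also be dropped again at any time
  zpool remove tank c2t0d0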

Re: [zfs-discuss] does sharing an SSD as slog and l2arc reduces its life span?

2010-06-21 Thread Wes Felter
On 6/19/10 3:56 AM, Arne Jansen wrote:
> while thinking about using the OCZ Vertex 2 Pro SSD (which according to
> the spec page has supercaps built in) as a shared slog and L2ARC device

IMO it might be better to use the smallest (50GB, maybe overprovisioned down to ~20GB) Vertex 2 Pro as slog and a m

Re: [zfs-discuss] SLOG striping?

2010-06-21 Thread Brandon High
On Mon, Jun 21, 2010 at 11:24 AM, Roy Sigurd Karlsbakk wrote:
> Any idea if something like a small, decently priced, supercapped SLC SSD
> exists?

The new OCZ Deneva drives (or others based on the SF-1500) should work well, but I don't know if there's pricing available yet.

-B
--
Brandon High

Re: [zfs-discuss] SLOG striping?

2010-06-21 Thread Roy Sigurd Karlsbakk
> - mirroring l2arc won't gain anything, as it doesn't contain any
> information that cannot be rebuilt if a device is lost. Further, if a
> device is lost, the system just uses the remaining devices. So I wouldn't
> waste any space mirroring l2arc, I'll just stripe them.

I don't plan to attempt

Re: [zfs-discuss] SLOG striping?

2010-06-21 Thread Arne Jansen
Roy Sigurd Karlsbakk wrote:
> Hi all
> I plan to set up a new system with four Crucial RealSSD 256GB SSDs for
> both SLOG and L2ARC. The plan is to use four small slices for the SLOG,
> striping two mirrors. I have seen questions in here about the theoretical
> benefit of doing this, but I haven't seen

Re: [zfs-discuss] SLOG striping?

2010-06-21 Thread Bob Friesenhahn
On Mon, 21 Jun 2010, Roy Sigurd Karlsbakk wrote:
> I plan to set up a new system with four Crucial RealSSD 256GB SSDs for
> both SLOG and L2ARC. The plan is to use four small slices for the SLOG,
> striping two mirrors. I have seen questions in here about the theoretical
> benefit of doing this, but I

Re: [zfs-discuss] VXFS to ZFS Quota

2010-06-21 Thread Roy Sigurd Karlsbakk
- Original Message -
> Hi
> Currently I have 400+ users with quotas set to a 500MB limit. The file
> system is currently VxFS (Veritas).
>
> I am planning to migrate all these home directories to a new server with
> ZFS. How can I migrate the quotas?
>
> I can create 400+ file syste
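Creating 400+ filesystems with matching quotas scripts easily. A minimal sketch, assuming a pool named tank and a users.txt holding one login per line (both hypothetical):

  # One home filesystem per user, each with the old 500MB limit
  while read u; do
      zfs create -o quota=500m tank/home/"$u"
  done < users.txt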

[zfs-discuss] SLOG striping?

2010-06-21 Thread Roy Sigurd Karlsbakk
Hi all

I plan to set up a new system with four Crucial RealSSD 256GB SSDs for both SLOG and L2ARC. The plan is to use four small slices for the SLOG, striping two mirrors. I have seen questions in here about the theoretical benefit of doing this, but I haven't seen any answers, just some doubt a
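For the record, the layout described would be built roughly like this, assuming a pool named tank with slice 0 of each SSD as the small SLOG slice and slice 1 as L2ARC (all names hypothetical):

  # Two mirrored log pairs; ZFS load-shares across the pairs
  zpool add tank log mirror c1t0d0s0 c1t1d0s0 mirror c1t2d0s0 c1t3d0s0

  # The large remaining slices as unmirrored cache devices
  zpool add tank cache c1t0d0s1 c1t1d0s1 c1t2d0s1 c1t3d0s1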

Re: [zfs-discuss] One dataset per user?

2010-06-21 Thread Roy Sigurd Karlsbakk
- Original Message -
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
> >
> > Close to 1TB SSD cache will also help to boost read speeds,
>
> Remember, this will not boost large sequential reads. (Could po

Re: [zfs-discuss] Many checksum errors during resilver.

2010-06-21 Thread Cindy Swearingen
Hi Justin,

This looks like an older Solaris 10 release. If so, this is likely a zpool status display bug: the checksum errors appear to be occurring on the replacement device, but they are not. I would review the steps described in the hardware section of the ZFS troubleshooting wiki
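A hedged sketch of how to double-check that, with a hypothetical pool name standing in for the real one:

  # See which device the errors are charged to, then reset the counters
  zpool status -v tank
  zpool clear tank

  # The underlying FMA ereports name the device that actually erred
  fmdump -eV | grep -i checksum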

[zfs-discuss] Many checksum errors during resilver.

2010-06-21 Thread Justin Daniel Meyer
I've decided to upgrade my home server capacity by replacing the disks in one of my mirror vdevs. The procedure appeared to work out, but during resilver, a couple million checksum errors were logged on the new device. I've read through quite a bit of the archive and searched around a bit, but

Re: [zfs-discuss] One dataset per user?

2010-06-21 Thread Bob Friesenhahn
On Mon, 21 Jun 2010, Arne Jansen wrote:
> Especially if the characteristics are different I find it a good idea to
> mix all on one set of spindles. This way you have lots of spindles for
> fast access and lots of space for the sake of space. If you divide the
> available spindles into two sets you wil

Re: [zfs-discuss] One dataset per user?

2010-06-21 Thread Bob Friesenhahn
On Mon, 21 Jun 2010, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>>
>> Close to 1TB SSD cache will also help to boost read speeds,
>
> Remember, this will not boost large sequential reads. (Could po

Re: [zfs-discuss] One dataset per user?

2010-06-21 Thread Fredrich Maney
On Mon, Jun 21, 2010 at 8:59 AM, James C. McPherson wrote:
[...]
> So when I'm trying to figure out who I need to yell at because they're
> using more than our acceptable limit (30GB), I have to run
> "du -s /builds/[zyx]". And that takes time. Lots of time.
[...]

Why not just use quotas? fpsm
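Worth noting that by mid-2010 ZFS also had per-user quotas on a single shared dataset (zpool version 15, snv_114 and later), so quotas alone don't force a dataset-per-user split. A sketch with a hypothetical builds dataset and user name:

  # Per-user quota and accounting on one shared dataset
  zfs set userquota@fred=30g builds
  zfs userspace -o name,used,quota builds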

Re: [zfs-discuss] One dataset per user?

2010-06-21 Thread Darren J Moffat
On 21/06/2010 13:59, James C. McPherson wrote:
> On 21/06/10 10:38 PM, Edward Ned Harvey wrote:
>> From: James C. McPherson [mailto:j...@opensolaris.org]
>>
>> On the build systems that I maintain inside the firewall, we mandate one
>> filesystem per user, which is a very great boon for system administration

Re: [zfs-discuss] One dataset per user?

2010-06-21 Thread James C. McPherson
On 21/06/10 10:38 PM, Edward Ned Harvey wrote:
>> From: James C. McPherson [mailto:j...@opensolaris.org]
>>
>> On the build systems that I maintain inside the firewall,
>> we mandate one filesystem per user, which is a very great
>> boon for system administration.
>
> What's the reasoning behind it?

Politeness

Re: [zfs-discuss] One dataset per user?

2010-06-21 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>
> Close to 1TB SSD cache will also help to boost read speeds,

Remember, this will not boost large sequential reads. (It could possibly even hurt them.) This will
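One reason L2ARC is conservative about sequential workloads: by default the l2arc_noprefetch tunable keeps prefetched (streaming) buffers out of the cache devices. A hedged sketch of overriding that default in /etc/system (takes effect after a reboot; tune with care):

  # /etc/system -- let prefetched/streaming reads be cached in L2ARC
  set zfs:l2arc_noprefetch = 0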

Re: [zfs-discuss] One dataset per user?

2010-06-21 Thread Edward Ned Harvey
> From: James C. McPherson [mailto:j...@opensolaris.org]
>
> On the build systems that I maintain inside the firewall,
> we mandate one filesystem per user, which is a very great
> boon for system administration.

What's the reasoning behind it?

> My management scripts are
> considerably faster

Re: [zfs-discuss] One dataset per user?

2010-06-21 Thread Arne Jansen
David Magda wrote:
> On Jun 21, 2010, at 05:00, Roy Sigurd Karlsbakk wrote:
>
>> So far the plan is to keep it in one pool for design and
>> administration simplicity. Why would you want to split up (net) 40TB
>> into more pools? Seems to me that'll mess up things a bit, having to
>> split up SSDs

Re: [zfs-discuss] One dataset per user?

2010-06-21 Thread Roy Sigurd Karlsbakk
- Original Message -
> On Jun 21, 2010, at 05:00, Roy Sigurd Karlsbakk wrote:
>
> > So far the plan is to keep it in one pool for design and
> > administration simplicity. Why would you want to split up (net) 40TB
> > into more pools? Seems to me that'll mess up things a bit, having to

Re: [zfs-discuss] One dataset per user?

2010-06-21 Thread David Magda
On Jun 21, 2010, at 05:00, Roy Sigurd Karlsbakk wrote:
> So far the plan is to keep it in one pool for design and
> administration simplicity. Why would you want to split up (net) 40TB
> into more pools? Seems to me that'll mess up things a bit, having to
> split up SSDs for use on different pools,

Re: [zfs-discuss] One dataset per user?

2010-06-21 Thread Roy Sigurd Karlsbakk
> Btw, what did you plan to use as L2ARC/slog?

I was thinking of using four Crucial RealSSD 256GB SSDs with a small RAID1+0 for SLOG and the rest for L2ARC. The system will be mainly used for reads, so I don't think the SLOG needs will be too demanding. If you have another suggestion, please tell :

Re: [zfs-discuss] One dataset per user?

2010-06-21 Thread Roy Sigurd Karlsbakk
- Original Message -
> On Jun 20, 2010, at 11:55, Roy Sigurd Karlsbakk wrote:
>
> > There will also be a few common areas for each department and
> > perhaps a backup area.
>
> The backup area should be on a different set of disks.
>
> IMHO, a backup isn't a backup unless it is an /in