Re: [zfs-discuss] swap - where is it coming from?

2010-06-09 Thread Erik Trimble
Think of swap space as "the current total virtual memory space available for program use". It is composed of unreserved RAM (that is, RAM not currently reserved for usage other than program memory - things such as file caching, certain ZFS functionality, kernel usage) PLUS whatever disk
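
A rough sketch of that accounting, using the figures from devsk's original post further down in this digest and assuming swap -l reports 512-byte blocks:

  disk swap (swap -l):    12594952 blocks * 512 bytes        = ~6297476 KB
  virtual swap (swap -s): 862080k used + 6062060k available  =  6924140 KB
  difference:             6924140 KB - 6297476 KB            =  ~626664 KB (~612 MB)

That ~612 MB is, presumably, the unreserved RAM currently being counted toward virtual swap; and since the disk device is still entirely free, the "used" portion is so far being satisfied from RAM rather than from disk.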

Re: [zfs-discuss] zfs send/receive as backup tool

2010-06-09 Thread Khyron
My inclination, based on what I've read and heard from others, is to say "no". But again, the best way to find out is to write the code. :\ On Wed, Jun 9, 2010 at 11:45, Edward Ned Harvey wrote: > > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > > boun...@opensolaris.org] On Be

Re: [zfs-discuss] ZFS host to host replication with AVS?

2010-06-09 Thread Maurice Volaski
I'm not sure where the poster got this information, or how it seems to be at odds with the design goals of AVS. Perhaps they only looked at one piece of the puzzle and then got lost? I wrote it :-) It's right there in the manual, in fact: http://docs.sun.com/source/819-6148-10/chap4.html#pgfId-

Re: [zfs-discuss] swap - where is it coming from?

2010-06-09 Thread Erik Trimble
On 6/9/2010 7:20 PM, Greg Eanes wrote: On Wed, Jun 9, 2010 at 8:17 PM, devsk wrote: $ swap -s total: 473164k bytes allocated + 388916k reserved = 862080k used, 6062060k available $ swap -l swapfile dev swaplo blocks free /dev/dsk/c6t0d0s1 215,1 8 12594952 12

Re: [zfs-discuss] swap - where is it coming from?

2010-06-09 Thread Greg Eanes
On Wed, Jun 9, 2010 at 8:17 PM, devsk wrote: > $ swap -s > total: 473164k bytes allocated + 388916k reserved = 862080k used, 6062060k > available > > $ swap -l > swapfile             dev    swaplo   blocks     free > /dev/dsk/c6t0d0s1   215,1         8 12594952 12594952 > > Can someone please do

[zfs-discuss] swap - where is it coming from?

2010-06-09 Thread devsk
$ swap -s total: 473164k bytes allocated + 388916k reserved = 862080k used, 6062060k available $ swap -l swapfile dev swaplo blocks free /dev/dsk/c6t0d0s1 215,1 8 12594952 12594952 Can someone please do the math for me here? I am not able to figure out the total. What

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-09 Thread Kyle McDonald
On 6/9/2010 5:04 PM, Edward Ned Harvey wrote: > > Everything is faster with more ram. There is no limit, unless the total > used disk in your system is smaller than the available ram in your system > ... which seems very improbable. > Off topic, bu

Re: [zfs-discuss] ZFS host to host replication with AVS?

2010-06-09 Thread Maurice Volaski
Are you sure of that? This directly contradicts what David Magda said yesterday. Yes. Just how is what he said contradictory? > Unfortunately, there are no simple, easy to implement heartbeat mechanisms for Solaris. Not so. Sun/Solaris Cluster is (fairly) simple and (relatively) easy to i

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-09 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Joe Auty > > I'm also noticing that I'm a little short on RAM. I have 6 320 gig > drives and 4 gig of RAM. If the formula is POOL_SIZE/250, this would > mean that I need at least 6.4 gig of RAM

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-09 Thread Brandon High
On Wed, Jun 9, 2010 at 9:20 AM, Geoff Nordli wrote: > Have you played with the flush interval? > > I am using iSCSI-based zvols, and I am thinking about not using the caching > in vbox and instead relying on the COMSTAR/ZFS side. > > What do you think? If you care about your data, IgnoreFlush should
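
For reference, the IgnoreFlush knob Brandon mentions is set per attached disk with VBoxManage setextradata; a sketch below, assuming an IDE-attached disk 0 on a VM named "myvm" (the VM name is a placeholder, and the key path differs for SATA/AHCI controllers - see the manual chapter linked elsewhere in this thread). A value of 0 makes VirtualBox honor the guest's flush requests, which appears to be the safe setting being pointed at here:

  $ VBoxManage setextradata "myvm" \
      "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0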

Re: [zfs-discuss] ZFS host to host replication with AVS?

2010-06-09 Thread Richard Elling
On Jun 8, 2010, at 5:17 PM, Moazam Raja wrote: > Hi all, I'm trying to accomplish server to server storage replication > in synchronous mode where each server is a Solaris/OpenSolaris machine > with its own local storage. > > For Linux, I've been able to achieve what I want with DRBD but I'm > hop

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-09 Thread Garrett D'Amore
You can hardly have too much. At least 8 GB, maybe 16 would be good. The benefit will depend on your workload, but zfs and buffer cache will use it all if you have a big enough read working set. -- Garrett Joe Auty wrote: >I'm also noticing that I'm a little short on RAM. I have 6 320 gig >

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-09 Thread Geoff Nordli
> >Brandon High wrote: >On Tue, Jun 8, 2010 at 10:33 AM, besson3c wrote: > > >What VM software are you using? There are a few knobs you can turn in VBox >which will help with slow storage. See >http://www.virtualbox.org/manual/ch12.html#id2662300 for instructions on >reducing the flush interval.
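
For the flush interval itself, the knob from that manual chapter is another setextradata setting; a hedged sketch, again assuming an IDE-attached disk 0 on a VM named "myvm" (both the VM name and the 50000000-byte interval below are placeholder values, not recommendations):

  $ VBoxManage setextradata "myvm" \
      "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/FlushInterval" 50000000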

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-09 Thread Geoff Nordli
> On Behalf Of Joe Auty >Sent: Tuesday, June 08, 2010 11:27 AM > > >I'd love to use Virtualbox, but right now it (3.2.2 commercial which I'm >evaluating, I haven't been able to compile OSE on the CentOS 5.5 host yet) is >giving me kernel panics on the host while starting up VMs which are obviousl

Re: [zfs-discuss] Add remote disk to zfs pool

2010-06-09 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Alvin Lobo > > Is there a way that i can add one HDD from server A and one HDD from > server B to a zfs pool so that there is an online snapshot taken at > regular intervals. hence maintaining

Re: [zfs-discuss] zfs send/receive as backup tool

2010-06-09 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Toyama Shunji > > Certainly I feel it is difficult, but is it logically impossible to > write a filter program to do that, with reasonable memory use? Good question. I don't know the answer.

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-09 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of besson3c > > I'm wondering if somebody can kindly direct me to a sort of newbie way > of assessing whether my ZFS pool performance is a bottleneck that can > be improved upon, and/or whether I

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-09 Thread Travis Tabbal
NFS writes on ZFS blow chunks performance-wise. The only way to increase the write speed is by using a slog; the problem is that a "proper" slog device (one that doesn't lose transactions) does not exist for a reasonable price. The least expensive SSD that will work is the Intel X25-E, and eve
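
For context, adding a dedicated log device to an existing pool is a one-line operation; a minimal sketch, assuming a pool named tank and an SSD at c2t0d0 (both names hypothetical):

  $ zpool add tank log c2t0d0   # single slog; use "log mirror dev1 dev2" to mirror it
  $ zpool status tank           # the device now shows up under a "logs" section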

Re: [zfs-discuss] Add remote disk to zfs pool

2010-06-09 Thread Cindy Swearingen
Hi Alvin, Which Solaris release is this? If you are using an OpenSolaris release (build 131), you might consider the zpool split feature, which lets you clone a mirrored pool: attach the HDD to the pool, let it resilver, then use zpool split to split off the clone. Then, move the HDD and
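
A sketch of that attach/resilver/split sequence, assuming a single-disk pool named tank on c0t0d0 and the spare HDD at c1t0d0 (all names hypothetical):

  $ zpool attach tank c0t0d0 c1t0d0   # turn the pool into a two-way mirror
  $ zpool status tank                 # wait until the resilver completes
  $ zpool split tank tank2            # detach c1t0d0 as a new pool "tank2"

By default the new pool is not imported; move the HDD to the other machine and run "zpool import tank2" there.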

Re: [zfs-discuss] Drive showing as "removed"

2010-06-09 Thread Cindy Swearingen
Hi Joe, I have no clue why this drive was removed, particularly for a one-time failure. I would reconnect/reseat this disk and see if the system recognizes it. If it resilvers, then you're back in business, but I would use zpool status and fmdump to monitor this pool and its devices more often.
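
The monitoring Cindy suggests might look something like this (assuming the pool is named tank; fmdump -eV dumps the raw FMA error telemetry, including disk probe failures):

  $ zpool status -v tank   # per-device read/write/checksum error counters
  $ fmdump                 # list of diagnosed faults
  $ fmdump -eV             # raw error events behind those diagnoses
  $ fmadm faulty           # components currently marked faulty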

Re: [zfs-discuss] Drive showing as "removed"

2010-06-09 Thread Joe Auty
Cindy Swearingen wrote: According to this report, I/O to this device caused a probe failure because the device isn't available on May 31. I was curious if this device had any previous issues over a longer period of time. Failing or faulted drives can also kill your pool's perf

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-09 Thread Joe Auty
I'm also noticing that I'm a little short on RAM. I have 6 320 gig drives and 4 gig of RAM. If the formula is POOL_SIZE/250, this would mean that I need at least 6.4 gig of RAM. What role does RAM play with queuing and caching and other things which might impact overall disk performance? How much
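
Worked out, the 6.4 GB figure presumably comes from usable raidz capacity rather than raw capacity (a guess at the arithmetic, not something stated in the message):

  usable pool size: 5 data drives * 320 GB = 1600 GB
  1600 GB / 250 = 6.4 GB of RAM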

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-09 Thread Ross Walker
On Jun 8, 2010, at 1:33 PM, besson3c wrote: Sure! The pool consists of 6 SATA drives configured as RAID-Z. There are no special read or write cache drives. This pool is shared to several VMs via NFS, these VMs manage email, web, and a Quickbooks server running on FreeBSD, Linux, and Wind

Re: [zfs-discuss] ZFS host to host replication with AVS?

2010-06-09 Thread Fredrich Maney
On Wed, Jun 9, 2010 at 7:40 AM, Maurice Volaski wrote: >> For Linux, I've been able to achieve what I want with DRBD but I'm >> hoping I can find a similar solution on Solaris so that I can leverage >> ZFS. It seems that solution is Sun Availability Suite (AVS)? > > AVS is like DRBD, but only to a

Re: [zfs-discuss] ZFS host to host replication with AVS?

2010-06-09 Thread Maurice Volaski
For Linux, I've been able to achieve what I want with DRBD but I'm hoping I can find a similar solution on Solaris so that I can leverage ZFS. It seems that solution is Sun Availability Suite (AVS)? AVS is like DRBD, but only to a point. If the drives on your primary fail, the primary will star

Re: [zfs-discuss] Add remote disk to zfs pool

2010-06-09 Thread Roy Sigurd Karlsbakk
- "Alvin Lobo" skrev: > Is there a way that i can add one HDD from server A and one HDD from > server B to a zfs pool so that there is an online snapshot taken at > regular intervals. hence maintaining a copy on both HDD's. I think zfs send/receive might be what you are looking for. Vennlig

[zfs-discuss] Add remote disk to zfs pool

2010-06-09 Thread Alvin Lobo
Is there a way that I can add one HDD from server A and one HDD from server B to a zfs pool so that there is an online snapshot taken at regular intervals, hence maintaining a copy on both HDDs? -- This message posted from opensolaris.org