Re: [zfs-discuss] Synchronous Mount?

2007-01-23 Thread Jason J. W. Williams
Hi Prashanth, This was about a year ago. I believe I ran bonnie++ and IOzone tests, and also tried to simulate an OLTP load. The 15-20% overhead for ZFS was relative to UFS on a raw disk; UFS on SVM was almost exactly 15% slower than raw UFS. UFS and XFS on raw disk were pretty similar in terms o
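
A hedged sketch of the benchmark runs described above; the dataset path, file sizes, and user are illustrative, not the original test setup:

    # bonnie++: 2 GB file-based run in a scratch directory, as an unprivileged user
    $ bonnie++ -d /tank/bench -s 2048 -u nobody
    # IOzone: automatic mode, testing file sizes up to 2 GB
    $ iozone -a -g 2g -f /tank/bench/iozone.tmp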

Re: [zfs-discuss] Thumper Origins Q

2007-01-23 Thread Jason J. W. Williams
Wow. That's an incredibly cool story. Thank you for sharing it! Does the Thumper today pretty much resemble what you saw then? Best Regards, Jason On 1/23/07, Bryan Cantrill <[EMAIL PROTECTED]> wrote: > This is a bit off-topic...but since the Thumper is the poster child > for ZFS I hope it's no

Re: [zfs-discuss] Thumper Origins Q

2007-01-23 Thread Bryan Cantrill
> This is a bit off-topic...but since the Thumper is the poster child > for ZFS I hope it's not too off-topic. > > What are the actual origins of the Thumper? I've heard varying stories > in word and print. It appears that the Thumper was the original server > Bechtolsheim designed at Kealia as a

Re: [zfs-discuss] Thumper Origins Q

2007-01-23 Thread Richard Elling
Jason J. W. Williams wrote: Hi All, This is a bit off-topic...but since the Thumper is the poster child for ZFS I hope it's not too off-topic. What are the actual origins of the Thumper? I've heard varying stories in word and print. It appears that the Thumper was the original server Bechtolshei

Re: [zfs-discuss] Thumper Origins Q

2007-01-23 Thread Torrey McMahon
Neal Pollack wrote: Jason J. W. Williams wrote: So I was curious if anyone had any insights into the history/origins of the Thumper...or just wanted to throw more rumors on the fire. ;-) Thumper was created to hold the entire electronic transcript of the Bill Clinton impeachment proceed

Re: [zfs-discuss] Thumper Origins Q

2007-01-23 Thread Neal Pollack
Jason J. W. Williams wrote: Hi All, This is a bit off-topic...but since the Thumper is the poster child for ZFS I hope it's not too off-topic. What are the actual origins of the Thumper? I've heard varying stories in word and print. It appears that the Thumper was the original server Bechtolshei

Re: [zfs-discuss] Synchronous Mount?

2007-01-23 Thread Prashanth Radhakrishnan
Hi Jason, > My company did a lot of LVM+XFS vs. SVM+UFS testing in addition to > ZFS. Overall, LVM's overhead is abysmal. We witnessed performance hits > of 50%+. SVM only reduced performance by about 15%. ZFS was similar, > though a tad higher. Yes, LVM snapshots' overhead is high. But I've seen

Re: [zfs-discuss] Synchronous Mount?

2007-01-23 Thread Jason J. W. Williams
Hi Prashanth, My company did a lot of LVM+XFS vs. SVM+UFS testing in addition to ZFS. Overall, LVM's overhead is abysmal. We witnessed performance hits of 50%+. SVM only reduced performance by about 15%. ZFS was similar, though a tad higher. Also, my understanding is you can't write to a ZFS sna

Re: [zfs-discuss] Synchronous Mount?

2007-01-23 Thread Prashanth Radhakrishnan
> > Is there some way to synchronously mount a ZFS filesystem? > > '-o sync' does not appear to be honoured. > > No there isn't. Why do you think it is necessary? Specifically, I was trying to compare ZFS snapshots with LVM snapshots on Linux. One of the tests does writes to an ext3FS (that's on
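
For reference, ZFS never grew an '-o sync' mount option; later OpenZFS releases added a per-dataset property instead. A sketch (not available in 2007-era builds; dataset and device names are hypothetical):

    # Force synchronous semantics on a dataset (later ZFS releases only):
    $ zfs set sync=always tank/test
    $ zfs get sync tank/test
    # The ext3 side of the comparison does honour a synchronous mount:
    $ mount -o sync /dev/vg0/testlv /mnt/test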

[zfs-discuss] Thumper Origins Q

2007-01-23 Thread Jason J. W. Williams
Hi All, This is a bit off-topic...but since the Thumper is the poster child for ZFS I hope it's not too off-topic. What are the actual origins of the Thumper? I've heard varying stories in word and print. It appears that the Thumper was the original server Bechtolsheim designed at Kealia as a mas

Re: [zfs-discuss] X2100 not hotswap

2007-01-23 Thread Bart Smaalders
Frank Cusack wrote: It's interesting the topics that come up here, which really have little to do with zfs. I guess it just shows how great zfs is. I mean, you would never have a UFS list that talked about the merits of SATA vs. SAS and what hardware do I buy. Also interesting is that zfs expos

Re: [zfs-discuss] file not persistent after node bounce when there is a bad disk?

2007-01-23 Thread Peter Buckingham
Hi Eric, eric kustarz wrote: The first thing I would do is see if any I/O is happening ('zpool iostat 1'). If there's none, then perhaps the machine is hung (in which case you would want to grab a couple of '::threadlist -v 10's from mdb to figure out if there are hung threads). there seems to
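
Eric's suggested triage, spelled out as commands (the pool command and mdb dcmd are as quoted in the thread):

    # Is any I/O happening? One-second samples:
    $ zpool iostat 1
    # If the box looks idle or hung, dump kernel threads to look for hangs:
    $ echo "::threadlist -v 10" | mdb -k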

Re: [zfs-discuss] need advice: ZFS config ideas for X4500 Thumper?

2007-01-23 Thread Jason J. W. Williams
Hi Peter, Ah! That clears it up for me. Thank you. Best Regards, Jason On 1/23/07, Peter Tribble <[EMAIL PROTECTED]> wrote: On 1/23/07, Jason J. W. Williams <[EMAIL PROTECTED]> wrote: > Hi Peter, > > Perhaps I'm a bit dense, but I've been befuddled by the x+y notation > myself. Is it X stripes

[zfs-discuss] Some questions I had while testing ZFS.

2007-01-23 Thread Jeffrey Scott
I'm looking at bringing up a new Solaris 10 based file server running off an older UltraSPARC-IIi 360MHz with 512MB of RAM. I've brought up the 11/06 release from scratch; no patches installed at this time. I have 4 externally attached 36GB SCSI devices off the host system's SCSI bus. After setti

Re: [zfs-discuss] file not persistent after node bounce when there is a bad disk?

2007-01-23 Thread eric kustarz
Note that the bad disk on the node caused a normal reboot to hang. I also verified that sync from the command line hung. I don't know how ZFS (or Solaris) handles situations involving bad disks...does a bad disk block proper ZFS/OS handling of all IO, even to the other healthy disks?

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-23 Thread johansen-osdev
> Note also that for most applications, the size of their IO operations > would often not match the current page size of the buffer, causing > additional performance and scalability issues. Thanks for mentioning this, I forgot about it. Since ZFS's default block size is configured to be larger th
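
One common way to avoid the size mismatch described above is to match the dataset record size to the application's I/O size (a hedged sketch; the dataset name is hypothetical):

    # ZFS records default to 128K; an application doing 8K I/O can be given
    # a matching recordsize so each write touches one record rather than
    # read-modify-writing a full 128K block:
    $ zfs set recordsize=8k tank/db
    $ zfs get recordsize tank/db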

Re: [zfs-discuss] On-failure policies for pools

2007-01-23 Thread Richard Elling
Peter Schuller wrote: Hello, There have been comparisons posted here (and in general out there on the net) for various RAID levels and the chances of e.g. double failures. One problem that is rarely addressed though, is the various edge cases that significantly impact the probability of loss

Re: [zfs-discuss] need advice: ZFS config ideas for X4500 Thumper?

2007-01-23 Thread Peter Tribble
On 1/23/07, Jason J. W. Williams <[EMAIL PROTECTED]> wrote: Hi Peter, Perhaps I'm a bit dense, but I've been befuddled by the x+y notation myself. Is it X stripes consisting of Y disks? Sorry. Took a short cut on that bit. It's x data disks + y parity. So in the case of raidz1, y=1; in the c
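
In zpool terms the x+y notation maps directly onto the vdev you create (a sketch with hypothetical device names):

    # Six disks as raidz1 = 5+1 (five data + one parity);
    # the same six disks as raidz2 would be 4+2 (two parity).
    $ zpool create tank raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0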

Re: [zfs-discuss] need advice: ZFS config ideas for X4500 Thumper?

2007-01-23 Thread Jason J. W. Williams
Hi Peter, Perhaps I'm a bit dense, but I've been befuddled by the x+y notation myself. Is it X stripes consisting of Y disks? Best Regards, Jason On 1/23/07, Peter Tribble <[EMAIL PROTECTED]> wrote: On 1/23/07, Neal Pollack <[EMAIL PROTECTED]> wrote: > Hi: (Warning, new zfs user question

Re: [zfs-discuss] X2100 not hotswap

2007-01-23 Thread Frank Cusack
It's interesting the topics that come up here, which really have little to do with zfs. I guess it just shows how great zfs is. I mean, you would never have a UFS list that talked about the merits of SATA vs. SAS and what hardware do I buy. Also interesting is that zfs exposes hardware bugs yet

Re: [zfs-discuss] X2100 not hotswap, was Re: External drive enclosures + Sun Server for massstorage

2007-01-23 Thread Toby Thain
On 23-Jan-07, at 4:51 PM, Bart Smaalders wrote: Frank Cusack wrote: yes I am an experienced Solaris admin and know all about devfsadm :-) and the older disks command. It doesn't help in this case. I think it's a BIOS thing. Linux and Windows can't see IDE drives that aren't there at boot tim

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-23 Thread Bart Smaalders
[EMAIL PROTECTED] wrote: In order to protect the user pages while a DIO is in progress, we want support from the VM that isn't presently implemented. To prevent a page from being accessed by another thread, we have to unmap the TLB/PTE entries and lock the page. There's a cost associated with t

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-23 Thread johansen-osdev
> Basically speaking - there needs to be some sort of strategy for > bypassing the ARC or even parts of the ARC for applications that > may need to advise the filesystem of either: > 1) the delicate nature of imposing additional buffering for their > data flow > 2) already well optimized applicatio

Re: [zfs-discuss] need advice: ZFS config ideas for X4500 Thumper?

2007-01-23 Thread Peter Tribble
On 1/23/07, Neal Pollack <[EMAIL PROTECTED]> wrote: Hi: (Warning, new zfs user question) I am setting up an X4500 for our small engineering site file server. It's mostly for builds, images, doc archives, certain workspace archives, misc data. ... Can someone provide an actual example

[zfs-discuss] On-failure policies for pools

2007-01-23 Thread Peter Schuller
Hello, There have been comparisons posted here (and in general out there on the net) for various RAID levels and the chances of e.g. double failures. One problem that is rarely addressed though, is the various edge cases that significantly impact the probability of loss of data. In particular,

Re: [zfs-discuss] Re: Re: External drive enclosures + Sun Server for mass

2007-01-23 Thread mike
Ooh, they support it? Cool. I'll have to explore that option now. However, I still really want eSATA. On 1/23/07, Samuel Hexter <[EMAIL PROTECTED]> wrote: We've got two Areca ARC-1261ML cards (PCI-E x8, up to 16 SATA disks each) running a 12TB zpool on snv54 and Areca's arcmsr driver. They're a

Re: [zfs-discuss] Re: Re: Re: Re: External drive enclosures + Sun

2007-01-23 Thread Jason J. W. Williams
I believe the SmartArray is an LSI like the Dell PERC isn't it? Best Regards, Jason On 1/23/07, Robert Suh <[EMAIL PROTECTED]> wrote: People trying to hack together systems might want to look at the HP DL320s http://h10010.www1.hp.com/wwpc/us/en/ss/WF05a/15351-241434-241475-241475 -f79-3232017

Re: [zfs-discuss] need advice: ZFS config ideas for X4500 Thumper?

2007-01-23 Thread Jason J. W. Williams
Hi Neal, We've been getting pretty good performance out of RAID-Z2 with 3x 6-disk RAID-Z2 stripes. More stripes mean better performance all around, particularly on random reads. But as a file server that's probably not a concern. With RAID-Z2 it seems to me 2 hot-spares are sufficient, but I
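
A sketch of the layout Jason describes, three 6-disk RAID-Z2 vdevs plus two hot-spares (controller/target numbers are hypothetical):

    $ zpool create tank \
        raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
        spare  c3t0d0 c3t1d0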

Re: [zfs-discuss] Re: ZFS direct IO

2007-01-23 Thread Jonathan Edwards
Roch, I've been chewing on this for a little while and had some thoughts. On Jan 15, 2007, at 12:02, Roch - PAE wrote: Jonathan Edwards writes: On Jan 5, 2007, at 11:10, Anton B. Rang wrote: DIRECT IO is a set of performance optimisations to circumvent shortcomings of a given filesystem.

Re: [zfs-discuss] file not persistent after node bounce when there is a bad disk?

2007-01-23 Thread Peter Buckingham
Tomas Ögren wrote: You know that this is a stripe over two 4-way mirrors, right? Yes. Performance isn't really a concern for us in this setup; persistence is. We want to be able to have access to files when disks fail. We need to be able to handle up to three disk failures. The slice layout

Re: [zfs-discuss] Re: External drive enclosures + Sun Server for massstorage

2007-01-23 Thread Bart Smaalders
Frank Cusack wrote: yes I am an experienced Solaris admin and know all about devfsadm :-) and the older disks command. It doesn't help in this case. I think it's a BIOS thing. Linux and Windows can't see IDE drives that aren't there at boot time either, and on Solaris the SATA controller runs

[zfs-discuss] Re: SAS support on Solaris

2007-01-23 Thread David J. Orman
*snip snip* > AFAIK > only Adaptec and LSI Logic are making controllers > today. With so few > manufacturers it's a scary investment. (Of course, > someone please > correct me if you know of other players.) There's a few others. Those are (of course) the major players (and with big names like

Re: [zfs-discuss] zpool split

2007-01-23 Thread Darren J Moffat
Nicolas Williams wrote: On Tue, Jan 23, 2007 at 04:49:38PM +0000, Darren J Moffat wrote: Jeremy Teo wrote: I'm defining "zpool split" as the ability to divide a pool into 2 separate pools, each with identical FSes. The typical use case would be to split an N-disk mirrored pool into an N-1 pool an

Re: [zfs-discuss] zpool split

2007-01-23 Thread Nicolas Williams
On Tue, Jan 23, 2007 at 04:49:38PM +0000, Darren J Moffat wrote: > Jeremy Teo wrote: > >I'm defining "zpool split" as the ability to divide a pool into 2 > >separate pools, each with identical FSes. The typical use case would > >be to split an N-disk mirrored pool into an N-1 pool and a 1-disk pool,

[zfs-discuss] need advice: ZFS config ideas for X4500 Thumper?

2007-01-23 Thread Neal Pollack
Hi: (Warning, new zfs user question) I am setting up an X4500 for our small engineering site file server. It's mostly for builds, images, doc archives, certain workspace archives, misc data. I'd like a trade-off between space and safety of data. I have not set up a large ZFS system be

[zfs-discuss] Re: zpool split

2007-01-23 Thread Rainer Heilke
> While contemplating "zpool split" functionality, I > wondered whether we > really want such a feature because > > 1) SVM allows it and admins are used to it. > or > 2) We can't do what we want using zfs send | zfs recv I don't think this is an either/or scenario. There are simply too many times

[zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-01-23 Thread Rainer Heilke
> For the "clone another system" zfs send/recv might be > useful Keeping in mind that you only want to send/recv one half of the ZFS mirror... Rainer

RE: [zfs-discuss] Re: Re: Re: Re: External drive enclosures + Sun

2007-01-23 Thread Robert Suh
People trying to hack together systems might want to look at the HP DL320s http://h10010.www1.hp.com/wwpc/us/en/ss/WF05a/15351-241434-241475-241475 -f79-3232017.html 12 drive bays, Intel Woodcrest, SAS (and SATA) controller. If you snoop around, you might be able to find drive carriers on eBay o

Re: [zfs-discuss] file not persistent after node bounce when there is a bad disk?

2007-01-23 Thread Tomas Ögren
On 22 January, 2007 - Peter Buckingham sent me these 5,2K bytes:

> $ zpool status
>   pool: tank
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME      STATE     READ WRITE CKSUM
>         tank      ONLINE       0     0     0
>           mirror  ONLINE       0     0

Re: [zfs-discuss] ZFS and HDLM 5.8 ... does that coexist well ? [MD21]

2007-01-23 Thread Joerg Schilling
Rob Logan <[EMAIL PROTECTED]> wrote: > > FWIW, the Micropolis 1355 is a 141 MByte (!) ESDI disk. > > The MD21 is an ESDI to SCSI converter. > > yup... it's the board in the middle left of > http://rob.com/sun/sun2/md21.jpg If you are talking about the middle right, this is an ACB-4000 series con

Re: [zfs-discuss] zpool split

2007-01-23 Thread Darren J Moffat
Jeremy Teo wrote: I'm defining "zpool split" as the ability to divide a pool into 2 separate pools, each with identical FSes. The typical use case would be to split an N-disk mirrored pool into an N-1 pool and a 1-disk pool, and then transport the 1-disk pool to another machine. Can you pick anot

Re: [zfs-discuss] Re: Backup/Restore idea?

2007-01-23 Thread Wade . Stuart
If you are talking from one host to another, snapshots should actually be a usable solution. Many filesystems only see 3-10% churn per day, and using rsync with --inplace will get you delta data on snapshots that is very similar to the actual block delta on the original server. For an e
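
A sketch of that scheme (host, path, and pool names are hypothetical): rsync rewrites changed blocks in place, and a snapshot on the backup pool then captures roughly the day's churn:

    # On the backup host: pull changes in place, then snapshot the dataset.
    $ rsync -a --inplace sourcehost:/export/data/ /backup/data/
    $ zfs snapshot backup/data@$(date +%Y%m%d)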

Re: [zfs-discuss] ZFS and HDLM 5.8 ... does that coexist well ? [MD21]

2007-01-23 Thread Rob Logan
> FWIW, the Micropolis 1355 is a 141 MByte (!) ESDI disk. > The MD21 is an ESDI to SCSI converter. yup... it's the board in the middle left of http://rob.com/sun/sun2/md21.jpg Rob

[zfs-discuss] Re: Backup/Restore idea?

2007-01-23 Thread Gerrit Sere
Hello, Disk capacity is between 70 and 100GB and most of the time the disk space is more than 90% full. Every day there is a full backup of the user data, and on Friday for system files. We keep the backup tapes for 30 days. So, it's impossible to make 30 snapshots. Scripting solutions like tar (

[zfs-discuss] zpool split

2007-01-23 Thread Jeremy Teo
I'm defining "zpool split" as the ability to divide a pool into 2 separate pools, each with identical FSes. The typical use case would be to split an N-disk mirrored pool into an N-1 pool and a 1-disk pool, and then transport the 1-disk pool to another machine. While contemplating "zpool split" fun
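
Pending such a feature, the closest approximation is replication rather than detaching a mirror half (a sketch; pool and host names are hypothetical, and the -R recursive-send flag only appeared in later builds):

    $ zfs snapshot -r tank@split
    $ zfs send -R tank@split | ssh otherhost zfs recv -d newpool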

[zfs-discuss] Re: Re: External drive enclosures + Sun Server for mass

2007-01-23 Thread Samuel Hexter
> Areca makes excellent PCI express cards - but probably have zero > support in Solaris/OpenSolaris. I use them in both Windows and Linux. > Works natively in FreeBSD too. They're the fastest cards on the market > I believe still. > > However probably not very appropriate for this since it's a Sol

Re: [zfs-discuss] zpool dumps core with did device

2007-01-23 Thread Ceri Davies
Hi Robert, On Tue, Jan 23, 2007 at 02:42:33PM +0100, Robert Milkowski wrote: > Tuesday, January 23, 2007, 1:48:50 PM, you wrote: > CD> On Tue, Jan 23, 2007 at 12:07:34PM +0100, Robert Milkowski wrote: > > >> Of course the question is why use ZFS over DID? > > CD> Actually the question is probably

Re[2]: [zfs-discuss] zpool dumps core with did device

2007-01-23 Thread Robert Milkowski
Hello Ceri, Tuesday, January 23, 2007, 1:48:50 PM, you wrote: CD> On Tue, Jan 23, 2007 at 12:07:34PM +0100, Robert Milkowski wrote: >> Hello Zoram, >> >> Tuesday, January 23, 2007, 11:27:48 AM, you wrote: >> >> ZT> Hi Ceri, >> >> ZT> I just saw your mail today. I'm replying in case you haven't

Re: [zfs-discuss] zpool dumps core with did device

2007-01-23 Thread Ceri Davies
On Tue, Jan 23, 2007 at 12:07:34PM +0100, Robert Milkowski wrote: > Hello Zoram, > > Tuesday, January 23, 2007, 11:27:48 AM, you wrote: > > ZT> Hi Ceri, > > ZT> I just saw your mail today. I'm replying in case you haven't found a > ZT> solution. > > ZT> This is > > ZT> 6475304 zfs core dumps

Re: [zfs-discuss] zpool dumps core with did device

2007-01-23 Thread Ceri Davies
On Tue, Jan 23, 2007 at 03:57:48PM +0530, Zoram Thanga wrote: > Hi Ceri, > > I just saw your mail today. I'm replying in case you haven't found a > solution. > > This is > > 6475304 zfs core dumps when trying to create new spool using "did" device > > The workaround suggests: > > Set environm

Re: [zfs-discuss] Re: How much do we really want zpool remove?

2007-01-23 Thread Mike Gerdts
On 1/23/07, Darren J Moffat <[EMAIL PROTECTED]> wrote: For the "clone another system" zfs send/recv might be useful Having support for this directly in flarcreate would be nice. It would make differential flars very quick and efficient. Mike -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Re: How much do we really want zpool remove?

2007-01-23 Thread Darren J Moffat
mario heimel wrote: this is a good point, the mirror loses all information about the zpool. This is very important for the ZFS root pool; I don't know how often I have broken the SVM mirror of the root disks to clone a system and bring the disk to another system, or use "live upgrade" and so on

Re[2]: [zfs-discuss] zpool dumps core with did device

2007-01-23 Thread Robert Milkowski
Hello Zoram, Tuesday, January 23, 2007, 11:27:48 AM, you wrote: ZT> Hi Ceri, ZT> I just saw your mail today. I'm replying in case you haven't found a ZT> solution. ZT> This is ZT> 6475304 zfs core dumps when trying to create new spool using "did" device ZT> The workaround suggests: ZT> Set

Re: [zfs-discuss] zpool dumps core with did device

2007-01-23 Thread Zoram Thanga
Hi Ceri, I just saw your mail today. I'm replying in case you haven't found a solution. This is 6475304 zfs core dumps when trying to create new spool using "did" device The workaround suggests: Set the environment variable NOINUSE_CHECK=1 and the problem does not exist. Thanks, Zoram C
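
Spelled out, the workaround looks like this (the DID device path is hypothetical):

    # Disable the in-use check so zpool create accepts the did device (bug 6475304):
    $ NOINUSE_CHECK=1 zpool create tank /dev/did/rdsk/d3s2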