Re: [zfs-discuss] Poor relative performance of SAS over SATA drives

2011-10-27 Thread Jason J. W. Williams
if you get rid of the HBA and log device, and run with ZIL > disabled (if your work load is compatible with a disabled ZIL.) By "get rid of the HBA" I assume you mean put in a battery-backed RAID card instead? -J ___ zfs-discuss mailing list zfs-discus

[zfs-discuss] Long resilver time

2010-09-26 Thread Jason J. W. Williams
I just witnessed a resilver that took 4h for 27 GB of data. Setup is 3x raid-z2 stripes with 6 disks per raid-z2. Disks are 500 GB in size. No checksum errors. It seems like an exorbitantly long time. The other 5 disks in the stripe with the replaced disk were at 90% busy and ~150 IO/s each during

Re: [zfs-discuss] Long resilver time

2010-09-26 Thread Jason J. W. Williams
Upgrading is definitely an option. What is the current snv favorite for ZFS stability? I apologize, with all the Oracle/Sun changes I haven't been paying as close attention to bug reports on zfs-discuss as I used to. -J Sent via iPhone Is your e-mail Premiere? On Sep 26, 2010, at 10:22, Roy

Re: [zfs-discuss] Long resilver time

2010-09-27 Thread Jason J. W. Williams
134 it is. This is an OpenSolaris rig that's going to be replaced within the next 60 days, so just need to get it to something that won't throw false checksum errors like the 120-123 builds do and has decent rebuild times. Future boxes will be NexentaStor. Thank you guys. :) -J On Sun, Sep 26

Re: [zfs-discuss] Long resilver time

2010-09-27 Thread Jason J. W. Williams
Err...I meant Nexenta Core. -J On Mon, Sep 27, 2010 at 12:02 PM, Jason J. W. Williams < jasonjwwilli...@gmail.com> wrote: > 134 it is. This is an OpenSolaris rig that's going to be replaced within > the next 60 days, so just need to get it to something that won't throw

Re: [zfs-discuss] Intermittent ZFS hang

2010-09-27 Thread Jason J. W. Williams
If one was sticking with OpenSolaris for the short term, is something older than 134 more stable/less buggy? Not using de-dupe. -J On Thu, Sep 23, 2010 at 6:04 PM, Richard Elling wrote: > Hi Charles, > There are quite a few bugs in b134 that can lead to this. Alas, due to the > new > regime, the

[zfs-discuss] Unusual Resilver Result

2010-09-29 Thread Jason J. W. Williams
Hi, I just replaced a drive (c12t5d0 in the listing below). For the first 6 hours of the resilver I saw no issues. However, sometime during the last hour of the resilver, the new drive and two others in the same RAID-Z2 stripe threw a couple of checksum errors. Also, two of the other drives in the str

Re: [zfs-discuss] Unusual Resilver Result

2010-09-30 Thread Jason J. W. Williams
Thanks Tuomas. I'll run the scrub. It's an aging X4500. -J On Thu, Sep 30, 2010 at 3:25 AM, Tuomas Leikola wrote: > On Thu, Sep 30, 2010 at 9:08 AM, Jason J. W. Williams < > jasonjwwilli...@gmail.com> wrote: > >> >> Should I be worried about these check
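
A scrub along those lines, with a hypothetical pool name, is just (a sketch, not from the original thread):

  zpool scrub tank
  # re-check once it finishes; the CKSUM column shows per-device counts
  zpool status -v tank
  # clear the counters afterwards if the errors turn out to be transient
  zpool clear tank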

[zfs-discuss] Long import due to spares.

2010-10-05 Thread Jason J. W. Williams
Just for history as to why Fishworks was running on this box...we were in the beta program and have upgraded along the way. This box is an X4240 with 16x 146GB disks running the Feb 2010 release of FW with de-dupe. We were getting ready to re-purpose the box and getting our data off. We then delet

Re: [zfs-discuss] [OpenIndiana-discuss] Question about WD drives with Super Micro systems

2011-08-06 Thread Jason J. W. Williams
WD's drives have gotten better the last few years but their quality is still not very good. I doubt they test their drives extensively for heavy-duty server configs, particularly since you don't see them inside any of the major server manufacturers' boxes. Hitachi in particular does well in mas

Re: [zfs-discuss] [OpenIndiana-discuss] Question about WD drives with Super Micro systems

2011-08-06 Thread Jason J. W. Williams
This might be related to your issue: http://blog.mpecsinc.ca/2010/09/western-digital-re3-series-sata-drives.html On Saturday, August 6, 2011, Roy Sigurd Karlsbakk wrote: >> In my experience, SATA drives behind SAS expanders just don't work. >> They "fail" in the manner you >> describe, sooner or

Re: [zfs-discuss] [storage-discuss] ZFS iscsi snapshot - VSS compatible?

2009-01-07 Thread Jason J. W. Williams
Since iSCSI is block-level, I don't think the iSCSI intelligence at the file level you're asking for is feasible. VSS is used at the file-system level on either NTFS partitions or over CIFS. -J On Wed, Jan 7, 2009 at 5:06 PM, Mr Stephen Yum wrote: > Hi all, > > If I want to make a snapshot of an

Re: [zfs-discuss] need advice: ZFS config ideas for X4500 Thumper?

2007-01-23 Thread Jason J. W. Williams
Hi Neal, We've been getting pretty good performance out of RAID-Z2 with 3x 6-disk RAID-Z2 stripes. More stripes mean better performance all around...particularly on random reads. But as a file-server that's probably not a concern. With RAID-Z2 it seems to me 2 hot-spares are quite sufficient, but I
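
For illustration only (device names are made up), a layout like the one described, three 6-disk raidz2 vdevs plus two hot spares, would be created roughly as:

  zpool create tank \
    raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
    spare  c3t0d0 c3t1d0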

Re: [zfs-discuss] Re: Re: Re: Re: External drive enclosures + Sun

2007-01-23 Thread Jason J. W. Williams
I believe the SmartArray is an LSI like the Dell PERC isn't it? Best Regards, Jason On 1/23/07, Robert Suh <[EMAIL PROTECTED]> wrote: People trying to hack together systems might want to look at the HP DL320s http://h10010.www1.hp.com/wwpc/us/en/ss/WF05a/15351-241434-241475-241475 -f79-3232017

Re: [zfs-discuss] need advice: ZFS config ideas for X4500 Thumper?

2007-01-23 Thread Jason J. W. Williams
Hi Peter, Perhaps I'm a bit dense, but I've been befuddled by the x+y notation myself. Is it X stripes consisting of Y disks? Best Regards, Jason On 1/23/07, Peter Tribble <[EMAIL PROTECTED]> wrote: On 1/23/07, Neal Pollack <[EMAIL PROTECTED]> wrote: > Hi: (Warning, new zfs user question

Re: [zfs-discuss] need advice: ZFS config ideas for X4500 Thumper?

2007-01-23 Thread Jason J. W. Williams
Hi Peter, Ah! That clears it up for me. Thank you. Best Regards, Jason On 1/23/07, Peter Tribble <[EMAIL PROTECTED]> wrote: On 1/23/07, Jason J. W. Williams <[EMAIL PROTECTED]> wrote: > Hi Peter, > > Perhaps I'm a bit dense, but I've been befuddled by the

[zfs-discuss] Thumper Origins Q

2007-01-23 Thread Jason J. W. Williams
Hi All, This is a bit off-topic...but since the Thumper is the poster child for ZFS I hope it's not too off-topic. What are the actual origins of the Thumper? I've heard varying stories in word and print. It appears that the Thumper was the original server Bechtolsheim designed at Kealia as a mas

Re: [zfs-discuss] Synchronous Mount?

2007-01-23 Thread Jason J. W. Williams
Hi Prashanth, My company did a lot of LVM+XFS vs. SVM+UFS testing in addition to ZFS. Overall, LVM's overhead is abysmal. We witnessed performance hits of 50%+. SVM only reduced performance by about 15%. ZFS was similar, though a tad higher. Also, my understanding is you can't write to a ZFS sna

Re: [zfs-discuss] Thumper Origins Q

2007-01-23 Thread Jason J. W. Williams
Wow. That's an incredibly cool story. Thank you for sharing it! Does the Thumper today pretty much resemble what you saw then? Best Regards, Jason On 1/23/07, Bryan Cantrill <[EMAIL PROTECTED]> wrote: > This is a bit off-topic...but since the Thumper is the poster child > for ZFS I hope its no

Re: [zfs-discuss] Synchronous Mount?

2007-01-23 Thread Jason J. W. Williams
Hi Prashanth, This was about a year ago. I believe I ran bonnie++ and IOzone tests. Tried also to simulate an OLTP load. The 15-20% overhead for ZFS was vs. UFS on a raw disk...UFS on SVM was almost exactly 15% lower performance than raw UFS. UFS and XFS on raw disk were pretty similar in terms o

Re: [zfs-discuss] Re: Thumper Origins Q

2007-01-24 Thread Jason J. W. Williams
'Cept the 3511 is highway robbery for what you get. ;-) Best Regards, Jason On 1/24/07, Richard Elling <[EMAIL PROTECTED]> wrote: Peter Eriksson wrote: >> too much of our future roadmap, suffice it to say that one should expect >> much, much more from Sun in this vein: innovative software and i

Re: [zfs-discuss] Thumper Origins Q

2007-01-24 Thread Jason J. W. Williams
Hi Wee, Having snapshots in the filesystem that work so well is really nice. How are y'all quiescing the DB? Best Regards, J On 1/24/07, Wee Yeh Tan <[EMAIL PROTECTED]> wrote: On 1/25/07, Bryan Cantrill <[EMAIL PROTECTED]> wrote: > ... > after all, what was ZFS going to do with that expensive

Re: [zfs-discuss] ZFS or UFS - what to do?

2007-01-26 Thread Jason J. W. Williams
Hi Jeff, We're running a FLX210 which I believe is an Engenio 2884. In our case it also is attached to a T2000. ZFS has run VERY stably for us with data integrity issues at all. We did have a significant latency problem caused by ZFS flushing the write cache on the array after every write, but t

Re: [zfs-discuss] ZFS or UFS - what to do?

2007-01-26 Thread Jason J. W. Williams
Correction: "ZFS has run VERY stably for us with data integrity issues at all." should read "ZFS has run VERY stably for us with NO data integrity issues at all." On 1/26/07, Jason J. W. Williams <[EMAIL PROTECTED]> wrote: Hi Jeff, We're running a FLX210 whi

Re: [zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-01-26 Thread Jason J. W. Williams
To be fair, you can replace vdevs with same-sized or larger vdevs online. The issue is that you cannot replace with smaller vdevs nor can you eliminate vdevs. In other words, I can migrate data around without downtime, I just can't shrink or eliminate vdevs without send/recv. This is where the ph

Re: [zfs-discuss] multihosted ZFS

2007-01-26 Thread Jason J. W. Williams
You could use SAN zoning of the affected LUNs to keep multiple hosts from seeing the zpool. When failover time comes, you change the zoning to make the LUNs visible to the new host, then import. When the old host reboots, it won't find any zpool. Better safe than sorry. Or change the LUN
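
A minimal sketch of that manual failover flow, with a hypothetical pool name:

  # on the old host, if it is still up:
  zpool export dbpool
  # after re-zoning the LUNs to the standby host:
  zpool import dbpool
  # if the old host died without exporting, force the import:
  zpool import -f dbpool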

Re: [zfs-discuss] Project Proposal: Availability Suite

2007-01-26 Thread Jason J. W. Williams
Could the replication engine eventually be integrated more tightly with ZFS? That would be slick alternative to send/recv. Best Regards, Jason On 1/26/07, Jim Dunham <[EMAIL PROTECTED]> wrote: Project Overview: I propose the creation of a project on opensolaris.org, to bring to the community

Re: [zfs-discuss] Re: ZFS or UFS - what to do?

2007-01-29 Thread Jason J. W. Williams
Hi Jeff, Maybe I mis-read this thread, but I don't think anyone was saying that using ZFS on-top of an intelligent array risks more corruption. Given my experience, I wouldn't run ZFS without some level of redundancy, since it will panic your kernel in a RAID-0 scenario where it detects a LUN is

Re: [zfs-discuss] Project Proposal: Availability Suite

2007-01-29 Thread Jason J. W. Williams
Thank you for the detailed explanation. It is very helpful to understand the issue. Is anyone successfully using SNDR with ZFS yet? Best Regards, Jason On 1/26/07, Jim Dunham <[EMAIL PROTECTED]> wrote: Jason J. W. Williams wrote: > Could the replication engine eventually be integr

Re: [zfs-discuss] hot spares - in standby?

2007-01-29 Thread Jason J. W. Williams
Hi Guys, I seem to remember the Massive Array of Independent Disk guys ran into a problem I think they called static friction, where idle drives would fail on spin up after being idle for a long time: http://www.eweek.com/article2/0,1895,1941205,00.asp Would that apply here? Best Regards, Jason

Re: [zfs-discuss] Project Proposal: Availability Suite

2007-01-29 Thread Jason J. W. Williams
Hi Jim, Thank you very much for the heads up. Unfortunately, we need the write-cache enabled for the application I was thinking of combining this with. Sounds like SNDR and ZFS need some more soak time together before you can use both to their full potential together? Best Regards, Jason On 1/2

Re: [zfs-discuss] hot spares - in standby?

2007-01-29 Thread Jason J. W. Williams
ork. Would dramatically cut down on the power. What do y'all think? Best Regards, Jason On 1/29/07, Toby Thain <[EMAIL PROTECTED]> wrote: On 29-Jan-07, at 11:02 PM, Jason J. W. Williams wrote: > Hi Guys, > > I seem to remember the Massive Array of Independent Disk guys ran

Re: [zfs-discuss] Zfs best practice for 2U SATA iSCSI NAS

2007-02-17 Thread Jason J. W. Williams
Hi Nicholas, ZFS itself is very stable and very effective as a fast FS in our experience. If you browse the archives of the list you'll see that NFS performance is pretty acceptable, with some performance/RAM quirks around small files: http://www.opensolaris.org/jive/message.jspa?threadID=19858 ht

Re: [zfs-discuss] Zfs best practice for 2U SATA iSCSI NAS

2007-02-19 Thread Jason J. W. Williams
Hi Nicholas, Actually Virtual Iron, they have a nice system at the moment with live migration of windows guest. Ah. We looked at them for some Windows DR. They do have a nice product. 3. Which leads to: coming from Debian, how easy is system updates? I remember with OpenBSD system updates u

Re: [zfs-discuss] HELIOS and ZFS cache

2007-02-22 Thread Jason J. W. Williams
Hi Eric, Everything Mark said. We as a customer ran into this running MySQL on a Thumper (and T2000). We solved it on the Thumper by limiting the ARC to 4GB: /etc/system: set zfs:zfs_arc_max = 0x100000000 # 4GB This has worked marvelously over the past 50 days. The ARC stays around 5-6GB now. L
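
For reference, the tunable goes in /etc/system and takes effect at boot (0x100000000 is 4 GB); checking the live ARC size with kstat is one way to confirm it, a sketch rather than anything from the original post:

  # /etc/system
  set zfs:zfs_arc_max = 0x100000000   # cap the ARC at 4 GB

  # after reboot, report the current ARC size in bytes
  kstat -p zfs:0:arcstats:size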

Re: [zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-02-22 Thread Jason J. W. Williams
Hi Przemol, I think Casper had a good point bringing up the data integrity features when using ZFS for RAID. Big companies do a lot of things "just because that's the certified way" that end up biting them in the rear. Trusting your SAN arrays is one of them. That all being said, the need to do m

Re: [zfs-discuss] ARGHH. An other panic!!

2007-02-23 Thread Jason J. W. Williams
Hi Gino, We've noticed similar strangeness with ZFS on MPXIO. If you actively fail over the path, everything works hunky dory. However, if one of the paths disappears unexpectedly (i.e. FC switch dies...or an array controller conks out) then ZFS will panic. UFS on MPXIO in a similar situation do

Re: [zfs-discuss] Re: ARGHH. An other panic!!

2007-02-26 Thread Jason J. W. Williams
Hi Gino, Was there more than one LUN in the RAID-Z using the port you disabled? -J On 2/26/07, Gino Ruopolo <[EMAIL PROTECTED]> wrote: Hi Jason, saturday we made some tests and found that disabling a FC port under heavy load (MPXio enabled) often takes to a panic. (using a RAID-Z !) No prob

Re: [zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-02-27 Thread Jason J. W. Williams
-) -J On 2/27/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: On Thu, Feb 22, 2007 at 12:21:50PM -0700, Jason J. W. Williams wrote: > Hi Przemol, > > I think Casper had a good point bringing up the data integrity > features when using ZFS for RAID. Big companies do a lot of

Re: [zfs-discuss] X2200-M2

2007-03-12 Thread Jason J. W. Williams
Hi Brian, To my understanding the X2100 M2 and X2200 M2 are basically the same board OEM'd from Quanta...except the 2200 M2 has two sockets. As to ZFS and their weirdness, it would seem to me that fixing it would be more an issue of the SATA/SCSI driver. I may be wrong here. -J On 3/12/07, Bri

Re: [zfs-discuss] X2200-M2

2007-03-12 Thread Jason J. W. Williams
Hi Marty, We'd love to beta the driver. Currently, have 5 X2100 M2s in production and 1 in development. Best Regards, Jason On 3/12/07, Marty Faltesek <[EMAIL PROTECTED]> wrote: On Mon, 2007-03-12 at 12:14 -0700, Frank Cusack wrote: > On March 12, 2007 2:50:14 PM -0400 Rayson Ho <[EMAIL PROTEC

Re: [zfs-discuss] C'mon ARC, stay small...

2007-03-15 Thread Jason J. W. Williams
Hi Jim, My understanding is that the DNLC can consume quite a bit of memory too, and the ARC limitations (and memory culler) don't clean the DNLC yet. So if you're working with a lot of smaller files, you can still go way over your ARC limit. Anyone, please correct me if I've got that wrong. -J

Re: [zfs-discuss] Re: Re: Re: ZFS memory and swap usage

2007-03-19 Thread Jason J. W. Williams
Hi Rainer, While I would recommend upgrading to Build 54 or newer to use the system tunable, it's not that big of a deal to set the ARC on boot up. We've done it on a T2000 for a while, until we could take it down for an extended period of time to upgrade it. Definitely WOULD NOT run a database on

Re: [zfs-discuss] C'mon ARC, stay small...

2007-03-30 Thread Jason J. W. Williams
Hi Guys, Rather than starting a new thread I thought I'd continue this thread. I've been running Build 54 on a Thumper since mid-January and wanted to ask a question about the zfs_arc_max setting. We set it to "0x100000000 # 4GB", however it's creeping over that till our kernel memory usage is nea

[zfs-discuss] ZFS Snapshot "destroy to"

2007-05-11 Thread Jason J. W. Williams
Hey All, Is it possible (or even technically feasible) for zfs to have a "destroy to" feature? Basically destroy any snapshot older than a certain date? Best Regards, Jason ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolari

Re: [zfs-discuss] ZFS Snapshot "destroy to"

2007-05-11 Thread Jason J. W. Williams
Hi Mark, Thank you very much. That's what I was kind of afraid of. It's fine to script it, just would be nice to have a built-in function. :-) Thank you again. Best Regards, Jason On 5/11/07, Mark J Musante <[EMAIL PROTECTED]> wrote: On Fri, 11 May 2007, Jason J. W. Williams wrot
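
One sketch of such a script, keeping the newest N snapshots of a hypothetical filesystem instead of parsing creation dates in sh (leave the echo in until the list looks right):

  #!/bin/sh
  FS=tank/home
  KEEP=7
  zfs list -H -t snapshot -o name -s creation -r "$FS" | grep "^$FS@" > /tmp/snaps.$$
  TOTAL=`wc -l < /tmp/snaps.$$`
  DROP=`expr $TOTAL - $KEEP`
  if [ "$DROP" -gt 0 ]; then
      head -"$DROP" /tmp/snaps.$$ | while read snap; do
          echo zfs destroy "$snap"
      done
  fi
  rm -f /tmp/snaps.$$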

[zfs-discuss] ZFS ARC & DNLC Limitation

2007-09-24 Thread Jason J. W. Williams
Hello All, Awhile back (Feb '07) when we noticed ZFS was hogging all the memory on the system, y'all were kind enough to help us use the arc_max tunable to attempt to limit that usage to a hard value. Unfortunately, at the time a sticky problem was that the hard limit did not include DNLC entries

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-03 Thread Jason J. W. Williams
Hi Dale, We're testing out the enhanced arc_max enforcement (track DNLC entries) using Build 72 right now. Hopefully, it will fix the memory creep, which is the only real downside to ZFS for DB work it seems to me. Frankly, our DB loads have improved performance with ZFS. I suspect it's because

[zfs-discuss] Fracture Clone Into FS

2007-10-17 Thread Jason J. W. Williams
Hey Guys, It's not possible yet to fracture a snapshot or clone into a self-standing filesystem, is it? Basically, I'd like to fracture a snapshot/clone into its own FS so I can roll back past that snapshot in the original filesystem and still keep that data. Thank you in advance. Best Regards, Jaso

Re: [zfs-discuss] Fracture Clone Into FS

2007-10-18 Thread Jason J. W. Williams
A (getting H), promote H, > then delete C, D, and E. That would leave you with: > > A -- H > \ > -- B -- F -- G > > Is that anything at all like what you're after? > > > --Bill > > On Wed, Oct 17, 2007 at 10:00:03PM -0600, Jason J. W. Williams wrote: >

Re: [zfs-discuss] Fracture Clone Into FS

2007-10-18 Thread Jason J. W. Williams
F). 4.) Promote clone_B. 5.) If clone_Bs data doesn't work out, promote clone_F to roll forward. Thank you in advance. Best Regards, Jason On 10/18/07, Jason J. W. Williams <[EMAIL PROTECTED]> wrote: > Hi Bill, > > You've got it 99%. I want to roll E back to say B, and keep G
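
The building blocks being discussed look roughly like this (dataset names are invented for illustration):

  # preserve the current state as its own writable filesystem
  zfs snapshot tank/data@keep
  zfs clone tank/data@keep tank/data_fracture
  # promote the clone so it no longer depends on the origin snapshot
  zfs promote tank/data_fracture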

Re: [zfs-discuss] Yager on ZFS

2007-11-09 Thread Jason J. W. Williams
> A quick Google of ext3 fsck did not yield obvious examples of why people > needed to run fsck on ext3, though it did remind me that by default ext3 runs > fsck just for the hell of it every N (20?) mounts - could that have been part > of what you were seeing? I'm not sure if that's what Robe

[zfs-discuss] Count objects/inodes

2007-11-09 Thread Jason J. W. Williams
Hi Guys, Someone asked me how to count the number of inodes/objects in a ZFS filesystem and I wasn't exactly sure. "zdb -dv " seems like a likely candidate but I wanted to find out for sure. As to why you'd want to know this, I don't know their reasoning but I assume it has to do with the maximum
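
For what it's worth, zdb takes a dataset argument there; a hedged example with a placeholder name (output format varies by build, but the dataset summary line reports an object count, and -dv adds per-object detail):

  zdb -d tank/home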

Re: [zfs-discuss] Yager on ZFS

2007-11-13 Thread Jason J. W. Williams
Hi Darren, > Ah, your "CPU end" was referring to the NFS client cpu, not the storage > device CPU. That wasn't clear to me. The same limitations would apply > to ZFS (or any other filesystem) when running in support of an NFS > server. > > I thought you were trying to describe a qualitative diff

[zfs-discuss] X4500 ILOM thinks disk 20 is faulted, ZFS thinks not.

2007-12-04 Thread Jason J. W. Williams
Hey Guys, Have any of y'all seen a condition where the ILOM considers a disk faulted (status is 3 instead of 1), but ZFS keeps writing to the disk and doesn't report any errors? I'm going to do a scrub tomorrow and see what comes back. I'm curious what caused the ILOM to fault the disk. Any advice

Re: [zfs-discuss] X4500 ILOM thinks disk 20 is faulted, ZFS thinks not.

2007-12-04 Thread Jason J. W. Williams
ount seems a little fishy...like iostat -E doesn't like the X4500 for some reason. Thank you again for your help. Best Regards, Jason On Dec 4, 2007 2:54 AM, Ralf Ramge <[EMAIL PROTECTED]> wrote: > Jason J. W. Williams wrote: > > Have any of y'all seen a condition where

Re: [zfs-discuss] ZFS performance with Oracle

2007-12-05 Thread Jason J. W. Williams
Seconded. Redundant controllers means you get one controller that locks them both up, as much as it means you've got backup. Best Regards, Jason On Mar 21, 2007 4:03 PM, Richard Elling <[EMAIL PROTECTED]> wrote: > JS wrote: > > I'd definitely prefer owning a sort of SAN solution that would basica

[zfs-discuss] ZFS Not Offlining Disk on SCSI Sense Error (X4500)

2008-01-03 Thread Jason J. W. Williams
Hello, There seems to be a persistent issue we have with ZFS where one of the SATA disk in a zpool on a Thumper starts throwing sense errors, ZFS does not offline the disk and instead hangs all zpools across the system. If it is not caught soon enough, application data ends up in an inconsistent s

Re: [zfs-discuss] ZFS Not Offlining Disk on SCSI Sense Error (X4500)

2008-01-03 Thread Jason J. W. Williams
Hi Albert, Thank you for the link. ZFS isn't offlining the disk in b77. -J On Jan 3, 2008 3:07 PM, Albert Chin <[EMAIL PROTECTED]> wrote: > > On Thu, Jan 03, 2008 at 02:57:08PM -0700, Jason J. W. Williams wrote: > > There seems to be a persistent issue we have with ZFS wh

Re: [zfs-discuss] ZFS Not Offlining Disk on SCSI Sense Error (X4500)

2008-01-03 Thread Jason J. W. Williams
timeouts independent >of SCSI timeouts. > > Neither of these is trivial, and both potentially compromise data > integrity, hence the lack of such features. There's no easy solution to > the problem, but we're happy to hear ideas. > > - Eric > > On Thu, Jan 0

Re: [zfs-discuss] ZFS Not Offlining Disk on SCSI Sense Error (X4500)

2008-01-03 Thread Jason J. W. Williams
se situations? > How about '::zio_state'? > > - Eric > > > On Thu, Jan 03, 2008 at 03:11:39PM -0700, Jason J. W. Williams wrote: > > Hi Albert, > > > > Thank you for the link. ZFS isn't offlining the disk in b77. > > > > -J > > &g

[zfs-discuss] MySQL/ZFS backup program posted.

2008-01-17 Thread Jason J. W. Williams
Hey Y'all, I've posted the program (SnapBack) my company developed internally for backing up production MySQL servers using ZFS snapshots: http://blogs.digitar.com/jjww/?itemid=56 Hopefully, it'll save other folks some time. We use it a lot for standing up new MySQL slaves as well. Best Regards,
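
The general shape of that kind of backup (a sketch, not SnapBack itself; names are hypothetical) is to hold FLUSH TABLES WITH READ LOCK from one client session while the snapshot is taken, release it, and then ship the snapshot:

  # with the read lock held from a mysql session:
  zfs snapshot tank/mysql@nightly
  # after UNLOCK TABLES, the snapshot can be sent to a standby
  zfs send tank/mysql@nightly | ssh standby zfs receive backup/mysql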

Re: [zfs-discuss] LVM on ZFS

2008-01-21 Thread Jason J. W. Williams
Hey Thiago, SVM is a direct replacement for LVM. Also, you'll notice about a 30% performance boost if you move from LVM to SVM. At least we did when we moved a couple of years ago. -J On Jan 21, 2008 8:09 AM, Thiago Sobral <[EMAIL PROTECTED]> wrote: > Hi folks, > > I need to manage volumes like

Re: [zfs-discuss] De-duplication in ZFS

2008-01-21 Thread Jason J. W. Williams
It'd be a really nice feature. Combined with baked-in replication it would be a nice alternative to our DD appliances. -J On Jan 21, 2008 2:03 PM, John Martinez <[EMAIL PROTECTED]> wrote: > > Great question. I've been wondering this myself over the past few > weeks, as de-dup is becoming more pop

Re: [zfs-discuss] 3ware support

2008-02-12 Thread Jason J. W. Williams
X4500 problems seconded. Still having issues with port resets due to the Marvell driver. Though they seem considerably more transient and less likely to lock up the entire system in the most recent (>b72) OpenSolaris builds. -J On Feb 12, 2008 9:35 AM, Carson Gaspar <[EMAIL PROTECTED]> wrote: >

Re: [zfs-discuss] raid-z random read performance

2006-11-02 Thread Jason J. W. Williams
Hi Robert, Out of curiosity would it be possible to see the same test but hitting the disk with write operations instead of read? Best Regards, Jason On 11/2/06, Robert Milkowski <[EMAIL PROTECTED]> wrote: Hello zfs-discuss, Server: x4500, 2x Opteron 285 (dual-core), 16GB RAM, 48x500GB file

Re: [zfs-discuss] Filebench, X4200 and Sun Storagetek 6140

2006-11-03 Thread Jason J. W. Williams
Hi Louwtjie, Are you running FC or SATA-II disks in the 6140? How many spindles too? Best Regards, Jason On 11/3/06, Louwtjie Burger <[EMAIL PROTECTED]> wrote: Hi there I'm busy with some tests on the above hardware and will post some scores soon. For those that do _not_ have the above avail

Re: [zfs-discuss] ZFS for Linux 2.6

2006-11-06 Thread Jason J. W. Williams
Hi Yuen, Not to my knowledge. I believe this project is working on it though: http://zfs-on-fuse.blogspot.com/ Best Regards, Jason On 11/6/06, Yuen L. Lee <[EMAIL PROTECTED]> wrote: I'm curious whether there is a version of Linux 2.6 ZFS available? Many thanks. This message posted from opens

[zfs-discuss] ZFS send/receive VS. scp

2006-11-15 Thread Jason J. W. Williams
Hi there, I've been comparing using the ZFS send/receive function over SSH to simply scp'ing the contents of snapshot, and have found for me the performance is 2x faster for scp. Has anyone else noticed ZFS send/receive to be noticeably slower? Best Regards, Jason __
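
For comparison, the two paths being measured look roughly like this (host and dataset names are placeholders); ssh compression or a lighter cipher often changes the numbers noticeably:

  # stream the snapshot
  zfs send tank/fs@snap | ssh remotehost zfs receive tank/fscopy
  # plain file copy of the same snapshot's contents via the .zfs directory
  scp -r /tank/fs/.zfs/snapshot/snap remotehost:/tank/fscopy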

Re: [zfs-discuss] performance question

2006-11-15 Thread Jason J. W. Williams
Listman, What's the average size of your files? Do you have many file deletions/moves going on? I'm not that familiar with how Perforce handles moving files around. XFS is bad at small files (worse than most file systems), as SGI optimized it for larger files (> 64K). You might see a performance

Re: [zfs-discuss] ZFS send/receive VS. scp

2006-11-15 Thread Jason J. W. Williams
hat's going on. Best Regards, Jason On 11/15/06, Darren J Moffat <[EMAIL PROTECTED]> wrote: Jason J. W. Williams wrote: > Hi there, > > I've been comparing using the ZFS send/receive function over SSH to > simply scp'ing the contents of snapshot, and have found

Re: [zfs-discuss] Production ZFS Server Death (06/06)

2006-11-28 Thread Jason J. W. Williams
Do both RAID-Z and Mirror redundancy use checksums on ZFS? Or just RAID-Z? Thanks in advance, J On 11/28/06, David Dyer-Bennet <[EMAIL PROTECTED]> wrote: On 11/28/06, Elizabeth Schwartz <[EMAIL PROTECTED]> wrote: > So I rebuilt my production mail server as Solaris 10 06/06 with zfs, it ran > fo

[zfs-discuss] Convert Zpool RAID Types

2006-11-28 Thread Jason J. W. Williams
Hello, Is it possible to non-destructively change RAID types in zpool while the data remains on-line? -J ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Convert Zpool RAID Types

2006-11-28 Thread Jason J. W. Williams
sentially I've got 12 disks to work with. Anyway, long form of trying to convert from RAID-Z to RAID-1. Any help is much appreciated. Best Regards, Jason On 11/28/06, Richard Elling <[EMAIL PROTECTED]> wrote: Jason J. W. Williams wrote: > Is it possible to non-destructively change RAI

Re: [zfs-discuss] Convert Zpool RAID Types

2006-11-29 Thread Jason J. W. Williams
from 200 to 20 once we cut the replication. Since the masters and slaves were using the same volume groups and RAID-Z was striping across all of them on both the masters and slaves, I think this was a big problem. Any comments? Best Regards, Jason On 11/29/06, Richard Elling <[EMAIL PROT

Re: [zfs-discuss] Convert Zpool RAID Types

2006-11-30 Thread Jason J. W. Williams
to me that there is some detailed information which would be needed for a full analysis. So, to keep the ball rolling, I'll respond generally. Jason J. W. Williams wrote: > Hi Richard, > > Been watching the stats on the array and the cache hits are < 3% on > these volumes. We

Re: [zfs-discuss] ZFS related kernel panic

2006-12-04 Thread Jason J. W. Williams
Hi all, Having experienced this, it would be nice if there was an option to offline the filesystem instead of kernel panicking on a per-zpool basis. If it's a system-critical partition like a database I'd prefer it to kernel-panic and thereby trigger a fail-over of the application. However, if it

Re: [zfs-discuss] ZFS related kernel panic

2006-12-04 Thread Jason J. W. Williams
Any chance we might get a short refresher warning when creating a striped zpool? O:-) Best Regards, Jason On 12/4/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote: Jason J. W. Williams wrote: > Hi all, > > Having experienced this, it would be nice if there was an option to > offl

Re: [zfs-discuss] ZFS failover without multipathing

2006-12-06 Thread Jason J. W. Williams
Hi Luke, We've been using MPXIO (STMS) with ZFS quite solidly for the past few months. Failover is instantaneous when a write operation occurs after a path is pulled. Our environment is similar to yours, dual-FC ports on the host, and 4 FC ports on the storage (2 per controller). Depending on y

Re: [zfs-discuss] ZFS failover without multipathing

2006-12-06 Thread Jason J. W. Williams
lped prove it wasn't ZFS or the storage. Does this help? Best Regards, Jason On 12/6/06, Douglas Denny <[EMAIL PROTECTED]> wrote: On 12/6/06, Jason J. W. Williams <[EMAIL PROTECTED]> wrote: > We've been using MPXIO (STMS) with ZFS quite solidly for the past few > mon

Re: [zfs-discuss] ZFS failover without multipathing

2006-12-06 Thread Jason J. W. Williams
<[EMAIL PROTECTED]> wrote: On 12/6/06, Jason J. W. Williams <[EMAIL PROTECTED]> wrote: > The configuration is a T2000 connected to a StorageTek FLX210 array > via Qlogic QLA2342 HBAs and Brocade 3850 switches. We currently RAID-Z > the LUNs across 3 array volume groups. For performan

Re: [zfs-discuss] ZFS failover without multipathing

2006-12-06 Thread Jason J. W. Williams
ng with clusters, we have a 5 minute failover requirement on the entire cluster to move. Therefore, it would be ideal to not have STMS(mpxio) enabled on the machines. Luke Schwab --- "Jason J. W. Williams" <[EMAIL PROTECTED]> wrote: > Hi Doug, > > The configuration is a

Re: [zfs-discuss] Re: zpool import takes to long with large numbers of file systems

2006-12-06 Thread Jason J. W. Williams
Hi Luke, Is the 4884 using two or four ports? Also, how many FSs are involved? Best Regards, Jason On 12/6/06, Luke Schwab <[EMAIL PROTECTED]> wrote: I, too, experienced a long delay while importing a zpool on a second machine. I do not have any filesystems in the pool. Just the Solaris 10 Op

Re: [zfs-discuss] System pause peculiarity with mysql on zfs

2006-12-07 Thread Jason J. W. Williams
Hi Dale, Are you using MyISAM or InnoDB? Also, what's your zpool configuration? Best Regards, Jason On 12/7/06, Dale Ghent <[EMAIL PROTECTED]> wrote: Hey all, I run a netra X1 as the mysql db server for my small personal web site. This X1 has two drives in it with SVM-mirrored UFS slices for

Re: [zfs-discuss] Re: zpool import takes to long with large numbers of file systems

2006-12-07 Thread Jason J. W. Williams
resides instead of zfs looking through every disk attached to the machine. Luke Schwab --- "Jason J. W. Williams" <[EMAIL PROTECTED]> wrote: > Hey Luke, > > Do you have IM? > > My Yahoo IM ID is [EMAIL PROTECTED] > -J > > On 12/6/06, Luke Schwab <[EMAIL P

Re: [zfs-discuss] Re: System pause peculiarity with mysql on zfs

2006-12-07 Thread Jason J. W. Williams
That's gotta be what it is. All our MySQL IOP issues have gone away once we moved to RAID-1 from RAID-Z. -J On 12/7/06, Anton B. Rang <[EMAIL PROTECTED]> wrote: This does look like the ATA driver bug rather than a ZFS issue per se. (For the curious, the reason ZFS triggers this when UFS doesn't

Re: [zfs-discuss] Re: System pause peculiarity with mysql on zfs

2006-12-07 Thread Jason J. W. Williams
Hi Dale, For what its worth, the SX releases tend to be pretty stable. I'm not sure if snv_52 has made a SX release yet. We ran for over 6 months on SX 10/05 (snv_23) with no downtime. Best Regards, Jason On 12/7/06, Dale Ghent <[EMAIL PROTECTED]> wrote: On Dec 7, 2006, at 6:14 PM, Anton B. Ra

Re: [zfs-discuss] Re: ZFS failover without multipathing

2006-12-07 Thread Jason J. W. Williams
Hi Luke, I wonder if it is the HBA. We had issues with Solaris and LSI HBAs back when we were using an Xserve RAID. Haven't had any of the issues you're describing between our LSI array and the Qlogic HBAs we're using now. If you have another type of HBA I'd try it. MPXIO and ZFS haven't ever c

Re: [zfs-discuss] Re: zpool import takes to long with large numbers of file systems

2006-12-08 Thread Jason J. W. Williams
pr 29 2006 c1t0d0s1 -> /dev/dsk/c1t0d0s1 lrwxrwxrwx 1 cx158393 staff 18 Apr 29 2006 c1t16d0s1 -> /dev/dsk/c1t16d0s1 lrwxrwxrwx 1 cx158393 staff 18 Apr 29 2006 c1t17d0s1 -> /dev/dsk/c1t17d0s1 Then: zpool import -d /mydkslist mypool Hope that helps... -r
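
Spelled out, the trick quoted above is to point zpool import at a directory containing links to just the relevant devices so it does not probe every disk on the system (device names here are examples):

  mkdir /mydkslist
  ln -s /dev/dsk/c1t0d0s1  /mydkslist/c1t0d0s1
  ln -s /dev/dsk/c1t16d0s1 /mydkslist/c1t16d0s1
  ln -s /dev/dsk/c1t17d0s1 /mydkslist/c1t17d0s1
  zpool import -d /mydkslist mypool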

Re: [zfs-discuss] ZFS Storage Pool advice

2006-12-12 Thread Jason J. W. Williams
Hi Kory, It depends on the capabilities of your array in our experience...and also the zpool type. If you're going to do RAID-Z in a write intensive environment you're going to have a lot more I/Os with three LUNs than a single large LUN. Your controller may go nutty. Also, (Richard can address

[zfs-discuss] Instructions for ignoring ZFS write cache flushing on intelligent arrays

2006-12-15 Thread Jason J. W. Williams
Hi Folks, Roch Bourbonnais and Richard Elling helped me tremendously with the issue of ZFS killing performance on arrays with battery-backed cache. Since this seems to have been mentioned a bit recently, and there are no instructions on how to fix it on Sun StorageTek/Engenio arrays, I wanted to

Re: [zfs-discuss] Instructions for ignoring ZFS write cache flushing on intelligent arrays

2006-12-15 Thread Jason J. W. Williams
Hi Jeremy, It would be nice if you could tell ZFS to turn off fsync() for ZIL writes on a per-zpool basis. That being said, I'm not sure there's a consensus on that...and I'm sure not smart enough to be a ZFS contributor. :-) The behavior is a reality we had to deal with and work around, so I pos
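
As an aside, and purely as an assumption on my part rather than anything from the original write-up: later builds expose a host-side tunable that tells ZFS to skip the cache-flush request entirely, which is only safe when the array cache is battery backed:

  # /etc/system -- only with battery-backed (non-volatile) array cache
  set zfs:zfs_nocacheflush = 1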

Re: [zfs-discuss] ZFS in a SAN environment

2006-12-18 Thread Jason J. W. Williams
It seems to me that the optimal scenario would be network filesystems on top of ZFS, so you can get the data portability of a SAN, but let ZFS make all of the decisions. Short of that, ZFS on SAN-attached JBODs would give a similar benefit. Having benefited tremendously from being able to easily d

Re: [zfs-discuss] Re: ZFS and SE 3511

2006-12-19 Thread Jason J. W. Williams
I do see this note in the 3511 documentation: "Note - Do not use a Sun StorEdge 3511 SATA array to store single instances of data. It is more suitable for use in configurations where the array has a backup or archival role." My understanding of this particular scare-tactic wording (it's also in

Re: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Jason J. W. Williams
> Shouldn't there be a big warning when configuring a pool > with no redundancy and/or should that not require a -f flag ? why? what if the redundancy is below the pool .. should we warn that ZFS isn't directly involved in redundancy decisions? Because if the host controller port goes flaky an

Re: [zfs-discuss] Instructions for ignoring ZFS write cache flushing on intelligent arrays

2006-12-19 Thread Jason J. W. Williams
Hi Roch, That sounds like a most excellent resolution to me. :-) I believe Engenio devices support SBC-2. It seems to me making intelligent decisions for end-users is generally a good policy. Best Regards, Jason On 12/19/06, Roch - PAE <[EMAIL PROTECTED]> wrote: Jason J. W. Williams

Re: Re[2]: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Jason J. W. Williams
Hi Robert, I don't think its about assuming the admin is an idiot. It happened to me in development and I didn't expect it...I hope I'm not an idiot. :-) Just observing the list, a fair amount of people don't expect it. The likelihood you'll miss this one little bit of very important information

Re: Re[4]: [zfs-discuss] ZFS in a SAN environment

2006-12-19 Thread Jason J. W. Williams
Hi Robert I didn't take any offense. :-) I completely agree with you that zpool striping leverages standard RAID-0 knowledge in that if a device disappears your RAID group goes poof. That doesn't really require a notice...was just trying to be complete. :-) The surprise to me was that detecting

Re: [zfs-discuss] Re: ZFS and SE 3511

2006-12-20 Thread Jason J. W. Williams
n nice behavior to have, since popping and reinserting triggered a rebuild of the drive. Best Regards, Jason On 12/19/06, Toby Thain <[EMAIL PROTECTED]> wrote: On 19-Dec-06, at 2:42 PM, Jason J. W. Williams wrote: >> I do see this note in the 3511 documentation: "Note - Do not

Re: Re[6]: [zfs-discuss] ZFS in a SAN environment

2006-12-20 Thread Jason J. W. Williams
Hi Robert, I agree with others here that the kernel panic is undesired behavior. If ZFS would simply offline the zpool and not kernel panic, that would obviate my request for an informational message. It'd be pretty darn obvious what was going on. Best Regards, Jason On 12/20/06, Robert Milkows
