Re: [zfs-discuss] question about COW and snapshots

2011-06-17 Thread Ross Walker
On Jun 16, 2011, at 7:23 PM, Erik Trimble wrote: > On 6/16/2011 1:32 PM, Paul Kraus wrote: >> On Thu, Jun 16, 2011 at 4:20 PM, Richard Elling >> wrote: >> >>> You can run OpenVMS :-) >> Since *you* brought it up (I was not going to :-), how does VMS' >> versioning FS handle those issues ? >>

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-17 Thread Ross Walker
On Jun 17, 2011, at 7:06 AM, Edward Ned Harvey wrote: > I will only say, that regardless of whether or not that is or ever was true, > I believe it's entirely irrelevant. Because your system performs read and > write caching and buffering in ram, the tiny little ram on the disk can't > possibly

Re: [zfs-discuss] dual protocal on one file system?

2011-03-16 Thread Ross Walker
On Mar 16, 2011, at 8:13 AM, Paul Kraus wrote: > On Tue, Mar 15, 2011 at 11:00 PM, Edward Ned Harvey > wrote: > >> BTW, what is the advantage of the kernel cifs server as opposed to samba? >> It seems, years ago, somebody must have been standing around and saying >> "There is a glaring deficien
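
For context, the in-kernel services let a single dataset be served over both protocols straight from ZFS properties. A minimal sketch with hypothetical pool/share names; casesensitivity must be set at creation for well-behaved SMB clients:

  # create the dataset with mixed case sensitivity for Windows clients
  zfs create -o casesensitivity=mixed tank/share
  # export it over NFS and over the in-kernel CIFS/SMB service
  zfs set sharenfs=on tank/share
  zfs set sharesmb=name=share tank/share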

Re: [zfs-discuss] SAS/short stroking vs. SSDs for ZIL

2010-12-25 Thread Ross Walker
On Dec 24, 2010, at 1:21 PM, Richard Elling wrote: > Latency is what matters most. While there is a loose relationship between > IOPS > and latency, you really want low latency. For 15krpm drives, the average > latency > is 2ms for zero seeks. A decent SSD will beat that by an order of magni

Re: [zfs-discuss] ZFS ... open source moving forward?

2010-12-15 Thread Ross Walker
On Dec 15, 2010, at 6:48 PM, Bob Friesenhahn wrote: > On Wed, 15 Dec 2010, Linder, Doug wrote: > >> But it sure would be nice if they spared everyone a lot of effort and >> annoyance and just GPL'd ZFS. I think the goodwill generated > > Why do you want them to "GPL" ZFS? In what way would

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-12-08 Thread Ross Walker
On Dec 8, 2010, at 11:41 PM, Edward Ned Harvey wrote: > For anyone who cares: > > I created an ESXi machine. Installed two guest (centos) machines and > vmware-tools. Connected them to each other via only a virtual switch. Used > rsh to transfer large quantities of data between the two guest

Re: [zfs-discuss] [OpenIndiana-discuss] iops...

2010-12-08 Thread Ross Walker
On Dec 7, 2010, at 9:49 PM, Edward Ned Harvey wrote: >> From: Ross Walker [mailto:rswwal...@gmail.com] >> >> Well besides databases there are VM datastores, busy email servers, busy >> ldap servers, busy web servers, and I'm sure the list goes on and on. >>

Re: [zfs-discuss] [OpenIndiana-discuss] iops...

2010-12-07 Thread Ross Walker
On Dec 7, 2010, at 12:46 PM, Roy Sigurd Karlsbakk wrote: >> Bear a few things in mind: >> >> iops is not iops. > > > I am totally aware of these differences, but it seems some people think RAIDz > is nonsense unless you don't need speed at all. My testing shows (so far) > that the speed is q

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-17 Thread Ross Walker
On Wed, Nov 17, 2010 at 3:00 PM, Pasi Kärkkäinen wrote: > On Wed, Nov 17, 2010 at 10:14:10AM +, Bruno Sousa wrote: >>    Hi all, >> >>    Let me tell you all that the MC/S *does* make a difference...I had a >>    windows fileserver using an ISCSI connection to a host running snv_134 >>    with

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-16 Thread Ross Walker
On Nov 16, 2010, at 7:49 PM, Jim Dunham wrote: > On Nov 16, 2010, at 6:37 PM, Ross Walker wrote: >> On Nov 16, 2010, at 4:04 PM, Tim Cook wrote: >>> AFAIK, esx/i doesn't support L4 hash, so that's a non-starter. >> >> For iSCSI one just needs to have a s

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-16 Thread Ross Walker
On Nov 16, 2010, at 4:04 PM, Tim Cook wrote: > > > On Wed, Nov 17, 2010 at 7:56 AM, Miles Nordin wrote: > > "tc" == Tim Cook writes: > >tc> Channeling Ethernet will not make it any faster. Each >tc> individual connection will be limited to 1gbit. iSCSI with >tc> mpxio may wo

Re: [zfs-discuss] Excruciatingly slow resilvering on X4540 (build 134)

2010-11-01 Thread Ross Walker
On Nov 1, 2010, at 3:33 PM, Mark Sandrock wrote: > Hello, > > I'm working with someone who replaced a failed 1TB drive (50% utilized), > on an X4540 running OS build 134, and I think something must be wrong. > > Last Tuesday afternoon, zpool status reported: > > scrub: resilver in progre

Re: [zfs-discuss] Performance issues with iSCSI under Linux

2010-11-01 Thread Ross Walker
On Nov 1, 2010, at 5:09 PM, Ian D wrote: >> Maybe you are experiencing this: >> http://opensolaris.org/jive/thread.jspa?threadID=11942 > > It does look like this... Is this really the expected behaviour? That's just > unacceptable. It is so bad it sometimes drops connections and fails copies and

Re: [zfs-discuss] vdev failure -> pool loss ?

2010-10-19 Thread Ross Walker
On Oct 19, 2010, at 4:33 PM, Tuomas Leikola wrote: > On Mon, Oct 18, 2010 at 8:18 PM, Simon Breden wrote: >> So are we all agreed then, that a vdev failure will cause pool loss ? >> -- > > unless you use copies=2 or 3, in which case your data is still safe > for those datasets that have this op
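
For reference, the copies setting mentioned above is per dataset and only affects data written after it is set; a minimal sketch with hypothetical names:

  # keep two copies of every data block in this dataset (metadata is already redundant)
  zfs set copies=2 tank/important
  zfs get copies tank/important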

Re: [zfs-discuss] Performance issues with iSCSI under Linux

2010-10-15 Thread Ross Walker
On Oct 15, 2010, at 5:34 PM, Ian D wrote: >> Has anyone suggested either removing L2ARC/SLOG >> entirely or relocating them so that all devices are >> coming off the same controller? You've swapped the >> external controller but the H700 with the internal >> drives could be the real culprit. Coul

Re: [zfs-discuss] Finding corrupted files

2010-10-15 Thread Ross Walker
On Oct 15, 2010, at 9:18 AM, Stephan Budach wrote: > On 14.10.10 17:48, Edward Ned Harvey wrote: >> >>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >>> boun...@opensolaris.org] On Behalf Of Toby Thain >>> I don't want to heat up the discussion about ZFS managed discs v

Re: [zfs-discuss] Finding corrupted files

2010-10-12 Thread Ross Walker
On Oct 12, 2010, at 8:21 AM, "Edward Ned Harvey" wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Stephan Budach >> >> c3t211378AC0253d0 ONLINE 0 0 0 > > How many disks are there inside of c3t211378

Re: [zfs-discuss] performance leakage when copy huge data

2010-09-09 Thread Ross Walker
On Sep 9, 2010, at 8:27 AM, Fei Xu wrote: >> >> Service times here are crap. Disks are malfunctioning >> in some way. If >> your source disks can take seconds (or 10+ seconds) >> to reply, then of >> course your copy will be slow. Disk is probably >> having a hard time >> reading the data or som

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Ross Walker
On Aug 27, 2010, at 1:04 AM, Mark wrote: > We are using a 7210, 44 disks I believe, 11 stripes of RAIDz sets. When I > installed I selected the best bang for the buck on the speed vs capacity > chart. > > We run about 30 VM's on it, across 3 ESX 4 servers. Right now, it's all > running NFS,

Re: [zfs-discuss] ZFS with Equallogic storage

2010-08-21 Thread Ross Walker
On Aug 21, 2010, at 4:40 PM, Richard Elling wrote: > On Aug 21, 2010, at 10:14 AM, Ross Walker wrote: >> I'm planning on setting up an NFS server for our ESXi hosts and plan on >> using a virtualized Solaris or Nexenta host to serve ZFS over NFS. > > Please follow

Re: [zfs-discuss] ZFS with Equallogic storage

2010-08-21 Thread Ross Walker
On Aug 21, 2010, at 2:14 PM, Bill Sommerfeld wrote: > On 08/21/10 10:14, Ross Walker wrote: >> I am trying to figure out the best way to provide both performance and >> resiliency given the Equallogic provides the redundancy. > > (I have no specific experience with Equallo

[zfs-discuss] ZFS with Equallogic storage

2010-08-21 Thread Ross Walker
I'm planning on setting up an NFS server for our ESXi hosts and plan on using a virtualized Solaris or Nexenta host to serve ZFS over NFS. The storage I have available is provided by Equallogic boxes over 10Gbe iSCSI. I am trying to figure out the best way to provide both performance and resil
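
A rough sketch of the setup being described, assuming the Equallogic LUN appears to the Solaris/Nexenta guest as a single disk named c2t0d0 (hypothetical):

  # redundancy comes from the Equallogic array, so a single-device pool
  zpool create tank c2t0d0
  # carve out a filesystem and export it to the ESXi hosts over NFS
  zfs create tank/vmstore
  zfs set sharenfs=on tank/vmstore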

Re: [zfs-discuss] ZFS in Linux (was Opensolaris is apparently dead)

2010-08-19 Thread Ross Walker
On Aug 19, 2010, at 9:26 AM, joerg.schill...@fokus.fraunhofer.de (Joerg Schilling) wrote: > "Edward Ned Harvey" wrote: > >> The reasons for ZFS not in Linux must be more than just the license issue. > > If Linux has ZFS, then it would be possible to do > > -I/O performance analysis based

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Ross Walker
On Aug 18, 2010, at 10:43 AM, Bob Friesenhahn wrote: > On Wed, 18 Aug 2010, Joerg Schilling wrote: >> >> Linus is right with his primary decision, but this also applies for static >> linking. See Lawrence Rosen for more information, the GPL does not distinguish >> between static and dynamic linkin

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-17 Thread Ross Walker
On Aug 17, 2010, at 5:44 AM, joerg.schill...@fokus.fraunhofer.de (Joerg Schilling) wrote: > Frank Cusack wrote: > >> On 8/16/10 9:57 AM -0400 Ross Walker wrote: >>> No, the only real issue is the license and I highly doubt Oracle will >>> re-release ZFS under

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-17 Thread Ross Walker
On Aug 16, 2010, at 11:17 PM, Frank Cusack wrote: > On 8/16/10 9:57 AM -0400 Ross Walker wrote: >> No, the only real issue is the license and I highly doubt Oracle will >> re-release ZFS under GPL to dilute its competitive advantage. > > You're saying Oracle wan

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Ross Walker
On Aug 15, 2010, at 9:44 PM, Peter Jeremy wrote: > Given that both provide similar features, it's difficult to see why > Oracle would continue to invest in both. Given that ZFS is the more > mature product, it would seem more logical to transfer all the effort > to ZFS and leave btrfs to die.

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Ross Walker
On Aug 16, 2010, at 9:06 AM, "Edward Ned Harvey" wrote: > ZFS does raid, and mirroring, and resilvering, and partitioning, and NFS, and > CIFS, and iSCSI, and device management via vdev's, and so on. So ZFS steps > on a lot of linux peoples' toes. They already have code to do this, or that,

Re: [zfs-discuss] ZFS and VMware

2010-08-14 Thread Ross Walker
On Aug 14, 2010, at 8:26 AM, "Edward Ned Harvey" wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey >> >> #3 I previously believed that vmfs3 was able to handle sparse files >> amazingly well, like, when you create

Re: [zfs-discuss] iScsi slow

2010-08-05 Thread Ross Walker
On Aug 5, 2010, at 2:24 PM, Roch Bourbonnais wrote: > > On Aug 5, 2010, at 19:49, Ross Walker wrote: > >> On Aug 5, 2010, at 11:10 AM, Roch wrote: >> >>> >>> Ross Walker writes: >>>> On Aug 4, 2010, at 12:04 PM, Roch wrote: >>>&

Re: [zfs-discuss] iScsi slow

2010-08-05 Thread Ross Walker
On Aug 5, 2010, at 11:10 AM, Roch wrote: > > Ross Walker writes: >> On Aug 4, 2010, at 12:04 PM, Roch wrote: >> >>> >>> Ross Walker writes: >>>> On Aug 4, 2010, at 9:20 AM, Roch wrote: >>>> >>>>> >>>&g

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Ross Walker
On Aug 4, 2010, at 12:04 PM, Roch wrote: > > Ross Walker writes: >> On Aug 4, 2010, at 9:20 AM, Roch wrote: >> >>> >>> >>> Ross Asks: >>> So on that note, ZFS should disable the disks' write cache, >>> not enable t

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Ross Walker
On Aug 4, 2010, at 9:20 AM, Roch wrote: > > > Ross Asks: > So on that note, ZFS should disable the disks' write cache, > not enable them despite ZFS's COW properties because it > should be resilient. > > No, because ZFS builds resiliency on top of unreliable parts. It's able to > deal
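
For anyone wanting to see what ZFS actually did to a drive's volatile write cache, the setting is visible from format's expert mode; a sketch (the menu is interactive and labels vary by driver):

  # run format -e, select the disk, then navigate the menus:
  #   cache > write_cache > display   (or enable / disable)
  format -e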

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Ross Walker
On Aug 4, 2010, at 3:52 AM, Roch wrote: > > Ross Walker writes: > >> On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais >> wrote: >> >>> >>> On May 27, 2010, at 07:03, Brent Jones wrote: >>> >>>> On Wed, May 26, 2010 at 5:08 A

Re: [zfs-discuss] iScsi slow

2010-08-03 Thread Ross Walker
On Aug 3, 2010, at 5:56 PM, Robert Milkowski wrote: > On 03/08/2010 22:49, Ross Walker wrote: >> On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais >> wrote: >> >>> On May 27, 2010, at 07:03, Brent Jones wrote: >>> >>> >>&g

Re: [zfs-discuss] iScsi slow

2010-08-03 Thread Ross Walker
On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais wrote: > > On May 27, 2010, at 07:03, Brent Jones wrote: > >> On Wed, May 26, 2010 at 5:08 AM, Matt Connolly >> wrote: >>> I've set up an iScsi volume on OpenSolaris (snv_134) with these commands: >>> >>> sh-4.0# zfs create rpool/iscsi >>> sh-4.0#
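
The rest of the original poster's commands are cut off above; for context, a plausible COMSTAR sequence on a snv_134-era build looks like the following, with sizes and names as examples only:

  # back the LUN with a 20 GB zvol
  zfs create -V 20g rpool/iscsi/vol0
  # register the zvol as a SCSI logical unit and expose it to all initiators
  sbdadm create-lu /dev/zvol/rdsk/rpool/iscsi/vol0
  stmfadm add-view <GUID printed by sbdadm>
  # create an iSCSI target for initiators to log in to
  itadm create-target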

Re: [zfs-discuss] Mirrored raidz

2010-07-26 Thread Ross Walker
On Jul 26, 2010, at 2:51 PM, Dav Banks wrote: > I wanted to test it as a backup solution. Maybe that's crazy in itself but I > want to try it. > > Basically, once a week detach the 'backup' pool from the mirror, replace the > drives, add the new raidz to the mirror and let it resilver and sit

Re: [zfs-discuss] zfs raidz1 and traditional raid 5 perfomrance comparision

2010-07-25 Thread Ross Walker
On Jul 23, 2010, at 10:14 PM, Edward Ned Harvey wrote: >> From: Arne Jansen [mailto:sensi...@gmx.net] >>> >>> Can anyone else confirm or deny the correctness of this statement? >> >> As I understand it that's the whole point of raidz. Each block is its >> own >> stripe. > > Nope, that doesn't

Re: [zfs-discuss] File cloning

2010-07-22 Thread Ross Walker
On Jul 22, 2010, at 2:41 PM, Miles Nordin wrote: >> "sw" == Saxon, Will writes: > >sw> 'clone' vs. a 'copy' would be very easy since we have >sw> deduplication now > > dedup doesn't replace the snapshot/clone feature for the > NFS-share-full-of-vmdk use case because there's no equi

Re: [zfs-discuss] zfs raidz1 and traditional raid 5 perfomrance comparision

2010-07-20 Thread Ross Walker
On Jul 20, 2010, at 6:12 AM, v wrote: > Hi, > for zfs raidz1, I know for random io, iops of a raidz1 vdev equal to one > physical disk iops, since raidz1 is like raid5, so does raid5 have the same > performance as raidz1? i.e. random iops equal to one physical disk's iops. On reads, no, any part of
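
The rule of thumb being debated here is that each raidz vdev delivers roughly the random IOPS of one member disk, so the pool scales with vdev count rather than disk count; a worked example under that assumption:

  random IOPS per pool ≈ (number of vdevs) × (IOPS of one disk)
  12 disks at ~100 IOPS each as 2 × 6-disk raidz1 vdevs ≈ 2 × 100 = ~200 IOPS
  the same 12 disks as 6 × 2-way mirrors               ≈ 6 × 100 = ~600 IOPS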

Re: [zfs-discuss] Need ZFS master!

2010-07-13 Thread Ross Walker
The whole disk layout should be copied from disk 1 to 2, then the slice on disk 2 that corresponds to the slice on disk 1 should be attached to the rpool which forms an rpool mirror (attached not added). Then you need to add the grub bootloader to disk 2. When it finishes resilvering then you
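
A minimal sketch of that procedure on x86, assuming disk 1 is c0t0d0 and disk 2 is c0t1d0 (hypothetical names; adjust slices to your layout):

  # copy the disk label/partition table from disk 1 to disk 2
  prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
  # attach (not add) the matching slice, turning rpool into a two-way mirror
  zpool attach rpool c0t0d0s0 c0t1d0s0
  # install grub on the second disk so it is bootable
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0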

Re: [zfs-discuss] Encryption?

2010-07-11 Thread Ross Walker
On Jul 11, 2010, at 5:11 PM, Freddie Cash wrote: > ZFS-FUSE is horribly unstable, although that's more an indication of > the stability of the storage stack on Linux. Not really, more an indication of the pseudo-VFS layer implemented in fuse. Remember fuse provides its own VFS API separate fro

Re: [zfs-discuss] Should i enable Write-Cache ?

2010-07-10 Thread Ross Walker
On Jul 10, 2010, at 5:46 AM, Erik Trimble wrote: > On 7/10/2010 1:14 AM, Graham McArdle wrote: >>> Instead, create "Single Disk" arrays for each disk. >>> >> I have a question related to this but with a different controller: If I'm >> using a RAID controller to provide non-RAID single-disk

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Ross Walker
On Jun 24, 2010, at 10:42 AM, Robert Milkowski wrote: > On 24/06/2010 14:32, Ross Walker wrote: >> On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote: >> >> >>> On 23/06/2010 18:50, Adam Leventhal wrote: >>> >>>>> Does i

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Ross Walker
On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote: > On 23/06/2010 18:50, Adam Leventhal wrote: >>> Does it mean that for dataset used for databases and similar environments >>> where basically all blocks have fixed size and there is no other data all >>> parity information will end-up on one

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-23 Thread Ross Walker
On Jun 23, 2010, at 1:48 PM, Robert Milkowski wrote: > > 128GB. > > Does it mean that for dataset used for databases and similar environments > where basically all blocks have fixed size and there is no other data all > parity information will end-up on one (z1) or two (z2) specific disks? W

Re: [zfs-discuss] SLOG striping? (Bob Friesenhahn)

2010-06-22 Thread Ross Walker
On Jun 22, 2010, at 8:40 AM, Jeff Bacon wrote: >> The term 'stripe' has been so outrageously severely abused in this >> forum that it is impossible to know what someone is talking about when >> they use the term. Seemingly intelligent people continue to use wrong >> terminology because they thin

Re: [zfs-discuss] Dedup... still in beta status

2010-06-16 Thread Ross Walker
On Jun 16, 2010, at 9:02 AM, Carlos Varela wrote: Does the machine respond to ping? Yes If there is a gui does the mouse pointer move? There is no GUI (nexentastor) Does the keyboard numlock key respond at all ? Yes I just find it very hard to believe that such a situation cou

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving ba

2010-06-14 Thread Ross Walker
On Jun 13, 2010, at 2:14 PM, Jan Hellevik wrote: Well, for me it was a cure. Nothing else I tried got the pool back. As far as I can tell, the way to get it back should be to use symlinks to the fdisk partitions on my SSD, but that did not work for me. Using -V got the pool back. What is

Re: [zfs-discuss] Please trim posts

2010-06-11 Thread Ross Walker
On Jun 11, 2010, at 2:07 AM, Dave Koelmeyer wrote: I trimmed, and then got complained at by a mailing list user that the context of what I was replying to was missing. Can't win :P If at a minimum one trims the disclaimers, footers and signatures, that's better than nothing. On long th

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Ross Walker
On Jun 10, 2010, at 5:54 PM, Richard Elling wrote: On Jun 10, 2010, at 1:24 PM, Arne Jansen wrote: Andrey Kuzmin wrote: Well, I'm more accustomed to "sequential vs. random", but YMMV. As to 67000 512 byte writes (this sounds suspiciously close to 32Mb fitting into cache), did you have w

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-09 Thread Ross Walker
On Jun 8, 2010, at 1:33 PM, besson3c wrote: Sure! The pool consists of 6 SATA drives configured as RAID-Z. There are no special read or write cache drives. This pool is shared to several VMs via NFS, these VMs manage email, web, and a Quickbooks server running on FreeBSD, Linux, and Wind

Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-07 Thread Ross Walker
On Jun 7, 2010, at 2:10 AM, Erik Trimble wrote: Comments in-line. On 6/6/2010 9:16 PM, Ken wrote: I'm looking at VMWare, ESXi 4, but I'll take any advice offered. On Sun, Jun 6, 2010 at 19:40, Erik Trimble wrote: On 6/6/2010 6:22 PM, Ken wrote: Hi, I'm looking to build a virtualiz

Re: [zfs-discuss] Migrating to ZFS

2010-06-02 Thread Ross Walker
On Jun 2, 2010, at 12:03 PM, zfsnoob4 wrote: Wow thank you very much for the clear instructions. And Yes, I have another 120GB drive for the OS, separate from A, B and C. I will repartition the drive and install Solaris. Then maybe at some point I'll delete the entire drive and just instal

Re: [zfs-discuss] New SSD options

2010-05-21 Thread Ross Walker
On May 20, 2010, at 7:17 PM, Ragnar Sundblad wrote: On 21 maj 2010, at 00.53, Ross Walker wrote: On May 20, 2010, at 6:25 PM, Travis Tabbal wrote: use a slog at all if it's not durable? You should disable the ZIL instead. This is basically where I was going. There only seems

Re: [zfs-discuss] New SSD options

2010-05-20 Thread Ross Walker
On May 20, 2010, at 6:25 PM, Travis Tabbal wrote: use a slog at all if it's not durable? You should disable the ZIL instead. This is basically where I was going. There only seems to be one SSD that is considered "working", the Zeus IOPS. Even if I had the money, I can't buy it. As my ap
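
For reference, the era-appropriate way to do what is being suggested; this is pool-wide, takes effect at reboot, and trades away the last few seconds of acknowledged sync writes on a crash (later builds replace it with the per-dataset sync property):

  # /etc/system entry disabling the ZIL for every pool on the host
  set zfs:zil_disable = 1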

Re: [zfs-discuss] ZFS High Availability

2010-05-13 Thread Ross Walker
On May 12, 2010, at 7:12 PM, Richard Elling wrote: On May 11, 2010, at 10:17 PM, schickb wrote: I'm looking for input on building an HA configuration for ZFS. I've read the FAQ and understand that the standard approach is to have a standby system with access to a shared pool that is impo

Re: [zfs-discuss] ZFS High Availability

2010-05-12 Thread Ross Walker
On May 12, 2010, at 3:06 PM, Manoj Joseph wrote: Ross Walker wrote: On May 12, 2010, at 1:17 AM, schickb wrote: I'm looking for input on building an HA configuration for ZFS. I've read the FAQ and understand that the standard approach is to have a standby system with access t

Re: [zfs-discuss] ZFS High Availability

2010-05-12 Thread Ross Walker
On May 12, 2010, at 1:17 AM, schickb wrote: I'm looking for input on building an HA configuration for ZFS. I've read the FAQ and understand that the standard approach is to have a standby system with access to a shared pool that is imported during a failover. The problem is that we use Z

Re: [zfs-discuss] Performance of the ZIL

2010-05-06 Thread Ross Walker
On May 6, 2010, at 8:34 AM, Edward Ned Harvey wrote: From: Pasi Kärkkäinen [mailto:pa...@iki.fi] In neither case do you have data or filesystem corruption. ZFS probably is still OK, since it's designed to handle this (?), but the data can't be OK if you lose 30 secs of writes.. 30 secs o

Re: [zfs-discuss] Snapshots and Data Loss

2010-04-23 Thread Ross Walker
On Apr 22, 2010, at 11:03 AM, Geoff Nordli wrote: From: Ross Walker [mailto:rswwal...@gmail.com] Sent: Thursday, April 22, 2010 6:34 AM On Apr 20, 2010, at 4:44 PM, Geoff Nordli wrote: If you combine the hypervisor and storage server and have students connect to the VMs via RDP or VNC

Re: [zfs-discuss] Snapshots and Data Loss

2010-04-22 Thread Ross Walker
On Apr 20, 2010, at 4:44 PM, Geoff Nordli wrote: From: matthew patton [mailto:patto...@yahoo.com] Sent: Tuesday, April 20, 2010 12:54 PM Geoff Nordli wrote: With our particular use case we are going to do a "save state" on their virtual machines, which is going to write 100-400 MB per VM v

Re: [zfs-discuss] Can RAIDZ disks be slices ?

2010-04-21 Thread Ross Walker
On Apr 20, 2010, at 12:13 AM, Sunil wrote: Hi, I have a strange requirement. My pool consists of 2 500GB disks in stripe which I am trying to convert into a RAIDZ setup without data loss but I have only two additional disks: 750GB and 1TB. So, here is what I thought: 1. Carve a 500GB s

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Ross Walker
On Apr 19, 2010, at 12:50 PM, Don wrote: Now I'm simply confused. Do you mean one cachefile shared between the two nodes for this zpool? How, may I ask, would this work? The rpool should be in /etc/zfs/zpool.cache. The shared pool should be in /etc/cluster/zpool.cache (or wherever you p
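
A minimal sketch of the two-cachefile arrangement, with hypothetical pool and device names; both flags are ordinary zpool property options:

  # create the shared pool with its state kept out of /etc/zfs/zpool.cache,
  # so neither node auto-imports it at boot
  zpool create -o cachefile=/etc/cluster/zpool.cache shared c3t0d0 c3t1d0
  # on failover, the surviving node imports it into the same alternate cachefile
  zpool import -o cachefile=/etc/cluster/zpool.cache shared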

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-02 Thread Ross Walker
On Fri, Apr 2, 2010 at 8:03 AM, Edward Ned Harvey wrote: >> > Seriously, all disks configured WriteThrough (spindle and SSD disks >> > alike) >> > using the dedicated ZIL SSD device, very noticeably faster than >> > enabling the >> > WriteBack. >> >> What do you get with both SSD ZIL and WriteBack

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Ross Walker
On Thu, Apr 1, 2010 at 10:03 AM, Darren J Moffat wrote: > On 01/04/2010 14:49, Ross Walker wrote: >>> >>> We're talking about the "sync" for NFS exports in Linux; what do they >>> mean >>> with "sync" NFS exports? >> >>

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Ross Walker
On Apr 1, 2010, at 8:42 AM, casper@sun.com wrote: Is that what "sync" means in Linux? A sync write is one in which the application blocks until the OS acks that the write has been committed to disk. An async write is given to the OS, and the OS is permitted to buffer the write to di

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Ross Walker
On Mar 31, 2010, at 11:58 PM, Edward Ned Harvey wrote: We ran into something similar with these drives in an X4170 that turned out to be an issue of the preconfigured logical volumes on the drives. Once we made sure all of our Sun PCI HBAs were running the exact same version of firmware a

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Ross Walker
On Mar 31, 2010, at 11:51 PM, Edward Ned Harvey wrote: A MegaRAID card with write-back cache? It should also be cheaper than the F20. I haven't posted results yet, but I just finished a few weeks of extensive benchmarking various configurations. I can say this: WriteBack cache is much

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Ross Walker
On Mar 31, 2010, at 10:25 PM, Richard Elling wrote: On Mar 31, 2010, at 7:11 PM, Ross Walker wrote: On Mar 31, 2010, at 5:39 AM, Robert Milkowski wrote: On Wed, Mar 31, 2010 at 1:00 AM, Karsten Weiss Use something other than Open/Solaris with ZFS as an NFS server? :) I don't

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Ross Walker
On Mar 31, 2010, at 5:39 AM, Robert Milkowski wrote: On Wed, Mar 31, 2010 at 1:00 AM, Karsten Weiss Use something other than Open/Solaris with ZFS as an NFS server? :) I don't think you'll find the performance you paid for with ZFS and Solaris at this time. I've been trying to more tha

Re: [zfs-discuss] ISCSI + RAID-Z + OpenSolaris HA

2010-03-20 Thread Ross Walker
On Mar 20, 2010, at 11:48 AM, vikkr wrote: THX Ross, I plan on exporting each drive individually over iSCSI. In this case, writes, as well as reads, will go to all 6 discs at once, right? The only question - how to calculate fault tolerance of such a system if the discs are all different

Re: [zfs-discuss] ISCSI + RAID-Z + OpenSolaris HA

2010-03-20 Thread Ross Walker
On Mar 20, 2010, at 10:18 AM, vikkr wrote: Hi, sorry for bad eng and picture :). Would such a design work? 3 openfiler servers each export two 1 TB drives over iSCSI to the OpenSolaris server. On OpenSolaris I assembled a RAID-Z with double parity. The OpenSolaris server provides NFS access to this array, and du
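
A rough sketch of the proposed layout, assuming the six exported 1 TB drives show up on the OpenSolaris box as c2t0d0 through c2t5d0 (hypothetical names):

  # raidz2 tolerates any two of the six iSCSI disks failing, i.e. one whole
  # openfiler node; usable space is about 4x the smallest member disk
  zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
  zfs set sharenfs=on tank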

Re: [zfs-discuss] Can we get some documentation on iSCSI sharing after comstar took over?

2010-03-17 Thread Ross Walker
On Mar 17, 2010, at 2:30 AM, Erik Ableson wrote: On 17 March 2010, at 00:25, Svein Skogen wrote: On 16.03.2010 22:31, erik.ableson wrote: On 16 March 2010, at 21:00, Marc Nicholas wrote: On Tue, Mar 16, 2010 at 3:16 PM, Svein Skogen mailto

Re: [zfs-discuss] corruption of ZFS on iScsi storage

2010-03-15 Thread Ross Walker
On Mar 15, 2010, at 11:10 PM, Tim Cook wrote: On Mon, Mar 15, 2010 at 9:10 PM, Ross Walker wrote: On Mar 15, 2010, at 7:11 PM, Tonmaus wrote: Being an iscsi target, this volume was mounted as a single iscsi disk from the solaris host, and prepared as a zfs pool consisting of this

Re: [zfs-discuss] corruption of ZFS on iScsi storage

2010-03-15 Thread Ross Walker
On Mar 15, 2010, at 7:11 PM, Tonmaus wrote: Being an iscsi target, this volume was mounted as a single iscsi disk from the solaris host, and prepared as a zfs pool consisting of this single iscsi target. ZFS best practices, tell me that to be safe in case of corruption, pools should always be m

Re: [zfs-discuss] corruption of ZFS on iScsi storage

2010-03-15 Thread Ross Walker
On Mar 15, 2010, at 12:19 PM, Ware Adams wrote: On Mar 15, 2010, at 12:13 PM, Gabriele Bulfon wrote: Well, I actually don't know what implementation is inside this legacy machine. This machine is an AMI StoreTrends ITX, but maybe it has been built around IET, don't know. Well, maybe I s

Re: [zfs-discuss] corruption of ZFS on iScsi storage

2010-03-15 Thread Ross Walker
On Mar 15, 2010, at 10:55 AM, Gabriele Bulfon wrote: Hello, I'd like to check for any guidance about using zfs on iscsi storage appliances. Recently I had an unlucky situation with an unlucky storage machine freezing. Once the storage was up again (rebooted) all other iscsi clients were

Re: [zfs-discuss] ZFS - VMware ESX --> vSphere Upgrade : Zpool Faulted

2010-03-11 Thread Ross Walker
On Mar 11, 2010, at 12:31 PM, Andrew wrote: Hi Ross, Ok - as a Solaris newbie.. i'm going to need your help. Format produces the following:- c8t4d0 (VMware-Virtualdisk-1.0 cyl 65268 alt 2 hd 255 sec 126) / p...@0,0/pci15ad,1...@10/s...@4,0 what dd command do I need to run to reference thi
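
One plausible form of the dd read check being asked for, using the raw whole-disk node under x86 naming (block size and count are arbitrary):

  # read the first megabyte off the raw device to confirm it responds
  dd if=/dev/rdsk/c8t4d0p0 of=/dev/null bs=512 count=2048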

Re: [zfs-discuss] ZFS - VMware ESX --> vSphere Upgrade : Zpool Faulted

2010-03-11 Thread Ross Walker
On Mar 11, 2010, at 8:27 AM, Andrew wrote: Ok, The fault appears to have occurred regardless of the attempts to move to vSphere as we've now moved the host back to ESX 3.5 from whence it came and the problem still exists. Looks to me like the fault occurred as a result of a reboot. Any

Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Ross Walker
On Mar 9, 2010, at 1:42 PM, Roch Bourbonnais wrote: I think This is highlighting that there is extra CPU requirement to manage small blocks in ZFS. The table would probably turn over if you go to 16K zfs records and 16K reads/writes form the application. Next step for you is to figure

Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Ross Walker
On Mar 8, 2010, at 11:46 PM, ольга крыжановская <anov...@gmail.com> wrote: tmpfs lacks features like quota and NFSv4 ACL support. May not be the best choice if such features are required. True, but if the OP is looking for those features they are more than unlikely looking for an in-memory fi

Re: [zfs-discuss] [indiana-discuss] future of OpenSolaris

2010-02-25 Thread Ross Walker
On Feb 25, 2010, at 9:11 AM, Giovanni Tirloni wrote: On Thu, Feb 25, 2010 at 9:47 AM, Jacob Ritorto wrote: It's a kind gesture to say it'll continue to exist and all, but without commercial support from the manufacturer, it's relegated to hobbyist curiosity status for us. If I even mention

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-19 Thread Ross Walker
On Feb 19, 2010, at 4:57 PM, Ragnar Sundblad wrote: On 18 feb 2010, at 13.55, Phil Harman wrote: ... Whilst the latest bug fixes put the world to rights again with respect to correctness, it may be that some of our performance workaround are still unsafe (i.e. if my iSCSI client assumes a

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced

2010-02-10 Thread Ross Walker
On Feb 9, 2010, at 1:55 PM, matthew patton wrote: The cheapest solution out there that isn't a Supermicro-like server chassis, is DAS in the form of HP or Dell MD-series which top out at 15 or 16 3.5" drives. I can only chain 3 units per SAS port off a HBA in either case. The new Dell MD11

Re: [zfs-discuss] NFS access by OSX clients (was Cores vs. Speed?)

2010-02-09 Thread Ross Walker
On Feb 8, 2010, at 4:58 PM, Edward Ned Harvey wrote: How are you managing UID's on the NFS server? If user eharvey connects to server from client Mac A, or Mac B, or Windows 1, or Windows 2, or any of the linux machines ... the server has to know it's eharvey, and assign the correct UID'

Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Ross Walker
On Feb 5, 2010, at 10:49 AM, Robert Milkowski wrote: Actually, there is. One difference is that when writing to a raid-z{1|2} pool compared to raid-10 pool you should get better throughput if at least 4 drives are used. Basically it is due to the fact that in RAID-10 the maximum you can g

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-04 Thread Ross Walker
Interesting, can you explain what zdb is dumping exactly? I suppose you would be looking for blocks referenced in the snapshot that have a single reference and print out the associated file/directory name? -Ross On Feb 4, 2010, at 7:29 AM, Darren Mackay wrote: Hi Ross, zdb - f..

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-04 Thread Ross Walker
system functions offered by OS. I scan every byte in every file manually and it On February 3, 2010 10:11:01 AM -0500 Ross Walker wrote: Not a ZFS method, but you could use rsync with the dry run option to list all changed fi
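
A minimal sketch of that rsync dry run against the hidden .zfs tree, with hypothetical pool/filesystem/snapshot names; -n guarantees nothing is copied:

  # lists files added or changed in "new", plus (via --delete) files removed since "old"
  rsync -avn --delete /tank/fs/.zfs/snapshot/new/ /tank/fs/.zfs/snapshot/old/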

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Ross Walker
On Feb 3, 2010, at 8:59 PM, Frank Cusack wrote: On February 3, 2010 6:46:57 PM -0500 Ross Walker wrote: So was there a final consensus on the best way to find the difference between two snapshots (files/directories added, files/directories deleted and file/directories changed)? Find

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Ross Walker
On Feb 3, 2010, at 12:35 PM, Frank Cusack <z...@linetwo.net> wrote: On February 3, 2010 12:19:50 PM -0500 Frank Cusack wrote: If you do need to know about deleted files, the find method still may be faster depending on how ddiff determines whether or not to do a file diff. The docs don't expla

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Ross Walker
On Feb 3, 2010, at 9:53 AM, Henu wrote: Okay, so first of all, it's true that send is always fast and 100% reliable because it uses blocks to see differences. Good, and thanks for this information. If everything else fails, I can parse the information I want from the send stream :) But am I
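
A sketch of that send-stream approach, assuming a build recent enough to ship zstreamdump; the stream is inspected without ever being received:

  # print the record headers of an incremental stream between two snapshots
  zfs send -i tank/fs@snap1 tank/fs@snap2 | zstreamdump -v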

Re: [zfs-discuss] Home ZFS NAS - 2 drives or 3?

2010-01-30 Thread Ross Walker
On Jan 30, 2010, at 2:53 PM, Mark wrote: I have a 1U server that supports 2 SATA drives in the chassis. I have 2 750 GB SATA drives. When I install opensolaris, I assume it will want to use all or part of one of those drives for the install. That leaves me with the remaining part of disk 1

Re: [zfs-discuss] 2gig file limit on ZFS?

2010-01-21 Thread Ross Walker
On Jan 21, 2010, at 6:47 PM, Daniel Carosone wrote: On Thu, Jan 21, 2010 at 02:54:21PM -0800, Richard Elling wrote: + support file systems larger than 2GiB, include 32-bit UIDs and GIDs file systems, but what about individual files within? I think the original author meant files bigger than 2

Re: [zfs-discuss] 4 Internal Disk Configuration

2010-01-14 Thread Ross Walker
On Jan 14, 2010, at 10:44 AM, "Mr. T Doodle" wrote: Hello, I have played with ZFS but not deployed any production systems using ZFS and would like some opinions I have a T-series box with 4 internal drives and would like to deploy ZFS with availability and performance in mind ;

Re: [zfs-discuss] I/O Read starvation

2010-01-11 Thread Ross Walker
On Jan 11, 2010, at 2:23 PM, Bob Friesenhahn wrote: On Mon, 11 Jan 2010, bank kus wrote: Are we still trying to solve the starvation problem? I would argue the disk I/O model is fundamentally broken on Solaris if there is no fair I/O scheduling between multiple read sources until that

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-06 Thread Ross Walker
On Wed, Jan 6, 2010 at 4:30 PM, Wes Felter wrote: > Michael Herf wrote: > >> I agree that RAID-DP is much more scalable for reads than RAIDZx, and >> this basically turns into a cost concern at scale. >> >> The raw cost/GB for ZFS is much lower, so even a 3-way mirror could be >> used instead of n

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-04 Thread Ross Walker
On Mon, Jan 4, 2010 at 2:27 AM, matthew patton wrote: > I find it baffling that RaidZ(2,3) was designed to split a record-size block > into N (N=# of member devices) pieces and send the uselessly tiny requests to > spinning rust when we know the massive delays entailed in head seeks and > rotat
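
Concretely, the split being criticized works out as follows for the default record size (a worked example, not a measurement):

  chunk per data disk = recordsize / (number of data disks)
  5-disk raidz1, 128 KiB record: 128 KiB / 4 = 32 KiB per member, plus one 32 KiB parity chunk
  9-disk raidz1, 128 KiB record: 128 KiB / 8 = 16 KiB per member, small enough that seek time dominates each disk's service time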

Re: [zfs-discuss] zvol (slow) vs file (fast) performance snv_130

2010-01-04 Thread Ross Walker
On Sun, Jan 3, 2010 at 1:59 AM, Brent Jones wrote: > On Wed, Dec 30, 2009 at 9:35 PM, Ross Walker wrote: >> On Dec 30, 2009, at 11:55 PM, "Steffen Plotner" >> wrote: >> >> Hello, >> >> I was doing performance testing, validating zvol performa
