Re: [zfs-discuss] Any company willing to support a 7410 ?

2012-07-19 Thread Gordon Ross
RackTop/EraStor/Illumos/???) > I'm not sure, but I think there are people running NexentaStor on that h/w. If not, then on something pretty close. NS supports clustering, etc. -- Gordon Ross Nexenta Systems, Inc. www.nexenta.com

Re: [zfs-discuss] Creating NFSv4/ZFS XATTR through dirfd through /proc not allowed?

2012-07-13 Thread Gordon Ross
Err#13 EACCES > > Accessing files or directories through /proc/$$/fd/ from a shell > otherwise works, only the xattr directories cause trouble. Native C > code has the same problem. > > Olga Does "runat" let you see those xattr files? -- Gordon Ross Nexenta Systems
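For anyone unfamiliar with it: runat runs a command inside a file's extended-attribute directory on Solaris/illumos. A minimal sketch (the file and attribute names here are made up):

    # list the xattr directory of file.txt
    runat file.txt ls -l
    # print a named attribute ("myattr" is hypothetical)
    runat file.txt cat myattr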

Re: [zfs-discuss] [illumos-Developer] revisiting aclmode options

2011-08-02 Thread Gordon Ross
On Thu, Jul 21, 2011 at 9:58 PM, Paul B. Henson wrote: > On 7/19/2011 7:10 PM, Gordon Ross wrote: > >> The idea:  A new "aclmode" setting called "discard", meaning that >> the users don't care at all about the traditional mode bits.  A >> dataset

Re: [zfs-discuss] Entire client hangs every few seconds

2011-07-26 Thread Gordon Ross
Are the "disk active" lights typically ON when this happens? On Tue, Jul 26, 2011 at 3:27 PM, Garrett D'Amore wrote: > This is actually a recently known problem, and a fix for it is in the > 3.1 version, which should be available any minute now, if it isn't > already available. > > The problem ha

[zfs-discuss] SSD vs "hybrid" drive - any advice?

2011-07-21 Thread Gordon Ross
I'm looking to upgrade the disk in a high-end laptop (so-called "desktop replacement" type). I use it for development work, running OpenIndiana (native) with lots of ZFS data sets. These "hybrid" drives look kind of interesting, i.e. for about $100, one can get:  Seagate Momentus XT ST95005620AS 5

Re: [zfs-discuss] [illumos-Developer] revisiting aclmode options

2011-07-19 Thread Gordon Ross
On Mon, Jul 18, 2011 at 9:44 PM, Paul B. Henson wrote: > Now that illumos has restored the aclmode option to zfs, I would like to > revisit the topic of potentially expanding the suite of available modes. [...] At one point, I was experimenting with some code for smbfs that would "invent" the mode bits

Re: [zfs-discuss] question about COW and snapshots

2011-06-17 Thread Ross Walker
versioning, nothing else (no API, no additional features, > etc.). I believe NTFS was built on the same concept of file streams the VMS FS used for versioning. It's a very simple versioning system. Personally I use Sharepoint, but there are other content management

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-17 Thread Ross Walker
of latency hit which would kill read performance. Try disabling the on-board write or read cache and see how your sequential IO performs and you'll see just how valuable those puny caches are. -Ross

Re: [zfs-discuss] dual protocal on one file system?

2011-03-16 Thread Ross Walker
that supports "Previous Versions" using the host's native snapshot method. The one glaring deficiency Samba has though, in Sun's eyes not mine, is that it runs in user space, though I believe that's just the cover song for "It wasn't invented here"

Re: [zfs-discuss] SAS/short stroking vs. SSDs for ZIL

2010-12-25 Thread Ross Walker
and sustained throughput in 1MB+ sequential IO workloads. Only SSD makers list their random IOPS workload numbers and their 4K IO workload numbers. -Ross

Re: [zfs-discuss] ZFS ... open source moving forward?

2010-12-15 Thread Ross Walker
"GPL" ZFS? In what way would that save you annoyance? I actually think Doug was trying to say he wished Oracle would open the development and make the source code open-sourced, not necessarily GPL'd. -Ross ___ zfs-discuss m

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-12-08 Thread Ross Walker
might find that as you get more machines on the storage the performance will decrease a lot faster than it otherwise would if it were standalone, as it competes with the very machines it is supposed to be serving. -Ross

Re: [zfs-discuss] [OpenIndiana-discuss] iops...

2010-12-08 Thread Ross Walker
On Dec 7, 2010, at 9:49 PM, Edward Ned Harvey wrote: >> From: Ross Walker [mailto:rswwal...@gmail.com] >> >> Well besides databases there are VM datastores, busy email servers, busy >> ldap servers, busy web servers, and I'm sure the list goes on and on. >>

Re: [zfs-discuss] [OpenIndiana-discuss] iops...

2010-12-07 Thread Ross Walker
gunpoint. Well besides databases there are VM datastores, busy email servers, busy ldap servers, busy web servers, and I'm sure the list goes on and on. I'm sure it is much harder to list servers that are truly sequential in IO than random. This is especially

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-17 Thread Ross Walker
utilizing 1Gbps before MC/S then going to MC/S won't give you more, as you weren't using what you had (in fact added latency in MC/S may give you less!). I am going to say that the speed improvement from 134->151a was due to OS and comstar improvements and not the MC/S. -Ross

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-16 Thread Ross Walker
On Nov 16, 2010, at 7:49 PM, Jim Dunham wrote: > On Nov 16, 2010, at 6:37 PM, Ross Walker wrote: >> On Nov 16, 2010, at 4:04 PM, Tim Cook wrote: >>> AFAIK, esx/i doesn't support L4 hash, so that's a non-starter. >> >> For iSCSI one just needs to have a s

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-16 Thread Ross Walker
unless you have at least as many TCP streams as cores, which is > honestly kind of obvious. lego-netadmin bias. > > > > AFAIK, esx/i doesn't support L4 hash, so that's a non-starter. For iSCSI one just needs to have a second (third or fourth...) iSCSI session on a different IP to the target and run mpio/mpxio/mpath whatever your OS calls multi-pathing. -Ross
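As a sketch of that multi-session setup with the Linux open-iscsi tools (the IQN and portal addresses are invented; a Solaris initiator would use its own iscsiadm and mpxio instead):

    # discover the target, then log in through two different portal IPs
    iscsiadm -m discovery -t sendtargets -p 192.168.10.1
    iscsiadm -m node -T iqn.2010-11.org.example:tgt0 -p 192.168.10.1 --login
    iscsiadm -m node -T iqn.2010-11.org.example:tgt0 -p 192.168.20.1 --login
    # let dm-multipath aggregate the sessions into one device
    multipath -ll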

Re: [zfs-discuss] Excruciatingly slow resilvering on X4540 (build 134)

2010-11-01 Thread Ross Walker
Snapshot creation/deletion during a resilver causes it to start over. Try suspending all snapshot activity during the resilver. -Ross

Re: [zfs-discuss] Performance issues with iSCSI under Linux

2010-11-01 Thread Ross Walker
sustained throughput to give an accurate figure based on one's setup; otherwise start with a reasonable value, say 1GB, and decrease until the pauses stop. -Ross

Re: [zfs-discuss] vdev failure -> pool loss ?

2010-10-19 Thread Ross Walker
datasets that have this option set. This doesn't prevent pool loss in the face of a vdev failure, merely reduces the likelihood of file loss due to block corruption. A loss of a vdev (mirror, raidz or non-redundant disk) means the loss of the pool. -Ross

Re: [zfs-discuss] Performance issues with iSCSI under Linux

2010-10-15 Thread Ross Walker
> like a DDRDrive X1 and an OCZ Z-Drive which are both PCIe cards and don't use > the local controller. What mount options are you using on the Linux client for the NFS share? -Ross
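For reference, a typical Linux-side NFS mount to compare against looks something like this (the server name and paths are placeholders, not from the thread):

    mount -t nfs -o rw,hard,intr,proto=tcp,vers=3,rsize=32768,wsize=32768 \
        filer:/tank/export /mnt/export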

Re: [zfs-discuss] Finding corrupted files

2010-10-15 Thread Ross Walker
> the 100 TB range? That would be quite a number of single drives then, > especially when you want to go with zpool raid-1. A pool consisting of 4 disk raidz vdevs (25% overhead) or 6 disk raidz2 vdevs (33% overhead) should deliver the storage and performance for a pool that size, versus

Re: [zfs-discuss] Finding corrupted files

2010-10-12 Thread Ross Walker
ZFS' built-in mirrors, otherwise if I were to use HW RAID I would use RAID5/6/50/60 since errors encountered can be reproduced; two parity raids mirrored in ZFS would probably provide the best of both worlds, for a steep cost though. -Ross

Re: [zfs-discuss] performance leakage when copy huge data

2010-09-09 Thread Ross Walker
recover from a read error themselves. With ZFS one really needs to disable this and have the drives fail immediately. Check your drives to see if they have this feature, if so think about replacing the drives in the source pool that have long se
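One way to check for that feature (often marketed as TLER or ERC) is smartmontools, assuming the drive accepts SCT commands; the device name is illustrative:

    # show the drive's current error-recovery timers
    smartctl -l scterc /dev/sda
    # cap read/write recovery at 7 seconds (values are tenths of a second)
    smartctl -l scterc,70,70 /dev/sda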

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Ross Walker
cache rather than disk. Breaking your pool into two or three, setting up vdevs of different types on different classes of disk, and tiering your VMs based on their performance profile would help. -Ross

Re: [zfs-discuss] ZFS with Equallogic storage

2010-08-21 Thread Ross Walker
On Aug 21, 2010, at 4:40 PM, Richard Elling wrote: > On Aug 21, 2010, at 10:14 AM, Ross Walker wrote: >> I'm planning on setting up an NFS server for our ESXi hosts and plan on >> using a virtualized Solaris or Nexenta host to serve ZFS over NFS. > > Please follow

Re: [zfs-discuss] ZFS with Equallogic storage

2010-08-21 Thread Ross Walker
On Aug 21, 2010, at 2:14 PM, Bill Sommerfeld wrote: > On 08/21/10 10:14, Ross Walker wrote: >> I am trying to figure out the best way to provide both performance and >> resiliency given the Equallogic provides the redundancy. > > (I have no specific experience with Equallogic

[zfs-discuss] ZFS with Equallogic storage

2010-08-21 Thread Ross Walker
s setup perform? Anybody with experience in this type of setup? -Ross

Re: [zfs-discuss] ZFS in Linux (was Opensolaris is apparently dead)

2010-08-19 Thread Ross Walker
the OS' VFS layer to the lower-level block layer, but this would assure both reliability and performance. -Ross

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Ross Walker
ed in such a way > that it specifically depends on GPL components. This is how I see it as well. The big problem is not the insmod'ing of the blob but how it is distributed. As far as I know this can be circumvented by not including it in the main distribution but thro

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-17 Thread Ross Walker
On Aug 17, 2010, at 5:44 AM, joerg.schill...@fokus.fraunhofer.de (Joerg Schilling) wrote: > Frank Cusack wrote: > >> On 8/16/10 9:57 AM -0400 Ross Walker wrote: >>> No, the only real issue is the license and I highly doubt Oracle will >>> re-release ZFS under

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-17 Thread Ross Walker
On Aug 16, 2010, at 11:17 PM, Frank Cusack wrote: > On 8/16/10 9:57 AM -0400 Ross Walker wrote: >> No, the only real issue is the license and I highly doubt Oracle will >> re-release ZFS under GPL to dilute its competitive advantage. > > You're saying Oracle wan

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Ross Walker
it maintainer. Linux is an evolving OS; what determines a FS's continued existence is the public's adoption rate of that FS. If nobody ends up using it, then the kernel will drop it, in which case it will eventually die. -Ross

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Ross Walker
from competition in order to drive innovation, so it would be beneficial for both FSs to continue together into the future. -Ross

Re: [zfs-discuss] ZFS and VMware

2010-08-14 Thread Ross Walker
e same, regardless of NFS vs iSCSI. > > You should always copy files via GUI. That's the lesson here. Technically you should always copy vmdk files via vmkfstools on the command line. That will give you wire speed transfers. -Ross
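A minimal example of such a copy on the ESX(i) console (the datastore paths and the thin disk format are illustrative choices):

    vmkfstools -i /vmfs/volumes/ds1/vm/vm.vmdk /vmfs/volumes/ds2/vm/vm.vmdk -d thin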

Re: [zfs-discuss] iScsi slow

2010-08-05 Thread Ross Walker
On Aug 5, 2010, at 2:24 PM, Roch Bourbonnais wrote: > > On Aug 5, 2010, at 19:49, Ross Walker wrote: > >> On Aug 5, 2010, at 11:10 AM, Roch wrote: >> >>> >>> Ross Walker writes: >>>> On Aug 4, 2010, at 12:04 PM, Roch wrote: >>>

Re: [zfs-discuss] iScsi slow

2010-08-05 Thread Ross Walker
On Aug 5, 2010, at 11:10 AM, Roch wrote: > > Ross Walker writes: >> On Aug 4, 2010, at 12:04 PM, Roch wrote: >> >>> >>> Ross Walker writes: >>>> On Aug 4, 2010, at 9:20 AM, Roch wrote: >>>> >>>>> >>>

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Ross Walker
On Aug 4, 2010, at 12:04 PM, Roch wrote: > > Ross Walker writes: >> On Aug 4, 2010, at 9:20 AM, Roch wrote: >> >>> >>> Ross Asks: >>> So on that note, ZFS should disable the disks' write cache, >>> not enable them

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Ross Walker
On Aug 4, 2010, at 9:20 AM, Roch wrote: > > > Ross Asks: > So on that note, ZFS should disable the disks' write cache, > not enable them despite ZFS's COW properties because it > should be resilient. > > No, because ZFS builds resiliency on top of unreliable

Re: [zfs-discuss] iScsi slow

2010-08-04 Thread Ross Walker
On Aug 4, 2010, at 3:52 AM, Roch wrote: > > Ross Walker writes: > >> On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais >> wrote: >> >>> >>> On May 27, 2010, at 07:03, Brent Jones wrote: >>> >>>> On Wed, May 26, 2010 at 5:08 A

Re: [zfs-discuss] iScsi slow

2010-08-03 Thread Ross Walker
On Aug 3, 2010, at 5:56 PM, Robert Milkowski wrote: > On 03/08/2010 22:49, Ross Walker wrote: >> On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais >> wrote: >> >>> On May 27, 2010, at 07:03, Brent Jones wrote: >>> >>> >>

Re: [zfs-discuss] iScsi slow

2010-08-03 Thread Ross Walker
Neither synchronous nor asynchronous: it is simply SCSI over IP. It is the application using the iSCSI protocol that determines whether it is synchronous, issue a flush after write, or asynchronous, wait until target flushes. I think the ZFS developers didn't quite understand that and wanted strict

Re: [zfs-discuss] Mirrored raidz

2010-07-26 Thread Ross Walker
or and let it resilver and sit for a > week. If that's the case why not create a second pool called 'backup' and 'zfs send' periodically to the backup pool? -Ross
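A sketch of that periodic send to a backup pool (pool, dataset and snapshot names are made up):

    zfs snapshot tank/data@weekly1
    zfs send tank/data@weekly1 | zfs recv backup/data
    # later runs only ship the changes since the previous snapshot
    zfs snapshot tank/data@weekly2
    zfs send -i @weekly1 tank/data@weekly2 | zfs recv backup/data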

Re: [zfs-discuss] zfs raidz1 and traditional raid 5 perfomrance comparision

2010-07-25 Thread Ross Walker
n small operations, or (b) implement raidz such that striping > of blocks behaves differently for small operations (plus parity). So the > confirmation I'm looking for would be somebody who knows the actual source > code, and the actual architecture that was chosen to implement raidz i

Re: [zfs-discuss] File cloning

2010-07-22 Thread Ross Walker
corruption than the worry when > people give fire-and-brimstone speeches about never disabling > zil-writing while using the NFS server. but it seems to mostly work > anyway when I do this, so I'm probably confused about something. To add to Miles' comments, what you are tr

Re: [zfs-discuss] zfs raidz1 and traditional raid 5 perfomrance comparision

2010-07-20 Thread Ross Walker
g written (worse performance). If it's a partial stripe width then the remaining data needs to be read off disk, which doubles the IOs. -Ross

Re: [zfs-discuss] Need ZFS master!

2010-07-13 Thread Ross Walker
have an rpool mirror. -Ross On Jul 12, 2010, at 6:30 PM, "Beau J. Bechdol" wrote: > I do apologize but I am completely lost here. Maybe I am just not > understanding. Are you saying that a slice has to be created on the second > drive before it can be added to the
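For context, the usual Solaris sequence for adding that slice and mirroring rpool looks roughly like this (device names are hypothetical):

    # copy the first disk's VTOC onto the second disk
    prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
    # attach the slice so rpool becomes a mirror and resilvers
    zpool attach rpool c0t0d0s0 c0t1d0s0
    # make the second disk bootable (x86)
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0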

Re: [zfs-discuss] Encryption?

2010-07-11 Thread Ross Walker
VFS API separate from the Linux VFS API so file systems can be implemented in user space. FUSE needs a little more work to handle ZFS as a file system. -Ross

Re: [zfs-discuss] Should i enable Write-Cache ?

2010-07-10 Thread Ross Walker
on a regular LSI SAS (non-RAID) controller. The only change the PERC made was to coerce the disk size down 128MB, leaving 128MB unused at the end of the drive, which would mean new disks would be slightly bigger. -Ross

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Ross Walker
On Jun 24, 2010, at 10:42 AM, Robert Milkowski wrote: > On 24/06/2010 14:32, Ross Walker wrote: >> On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote: >> >> >>> On 23/06/2010 18:50, Adam Leventhal wrote: >>> >>>>> Does i

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Ross Walker
To get good random IO with raidz you need a zpool with X raidz vdevs where X = desired IOPS / IOPS of a single drive. -Ross
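Worked example of that rule of thumb: for roughly 300 random IOPS out of ~100-IOPS drives you would build three raidz vdevs, along these lines (device names invented):

    zpool create tank \
        raidz c1t0d0 c1t1d0 c1t2d0 \
        raidz c1t3d0 c1t4d0 c1t5d0 \
        raidz c1t6d0 c1t7d0 c1t8d0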

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-23 Thread Ross Walker
z2) specific disks? What's the record size on those datasets? 8k? -Ross

Re: [zfs-discuss] ls says: /tank/ws/fubar: Operation not applicable

2010-06-22 Thread Gordon Ross
lstat64("/tank/ws/fubar", 0x080465D0) Err#89 ENOSYS

[zfs-discuss] ls says: /tank/ws/fubar: Operation not applicable

2010-06-22 Thread Gordon Ross
Anyone know why my ZFS filesystem might suddenly start giving me an error when I try to "ls -d" the top of it? i.e.: ls -d /tank/ws/fubar /tank/ws/fubar: Operation not applicable zpool status says all is well. I've tried snv_139 and snv_137 (my latest and previous installs). It's an amd64 box. B

Re: [zfs-discuss] SLOG striping? (Bob Friesenhahn)

2010-06-22 Thread Ross Walker
On Jun 22, 2010, at 8:40 AM, Jeff Bacon wrote: >> The term 'stripe' has been so outrageously severely abused in this >> forum that it is impossible to know what someone is talking about when >> they use the term. Seemingly intelligent people continue to use wrong >> terminology because they think

Re: [zfs-discuss] Dedup... still in beta status

2010-06-16 Thread Ross Walker
Set a max size the ARC can grow to, saving room for system services, get an SSD drive to act as an L2ARC, run a scrub first to prime the L2ARC (actually probably better to run something targetting just those datasets in question), then delete the dedup objects, smallest to largest.
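A sketch of the first two steps on Solaris (the ARC cap takes effect after a reboot; the 4 GB value and the cache device name are made up):

    # cap the ARC at 4 GB (0x100000000 bytes)
    echo "set zfs:zfs_arc_max = 0x100000000" >> /etc/system
    # add an SSD as L2ARC, then prime it with a scrub
    zpool add tank cache c4t0d0
    zpool scrub tank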

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving ba

2010-06-14 Thread Ross Walker
problem again in the future. -Ross

Re: [zfs-discuss] Please trim posts

2010-06-11 Thread Ross Walker
On long threads with inlined comments, think about keeping the previous 2 comments before yours, or trimming anything 3 levels of indents or more. Of course that's just my general rule of thumb and different discussions require different quotings, but just being mindful is often

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Ross Walker
a 1M bs or better instead. -Ross

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-09 Thread Ross Walker
probably rethink the setup. ZIL will not buy you much here and if your VM software is like VMware then each write over NFS will be marked FSYNC, which will force the lack of IOPS to the surface. -Ross

Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-07 Thread Ross Walker
for VMs and data. If you need high performance data such as databases, use iSCSI zvols directly into the VM, otherwise NFS/CIFS into the VM should be good enough. -Ross

Re: [zfs-discuss] Migrating to ZFS

2010-06-02 Thread Ross Walker
There is a high potential for tears here. Get an external disk for your own sanity. -Ross

Re: [zfs-discuss] New SSD options

2010-05-21 Thread Ross Walker
On May 20, 2010, at 7:17 PM, Ragnar Sundblad wrote: On May 21, 2010, at 00.53, Ross Walker wrote: On May 20, 2010, at 6:25 PM, Travis Tabbal wrote: use a slog at all if it's not durable? You should disable the ZIL instead. This is basically where I was going. There only seems

Re: [zfs-discuss] New SSD options

2010-05-20 Thread Ross Walker
backup should do the trick. It might not have the capacity of an SSD, but in my experience it works well in the 1TB data moderately loaded range. Have more data/activity, then try more cards and more pools; otherwise pony up for a capacitor-backed SSD. -Ross

Re: [zfs-discuss] ZFS High Availability

2010-05-13 Thread Ross Walker
one in containers within the 2 original VMs so as to maximize ARC space. -Ross

Re: [zfs-discuss] ZFS High Availability

2010-05-12 Thread Ross Walker
On May 12, 2010, at 3:06 PM, Manoj Joseph wrote: Ross Walker wrote: On May 12, 2010, at 1:17 AM, schickb wrote: I'm looking for input on building an HA configuration for ZFS. I've read the FAQ and understand that the standard approach is to have a standby system with access t

Re: [zfs-discuss] ZFS High Availability

2010-05-12 Thread Ross Walker
ng state as the original. There should be no interruption of services in this setup. This type of arrangement provides for oodles of flexibility in testing/ upgrading deployments as well. -Ross ___ zfs-discuss mailing list zfs-discuss@opensolari

Re: [zfs-discuss] Performance of the ZIL

2010-05-06 Thread Ross Walker
ent, but if an application doesn't flush its data, then it can definitely have partially written data. -Ross

Re: [zfs-discuss] Snapshots and Data Loss

2010-04-23 Thread Ross Walker
On Apr 22, 2010, at 11:03 AM, Geoff Nordli wrote: From: Ross Walker [mailto:rswwal...@gmail.com] Sent: Thursday, April 22, 2010 6:34 AM On Apr 20, 2010, at 4:44 PM, Geoff Nordli wrote: If you combine the hypervisor and storage server and have students connect to the VMs via RDP or VNC

Re: [zfs-discuss] Snapshots and Data Loss

2010-04-22 Thread Ross Walker
with it. It also allows you to abstract the hypervisor from the client. Need a bigger storage server with lots of memory, CPU and storage though. Later, if need be, you can break out the disks to a storage appliance with an 8 Gb FC or 10GbE iSCSI interconnect. -Ross

Re: [zfs-discuss] Can RAIDZ disks be slices ?

2010-04-21 Thread Ross Walker
scratch non-important data or maybe even mirrored with a slice from the 750GB disk. Will this work as I am hoping it should? Any potential gotchas? Wouldn't it just be easier to zfs send to a file on the 1TB, build your raidz, then zfs recv into the new raidz from this file?
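That send-to-file round trip would look something like this (the pool names and mount point are invented):

    zfs snapshot tank/data@migrate
    zfs send tank/data@migrate > /mnt/1tb/data.stream
    # destroy the old layout, create the raidz pool, then restore:
    zfs recv newtank/data < /mnt/1tb/data.stream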

Re: [zfs-discuss] SSD best practices

2010-04-19 Thread Ross Walker
system. If so, how? If not, why is this unimportant? I don't run the cluster suite, but I'd be surprised if the software doesn't copy the cache to the passive node whenever it's updated. -Ross

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-02 Thread Ross Walker
and another. > ZFS is smart enough to aggregate all these tiny write operations into a > single larger sequential write before sending it to the spindle disks. Hmm, when you did the write-back test was the ZIL SSD included in the write-back? What I was proposing was write-back only on the dis

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Ross Walker
On Thu, Apr 1, 2010 at 10:03 AM, Darren J Moffat wrote: > On 01/04/2010 14:49, Ross Walker wrote: >>> >>> We're talking about the "sync" for NFS exports in Linux; what do they >>> mean >>> with "sync" NFS exports? >> >>

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Ross Walker
they mean with "sync" NFS exports? See section A1 in the FAQ: http://nfs.sourceforge.net/ -Ross

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Ross Walker
Get a drive a little smaller; it still should fit. -Ross

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Ross Walker
one little test. Seriously, all disks configured WriteThrough (spindle and SSD disks alike) using the dedicated ZIL SSD device, very noticeably faster than enabling the WriteBack. What do you get with both SSD ZIL and WriteBack disks enabled? I mean if you have both why not use both

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Ross Walker
On Mar 31, 2010, at 10:25 PM, Richard Elling wrote: On Mar 31, 2010, at 7:11 PM, Ross Walker wrote: On Mar 31, 2010, at 5:39 AM, Robert Milkowski wrote: On Wed, Mar 31, 2010 at 1:00 AM, Karsten Weiss Use something other than Open/Solaris with ZFS as an NFS server? :) I don't

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Ross Walker
ted the data would be lost too. Should we care more for data written remotely than locally? -Ross

Re: [zfs-discuss] ISCSI + RAID-Z + OpenSolaris HA

2010-03-20 Thread Ross Walker
On Mar 20, 2010, at 11:48 AM, vikkr wrote: THX Ross, i plan exporting each drive individually over iSCSI. In this case, the write, as well as reading, will go to all 6 discs at once, right? The only question - how to calculate fault tolerance of such a system if the discs are all

Re: [zfs-discuss] ISCSI + RAID-Z + OpenSolaris HA

2010-03-20 Thread Ross Walker
over iSCSI and setting the 6 drives up as a raidz2 or even raidz3, which will give 3-4 drives of capacity; raidz3 will provide resiliency of a drive failure during a server failure. -Ross
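Once the six iSCSI disks are visible locally, the pool itself is one line (device names hypothetical):

    zpool create tank raidz2 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0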

Re: [zfs-discuss] Can we get some documentation on iSCSI sharing after comstar took over?

2010-03-17 Thread Ross Walker
iscsi works as expected? -Ross

Re: [zfs-discuss] corruption of ZFS on iScsi storage

2010-03-15 Thread Ross Walker
On Mar 15, 2010, at 11:10 PM, Tim Cook wrote: On Mon, Mar 15, 2010 at 9:10 PM, Ross Walker wrote: On Mar 15, 2010, at 7:11 PM, Tonmaus wrote: Being an iscsi target, this volume was mounted as a single iscsi disk from the solaris host, and prepared as a zfs pool consisting of this

Re: [zfs-discuss] corruption of ZFS on iScsi storage

2010-03-15 Thread Ross Walker
scenario is rather one to be avoided. There is nothing saying redundancy can't be provided below ZFS; it's just that if you want auto recovery you need redundancy within ZFS itself as well. You can have 2 separate raid arrays served up via iSCSI to ZFS which then makes a mirror out of the storage.

Re: [zfs-discuss] corruption of ZFS on iScsi storage

2010-03-15 Thread Ross Walker
has an option to disable write-back caching; at least then, if it doesn't honor flushing, your data should still be safe. -Ross

Re: [zfs-discuss] corruption of ZFS on iScsi storage

2010-03-15 Thread Ross Walker
IET I hope you were NOT using the write-back option on it as it caches write data in volatile RAM. IET does support cache flushes, but if you cache in RAM (bad idea) a system lockup or panic will ALWAYS lose data. -Ross

Re: [zfs-discuss] ZFS - VMware ESX --> vSphere Upgrade : Zpool Faulted

2010-03-11 Thread Ross Walker
On Mar 11, 2010, at 12:31 PM, Andrew wrote: Hi Ross, Ok - as a Solaris newbie.. i'm going to need your help. Format produces the following:- c8t4d0 (VMware-Virtualdisk-1.0 cyl 65268 alt 2 hd 255 sec 126) / p...@0,0/pci15ad,1...@10/s...@4,0 what dd command do I need to run to refe

Re: [zfs-discuss] ZFS - VMware ESX --> vSphere Upgrade : Zpool Faulted

2010-03-11 Thread Ross Walker
will need to get rid of the RDM and use the iSCSI initiator in the solaris vm to mount the volume. See how the first 34 sectors look, and if they are damaged take the backup GPT to reconstruct the primary GPT and recreate the MBR. -Ross

Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Ross Walker
for memory. It is a wonder it didn't deadlock. If I were to put a ZFS file system on a ramdisk, I would limit the size of the ramdisk and ARC so both, plus the kernel, fit nicely in memory with room to spare for user apps. -Ross
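For example, on Solaris (a 2 GB ramdisk plus a 2 GB ARC cap; both sizes are illustrative, and the /etc/system change needs a reboot):

    # create a fixed-size ramdisk and build a pool on it
    ramdiskadm -a rd1 2g
    zpool create rdpool /dev/ramdisk/rd1
    # cap the ARC at 2 GB (0x80000000 bytes) so ramdisk + ARC + kernel fit
    echo "set zfs:zfs_arc_max = 0x80000000" >> /etc/system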

Re: [zfs-discuss] terrible ZFS performance compared to UFS on ramdisk (70% drop)

2010-03-09 Thread Ross Walker
memory file system. This would be more for something like temp databases in a RDBMS or a cache of some sort. -Ross

Re: [zfs-discuss] [indiana-discuss] future of OpenSolaris

2010-02-25 Thread Ross Walker
due to the newness and the binary stability with patches. Without it OS is no longer really production quality. A little scattered in my reasoning but I think I get the main idea across. -Ross

Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-19 Thread Ross Walker
ting the storage policy up to the system admin rather than the storage admin. It would be better to put effort into supporting FUA and DPO options in the target than dynamically changing a volume's cache policy from the initiator side. -Ross

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced

2010-02-10 Thread Ross Walker
The new Dell MD11XX series is 24 2.5" drives and you can chain 3 of them together off a single controller. If your drives are dual ported you can use both HBA ports for redundant paths. -Ross

Re: [zfs-discuss] NFS access by OSX clients (was Cores vs. Speed?)

2010-02-09 Thread Ross Walker
could do the same with LDAP, but winbind has the advantage of auto-creating UIDs based on the user's RID+mapping range which saves A LOT of work in creating UIDs in AD. -Ross

Re: [zfs-discuss] Cores vs. Speed?

2010-02-05 Thread Ross Walker
, but you need a lot more drives than what multiple mirror vdevs can provide IOPS-wise with the same amount of spindles. -Ross

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-04 Thread Ross Walker
Interesting, can you explain what zdb is dumping exactly? I suppose you would be looking for blocks referenced in the snapshot that have a single reference and print out the associated file/directory name? -Ross On Feb 4, 2010, at 7:29 AM, Darren Mackay wrote: Hi Ross, zdb - f

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-04 Thread Ross Walker
system functions offered by OS. I scan every byte in every file manually and it ^^^ On February 3, 2010 10:11:01 AM -0500 Ross Walker wrote: Not a ZFS method, but you could use rsync with the dry run option to list all changed files

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Ross Walker
On Feb 3, 2010, at 8:59 PM, Frank Cusack wrote: On February 3, 2010 6:46:57 PM -0500 Ross Walker wrote: So was there a final consensus on the best way to find the difference between two snapshots (files/directories added, files/directories deleted and files/directories changed)? Find

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Ross Walker
only real option is rsync. Of course you can zfs send the snap to another system and do the rsync there against a local previous version. -Ross
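The .zfs/snapshot directories make that comparison possible locally too; a dry-run sketch (dataset and snapshot names invented):

    # list files added/changed in snap2 relative to snap1, plus deletions
    rsync -avn --delete /tank/fs/.zfs/snapshot/snap2/ /tank/fs/.zfs/snapshot/snap1/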
