Re: [zfs-discuss] Has anyone used a Dell with a PERC H310?

2012-05-06 Thread Greg Mason
I am currently trying to get two of these things running Illumian. I don't have any particular performance requirements, so I'm thinking of using some sort of supported hypervisor (either RHEL and KVM or VMware ESXi) to get around the driver support issues, and passing the disks through to an I

Re: [zfs-discuss] ZFS Group Quotas

2010-08-18 Thread Greg Mason
t versions of linux (i.e. RHEL 6) are a bit better at NFSv4, but I'm not holding my breath. -- Greg Mason HPC Administrator Michigan State University Institute for Cyber Enabled Research High Performance Computing Center web: www.icer.msu.edu email: gma...@msu.edu

Re: [zfs-discuss] ZFS flar image.

2009-09-14 Thread Greg Mason
As an alternative, I've been taking a snapshot of rpool on the golden system, sending it to a file, and creating a boot environment from the archived snapshot on target systems. After fiddling with the snapshots a little, I then either appropriately anonymize the system or provide it with its i
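The golden-image workflow described above can be sketched roughly as follows. Pool, snapshot, and boot-environment names (rpool, golden, newBE) are illustrative, not from the original post, and exact beadm arguments vary by release:

```shell
# On the golden system: snapshot the root pool recursively and
# archive the replication stream to a file.
zfs snapshot -r rpool@golden
zfs send -R rpool@golden > /export/archive/rpool-golden.zfs

# On each target system: receive the archived stream, then create
# and activate a boot environment from the received dataset.
zfs recv -F rpool/ROOT/golden < /export/archive/rpool-golden.zfs
beadm create -e golden newBE
beadm activate newBE
```

Any per-host anonymization (hostname, network configuration) would happen between the receive and the activation, as the post describes.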

Re: [zfs-discuss] Ssd for zil on a dell 2950

2009-08-20 Thread Greg Mason
How about the bug "removing slog not possible"? What if this slog fails? Is there a plan for such a situation (pool becomes inaccessible in this case)? You can "zpool replace" a bad slog device now. -Greg
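A minimal sketch of that in-place replacement, assuming a pool named tank and hypothetical device names:

```shell
zpool replace tank c1t4d0 c1t5d0   # swap the failed slog for a new device
zpool status tank                  # confirm the log vdev has resilvered
```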

Re: [zfs-discuss] Ssd for zil on a dell 2950

2009-08-20 Thread Greg Mason
Of course, I would welcome a reply from anyone who has experience with this, not just Greg. Monish - Original Message - From: "Greg Mason" To: "HUGE | David Stahl" Cc: "zfs-discuss" Sent: Thursday, August 20, 2009 4:04 AM Subject: Re: [zfs-discuss] Ssd

Re: [zfs-discuss] Ssd for zil on a dell 2950

2009-08-19 Thread Greg Mason
using the third-party parts is that the involved support organizations for the software/hardware will make it very clear that such a configuration is quite unsupported. That said, we've had pretty good luck with them. -Greg -- Greg Mason System Administrator High Performance Computing

[zfs-discuss] unexpected behavior with "nbmand=on" set

2009-08-19 Thread Greg Mason
on a test file system resolved both bugs, as well as other known issues that our users have been running into. All the various known issues this caused can be found at the MSU HPCC wiki: https://wiki.hpcc.msu.edu/display/Issues/Known+Issues, under "Home Directory file system." -Greg

Re: [zfs-discuss] Shrinking a zpool?

2009-08-06 Thread Greg Mason
ilesystems around to different systems. If you had only one filesystem in the pool, you could then safely destroy the original pool. This does mean you'd need 2x the size of the LUN during the transfer though. For replication of ZFS filesystems, we use a similar process, with just a lot of inc

Re: [zfs-discuss] SSD's and ZFS...

2009-07-23 Thread Greg Mason
D is an MLC device. The Intel SSD is an SLC device. That right there accounts for the cost difference. The SLC device (Intel X25-E) will last quite a bit longer than the MLC device. -Greg -- Greg Mason System Administrator Michigan State University High Performance Computing Center

Re: [zfs-discuss] Question about user/group quotas

2009-07-09 Thread Greg Mason
Thanks for the link Richard, I guess the next question is, how safe would it be to run snv_114 in production? Running something that would be technically "unsupported" makes a few folks here understandably nervous... -Greg On Thu, 2009-07-09 at 10:13 -0700, Richard Elling wrote:

[zfs-discuss] Question about user/group quotas

2009-07-09 Thread Greg Mason
being able to utilize ZFS user quotas, as we're having problems with NFSv4 on our clients (SLES 10 SP2). We'd like to be able to use NFSv3 for now (one large ZFS filesystem, with user quotas set), until the flaws with our Linux NFS clients can be addressed. -- Greg Mason System Admi

Re: [zfs-discuss] importing pool with missing slog followup

2009-06-09 Thread Greg Mason
In my testing, I've seen that trying to duplicate zpool disks with dd often results in a disk that's unreadable. I believe it has something to do with the block sizes of dd. In order to make my own slog backups, I just used cat instead. I plugged the slog SSD into another system (not a necessary s
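The cat-based copy described above might look like this; the raw device paths and image filename are hypothetical:

```shell
# dd with a mismatched block size can yield an unreadable copy, so a
# plain byte stream of the whole slog device is used instead.
cat /dev/rdsk/c2t0d0s0 > /backup/slog-image.bin    # back up the slog SSD
cat /backup/slog-image.bin > /dev/rdsk/c3t0d0s0    # restore onto a new SSD
```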

Re: [zfs-discuss] [storage-discuss] Supermicro SAS/SATA controllers?

2009-04-15 Thread Greg Mason
And it looks like the Intel fragmentation issue is fixed as well: http://techreport.com/discussions.x/16739 FYI, Intel recently had a new firmware release. IMHO, odds are that this will be as common as HDD firmware releases, at least for the next few years. http://news.cnet.com/8301-13924_3-10

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-09 Thread Greg Mason
Harry, ZFS will only compress data if it is able to gain more than 12% of space by compressing the data (I may be wrong on the exact percentage). If ZFS can't get that 12% compression at least, it doesn't bother and will just store the block uncompressed. Also, the default ZFS compressio
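The threshold being recalled is ZFS's rule that a compressed block is kept only when compression saves at least 12.5% (one eighth) of the logical block size; a minimal sketch of that decision:

```python
def zfs_keeps_compressed(logical_size: int, compressed_size: int) -> bool:
    """Sketch of ZFS's store-compressed decision: the compressed copy is
    kept only if it saves at least 1/8 (12.5%) of the logical block size;
    otherwise the block is written uncompressed."""
    return compressed_size <= logical_size - (logical_size // 8)

# A 128 KiB block compressed to 100 KiB is kept compressed;
# one that only shrinks to 120 KiB is stored uncompressed.
print(zfs_keeps_compressed(131072, 102400))  # True
print(zfs_keeps_compressed(131072, 122880))  # False
```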

Re: [zfs-discuss] zfs as a cache server

2009-04-09 Thread Greg Mason
Francois, Your best bet is probably a stripe of mirrors. i.e. a zpool made of many mirrors. This way you have redundancy, and fast reads as well. You'll also enjoy pretty quick resilvering in the event of a disk failure as well. For even faster reads, you can add dedicated L2ARC cache devic
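Such a layout might be created like this; the pool name and device names are made up for illustration:

```shell
# A pool striped across three mirror pairs, with a dedicated SSD
# added as an L2ARC read cache.
zpool create tank \
  mirror c1t0d0 c1t1d0 \
  mirror c1t2d0 c1t3d0 \
  mirror c1t4d0 c1t5d0 \
  cache c2t0d0
```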

Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-03-03 Thread Greg Mason
Just my $0.02, but would pool shrinking be the same as vdev evacuation? I'm quite interested in vdev evacuation as an upgrade path for multi-disk pools. This would be yet another reason for folks to use ZFS at home (you only have to buy cheap disks), but it would also be good to have that

Re: [zfs-discuss] Write caches on X4540

2009-02-12 Thread Greg Mason
e: On Thu, Feb 12, 2009 at 10:33:40AM -0500, Greg Mason wrote: What I'm looking for is a faster way to do this than format -e -d -f

Re: [zfs-discuss] Write caches on X4540

2009-02-12 Thread Greg Mason
Are you sure that write cache is back on after restart? Yes, I've checked with format -e, on each drive. When disabling the write cache with format, it also gives a warning stating this is the case. What I'm looking for is a faster way to do this than format -e -d -f

Re: [zfs-discuss] Write caches on X4540

2009-02-12 Thread Greg Mason
We use several X4540's over here as well, what type of workload do you have, and how much performance increase did you see by disabling the write caches? We see the difference between our tests completing in around 2.5 minutes (with write caches) to around a minute and a half without them,

[zfs-discuss] Write caches on X4540

2009-02-11 Thread Greg Mason
We're using some X4540s, with OpenSolaris 2008.11. According to my testing, to optimize our systems for our specific workload, I've determined that we get the best performance with the write cache disabled on every disk, and with zfs:zfs_nocacheflush=1 set in /etc/system. The only issue is s
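A sketch of that tuning, with a hypothetical disk name; the /etc/system setting is the one named in the post, while the write cache itself is toggled per-disk from format's cache menu:

```shell
# Tell ZFS not to issue cache-flush requests to the disks.
echo "set zfs:zfs_nocacheflush=1" >> /etc/system   # takes effect after reboot

# Disable the on-disk write cache (interactive; repeat for each disk):
format -e c0t0d0   # then: cache -> write_cache -> disable
```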

Re: [zfs-discuss] Send & Receive (and why does 'ls' modify a snapshot?)

2009-02-04 Thread Greg Mason
Tony, I believe you want to use "zfs recv -F" to force a rollback on the receiving side. I'm wondering if your ls is updating the atime somewhere, which would indeed be a change... -Greg
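For example (dataset and host names hypothetical), an incremental send to a receiving side that may have drifted:

```shell
zfs send -i tank/home@monday tank/home@tuesday | \
  ssh backuphost zfs recv -F backup/home   # -F rolls back local changes first
```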

Re: [zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?

2009-02-03 Thread Greg Mason
Orvar, With my testing, I've seen a 5x improvement with small file creation when working specifically with NFS. This is after I added an SSD for the ZIL. I recommend Richard Elling's zilstat (he posted links earlier). It'll let you see if a dedicated device for the ZIL will help your specific

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Greg Mason
I'll give this a script a shot a little bit later today. For ZIL sizing, I'm using either 1 or 2 32G Intel X25-E SSDs in my tests, which, according to what I've read, is 2-4 times larger than the maximum that ZFS can possibly use. We've got 32G of system memory in these Thors, and (if I'm not m

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Greg Mason
> If there was a latency issue, we would see such a problem with our > existing file server as well, which we do not. We'd also have much > greater problems than just file server performance. > > So, like I've said, we've ruled out the network as an issue. I should also add that I've tested the

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Greg Mason
Jim Mauro wrote: > >> This problem only manifests itself when dealing with many small files >> over NFS. There is no throughput problem with the network. > But there could be a _latency_ issue with the network. If there was a latency issue, we would see such a problem with our existing file ser

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Greg Mason
7200 RPM SATA disks. Tim wrote: > > > On Fri, Jan 30, 2009 at 8:24 AM, Greg Mason wrote: > > A Linux NFS file server, with a few terabytes of fibre-attached disk, > using XFS. > > I'm trying to get these Thors to p

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Greg Mason
I should also add that this "creating many small files" issue is the ONLY case where the Thors are performing poorly, which is why I'm focusing on it. Greg Mason wrote: > A Linux NFS file server, with a few terabytes of fibre-attached disk, > using XFS. > > I

Re: [zfs-discuss] write cache and cache flush

2009-01-30 Thread Greg Mason
A Linux NFS file server, with a few terabytes of fibre-attached disk, using XFS. I'm trying to get these Thors to perform at least as well as the current setup. A performance hit is very hard to explain to our users. > Perhaps I missed something, but what was your previous setup? > I.e. what di

Re: [zfs-discuss] write cache and cache flush

2009-01-29 Thread Greg Mason
This problem only manifests itself when dealing with many small files over NFS. There is no throughput problem with the network. I've run tests with the write cache disabled on all disks, and the cache flush disabled. I'm using two Intel SSDs for ZIL devices. This setup is faster than using the

Re: [zfs-discuss] write cache and cache flush

2009-01-29 Thread Greg Mason
the funny thing is that I'm showing a performance improvement over write caches + cache flushes. The only way these pools are being accessed is over NFS. Well, at least the only way I care about when it comes to high performance. I'm pretty sure it would give a performance hit locally, but I do

[zfs-discuss] write cache and cache flush

2009-01-29 Thread Greg Mason
So, I'm still beating my head against the wall, trying to find our performance bottleneck with NFS on our Thors. We've got a couple Intel SSDs for the ZIL, using 2 SSDs as ZIL devices. Cache flushing is still enabled, as are the write caches on all 48 disk devices. What I'm thinking of doing i

Re: [zfs-discuss] Add SSD drive as L2ARC(?) cache to existing ZFS raid?

2009-01-29 Thread Greg Mason
How were you running this test? were you running it locally on the machine, or were you running it over something like NFS? What is the rest of your storage like? just direct-attached (SAS or SATA, for example) disks, or are you using a higher-end RAID controller? -Greg kristof wrote: > Kebab

Re: [zfs-discuss] SSD drives in Sun Fire X4540 or X4500 for dedicated ZIL device

2009-01-23 Thread Greg Mason
If I'm not mistaken (and somebody please correct me if I'm wrong), the Sun 7000 series storage appliances (the Fishworks boxes) use enterprise SSDs, with DRAM caching. One such product is made by STEC. My understanding is that the Sun appliances use one SSD for the ZIL, and one as a read cache.

[zfs-discuss] SSD drives in Sun Fire X4540 or X4500 for dedicated ZIL device

2009-01-22 Thread Greg Mason
We're evaluating the possibility of speeding up NFS operations of our X4540s with dedicated log devices. What we are specifically evaluating is replacing 1 or two of our spare SATA disks with SATA SSDs. Has anybody tried using SSD device(s) as dedicated ZIL devices in an X4540? Are there any kno
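The configuration being evaluated would amount to something like the following, with hypothetical pool and device names:

```shell
zpool add tank log c5t4d0                  # one SATA bay given to a slog SSD
# or, with two SSDs, mirror the log for safety:
zpool add tank log mirror c5t4d0 c5t5d0
```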

Re: [zfs-discuss] ZFS over NFS, poor performance with many small files

2009-01-19 Thread Greg Mason
> > Good idea. Thor has a CF slot, too, if you can find a high speed > CF card. > -- richard We're already using the CF slot for the OS. We haven't really found any CF cards that would be fast enough anyways :)

Re: [zfs-discuss] ZFS over NFS, poor performance with many small files

2009-01-19 Thread Greg Mason
So, what we're looking for is a way to improve performance, without disabling the ZIL, as it's my understanding that disabling the ZIL isn't exactly a safe thing to do. We're looking for the best way to improve performance, without sacrificing too much of the safety of the data. The current

[zfs-discuss] ZFS over NFS, poor performance with many small files

2009-01-19 Thread Greg Mason
or the log device? And, yes, I already know that turning off the ZIL is a Really Bad Idea. We do, however, need to provide our users with a certain level of performance, and what we've got with the ZIL on the pool is completely unacceptable. Thanks for any pointers you may have... -- Gre

Re: [zfs-discuss] Using ZFS for replication

2009-01-15 Thread Greg Mason
zfs-auto-snapshot (SUNWzfs-auto-snapshot) is what I'm using. Only trick is that on the other end, we have to manage our own retention of the snapshots we send to our offsite/backup boxes. zfs-auto-snapshot can handle the sending of snapshots as well. We're running this in OpenSolaris 2008.11 (s
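The receive-side retention that zfs-auto-snapshot does not manage could be handled with a cleanup along these lines; the dataset name, snapshot prefix, and 30-snapshot cutoff are illustrative (and `head -n -N` assumes GNU coreutils):

```shell
KEEP=30
# List snapshots oldest-first, keep the newest $KEEP, destroy the rest.
zfs list -H -t snapshot -o name -s creation -r backup/home |
  grep '@zfs-auto-snap' |
  head -n -"$KEEP" |
  xargs -n1 -r zfs destroy
```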