Something our users do quite a bit of is untarring archives with a lot of small files. More generally, many small, quick writes are a common workload for our users.

Real-world test: our old Linux-based NFS server allowed us to unpack a particular tar file (the source for boost 1.37) in around 2-4 minutes, depending on load. The machine itself wasn't special at all, but it had fancy SGI disk on the back end, and it was using the Linux-specific async NFS export option.
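
For reference, that async behavior is a per-export setting in /etc/exports on the Linux side. The path and options below are purely illustrative, not our actual config:

    /export/home  *(rw,async,no_root_squash)

With async, the server acknowledges writes before they reach stable storage: fast, but it can silently lose data if the server crashes, which is exactly the failure mode the ZIL is there to prevent. (Reload exports with "exportfs -ra".)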

We turned up our X4540s, and the same tar unpack took over 17 minutes! We disabled the ZIL for testing, and that dropped the time to under 1 minute. With the X25-E as a slog, we were able to run this test in 2-4 minutes, the same as the old storage.
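
Attaching the SSD as a slog is a single zpool operation. Pool and device names below are placeholders, not our actual layout:

    # zpool add tank log c4t2d0
    # zpool status tank      (the SSD shows up under a separate "logs" section)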

That said, I strongly recommend using Richard Elling's zilstat; he's posted about it previously on this list. It will help you determine whether adding a slog device will help your workload. I didn't know about the script at the time of our testing, so it ended up being trial and error, running various tests on different hardware setups (which meant creating and destroying quite a few pools).
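
As best I remember, zilstat takes an interval and a count, roughly like this (check the script's usage output for the exact options):

    # ./zilstat.ksh 10 6

That samples ZIL traffic every 10 seconds, six times. If the byte counts stay near zero under your real workload, a slog is unlikely to buy you much.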

-Greg

Jorgen Lundman wrote:

Does un-tarring something count? It's what I used for our tests.

I tested with the ZIL disabled, with the ZIL on a file in /tmp (/tmp/zil), with a CF card (300x), and with a cheap SSD. I'm waiting for X25-E SSDs to arrive to test those:

http://mail.opensolaris.org/pipermail/zfs-discuss/2009-July/030183.html
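
A file-backed slog is quick to set up for this sort of test; on Solaris, /tmp is swap-backed, so it approximates running with no ZIL at all. Names below are placeholders, and this is strictly for testing: on pool versions of this era, losing the log file can leave the pool unimportable.

    # mkfile 1g /tmp/zil
    # zpool add tank log /tmp/zil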

If you want a quick answer, disable the ZIL on your ZFS volume and try it (you need to unmount/mount, export/import, or reboot for the change to take effect). That gives you the theoretical maximum; you can get close to it with various technologies, SSDs and all that.
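
On current OpenSolaris builds this is a kernel tunable, not a dataset property. As best I recall it looks like this:

    In /etc/system (takes effect after a reboot):
        set zfs:zil_disable = 1

    Or on a live system, followed by an unmount/mount of the filesystem:
        # echo zil_disable/W0t1 | mdb -kw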

I am no expert on this; I knew nothing about it two weeks ago.

But for our provisioning engine, which untars Movable Type for customers, going from 5 minutes to 45 seconds is quite an improvement. Theoretically I could get that down to 11 seconds (with the ZIL disabled).

Lund


Monish Shah wrote:
Hello Greg,

I'm curious how much performance benefit you gain from the ZIL accelerator. Have you measured that? If not, do you have a gut feel about how much it helped? Also, for what kind of applications does it help?

(I know it helps with synchronous writes. I'm looking for real world answers like: "Our XYZ application was running like a dog and we added an SSD for ZIL and the response time improved by X%.")

Of course, I would welcome a reply from anyone who has experience with this, not just Greg.

Monish

----- Original Message ----- From: "Greg Mason" <gma...@msu.edu>
To: "HUGE | David Stahl" <dst...@hugeinc.com>
Cc: "zfs-discuss" <zfs-discuss@opensolaris.org>
Sent: Thursday, August 20, 2009 4:04 AM
Subject: Re: [zfs-discuss] Ssd for zil on a dell 2950


Hi David,

We are using them in our Sun X4540 filers. We are actually using two
SSDs per pool, to improve throughput (since the logbias feature isn't in
an official release of OpenSolaris yet). I do wish they made an 8G or
16G part, since the 32G capacity is largely wasted on a log device.
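
Striping the two log devices is just a matter of listing both after the log keyword (names below are placeholders):

    # zpool add tank log c2t0d0 c2t1d0            (striped: more throughput)
    # zpool add tank log mirror c2t0d0 c2t1d0     (mirrored: more safety)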

We had to go the NewEgg route, though. We tried to buy some Sun-branded
disks from Sun, but that's a different story. To summarize, buying the
NewEgg parts was the only way to keep the project on schedule.

Generally, we've been pretty pleased with them. Occasionally we've had
an SSD that wasn't behaving well. It looks like you can replace log
devices now, though... :) We use the 2.5" to 3.5" SATA adapter from
IcyDock, in a Sun X4540 drive sled. If you can attach a standard SATA
disk to a Dell sled, this approach would most likely work for you as
well. The only issue with third-party parts is that the support
organizations for the software and hardware will make it very clear that
such a configuration is quite unsupported. That said, we've had pretty
good luck with them.
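
Swapping out a misbehaving log device works like any other replace; device names here are placeholders:

    # zpool replace tank c4t2d0 c4t3d0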

-Greg


