Many large-scale photo hosts start with NetApp as the default "good
enough" way to handle multi-TB storage. With only a 1-5% cache on top,
the workload is truly random-read over many TBs. But these workloads
almost always assume a frontend cache to take care of hot traffic, so
L2ARC is just a nice implement...
... left it on. Anything possible there?
The only other thing is that I did "zfs rollback" for a totally
unrelated filesystem in the pool, but I have no idea if this could
have affected it.
(I've verified that I got the right one with "zpool history".)
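(For anyone curious, that check is just a matter of grepping the pool's
command log; "pool" below is a stand-in for the real pool name:)
zpool history pool | grep rollback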
mike
On Tue, Jan 5, 2010, ...
I replayed a bunch of filesystems in order to get dedupe benefits. The
only thing is that a couple of them got rolled back to November or so
(and I didn't notice before destroy'ing the old copies).
I used something like:
zfs snapshot pool/f...@dd
zfs send -Rp pool/f...@dd | zfs recv -d pool/fs2
(after done.
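(For reference, the whole replay sequence would look roughly like this;
"pool/fs" stands in for the obfuscated name above, the target parent has
to exist before the receive, and dedup has to be enabled on it if you
want the new copy deduped:)
zfs create -o dedup=on pool/fs2
zfs snapshot -r pool/fs@dd
zfs send -Rp pool/fs@dd | zfs recv -d pool/fs2
zfs destroy -r pool/fs    # only after verifying the copy, e.g. zfs list -r pool/fs2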
Just l2arc. Guess I can always repartition later.
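(If I do repartition later, the cache device can just be removed and the
slices added back; something like this, with a made-up device name:)
zpool remove pool c2t1d0
zpool add pool log c2t1d0s0 cache c2t1d0s1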
mike
On Sun, Jan 3, 2010 at 11:39 AM, Jack Kielsmeier wrote:
> Are you using the SSD for l2arc or zil or both?
Make that 25MB/sec, and rising...
So it's 8x faster now.
mike
I've written about my slow-to-dedupe RAIDZ.
After a week of waiting, I finally bought a little $100 30G OCZ
Vertex and plugged it in as a cache.
After <2 hours of warmup, my zfs send/receive rate on the pool is
>16MB/sec (reading and writing each at 16MB/sec, as measured by zpool
iostat).
That's ...
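(The warmup is easy to watch, by the way: "zpool iostat -v" breaks out
the cache device separately, so you can see it fill and start serving
reads. "pool" is a placeholder for the pool name:)
zpool iostat -v pool 5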
I have a 4-disk RAIDZ, and I reduced the time to scrub it from 80
hours to about 14 by reducing the number of snapshots, adding RAM,
turning off atime and compression, and some other tweaks. This week
(after replaying a large volume with dedup=on) it's back up, way up.
I replayed a 700G filesystem to ...
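(For reference, the in-place tweaks amounted to roughly the following;
"pool" and the snapshot name are placeholders, and which snapshots you
can afford to drop is obviously your call:)
zfs destroy pool/fs@2009-10-01    # repeat for each old snapshot
zfs set atime=off pool
zfs set compression=off pool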
FWIW, I just disabled prefetch, and my dedup + zfs recv seems to be
running visibly faster (somewhere around 3-5x faster).
echo zfs_prefetch_disable/W0t1 | mdb -kw
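(W0t1 just writes a decimal 1 into the kernel variable. To check the
current value, or to turn prefetch back on, the same mdb trick works:)
echo zfs_prefetch_disable/D | mdb -k
echo zfs_prefetch_disable/W0t0 | mdb -kw
(To make the disable survive a reboot, the equivalent /etc/system line
would be "set zfs:zfs_prefetch_disable = 1".)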
Anyone else see a result like this?
I'm using the "read" bandwidth from the sending pool, from "zpool
iostat -x 5", to estimate the transfer rate.
For me, arcstat.pl is a slam-dunk predictor of dedup throughput. If my
"miss%" is in the single digits, dedup write speeds are reasonable. When the
ARC misses go way up, dedup writes get very slow. So my guess is that this
issue depends entirely on whether or not the DDT is in RAM. I don't h...
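(If anyone wants to see how big their DDT actually is, zdb will
summarize it: -D prints a short summary and -DD adds the full
histogram, including the in-core size. "pool" is a placeholder:)
zdb -D pool
zdb -DD pool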
Anyone who's lost data this way: were you doing weekly scrubs, or did you
find out about the simultaneous failures after not touching the bits for
months?
mike
My ARC is ~3GB.
I'm doing a test that copies 10GB of data to a volume where the blocks
should dedupe 100% with existing data.
The first time, the test runs at <5MB/sec and seems to average a 10-30%
ARC *miss* rate, with <400 ARC reads/sec.
When things are working at disk bandwidth, I'm getting 3-5% ARC misses ...
I have observed the opposite, and I believe that all writes are slow to my
dedup'd pool.
I used local rsync (no ssh) for one of my migrations (so it was restartable,
as it took *4 days*), and the writes were slow just like zfs recv.
I have not seen fast writes of real data to the deduped volume ...
Mine is similar (4-disk RAIDZ1)
- send/recv with dedup on: <4MB/sec
- send/recv with dedup off: ~80MB/sec
- send > /dev/null: ~200MB/sec.
I know dedup can save some disk bandwidth on write, but it shouldn't save
much read bandwidth (so I think these numbers are right).
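(For what it's worth, the /dev/null number is just the raw send
throughput, measured roughly like this; the snapshot name is a
placeholder:)
zfs send pool/fs@snap > /dev/null &
zpool iostat pool 5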
There's a warning in a Je...
I have also had slow scrubbing on filesystems with lots of files, and I
agree that it does seem to degrade badly. For me, it seemed to go from 24
hours to 72 hours in a matter of a few weeks.
I did these things on a pool in-place, which helped a lot (no rebuilding):
1. reduced number of snapshots ...
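(A quick way to see how many snapshots a pool is carrying before and
after that kind of cleanup; -H drops the header so wc counts only
snapshots, and "pool" is a placeholder:)
zfs list -H -t snapshot -r pool | wc -l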
Note you don't get the better vibration control and other improvements the
enterprise drives have. So it's not exactly that easy. :)
Most manufacturers have a utility available that sets this behavior.
For WD drives, it's called WDTLER.EXE. You have to make a bootable USB stick to
run the app, but it is simple to change the setting to the enterprise behavior.
zpool import done! Back online.
Total downtime for the 4TB pool was about 8 hours; I don't know how much of
that was spent completing the destroy transaction.
I'm in the same boat, exactly. Destroyed a large dataset and rebooted, with a
scrub running on the same pool.
My reboot got stuck on "Reading ZFS Config: *" for several hours (disks were
active). I cleared the zpool.cache from single-user mode and am doing an
import (so I can boot again). I wasn't able to boot m...
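(For anyone else stuck at that point, the sequence is roughly: boot
single-user, move the cache file aside so boot stops trying to open the
pool, then import it by hand. The path is the stock one; "pool" is a
placeholder:)
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad
reboot
zpool import            # with no argument, lists importable pools
zpool import pool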
I'm using the Caviar Green drives in a 5-disk config.
I downloaded the WDTLER utility and set all the drives to have a 7-second
timeout, like the RE series have.
WDTLER boots into a small DOS app, and you have to hit a key for each drive
you adjust. So this might take a while for a large raidz2.