On Dec 19, 2011, at 11:00 PM, Stefan Esser wrote:

> Am 19.12.2011 19:03, schrieb Daniel Kalchev:
>> I have observed similar behavior, even more extreme on a pool with dedup 
>> enabled. Is dedup enabled on this pool?
> 
> Thank you for the report!
> 
> Well, I had dedup enabled for a few short tests. But since I have got
> "only" 8GB of RAM and dedup seems to require an order of magnitude more
> to be working well, I switched dedup off again after a few hours.

You will need to get rid of the DDT, as the tables are still read even after 
dedup has been disabled. They refer to data that was deduplicated while dedup 
was active.
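You can check how large the leftover DDT is before deciding how to get rid of it. A rough sketch (the pool name "tank" is an assumption; substitute your own):

```shell
# Summary of dedup table entries and ratios for the pool
zpool status -D tank

# More detailed DDT histogram (on-disk and in-core sizes per entry)
zdb -DD tank
```

The in-core entry size reported by zdb, multiplied by the number of entries, gives a rough idea of how much RAM the DDT wants; if that figure approaches your ARC size, reads will keep hitting the on-disk tables.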

In my case, I had about 2-3TB of deduped data, with 24GB RAM. There was no 
shortage of RAM and I could not confirm that the ARC was full, but somehow the 
pool was placing a heavy read load on only one or two disks (all others nearly 
idle) -- apparently many small reads.

I resolved my issue by copying the data to a newly created filesystem in the 
same pool -- luckily there was enough space available -- and then removing the 
'deduped' filesystems.
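For reference, that workaround can be sketched roughly as follows (pool and dataset names are made up; zfs send/recv would work equally well as the copy step):

```shell
# Hypothetical names: pool "tank", deduped dataset "tank/data",
# replacement dataset "tank/data-new".
zfs create -o dedup=off tank/data-new       # new filesystem with dedup off
rsync -aHAX /tank/data/ /tank/data-new/     # rewrite the data; new blocks are not deduped
zfs destroy -r tank/data                    # free the deduped blocks (the slow step)
zfs rename tank/data-new tank/data          # move the copy into place
```

Destroying the deduped dataset is the slow part, since every freed block has to be looked up in the DDT -- which is exactly the random-read pattern described above.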

That last operation was particularly slow, and at one point I had a spontaneous 
reboot -- the pool was impossible to mount, and, as weird as it sounds, an 
'out of swap space' condition killed the 'zpool list' process.
I let it sit for a few hours, until it cleared itself.

I/O in that pool is back to normal now.

There is something terribly wrong with the dedup code.

Well, if your test data is not valuable, you can just delete it. :)

Daniel

_______________________________________________
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"