> We had a situation where write speeds to a zpool consisting of
> two 7TB RAID5 LUNs came to a crawl.
Sounds like you've hit Bug# 6596237 "Stop looking and start ganging". We ran
into the same problem on our X4500 Thumpers. Write throughput dropped to 200
KB/s. We now keep pool utilization under 90%.
There were 130G left on the zpool. The df -h output from before one of the
file systems was destroyed is in the original post. Some file systems viewed
that as 1% full, others as 94-97% (and some others showed fairly random
numbers), which is another mystery to me as well. Shouldn't all file systems
have shown the same utilization?
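A likely explanation for the mismatched percentages: on ZFS, df computes each
file system's Use% from that file system's own "used" plus the pool free space
it shares with every other file system, so a nearly-empty file system sitting
next to 130G free shows ~1% while one holding terabytes shows 94-97%.
Comparing ZFS's own accounting against df makes this visible; "tank" is a
hypothetical pool name:

  # Space as ZFS accounts for it, per file system, vs. what df reports:
  zfs list -r -o name,used,avail,refer,mountpoint tank
  df -h | grep tank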
On Thu, 7 Aug 2008, Andrey Dmitriev wrote:
I am sure. Nothing but this box ever accessed them. All NFS access was stopped
to the box. The RAID sets are identical (9-drive RAID5). We tested the file
system nearly non-stop for almost two days, and never did I get it to write
above 4 megs (on average it was below 3 megs). The second I freed up space,
write speeds went back to normal.
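For anyone who wants to reproduce that kind of number, a simple
sequential-write test; the path and pool name are hypothetical, and /dev/zero
is only a fair test if compression is off on the dataset:

  # Write ~4 GB sequentially, then check bandwidth from another terminal:
  dd if=/dev/zero of=/tank/ddtest bs=1024k count=4096
  zpool iostat tank 5    # prints pool read/write bandwidth every 5 seconds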
On Thu, 7 Aug 2008, Andrey Dmitriev wrote:
All,
We had a situation where write speeds to a zpool consisting of two 7TB RAID5
LUNs came to a crawl. We have spent a good 100 man-hours trying to
troubleshoot the issue, eliminating HW issues. In the end, when we whacked
about 2TB out of 14, performance went back to normal (300+ megs/sec vs. 3
megs/sec when the pool was nearly full).