On 2012-12-01 at 08:33:31 -0700, Jan Owoc wrote:
Hi,
Sorry, I've been very busy these past few days.
> >> > http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
>
> The commands described on that page do not have direct equivalents in
> zfs. There is currently no way to reduce the number of "top-level
> vdevs" in a pool.
On 2012-12-06 09:35, Albert Shih wrote:
>> 1) add a 5th top-level vdev (eg. another set of 12 disks)
>
> That's not a problem.

That IS a problem if you're going to ultimately remove an enclosure -
once added, you won't be able to remove the extra top-level VDEV from
your ZFS pool.
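For what it's worth, a minimal sketch of why that's one-way (the pool
name "tank", the raidz2 layout, and the vdev label are all hypothetical):

  # Adding a 5th top-level vdev is a single command...
  zpool add tank raidz2 da48 da49 da50 da51 da52 da53 da54 da55 da56 da57 da58 da59

  # ...but there is no inverse. As of this writing, zpool remove only
  # accepts hot spares, cache devices, and log devices, so a top-level
  # raidz vdev cannot be evacuated:
  zpool remove tank raidz2-4   # fails on a top-level raidz vdev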
>> 2) replace the disks with larger ones one-by-one, waiting for a
>> resilver in between
> At present, I do not see async write QoS as being interesting. That
> leaves sync writes and reads as the managed I/O. Unfortunately, with
> HDDs, the variance in response time >> queue management time, so the
> results are less useful than the case with SSDs. Control theory works,
> once a
On Thu, Dec 6, 2012 at 12:35 AM, Albert Shih wrote:
> On 2012-12-01 at 08:33:31 -0700, Jan Owoc wrote:
>
> > 2) replace the disks with larger ones one-by-one, waiting for a
> > resilver in between
>
> This is the point where I don't see how to do it. I currently have 48
> disks, from /dev/da0 -> /dev/da47 (
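In case a concrete example helps, here's a minimal sketch of the
one-disk-at-a-time procedure, assuming a pool named "tank" (hypothetical)
and an in-place swap of each drive:

  zpool set autoexpand=on tank   # let the pool grow once a whole vdev is upgraded
  zpool offline tank da0         # take the first old disk out of service
  # ...physically swap the drive for the larger one...
  zpool replace tank da0         # resilver onto the new disk in the same slot
  zpool status tank              # wait for the resilver to complete before moving on
  # repeat for da1 through da47, one disk at a time

Note that the pool only grows once every disk in a given top-level vdev
has been replaced, so with four 12-disk vdevs you'll see the extra space
arrive in four increments.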
On Dec 6, 2012, at 5:30 AM, Matt Van Mater wrote:
> I'm unclear on the best way to warm data... do you mean to simply `dd
> if=/volumes/myvol/data of=/dev/null`? I have always been under the
> impression that ARC/L2ARC has rate limiting how much data can be added
> to the cache per interval (I can't remember the interval). Is this not
> the case?
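Your memory is right about the throttle. A rough sketch of where to
look, using the FreeBSD sysctl names (other platforms expose the same
l2arc_* tunables differently):

  sysctl vfs.zfs.l2arc_write_max    # bytes the feed thread may write per interval
  sysctl vfs.zfs.l2arc_write_boost  # extra allowance while the cache is still cold
  sysctl vfs.zfs.l2arc_feed_secs    # the interval itself (1 second by default)
  sysctl vfs.zfs.l2arc_noprefetch   # 1 = sequentially prefetched data skips L2ARC

  # Naive warming via sequential read:
  dd if=/volumes/myvol/data of=/dev/null bs=1M

Note that with l2arc_noprefetch at its default of 1, a plain `dd` warms
the ARC but most of what it reads is never written to the L2ARC;
random-read workloads (or setting that tunable to 0, with the usual
caveats) are what actually fill it, and even then only at
l2arc_write_max per interval.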