On Sep 26, 2012, at 4:28 AM, Sašo Kiselkov wrote:
> On 09/26/2012 01:14 PM, Edward Ned Harvey
> (opensolarisisdeadlongliveopensolaris) wrote:
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Jim Klimov
>>>
>>> Got me wondering: how many reads of a block from spinning rust
>>> suffice for it to ultimately get into L2ARC? Just one so it
>>> gets into a recent-read list of the ARC and then ex…
On Sep 26, 2012, at 10:54 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
wrote:
> Here's another one.
>
> Two identical servers are sitting side by side. They could be connected to
> each other via anything (presently using crossover ethernet cable.) And
> obviously they both connect to the regular LAN. You want to serve VM's
> from at least one of them, and even if the VM's aren't…
On Wed, Sep 26, 2012 at 12:54 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) <
opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:
> Here's another one.
>
>
> Two identical servers are sitting side by side. They could be connected
> to each other via anything (presently using crossover ethernet cable.)
> And obviously they both connect to the regular LAN…
On Wed, Sep 26, 2012 at 10:28 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) <
opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:
> When I create a 50G zvol, it gets "volsize" 50G, and it gets "used" and
> "refreservation" 51.6G
>
>
> I have some filesystems already in use, hosting VM's, and I'd like to
> mimic the refreservation setting on the filesystem, as if I were smart
> enough from the beginning to have used the zvol. So my question…
"head units" crash or do weird things, but disks persist. There are a couple of
HA head-unit solutions out there but most of them have their own separate
storage and they effectively just send transaction groups to each other.
The other way is to connect 2 nodes to an external SAS/FC chassis…
If you're willing to try FreeBSD, there's HAST (aka high availability
storage) for this very purpose.
You use hast to create mirror pairs using 1 disk from each box, thus
creating /dev/hast/* nodes. Then you use those to create the zpool on the
'primary' box.
All writes to the pool on the primary…
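In case a concrete sketch helps (resource and pool names below are made
up, and this assumes matching entries for each resource already exist in
/etc/hast.conf on both boxes):

  # on both boxes: initialize each HAST resource and start the daemon
  hastctl create disk0
  hastctl create disk1
  service hastd onestart
  # on the box that will own the pool
  hastctl role primary disk0
  hastctl role primary disk1
  # on the other box
  hastctl role secondary disk0
  hastctl role secondary disk1
  # back on the primary, where /dev/hast/disk0 and /dev/hast/disk1 now exist
  zpool create tank /dev/hast/disk0 /dev/hast/disk1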
Here's another one.
Two identical servers are sitting side by side. They could be connected to
each other via anything (presently using crossover ethernet cable.) And
obviously they both connect to the regular LAN. You want to serve VM's from at
least one of them, and even if the VM's aren't…
When I create a 50G zvol, it gets "volsize" 50G, and it gets "used" and
"refreservation" 51.6G
I have some filesystems already in use, hosting VM's, and I'd like to mimic the
refreservation setting on the filesystem, as if I were smart enough from the
beginning to have used the zvol. So my question…
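If it's any help, a minimal sketch of what that looks like (the dataset
name tank/vm01 is made up; 51.6G just mirrors what the 50G zvol reserved):

  # give the filesystem the same reservation the zvol would have gotten
  zfs set refreservation=51.6G tank/vm01
  # confirm it, and see how much of it is actually being charged
  zfs get refreservation,usedbyrefreservation tank/vm01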
Excellent, thanks to you both. I knew of both those methods and wanted
to make sure I wasn't missing something!
On Wed, Sep 26, 2012 at 11:21 AM, Dan Swartzendruber wrote:
>
> On 9/26/2012 11:18 AM, Matt Van Mater wrote:
>
> If the added device is slower, you will experience a slight drop in
> per-op performance, however, if your working set needs another SSD,
> overall it might improve your throughput (as the cache hit ratio will
> increase).
On 9/26/2012 11:18 AM, Matt Van Mater wrote:
If the added device is slower, you will experience a slight drop in
per-op performance, however, if your working set needs another SSD,
overall it might improve your throughput (as the cache hit ratio will
increase).
Thanks for your fast reply! I think I know the answer to this question…
On 09/26/2012 05:18 PM, Matt Van Mater wrote:
>>
>> If the added device is slower, you will experience a slight drop in
>> per-op performance, however, if your working set needs another SSD,
>> overall it might improve your throughput (as the cache hit ratio will
>> increase).
>>
>
> Thanks for your fast reply! I think I know the answer to this question…
>
> If the added device is slower, you will experience a slight drop in
> per-op performance, however, if your working set needs another SSD,
> overall it might improve your throughput (as the cache hit ratio will
> increase).
>
Thanks for your fast reply! I think I know the answer to this question…
On 09/26/2012 05:08 PM, Matt Van Mater wrote:
> I've looked on the mailing list (the evil tuning wikis are down) and
> haven't seen a reference to this seemingly simple question...
>
> I have two OCZ Vertex 4 SSDs acting as L2ARC. I have a spare Crucial SSD
> (about 1.5 years old) that isn't getting much use and I'm curious about
> adding it to the pool…
I've looked on the mailing list (the evil tuning wikis are down) and
haven't seen a reference to this seemingly simple question...
I have two OCZ Vertex 4 SSDs acting as L2ARC. I have a spare Crucial SSD
(about 1.5 years old) that isn't getting much use and I'm curious about
adding it to the pool…
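For what it's worth, adding it is a one-liner and is reversible; something
like the following, with a made-up pool and device name:

  # add the spare SSD as another cache (L2ARC) device
  zpool add tank cache c5t2d0
  # cache devices can be removed again without harming the pool
  zpool remove tank c5t2d0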
On 09/26/2012 01:14 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Jim Klimov
>>
>> Got me wondering: how many reads of a block from spinning rust
>> suffice for it to ultimately get into L2ARC? Just one so it
>> gets into a recent-read list of the ARC and then ex…
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> Got me wondering: how many reads of a block from spinning rust
> suffice for it to ultimately get into L2ARC? Just one so it
> gets into a recent-read list of the ARC and then ex…
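If it helps, one way to watch this empirically on illumos rather than
guessing, using the arcstats kstats (this only observes the L2ARC feed,
it doesn't settle the eviction-policy question):

  # L2ARC bytes written so far, plus hit/miss counters; re-read a test
  # file a few times and see when these move
  kstat -p zfs:0:arcstats:l2_size
  kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses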