> On 01 Aug 2016, at 19:30, Michelle Sullivan wrote:
>
> There are reasons for using either…
Indeed, but my decision was to run ZFS. And getting an HBA in some
configurations can be difficult because vendors insist on using
RAID adapters. After all, that’s what most of their customers demand.
If anyone is interested: as Michelle Sullivan just mentioned, one problem I
found when looking for an HBA is that they are not so easy to find. Scouring
the internet for a backup HBA, I came across these -
http://www.avagotech.com/products/server-storage/host-bus-adapters/#tab-12Gb1
Can only speak f
> On 01 Aug 2016, at 15:12, O. Hartmann wrote:
>
> First, thanks for responding so quickly.
>
>> - The third option is to make the driver expose the SAS devices as an HBA
>> would, so that they are visible to the CAM layer, and the disks are handled
>> by the stock “da” driver, which is the ide
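When a driver works that way, the result can be checked directly from the
CAM layer. A minimal sketch of how to verify it, assuming the drives have
attached as “da” devices (the name da0 below is only illustrative):

  # List every peripheral known to CAM; plain disks attach to the "da" driver.
  camcontrol devlist

  # Query inquiry data for one of them (da0 is an assumed name).
  camcontrol inquiry da0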
On Mon, 1 Aug 2016 11:48:30 +0200
Borja Marcos wrote:
Hello.
First, thanks for responding so quickly.
> > On 01 Aug 2016, at 08:45, O. Hartmann wrote:
> >
> > On Wed, 22 Jun 2016 08:58:08 +0200
> > Borja Marcos wrote:
> >
> >> There is an option you can use (I do it all the time!) to make
> On 01 Aug 2016, at 08:45, O. Hartmann wrote:
>
> On Wed, 22 Jun 2016 08:58:08 +0200
> Borja Marcos wrote:
>
>> There is an option you can use (I do it all the time!) to make the card
>> behave as a plain HBA so that the disks are handled by the “da” driver.
>>
>> Add this to /boot/loader.conf
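For LSI/Avago MegaRAID cards that the mfi(4) driver would normally claim, the
tunable usually meant here (an assumption, since the quoted line above is
truncated) is the one that hands the controller to mrsas(4), which attaches
through CAM so that the controller's drives are handled by the stock “da”
driver. A minimal sketch:

  # /boot/loader.conf
  # Assumes a MegaRAID-class controller normally claimed by mfi(4).
  # Let mrsas(4) claim it instead, so its volumes and JBOD drives show
  # up to CAM as da devices (takes effect after a reboot).
  hw.mfi.mrsas_enable="1"

After rebooting, "camcontrol devlist" should show the drives on the da
driver rather than as mfid volumes.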
On Wed, 22 Jun 2016 08:58:08 +0200
Borja Marcos wrote:
> > On 22 Jun 2016, at 04:08, Jason Zhang wrote:
> >
> > Mark,
> >
> > Thanks
> >
> > We have the same RAID settings on both FreeBSD and CentOS, including the
> > cache settings. In FreeBSD, I enabled the write cache but the performance is the
> >
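For what it is worth, on FreeBSD the cache policy of a volume on an
mfi(4)-managed controller can be inspected and changed with mfiutil(8).
A minimal sketch, assuming the first logical volume is mfid0 (an assumed
name):

  # Show the current cache policy for the volume.
  mfiutil cache mfid0

  # Enable caching on the volume; actual write-back behaviour still
  # depends on the controller policy and the BBU state.
  mfiutil cache mfid0 enable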