Re: ZFS...

2019-05-09 Thread Borja Marcos via freebsd-stable
> On 9 May 2019, at 00:55, Michelle Sullivan wrote:
>
> This is true, but I am of the thought, in alignment with the ZFS devs, that this
> might not be a good idea... if ZFS can’t work it out already, the best thing
> to do will probably be to get everything off it and reformat…

That’s true, I

Re: ZFS...

2019-05-07 Thread Borja Marcos via freebsd-stable
> On 8 May 2019, at 05:09, Walter Parker wrote:
>
> Would a disk rescue program for ZFS be a good idea? Sure. Should the lack
> of a disk recovery program stop you from using ZFS? No. If you think so, I
> suggest that you have your data integrity priorities in the wrong order
> (focusing on small,

Re: ZFS...

2019-05-03 Thread Borja Marcos via freebsd-stable
> On 3 May 2019, at 11:55, Pete French wrote:
>
> On 03/05/2019 08:09, Borja Marcos via freebsd-stable wrote:
>
>> The right way to use disks is to give ZFS access to the plain CAM devices,
>> not through some so-called JBOD on a RAID
>> controller
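The advice quoted above (plain CAM devices, not a RAID controller's JBOD emulation) can be checked from the shell. A minimal sketch, assuming a FreeBSD host; the guards let it no-op harmlessly elsewhere, and no device names are taken from the thread:

```shell
#!/bin/sh
# Sketch: confirm ZFS vdevs reference plain CAM device nodes (da*/ada*),
# not RAID-controller pseudo-disks such as mfid*.
# camcontrol(8) lists the devices the CAM layer sees directly.
command -v camcontrol >/dev/null && camcontrol devlist
# zpool status shows the device names each vdev was built on;
# healthy setups per the advice above show da*/ada* here.
command -v zpool >/dev/null && zpool status
echo check-complete
```

Comparing the two listings shows at a glance whether the pool sits on raw CAM devices or on something the controller firmware synthesized.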

Re: ZFS...

2019-05-03 Thread Borja Marcos via freebsd-stable
> On 1 May 2019, at 04:26, Michelle Sullivan wrote:
>
> mfid8  ONLINE  0  0  0

Anyway, I think this is a mistake (mfid). I know, HBA makers have been insisting on putting their firmware in the middle, which is a bad thing. The right way to use disks is to give ZFS ac
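On LSI/Avago MegaRAID-class controllers, FreeBSD can attach the newer mrsas(4) driver instead of mfi(4), which exposes the disks as plain da(4) CAM devices rather than mfid pseudo-disks. A hedged sketch of the loader tunable involved; check mrsas(4) for your specific controller before relying on it:

```
# /boot/loader.conf
# Prefer mrsas(4) over mfi(4) so supported controllers expose disks
# as da* CAM devices that ZFS can use directly (reboot required).
hw.mfi.mrsas_enable="1"
```

This only switches the driver; it does not convert an existing pool, so a pool built on mfid devices would still need to be re-imported against the new device names.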

Re: ZFS...

2019-04-30 Thread Borja Marcos via freebsd-stable
> On 30 Apr 2019, at 15:30, Michelle Sullivan wrote:
>
>> I'm sorry, but that may well be what nailed you.
>>
>> ECC is not just about the random cosmic ray. It also saves your bacon
>> when there are power glitches.
>
> No. Sorry no. If the data is only half to disk, ECC isn't going to sav

Crazy default kern.maxusers?

2019-03-28 Thread Borja Marcos via freebsd-stable
Hi :)

I am setting up an Elasticsearch cluster using FreeBSD 12-STABLE. The servers have 64 GB of memory and I am running ZFS. I was puzzled when, despite having limited vfs.zfs.arc_max to 32 GB and assigned a 16 GB heap (locked) to Elasticsearch, and with around 10 GB of free memory, I saw the
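For reference, the ARC cap described above is a loader tunable. A minimal sketch matching the figures in the message (32 GiB cap; the Elasticsearch heap is configured separately in its own JVM options and is not shown here):

```
# /boot/loader.conf
# Limit the ZFS ARC to 32 GiB (value is in bytes: 32 * 1024^3),
# leaving headroom for the 16 GB locked JVM heap on a 64 GB machine.
vfs.zfs.arc_max="34359738368"
```

On FreeBSD 12 the same value can also be adjusted at runtime via sysctl, though a loader.conf setting ensures the cap applies from boot.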

Problem with Emulex OCE 10 GbE cards

2019-03-20 Thread Borja Marcos via freebsd-stable
Hello,

I am trying to use several Emulex OneConnect cards and the driver fails to attach them.

oce0: mem 0x92c0-0x92c03fff,0x92bc-0x92bd,0x92be-0x92bf irq 38 at device 0.7 on pci2
oce0: oce_mq_create failed - cmd status: 2
oce0: MQ create failed
device_attach: oce0 atta
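When a driver aborts during device_attach like this, a usual first step is to confirm how the card enumerates on the PCI bus before reporting further. A diagnostic sketch, FreeBSD-specific and guarded so it no-ops on other systems:

```shell
#!/bin/sh
# Sketch: gather attach-failure context for a NIC whose driver
# (here oce(4)) fails during device_attach.
# pciconf(8) -lv lists PCI devices with vendor/device ID strings,
# confirming which chip revision the oce driver is probing.
command -v pciconf >/dev/null && pciconf -lv
# The kernel message buffer holds the probe/attach lines quoted above.
command -v dmesg >/dev/null && dmesg | grep -i oce
echo diag-complete
```

The vendor/device IDs from pciconf are usually what driver maintainers ask for first, since attach failures like "oce_mq_create failed" can be firmware-revision specific.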