On Sun, Apr 28, 2019 at 4:03 PM Karl Denninger wrote:
> On 4/20/2019 15:56, Steven Hartland wrote:
> > Thanks for extra info, the next question would be: have you eliminated
> > the possibility that corruption already exists before the disk is removed?
> >
> > Would be interesting to add a zpool scrub to confirm this isn't the
> > case before the disk removal is attempted.
On 4/20/2019 15:56, Steven Hartland wrote:
> Thanks for extra info, the next question would be: have you eliminated
> the possibility that corruption already exists before the disk is removed?
>
> Would be interesting to add a zpool scrub to confirm this isn't the
> case before the disk removal is attempted.
>
> Regards
> Steve
No; I can, but of course that's another ~8 hour (overnight) delay
between swaps.
That's not a bad idea, however.
On 4/20/2019 15:56, Steven Hartland wrote:
> Thanks for extra info, the next question would be: have you eliminated
> the possibility that corruption already exists before the disk is removed?
>
> Would be interesting to add a zpool scrub to confirm this isn't the
> case before the disk removal is attempted.
Thanks for extra info, the next question would be: have you eliminated the
possibility that corruption already exists before the disk is removed?
Would be interesting to add a zpool scrub to confirm this isn't the case
before the disk removal is attempted.
Regards
Steve
On 20/04/2019 18:35, Karl Denninger wrote:
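
A minimal sketch of the check being suggested here, assuming a pool named
"backup" and a member disk "da9" (both names hypothetical):

    # Make sure the pool verifies clean before any disk is touched.
    zpool scrub backup
    # The scrub runs in the background; check progress and result with:
    zpool status -v backup
    # Only once the scrub completes with no errors, proceed with the
    # controlled removal:
    zpool offline backup da9

On a pool of this size the scrub itself is the ~8-hour overnight step
mentioned above.
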
On 4/20/2019 10:50, Steven Hartland wrote:
> Have you eliminated geli as a possible source?
No; I could conceivably do so by re-creating another backup volume set
without geli-encrypting the drives, but I do not have an extra set of
drives of the capacity required lying around to do that. I would
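
A rough sketch of the comparison being described, i.e. building one
geli-backed and one plain test member so geli can be ruled in or out
(provider names and geli parameters are illustrative, not the actual
settings in use here):

    # geli-backed variant (encryption parameters illustrative only):
    geli init -e AES-XTS -l 256 -s 4096 /dev/da9p1
    geli attach /dev/da9p1
    zpool create testenc /dev/da9p1.eli

    # plain variant, identical apart from the missing geli layer:
    zpool create testplain /dev/da10p1

If the problem only ever shows up on the .eli-backed set, geli moves up the
suspect list; if both sets show it, geli is effectively eliminated.
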
Have you eliminated geli as a possible source?
I've just set up an old server which has an LSI 2008 running an old FW
(11.0), so I was going to have a go at reproducing this.
Apart from the disconnect steps below, is there anything else needed, e.g.
a read/write workload during the disconnect?
mps0: p
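
If a workload during the disconnect turns out to matter, something as simple
as a write loop on the pool under test should do (pool and path names
hypothetical):

    # Keep dirty data flowing to the pool while the disk is pulled.
    while :; do
        dd if=/dev/urandom of=/testpool/junk bs=1m count=64
        sync
    done
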
On 4/13/2019 06:00, Karl Denninger wrote:
> On 4/11/2019 13:57, Karl Denninger wrote:
>> On 4/11/2019 13:52, Zaphod Beeblebrox wrote:
>>> On Wed, Apr 10, 2019 at 10:41 AM Karl Denninger wrote:
>>>
>>>
>>>> In this specific case the adapter in question is...
>>>> mps0: port 0xc000-0xc0ff mem
On 4/11/2019 13:57, Karl Denninger wrote:
> On 4/11/2019 13:52, Zaphod Beeblebrox wrote:
>> On Wed, Apr 10, 2019 at 10:41 AM Karl Denninger wrote:
>>
>>
>>> In this specific case the adapter in question is...
>>>
>>> mps0: port 0xc000-0xc0ff mem
>>> 0xfbb3c000-0xfbb3,0xfbb4-0xfbb7 irq 30 at device 0.0 on pci3
On 4/11/2019 13:52, Zaphod Beeblebrox wrote:
> On Wed, Apr 10, 2019 at 10:41 AM Karl Denninger wrote:
>
>
>> In this specific case the adapter in question is...
>>
>> mps0: port 0xc000-0xc0ff mem
>> 0xfbb3c000-0xfbb3,0xfbb4-0xfbb7 irq 30 at device 0.0 on pci3
>> mps0: Firmware: 20.00.07.00, Driver: 21.02.00.00-fbsd
On Wed, Apr 10, 2019 at 10:41 AM Karl Denninger wrote:
> In this specific case the adapter in question is...
>
> mps0: port 0xc000-0xc0ff mem
> 0xfbb3c000-0xfbb3,0xfbb4-0xfbb7 irq 30 at device 0.0 on pci3
> mps0: Firmware: 20.00.07.00, Driver: 21.02.00.00-fbsd
> mps0: IOCCapabilities
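
For anyone comparing controller setups, the firmware/driver pairing quoted
above can be read off a running system along these lines (the dev.mps.0.*
sysctl names are from memory; adjust the unit number to match):

    # Probe lines from the boot messages:
    dmesg | grep '^mps0'
    # Version information exported by the mps(4) driver:
    sysctl dev.mps.0.firmware_version dev.mps.0.driver_version
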
On 4/10/2019 08:45, Andriy Gapon wrote:
> On 10/04/2019 04:09, Karl Denninger wrote:
>> Specifically, I *explicitly* OFFLINE the disk in question, which is a
>> controlled operation and *should* result in a cache flush out of the ZFS
>> code into the drive before it is OFFLINE'd.
>>
>> This should
On 10/04/2019 04:09, Karl Denninger wrote:
> Specifically, I *explicitly* OFFLINE the disk in question, which is a
> controlled operation and *should* result in a cache flush out of the ZFS
> code into the drive before it is OFFLINE'd.
>
> This should result in the "last written" TXG that the rema
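
For reference, a condensed sketch of the swap cycle being described, assuming
pool "backup" and member "da9" (names hypothetical; a geli-backed member
would be the corresponding .eli provider):

    # Controlled detach: ZFS quiesces the vdev and marks it offline.
    zpool offline backup da9
    # ...physically swap drives here, later re-insert this one...
    # Controlled re-attach: only the TXGs written while the disk was
    # away should need to be resilvered.
    zpool online backup da9
    zpool status backup
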
On 4/9/2019 16:27, Zaphod Beeblebrox wrote:
> I have a "Ghetto" home RAID array. It's built on compromises and makes use
> of RAID-Z2 to survive. It consists of two plexes of 8x 4T units of
> "spinning rust". It's been upgraded and upgraded. It started as 8x 2T,
> then 8x 2T + 8x 4T then the current 16x 4T.
I have a "Ghetto" home RAID array. It's built on compromises and makes use
of RAID-Z2 to survive. It consists of two plexes of 8x 4T units of
"spinning rust". It's been upgraded and upgraded. It started as 8x 2T,
then 8x 2T + 8x 4T then the current 16x 4T. The first 8 disks are
connected to mo
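
For concreteness, a pool laid out the way this one is described (two 8-disk
RAID-Z2 vdevs) would look roughly like this; pool and device names are
hypothetical:

    zpool create tank \
        raidz2 da0 da1 da2  da3  da4  da5  da6  da7 \
        raidz2 da8 da9 da10 da11 da12 da13 da14 da15

The 2T-to-4T upgrade path described above is the usual one: zpool replace
each member of a vdev in turn, and the vdev grows once every member is the
larger size (with autoexpand=on).
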
On 4/9/2019 15:04, Andriy Gapon wrote:
> On 09/04/2019 22:01, Karl Denninger wrote:
>> the resilver JUST COMPLETED with no errors, which means the ENTIRE DISK'S
>> IN USE AREA was examined and compared, and blocks not on the "new member"
>> or changed were copied over.
> I think that that's not entirely correct.
On 09/04/2019 22:01, Karl Denninger wrote:
> the resilver JUST COMPLETED with no errors, which means the ENTIRE DISK'S
> IN USE AREA was examined and compared, and blocks not on the "new member"
> or changed were copied over.
I think that that's not entirely correct.
ZFS maintains something called DTL, a dirty time log.
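
In practical terms: after the offline'd disk comes back, the automatic
resilver only covers the ranges recorded in the DTL, so a full end-to-end
verification of the returned member still requires an explicit scrub, e.g.
(pool name hypothetical):

    # The resilver triggered by 'zpool online' touches only the DTL
    # ranges; to re-read and checksum-verify every allocated block,
    # run a scrub as well:
    zpool scrub backup
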
I've run into something often -- and repeatably -- enough since updating
to 12-STABLE that I suspect there may be a code problem lurking in the
ZFS stack or in the driver and firmware compatibility with various HBAs
based on the LSI/Avago devices.
The scenario is this -- I have data sets that are