> If the device on the secondary node does not support DELETE, but the
> device on the primary does, HAST will report to ZFS that DELETE
> succeeded (although it failed on the secondary), and ZFS will not
> disable TRIM. Pete, isn't this your case?
Afraid not, both machines are running normal "sp
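The behaviour described in that quote can be pictured with a short sketch. The following is illustrative C only, not the actual hastd source; the function and parameter names are invented. It merely shows the effect of propagating only the local result to the consumer while a remote DELETE failure is logged and dropped, so ZFS never sees an error and keeps TRIM enabled.

/*
 * Illustrative sketch, not hastd code.  The consumer (ZFS) sees only
 * the local component's result; a failure on the secondary is logged
 * as "Remote request failed" and otherwise ignored.
 */
static int
bio_delete_result(int local_error, int remote_error)
{
        /* Remote errors are only logged, never returned to ZFS. */
        (void)remote_error;

        /* 0 here means success from ZFS's point of view. */
        return (local_error);
}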
On Fri, Oct 11, 2013 at 11:27:36AM +0100, Pete French wrote:
> > If the device on the secondary node does not support DELETE, but the
> > device on the primary does, HAST will report to ZFS that DELETE
> > succeeded (although it failed on the secondary), and ZFS will not
> > disable TRIM. Pete, isn't this your case?
On Fri, Oct 11, 2013 at 01:42:39PM +0300, Mikolaj Golub wrote:
> You showed only "Remote request failed" errors from your logs. Do you
> have "Local request failed" errors too?
You should also see them in "local errors" statistics from `hastctl
list' output.
--
Mikolaj Golub
> You showed only "Remote request failed" errors from your logs. Do you
> have "Local request failed" errors too?
Yes, I have both - here's a fragment of the log:
Oct 9 11:06:47 serpentine-active hastd[1502]: [serp0] (primary) Remote request failed (Operation not supported): DELETE(8594203648, 2
> You should also see them in "local errors" statistics from `hastctl
> list' output.
Unfortunately I think those counters were reset when the machine
panicked - they are all showing as zero.
-pete.
- Original Message -
From: "Pete French"
If the device on the secondary node does not support DELETE, but the
device on the primary does, HAST will report to ZFS that DELETE
succeeded (although it failed on the secondary), and ZFS will not
disable TRIM. Pete, isn't this your case?
> What do you see from:
> sysctl kstat.zfs.misc.zio_trim
>
> You should be seeing non-zero unsupported and zero failed. If this is
> not the case it's likely HAST isn't setting bio_error to ENOTSUP.
Again, unfortunately, the systems are now running with TRIM disabled
so these statistics have all be
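For context on the distinction Steven is drawing: the kstat.zfs.misc.zio_trim node carries separate "unsupported" and "failed" counters, and a DELETE that completes with ENOTSUP should land in the former. The fragment below is only a sketch of that split, not the ZFS source; the function name and counter pointers are invented.

#include <errno.h>
#include <stdint.h>

/*
 * Illustrative sketch of the accounting described above: ENOTSUP from
 * the provider means "this device cannot TRIM" and is counted as
 * unsupported (the cue for ZFS to stop issuing TRIM there), while any
 * other error counts as a real failure.
 */
static void
account_trim_result(int error, uint64_t *unsupported, uint64_t *failed)
{
        if (error == ENOTSUP)
                (*unsupported)++;
        else if (error != 0)
                (*failed)++;
}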
Hi,
It seems that the ZFS messages no longer match entries in devd.conf, e.g.:
notify 10 {
        match "system" "ZFS";
        match "type" "vdev";
        action "logger -p kern.err 'ZFS: vdev failure, zpool=$pool type=$type'";
};
Doesn't match anything because messages now look like..
On 12/10/2013, at 11:21, Daniel O'Connor wrote:
> Doesn't match anything because messages now look like..
> Processing event '!system=ZFS subsystem=ZFS type=resource.fs.zfs.removed
> version=0 class=resource.fs.zfs.removed pool_guid=469710819
> vdev_guid=215223839'
>
> Does anyone have an updated devd.conf?
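Going by the event text quoted above, an entry along these lines might match the new-style messages. It is untested and only a sketch: the notify priority, log level, and message wording are arbitrary, and the $pool_guid/$vdev_guid substitutions simply reuse the fields shown in the quoted event.

notify 10 {
        match "system" "ZFS";
        match "type" "resource.fs.zfs.removed";
        action "logger -p kern.err 'ZFS: vdev removed, pool_guid=$pool_guid vdev_guid=$vdev_guid'";
};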