t the current
> list of jail id numbers, this method degrades.)
>
> Low priority for 'periodic daily' jobs might not help much, due to disk
> saturation, CPU cache thrashing, etc.
> -Walter
>
> On Thu, 16 Feb 2017, Dustin Wenz wrote:
>
>> The biggest offender
script it is and how much jitter you want. I
> am coincidentally changing how periodic manages jitter right now.
>
> -Alan
>
> On Thu, Feb 16, 2017 at 2:47 PM, Dustin Wenz wrote:
>> I have a number of servers with roughly 60 jails running on each of them. On
>> these host
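The jitter idea mentioned above can be approximated today with a small wrapper
around the job; a minimal sketch in sh (the one-hour window and the awk-based
random number are my assumptions, not the change being worked on):

```shell
#!/bin/sh
# Sketch: delay a periodic run by a random 0..MAX seconds so that
# dozens of jails do not all fire the same script at the same instant.
MAX=3600  # spread start times over one hour (assumed window)
jitter=$(awk -v max="$MAX" 'BEGIN { srand(); print int(rand() * max) }')
echo "delaying periodic run by ${jitter}s"
# sleep "$jitter" && periodic daily   # uncomment in the real cron hook
```

Each jail gets its own random delay, so the load spreads out instead of
spiking at 3:01.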
I have a number of servers with roughly 60 jails running on each of them. On
these hosts, I've had to disable the periodic security scans due to the overly
high disk load when they run (the scans are largely redundant inside jails
anyway). However, I still have an issue at 3:01am where the CPU is consumed by dozens
Are you by chance using the ARC patch from this PR?
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594
Do you have a vfs.zfs.dynamic_write_buffer tunable defined, if so, what is it
set to?
- .Dustin
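A quick way to answer the tunable question on any given host (a sketch; the
sysctl name comes from the patch in the PR above, so it only exists on patched
kernels):

```shell
#!/bin/sh
# Print the tunable if the running kernel exposes it, otherwise say so.
if sysctl -n vfs.zfs.dynamic_write_buffer >/dev/null 2>&1; then
    sysctl vfs.zfs.dynamic_write_buffer
else
    echo "vfs.zfs.dynamic_write_buffer: not present (unpatched kernel?)"
fi
```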
> On Dec 9, 2015, at 3:51 AM, Alexander Leidinger
> wrote:
>
> On Wed, 09 Dec
PR filed:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=205163
Please let me know if there is any more useful information I can include.
- .Dustin Wenz
> On Dec 9, 2015, at 4:24 AM, Jan Bramkamp wrote:
>
>
>
> On 09/12/15 01:04, Michael B. Eichorn wrote:
scheduled processes start, because all jails execute the same
scripts at the same time.
I've been able to alleviate this problem by disabling the security scans within
the jails while leaving them enabled on the root host. If this is not a known issue
in FreeBSD 10.2, I'll file a PR
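For reference, the knob that turns off the daily security run inside each jail
lives in periodic.conf(5); a jail-side fragment might look like:

```shell
# /etc/periodic.conf inside each jail -- disable the daily security scan
# (the host's own scan already covers the same filesystems):
daily_status_security_enable="NO"
```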
As far as I know, pcpu has never worked for limiting beyond a single core. I
submitted a patch for kern/kern_racct.c that should fix it:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=189870
Please update the bug if it does or doesn't resolve the issue. Maybe that will
prompt a committer to
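For anyone wanting to try the limit in the meantime, rctl rules of this shape
are what the kern_racct.c fix is meant to make effective; the jail name "www"
and the 200% figure are illustrative, not from the PR:

```shell
# /etc/rctl.conf (hypothetical example): cap the jail named "www" at
# two full cores of CPU time (pcpu is expressed in percent, 100 = 1 core)
jail:www:pcpu:deny=200
```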
I'm seeing a condition on FreeBSD 9.1 (built October 24th) where I/O seems to
hang on any local zpools after several hours of hosting a large-ish Postgres
database. The database occupies about 14TB of a 38TB zpool with a single SSD
ZIL. The OS is on a ZFS boot disk. The system also has 24GB of p
I am having trouble with MPS becoming unresponsive in certain disk failure
conditions. So far, I've experienced this with 3TB Hitachi disks (0S03208) and
3TB Seagate Barracuda disks (ST3000DM001, firmware CC9D) while using the MPS
driver with an LSI SAS2116 controller on FreeBSD 8.2-STABLE.
In
Thanks; It's good to know that it's at least possible to make this work in some
instances.
Unfortunately, our SAS2008 controller is integrated with the logic board (a
SuperMicro X8DT6) connected to a SAS-113TQ backplane. It's not so much of an
expander; there are two breakout cables that go fro
I'm running a build of the mps(4) driver on FreeBSD 8.2 with an LSI SAS2008 bus
adapter. The code I'm using was current as of the last commit on 2011-02-25,
and is built for amd64.
I can't seem to get any better performance than about 250MB/s writes through
the controller. I'm testing with a z
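As a baseline for that kind of throughput testing, a plain sequential write
with dd is a reasonable first check (a sketch; the target path and size are
assumptions, and compression on a ZFS dataset will inflate the numbers for
/dev/zero input):

```shell
#!/bin/sh
# Sequential-write smoke test: write COUNT 1 MiB blocks to the pool
# and report the summary line dd prints on completion.
TARGET=${TARGET:-/tank/testfile}   # a file on the pool under test (assumed path)
COUNT=${COUNT:-1024}               # 1024 x 1 MiB = 1 GiB by default
dd if=/dev/zero of="$TARGET" bs=1048576 count="$COUNT" 2>&1 | tail -1
rm -f "$TARGET"
```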