On 3/13/2015 10:23 AM, Ryan Stone wrote:
> It's almost 5% on the 32 core machine:
This likely will harm package building.
--
Regards,
Bryan Drewery
On 03/18/2015 12:58, Mateusz Guzik wrote:
> On Wed, Mar 18, 2015 at 10:17:22AM -0400, John Baldwin wrote:
>> On Friday, March 13, 2015 06:32:03 AM Mateusz Guzik wrote:
>>> On Thu, Mar 12, 2015 at 06:13:00PM -0500, Alan Cox wrote:
>>>> Below are partial results from a profile of a parallel (-j7) "buildworld" on
>>>> a 6-core machine that I did after the introduction of pmap_advise, so this
>>>> is not a new profile.
[snip]
Hihi!
Do you have a shell script or something that I can run on the POWER8
box to see if Nathan's pmap locking changes eliminate at least that
global pmap lock we're seeing on amd64?
-a
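(A minimal sketch of such a script, assuming a kernel built with
"options LOCK_PROFILING"; the sysctl knobs are the ones documented in
LOCK_PROFILING(9), and the -j level is arbitrary:)

    #!/bin/sh
    # Reset and enable kernel lock profiling, run the parallel build,
    # then dump the per-lock-point contention statistics.
    sysctl debug.lock.prof.reset=1
    sysctl debug.lock.prof.enable=1
    ( cd /usr/src && make -j8 buildworld )
    sysctl debug.lock.prof.enable=0
    sysctl debug.lock.prof.stats

Sorting the output by the wait_total column before and after the pmap
changes should show whether the global pmap lock drops out of the top
entries.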
On Wed, Mar 18, 2015 at 10:17:22AM -0400, John Baldwin wrote:
> On Friday, March 13, 2015 06:32:03 AM Mateusz Guzik wrote:
> > On Thu, Mar 12, 2015 at 06:13:00PM -0500, Alan Cox wrote:
> > > Below are partial results from a profile of a parallel (-j7) "buildworld" on
> > > a 6-core machine that I did after the introduction of pmap_advise, so this
> > > is not a new profile.
On Friday, March 13, 2015 06:32:03 AM Mateusz Guzik wrote:
> On Thu, Mar 12, 2015 at 06:13:00PM -0500, Alan Cox wrote:
> > Below are partial results from a profile of a parallel (-j7) "buildworld" on
> > a 6-core machine that I did after the introduction of pmap_advise, so this
> > is not a new profile.
On Fri, Mar 13, 2015 at 11:23:06AM -0400, Ryan Stone wrote:
> On Thu, Mar 12, 2015 at 1:36 PM, Mateusz Guzik wrote:
>
> > Workloads like buildworld (i.e. a lot of forks + execs) run
> > into very severe contention in the VM subsystem, which is orders of
> > magnitude bigger than anything else.
> >
[snip]
Someone emailed me privately - no priority tracking/lending is
happening for readers. :(
-a
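(That is consistent with how the lock word works: a write-held rwlock
records the owning thread, so a blocked waiter knows exactly whom to
lend priority to, while a read-held rwlock only stores a reader count.
A conceptual, compilable sketch of the difference - all names here are
hypothetical, this is not the real sys/kern/kern_rwlock.c:)

    #include <stdint.h>
    #include <stdio.h>

    struct thread { int td_priority; };     /* lower value = higher priority */

    #define RW_READERS_FLAG 0x1UL           /* low bit set => read-held */

    /* Lend the waiter's (higher) priority to the lock owner. */
    static void
    lend_priority(struct thread *owner, int prio)
    {
    	if (prio < owner->td_priority)
    		owner->td_priority = prio;
    }

    /* Called (conceptually) when 'waiter' blocks on a lock word 'state'. */
    static void
    blocked_on_lock(uintptr_t state, struct thread *waiter)
    {
    	if ((state & RW_READERS_FLAG) == 0) {
    		/* Write-held: the word is a pointer to the single
    		 * owner, so the turnstile can boost that thread. */
    		lend_priority((struct thread *)state, waiter->td_priority);
    	} else {
    		/* Read-held: the word only encodes a count; the readers
    		 * are anonymous, so there is nobody to boost and a
    		 * high-priority writer simply waits. */
    		printf("%lu anonymous reader(s); no priority lending\n",
    		    (unsigned long)(state >> 1));
    	}
    }

    int
    main(void)
    {
    	struct thread rt_writer = { .td_priority = 1 };

    	/* Three readers hold the lock; the real-time writer blocks. */
    	blocked_on_lock((3UL << 1) | RW_READERS_FLAG, &rt_writer);
    	return (0);
    }

This is also why rmlock's rm_priotracker matters: it gives each reader
an identity that priority propagation can act on.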
Again, why isn't it lending priority to the lock-owning thread when a
higher-priority thread blocks on the lock? I thought that's what's
supposed to happen.
-adrian
On Thu, Mar 12, 2015 at 1:36 PM, Mateusz Guzik wrote:
> Workloads like buildworld (i.e. a lot of forks + execs) run
> into very severe contention in the VM subsystem, which is orders of
> magnitude bigger than anything else.
>
> As such, your result seems quite suspicious.
>
You're right, I did me
On Thu, Mar 12, 2015 at 06:13:00PM -0500, Alan Cox wrote:
> Below are partial results from a profile of a parallel (-j7) "buildworld" on
> a 6-core machine that I did after the introduction of pmap_advise, so this
> is not a new profile. The results are sorted by total waiting time and
> only the top 20 entries are listed.
Below are partial results from a profile of a parallel (-j7) "buildworld" on
a 6-core machine that I did after the introduction of pmap_advise, so this
is not a new profile. The results are sorted by total waiting time and
only the top 20 entries are listed.
max wait_max total wait_total
[snip]
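(For anyone reading the numbers: this is LOCK_PROFILING(9) output,
whose full header is roughly

    max  wait_max  total  wait_total  count  avg  wait_avg  cnt_hold  cnt_lock  name

where "total" is the cumulative hold time for that lock point, "count"
is the number of acquisitions, and "wait_total" is the cumulative time
threads spent blocked trying to acquire it - the column the sort above
is keyed on.)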
On Thu, Mar 12, 2015 at 11:14:42AM -0400, Ryan Stone wrote:
> I've just submitted a patch to Differential[1] for review that converts the
> VFS cache to use an rmlock in place of the current rwlock. My main
> motivation for the change is to fix a priority inversion problem that I saw
> recently.
On Thu, Mar 12, 2015 at 12:37 PM, Adrian Chadd wrote:
> Do you have access to any boxes that have more than 12 cores?
I have a 14-core hyperthreaded machine (so 28 logical cores), but it has no
disk (long story). I could do a build out of a memory disk though.
Also, to ask a stupid question - why wasn't the reader gifted a
temporary priority boost because you were trying to acquire the write
lock?
Also, to ask a stupid question - why wasn't the reader gifted a
temporary priority boost because you were trying to acquire the write
lock?
-adrian
Do you have access to any boxes that have more than 12 cores?
(like 36, 64, 80+ ?)
-adrian
On 12 March 2015 at 08:14, Ryan Stone wrote:
> I've just submitted a patch to Differential[1] for review that converts the
> VFS cache to use an rmlock in place of the current rwlock. My main
> motivation for the change is to fix a priority inversion problem that I saw
> recently.
I've just submitted a patch to Differential[1] for review that converts the
VFS cache to use an rmlock in place of the current rwlock. My main
motivation for the change is to fix a priority inversion problem that I saw
recently. A real-time priority thread attempted to acquire a write lock on
the VFS cache's rwlock while lower-priority threads held it as readers, and
with no priority lending for readers the real-time thread was left blocked.
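(For anyone skimming the thread, the shape of such a conversion with
the rmlock(9) API is roughly the sketch below; the lock name and the
functions around it are made up. The rm_priotracker is what gives
readers an identity for priority propagation, addressing the
inversion, while rm_wlock has to synchronize with readers on every
CPU - the likely reason for the ~5% buildworld regression reported
above on the 32-core machine.)

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/rmlock.h>

    /* Hypothetical cache lock; stands in for the VFS cache's rwlock. */
    static struct rmlock cache_lock;

    static void
    cache_init(void)
    {
    	rm_init(&cache_lock, "cache");
    }

    static void
    cache_lookup_example(void)
    {
    	/* Readers pass a per-acquisition tracker; the read path is a
    	 * cheap per-CPU operation, and the tracker records the reader
    	 * so priority can be propagated to it. */
    	struct rm_priotracker tracker;

    	rm_rlock(&cache_lock, &tracker);
    	/* ... perform the (hypothetical) lookup ... */
    	rm_runlock(&cache_lock, &tracker);
    }

    static void
    cache_update_example(void)
    {
    	/* Writers are expensive: rm_wlock must synchronize with
    	 * readers on all CPUs, so write-heavy paths regress on
    	 * many-core machines. */
    	rm_wlock(&cache_lock);
    	/* ... modify the (hypothetical) cache ... */
    	rm_wunlock(&cache_lock);
    }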