On 29/05/2013 9:41 AM, Ian Lance Taylor wrote:
On Tue, May 28, 2013 at 9:02 PM, Ryan Johnson wrote:
Maybe I misunderstood... there's currently a (very small) cache
(unwind-dw2-fde-dip.c) that lives behind the loader mutex. It contains 8
entries and each entry holds the start and end addresses for one loaded
object [...]

On 05/30/2013 04:30 PM, Ryan Johnson wrote:
> Is there a way for libgcc_s to interpose on dlopen/dlclose if (and only if)
> those are present? If so, the wrappers could increment an atomic version
> counter, which would be plenty accurate for invalidating an object header
> cache, without requiring [...]

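The interposition idea above can be pictured with a short C sketch. All names here (`counting_dlopen`, `dl_generation`, `cache_is_current`) are invented for illustration, and a real libgcc_s interposer would resolve the underlying loader functions via `dlsym(RTLD_NEXT, ...)` rather than calling `dlopen`/`dlclose` directly; this is a minimal sketch of the generation-counter scheme, not the proposed implementation.

```c
/* Hedged sketch: every dlopen/dlclose bumps an atomic generation
   counter, and cached unwind data is trusted only while the
   generation it was built under is unchanged.  Names are invented. */
#include <dlfcn.h>
#include <stdatomic.h>
#include <stddef.h>

static atomic_ulong dl_generation;      /* bumped on every load/unload */

void *counting_dlopen(const char *file, int mode)
{
    void *h = dlopen(file, mode);
    if (h)
        atomic_fetch_add(&dl_generation, 1);
    return h;
}

int counting_dlclose(void *h)
{
    /* Bump before the unmap so no reader trusts a stale cache entry
       that still points into the about-to-vanish object. */
    atomic_fetch_add(&dl_generation, 1);
    return dlclose(h);
}

/* A cached snapshot is usable only if its generation still matches. */
int cache_is_current(unsigned long cached_gen)
{
    return cached_gen == atomic_load(&dl_generation);
}
```

The point of the counter is exactly what the message says: it is "plenty accurate" in the sense that it may invalidate the cache spuriously (any load/unload bumps it), but it can never miss a change.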
On 29/05/2013 4:13 AM, Richard Biener wrote:
On Wed, May 29, 2013 at 2:47 AM, Ian Lance Taylor wrote:
On Mon, May 27, 2013 at 3:20 PM, Ryan Johnson wrote:
I have a large C++ app that throws exceptions to unwind anywhere from 5-20
stack frames when an error prevents the request from being served [...]

On Thu, 2013-05-30 at 11:28 +0200, Jakub Jelinek wrote:
> On Thu, May 30, 2013 at 11:21:08AM +0200, Torvald Riegel wrote:
> > On Tue, 2013-05-28 at 20:30 +0200, Florian Weimer wrote:
> > > On 05/28/2013 08:09 PM, Václav Zeman wrote:
> > >
> > > > If the bottleneck is really in glibc, then you should [...]

On Thu, May 30, 2013 at 11:21:08AM +0200, Torvald Riegel wrote:
> On Tue, 2013-05-28 at 20:30 +0200, Florian Weimer wrote:
> > On 05/28/2013 08:09 PM, Václav Zeman wrote:
> >
> > > If the bottleneck is really in glibc, then you should probably ask them
> > > to fix it. Could the mutex be changed [...]

On Tue, 2013-05-28 at 20:30 +0200, Florian Weimer wrote:
> On 05/28/2013 08:09 PM, Václav Zeman wrote:
>
> > If the bottleneck is really in glibc, then you should probably ask them
> > to fix it. Could the mutex be changed to an rwlock instead?
>
> rwlocks don't eliminate hardware contention, so I doubt they'd be a win
> here. [...]

On Wed, 2013-05-29 at 10:06 -0400, Ryan Johnson wrote:
> On 29/05/2013 9:41 AM, Ian Lance Taylor wrote:
> > On Tue, May 28, 2013 at 9:02 PM, Ryan Johnson wrote:
> >> Maybe I misunderstood... there's currently a (very small) cache
> >> (unwind-dw2-fde-dip.c) that lives behind the loader mutex. [...]

On 29/05/2013 9:41 AM, Ian Lance Taylor wrote:
On Tue, May 28, 2013 at 9:02 PM, Ryan Johnson wrote:
Maybe I misunderstood... there's currently a (very small) cache
(unwind-dw2-fde-dip.c) that lives behind the loader mutex. It contains 8
entries and each entry holds the start and end addresses for one loaded
object [...]

On Tue, May 28, 2013 at 9:16 PM, Ryan Johnson wrote:
> On 29/05/2013 12:01 AM, Ian Lance Taylor wrote:
>>
>> On Tue, May 28, 2013 at 8:50 PM, Ryan Johnson wrote:
>>>
>>> For example, it should be reasonably safe to let __cxa_allocate_exception
>>> call dl_iterate_phdr in order to build a list of object headers [...]

On Tue, May 28, 2013 at 9:02 PM, Ryan Johnson wrote:
>
> Maybe I misunderstood... there's currently a (very small) cache
> (unwind-dw2-fde-dip.c) that lives behind the loader mutex. It contains 8
> entries and each entry holds the start and end addresses for one loaded
> object, along with a pointer [...]

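The cache being described can be pictured with a small sketch. Field and function names below are invented for illustration; the real frame-header cache in unwind-dw2-fde-dip.c differs in detail, but the shape is the same: a handful of slots, each mapping an address range for one loaded object to its cached unwind data.

```c
/* Hedged sketch of a small, fixed-size object-range cache: each slot
   covers [pc_low, pc_high) for one loaded object plus a pointer to
   its cached unwind header.  Names are illustrative only. */
#include <stdint.h>
#include <stddef.h>

#define CACHE_SLOTS 8

struct obj_cache_entry {
    uintptr_t   pc_low;   /* start address covered by this object */
    uintptr_t   pc_high;  /* one past the end address             */
    const void *hdr;      /* cached unwind header for the object  */
};

static struct obj_cache_entry obj_cache[CACHE_SLOTS];

/* Linear scan: with only 8 entries the scan itself is trivially
   cheap; the cost the thread complains about is the mutex protecting
   the cache, not the lookup. */
const void *obj_cache_lookup(uintptr_t pc)
{
    for (size_t i = 0; i < CACHE_SLOTS; i++)
        if (pc >= obj_cache[i].pc_low && pc < obj_cache[i].pc_high)
            return obj_cache[i].hdr;
    return NULL;   /* miss: caller must take the slow, locked path */
}
```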
On 29/05/2013 3:36 AM, Jakub Jelinek wrote:
Hi!
On Wed, May 29, 2013 at 12:02:27AM -0400, Ryan Johnson wrote:
Note, swapping the order of dl_iterate_phdr and _Unwind_Find_registered_FDE
IMHO is fine.
I think what you're saying is that the p_eh_frame_hdr field could
end up with a dangling pointer [...]

On Wed, May 29, 2013 at 2:47 AM, Ian Lance Taylor wrote:
> On Mon, May 27, 2013 at 3:20 PM, Ryan Johnson wrote:
>>
>> I have a large C++ app that throws exceptions to unwind anywhere from 5-20
>> stack frames when an error prevents the request from being served (which
>> happens rather frequently) [...]

Hi!
On Wed, May 29, 2013 at 12:02:27AM -0400, Ryan Johnson wrote:
Note, swapping the order of dl_iterate_phdr and _Unwind_Find_registered_FDE
IMHO is fine.
> I think what you're saying is that the p_eh_frame_hdr field could
> end up with a dangling pointer due to a dlclose call?
>
> If so, my [...]

On 29/05/2013 12:01 AM, Ian Lance Taylor wrote:
On Tue, May 28, 2013 at 8:50 PM, Ryan Johnson wrote:
For example, it should be reasonably safe to let __cxa_allocate_exception
call dl_iterate_phdr in order to build a list of object headers valid at the
time unwind begins. It already calls malloc [...]

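For reference, `dl_iterate_phdr` is a documented glibc API, and the "snapshot at throw time" idea above can be sketched roughly as follows. The function name `snapshot_loaded_objects` is invented for the example; a real cache would also record each object's `PT_GNU_EH_FRAME` segment rather than merely counting objects.

```c
/* Hedged sketch: walk the loaded objects once via dl_iterate_phdr so
   a later unwind could consult the snapshot instead of re-walking the
   loader's list under its mutex on every frame. */
#define _GNU_SOURCE
#include <link.h>    /* dl_iterate_phdr, struct dl_phdr_info */
#include <stddef.h>

static int count_object(struct dl_phdr_info *info, size_t size, void *data)
{
    (void)size;
    int *n = data;
    /* info->dlpi_addr is the object's load base; dlpi_phdr/dlpi_phnum
       expose its program headers (including PT_GNU_EH_FRAME, which is
       what the unwinder's cache ultimately wants).  Here we only
       count objects to keep the sketch short. */
    ++*n;
    return 0;   /* 0 means: keep iterating */
}

int snapshot_loaded_objects(void)
{
    int n = 0;
    dl_iterate_phdr(count_object, &n);
    return n;
}
```

Note the caveat raised elsewhere in the thread still applies: a snapshot built this way can go stale if another thread calls dlclose mid-unwind, which is why the version-counter idea pairs naturally with it.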
On 28/05/2013 11:49 PM, Ian Lance Taylor wrote:
On Tue, May 28, 2013 at 6:19 PM, Ryan Johnson wrote:
That last point makes me really wonder why we bother grabbing the mutex
during unwind at all... at the very least, it would seem profitable to
verify the object header cache at throw time [...]

On Tue, May 28, 2013 at 8:50 PM, Ryan Johnson wrote:
>
> For example, it should be reasonably safe to let __cxa_allocate_exception
> call dl_iterate_phdr in order to build a list of object headers valid at the
> time unwind begins. It already calls malloc, so allocating space for a cache
> (holding [...]

On 28/05/2013 11:05 PM, Alan Modra wrote:
On Tue, May 28, 2013 at 09:19:48PM -0400, Ryan Johnson wrote:
On 28/05/2013 8:47 PM, Ian Lance Taylor wrote:
On Mon, May 27, 2013 at 3:20 PM, Ryan Johnson wrote:
I'm bringing the issue up here, rather than filing a bug, because I'm not
sure whether this is an oversight, a known problem [...]

On Tue, May 28, 2013 at 6:19 PM, Ryan Johnson wrote:
>
> That last point makes me really wonder why we bother grabbing the mutex
> during unwind at all... at the very least, it would seem profitable to
> verify the object header cache at throw time---perhaps using the nadds/nsubs
> trick---and [...]

On Tue, May 28, 2013 at 09:19:48PM -0400, Ryan Johnson wrote:
> On 28/05/2013 8:47 PM, Ian Lance Taylor wrote:
> >On Mon, May 27, 2013 at 3:20 PM, Ryan Johnson wrote:
> >>I'm bringing the issue up here, rather than filing a bug, because I'm not
> >>sure whether this is an oversight, a known problem [...]

On 28/05/2013 8:47 PM, Ian Lance Taylor wrote:
On Mon, May 27, 2013 at 3:20 PM, Ryan Johnson wrote:
I have a large C++ app that throws exceptions to unwind anywhere from 5-20
stack frames when an error prevents the request from being served (which
happens rather frequently). Works fine single-threaded [...]

On Mon, May 27, 2013 at 3:20 PM, Ryan Johnson wrote:
>
> I have a large C++ app that throws exceptions to unwind anywhere from 5-20
> stack frames when an error prevents the request from being served (which
> happens rather frequently). Works fine single-threaded, but performance is
> terrible for [...]

On 05/28/2013 08:09 PM, Václav Zeman wrote:
If the bottleneck is really in glibc, then you should probably ask them
to fix it. Could the mutex be changed to an rwlock instead?
rwlocks don't eliminate hardware contention, so I doubt they'd be a win
here.
You'd need brlocks or some RCU-like approach [...]

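One concrete reading of the "RCU-like approach" suggestion is a sequence-counter scheme, where readers never block and simply retry if a writer ran concurrently. The toy sketch below is an illustration of that pattern only, not glibc code, and it glosses over the memory-fence subtleties a production seqlock needs.

```c
/* Toy seqlock-style sketch: a writer makes the counter odd while
   mutating and even when done; a reader retries whenever the counter
   was odd or changed underneath it.  Illustrative only. */
#include <stdatomic.h>

static atomic_uint seq;       /* even: stable, odd: write in progress */
static int shared_value;      /* stand-in for the protected data      */

void writer_update(int v)
{
    atomic_fetch_add_explicit(&seq, 1, memory_order_acq_rel); /* now odd  */
    shared_value = v;
    atomic_fetch_add_explicit(&seq, 1, memory_order_acq_rel); /* even again */
}

int reader_read(void)
{
    unsigned before, after;
    int v;
    do {
        before = atomic_load_explicit(&seq, memory_order_acquire);
        v = shared_value;
        after = atomic_load_explicit(&seq, memory_order_acquire);
    } while ((before & 1) || before != after);
    return v;
}
```

The appeal for the unwinder case is that the common path (no concurrent dlopen/dlclose) involves no lock and no cache-line writes by readers, which is exactly where the rwlock would still suffer.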
On 05/28/2013 12:20 AM, Ryan Johnson wrote:
> Hi all,
>
> (please CC me in replies, not a list member)
>
> I have a large C++ app that throws exceptions to unwind anywhere from
> 5-20 stack frames when an error prevents the request from being served
> (which happens rather frequently). Works fine single-threaded [...]

On Mon, May 27, 2013 at 06:20:21PM -0400, Ryan Johnson wrote:
> I'm not sure whether this is an oversight, a known problem that's
> hard to fix, or a feature (e.g. somehow required for reliable
> unwinding). I suspect the former, because _Unwind_Find_FDE tries a
> call to _Unwind_Find_registered_FDE [...]

Hi all,

(please CC me in replies, not a list member)

I have a large C++ app that throws exceptions to unwind anywhere from
5-20 stack frames when an error prevents the request from being served
(which happens rather frequently). Works fine single-threaded, but
performance is terrible for 24 [...]

25 matches