Hi!

On Tue, Aug 24, 2021 at 07:56:26AM +0000, Christophe Leroy wrote:
> Use bcl 20,31,+4 instead of bl in order to preserve link stack.

You use $+4 actually, which is clearer than .+4 or just +4 (and I am
surprised that the latter even works btw, I never knew :-) -- either
way, the "+4" in the commit message looks like a typo).
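The conversion at each site is the usual "get the current address"
sequence.  A minimal sketch of the before and after -- register and
label numbers here are placeholders, they vary from site to site:

	/* Before: the bl to the next instruction pushes a return address
	 * onto the hardware link stack that no matching blr ever pops,
	 * so the return-address predictor ends up unbalanced. */
	bl	1f
1:	mflr	r6		/* r6 = address of this mflr */

	/* After: bcl 20,31 ("branch always" and link) targeting $+4 is
	 * the form used to get the next instruction's address without
	 * being treated as a real subroutine call, which is how the
	 * link stack is preserved.  The target is simply the next
	 * instruction, so the old label is no longer needed. */
	bcl	20,31,$+4
	mflr	r6		/* r6 = address of this mflr */

Since the branch target is now $+4 instead of a label, the labels that
existed only as bl targets become unused -- which is what the comments
below are about.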
> -	bl	invstr				/* Find our address */
> +	bcl	20,31,$+4			/* Find our address */
>  invstr:	mflr	r6				/* Make it accessible */

You can remove the label now.  This isn't true in all cases, but here
you can (this is the only place "invstr" is used).

> @@ -85,7 +85,7 @@ skpinv:	addi	r6,r6,1		/* Increment */
>  	addi	r6,r6,10
>  	slw	r6,r8,r6	/* convert to mask */
>
> -	bl	1f		/* Find our address */
> +	bcl	20,31,$+4	/* Find our address */
>  1:	mflr	r7

Here, too.

> @@ -1045,7 +1045,7 @@ head_start_47x:
>  	sync
>
>  	/* Find the entry we are running from */
> -	bl	1f
> +	bcl	20,31,$+4
>  1:	mflr	r23
>  	tlbsx	r23,0,r23
>  	tlbre	r24,r23,0

And here.

> @@ -1132,7 +1132,7 @@ _GLOBAL(switch_to_as1)
>  	bne	1b
>
>  	/* Get the tlb entry used by the current running code */
> -	bl	0f
> +	bcl	20,31,$+4
>  0:	mflr	r4
>  	tlbsx	0,r4
> @@ -1166,7 +1166,7 @@ _GLOBAL(switch_to_as1)
>  _GLOBAL(restore_to_as0)
>  	mflr	r0
>
> -	bl	0f
> +	bcl	20,31,$+4
>  0:	mflr	r9
>  	addi	r9,r9,1f - 0b

And these.

> --- a/arch/powerpc/mm/nohash/tlb_low.S
> +++ b/arch/powerpc/mm/nohash/tlb_low.S
> @@ -199,7 +199,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_476_DD2)
>  	 * Touch enough instruction cache lines to ensure cache hits
>  	 */
>  1:	mflr	r9
> -	bl	2f
> +	bcl	20,31,$+4
>  2:	mflr	r6
>  	li	r7,32
>  	PPC_ICBT(0,R6,R7)		/* touch next cache line */
> @@ -414,7 +414,7 @@ _GLOBAL(loadcam_multi)
>  	 * Set up temporary TLB entry that is the same as what we're
>  	 * running from, but in AS=1.
>  	 */
> -	bl	1f
> +	bcl	20,31,$+4
>  1:	mflr	r6
>  	tlbsx	0,r8
>  	mfspr	r6,SPRN_MAS1

And these too.

There does not seem to be a warning for unused local labels; it would
be useful in this case :-)


Segher