Advait,
let's try to answer your question:
void try1(void)
{
    char a[20];           /* <- request for 20 bytes on the stack */
    unsigned char i;
    for (i = 0; i < 20; i++)
        a[i] = com1_msgbuf[i];
}
The assembly for this function (from the .lss file) is:
void try1(void)
{
    5a40: cf 93    push r28
    5a42: df 93    push r29
    5a44: cd b7    in   r28, 0x3d   ; 61   read SPL into r28
    5a46: de b7    in   r29, 0x3e   ; 62   read SPH into r29
    5a48: 64 97    sbiw r28, 0x14   ; 20   subtract 20 from r28:r29 (reserve the 20 bytes)
    5a4a: 0f b6    in   r0, 0x3f    ; 63   save SREG, which holds the global I-flag changed by CLI/SEI
    5a4c: f8 94    cli                     disable interrupts (clear the I-flag in SREG)
    5a4e: de bf    out  0x3e, r29   ; 62   write the new SPH
    5a50: 0f be    out  0x3f, r0    ; 63   restore the previous SREG (I-flag)
    5a52: cd bf    out  0x3d, r28   ; 61   write the new SPL
What I want to know is why the compiler saved SREG in r0, disabled all interrupts, wrote only the upper byte of the stack pointer, restored SREG, and only then copied the lower byte of the stack pointer (5a4a to 5a52).

The natural follow-up question is: why is SREG restored in between the two writes to the stack pointer? If you read the chapter on interrupt handling (near the Status Register description) in the Atmel CPU documentation, you will see that the I-flag is handled at least 4 clock cycles after the change to SREG. Therefore the movw at 5a54 is the first instruction that could be interrupted!

HTH
Knut
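For readers who want the same trick spelled out in C: below is a minimal sketch, assuming the avr-libc headers (SREG, SPH, SPL from <avr/io.h>, cli() from <avr/interrupt.h>). The helper name move_stack_pointer is made up for illustration; it is not the compiler's own code, it just reproduces the ordering of the prologue above: save SREG, disable interrupts, write SPH, restore SREG, then write SPL while the restore is still taking effect.

    #include <avr/io.h>
    #include <avr/interrupt.h>
    #include <stdint.h>

    /* Sketch only: move the AVR stack pointer using the same
       SREG-save / cli / SPH / SREG-restore / SPL ordering that the
       compiler-generated prologue uses. */
    static void move_stack_pointer(uint16_t new_sp)   /* hypothetical helper */
    {
        uint8_t sreg = SREG;            /* save SREG, including the global I-flag   */
        cli();                          /* disable interrupts while SP is half-set  */
        SPH = (uint8_t)(new_sp >> 8);   /* write the high byte of the stack pointer */
        SREG = sreg;                    /* restore the I-flag; per the note above,  */
                                        /* it only takes effect a few cycles later  */
        SPL = (uint8_t)new_sp;          /* so this write cannot be interrupted      */
    }

Restoring SREG one instruction before the SPL write keeps the interrupts-disabled window as short as possible while still guaranteeing that no interrupt ever sees a half-updated stack pointer.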