We are currently working on a tool to help developers evaluate the power
consumption of their program.

The principle is simple: when a program is executed on the POSE emulator,
we record the opcodes it executes, associate each opcode with an energy
cost, and sum these costs at the end to get the total.

To do this, the energy of each instruction was previously measured by
executing it in an infinite loop, measuring the current drawn, and then
multiplying by the voltage (a constant 3 V) and the execution time (number
of cycles / clock frequency).

This works very well for pieces of C code: when we execute a fragment of C
code and compare the actual current drawn to the estimated one, the
difference is less than five percent.

The problem comes when we put some OS calls in the code. The actual current
measured is 30 to 40 % less than the predicted one. This is odd, because we
are still dumping the opcodes executed during an OS call correctly, so we
should get good results.

My questions are the following:
        - is there any mechanism that is not implemented in the POSE
emulator ?
        - is there a doze mode in which the processor is put into a
low-power state when an API is called ?
        - where else could this difference come from ?

We are also looking for two things:

            - benchmarks to evaluate the accuracy of our simulator (we will
compare the expected current to the real measured one).
            - applications such as file compression, file
encryption/decryption, FFT computation (we'll try to optimize their energy
consumption).

Is there any place I could find some of these ?



Thank you in advance for your help. If we judge this program accurate enough,
we'll certainly make its source public...


        laurent lazard
        master student at Oregon State University, ECE department







-- 
For information on using the ACCESS Developer Forums, or to unsubscribe, please 
see http://www.access-company.com/developers/forums/