Currently we define I2C_TIMEOUT like this:

#define I2C_TIMEOUT     (CONFIG_SYS_HZ / 4)

I'm seeing some I2C instability on a new board I'm working on, especially with 
SPD.  If I change the above to 

#define I2C_TIMEOUT     (CONFIG_SYS_HZ / 2)

The problems go away (or at least, they appear to so far).  Can someone tell me 
why we chose (CONFIG_SYS_HZ / 4) to begin with?  The way we use I2C_TIMEOUT is 
confusing (the timeval initialization is included here for context):

        unsigned long long timeval = get_ticks();       /* start of the wait */

        while (readb(&i2c_dev[i2c_bus_num]->sr) & I2C_SR_MBB) {
                if ((get_ticks() - timeval) > usec2ticks(I2C_TIMEOUT))
                        return -1;
        }

CONFIG_SYS_HZ is 1000, so I2C_TIMEOUT evaluates to 250.  However, the way it's 
used, 250 isn't a quarter second's worth of clock ticks; it's passed to 
usec2ticks(), which treats it as a number of microseconds.  If CONFIG_SYS_HZ 
were changed to 100, would that mean we want to call usec2ticks(25)?
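
To make the mismatch concrete, here's the arithmetic spelled out (assuming the 
usual usec2ticks() semantics of converting a microsecond count into timebase 
ticks):

        /* With CONFIG_SYS_HZ == 1000: */
        usec2ticks(I2C_TIMEOUT);  /* == usec2ticks(250): a 250 us timeout, not 1/4 s */

        /* With CONFIG_SYS_HZ == 100: */
        usec2ticks(I2C_TIMEOUT);  /* == usec2ticks(25): the timeout silently
                                     shrinks to 25 us, even though nothing
                                     about the I2C hardware has changed */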

I think what we should be doing is this:

#define I2C_TIMEOUT     1000

Surely one millisecond is not too long a timeout?
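
For reference, the wait loop above would then read like this (same loop as 
before; the function wrapper is just my sketch of the surrounding context):

#define I2C_TIMEOUT     1000    /* microseconds, independent of CONFIG_SYS_HZ */

static int i2c_wait4bus(void)
{
        unsigned long long timeval = get_ticks();

        /* Wait up to 1 ms for the bus-busy (MBB) bit to clear. */
        while (readb(&i2c_dev[i2c_bus_num]->sr) & I2C_SR_MBB) {
                if ((get_ticks() - timeval) > usec2ticks(I2C_TIMEOUT))
                        return -1;
        }

        return 0;
}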

-- 
Timur Tabi
Linux kernel developer at Freescale