Antonino Ingargiola wrote:
For my use case it would be more sensible to accept the new data and discard the older data in the tty buffer: the tty buffer would be a moving window over the most recent incoming data. If someone is not reading the incoming data, they are probably not interested in it; when they finally do read (assuming the buffer overran in the meantime), they are most likely interested in the most recent chunk of incoming data, not in the head of the data received first. Since I acquire measurement data over serial, this makes perfect sense for me. But does it make sense in general too?
There is no one policy here that will make everyone happy. Some will want all the data received before a loss; others, the data received after it. The real goal is to minimize any data loss.
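As an illustration of the moving-window behaviour described above, here is a minimal user-space sketch of a fixed-size ring buffer that silently overwrites its oldest bytes when the reader falls behind. This is not how the tty layer actually stores data; all names (struct winbuf, WIN_SIZE, ...) are purely illustrative.

/*
 * "Moving window" policy: a fixed-size ring buffer that overwrites
 * the oldest bytes when new data arrives faster than it is consumed.
 */
#include <stddef.h>

#define WIN_SIZE 4096

struct winbuf {
	unsigned char data[WIN_SIZE];
	size_t head;	/* next write position */
	size_t len;	/* valid bytes, at most WIN_SIZE */
};

/* Append new bytes, dropping the oldest ones once the window is full. */
static void winbuf_push(struct winbuf *w, const unsigned char *buf, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		w->data[w->head] = buf[i];
		w->head = (w->head + 1) % WIN_SIZE;
		if (w->len < WIN_SIZE)
			w->len++;
		/* else: the byte written WIN_SIZE positions ago is overwritten */
	}
}

/* Copy out the retained (most recent) bytes, oldest of them first. */
static size_t winbuf_read(const struct winbuf *w, unsigned char *out, size_t n)
{
	size_t count = n < w->len ? n : w->len;
	size_t start = (w->head + WIN_SIZE - w->len) % WIN_SIZE;

	for (size_t i = 0; i < count; i++)
		out[i] = w->data[(start + i) % WIN_SIZE];
	return count;
}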
However, whatever policy the buffer uses, the fundamental point is that when I flush the input buffer I should be able to rely on every byte read after the flush being *new* (current) data, not old data. When the input stream can be stopped, I can simply check that there are 0 bytes left in the buffer; when the stream cannot be stopped, I have to resort to a flush-and-sleep heuristic, repeated several times, before I can read a single *reliable* byte.
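To make that heuristic concrete, here is a rough user-space sketch of the flush-and-sleep approach, assuming a POSIX termios serial port. The retry count and the 50 ms delay are arbitrary illustrative values, not recommendations, and error handling is omitted.

/*
 * Flush-and-sleep heuristic for a stream that cannot be stopped:
 * repeat flush + sleep so that stale data still in flight (driver
 * FIFOs, USB-serial buffers, ...) has a chance to reach the tty
 * buffer and be discarded too; the next read() should then return
 * recently received data.
 */
#include <unistd.h>
#include <termios.h>

#define FLUSH_RETRIES 5

static void flush_stale_input(int fd)
{
	for (int i = 0; i < FLUSH_RETRIES; i++) {
		tcflush(fd, TCIFLUSH);	/* discard unread input in the tty buffer */
		usleep(50 * 1000);	/* give in-flight stale bytes time to arrive */
	}
	tcflush(fd, TCIFLUSH);		/* final flush just before reading */
}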
The flush minimizes the stale data the application has to process. As Alan said in his response, there are other sources of stale data beyond the kernel's control, but we absolutely should be flushing all the buffers we do control. In the end, if more reliability is needed, the application must implement its own discipline of framing (defining block boundaries) and checking (CRC) on the raw data stream.

-- 
Paul
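For what such an application-level framing-and-CRC discipline might look like, here is a sketch assuming a hypothetical frame layout (a 0x7E sync byte, one length byte, the payload, then CRC-16-CCITT computed over length plus payload). The layout and names are illustrative, not an existing protocol.

/*
 * Receiver-side check for a simple hypothetical frame:
 *   0x7E | length (1 byte) | payload | CRC-16-CCITT (2 bytes, big endian)
 * The receiver resynchronizes on the 0x7E marker and drops any frame
 * whose CRC does not match, so corrupted or partially stale data is
 * detected rather than silently processed.
 */
#include <stdint.h>
#include <stddef.h>

#define FRAME_SYNC 0x7E

/* CRC-16-CCITT (polynomial 0x1021, initial value 0xFFFF). */
static uint16_t crc16_ccitt(const uint8_t *buf, size_t len)
{
	uint16_t crc = 0xFFFF;

	for (size_t i = 0; i < len; i++) {
		crc ^= (uint16_t)buf[i] << 8;
		for (int bit = 0; bit < 8; bit++)
			crc = (crc & 0x8000) ? (crc << 1) ^ 0x1021 : crc << 1;
	}
	return crc;
}

/* Validate one received frame; returns payload length or -1 on error. */
static int frame_check(const uint8_t *frame, size_t len)
{
	if (len < 4 || frame[0] != FRAME_SYNC)
		return -1;			/* not aligned on a frame start */

	uint8_t plen = frame[1];
	if (len < (size_t)plen + 4)
		return -1;			/* truncated frame */

	uint16_t rx_crc = (uint16_t)frame[2 + plen] << 8 | frame[3 + plen];
	if (crc16_ccitt(&frame[1], (size_t)plen + 1) != rx_crc)
		return -1;			/* CRC mismatch: discard frame */

	return plen;
}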