Dear Wolfgang Denk,

> In message <[EMAIL PROTECTED]> you wrote:
> >
> > > This should give you raw serial driver performance while a serial
> > > data transfer is running, while keeping functionality for all other
> > > use cases.
> > >
> > > What do you think?
> >
> > First we need to have a good and accepted solution to reduce the time
> > spent in NetLoop, e.g. read the env only when it has changed. Then the
> > polling is no longer the critical path.
>
> Hm... sorry, but I disagree. With my suggestion above, the time spent
> in NetLoop() does not matter any more at all. So no optimizations
> there will be needed to get your code working.
If you know how to implement behaviour like VTIME I'm fine with that, but I
don't understand how it can work. Is it correct to say: to check whether data
has been received at our nc we have to run NetLoop(). If yes, one run costs
me 15 milliseconds, so about 150 characters are potentially lost on the
serial line. Of course, when I'm on the serial console I stay there longer
and read more.

> > The main problem from my point of view is the echo of the received data
> > to serial and also to nc. This is done now immediately, character by
> > character, and this takes time (more than we have).
>
> Sorry, I don't get it. It seems you bring up a new topic here.
>
> Less than 6 hours before this you wrote: "The polling of the serial
> driver is too slow to get all characters. ... we added hooks to
> measure the time for tstc() execution. The measured time are: ... nc
> 15 Milliseconds".
>
> My interpretation was (and is) that it's the *input* processing which
> is your major concern. And I showed a way to solve this problem (at
> least I think that my suggestion will solve it).
>
> Now you bring up a new topic - the time needed to output the
> characters. Maybe we should try and solve problems sequentially - if
> we throw all issues we see into one big pot we might not be able to
> swallow this.

Sorry, I did not tell you the full story (I also do not understand all of it).

> BTW: did you measure any times for the character output?

What I know is that reducing the time spent in the nc functions, by calling
getenv() only when the env has changed, gives the numbers listed below:

  nc tstc()          before: 15 milliseconds    after: 60 microseconds
  nc getc()          before:  5 microseconds    after:  5 microseconds
  nc send_packet()   before: 90 microseconds    after: 90 microseconds

For receiving, the "real job" is done in tstc(); getc() only takes the
character from the input buffer. Sending does not run NetLoop() in "steady
state". This explains why only tstc() gets faster.
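The "call getenv() only when the env has changed" idea could be sketched
roughly as below. This is NOT the actual U-Boot netconsole code - all names
here (env_generation, nc_get_ip, the fake env helpers) are illustrative
stand-ins; the point is only the shape of the optimization: a generation
counter that every env write bumps, so the per-poll hot path skips the
expensive lookup unless the counter has moved.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch of the caching scheme, not real U-Boot code.
 * env_generation is bumped on every environment change; the hot path
 * re-reads the env only when the counter differs from the cached one. */

static unsigned long env_generation;                  /* bumped on env writes */
static unsigned long cached_generation = (unsigned long)-1; /* force 1st read */
static char cached_ncip[32];

/* stand-in for the real environment storage */
static const char *fake_env_ncip = "192.168.1.2";

static void fake_setenv_ncip(const char *val)
{
	fake_env_ncip = val;
	env_generation++;        /* invalidate the cache */
}

static const char *nc_get_ip(void)
{
	if (cached_generation != env_generation) {
		/* slow path: taken only after an env change */
		strncpy(cached_ncip, fake_env_ncip, sizeof(cached_ncip) - 1);
		cached_ncip[sizeof(cached_ncip) - 1] = '\0';
		cached_generation = env_generation;
	}
	return cached_ncip;      /* fast path: no env scan per poll */
}
```

This matches the measurements: only the path that previously re-read the
env on every poll (tstc()) speeds up; getc() and send_packet() never paid
that cost and stay unchanged.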
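As a sanity check on the numbers in this thread, here is the timing
arithmetic under the assumption of 8N1 framing (1 start + 8 data + 1 stop =
10 bits on the wire per character); the helper names are mine, not from any
codebase:

```c
#include <assert.h>

/* microseconds to transmit one character, assuming 10 bits per char (8N1) */
static unsigned long char_time_us(unsigned long baud)
{
	return (10UL * 1000000UL) / baud;
}

/* characters arriving while we are busy for busy_us microseconds */
static unsigned long chars_during(unsigned long busy_us, unsigned long baud)
{
	return busy_us / char_time_us(baud);
}
```

At 115200 baud one character takes about 87 us, so a 15 ms NetLoop() run
covers roughly 170 character times; the "150 characters" figure above uses
the rounder 100 us/char approximation. At 38400 baud a character takes
about 260 us, which is why the slower console keeps up.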
> BTW - reducing the console baud rate would be a trivial way to avoid
> most of these issues ;-)

Reducing the baud rate helps. Here are the measurements (pasting a
200 character line):

  with 57600 baud, 6% of the characters are lost
  with 38400 baud, 0% of the characters are lost

--> this would work

> > Am I right when I say that between one getc() read of a character and
> > the next call of getc() we have 100 microseconds to do all the
> > required processing, otherwise we lose data?
>
> On average, yes. The time for a single character might be longer (up to
> close to 200 us) assuming we are fast enough then to catch the third
> char. All this assuming a console baudrate of 115 kbps.

I agree with this if we assume that one character is received into the
buffer/bd and two more can be held in the HW FIFO. If that were the case,
I should always receive the first 3 characters and only then see losses.
But this is not the case - we already lose the second one. Do you have an
explanation for this?

Best regards,
Stefan Bigler

_______________________________________________
U-Boot mailing list
U-Boot@lists.denx.de
http://lists.denx.de/mailman/listinfo/u-boot