> The former technique appears more general, less reliant on prior
> filtering, and immune to long strings of 1s or 0s. On the other hand,
> the latter technique is simpler, requires fewer calculations and less
> memory.
>
So if the sample stream is known to have sufficient zero crossings and
has been properly filtered, do you see any hazards to going with the
latter technique?
Looking for zero crossings doesn't work as well when you have a low SNR
or when you have multipath. Multipath can make the bits non-symmetric.
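For concreteness, here is a minimal sketch (Python; the function name
and the snap-to-edge rule are my assumptions, since no code appears in
the thread) of one common way to implement the "zero crossing adjust"
idea:

    # Assumed sketch: decide bits at the nominal mid-bit point, and
    # snap the bit clock back to the start of a period whenever a sign
    # change (zero crossing) is seen in the 8X oversampled stream.
    def zero_crossing_adjust(samples, osr=8):
        bits = []
        counter = 0                # position within the current bit period
        prev = samples[0]
        for s in samples:
            if (s >= 0) != (prev >= 0):
                counter = 0        # treat the crossing as a bit edge
            if counter == osr // 2:
                bits.append(1 if s >= 0 else 0)   # decide at mid-bit
            counter = (counter + 1) % osr
            prev = s
        return bits

In a loop like this, a single noise-induced sign change snaps the
clock to a false edge, and multipath smears the real edges away from
the nominal bit boundary, which is exactly the failure mode described
above.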
Johnathan Corgan, on 09/06/06 13:08:
> So if the sample stream is known to have sufficient zero crossings and
> has been properly filtered, do you see any hazards to going with the
> latter technique?
If "sufficient" is really sufficient, then it should be safe. I have
implemented circuits using this technique.
I'd like to hear your thoughts comparing "center of goodness" vs. "zero
crossing adjust" techniques for recovering bit timing and deframing in
an oversampled NRZ sample stream (I'm sure there are better names for
these algorithms!)
Take an incoming sample stream which represents an 8X oversampled NRZ
bitstream.
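For contrast, one plausible reading of the "center of goodness" name
(the block-oriented structure below is a guess, but it shows where the
extra memory and arithmetic come from) is to score all eight candidate
sampling phases at once and decide every bit at the best one:

    import numpy as np

    # Assumed sketch: score each of the osr candidate sampling phases
    # by how far its samples sit from the slicing threshold (i.e. how
    # open the eye is there, assuming the stream was filtered so the
    # eye closes near the bit edges), then slice the whole block at
    # the best-scoring phase.
    def center_of_goodness(samples, osr=8):
        samples = np.asarray(samples, dtype=float)
        n_bits = len(samples) // osr
        block = samples[:n_bits * osr].reshape(n_bits, osr)
        goodness = np.abs(block).sum(axis=0)   # per-phase eye opening
        phase = int(np.argmax(goodness))       # taking the peak for
                                               # brevity; the name
                                               # suggests the center of
                                               # the run of good phases
        return (block[:, phase] >= 0).astype(int).tolist()

No individual crossing has to be found, so a long run of 1s or 0s only
dilutes the per-phase scores instead of derailing a clock, but the
whole block must be buffered and summed over, matching the trade-offs
quoted at the top of the thread.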