Hi Daniel,

> On 03/05/2023 18:20, Marcus Müller wrote:
>> ¹ this is the standard situation for me to start a flamewar on the nature of DSSS: from a coding perspective, DSSS is just a repetition code. Repetition codes are what we call "bad", so instead of concatenating an r = 1/F DSSS-repetition-code after an r = 4/5 Turbo code, we'd be much much better off just using an r = b/R "proper" code to begin with. I guess the thing is that decoding complexities of very-low-rate code decoders are usually not fun at bad input SNRs.
On 06.05.23 09:45, Daniel Estévez wrote:
> Not to start a flamewar, since I can see what you mean here, but I'm wondering: is DSSS the same as a repetition code? A commonly used technique to transmit data with DSSS is to use 2^m quasi-orthogonal codes to transmit m bits per "symbol". These 2^m codes may be different codes drawn from the same pool (Gold codes, etc.), or circular shifts of the same code (this concept is being explored for future GNSS signals under the name of CSK, code shift keying). The uncoded BER performance is pretty similar to m-FSK, since the "symbols" are nearly orthogonal.
Yep, that'd be rate-1; you don't need to squint much to see that the N-point-DFT has 
exactly N (=often 2ᵐ) such entries, and when you send N symbols in parallel, you get a 
DSSS-single-user-CDMA system called OFDM :D
But that's not on bit-level. The (Fast) Hadamard transform is one of those things that 
decompose an arbitrary length-2ᵐ binary vector into orthogonal binary vectors of the same 
length (I was pointed to it when a student of mine, someone else whose name I forget, and 
I were discussing how to make two uncorrelated random vectors from a single 64-bit random 
vector, in the context of WGN generation).
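(Quick sanity check of that orthogonality, if you want to play with it; scipy's hadamard 
and the size 2ᵐ = 64 are just my picks for illustration:)

import numpy as np
from scipy.linalg import hadamard

m = 6
H = hadamard(2**m)   # Sylvester construction: rows are mutually orthogonal ±1 vectors
assert (H @ H.T == 2**m * np.eye(2**m)).all()

# decompose an arbitrary length-2^m ±1 vector into that basis and back:
x = np.random.default_rng(0).integers(0, 2, size=2**m) * 2 - 1
coeffs = H @ x / 2**m               # analysis: H is its own inverse up to the 2^m factor
assert np.allclose(H @ coeffs, x)   # synthesis recovers the vector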
> I might be misremembering things, but I think that for large m the BER performance of uncoded m-FSK is better than uncoded BPSK, simply because the BER curve falls off more steeply.
I'll redefine things (sorry!): you're using an N-FSK, with N=2ᵐ, so that an FSK symbol 
carries m bits.
Thinking in AWGN & discrete time here, with fixed total energy E:
Considering this a bit of a geometric operation: taking an m-dimensional vector from {0,1}ᵐ and mapping it directly to a BPSK symbol vector from {+1,-1}ᵐ, scaled to total energy E, gives i.i.d. bit errors with probability Q(√(E/m)/σ); i.e., the argument of our Q-function gets divided by √m. Accordingly, the slope d/(dE) BER is a normal PDF evaluated at that √m-compressed argument, scaled by a further 1/√m.
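Numerically (a quick sketch; unit noise variance per real dimension and the total energy 
E = 16 are my arbitrary picks):

import numpy as np
from scipy.stats import norm

E = 16.0                    # fixed total energy; noise std 1 per real dimension (assumed)
for m in range(1, 9):
    # each of the m BPSK bits gets energy E/m -> per-bit error Q(sqrt(E/m))
    print(m, norm.sf(np.sqrt(E / m)))   # norm.sf is the Q-function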
For N-FSK, I'll apply Parseval's Theorem to noise and signal, and find that the N-point 
DFT converts the frequency detection problem to an N-PPM detection problem (and that's 
optimal, since we concentrate all signal energy and keep noise energy maximally spread 
out; the DFT is the matched filterbank, if you will). The symbol error probability is
1 - P(no error), where no error occurs when each and every one of the other DFT output 
bins is lower in energy than the correct one.
WLOG, we put the energy of the "correct" tone in bin X[0], and let all bins contain iid 
noise realisations:
      ⎧ √E + N[0] for k=0,
X[k] =⎨
      ⎩ N[k]      for k>0

noise and signal being uncorrelated, I'll drop the cross term 2·√E·N[0] (zero in expectation):

       ⎧ E + N²[0]  for k=0,
X²[k] =⎨
       ⎩ N²[k]      for k>0

P(no error) = P(X²[0] > X²[1] ∧ X²[0] > X²[2] ∧ … ∧ X²[0] > X²[N-1])
introducing Z[k] = N²[k] - N²[0]
            = P(E > Z[1] ∧ E > Z[2] ∧ … ∧ E > Z[N-1])
            ≥ P(E > Z[1]) · P(E > Z[2]) · … · P(E > Z[N-1])
(the Z[k] all share N²[0], so they're not independent; but conditioned on N[0] the events 
are, and the shared term only correlates them positively, so the product is a lower bound)

So, we can find an upper bound for the error probability (which probably ends up being fairly tight). Dropping N²[0] from each Z[k] only makes the events harder, so P(E > Z[k]) ≥ P(E > N²[k]), and the latter is the CDF of a K=2 𝜒²-distribution evaluated at E; the bound arrives from its (N-1)th power.
I'm honestly too lazy to make a plot now, but intuitively: the K=2 𝜒²'s complementary CDF 
is exp(-x/2), so P(error) ≤ 1 - (1 - exp(-E/2))^(N-1) ≤ (N-1)·exp(-E/2) = (2ᵐ-1)·exp(-E/2). 
With E = m·E_b, that tail dies like exp(-m·E_b/2), which is going to be a lot smaller than 
the 1/√m-compressed normal tail we got from the BPSK experiment, as long as the SNR is > 1 
(roughly; the 2ᵐ prefactor eats some of the gain).
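(A quick numerical check of that claim, same normalisation as above; E_b = 4 is an 
arbitrary pick, the FSK side uses the union bound from the derivation, the BPSK side the 
exact Q-function:)

import numpy as np
from scipy.stats import norm

Eb = 4.0                                  # energy per bit; noise iid N(0,1) per real dimension
for m in range(1, 13):
    E = m * Eb                            # total symbol energy grows with m
    ber_bpsk = norm.sf(np.sqrt(Eb))       # per-bit BPSK error: constant in m
    ser_fsk = (2**m - 1) * np.exp(-E / 2) # N-FSK symbol-error union bound from above
    print(f"m={m:2d}  BPSK BER={ber_bpsk:.2e}  FSK SER bound={ser_fsk:.2e}")

(The 2ᵐ prefactor makes FSK lose for small m, but the exponent wins from m ≈ 3 on at this SNR.)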
> In this sense, using different DSSS codes actually achieves some coding gain that justifies bandwidth expansion, whereas the alternative approach of using a single PN sequence modulated in BPSK is, as you say, a repetition code and doesn't provide any coding gain.
Yep, in the end this is finding a constellation in a higher-dimensional space that has 
better distance than m independent copies of the 1-d embedding. It's probably sphere 
packing all the way down.
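To put numbers on that: at total energy E, the m-fold BPSK constellation has minimum 
distance 2·√(E/m) (two codewords differing in one coordinate), shrinking with m, while the 
N orthogonal FSK vectors are pairwise ‖xᵢ - xⱼ‖ = √(E + E) = √(2E) apart, independent of m.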
Now, of course this utterly disagrees with my gut feeling. We're doing a change of basis, 
keeping the bit energy constant. We invert the process under white noise and are supposed 
to get a gain?!
But that's the difference between "classical" DSSS (and the OFDM up there) and the 
observation that N-FSK might be a better embedding in N-dimensional complex space than m 
BPSK symbols would be: we're not necessarily "going back" before making the decision. Some 
representations are smarter to make decisions in than others.
> Surely this idea is very far from getting us a capacity-achieving code (any simple FEC beats uncoded m-FSK even if m is pretty large). But on the other hand, the typical bandwidth expansions of a DSSS system are on the order of 100x or more. I would be curious to know if there's an r=1/100 code that is good and can be implemented in practice.
There's a former colleague of mine who did [1] error-correcting codes for quantum key 
distribution reconciliation, if you're interested in low-rate codes.
The idea is that you like LDPC codes very much, and wonder how you can construct them 
for low rates so that you don't go directly insane while encoding and, even more 
importantly, while decoding.
You then decide to doodle a small/moderate LDPC-typical Tanner graph, where you draw a 
couple of edges with more than one line. Call that a protograph. It has something that 
resembles an adjacency matrix (the protomatrix) B, with non-negative integer entries 
describing how many connections there are between each variable and check node.
You pick an integer S ≥ ||B||_max, call it the lifting factor, and place S copies of the 
protograph next to each other (still with multi-edges!). You then randomly permute the 
ends of the individual single edges within the same "edge group" (i.e., edges that were 
copies of the same protoedge, but not in the same protograph copy), until you get a 
single-edged graph, which is your LDPC Tanner graph.
That way, you can construct a long low-rate code from a smaller Tanner-like graph, 
drastically reducing the number of short cycles.
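(If it helps, a minimal numpy sketch of that lifting step as I understand it; the toy 
protomatrix B and S = 8 are made up, and I keep the B[i,j] permutation matrices 
edge-disjoint by using cyclic shifts of one random permutation per edge group:)

import numpy as np
rng = np.random.default_rng(42)

def lift(B, S):
    """Lift protomatrix B into a binary parity-check matrix H: entry B[i,j]
    becomes the sum of B[i,j] edge-disjoint random SxS permutation matrices."""
    Mb, Nb = B.shape
    assert S >= B.max()              # lifting factor >= max edge multiplicity
    H = np.zeros((Mb * S, Nb * S), dtype=np.uint8)
    r = np.arange(S)
    for i in range(Mb):
        for j in range(Nb):
            p = rng.permutation(S)   # one random permutation per edge group
            for k in range(B[i, j]): # cyclic shifts of it never collide
                H[i * S + r, j * S + p[(r + k) % S]] = 1
    return H

B = np.array([[2, 1, 1, 0],          # toy protograph: 3 checks, 4 variables,
              [1, 1, 0, 1],          # so design rate 1 - 3/4 = 1/4
              [0, 1, 2, 1]])
H = lift(B, S=8)
print(H.shape, H.sum(axis=1))        # (24, 32); row weights match protomatrix row sums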
Things get a bit unwieldy even at the "protograph" abstraction level for more extreme 
rates, so Kadir came up with an abstraction level on top of that (where you stop caring 
about which node of a given multi-edge degree ("type") is which; you just keep track of 
how many nodes of which types are connected to which during the design).
For decoding, the standard sum-product algorithm should do, so: "reasonably fast".
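(Not aff3ct's code, just a sketch of the textbook min-sum simplification of sum-product, 
flooding schedule, LLR sign convention "positive means bit 0", operating on e.g. the H 
from the lifting sketch above; it shows why this is cheap: the check-node update is only 
signs and minima, no transcendentals:)

import numpy as np

def minsum_decode(H, llr, iters=50):
    # H: binary parity-check matrix; llr: channel log-likelihood ratios (numpy array)
    m, n = H.shape
    msg = {(i, j): 0.0 for i in range(m) for j in np.flatnonzero(H[i])}
    for _ in range(iters):
        total = llr.astype(float)              # variable-node totals
        for (i, j), v in msg.items():
            total[j] += v
        hard = (total < 0).astype(int)
        if not ((H @ hard) % 2).any():         # all parity checks satisfied
            break
        for i in range(m):                     # check-node (min-sum) update
            js = np.flatnonzero(H[i])
            vc = np.array([total[j] - msg[(i, j)] for j in js])  # extrinsic inputs
            sgn, mag = np.sign(vc), np.abs(vc)
            for k, j in enumerate(js):
                others = np.delete(np.arange(len(js)), k)
                msg[(i, j)] = sgn[others].prod() * mag[others].min()
    return hard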

> Probably there are diminishing returns when r -> 0, and decoding cost blows up. Hence the usual idea of concatenating a DSSS-repetition-code and an outer usual FEC (maybe not an r=4/5 Turbo, but r=1/6).
Well, the keywords "concatenating", "low-rate" and "outer FEC" combine in my head to 
suggest one looks into three things:
1. Turbo codes with something like a mother code rate of 1/3 (I think LTE has such a 
thing?) and a lot of repetition. I think such things were used in deep-space missions 
(?), and LTE NB-IoT has that as well (?). Not sure. Turbo codes seem to be mostly 
superseded in practice by other codes compatible with iterative decoding.
2. Polar codes (which inherently allow low-rate construction just by freezing enough bits) 
combined with CRCs for list decoding. I bet this is the regime where multi-kernel Polar 
codes win over classical ones.
3. Turbo-Like Codes from the "cheap" Repeat-Accumulate class [2, Sec. 5]: repeat a 
block of N bits q times, apply a length-qN pseudorandom interleaver, and then just take 
the running 𝔽₂ sum and transmit the resulting stream (see the sketch after this list). 
Decoding can be done in a message-passing way as well; aff3ct does that, and looking at 
the code, it should be rather fast (i.e., megacodebits/s).
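To make the "cheap" in point 3 concrete, a minimal encoder sketch (block length N = 16, 
q = 4 and the interleaver are arbitrary picks of mine; the decoder needs the same 
permutation):

import numpy as np
rng = np.random.default_rng(1)

def ra_encode(bits, q, perm):
    repeated = np.tile(bits, q)        # repeat the block of N bits q times -> rate 1/q
    interleaved = repeated[perm]       # length-qN pseudorandom interleaver
    return np.cumsum(interleaved) % 2  # running F2 sum (the accumulator)

N, q = 16, 4
perm = rng.permutation(N * q)
info = rng.integers(0, 2, size=N)
code = ra_encode(info, q, perm)        # 64 coded bits for 16 info bits -> r = 1/4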
Cheers,
Marcus

[1] K. Gümüş and L. Schmalen, "Low Rate Protograph-Based LDPC Codes for Continuous Variable Quantum Key Distribution," 2021 17th International Symposium on Wireless Communication Systems (ISWCS), Berlin, Germany, 2021, pp. 1-6, doi: 10.1109/ISWCS49558.2021.9562244. https://arxiv.org/abs/2107.06242 https://www.youtube.com/watch?v=1IddgXzkvVw

[2] D. Divsalar, H. Jin, and R. J. McEliece, "Coding theorems for 'turbo-like' codes," Proc. 36th Annual Allerton Conference on Communication, Control, and Computing, University of Illinois, 1998. https://ieeexplore.ieee.org/document/866412
