This is the talk page for discussing improvements to the 64b/66b encoding article. This is not a forum for general discussion of the article's subject.
This article is rated Start-class on Wikipedia's content assessment scale.
As I understand it, the statement "This means that there are just as many 1s as 0s in a string of two symbols, and that there are not too many 1s or 0s in a row" is not correct. That property holds for 8b/10b encoding, where the rigid code table ensures DC balance over two symbols. My understanding is that 64b/66b encoding achieves DC balance only through the averaging effect of the scrambling; it is not guaranteed over two symbols. —Preceding unsigned comment added by 128.222.37.58 ( talk) 16:25, 24 August 2010 (UTC)
The initial state of the scrambler is known, and the transformation function of the scrambler is also known. So it should be possible to send a chosen payload that drives the scrambler state such that it outputs only 0x00000000 or 0xFFFFFFFF, badly violating DC balance. Are there any known attacks based on this? What would happen on a 10GbE link in this case? -- RokerHRO ( talk) 09:53, 16 March 2015 (UTC)
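The scenario described above can be sketched in a few lines. This is an illustrative model only (the function names and the bit-list state representation are mine, not from any standard text): the 64b/66b scrambler of IEEE 802.3 Clause 49 is self-synchronizing with polynomial x^58 + x^39 + 1, so a sender who can track the register contents can choose payload bits that cancel the feedback and force an all-zero scrambled output.

```python
# Illustrative sketch (assumptions: Clause 49 polynomial x^58 + x^39 + 1,
# state[0] = most recent scrambled output bit). Function names are mine.

def scramble(bits, state):
    """Self-synchronizing scrambler: each output bit feeds the register."""
    out = []
    for b in bits:
        s = b ^ state[38] ^ state[57]      # taps at x^39 and x^58
        out.append(s)
        state = [s] + state[:-1]           # output re-enters the register
    return out, state

def hostile_payload(n, state):
    """Choose n payload bits so every scrambled output bit is 0."""
    payload = []
    for _ in range(n):
        payload.append(state[38] ^ state[57])  # cancel the feedback term
        state = [0] + state[:-1]               # scrambler now emits 0
    return payload

state = [1] * 58                           # any state known to the sender
payload = hostile_payload(64, state.copy())
out, _ = scramble(payload, state.copy())
print(all(bit == 0 for bit in out))        # True: a 64-bit run of zeros
```

The same construction with the feedback term inverted would force all ones, so the scrambler's statistical DC balance indeed offers no hard guarantee against an adversarial payload.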
"128b/130b (...) uses a different scrambling polynomial: x^23 + x^21 + x^16 + x^8 + x^5 + x^2 + 1" – different to what? No other LFSR polynomial is mentioned in the article. Maybe it should be specified which polynomial is used, or at least its length. — Cousteau ( talk) 05:39, 12 May 2015 (UTC)
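For context while the article leaves this unstated: 64b/66b itself (IEEE 802.3 Clause 49) uses the self-synchronizing polynomial x^58 + x^39 + 1, so "different" presumably means different from that. A generic Fibonacci-style LFSR sketch (the helper name and register layout are my own; real PCIe hardware uses a specific register arrangement that may differ) shows how the 128b/130b tap exponents translate into a bit stream:

```python
# Generic LFSR sketch, not a reference implementation of the PCIe 3.0
# scrambler. Tap positions are taken directly from the polynomial
# exponents of x^23 + x^21 + x^16 + x^8 + x^5 + x^2 + 1.

def lfsr_stream(taps, degree, seed, n):
    """Emit n bits from a Fibonacci LFSR with the given tap exponents."""
    state = seed                            # any nonzero seed works
    out = []
    for _ in range(n):
        bit = 0
        for t in taps:
            bit ^= (state >> (t - 1)) & 1   # XOR the tapped stages
        out.append(state & 1)               # emit the low bit
        state = (state >> 1) | (bit << (degree - 1))
    return out

pcie3 = lfsr_stream(taps=[23, 21, 16, 8, 5, 2], degree=23, seed=0x1FFFFF, n=16)
print(pcie3)
```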
The current text is:
It's the last sentence that I'm concerned about: the statement on the odds of producing a 65-bit sequence of all '0's or all '1's. Within the 66b/64b scheme, the first two preamble bits must be '01' or '10'. So the 65-bit run-length problem being discussed can only occur if the following 64 bits all match the second bit of the preamble. For each preamble pattern, there is only one 64-bit pattern out of the 2^64 possibilities where all bits are the same value.
So the odds are 1 in 2^64, meaning the last sentence should read, "At 10 gigabits per second, the expected event rate of a 66-bit block with a 65-bit run length, assuming random data, is 2^64/(10^10) seconds, or about once every 58.5 years."
Any comments please, before I look to making this change. ToaneeM ( talk) 07:42, 10 January 2018 (UTC)
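Evaluating the formula proposed above (2^64/(10^10) seconds, taking the 10^10 bit rate as the per-second opportunity count, as the comment does) gives the expected interval directly:

```python
# Arithmetic check of the proposed sentence: one 2^-64 event per
# opportunity, 10^10 opportunities per second (the comment's assumption).
seconds = 2**64 / 1e10
years = seconds / (365.25 * 24 * 3600)
print(round(years, 1))   # 58.5
```

Note the result is sensitive to whether one counts bits or 66-bit blocks per second; dividing by the block rate instead of the bit rate would lengthen the interval by a factor of 66.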
The opening section says:
″The overhead can be reduced further by doubling the payload size to produce the 128b/130b encoding used by PCIe 3.0 and 128b/132b encoding used by USB 3.1 and Display Port 2.0.″
But the overhead is not reduced with 128b/132b (2/64 = 4/128, both 3.125%).
94.137.103.165 ( talk) 13:22, 30 September 2020 (UTC)
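The ratios bear this out: doubling the payload only reduces overhead when the number of coding bits stays the same, as in 128b/130b; 128b/132b doubles the coding bits as well.

```python
# 64b/66b adds 2 coding bits per 64 payload bits; 128b/132b adds 4 per
# 128, which is the same ratio. Only 128b/130b (2 per 128) halves it.
print(2/64, 4/128, 2/64 == 4/128)   # 0.03125 0.03125 True
print(2/128)                        # 0.015625
```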
The overhead discussion near the beginning of the article is incorrect. Here is the original text:
The protocol overhead of a coding scheme is the ratio of the number of raw payload bits to the number of raw payload bits plus the number of added coding bits. The overhead of 64b/66b encoding is 2 coding bits for every 64 payload bits or 3.125%. This is a considerable improvement on the 25% overhead of the previously-used 8b/10b encoding scheme, which added 2 coding bits to every 8 payload bits.
That paragraph should read like this:
The protocol overhead of a coding scheme is the ratio of the number of added coding bits to the number of raw payload bits. The overhead of 64b/66b encoding is 2 coding bits for every 64 payload bits or 3.125%. This is a considerable improvement on the 25% overhead of the previously-used 8b/10b encoding scheme, which added 2 coding bits to every 8 payload bits.
134.238.168.56 ( talk) 17:16, 2 April 2023 (UTC)
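Plugging 64b/66b's numbers into both quoted definitions makes the inconsistency in the original text concrete: only "added coding bits over payload bits" yields the 3.125% figure stated in the very same paragraph, while "payload over payload plus coding" is the coding efficiency, not the overhead.

```python
# Comparing the two quoted definitions for 64b/66b (64 payload bits,
# 2 coding bits). The variable names are mine.
payload, coding = 64, 2
original_def = payload / (payload + coding)   # ratio from the old text
corrected    = coding / payload               # ratio from the proposed fix
print(f"{original_def:.4f}")   # 0.9697 -- efficiency, not 3.125%
print(f"{corrected:.4%}")      # 3.1250% -- matches the stated overhead
```

The same corrected definition reproduces the 25% figure for 8b/10b (2 coding bits per 8 payload bits).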