Can error correction and detection be done without adding extra bits?
I have gone through error detection and correction techniques such as Hamming codes and BCH codes; both require extra parity bits for detection and correction. When sending data, we always seem to introduce error correction by adding parity bits, and we use those parity bits when checking for errors.
Can error detection and correction be done without adding any extra bits?
cryptographic-hardware coding-theory
– MSalters (Jul 18 at 14:03): Are you familiar with the pigeonhole principle? Because it's trivial to reduce this question to that.
– ilkkachu (Jul 18 at 17:05): This doesn't seem to be at all about encryption.
– marcelm (Jul 19 at 1:11): @ilkkachu It's about coding theory, which can be loosely considered to be in the field of cryptography. I would say questions on that topic can certainly be on-topic here. I'm on the fence whether this particular question is, though...
– ilkkachu (Jul 19 at 6:59): @marcelm, no, look at the dictionary definition from e.g. Wiktionary: "encryption: The process of obscuring information to make it unreadable without special knowledge, key files, and/or passwords". Or the short snippet under the "encryption" tag on the site.
– marcelm (Jul 19 at 9:11): @ilkkachu Cryptology is far wider than just encryption. Quoting the Wikipedia article on coding theory: "Codes are used for data compression, cryptography, ...", and the crypto.SE help center: "... and subsidiary topics that generally make up cryptology ...". I maintain that questions on coding theory are on-topic here. In fact, crypto.SE has a coding-theory tag.
asked Jul 18 at 5:31 by Krishna Balupala; edited Jul 19 at 19:28 by Peter Mortensen
4 Answers
In general, no. Say you have a data vector $x$ of $k$ bits and one bit is flipped by an error. There is no way of detecting, let alone correcting, this unless the errored data vector $x'$ is not itself another valid data vector.
If the errored vector $x'$ is not a valid data vector and you can do detection, then all $k$ bits cannot be used as arbitrary data bits, so your data rate is less than $k$ bits, implying you are using parity in some sense.
More simply: if there are no parity bits, all $2^k$ data vectors are valid, and thus every error results in another valid data vector, making all errors undetectable.
Since correction is harder than detection, the same argument rules out correction as well. You can't correct if you can't detect.
– kodlu, answered Jul 18 at 5:43
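This argument can be illustrated with a short Python sketch (not part of the original answer; the 3-bit message size is an arbitrary choice). With no redundancy every pattern is a valid codeword, so every single-bit flip lands on another valid codeword; adding one parity bit makes every single-bit flip leave the valid set.

```python
# Toy demonstration of the argument above, for k = 3 message bits.
k = 3
codewords_no_parity = set(range(2 ** k))  # no redundancy: all 2^k patterns valid

def flip(word, bit):
    """Flip one bit of an integer-encoded word."""
    return word ^ (1 << bit)

# Every single-bit error yields another valid codeword: undetectable.
undetectable = all(
    flip(w, b) in codewords_no_parity
    for w in codewords_no_parity for b in range(k)
)
print(undetectable)  # True

# Add one even-parity bit: valid codewords are exactly the even-weight words.
codewords_parity = {w for w in range(2 ** (k + 1)) if bin(w).count("1") % 2 == 0}

# Now every single-bit flip changes the weight's parity, leaving the valid set.
detectable = all(
    flip(w, b) not in codewords_parity
    for w in codewords_parity for b in range(k + 1)
)
print(detectable)  # True
```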
– Cort Ammon (Jul 18 at 19:00): Not to challenge your statements, which are correct, but now you have me pondering whether you can correct something you can't detect in quantum cryptography. I'm thinking there may be situations where you have enough information to know how to correct a signal, but you disrupt the communication if you try to actually detect it.
– Nic Hartley (Jul 18 at 21:57): @CortAmmon It's the same idea. You can test (...somehow, don't ask me the specifics) whether a given qubit has been tested before, and know whether it was looked at -- but in doing so, you're 'sacrificing' that bit of information. You also can't check what it used to be without, again, sending more information somehow. That might be in the same qubit, of course, but then you're still using some transmitted information. This is a fundamental property of information in general, not of classical memory.
No, because of the Pigeonhole Principle.
Let's say you want to be able to send arbitrary $k$-bit messages. There are $2^k$ possible bit-patterns, and $2^k$ possible intended messages.
Now let's say you want to add error correction. This means that, on top of the $2^k$ correct messages that you want to be able to send, you also want to be able to send some number of incorrect messages (which will get detected and fixed at the other end). Each incorrect message must be distinguishable from every valid message, or else there would be no way to correct it.
But there are only $2^k$ bit-patterns available; if you want to be able to distinguish $2^k$ correct messages plus some additional number of incorrect messages, you'll need more than $k$ bits.
(Adding one bit for parity, for example, gives $2^{k+1}$ possible patterns, enough for $2^k$ correct messages and $2^k$ incorrect messages.)
– Draconis, answered Jul 19 at 0:59
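The counting can be made concrete with a small Python sketch (my illustration, not part of the answer). To correct every single-bit error, each of the $2^k$ messages needs a disjoint "cloud" of $1+n$ patterns (the codeword plus its $n$ corruptions), so an $n$-bit code must satisfy $2^k(1+n) \le 2^n$, the Hamming bound for one error:

```python
# Pigeonhole counting for k = 4 message bits.
k = 4
messages = 2 ** k            # 16 distinct intended messages

def hamming_bound_ok(n):
    """True if n-bit codewords can hold 2^k disjoint single-error clouds."""
    return messages * (1 + n) <= 2 ** n

print(hamming_bound_ok(k))   # False: k bits alone can never suffice

n = k
while not hamming_bound_ok(n):   # find the smallest workable code length
    n += 1
print(n)                     # 7 -> the length of the classic Hamming(7,4) code
```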
For the general case, kodlu's answer explains why it is not possible: to detect or correct errors you need redundancy. But many kinds of information already include redundancy:
- Some file formats have a fixed header
- Some file formats/protocols only use part of the available symbol space
- Some information only allows certain content in certain contexts
- ...
A typical way to detect transmission errors in a protocol that only allows 7-bit ASCII codes was (and is) to use the 8th bit as a parity bit. You remove redundant information and substitute it with information that allows error detection or even error correction.
Or, if you send a request of type a and expect a response of type A but get a response of type B, you know that something went wrong.
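A minimal Python sketch of that parity trick (illustrative only; real serial links do this in hardware). Since 7-bit ASCII never uses the top bit of a byte, it can carry even parity for free:

```python
# The 8th bit of a 7-bit ASCII byte is redundant, so use it for even parity.

def encode(byte7: int) -> int:
    """Attach even parity in the top bit of a 7-bit ASCII code."""
    assert 0 <= byte7 < 128
    parity = bin(byte7).count("1") % 2
    return byte7 | (parity << 7)

def check(byte8: int) -> bool:
    """True if the received byte has even overall weight (no single-bit error)."""
    return bin(byte8).count("1") % 2 == 0

sent = encode(ord("A"))   # 'A' = 0x41 has even weight, so the parity bit stays 0
print(check(sent))        # True: arrives intact
print(check(sent ^ 0x04)) # False: a single flipped bit is detected
```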
– kodlu (Jul 19 at 4:38): Nice points. Yes, the already available symbol places are not data and may be used for parity checks. Your last paragraph essentially indicates a higher-layer protocol which uses information outside the defined channel.
– Krazy Glew (Jul 26 at 5:07): Thank goodness. "In general", for k bits where all 2^k encodings are USED and EQUIPROBABLE you need extra bits. But often many encodings are not used - e.g. in many computer instruction sets and datatypes. Even if all are used, if they are not equiprobable you may compress using a variable-length code, and then add error-checking bits to that. So the average number of bits may be less than or equal to k, although some low-probability compressed encodings + error bits may be larger. This really amounts to distinguishing the k physical bits from the number of bits of information carried by them.
It is not possible to implement error correction without adding parity bits. However, in some cases it may be possible to 'steal' some bits from another part of the protocol. This is what is done with the FEC implementations for Ethernet.
Let's take 10GBASE-R Ethernet as an example. 10GBASE-R Ethernet without FEC is transferred using the 64b/66b line code: 64-bit data blocks are scrambled with an LFSR, then a 2-bit sync header is attached to each block, and the data is sent serially at 10.3125 Gbps. The 10GBASE-KR FEC is designed to add forward error correction while maintaining the same data rate of 10 Gbps before encoding and 10.3125 Gbps on the wire. This is done by taking 32 blocks of 64b/66b-encoded data, stripping off one of the sync bits from each block to free up 32 bits, adding the 32 parity bits of a shortened cyclic (2112, 2080) code, scrambling the result, and then sending that on the wire at 10.3125 Gbps.
There are trade-offs to this approach, though: since the sync headers have effectively been removed, the block lock time is vastly increased (the PHY needs to check up to 2112 blocks of 2112 bits each instead of 66 blocks of 66 bits each), extending the time required to bring a link up by several orders of magnitude.
The newer Ethernet Reed-Solomon FEC does something similar, but with an even larger block size. 64b/66b data is transcoded four blocks at a time into 256b/257b; 20 257-bit blocks are broken up into 514 10-bit symbols; those are encoded with RS(528, 514) to generate 14 10-bit parity symbols, for 528 total symbols, which are then packed up and sent as a 5280-bit block. The original data, encoded with 64b/66b, would also take up 66*4*20 = 5280 bits.
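The bit accounting in this answer can be double-checked with a few lines of Python (just arithmetic, not an FEC implementation):

```python
# 10GBASE-KR FEC: 32 blocks of 64b/66b, one sync bit stolen per block.
blocks = 32
payload = blocks * 66            # 2112 bits of 64b/66b-encoded data
freed = blocks * 1               # one sync bit stripped per block
parity = 32                      # parity bits of the shortened cyclic code
print(payload - freed + parity)  # 2112: same size on the wire as before
print((payload - freed, payload))  # (2080, 2112) -> the (2112, 2080) code

# RS-FEC: 4 x 66b blocks transcoded to one 257-bit block, 20 blocks per frame.
rs_payload_bits = 20 * 257       # 5140 bits = 514 ten-bit symbols
rs_total_bits = (514 + 14) * 10  # RS(528, 514) adds 14 ten-bit parity symbols
print(rs_total_bits)             # 5280
print(4 * 20 * 66)               # 5280: matches the original 64b/66b size
```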
4 Answers
4
active
oldest
votes
4 Answers
4
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
In general, no. Let us say you have a data vector $x$ of $k$ bits and one bit is flipped by an error. There is no way of detecting, let alone correcting this, unless the errored data vector $x'$ is not another valid data vector.
If the errored vector $x'$ is not a valid data vector and you can do detection, then all $k$ bits cannot be used as arbitrary data bits, thus your data rate is less than $k$ bits, implying you are using parity in some sense.
More simply, if there are no parity bits, all $2^k$ data vectors are valid, and thus all errors result in another valid data vector, making all errors undetectable.
Since correction is harder than detection, the same argument rules out correction as well.You can't correct if you can't detect.
$endgroup$
1
$begingroup$
Not to challenge your statements, which are correct, but now you have me pondering whether you can correct something you can't detect in quantum cryptography. I'm thinking there may be situations where you have enough information to know how to correct a signal, but you disrupt the communication if you try to actually detect it.
$endgroup$
– Cort Ammon
Jul 18 at 19:00
$begingroup$
@CortAmmon It's the same idea. You can test (...somehow, don't ask me the specifics) whether a given qubit has been tested before, and know whether it was looked at -- but in doing so, you're 'sacrificing' that bit of information. You also can't check what it used to be without, again, sending more information somehow. That might be in the same qubit, of course, but then you're still using some transmitted information. This is a fundamental property of information in general, not of classical memory.
$endgroup$
– Nic Hartley
Jul 18 at 21:57
add a comment |
$begingroup$
In general, no. Let us say you have a data vector $x$ of $k$ bits and one bit is flipped by an error. There is no way of detecting, let alone correcting this, unless the errored data vector $x'$ is not another valid data vector.
If the errored vector $x'$ is not a valid data vector and you can do detection, then all $k$ bits cannot be used as arbitrary data bits, thus your data rate is less than $k$ bits, implying you are using parity in some sense.
More simply, if there are no parity bits, all $2^k$ data vectors are valid, and thus all errors result in another valid data vector, making all errors undetectable.
Since correction is harder than detection, the same argument rules out correction as well.You can't correct if you can't detect.
$endgroup$
1
$begingroup$
Not to challenge your statements, which are correct, but now you have me pondering whether you can correct something you can't detect in quantum cryptography. I'm thinking there may be situations where you have enough information to know how to correct a signal, but you disrupt the communication if you try to actually detect it.
$endgroup$
– Cort Ammon
Jul 18 at 19:00
$begingroup$
@CortAmmon It's the same idea. You can test (...somehow, don't ask me the specifics) whether a given qubit has been tested before, and know whether it was looked at -- but in doing so, you're 'sacrificing' that bit of information. You also can't check what it used to be without, again, sending more information somehow. That might be in the same qubit, of course, but then you're still using some transmitted information. This is a fundamental property of information in general, not of classical memory.
$endgroup$
– Nic Hartley
Jul 18 at 21:57
add a comment |
$begingroup$
In general, no. Let us say you have a data vector $x$ of $k$ bits and one bit is flipped by an error. There is no way of detecting, let alone correcting this, unless the errored data vector $x'$ is not another valid data vector.
If the errored vector $x'$ is not a valid data vector and you can do detection, then all $k$ bits cannot be used as arbitrary data bits, thus your data rate is less than $k$ bits, implying you are using parity in some sense.
More simply, if there are no parity bits, all $2^k$ data vectors are valid, and thus all errors result in another valid data vector, making all errors undetectable.
Since correction is harder than detection, the same argument rules out correction as well.You can't correct if you can't detect.
$endgroup$
In general, no. Let us say you have a data vector $x$ of $k$ bits and one bit is flipped by an error. There is no way of detecting, let alone correcting this, unless the errored data vector $x'$ is not another valid data vector.
If the errored vector $x'$ is not a valid data vector and you can do detection, then all $k$ bits cannot be used as arbitrary data bits, thus your data rate is less than $k$ bits, implying you are using parity in some sense.
More simply, if there are no parity bits, all $2^k$ data vectors are valid, and thus all errors result in another valid data vector, making all errors undetectable.
Since correction is harder than detection, the same argument rules out correction as well.You can't correct if you can't detect.
answered Jul 18 at 5:43
kodlukodlu
10.3k2 gold badges14 silver badges33 bronze badges
10.3k2 gold badges14 silver badges33 bronze badges
1
$begingroup$
Not to challenge your statements, which are correct, but now you have me pondering whether you can correct something you can't detect in quantum cryptography. I'm thinking there may be situations where you have enough information to know how to correct a signal, but you disrupt the communication if you try to actually detect it.
$endgroup$
– Cort Ammon
Jul 18 at 19:00
$begingroup$
@CortAmmon It's the same idea. You can test (...somehow, don't ask me the specifics) whether a given qubit has been tested before, and know whether it was looked at -- but in doing so, you're 'sacrificing' that bit of information. You also can't check what it used to be without, again, sending more information somehow. That might be in the same qubit, of course, but then you're still using some transmitted information. This is a fundamental property of information in general, not of classical memory.
$endgroup$
– Nic Hartley
Jul 18 at 21:57
add a comment |
1
$begingroup$
Not to challenge your statements, which are correct, but now you have me pondering whether you can correct something you can't detect in quantum cryptography. I'm thinking there may be situations where you have enough information to know how to correct a signal, but you disrupt the communication if you try to actually detect it.
$endgroup$
– Cort Ammon
Jul 18 at 19:00
$begingroup$
@CortAmmon It's the same idea. You can test (...somehow, don't ask me the specifics) whether a given qubit has been tested before, and know whether it was looked at -- but in doing so, you're 'sacrificing' that bit of information. You also can't check what it used to be without, again, sending more information somehow. That might be in the same qubit, of course, but then you're still using some transmitted information. This is a fundamental property of information in general, not of classical memory.
$endgroup$
– Nic Hartley
Jul 18 at 21:57
1
1
$begingroup$
Not to challenge your statements, which are correct, but now you have me pondering whether you can correct something you can't detect in quantum cryptography. I'm thinking there may be situations where you have enough information to know how to correct a signal, but you disrupt the communication if you try to actually detect it.
$endgroup$
– Cort Ammon
Jul 18 at 19:00
$begingroup$
Not to challenge your statements, which are correct, but now you have me pondering whether you can correct something you can't detect in quantum cryptography. I'm thinking there may be situations where you have enough information to know how to correct a signal, but you disrupt the communication if you try to actually detect it.
$endgroup$
– Cort Ammon
Jul 18 at 19:00
$begingroup$
@CortAmmon It's the same idea. You can test (...somehow, don't ask me the specifics) whether a given qubit has been tested before, and know whether it was looked at -- but in doing so, you're 'sacrificing' that bit of information. You also can't check what it used to be without, again, sending more information somehow. That might be in the same qubit, of course, but then you're still using some transmitted information. This is a fundamental property of information in general, not of classical memory.
$endgroup$
– Nic Hartley
Jul 18 at 21:57
$begingroup$
@CortAmmon It's the same idea. You can test (...somehow, don't ask me the specifics) whether a given qubit has been tested before, and know whether it was looked at -- but in doing so, you're 'sacrificing' that bit of information. You also can't check what it used to be without, again, sending more information somehow. That might be in the same qubit, of course, but then you're still using some transmitted information. This is a fundamental property of information in general, not of classical memory.
$endgroup$
– Nic Hartley
Jul 18 at 21:57
add a comment |
$begingroup$
No, because of the Pigeonhole Principle.
Let's say you want to be able to send arbitrary $k$-bit messages. There are $2^k$ possible bit-patterns, and $2^k$ possible intended messages.
Now let's say you want to add error correction. This means that, on top of the $2^k$ correct messages that you want to be able to send, you also want to be able to send some number of incorrect messages (that will get detected and fixed at the other end). Each incorrect message must be distinguishable from every valid message, or else there'd be no way to correct it.
But there are only $2^k$ bit-patterns available; if you want to be able to distinguish $2^k$ correct messages, plus some additional number of incorrect messages, you'll need more than $k$ bits.
(Adding one bit for parity, for example, gives $2^k+1$ possible patterns, enough for $2^k$ correct messages and $2^k$ incorrect messages.)
$endgroup$
add a comment |
$begingroup$
No, because of the Pigeonhole Principle.
Let's say you want to be able to send arbitrary $k$-bit messages. There are $2^k$ possible bit-patterns, and $2^k$ possible intended messages.
Now let's say you want to add error correction. This means that, on top of the $2^k$ correct messages that you want to be able to send, you also want to be able to send some number of incorrect messages (that will get detected and fixed at the other end). Each incorrect message must be distinguishable from every valid message, or else there'd be no way to correct it.
But there are only $2^k$ bit-patterns available; if you want to be able to distinguish $2^k$ correct messages, plus some additional number of incorrect messages, you'll need more than $k$ bits.
(Adding one bit for parity, for example, gives $2^k+1$ possible patterns, enough for $2^k$ correct messages and $2^k$ incorrect messages.)
$endgroup$
add a comment |
$begingroup$
No, because of the Pigeonhole Principle.
Let's say you want to be able to send arbitrary $k$-bit messages. There are $2^k$ possible bit-patterns, and $2^k$ possible intended messages.
Now let's say you want to add error correction. This means that, on top of the $2^k$ correct messages that you want to be able to send, you also want to be able to send some number of incorrect messages (that will get detected and fixed at the other end). Each incorrect message must be distinguishable from every valid message, or else there'd be no way to correct it.
But there are only $2^k$ bit-patterns available; if you want to be able to distinguish $2^k$ correct messages, plus some additional number of incorrect messages, you'll need more than $k$ bits.
(Adding one bit for parity, for example, gives $2^k+1$ possible patterns, enough for $2^k$ correct messages and $2^k$ incorrect messages.)
$endgroup$
No, because of the Pigeonhole Principle.
Let's say you want to be able to send arbitrary $k$-bit messages. There are $2^k$ possible bit-patterns, and $2^k$ possible intended messages.
Now let's say you want to add error correction. This means that, on top of the $2^k$ correct messages that you want to be able to send, you also want to be able to send some number of incorrect messages (that will get detected and fixed at the other end). Each incorrect message must be distinguishable from every valid message, or else there'd be no way to correct it.
But there are only $2^k$ bit-patterns available; if you want to be able to distinguish $2^k$ correct messages, plus some additional number of incorrect messages, you'll need more than $k$ bits.
(Adding one bit for parity, for example, gives $2^k+1$ possible patterns, enough for $2^k$ correct messages and $2^k$ incorrect messages.)
answered Jul 19 at 0:59
DraconisDraconis
1912 bronze badges
1912 bronze badges
add a comment |
add a comment |
$begingroup$
For the general case kodlus answer explained it is not possible. For detecting or correcting errors you need to have redundancy. But many kind of information have included redundancy:
- Some file formats have a fixed header
- Some file formats/protocols only use part of the available symbol space
- Some information only allow certain information in certain context
- ...
A typical way to detect errors in transmission in protocol that only allow 7-bit-ASCII codes was/is to use the 8th bit as parity bit. So you remove redundant information and substitute it with information that allow error-detection or even error-correction.
Or if you send request of type a and expect response of type A but get a response of type B, you know that something went wrong.
$endgroup$
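A minimal sketch of the 7-bit-ASCII trick described above (an editorial illustration, not from the answer): since ASCII codes fit in 7 bits, the otherwise redundant 8th bit of each byte can carry even parity at no extra cost in message length.

```python
def encode_ascii7(text):
    # Pack each 7-bit ASCII code into a byte whose top bit is an
    # even-parity bit over the low 7 bits.
    out = bytearray()
    for ch in text:
        code = ord(ch)
        assert code < 0x80, "7-bit ASCII only"
        parity = bin(code).count("1") % 2
        out.append(code | (parity << 7))
    return bytes(out)

def decode_ascii7(data):
    # Strip the parity bit, rejecting any byte with odd overall parity.
    chars = []
    for byte in data:
        if bin(byte).count("1") % 2 != 0:
            raise ValueError("parity error in byte 0x%02x" % byte)
        chars.append(chr(byte & 0x7F))
    return "".join(chars)

wire = encode_ascii7("OK")
assert decode_ascii7(wire) == "OK"
```

This only detects an odd number of flipped bits per byte; it cannot correct anything, matching the general argument that correction needs more redundancy than detection.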
answered Jul 18 at 20:31
H. Idden
$begingroup$
Nice points. Yes, symbol positions that are already available but carry no data may be used for parity checks. Your last paragraph essentially describes a higher-layer protocol that uses information outside the defined channel.
$endgroup$
– kodlu
Jul 19 at 4:38
$begingroup$
Thank goodness. “In general”, for $k$ bits where all $2^k$ encodings are used and equiprobable, you need extra bits. But often many encodings are not used, e.g. in many computer instruction sets and datatypes. Even if all are used but not equiprobable, you may compress with a variable-length code and then add error-checking bits to the result. The average number of bits may then be less than or equal to $k$, although some low-probability compressed encodings plus error bits may be longer. This really amounts to distinguishing the $k$ physical bits from the number of bits of information they carry.
$endgroup$
– Krazy Glew
Jul 26 at 5:07
$begingroup$
It is not possible to implement error correction without adding parity bits. However, in some cases it may be possible to 'steal' some bits from some other part of the protocol. This is what is done with the FEC implementations for Ethernet.
Let's take 10GBASE-R Ethernet as an example. Without FEC, 10GBASE-R is transferred using the 64b/66b line code: 64-bit data blocks are scrambled with an LFSR, a 2-bit sync header is attached to each block, and the data is sent serially at 10.3125 Gbps. The 10GBASE-KR FEC adds forward error correction while maintaining the same data rate of 10 Gbps before encoding and 10.3125 Gbps on the wire. This is done by taking 32 blocks of 64b/66b-encoded data, stripping off one of the two sync bits on each block to free up 32 bits, adding the 32 parity bits of a shortened cyclic (2112, 2080) code, scrambling the result, and then sending that on the wire at 10.3125 Gbps.
There are trade-offs to this approach, though: since the sync headers have effectively been removed, the block lock time is vastly increased (the PHY needs to check up to 2112 blocks of 2112 bits each instead of 66 blocks of 66 bits each), extending the time required to bring a link up by several orders of magnitude.
The newer Ethernet Reed-Solomon FEC does something similar, but with an even larger block size. 64b/66b data is transcoded four blocks at a time into 256b/257b; twenty 257-bit blocks are broken up into 514 10-bit symbols; those are encoded with RS(528, 514), which adds 14 10-bit parity symbols for 528 symbols (5280 bits) in total; and the result is packed up and sent as a 5280-bit block. The original data, encoded with 64b/66b, would also take up 66*4*20 = 5280 bits.
$endgroup$
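The bit accounting in both schemes can be double-checked with a few lines of arithmetic (an editorial sketch; the figures are the ones quoted in the answer above):

```python
# 10GBASE-KR FEC: 32 blocks of 64b/66b data occupy 2112 bits on the wire.
plain = 32 * 66
# Drop one of the two sync bits per block (freeing 32 bits), then add
# the 32 parity bits of the shortened cyclic (2112, 2080) code.
with_fec = 32 * 65 + 32
assert plain == with_fec == 2112  # same wire rate, FEC for free

# RS-FEC: 4 * 20 blocks of 64b/66b are transcoded into 20 blocks of 256b/257b.
assert 20 * 257 == 514 * 10           # 514 ten-bit message symbols
assert 528 - 514 == 14                # RS(528, 514) adds 14 parity symbols
assert 528 * 10 == 66 * 4 * 20 == 5280  # same 5280 bits either way
```

In both cases the parity bits exactly fill the space freed by removing redundancy (sync bits, 64b/66b overhead), which is why the wire rate is unchanged.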
edited Jul 19 at 21:53
answered Jul 19 at 21:38
alex.forencich
15
$begingroup$
Are you familiar with the pigeonhole principle? Because it's trivial to reduce this question to that.
$endgroup$
– MSalters
Jul 18 at 14:03
11
$begingroup$
This doesn't seem to be at all about encryption
$endgroup$
– ilkkachu
Jul 18 at 17:05
1
$begingroup$
@ilkkachu It's about coding theory, which can be loosely considered to be in the field of cryptography. I would say questions on that topic can certainly be on-topic here. I'm on the fence whether this particular question is though...
$endgroup$
– marcelm
Jul 19 at 1:11
1
$begingroup$
@marcelm, no, look at the dictionary definition from e.g. Wiktionary: "encryption: The process of obscuring information to make it unreadable without special knowledge, key files, and/or passwords". Or the short snippet under the "encryption" tag on the site.
$endgroup$
– ilkkachu
Jul 19 at 6:59
2
$begingroup$
@ilkkachu Cryptology is far wider than just encryption. Quoting the wikipedia article on Coding Theory: "Codes are used for data compression, cryptography, ...", and the crypto.SE help center: "... and subsidiary topics that generally make up cryptology ...". I maintain that questions on coding theory are on-topic here. In fact, crypto.SE has a coding-theory tag.
$endgroup$
– marcelm
Jul 19 at 9:11